Sample records for line scan camera

  1. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    NASA Astrophysics Data System (ADS)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

    This work employs a complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform the image scan without the difficulty of translating the receiver above the target, as traditional LLS imaging systems must. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling shutter operation of the CMOS camera. The fan beam lasers were turned on/off to illuminate the narrow zones on the target in good correspondence with the exposure lines during the rolling procedure of the camera's electronic shutter. Frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and nonscanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
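    A minimal sketch of the ROI-based scan loop described above, assuming a hypothetical camera/laser SDK (`set_roi`, `grab`, `illuminate_band`, and `off` are illustrative stand-ins, not the authors' interface):

    ```python
    import numpy as np

    SENSOR_ROWS, SENSOR_COLS, N_BANDS = 1200, 1600, 5  # illustrative sensor geometry

    def roi_scan(camera, laser):
        """Build one frame by scanning N_BANDS row bands with a reconfigured ROI."""
        band_h = SENSOR_ROWS // N_BANDS
        frame = np.zeros((SENSOR_ROWS, SENSOR_COLS), dtype=np.uint16)
        for i in range(N_BANDS):
            top = i * band_h
            camera.set_roi(x=0, y=top, width=SENSOR_COLS, height=band_h)  # hypothetical
            laser.illuminate_band(i)  # hypothetical: point the fan beam at band i
            frame[top:top + band_h, :] = camera.grab()  # capture only the lit stripe
        laser.off()  # hypothetical
        return frame
    ```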

  2. Improved spatial resolution of luminescence images acquired with a silicon line scanning camera

    NASA Astrophysics Data System (ADS)

    Teal, Anthony; Mitchell, Bernhard; Juhl, Mattias K.

    2018-04-01

    Luminescence imaging is currently used to provide spatially resolved defect information in high-volume silicon solar cell production. One option for obtaining the high throughput required for on-the-fly detection is the use of silicon line scan cameras. However, when using a silicon-based camera, the spatial resolution is reduced as a result of weakly absorbed light scattering within the camera's chip. This paper addresses this issue by applying deconvolution with a measured point spread function. It extends the methods for determining the point spread function of a silicon area camera to a line scan camera with charge transfer. The improvement in resolution is quantified in the Fourier domain and in the spatial domain on an image of a multicrystalline silicon brick. It is found that light spreading beyond the active sensor area is significant in line scan sensors, but can be corrected for through normalization of the point spread function. The application of this method improves the raw data, enabling effective spatially resolved detection of defects in manufacturing.
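    A rough one-dimensional illustration of PSF-normalized deconvolution; the Wiener-style regularization is a generic choice rather than the authors' exact method, and the PSF here is synthetic:

    ```python
    import numpy as np

    def deconvolve_line(line, psf, eps=1e-3):
        """Deconvolve one scan line by a measured 1-D PSF (Wiener-regularized)."""
        psf = psf / psf.sum()                 # normalize so total signal is conserved
        n = line.size
        pad = np.zeros(n)
        pad[:psf.size] = psf
        pad = np.roll(pad, -(psf.size // 2))  # center the PSF to avoid a shift
        H = np.fft.rfft(pad)                  # transfer function of the blur
        Y = np.fft.rfft(line)
        X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)  # damped inverse filter
        return np.fft.irfft(X, n)

    # Example: blur a synthetic edge, then restore it
    x = np.r_[np.zeros(100), np.ones(100)]
    psf = np.exp(-np.arange(-10, 11) ** 2 / 20.0)
    restored = deconvolve_line(np.convolve(x, psf / psf.sum(), mode="same"), psf)
    ```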

  3. A system for simulating aerial or orbital TV observations of geographic patterns

    NASA Technical Reports Server (NTRS)

    Latham, J. P.

    1972-01-01

    A system which simulates observation of the earth's surface by aerial or orbiting television devices has been developed. By projecting color slides of photographs taken by aircraft and orbiting sensors upon a rear screen system, and altering the scale of the projected image, the screen position, or the TV camera position, it is possible to simulate alternatives of altitude or optical systems. By altering scan line patterns in the COHU 3200 series camera from 525 to 945 scan lines, it is possible to study the implications of scan line resolution for the detection and analysis of geographic patterns observed by orbiting TV systems.

  4. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    NASA Astrophysics Data System (ADS)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled from two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent in a small, lightweight, and rugged image sensing component. Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens-input units. Fiber optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low-light-level and/or short-exposure-time situations.

  5. Prism-assembly for dual-band short-wave infrared region line-scan camera

    NASA Astrophysics Data System (ADS)

    Chassagne, Bruno; de Laulanié, Lucie; Pommiès, Matthieu

    2018-02-01

    A simple dichroic splitter for dual-band line scanning is described. It comprises prism elements that keep the whole prototype inexpensive by using only one linear detector. The validity of the design is demonstrated via in-line moisture measurement.

  6. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delayed integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip, according to the objects' movement, result in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited with CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects of the practical application are discussed and key features of the camera are listed.
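    A simplified software model of the digital TDI accumulation, assuming the object advances exactly one sensor row per frame; the FPGA row addressing, Bayer handling, and synchronization are abstracted into a hypothetical `read_rows(t)` grab of the active row window:

    ```python
    import numpy as np

    N_STAGES, LINE_W = 8, 2048  # TDI rows read per frame; pixels per line (illustrative)

    def digital_tdi(read_rows, n_frames):
        """Shift-and-accumulate N_STAGES sensor rows into one output line per frame."""
        acc = np.zeros((N_STAGES, LINE_W), dtype=np.uint32)  # one partial sum per stage
        lines = []
        for t in range(n_frames):
            acc += read_rows(t)  # hypothetical grab, shape (N_STAGES, LINE_W)
            if t >= N_STAGES - 1:
                lines.append(acc[-1].copy())  # this line has seen all N_STAGES exposures
            acc = np.roll(acc, 1, axis=0)     # partial sums follow the object motion
            acc[0] = 0                        # a fresh object line enters at row 0
        return np.stack(lines)
    ```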

  7. Camera Systems Rapidly Scan Large Structures

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  8. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in the fast measurement of large-scale or high-speed moving objects. The innovative line scan technology opens up new possibilities owing to its ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  9. Reducing flicker due to ambient illumination in camera captured images

    NASA Astrophysics Data System (ADS)

    Kim, Minwoong; Bengtson, Kurt; Li, Lisa; Allebach, Jan P.

    2013-02-01

    The flicker artifact dealt with in this paper is the scanning distortion that arises when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line-by-line within a frame, so time differences exist between the lines. This mechanism causes a captured image to be corrupted by the change of illumination, a phenomenon called the flicker artifact. The non-content area of the captured image is used to estimate a flicker signal, which is the key to compensating for the flicker artifact. The average signal of the non-content area taken along the scan direction has local extrema where the peaks of the flicker occur. The locations of these extrema are very useful for estimating the desired distribution of pixel intensities under the assumption that the flicker artifact does not exist. The flicker-reduced images compensated by our approach clearly demonstrate the reduced flicker artifact, based on visual observation.
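    A toy version of that compensation, assuming the leftmost columns of the frame are non-content area; the margin width and median baseline are illustrative choices rather than the paper's exact model:

    ```python
    import numpy as np

    def deflicker(img, margin=32):
        """Divide out the per-row flicker estimated from a blank vertical margin."""
        profile = img[:, :margin].mean(axis=1)       # per scan line brightness of margin
        baseline = np.median(profile)                # assumed flicker-free level
        gain = baseline / np.maximum(profile, 1e-6)  # per-row correction factor
        out = np.clip(img.astype(float) * gain[:, None], 0, 255)
        return out.astype(img.dtype)
    ```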

  10. Dynamic deformation inspection of a human arm by using a line-scan imaging system

    NASA Astrophysics Data System (ADS)

    Hu, Eryi

    2009-11-01

    A line-scan imaging system is used for dynamic deformation measurement of a human arm while the muscle is contracting and relaxing. The measurement principle is based on projection grating profilometry, and the measuring system consists of a line-scan CCD camera, a projector, optical lenses, and a personal computer. The detected human arm is placed on a reference plane, and a sinusoidal grating is projected onto the object surface and the reference plane at an incidence angle. The deformed fringe pattern along the same line of the dynamically detected arm is captured by the line-scan CCD camera in free-trigger mode, and the deformed fringe pattern is recorded on the personal computer for processing. A fast Fourier transform combined with a filtering and spectrum-shifting method is used to extract the phase information caused by the profile of the detected object. Thus, the object surface profile can be obtained from the geometric relationship between the fringe deformation and the object surface height. Furthermore, the deformation procedure can be obtained line by line. Some experimental results are presented to prove the feasibility of the inspection system.
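    The FFT/filter/spectrum-shift step is standard Fourier-transform profilometry; a one-dimensional sketch of it, with an assumed grating carrier frequency `f0` in cycles per pixel, might look like:

    ```python
    import numpy as np

    def line_phase(line, f0, bw=0.5):
        """Recover the unwrapped phase of one fringe line (carrier at f0 cycles/px)."""
        n = line.size
        F = np.fft.fft(line - line.mean())
        freqs = np.fft.fftfreq(n)
        F[np.abs(freqs - f0) > bw * f0] = 0          # keep only the +f0 sideband
        analytic = np.fft.ifft(F)                    # complex fringe signal
        carrier = np.exp(-2j * np.pi * f0 * np.arange(n))
        phase = np.angle(analytic * carrier)         # shift the carrier to DC
        return np.unwrap(phase)  # maps to height via the projection geometry
    ```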

  11. High-throughput microfluidic line scan imaging for cytological characterization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Powless, Amy J.; Majid, Aneeka A.; Claycomb, Adair; Fritsch, Ingrid; Balachandran, Kartik; Muldoon, Timothy J.

    2015-03-01

    Imaging cells in a microfluidic chamber with an area scan camera is difficult due to motion blur and to data loss during frame readout, which causes discontinuity of data acquisition as cells move at relatively high speed through the chamber. We have developed a method to continuously acquire high-resolution images of cells in motion through a microfluidic chamber using a high-speed line scan camera. The sensor acquires images in a line-by-line fashion in order to continuously image moving objects without motion blur. The optical setup comprises an epi-illuminated microscope with a 40X oil immersion, 1.4 NA objective and a 150 mm tube lens focused on a microfluidic channel. Samples containing suspended cells fluorescently stained with 0.01% (w/v) proflavine in saline are introduced into the microfluidic chamber via a syringe pump; illumination is provided by a blue LED (455 nm). Images were taken of samples at the focal plane using an ELiiXA+ 8k/4k monochrome line-scan camera at a line rate of up to 40 kHz. The system's line rate and the fluid velocity are tightly controlled to reduce image distortion and are validated using fluorescent microspheres. Image acquisition was controlled via MATLAB's Image Acquisition Toolbox. Data sets comprise discrete images of every detectable cell, which may subsequently be mined for morphological statistics and definable features by a custom texture analysis algorithm. This high-throughput screening method, comparable to cell counting by flow cytometry, provides efficient examination, including counting, classification, and differentiation of saliva, blood, and cultured human cancer cells.
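    A back-of-envelope check of that line-rate/velocity matching; the sensor pixel pitch below is an assumed value, while the 40X objective and 40 kHz line rate come from the abstract:

    ```python
    pixel_pitch_um = 7.0          # assumed sensor pixel size (not stated above)
    magnification = 40.0          # objective magnification from the abstract
    line_rate_hz = 40_000         # maximum line rate from the abstract

    object_pixel_um = pixel_pitch_um / magnification          # sample-plane pixel size
    max_speed_mm_s = object_pixel_um * 1e-3 * line_rate_hz    # one pixel per line period
    print(f"distortion-free flow speed at 40 kHz: {max_speed_mm_s:.1f} mm/s")
    ```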

  12. Red to far-red multispectral fluorescence image fusion for detection of fecal contamination on apples

    USDA-ARS's Scientific Manuscript database

    This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...

  13. Line scanning system for direct digital chemiluminescence imaging of DNA sequencing blots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karger, A.E.; Weiss, R.; Gesteland, R.F.

    A cryogenically cooled charge-coupled device (CCD) camera equipped with an area CCD array is used in a line scanning system for low-light-level imaging of chemiluminescent DNA sequencing blots. Operating the CCD camera in time-delayed integration (TDI) mode results in continuous data acquisition independent of the length of the CCD array. Scanning is possible with a resolution of 1.4 line pairs/mm at the 50% level of the modulation transfer function. High-sensitivity, low-light-level scanning of chemiluminescent direct-transfer electrophoresis (DTE) DNA sequencing blots is shown. The detection of DNA fragments on the blot involves DNA-DNA hybridization with an oligonucleotide-alkaline phosphatase conjugate and 1,2-dioxetane-based chemiluminescence. The width of the scan allows the recording of up to four sequencing reactions (16 lanes) on one scan. The scan speed of 52 cm/h used for the sequencing blots corresponds to a data acquisition rate of 384 pixels/s. The chemiluminescence detection limit on the scanned images is 3.9 × 10⁻¹⁸ mol of plasmid DNA. A conditional median filter is described to remove spikes caused by cosmic ray events from the CCD images.
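    A generic conditional median filter in that spirit; the window size and threshold are illustrative, and the paper's exact replacement condition is not reproduced:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def despike(img, size=3, thresh=5.0):
        """Replace a pixel by the local median only if it is a strong positive outlier."""
        med = median_filter(img.astype(float), size=size)
        resid = img - med
        sigma = 1.4826 * np.median(np.abs(resid)) + 1e-9  # robust (MAD) noise estimate
        spikes = resid > thresh * sigma                   # cosmic-ray hits are bright
        out = img.astype(float)
        out[spikes] = med[spikes]
        return out.astype(img.dtype)
    ```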

  14. CrackScope: automatic pavement cracking inspection system.

    DOT National Transportation Integrated Search

    2008-08-01

    The CrackScope system is an automated pavement crack rating system consisting of a digital line scan camera, laser-line illuminator, and proprietary crack detection and classification software. CrackScope is able to perform real-time pavement ins...

  15. Lunar UV-visible-IR mapping interferometric spectrometer

    NASA Technical Reports Server (NTRS)

    Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.

    1992-01-01

    Ultraviolet-visible-infrared mapping digital array scanned interferometers for lunar compositional surveys were developed. The research has defined a no-moving-parts, low-weight and low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing, and improving upon, all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides new ultraviolet spectral mapping, high-spatial-resolution line scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, and spectral results in support of the instrument design are presented.

  16. Differentiating defects in red oak lumber by discriminant analysis using color, shape, and density

    Treesearch

    B. H. Bond; D. Earl Kline; Philip A. Araman

    2002-01-01

    Defect color, shape, and density measures aid in the differentiation of knots, bark pockets, stain/mineral streak, and clearwood in red oak (Quercus rubra). Various color, shape, and density measures were extracted for defects present in color and X-ray images captured using a color line scan camera and an X-ray line scan detector. Analysis of variance was used to...

  17. Towards Robust Self-Calibration for Handheld 3D Line Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bleier, M.; Nüchter, A.

    2017-11-01

    This paper studies self-calibration of a structured light system, which reconstructs 3D information using video from a static consumer camera and a handheld cross line laser projector. Intersections between the individual laser curves and geometric constraints on the relative position of the laser planes are exploited to achieve dense 3D reconstruction. This is possible without any prior knowledge of the movement of the projector. However, inaccurately extracted laser lines introduce noise into the detected intersection positions and therefore distort the reconstruction result. Furthermore, when scanning objects with specular reflections, such as glossy painted or metallic surfaces, the reflections are often extracted from the camera image as erroneous laser curves. In this paper we investigate how robust estimates of the parameters of the laser planes can be obtained despite noisy detections.

  18. High speed parallel spectral-domain OCT using spectrally encoded line-field illumination

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Sung; Hur, Hwan; Bae, Ji Yong; Kim, I. Jong; Kim, Dong Uk; Nam, Ki-Hwan; Kim, Geon-Hee; Chang, Ki Soo

    2018-01-01

    We report parallel spectral-domain optical coherence tomography (OCT) at 500 000 A-scan/s. This is the highest-speed spectral-domain (SD) OCT system using a single line camera. Spectrally encoded line-field scanning is proposed to increase the imaging speed in SD-OCT effectively, and the tradeoff between speed, depth range, and sensitivity is demonstrated. We show that three imaging modes of 125k, 250k, and 500k A-scan/s can be simply switched according to the sample to be imaged considering the depth range and sensitivity. To demonstrate the biological imaging performance of the high-speed imaging modes of the spectrally encoded line-field OCT system, human skin and a whole leaf were imaged at the speed of 250k and 500k A-scan/s, respectively. In addition, there is no sensitivity dependence in the B-scan direction, which is implicit in line-field parallel OCT using line focusing of a Gaussian beam with a cylindrical lens.

  19. Tele-Education: Teaching over the Telephone with Slow-Scan Video.

    ERIC Educational Resources Information Center

    Kelleher, Kathleen

    1983-01-01

    This report describes educational applications of slow-scan television (SSTV) teleconferencing, which uses a video signal generated from a standard, low-cost, industrial television camera and compressed to a bandwidth suitable for transmission over telephone lines. Following a brief explanation of the capabilities of SSTV and the required…

  20. A design of a high speed dual spectrometer by single line scan camera

    NASA Astrophysics Data System (ADS)

    Palawong, Kunakorn; Meemon, Panomsak

    2018-03-01

    A spectrometer that can capture two orthogonal polarization components of a light beam is demanded for polarization-sensitive imaging systems. Here, we describe the design and implementation of a high-speed spectrometer for simultaneous capture of two orthogonal polarization components, i.e. the vertical and horizontal components, of a light beam. The design consists of a polarization beam splitter, two polarization-maintaining optical fibers, two collimators, a single line-scan camera, a focusing lens, and a reflection blazed grating. The two beam paths were aligned to be symmetrically incident on the blazed side and the reverse-blazed side of the reflection grating, respectively. The two diffracted beams were passed through the same focusing lens and focused onto the single line-scan sensor of a CMOS camera. The two spectra of orthogonal polarization were imaged on 1000 pixels per spectrum. With the proposed setup, the amplitude and shape of the two detected spectra can be controlled by rotating the collimators. The technique for optical alignment of the spectrometer is presented and discussed. The two orthogonal polarization spectra can be simultaneously captured at a speed of 70,000 spectra per second. The high-speed dual spectrometer can simultaneously detect two orthogonal polarizations, an important component for the development of polarization-sensitive optical coherence tomography. The performance of the spectrometer has been measured and analyzed.

  1. High speed line-scan confocal imaging of stimulus-evoked intrinsic optical signals in the retina

    PubMed Central

    Li, Yang-Guo; Liu, Lei; Amthor, Franklin; Yao, Xin-Cheng

    2010-01-01

    A rapid line-scan confocal imager was developed for functional imaging of the retina. In this imager, an acousto-optic deflector (AOD) was employed to produce mechanical vibration- and inertia-free light scanning, and a high-speed (68,000 Hz) linear CCD camera was used to achieve sub-cellular and sub-millisecond spatiotemporal resolution imaging. Two imaging modalities, i.e., frame-by-frame and line-by-line recording, were validated for reflected light detection of intrinsic optical signals (IOSs) in visible light stimulus activated frog retinas. Experimental results indicated that fast IOSs were tightly correlated with retinal stimuli, and could track visible light flicker stimulus frequency up to at least 2 Hz. PMID:20125743

  2. Document Monitor

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Charters of Freedom Monitoring System will periodically assess the physical condition of the U.S. Constitution, Declaration of Independence, and Bill of Rights. Although protected in helium-filled glass cases, the documents are subject to damage from light, vibration, and humidity. The photometer is a CCD detector used as the electronic film for the system's scanning camera, which mechanically scans each document line by line and acquires a series of images, each representing a one-square-inch portion of the document. Perkin-Elmer Corporation's photometer is capable of detecting changes in contrast, shape, or other indicators of degradation with 5 to 10 times the sensitivity of the human eye. A Vicom image processing computer receives the data from the photometer, stores it, and manipulates it, allowing comparison of electronic images over time to detect changes.

  3. Structured-Light Based 3D Laser Scanning of Semi-Submerged Structures

    NASA Astrophysics Data System (ADS)

    van der Lucht, J.; Bleier, M.; Leutert, F.; Schilling, K.; Nüchter, A.

    2018-05-01

    In this work we look at 3D acquisition of semi-submerged structures with a triangulation-based underwater laser scanning system. The motivation is to simultaneously capture data above and below water to create a consistent model without any gaps. The employed structured light scanner consists of a machine vision camera and a green line laser. In order to reconstruct precise surface models of the object, it is necessary to model and correct for the refraction of the laser line and camera rays at the water-air boundary. We derive a geometric model for the refraction at the air-water interface and propose a method for correcting the scans. Furthermore, we show how the water surface is estimated directly from sensor data. The approach is verified using scans captured with an industrial manipulator to achieve reproducible scanner trajectories with different incident angles. We show that the proposed method is effective for refractive correction and that it can be applied directly to the raw sensor data without requiring any external markers or targets.
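    The refraction correction at the core of the method can be sketched with the vector form of Snell's law; this is a generic formulation under a flat-surface assumption, not the paper's full geometric model:

    ```python
    import numpy as np

    def refract(d, n_surf, n1=1.0, n2=1.33):
        """Refract unit ray direction d at a surface with unit normal n_surf
        (normal pointing toward the incoming ray); n1, n2 are the refractive
        indices on the incident and transmitted sides (air and water here)."""
        d = d / np.linalg.norm(d)
        n = n_surf / np.linalg.norm(n_surf)
        cos_i = -float(np.dot(n, d))
        r = n1 / n2
        k = 1.0 - r ** 2 * (1.0 - cos_i ** 2)
        if k < 0.0:
            return None  # total internal reflection; no transmitted ray
        return r * d + (r * cos_i - np.sqrt(k)) * n

    # Example: a camera ray entering water 30 degrees from vertical
    d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
    print(refract(d, np.array([0.0, 0.0, 1.0])))  # bends toward the normal
    ```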

  4. Cable and Line Inspection Mechanism

    NASA Technical Reports Server (NTRS)

    Ross, Terence J. (Inventor)

    2003-01-01

    An automated cable and line inspection mechanism visually scans the entire surface of a cable as the mechanism travels along the cable's length. The mechanism includes a drive system, a video camera, a mirror assembly for providing the camera with a 360-degree view of the cable, and a laser micrometer for measuring the cable's diameter. The drive system includes an electric motor and a plurality of drive wheels and tension wheels for engaging the cable or line to be inspected and driving the mechanism along the cable. The mirror assembly includes mirrors that are positioned to project multiple images of the cable on the camera lens, each of which is of a different portion of the cable. A data transceiver and a video transmitter are preferably employed for transmission of video images, data, and commands between the mechanism and a remote control station.

  5. Real time, TV-based, point-image quantizer and sorter

    DOEpatents

    Case, Arthur L.; Davidson, Jackson B.

    1976-01-01

    A device is provided for improving the vertical resolution in a television-based, two-dimensional readout for radiation detection systems, such as are used to determine the location of light or nuclear radiation impinging on a target area viewed by a television camera, where it is desired to store data indicative of the centroid location of such images. In the example embodiment, impinging nuclear radiation detected in the form of a scintillation occurring in a crystal is stored as a charge image on a television camera tube target. The target is scanned in a raster, and the image position is stored according to a corresponding vertical scan number and horizontal position number along the scan. To determine the centroid location of an image that may overlap a number of horizontal scan lines along the vertical axis of the raster, digital logic circuits are provided with at least four series-connected shift registers, each having 512 bit positions according to a selected 512 horizontal increments of resolution along a scan line. The registers are shifted by clock pulses at a rate of 512 pulses per scan line. When an image or portion thereof is detected along a scan, its horizontal center location is determined and a bit is set in the first shift register, then shifted through the registers one at a time for each horizontal scan. Each register is compared bit-by-bit with the preceding register to detect coincident set bit positions until the last scan line detecting a portion of the image is determined. Circuitry is provided to store the vertical center position of the event according to the number of shift registers through which the first detection of the event is shifted. Interpolation circuitry is provided to determine if the event centroid lies between adjacent scan lines, and the centroid is stored in a vertical address accordingly. The horizontal location of the event is stored in a separate address memory.

  6. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great many 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several such specific applications are described here, including some which use recently designed CCDs that offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  7. High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A‑lines per second

    PubMed Central

    An, Lin; Li, Peng; Shen, Tueng T.; Wang, Ruikang

    2011-01-01

    We present a new development of ultrahigh-speed spectral domain optical coherence tomography (SDOCT) for human retinal imaging at 850 nm central wavelength, employing two high-speed line scan CMOS cameras, each running at 250 kHz. By precisely controlling the recording and reading time periods of the two cameras, the SDOCT system achieves an imaging speed of 500,000 A-lines per second, while maintaining both high axial resolution (~8 μm) and acceptable depth range (~2.5 mm). With this system, we propose two scanning protocols for human retinal imaging. The first aims at isotropic dense sampling and fast scanning speed, enabling 3D imaging within 0.72 s of a region covering 4 × 4 mm². In this case, the B-frame rate is 700 Hz and the isotropic dense sampling is 500 A-lines along both the fast and slow axes. This scanning protocol minimizes motion artifacts, making it possible to perform two-directional averaging so that the signal-to-noise ratio of the system is enhanced while the degradation of its resolution is minimized. The second protocol is designed to scan the retina over a large field of view, in which 1200 A-lines are captured along both the fast and slow axes, covering 10 mm², to provide overall information about the retinal status. Because of the relatively long imaging time (4 seconds for a 3D scan), motion artifact is inevitable, making it difficult to interpret the 3D data set, particularly as depth-resolved en-face fundus images. To mitigate this difficulty, we propose to use the relatively highly reflecting retinal pigment epithelium layer as the reference to flatten the original 3D data set along both the fast and slow axes. We show that the proposed system delivers superb performance for human retina imaging. PMID:22025983

  8. High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A‑lines per second.

    PubMed

    An, Lin; Li, Peng; Shen, Tueng T; Wang, Ruikang

    2011-10-01

    We present a new development of ultrahigh-speed spectral domain optical coherence tomography (SDOCT) for human retinal imaging at 850 nm central wavelength, employing two high-speed line scan CMOS cameras, each running at 250 kHz. By precisely controlling the recording and reading time periods of the two cameras, the SDOCT system achieves an imaging speed of 500,000 A-lines per second, while maintaining both high axial resolution (~8 μm) and acceptable depth range (~2.5 mm). With this system, we propose two scanning protocols for human retinal imaging. The first aims at isotropic dense sampling and fast scanning speed, enabling 3D imaging within 0.72 s of a region covering 4 × 4 mm². In this case, the B-frame rate is 700 Hz and the isotropic dense sampling is 500 A-lines along both the fast and slow axes. This scanning protocol minimizes motion artifacts, making it possible to perform two-directional averaging so that the signal-to-noise ratio of the system is enhanced while the degradation of its resolution is minimized. The second protocol is designed to scan the retina over a large field of view, in which 1200 A-lines are captured along both the fast and slow axes, covering 10 mm², to provide overall information about the retinal status. Because of the relatively long imaging time (4 seconds for a 3D scan), motion artifact is inevitable, making it difficult to interpret the 3D data set, particularly as depth-resolved en-face fundus images. To mitigate this difficulty, we propose to use the relatively highly reflecting retinal pigment epithelium layer as the reference to flatten the original 3D data set along both the fast and slow axes. We show that the proposed system delivers superb performance for human retina imaging.

  9. Robot calibration with a photogrammetric on-line system using reseau scanning cameras

    NASA Astrophysics Data System (ADS)

    Diewald, Bernd; Godding, Robert; Henrich, Andreas

    1994-03-01

    The possibility of testing and calibrating industrial robots is becoming more and more important for manufacturers and users of such systems. Exacting applications in connection with off-line programming techniques or the use of robots as measuring machines are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of the 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA, a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows automatic measurement of a large number of robot poses with high accuracy.

  10. High-speed spectral domain polarization-sensitive OCT using a single InGaAs line-scan camera and an optical switch

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Won; Jeong, Hyun-Woo; Kim, Beop-Min

    2010-02-01

    We demonstrated high-speed spectral domain polarization-sensitive optical coherence tomography (SD-PSOCT) using a single InGaAs line-scan camera and an optical switch in the 1.3-μm region. The polarization-sensitive low-coherence interferometer in the system was based on the original free-space PS-OCT system published by Hee et al. The horizontal and vertical polarization rays split by the polarization beam splitter were alternately delivered via an optical switch to a single spectrometer, instead of to dual spectrometers. The SD-PSOCT system had an axial resolution of 8.2 μm, a sensitivity of 101.5 dB, and an acquisition speed of 23,496 A-lines/s. We obtained intensity, phase retardation, and fast-axis orientation images of a biological tissue. In addition, we calculated the averaged axial profiles of the phase retardation in human skin.

  11. Measurement of large steel plates based on linear scan structured light scanning

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Li, Yaru; Lei, Geng; Xi, Jiangtao

    2018-01-01

    A measuring method based on linear structured light scanning is proposed to achieve accurate measurement of the complex internal shape of large steel plates. Firstly, using a calibration plate with round marks, an improved line scanning calibration method is designed, through which the internal and external parameters of the camera are determined. Secondly, images of the steel plates are acquired by a line scan camera. The Canny edge detection method is then used to extract approximate contours of the steel plate images, and a Gaussian fitting algorithm is used to extract the sub-pixel edges of the steel plate contours. Thirdly, to address inaccurate restoration of contour size, the horizontal and vertical error curves of the images are obtained by measuring the distance between adjacent points in a grid of known dimensions. Finally, these horizontal and vertical error curves are used to correct the contours of the steel plates, and, combined with the internal and external calibration parameters, the size of these contours is calculated. The experimental results demonstrate that the proposed method achieves an error of 1 mm/m over a 1.2 m × 2.6 m field of view, which satisfies the demands of industrial measurement.
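    An illustrative version of the coarse-to-fine edge step, using OpenCV's Canny for candidate edges and a parabolic vertex fit on the gradient magnitude for sub-pixel refinement (the parabola stands in for the paper's Gaussian fit; thresholds are illustrative):

    ```python
    import cv2
    import numpy as np

    def subpixel_edges(gray):
        """Return (x, y) edge points with sub-pixel x, for an 8-bit grayscale image."""
        edges = cv2.Canny(gray, 50, 150)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        mag = np.abs(gx)
        pts = []
        for y, x in zip(*np.nonzero(edges)):
            if 1 <= x < gray.shape[1] - 1:
                a, b, c = mag[y, x - 1], mag[y, x], mag[y, x + 1]
                denom = a - 2 * b + c
                if denom != 0 and b >= a and b >= c:
                    dx = 0.5 * (a - c) / denom  # vertex of the parabola through a, b, c
                    pts.append((x + dx, float(y)))
        return np.asarray(pts)
    ```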

  12. Development of online line-scan imaging system for chicken inspection and differentiation

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Chan, Diane E.; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.

    2006-10-01

    An online line-scan imaging system was developed for differentiation of wholesome and systemically diseased chickens. The hyperspectral imaging system used in this research can be directly converted to multispectral operation and would provide the ideal implementation of essential features for data-efficient high-speed multispectral classification algorithms. The imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph for line-scan images. The system scanned the surfaces of chicken carcasses on an eviscerating line at a poultry processing plant in December 2005. A method was created to recognize birds entering and exiting the field of view, and to locate a Region of Interest on the chicken images from which useful spectra were extracted for analysis. From analysis of the difference spectra between wholesome and systemically diseased chickens, four wavelengths of 468 nm, 501 nm, 582 nm and 629 nm were selected as key wavelengths for differentiation. The method of locating the Region of Interest will also have practical application in multispectral operation of the line-scan imaging system for online chicken inspection. This line-scan imaging system makes possible the implementation of multispectral inspection using the key wavelengths determined in this study with minimal software adaptations and without the need for cross-system calibration.

  13. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center-developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes in real time the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to provide serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
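    The dip-finding step reduces to locating the deepest negative peak in a 1-D intensity profile; a minimal software analogue (smoothing width and dip threshold are illustrative, and the real system does this in dedicated hardware):

    ```python
    import numpy as np

    def shock_pixel(profile, smooth=5, min_depth=0.2):
        """Return the pixel index of the shock shadowgraph dip, or None if absent."""
        kernel = np.ones(smooth) / smooth
        p = np.convolve(profile.astype(float), kernel, mode="same")  # suppress noise
        baseline = np.median(p)               # typical brightness of the laser sheet
        dip = int(np.argmin(p))
        depth = (baseline - p[dip]) / max(baseline, 1e-9)
        return dip if depth > min_depth else None
    ```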

  14. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  15. High Resolution Trichromatic Road Surface Scanning with a Line Scan Camera and Light Emitting Diode Lighting for Road-Kill Detection.

    PubMed

    Lopes, Gil; Ribeiro, A Fernando; Sillero, Neftalí; Gonçalves-Seco, Luís; Silva, Cristiano; Franch, Marc; Trigueiros, Paulo

    2016-04-19

    This paper presents a road surface scanning system that operates with a trichromatic line scan camera with light emitting diode (LED) lighting, achieving road surface resolution under a millimeter. It was part of a project named Roadkills - Intelligent systems for surveying mortality of amphibians in Portuguese roads, sponsored by the Portuguese Science and Technology Foundation. A trailer was developed to accommodate the complete system, with standalone power generation; computer image capture and recording; controlled lighting to operate day or night without disturbance; an incremental encoder with 5000 pulses per revolution attached to one of the trailer wheels; sub-meter Global Positioning System (GPS) localization; easy use with any vehicle with a trailer towing system; and a focus on a complete low-cost solution. The paper describes the system architecture of the developed prototype, its calibration procedure, the performed experimentation, and some obtained results, along with a discussion and comparison with existing systems. Sustained operating trailer speeds of up to 30 km/h are achievable without loss of quality at 4096 pixels of image width (1 m width of road surface) with 250 µm/pixel resolution. Higher scanning speeds can be achieved by lowering the image resolution (120 km/h with 1 mm/pixel). Computer vision algorithms are under development to operate on the captured images in order to automatically detect road-kills of amphibians.
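    The two quoted operating points are mutually consistent: the line rate a line scan camera must sustain is ground speed divided by along-track pixel size, and both settings work out to the same rate. A quick check:

    ```python
    def required_line_rate_hz(speed_km_h: float, pixel_size_m: float) -> float:
        """Lines per second needed so the ground advances one pixel per line."""
        return (speed_km_h / 3.6) / pixel_size_m

    print(required_line_rate_hz(30, 250e-6))   # ~33.3 kHz at 250 um/pixel
    print(required_line_rate_hz(120, 1e-3))    # ~33.3 kHz at 1 mm/pixel
    ```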

  16. High Resolution Trichromatic Road Surface Scanning with a Line Scan Camera and Light Emitting Diode Lighting for Road-Kill Detection

    PubMed Central

    Lopes, Gil; Ribeiro, A. Fernando; Sillero, Neftalí; Gonçalves-Seco, Luís; Silva, Cristiano; Franch, Marc; Trigueiros, Paulo

    2016-01-01

    This paper presents a road surface scanning system that operates with a trichromatic line scan camera with light emitting diode (LED) lighting, achieving road surface resolution under a millimeter. It was part of a project named Roadkills - Intelligent systems for surveying mortality of amphibians in Portuguese roads, sponsored by the Portuguese Science and Technology Foundation. A trailer was developed to accommodate the complete system, with standalone power generation; computer image capture and recording; controlled lighting to operate day or night without disturbance; an incremental encoder with 5000 pulses per revolution attached to one of the trailer wheels; sub-meter Global Positioning System (GPS) localization; easy use with any vehicle with a trailer towing system; and a focus on a complete low-cost solution. The paper describes the system architecture of the developed prototype, its calibration procedure, the performed experimentation, and some obtained results, along with a discussion and comparison with existing systems. Sustained operating trailer speeds of up to 30 km/h are achievable without loss of quality at 4096 pixels of image width (1 m width of road surface) with 250 µm/pixel resolution. Higher scanning speeds can be achieved by lowering the image resolution (120 km/h with 1 mm/pixel). Computer vision algorithms are under development to operate on the captured images in order to automatically detect road-kills of amphibians. PMID:27104535

  17. Comparison of Cyberware PX and PS 3D human head scanners

    NASA Astrophysics Data System (ADS)

    Carson, Jeremy; Corner, Brian D.; Crockett, Eric; Li, Peng; Paquette, Steven

    2008-02-01

    A common limitation of laser-line three-dimensional (3D) scanners is the inability to scan objects with surfaces that are either parallel to the laser line or that self-occlude. Filling in missing areas adds unwanted inaccuracy to the 3D model. Capturing the human head with a Cyberware PS Head Scanner is an example of obtaining a model where the incomplete areas are difficult to fill accurately. The PS scanner uses a single vertical laser line to illuminate the head and is unable to capture data at the top of the head, where the line of sight is tangent to the surface, and under the chin, an area occluded by the chin when the subject looks straight forward. The Cyberware PX Scanner was developed to obtain this missing 3D head data. The PX scanner uses two cameras offset at different angles to provide a more detailed head scan that captures surfaces missed by the PS scanner. The PX scanner cameras also use new technology to obtain color maps of higher resolution than the PS scanner's. The two scanners were compared in terms of the amount of surface captured (surface area and volume) and the quality of head measurements relative to direct measurements obtained through standard anthropometry methods. Relative to the PS scanner, the PX head scans were more complete and provided the full set of head measurements; actual measurement values, when available from both scanners, were about the same.

  18. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, considering the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, from which the corresponding epipolar equations of the two cameras can be defined. Using the trigonometric parallax method, we can measure the spatial point position after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy and guarantees system stability. The stereo matching calibration has a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates by planar checkerboard calibration alone, without the need to design a specific standard target or use an electronic theodolite. It is found that during the experiment, two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line laser scanning and binocular stereo vision, has the advantages of both and more flexible applicability. Theoretical analysis and experiments show that the method is reasonable.

  19. In-line interferometer for broadband near-field scanning optical spectroscopy.

    PubMed

    Brauer, Jens; Zhan, Jinxin; Chimeh, Abbas; Korte, Anke; Lienau, Christoph; Gross, Petra

    2017-06-26

    We present and investigate a novel approach towards broad-bandwidth near-field scanning optical spectroscopy based on an in-line interferometer for homodyne mixing of the near field and a reference field. In scattering-type scanning near-field optical spectroscopy, the near-field signal is usually obscured by a large amount of unwanted background scattering from the probe shaft and the sample. Here we increase the light reflected from the sample by a semi-transparent gold layer and use it as a broad-bandwidth, phase-stable reference field to amplify the near-field signal in the visible and near-infrared spectral range. We experimentally demonstrate that this efficiently suppresses the unwanted background signal in monochromatic near-field measurements. For rapid acquisition of complete broad-bandwidth spectra we employ a monochromator and a fast line camera. Using this fast acquisition of spectra and the in-line interferometer we demonstrate the measurement of pure near-field spectra. The experimental observations are quantitatively explained by analytical expressions for the measured optical signals, based on Fourier decomposition of background and near field. The theoretical model and in-line interferometer together form an important step towards broad-bandwidth near-field scanning optical spectroscopy.

  1. Line-scan spatially offset Raman spectroscopy for inspecting subsurface food safety and quality

    NASA Astrophysics Data System (ADS)

    Qin, Jianwei; Chao, Kuanglin; Kim, Moon S.

    2016-05-01

    This paper presents a method for subsurface food inspection using a newly developed line-scan spatially offset Raman spectroscopy (SORS) technique. A 785 nm laser was used as the Raman excitation source. The line-shaped SORS data were collected in a wavenumber range of 0-2815 cm⁻¹ using a detection module consisting of an imaging spectrograph and a CCD camera. A layered sample, created by placing a plastic sheet cut from the original container on top of cane sugar, was used to test the capability for subsurface food inspection. A whole set of SORS data was acquired in an offset range of 0-36 mm (on both sides of the laser) with a spatial interval of 0.07 mm. The Raman spectrum from the cane sugar under the plastic sheet was resolved using self-modeling mixture analysis algorithms, demonstrating the potential of the technique for authenticating foods and ingredients through packaging. The line-scan SORS measurement technique provides a new method for subsurface inspection of food safety and quality.

  2. Real-time vehicle matching for multi-camera tunnel surveillance

    NASA Astrophysics Data System (ADS)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
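    A compact sketch of such projection-profile signatures with a normalized-correlation match score; the fixed resampling length and the scoring rule are illustrative, not the paper's exact procedure:

    ```python
    import numpy as np

    def signature(patch, n=64):
        """Row/column sums of a grayscale vehicle patch, resampled and normalized."""
        rows = patch.sum(axis=1).astype(float)
        cols = patch.sum(axis=0).astype(float)
        grid = np.linspace(0.0, 1.0, n)
        rows = np.interp(grid, np.linspace(0.0, 1.0, rows.size), rows)
        cols = np.interp(grid, np.linspace(0.0, 1.0, cols.size), cols)
        sig = np.concatenate([rows, cols])
        return (sig - sig.mean()) / (sig.std() + 1e-9)

    def match_score(sig_a, sig_b):
        """Normalized correlation in [-1, 1]; higher means more likely same vehicle."""
        return float(np.dot(sig_a, sig_b)) / sig_a.size
    ```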

  3. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.

  4. Edge-following algorithm for tracking geological features

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.

    1977-01-01

    A sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from Earth resources satellites. The technique eliminates expensive high-resolution cameras. The system might also be adaptable to monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or X-ray images.
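
    The abstract does not detail the algorithm; the sketch below is one plausible reading of circular-scan edge following: from the current edge point, candidate pixels on a surrounding circle are scanned, and the tracker steps to the edge pixel that best preserves its heading. All names and parameters are illustrative.

        import numpy as np

        def follow_edge(edge, start, steps=500, radius=3):
            # edge  : 2D boolean array, True on edge pixels
            # start : (row, col) seed lying on the edge
            angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
            r, c = start
            heading = 0.0  # arbitrary initial heading; it biases only the first step
            path = [start]
            for _ in range(steps):
                best, best_turn = None, np.inf
                for a in angles:
                    rr = int(round(r + radius * np.sin(a)))
                    cc = int(round(c + radius * np.cos(a)))
                    if 0 <= rr < edge.shape[0] and 0 <= cc < edge.shape[1] and edge[rr, cc]:
                        # Smallest-turn preference discourages doubling back
                        turn = abs((a - heading + np.pi) % (2.0 * np.pi) - np.pi)
                        if turn < best_turn:
                            best, best_turn = (rr, cc, a), turn
                if best is None:
                    break  # lost the edge
                r, c, heading = best
                path.append((r, c))
            return path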

  5. Iodine filter imaging system for subtraction angiography using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.

    1993-11-01

    A new type of real-time imaging system was developed for transvenous coronary angiography. The combination of an iodine filter and a single-energy, broad-bandwidth X-ray beam produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. Synchronized with the movement of the iodine filter into and out of the X-ray beam, the two output images of the image intensifier are focused side by side on the photoconductive layer of a camera tube by an oscillating mirror. Both images are read out by electron-beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. 192 pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.

  6. Processing the Viking lander camera data

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.

    1977-01-01

    Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.

  7. Quantifying the movement of multiple insects using an optical insect counter

    USDA-ARS?s Scientific Manuscript database

    An optical insect counter (OIC) was designed and tested. The new system integrated a line-scan camera and a vertical light sheet along with data collection and image processing software to count numbers of flying insects crossing a vertical plane defined by the light sheet. The system also allows ...

  8. Wireless multipoint communication for optical sensors in the industrial environment using the new Bluetooth standard

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Lau, Wing Y.; Chu, Terry; Grothof, Markus

    2003-07-01

    Traditionally, the measuring and monitoring systems of manufacturing industries use sensors, computers, and screens for quality control (Q.C.). The acquired information is fed back to the control room by wires, which - for obvious reasons - are not suitable in many environments. This paper describes a method to solve this problem by employing the new Bluetooth technology to set up a completely new system in which a total wireless solution is made feasible. This new Q.C. system allows several line scan cameras to be connected at once to a graphical user interface (GUI) that can monitor the production process. There are many Bluetooth devices available on the market, such as cell phones, headsets, printers, PDAs, etc.; however, the application detailed here is a novel implementation in the industrial Q.C. area. The paper covers the Bluetooth standard and why it is used (network topologies, host controller interface, data rates, etc.), the Bluetooth implementation in the microcontroller of the line scan camera, and the GUI and its features.

  9. Real time automated inspection

    DOEpatents

    Fant, Karl M.; Fundakowski, Richard A.; Levitt, Tod S.; Overland, John E.; Suresh, Bindinganavle R.; Ulrich, Franz W.

    1985-01-01

    A method and apparatus relating to the real-time automatic detection and classification of characteristic surface imperfections occurring on the surfaces of material of interest, such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface, which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges, and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and utilized to classify the objects into characteristic surface imperfection types.
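
    A toy version of this pipeline, sketched under the assumption that connected-component labeling can stand in for the patent's interval matching and bin tracking:

        import numpy as np
        from scipy import ndimage

        def inspect_surface(img, thresh=60.0):
            # Edge enhancement along the scan direction (horizontal gradient)
            edges = np.abs(ndimage.sobel(img.astype(float), axis=1))
            binary = edges > thresh                # threshold to segment out edges
            labels, _ = ndimage.label(binary)      # group edge pixels into objects
            features = []
            for sl in ndimage.find_objects(labels):
                h = sl[0].stop - sl[0].start
                w = sl[1].stop - sl[1].start
                # Simple geometric features a downstream classifier could consume
                features.append({"height": h, "width": w, "aspect": w / max(h, 1)})
            return features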

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loehle, Stefan; Lein, Sebastian

    A revised scientific instrument to simultaneously measure the kinetic temperatures of different atoms from their optical emission profiles is reported. Emission lines are detected simultaneously using a single scanning Fabry-Perot interferometer (FPI) in a combined spectroscopic setup. The setup consists of a commercial Czerny-Turner spectrometer combined with the scanning FPI. The fast image-acquisition mode of an intensified charge-coupled-device camera allows continuous detection of a wavelength interval of interest while the highly resolved line is acquired during the scan of the FPI ramp. Results using this new setup are presented for the simultaneous detection of atomic nitrogen and oxygen in a high-enthalpy air plasma flow, as used for atmospheric re-entry research, and their respective kinetic temperatures derived from the measured line profiles. The paper presents the experimental setup, the calibration procedure, and an exemplary result. The determined temperatures differ, a finding so far attributed to the drawback of sequential measurements in earlier experimental setups, and one which now has to be investigated in more detail.

  11. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera.

    PubMed

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.

  12. Hyperspectral imaging for food processing automation

    NASA Astrophysics Data System (ADS)

    Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.

    2002-11-01

    This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line scan camera with a prism-grating-prism spectrograph, fiber-optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically the band ratio of dual-wavelength (565 nm / 517 nm) images followed by thresholding, were effective for identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system including a common-aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM), which were selected and validated by the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time for the multispectral images captured by the common-aperture camera was approximately 251 ms, or 3.99 frames/s. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%; however, many false-positive spots that cause system errors were also detected.
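
    The dual-wavelength band-ratio step reduces to a few lines; the sketch below assumes co-registered single-band images and uses an illustrative threshold (the paper derives its threshold from hyperspectral training data):

        import numpy as np

        def detect_contamination(band_565, band_517, ratio_thresh=1.05):
            # Ratio the 565 nm image by the 517 nm image, pixel by pixel,
            # guarding against division by zero, then threshold the ratio
            ratio = band_565.astype(float) / np.clip(band_517.astype(float), 1.0, None)
            return ratio > ratio_thresh  # boolean mask of suspect pixels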

  13. Making The Invisible Visible

    NASA Technical Reports Server (NTRS)

    1978-01-01

    In public and private archives throughout the world there are many historically important documents that have become illegible with the passage of time. They have faded, been erased, acquired mold, water, and dirt stains, suffered blotting, or lost readability in other ways. While ultraviolet and infrared photography are widely used to enhance deteriorated legibility, these methods are more limited in their effectiveness than the space-derived image enhancement technique. The aim of the JPL effort with Caltech and others is to better define the requirements for a system that restores illegible information for study at a low page-cost with simple operating procedures. The investigators' principal tools are a vidicon camera and an image-processing computer program, the same equipment used to produce sharp space pictures. The camera is the same type as those on NASA's Mariner spacecraft, which returned to Earth thousands of images of Mars, Venus, and Mercury. Space imagery works something like television. The vidicon camera does not take a photograph in the ordinary sense; rather, it "scans" a scene, recording different light and shade values which are reproduced as a pattern of dots, hundreds of dots to a line, hundreds of lines in the total picture. The dots are transmitted to an Earth receiver, where they are assembled line by line to form a picture like that on the home TV screen.

  14. High-performance camera module for fast quality inspection in industrial printing applications

    NASA Astrophysics Data System (ADS)

    Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    Today, printing products which must meet highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
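
    Of the listed in-camera stages, flat-field correction is the most self-contained; below is a minimal software sketch of the usual dark/white reference normalization (the camera implements this per pixel in the FPGA; the reference frame names are assumptions):

        import numpy as np

        def flat_field(raw, dark, white):
            # Subtract the dark frame, then normalize by the illumination
            # profile measured on a uniform white reference target
            gain = np.clip(white.astype(float) - dark.astype(float), 1.0, None)
            return np.clip((raw.astype(float) - dark.astype(float)) / gain, 0.0, 1.0)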

  15. High-throughput Raman chemical imaging for evaluating food safety and quality

    NASA Astrophysics Data System (ADS)

    Qin, Jianwei; Chao, Kuanglin; Kim, Moon S.

    2014-05-01

    A line-scan hyperspectral system was developed to enable Raman chemical imaging of large sample areas. A custom-designed 785 nm line laser based on a scanning mirror serves as the excitation source. A 45° dichroic beamsplitter reflects the laser light to form a 24 cm x 1 mm excitation line normally incident on the sample surface. Raman signals along the laser line are collected by a detection module consisting of a dispersive imaging spectrograph and a CCD camera. A hypercube is accumulated line by line as a motorized table moves the samples transversely through the laser line. The system covers a Raman shift range of -648.7 to 2889.0 cm-1 and a 23-cm-wide area. An example application, authenticating milk powder, is presented to demonstrate the system performance. In four minutes, the system acquired a 512 x 110 x 1024 hypercube (56,320 spectra) from four 47-mm-diameter Petri dishes containing four powder samples. Chemical images were created for detecting two adulterants (melamine and dicyandiamide) that had been mixed into the milk powder.

  16. Real time automated inspection

    DOEpatents

    Fant, K.M.; Fundakowski, R.A.; Levitt, T.S.; Overland, J.E.; Suresh, B.R.; Ulrich, F.W.

    1985-05-21

    A method and apparatus are described relating to the real-time automatic detection and classification of characteristic surface imperfections occurring on the surfaces of material of interest, such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface, which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges, and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and utilized to classify the objects into characteristic surface imperfection types. 43 figs.

  17. In-line monitoring of Li-ion battery electrode porosity and areal loading using active thermal scanning - modeling and initial experiment

    DOE PAGES

    Rupnowski, Przemyslaw; Ulsh, Michael J.; Sopori, Bhushan; ...

    2017-08-18

    This work focuses on a new technique called active thermal scanning for in-line monitoring of porosity and areal loading of Li-ion battery electrodes. In this technique a moving battery electrode is subjected to thermal excitation and the induced temperature rise is monitored using an infra-red camera. Static and dynamic experiments with speeds up to 1.5 m min-1 are performed on both cathodes and anodes, and a combined micro- and macro-scale finite element thermal model of the system is developed. It is shown experimentally and through simulations that during thermal scanning the temperature profile generated in an electrode depends on both coating porosity (or areal loading) and thickness. Here, it is concluded that by inverting this relation the porosity (or areal loading) can be determined, if thermal response and thickness are simultaneously measured.

  18. In-line monitoring of Li-ion battery electrode porosity and areal loading using active thermal scanning - modeling and initial experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupnowski, Przemyslaw; Ulsh, Michael J.; Sopori, Bhushan

    This work focuses on a new technique called active thermal scanning for in-line monitoring of porosity and areal loading of Li-ion battery electrodes. In this technique a moving battery electrode is subjected to thermal excitation and the induced temperature rise is monitored using an infra-red camera. Static and dynamic experiments with speeds up to 1.5 m min-1 are performed on both cathodes and anodes, and a combined micro- and macro-scale finite element thermal model of the system is developed. It is shown experimentally and through simulations that during thermal scanning the temperature profile generated in an electrode depends on both coating porosity (or areal loading) and thickness. It is concluded that by inverting this relation the porosity (or areal loading) can be determined, if thermal response and thickness are simultaneously measured.

  19. In-line monitoring of Li-ion battery electrode porosity and areal loading using active thermal scanning - modeling and initial experiment

    NASA Astrophysics Data System (ADS)

    Rupnowski, Przemyslaw; Ulsh, Michael; Sopori, Bhushan; Green, Brian G.; Wood, David L.; Li, Jianlin; Sheng, Yangping

    2018-01-01

    This work focuses on a new technique called active thermal scanning for in-line monitoring of porosity and areal loading of Li-ion battery electrodes. In this technique a moving battery electrode is subjected to thermal excitation and the induced temperature rise is monitored using an infra-red camera. Static and dynamic experiments with speeds up to 1.5 m min-1 are performed on both cathodes and anodes, and a combined micro- and macro-scale finite element thermal model of the system is developed. It is shown experimentally and through simulations that during thermal scanning the temperature profile generated in an electrode depends on both coating porosity (or areal loading) and thickness. It is concluded that by inverting this relation the porosity (or areal loading) can be determined, if thermal response and thickness are simultaneously measured.
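
    The inversion the authors describe can be illustrated with a one-dimensional calibration lookup; every number below is an invented placeholder (the paper's actual relation comes from its finite element model and experiments), and the coating thickness is assumed fixed and measured separately:

        import numpy as np

        # Hypothetical calibration: simulated temperature rise (K) versus
        # coating porosity at one fixed, separately measured thickness
        porosity_grid = np.linspace(0.20, 0.50, 31)
        temp_rise_grid = 8.0 - 6.0 * porosity_grid  # placeholder monotonic model

        def porosity_from_temperature(measured_rise_K):
            # Invert the monotonic calibration curve by interpolation
            # (np.interp needs ascending x, hence the reversed arrays)
            return float(np.interp(measured_rise_K,
                                   temp_rise_grid[::-1], porosity_grid[::-1]))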

  20. Multipurpose Hyperspectral Imaging System

    NASA Technical Reports Server (NTRS)

    Mao, Chengye; Smith, David; Lanoue, Mark A.; Poole, Gavin H.; Heitschmidt, Jerry; Martinez, Luis; Windham, William A.; Lawrence, Kurt C.; Park, Bosoon

    2005-01-01

    A hyperspectral imaging system of high spectral and spatial resolution that incorporates several innovative features, including a focal plane scanner (U.S. Patent 6,166,373), has been developed. This feature enables the system to be used for both airborne/spaceborne and laboratory hyperspectral imaging with or without relative movement of the imaging system, and it can be used to scan a target of any size as long as the target can be imaged at the focal plane; examples include automated inspection of food items and identification of single-celled organisms. The spectral resolution of this system is greater than that of prior terrestrial multispectral imaging systems. Moreover, unlike prior high-spectral-resolution airborne and spaceborne hyperspectral imaging systems, this system does not rely on relative movement of the target and the imaging system to sweep an imaging line across a scene. The compact system consists of a front objective mounted on a translation stage with a motorized actuator, and a line-slit imaging spectrograph mounted within a rotary assembly with a rear adaptor to a charge-coupled-device (CCD) camera. Push-broom scanning is carried out by the motorized actuator, which can be controlled either manually by an operator or automatically by a computer to drive the line slit across an image at a focal plane of the front objective. To reduce cost, the system has been designed to integrate as many off-the-shelf components as possible, including the CCD camera and spectrograph. The system has achieved high spectral and spatial resolutions by using a high-quality CCD camera, spectrograph, and front objective lens. Fixtures for attachment of the system to a microscope (U.S. Patent 6,495,818 B1) make it possible to acquire multispectral images of single cells and other microscopic objects.

  1. Sheet-scanned dual-axis confocal microscopy using Richardson-Lucy deconvolution.

    PubMed

    Wang, D; Meza, D; Wang, Y; Gao, L; Liu, J T C

    2014-09-15

    We have previously developed a line-scanned dual-axis confocal (LS-DAC) microscope with subcellular resolution suitable for high-frame-rate diagnostic imaging at shallow depths. Due to the loss of confocality along one dimension, the contrast (signal-to-background ratio) of an LS-DAC microscope is degraded compared to that of a point-scanned DAC microscope. However, by using an sCMOS camera for detection, a short oblique light sheet is imaged at each scanned position. Therefore, by scanning the light sheet in only one dimension, a thin 3D volume is imaged. Both sequential two-dimensional deconvolution and three-dimensional deconvolution are performed on the thin image volume to improve the resolution and contrast of one en face confocal image section at the center of the volume, a technique we call sheet-scanned dual-axis confocal (SS-DAC) microscopy.
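
    Richardson-Lucy deconvolution, named in the title, iterates a simple multiplicative update; a minimal 2D sketch follows (the paper applies 2D and 3D variants, and a measured PSF would replace the argument here):

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=30):
            # image: blurred, non-negative 2D array; psf: normalized to sum to 1
            est = np.full_like(image, image.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(est, psf, mode="same")
                ratio = image / np.clip(blurred, 1e-12, None)
                est *= fftconvolve(ratio, psf_mirror, mode="same")
            return est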

  2. Temperature measurements during laser skin welding

    NASA Astrophysics Data System (ADS)

    Fried, Nathaniel M.; Choi, Bernard; Welch, Ashley J.; Walsh, Joseph T., Jr.

    1999-06-01

    A thermal camera was used to measure surface temperatures during laser skin welding to provide feedback for optimization of the laser parameters. Two-cm-long, full-thickness incisions were made in guinea pig skin. India ink was used as an absorber. Continuous-wave, 1.06-μm Nd:YAG laser radiation was scanned over the incisions, producing a pulse duration of approximately 100 ms. Cooling durations between scans of 1.6, 4.0, and 8.0 s were studied, with total operation times of 3, 5, and 10 min, respectively. A laser spot diameter of 5 mm was used with the power held constant at 10 W. Thermal images were obtained at 30 frames per second with a thermal camera detecting 3.5-μm radiation. Surface temperatures were recorded at 0, 1, and 6 mm from the center line of the incision. Cooling durations between scans of 1.6 s and 4.0 s in vitro resulted in temperatures at the weld site remaining above ~65°C for prolonged periods of time. Cooling durations between scans as long as 8.0 s were sufficient both in vitro and in vivo to prevent a significant rise in baseline temperatures at the weld site over time.

  3. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 x 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  4. Commissioning of a new SeHCAT detector and comparison with an uncollimated gamma camera.

    PubMed

    Taylor, Jonathan C; Hillel, Philip G; Himsworth, John M

    2014-10-01

    Measurements of SeHCAT (tauroselcholic [75selenium] acid) retention have been used to diagnose bile acid malabsorption for a number of years. In current UK practice the vast majority of centres calculate uptake using an uncollimated gamma camera. Because of ever-increasing demands on gamma camera time, a new 'probe' detector was designed, assembled and commissioned. To validate the system, nine patients were scanned at day 0 and day 7 with both the new probe detector and an uncollimated gamma camera. Commissioning results were largely in line with expectations. Spatial resolution (full-width 95% of maximum) at 1 m was 36.6 cm, the background count rate was 24.7 cps and sensitivity at 1 m was 720.8 cps/MBq. The patient comparison study showed a mean absolute difference in retention measurements of 0.8% between the probe and uncollimated gamma camera, and SD of ± 1.8%. The study demonstrated that it is possible to create a simple, reproducible SeHCAT measurement system using a commercially available scintillation detector. Retention results from the probe closely agreed with those from the uncollimated gamma camera.
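
    Retention itself is a simple ratio of background-subtracted, decay-corrected whole-body counts; a sketch under the assumption of a 75Se half-life of roughly 119.8 days and geometry-matched day-0 and day-7 measurements:

        import math

        def sehcat_retention(day0_counts, day0_bkg, day7_counts, day7_bkg,
                             interval_days=7.0, half_life_days=119.8):
            # Undo the physical decay of 75Se over the measurement interval
            decay_correction = math.exp(math.log(2.0) * interval_days / half_life_days)
            net0 = day0_counts - day0_bkg
            net7 = day7_counts - day7_bkg
            return 100.0 * decay_correction * net7 / net0  # percent retention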

  5. Slow Scan Telemedicine

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Originally developed under contract for NASA by Ball Bros. Research Corporation for acquiring visual information from lunar and planetary spacecraft, the system uses a standard closed-circuit camera connected to a device called a scan converter, which slows the stream of images to match an audio circuit, such as a telephone line. Transmitted to its destination, the image is reconverted by another scan converter and displayed on a monitor. The technique allows transmission of X-rays, nuclear scans, ultrasonic imagery, thermograms, electrocardiograms, or live views of the patient. It also allows conferencing and consultation among medical centers, general practitioners, specialists, and disease control centers. Commercialized by Colorado Video, Inc., its major employment is in business and industry for teleconferencing, cable TV news, transmission of scientific/engineering data, security, information retrieval, insurance claim adjustment, instructional programs, and remote viewing of advertising layouts, real estate, construction sites, or products.

  6. A High Performance Micro Channel Interface for Real-Time Industrial Image Processing

    Treesearch

    Thomas H. Drayer; Joseph G. Tront; Richard W. Conners

    1995-01-01

    Data collection and transfer devices are critical to the performance of any machine vision system. The interface described in this paper collects image data from a color line scan camera and transfers the data obtained into the system memory of a Micro Channel-based host computer. A maximum data transfer rate of 20 Mbytes/sec can be achieved using the DMA capabilities...

  7. Multi-band infrared camera systems

    NASA Astrophysics Data System (ADS)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS-addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid-nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging of up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.

  8. Forensics for flatbed scanners

    NASA Astrophysics Data System (ADS)

    Gloe, Thomas; Franz, Elke; Winkler, Antje

    2007-02-01

    Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, the utilization of image sensor noise for identifying the origin of scanned images seems to be possible. As characterization of flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners which we expect to influence the identification. This was confirmed by extensive tests: Identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and down scaling as examples for such particularities of flatbed scanners on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
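
    The identification principle, correlating an image's sensor noise residual against per-device reference patterns, can be sketched as follows; the Gaussian filter is a stand-in for the denoising filters used in the camera-identification literature, and all names are illustrative:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def noise_residual(img, sigma=1.0):
            # Approximate the sensor noise pattern as image minus a denoised copy
            return img.astype(float) - gaussian_filter(img.astype(float), sigma)

        def identify(img, reference_patterns, sigma=1.0):
            # reference_patterns: {device_name: reference array, same shape as img,
            # e.g., a flatbed's sensor-line pattern averaged over many scans}
            res = noise_residual(img, sigma)
            res = (res - res.mean()) / (res.std() + 1e-12)
            scores = {}
            for name, ref in reference_patterns.items():
                r = (ref - ref.mean()) / (ref.std() + 1e-12)
                scores[name] = float((res * r).mean())  # normalized correlation
            return max(scores, key=scores.get), scores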

  9. Performance of a scanning laser line striper in outdoor lighting

    NASA Astrophysics Data System (ADS)

    Mertz, Christoph

    2013-05-01

    For search and rescue robots and reconnaissance robots it is important to detect objects in their vicinity. We have developed a scanning laser line striper that can produce dense 3D images using active illumination. The scanner consists of a camera and a MEMS-micro mirror based projector. It can also detect the presence of optically difficult material like glass and metal. The sensor can be used for autonomous operation or it can help a human operator to better remotely control the robot. In this paper we will evaluate the performance of the scanner under outdoor illumination, i.e. from operating in the shade to operating in full sunlight. We report the range, resolution and accuracy of the sensor and its ability to reconstruct objects like grass, wooden blocks, wires, metal objects, electronic devices like cell phones, blank RPG, and other inert explosive devices. Furthermore we evaluate its ability to detect the presence of glass and polished metal objects. Lastly we report on a user study that shows a significant improvement in a grasping task. The user is tasked with grasping a wire with the remotely controlled hand of a robot. We compare the time it takes to complete the task using the 3D scanner with using a traditional video camera.

  10. Line-scan system for continuous hand authentication

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofeng; Kong, Lingsheng; Diao, Zhihui; Jia, Ping

    2017-03-01

    An increasing number of heavy machinery and vehicles have come into service, giving rise to significant concern over protecting these high-security systems from misuse. Conventionally, authentication performed merely at the initial login may not be sufficient for detecting intruders throughout the operating session. To address this critical security flaw, a line-scan continuous hand authentication system with the appearance of an operating rod is proposed. Since the operating rod is gripped throughout the operating session, it offers a possible means of unobtrusively recording personal characteristics for continuous monitoring. Ergonomic considerations, both physiological and psychological, are fully taken into account. Under the shape constraints, a highly integrated line-scan sensor, a controller unit, and a gear motor with encoder are utilized. This system is suitable for both desktop and embedded platforms with a universal serial bus interface. The volume of the proposed system is smaller than 15% of that of current multispectral area-camera systems. Based on experiments on a database of 4000 images from 200 volunteers, a competitive equal error rate of 0.1179% is achieved, which is far more accurate than state-of-the-art continuous authentication systems using other modalities.

  11. Line Scanning Thermography for Rapid Nondestructive Inspection of Large Scale Composites

    NASA Astrophysics Data System (ADS)

    Chung, S.; Ley, O.; Godinez, V.; Bandos, B.

    2011-06-01

    As next-generation structures utilize larger amounts of composite materials, a rigorous and reliable method is needed to inspect these structures in order to prevent catastrophic failure and extend service life. Current inspection methods, such as ultrasonics, generally require extended downtime and man-hours, as they are typically carried out via point-by-point measurements. A novel Line Scanning Thermography (LST) system has been developed for the non-contact, large-scale field inspection of composite structures with faster scanning times than conventional thermography systems. LST is a patented dynamic thermography technique in which the heat source and thermal camera move in tandem, allowing the continuous scan of long surfaces without loss of resolution. The current system can inspect an area of 10 in2 per second at a resolution of 0.05 × 0.03 in2. Advanced data-gathering protocols have been implemented for near-real-time damage visualization, along with post-analysis algorithms for damage interpretation. The system has been used to successfully detect defects (delamination, dry areas) in fiber-reinforced composite sandwich panels for Navy applications, as well as impact damage in composite missile cases and armor ceramic panels.

  12. Design of a portable optical emission tomography system for microwave induced compact plasma for visible to near-infrared emission lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathore, Kavita, E-mail: kavira@iitk.ac.in, E-mail: pmunshi@iitk.ac.in, E-mail: sudeepb@iitk.ac.in; Munshi, Prabhat, E-mail: kavira@iitk.ac.in, E-mail: pmunshi@iitk.ac.in, E-mail: sudeepb@iitk.ac.in; Bhattacharjee, Sudeep, E-mail: kavira@iitk.ac.in, E-mail: pmunshi@iitk.ac.in, E-mail: sudeepb@iitk.ac.in

    A new non-invasive diagnostic system is developed for Microwave Induced Plasma (MIP) to reconstruct tomographic images of a 2D emission profile. Compact MIP systems have wide application in industry as well as in research, such as thrusters for space propulsion, high-current ion beams, and the creation of negative ions for heating of fusion plasma. The emission profile depends on two crucial parameters, namely the electron temperature and density (over the entire spatial extent) of the plasma system. Emission tomography provides basic understanding of plasmas, and it is very useful for monitoring the internal structure of plasma phenomena without disturbing the actual processes. This paper presents the development of a compact, modular, and versatile Optical Emission Tomography (OET) tool for a cylindrical, magnetically confined MIP system. It has eight slit-hole cameras, each consisting of a complementary metal-oxide-semiconductor linear image sensor for light detection. Optical noise is reduced by using an aspheric lens and interference band-pass filters in each camera. The entire cylindrical plasma can be scanned with an automated sliding-ring mechanism arranged in a fan-beam data collection geometry. The design of the camera includes a unique possibility to incorporate different filters to select particular wavelengths of light from the plasma. This OET system includes band-pass filters for the argon emission lines at 750 nm, 772 nm, and 811 nm and the hydrogen emission lines Hα (656 nm) and Hβ (486 nm). A convolution back projection algorithm is used to obtain the tomographic images of the plasma emission lines. The paper mainly focuses on (a) the design of the OET system in detail and (b) a study of the emission profile for the 750 nm argon emission line to validate the system design.
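
    The reconstruction step can be exercised end to end with scikit-image's Radon tools; this is a hedged stand-in, since the paper's geometry is fan-beam and its algorithm is convolution back projection, whereas skimage's radon/iradon assume parallel-beam projections:

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, rescale

        # Synthetic emission profile standing in for the plasma cross-section
        emission = rescale(shepp_logan_phantom(), 0.25)
        angles = np.linspace(0.0, 180.0, 60, endpoint=False)
        sinogram = radon(emission, theta=angles)         # line-integral projections
        reconstruction = iradon(sinogram, theta=angles)  # filtered back projection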

  13. Laser Scanning In Inspection

    NASA Astrophysics Data System (ADS)

    West, Patricia; Baker, Lionel R.

    1989-03-01

    This paper is a review of the applications of laser scanning in inspection. The reasons for the choice of a laser in flying-spot scanning, and the optical properties of a laser beam that are of value in a scanning instrument, are given. The many methods of scanning laser beams in both one and two dimensions are described. The use of one-dimensional laser scanners for automatic surface inspection of transmissive and reflective products is covered in detail, with particular emphasis on light collection techniques. On-line inspection applications mentioned include photographic film web, metal strip products, paper web, glass sheet, car body paint surfaces, and internal cylinder bores. Two-dimensional laser scanning is employed in applications where increased resolution, increased depth of focus, and better contrast are required compared with conventional vidicon TV or solid-state array cameras. Examples such as special laser scanning microscope systems and a TV-compatible system for use in restricted areas of a nuclear reactor are described. The technical and economic benefits and limitations of laser scanning video systems are compared with those of conventional TV and CCD array devices.

  14. VirtoScan - a mobile, low-cost photogrammetry setup for fast post-mortem 3D full-body documentations in x-ray computed tomography and autopsy suites.

    PubMed

    Kottner, Sören; Ebert, Lars C; Ampanozi, Garyfalia; Braun, Marcel; Thali, Michael J; Gascho, Dominic

    2017-03-01

    Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables, and examination couches. The VirtoScan is based on a moveable frame that carries 7 digital single-lens reflex cameras. A remote control attached to each camera allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedures took 3:34 min for the 3D surface documentation of one side of the body, compared to 20:20 min of acquisition time when using our in-house standard. A surface model comparison between the high-resolution output of our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentation in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.

  15. Emission Spectroscopy of the Interior of Optically Dense Post-Detonation Fireballs

    DTIC Science & Technology

    2013-03-01

    ...sample. Light from the fiber optics was sent to a spectrograph located in a shielded observation room several meters away from the explosive charge. The spectrograph was constructed from a 1/8 m spectrometer (Oriel) interfaced to a 4096-pixel line-scan camera (Basler Sprint) with a data collection rate... FIG. 3: Time-resolved emission spectra obtained from detonation of 20 g charges of RDX containing 20 wt. % aluminum nanoparticles.

  16. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2016-04-01

    Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500-Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb; these are important for imaging shadowed craters, studying ~1-meter-size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset. This avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.

  17. Development and experimental testing of an optical micro-spectroscopic technique incorporating true line-scan excitation.

    PubMed

    Biener, Gabriel; Stoneman, Michael R; Acbas, Gheorghe; Holz, Jessica D; Orlova, Marianna; Komarova, Liudmila; Kuchin, Sergei; Raicu, Valerică

    2013-12-27

    Multiphoton micro-spectroscopy, employing diffraction optics and electron-multiplying CCD (EMCCD) cameras, is a suitable method for determining protein complex stoichiometry, quaternary structure, and spatial distribution in living cells using Förster resonance energy transfer (FRET) imaging. The method provides highly resolved spectra of molecules or molecular complexes at each image pixel, and it does so on a timescale shorter than that of molecular diffusion, which would otherwise scramble the spectral information. Acquisition of an entire spectrally resolved image, however, is slower than in broad-bandwidth microscopes, because it takes longer to collect the same number of photons at each emission wavelength than over a broad bandwidth. Here, we demonstrate an optical micro-spectroscopic scheme that employs a laser beam shaped into a line to excite multiple sample voxels in parallel. The method presents dramatically increased sensitivity and/or acquisition speed and, at the same time, has excellent spatial and spectral resolution, similar to point-scan configurations. When applied to FRET imaging using an oligomeric FRET construct expressed in living cells, consisting of a FRET acceptor linked to three donors, the technique based on line-shaped excitation provides higher accuracy than the point-scan approach, and it reduces artifacts caused by photobleaching and other undesired photophysical effects.

  18. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    ScienceCinema

    Ralph James

    2017-12-09

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  19. Research Instruments

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The GENETI-SCANNER, newest product of Perceptive Scientific Instruments, Inc. (PSI), rapidly scans slides and locates, digitizes, measures, and classifies specific objects and events in research and diagnostic applications. Founded by former NASA employees, PSI bases its primary product line on NASA image processing technology. The instruments perform karyotyping - a process employed in the analysis and classification of chromosomes - using a video camera mounted on a microscope. Images are digitized, enabling chromosome image enhancement. The system enables karyotyping to be done significantly faster, increasing productivity and lowering costs. The product is no longer being manufactured.

  20. Dynamically reconfigurable holographic metasurface aperture for a Mills-Cross monochromatic microwave camera.

    PubMed

    Yurduseven, Okan; Marks, Daniel L; Fromenteze, Thomas; Smith, David R

    2018-03-05

    We present a reconfigurable, dynamic beam steering holographic metasurface aperture to synthesize a microwave camera at K-band frequencies. The aperture consists of a 1D printed microstrip transmission line with the front surface patterned into an array of slot-shaped subwavelength metamaterial elements (or meta-elements) dynamically tuned between "ON" and "OFF" states using PIN diodes. The proposed aperture synthesizes a desired radiation pattern by converting the waveguide-mode to a free space radiation by means of a binary modulation scheme. This is achieved in a holographic manner; by interacting the waveguide-mode (reference-wave) with the metasurface layer (hologram layer). It is shown by means of full-wave simulations that using the developed metasurface aperture, the radiated wavefronts can be engineered in an all-electronic manner without the need for complex phase-shifting circuits or mechanical scanning apparatus. Using the dynamic beam steering capability of the developed antenna, we synthesize a Mills-Cross composite aperture, forming a single-frequency all-electronic microwave camera.

  1. 1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second

    NASA Astrophysics Data System (ADS)

    Glenn, William E.; Marcinka, John W.

    1998-09-01

    For over a decade, the broadcast industry, the film industry and the computer industry have had a long-range objective to originate high definition images with progressive scan. This produces images with better vertical resolution and much fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlace to progressive because no cameras were available in that format with the 1920 X 1080 resolution that had obtained international acceptance for high definition program production. The camera described in this paper produces an output in that format derived from two 1920 X 1080 CCD sensors produced by Eastman Kodak.

  2. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured-light 3D camera system depend upon its capability to handle object surfaces with large reflectance variation, traded off against the required number of patterns to be projected. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera one or more times to capture one or more projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, so that the system is capable of projecting different types of patterns for different scan-speed applications. This lets the system capture a high-quality 3D point cloud even for surfaces of large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is adaptively generated in such a way that the position and the number of triggers are automatically determined according to camera exposure settings. In other words, the projection frequency is adaptive to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it does not require any external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506

  3. Low-cost and high-speed optical mark reader based on an intelligent line camera

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Chan, Leona; Fung, Celine; Albrecht, Martin

    2003-08-01

    Optical Mark Recognition (OMR) is thoroughly reliable and highly efficient, provided that high standards are maintained at both the planning and implementation stages. It is necessary to ensure that OMR forms are designed with due attention to data integrity checks, that the best use is made of features built into the OMR device, that data integrity is checked before the data are processed, and that data are validated before use. This paper describes the design and implementation of an OMR prototype system for marking multiple-choice tests automatically. Parameter testing was carried out before the platform and the multiple-choice answer sheet were designed. Position recognition and position verification methods were developed and implemented in an intelligent line scan camera. The position recognition process is implemented in a Field Programmable Gate Array (FPGA), whereas the verification process is implemented in a microcontroller. The verified results are then sent to the Graphical User Interface (GUI) for answer checking and statistical analysis. At the end of the paper, the proposed OMR system is compared with commercially available systems on the market.
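
    The answer-checking stage reduces to deciding which bubbles are filled; a minimal sketch assuming a grayscale scan and known bubble regions (all names and thresholds here are illustrative, not taken from the paper):

        import numpy as np

        def read_answers(sheet, bubble_rois, fill_thresh=0.4):
            # sheet       : 2D grayscale array, 0 = black ink, 255 = white paper
            # bubble_rois : {question: [(row_slice, col_slice), ...] per choice}
            answers = {}
            for q, rois in bubble_rois.items():
                # Fraction of dark pixels inside each bubble region
                fills = [float((sheet[r, c] < 128).mean()) for r, c in rois]
                best = int(np.argmax(fills))
                answers[q] = best if fills[best] >= fill_thresh else None  # None = blank
            return answers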

  4. Completely optical orientation determination for an unstabilized aerial three-line camera

    NASA Astrophysics Data System (ADS)

    Wohlfeil, Jürgen

    2010-10-01

    Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision involves large effort unless extensive camera stabilization is used. But stabilization likewise entails high costs, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach builds on previous work on determining the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then reliably be determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.

  5. Multi-MHz laser-scanning single-cell fluorescence microscopy by spatiotemporally encoded virtual source array

    PubMed Central

    Wu, Jianglai; Tang, Anson H. L.; Mok, Aaron T. Y.; Yan, Wenwei; Chan, Godfrey C. F.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2017-01-01

    Apart from the spatial resolution enhancement, scaling of temporal resolution, equivalently the imaging throughput, of fluorescence microscopy is of equal importance in advancing cell biology and clinical diagnostics. Yet, this attribute has mostly been overlooked because of the inherent speed limitation of existing imaging strategies. To address the challenge, we employ an all-optical laser-scanning mechanism, enabled by an array of reconfigurable spatiotemporally-encoded virtual sources, to demonstrate ultrafast fluorescence microscopy at line-scan rate as high as 8 MHz. We show that this technique enables high-throughput single-cell microfluidic fluorescence imaging at 75,000 cells/second and high-speed cellular 2D dynamical imaging at 3,000 frames per second, outperforming the state-of-the-art high-speed cameras and the gold-standard laser scanning strategies. Together with its wide compatibility to the existing imaging modalities, this technology could empower new forms of high-throughput and high-speed biological fluorescence microscopy that was once challenged. PMID:28966855

  6. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    NASA Astrophysics Data System (ADS)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2015-06-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging which results in high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinearity behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
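
    The heterodyne trick relies only on the superlinear response; a quick numerical check (all frequencies are illustrative, and the 1.7 exponent mimics the wafer's measured nonlinearity) shows the difference-frequency component appearing in the slow band a camera can follow:

        import numpy as np

        fs = 100_000.0                   # sample rate, Hz
        t = np.arange(0.0, 1.0, 1.0 / fs)
        f1, f2 = 4000.0, 4010.0          # two high modulation frequencies, Hz
        intensity = 2.0 + np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
        pl = intensity ** 1.7            # superlinear photoluminescence response

        spectrum = np.abs(np.fft.rfft(pl - pl.mean()))
        freqs = np.fft.rfftfreq(len(pl), 1.0 / fs)
        low = freqs < 100.0              # the band a slow camera can resolve
        print(f"beat component at {freqs[low][np.argmax(spectrum[low])]:.0f} Hz")  # ~10 Hz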

  7. Two-dimensional laser servoing for precision motion control of an ODV robotic license plate recognition system

    NASA Astrophysics Data System (ADS)

    Song, Zhen; Moore, Kevin L.; Chen, YangQuan; Bahl, Vikas

    2003-09-01

    As an outgrowth of a series of projects focused on mobility of unmanned ground vehicles (UGV), an omni-directional (ODV), multi-robot, autonomous mobile parking security system has been developed. The system has two types of robots: the low-profile Omni-Directional Inspection System (ODIS), which can be used for under-vehicle inspections, and the mid-sized T4 robot, which serves as a "marsupial mothership" for the ODIS vehicles and performs coarse-resolution inspection. A key task for the T4 robot is license plate recognition (LPR). For a successful LPR task without compromising the recognition rate, the robot must be able to identify the bumper locations of vehicles in the parking area and then precisely position the LPR camera relative to the bumper. This paper describes a 2D laser scanner based approach to bumper identification and laser servoing for the T4 robot. The system uses a gimbal-mounted scanning laser. As the T4 robot travels down a row of parking stalls, data is collected from the laser every 100 ms. For each parking stall in the range of the laser during the scan, the data is matched to a "bumper box" corresponding to where a car bumper is expected, resulting in a point cloud of data corresponding to a vehicle bumper for each stall. Next, recursive line-fitting algorithms are used to determine a line for the data in each stall's "bumper box." The fitting technique uses Hough-based transforms, which are robust against segmentation problems and fast enough for real-time line fitting. Once a bumper line is fitted with acceptable confidence, the bumper location is passed to the T4 motion controller, which moves to position the LPR camera properly relative to the bumper. The paper includes examples and results that show the effectiveness of the technique, including its ability to work in real time.
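
    The paper's fit uses Hough-based transforms; as a hedged alternative illustration of robust line fitting to a "bumper box" point cloud, a small RANSAC routine:

        import numpy as np

        def ransac_line(points, n_iter=200, tol=0.02, seed=0):
            # points: (N, 2) array of laser hits (x, y) in meters
            # Returns (point_on_line, unit_direction, inlier_mask)
            rng = np.random.default_rng(seed)
            best_inliers, best = None, None
            for _ in range(n_iter):
                p, q = points[rng.choice(len(points), 2, replace=False)]
                d = q - p
                norm = np.linalg.norm(d)
                if norm < 1e-9:
                    continue  # degenerate sample, try again
                d = d / norm
                # Perpendicular distance of every point to the candidate line
                dist = np.abs((points[:, 0] - p[0]) * d[1]
                              - (points[:, 1] - p[1]) * d[0])
                inliers = dist < tol
                if best is None or inliers.sum() > best_inliers.sum():
                    best_inliers, best = inliers, (p, d)
            return best[0], best[1], best_inliers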

  8. Skeletal Scintigraphy (Bone Scan)

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  9. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive, depth-resolved retinal microvasculature images of the human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Because of its high flow sensitivity, UHS-OMAG is much more susceptible than the traditional OMAG system to tissue motion arising from involuntary movement of the human eye and head. To mitigate these motion artifacts in the final images, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. This new algorithm demonstrates at least 8 to 25 times higher motion tolerance, critical for the UHS-OMAG system to achieve retinal microvasculature images of high quality. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized: the first offers low lateral resolution (16 μm) over a wide field of view (4 × 3 mm2 with a single scan and 7 × 8 mm2 with multiple scans), while the second offers high lateral resolution (5 μm) over a narrow field of view (1.5 × 1.2 mm2 with a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.

  10. Integrating Terrestrial Time-Lapse Photography with Laser Scanning to Distinguish the Drivers of Movement at Sólheimajökull, Iceland

    NASA Astrophysics Data System (ADS)

    How, P.; James, M. R.; Wynn, P.

    2014-12-01

    Glacier movement is attributed to a sensitive configuration of driving forces. Here, we present an approach designed to evaluate the drivers of movement at Sólheimajökull, an outlet glacier from the Myrdalsjökull ice cap, Iceland, by combining terrestrial time-lapse photography and laser scanning (TLS). A time-lapse camera (a dSLR with intervalometer and solar-recharged battery power supply) collected hourly data over the summer of 2013. The data are subject to all the difficulties that are usually present in long time-lapse sequences, such as highly variable illumination and visibility conditions, evolving surfaces, and camera instabilities. Feature-tracking software [1] was used to: 1) track regions of static topography (e.g. the skyline) from which camera alignment could be continuously updated throughout the sequence; and 2) track glacial surface features for velocity estimation. Absolute georeferencing of the image sequence was carried out by registering the camera to a TLS survey acquired at the beginning of the monitoring period. A second TLS survey (July 2013) provided an additional 3D surface. By assuming glacial features moved in approximately planimetrically straight lines between the two survey dates, combining the two TLS surfaces with the monoscopic feature tracking allows 3D feature tracks to be derived. Such tracks will enable contributions from different drivers (e.g. surface melting) to be extracted, even from imagery not acquired perpendicular to the glacier's motion. At Sólheimajökull, our aim is to elucidate any volcanic contribution to the observed movement.[1] http://www.lancaster.ac.uk/staff/jamesm/software/pointcatcher.htm

  11. NOSL experiment support

    NASA Technical Reports Server (NTRS)

    Brook, M.

    1986-01-01

    An optical lightning detector was constructed and flown, along with Vinton cameras and a Fairchild Line Scan Spectrometer, on a U-2 during the summer of 1979. The U-2 lightning data was obtained in daylight, and was supplemented with ground truth taken at Langmuir Laboratory. Simulations were prepared as required to establish experiment operating procedures and science training for the astronauts who would operate the Night/Day Optical Survey of Thunderstorm Lightning (NOSL) equipment during the STS-2 NOSL experiment on the Space Shuttle. Data was analyzed and papers were prepared for publication.

  12. Acousto-Optic Applications for Multichannel Adaptive Optical Processor

    DTIC Science & Technology

    1992-06-01

    AO cell and the two-channel line-scan camera system described in Subsection 4.1. The AO material for this IntraAction AOD-70 device was flint glass (n... Single-channel AO cell: flint glass, n = 1.68, 60.0. Multichannel AO cell: TeO2, n = 2.26, 20.0. Beam splitter: glass, n = 1.515, 50.8. Multichannel correlation was... Tone Intermodulation Dynamic Ranges of Longitudinal TeO2 Bragg Cells for Several Acoustic Power Densities (4-92). SOURCE: Reference 21, TR-92

  13. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size, and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
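    As a sketch of the focal-stack step, the Python fragment below merges a stack of differently focused slices by picking, per pixel, the slice with the highest local sharpness. The Laplacian-magnitude focus measure and window size are common choices assumed here; they are not the paper's exact algorithm, which also handles the large magnification variations it mentions.

      import numpy as np
      from scipy.ndimage import laplace, uniform_filter

      def extended_dof(stack):
          """stack: (n_slices, H, W) grayscale focal stack -> (H, W) EDoF image."""
          # Local sharpness: smoothed Laplacian magnitude per slice
          sharp = np.stack([uniform_filter(np.abs(laplace(s.astype(float))), size=9)
                            for s in stack])
          best = sharp.argmax(axis=0)                    # sharpest slice per pixel
          return np.take_along_axis(stack, best[None], axis=0)[0]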

  14. A new spherical scanning system for infrared reflectography of paintings

    NASA Astrophysics Data System (ADS)

    Gargano, M.; Cavaliere, F.; Viganò, D.; Galli, A.; Ludwig, N.

    2017-03-01

    Infrared reflectography is an imaging technique used to visualize the underdrawings of ancient paintings; it relies on the fact that most pigment layers are quite transparent to infrared radiation in the spectral band between 0.8 μm and 2.5 μm. InGaAs sensor cameras are nowadays the devices most used to visualize underdrawings, but because of the small size of the detectors, these cameras are usually mounted on scanning systems to record high-resolution reflectograms. This work describes a portable scanning-system prototype based on a distinctive spherical scanning geometry, built around a lightweight, low-cost motorized head. The head was designed to allow the refocusing adjustment needed to compensate for the camera-to-painting distance, which varies as the camera rotates. The prototype was tested first in the laboratory and then in situ on the Giotto panel "God the Father with Angels", at a resolution of 256 pixels per inch. The system's performance is comparable with that of other reflectographic devices, with the advantage of extending the scanned area up to 1 m × 1 m with a 40 min scanning time. The present configuration can easily be modified to increase the resolution up to 560 pixels per inch or to extend the scanned area up to 2 m × 2 m.
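    The refocusing requirement follows from simple geometry. As a minimal sketch, assuming both rotation axes pass through the camera's optical centre and the painting is a plane at perpendicular distance d_0, the working distance grows with pan angle theta and tilt angle phi, and the thin-lens equation gives the image-plane position v that the motorized head must restore:

      d(\theta,\varphi) = \frac{d_0}{\cos\theta\,\cos\varphi},
      \qquad
      \frac{1}{f} = \frac{1}{d(\theta,\varphi)} + \frac{1}{v(\theta,\varphi)}
      \quad\Longrightarrow\quad
      v(\theta,\varphi) = \frac{f\,d(\theta,\varphi)}{d(\theta,\varphi)-f}.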

  15. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
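    The 432-million-pixel figure can be checked with one line of arithmetic, assuming the standard 9 in x 9 in (228.6 mm) aerial film frame, which the abstract does not state explicitly (Python):

      side_mm = 228.6             # 9 in x 9 in film format (assumed)
      spot_mm = 0.011             # 11 um scanning spot
      pixels = (side_mm / spot_mm) ** 2
      print(round(pixels / 1e6))  # -> 432 (million pixels)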

  16. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Next, several pivotal designs of the imaging system are presented, including the focal plane module, video signal processing, the imaging system controller, synchronous photography for the forward, nadir and backward cameras, and the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are reported. The results are as follows: the synchronization precision of the forward, nadir and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the nadir camera's line-matrix CCD satisfies the buffer requirements of the LMCCD focal plane module; the SNR of each CCD image, tested in the laboratory under typical working conditions (solar incidence angle of 30°, ground reflectivity of 0.3), is better than 95; and the temperature of the focal plane module is kept below 30 °C over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane temperature control and SNR, guaranteeing the precision needed for satellite photogrammetry.

  17. Measurement system for 3-D foot coordinates and parameters

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Yunhui; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-12-01

    A 3-D foot-shape measurement system based on the laser-line-scanning principle is presented, together with its measurement model. Errors caused by CCD camera nonlinearity and by installation can be eliminated with a global calibration method for the CCD cameras, based on a nonlinear coordinate mapping function and numerical optimization. A local foot coordinate system is defined using the Pternion and the Acropodion extracted from the boundaries of the foot projections; the characteristic points can then be located and foot parameters extracted automatically from this coordinate system and the related cross-sections. Foot measurements were conducted for about 200 participants, and results for male and female participants are presented. Measuring 3-D foot coordinates and parameters makes custom shoe-making possible and shows great promise for shoe design, orthopaedic foot treatment, shoe size standardization, and the establishment of a consumer foot database.
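    A minimal Python sketch of the global-calibration idea: a single nonlinear (here quadratic, an assumption) mapping from image coordinates to laser-plane world coordinates is fitted to control points, absorbing lens nonlinearity and mounting errors in one step. The synthetic ground-truth mapping is purely illustrative.

      import numpy as np

      def design(uv):                       # quadratic polynomial basis in (u, v)
          u, v = uv[:, 0], uv[:, 1]
          return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

      rng = np.random.default_rng(0)
      uv = rng.random((50, 2)) * [640, 480]                # control-point pixels
      xz = np.column_stack([0.5 * uv[:, 0] + 1e-4 * uv[:, 0]**2,          # known
                            0.8 * uv[:, 1] - 2e-4 * uv[:, 0] * uv[:, 1]]) # world (mm)

      coef, *_ = np.linalg.lstsq(design(uv), xz, rcond=None)  # 6x2 coefficients
      print(np.abs(design(uv) @ coef - xz).max())             # ~0: mapping recovered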

  18. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some of the key features of different camera formats. It also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although many types of cameras are available for microscopy, the most reliable is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of signal-to-noise ratio and spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, slow-scan CCD cameras need not be evaluated: a very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels.

  19. Electricity Breakdown Management for Sarawak Energy: Use of Condition-Based Equipment for Detection of Defective Insulator

    NASA Astrophysics Data System (ADS)

    Tan, J. K.; Abas, N.

    2017-07-01

    Managing electricity breakdowns is vital, since an outage causes economic losses for customers and for the utility companies. However, breakdowns are unavoidable due to internal or external factors beyond our control. Breakdowns on overhead lines tend to occur more frequently because the lines are prone to external disturbances such as animals, overgrown vegetation and defective pole-top accessories. In Sarawak Energy Berhad (SEB), the majority of the network is composed of overhead lines and is hence more prone to failure. Conventional methods of equipment inspection and fault finding are not effective for quickly identifying the root cause of failure. SEB has adopted the corona discharge camera as condition-based monitoring equipment to carry out condition-based inspection of the lines and diagnose their condition prior to failure. Experimental testing has been carried out to determine the correlation between the corona discharge count and the level of defect on line insulators. The results are tabulated and will be used as a reference for future scanning and diagnosis of defects on the lines.

  20. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    NASA Astrophysics Data System (ADS)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object in digital form on a computer. 3D scanning technology is still being developed, especially in developed countries, and current devices are advanced but very expensive. This study presents a simple 3D scanner prototype with a very low investment cost. The prototype consists of a webcam, a rotating table driven by a stepper motor under Arduino UNO control, and a line laser. The study is limited to objects of revolution, i.e. objects whose radius is constant about the rotation axis (the object pivot). Scanning is performed by imaging the object profile illuminated by the line laser; each image is captured by the camera and processed on a computer with Octave software. After each acquisition the turntable rotates by a fixed angle, so that one full turn yields a set of images covering all sides of the object. The profile is extracted from each image to obtain the object's digital dimensions, which are calibrated against a length standard called a gage block. The calibrated profiles are then digitally reconstructed into a three-dimensional object. Reconstruction accuracy is validated against the dimensions of the original object and expressed as a percentage error: the horizontal dimension error is about 5% to 23%, and the vertical dimension error is about ±3%.
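    The reconstruction step can be sketched as follows in Python: the laser line's column offset from the rotation axis gives a radius, and the turntable angle places each profile in cylindrical coordinates. The mm-per-pixel factor stands in for the gage-block calibration; all values are illustrative.

      import numpy as np

      def profile_to_points(laser_cols, angle_deg, mm_per_px=0.2, axis_col=320.0):
          """laser_cols[r]: detected laser column in image row r (NaN if missing)."""
          rows = np.arange(laser_cols.size, dtype=float)
          radius = (laser_cols - axis_col) * mm_per_px      # column offset -> radius
          a = np.radians(angle_deg)
          pts = np.column_stack([radius * np.cos(a),        # x
                                 radius * np.sin(a),        # y
                                 rows * mm_per_px])         # z (height)
          return pts[~np.isnan(laser_cols)]

      # One full turn in 10-degree steps merged into a single point cloud
      profile = np.full(480, 420.0)                         # fake 20 mm radius profile
      cloud = np.vstack([profile_to_points(profile, a) for a in range(0, 360, 10)])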

  1. Photometric Repeatability of Scanned Imagery: UVIS

    NASA Astrophysics Data System (ADS)

    Shanahan, Clare E.; McCullough, Peter; Baggett, Sylvia

    2017-08-01

    We provide the preliminary results of a study on the photometric repeatability of spatial scans of bright, isolated white dwarf stars with the UVIS channel of the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST). We analyze straight-line scans from the first pair of identical orbits of HST program 14878 to assess whether sub-0.1% repeatability can be attained with WFC3/UVIS. This study is motivated by the desire to achieve better signal-to-noise in the UVIS contamination and stability monitor, in which observations of standard stars in staring mode have been taken from the installation of WFC3 in 2009 to the present to assess temporal photometric stability. Higher signal-to-noise in this program would greatly benefit the sensitivity to detect contamination and to better characterize the observed small throughput drifts over time. We find excellent repeatability between identical visits of program 14878, with sub-0.1% repeatability achieved in most filters. These results support the initiative to transition the UVIS contamination and photometric stability monitor from staring-mode images to spatial scans.

  2. SU-F-J-140: Using Handheld Stereo Depth Cameras to Extend Medical Imaging for Radiation Therapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, C; Xing, L; Yu, S

    Purpose: A correct body contour is essential for the accuracy of dose calculation in radiation therapy. While modern medical imaging technologies provide highly accurate representations of body contours, there are times when a patient’s anatomy cannot be fully captured or there is a lack of easy access to CT/MRI scanning. Recently, handheld cameras have emerged that are capable of performing three dimensional (3D) scans of patient surface anatomy. By combining 3D camera and medical imaging data, the patient’s surface contour can be fully captured. Methods: A proof-of-concept system matches a patient surface model, created using a handheld stereo depth camera (DC), to the available areas of a body contour segmented from a CT scan. The matched surface contour is then converted to a DICOM structure and added to the CT dataset to provide additional contour information. In order to evaluate the system, a 3D model of a patient was created by segmenting the body contour with a treatment planning system (TPS) and fabricated with a 3D printer. A DC and associated software were used to create a 3D scan of the printed phantom. The surface created by the camera was then registered to a CT model that had been cropped to simulate missing scan data. The aligned surface was then imported into the TPS and compared with the originally segmented contour. Results: The RMS error for the alignment between the camera and cropped CT models was 2.26 mm. Mean distance between the aligned camera surface and ground truth model was −1.23 +/−2.47 mm. Maximum deviations were < 1 cm and occurred in areas of high concavity or where anatomy was close to the couch. Conclusion: The proof-of-concept study shows an accurate, easy and affordable method to extend medical imaging for radiation therapy planning using 3D cameras without additional radiation. Intel provided the camera hardware used in this study.
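    The registration step can be approximated with a few dozen lines of point-to-point ICP. The Python sketch below (assuming numpy and scipy are available) uses the standard SVD-based rigid alignment per iteration and omits the outlier handling a clinical system would need.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(src, dst, iters=30):
          """Align src (Nx3) to dst (Mx3); returns transformed src, R, t."""
          cur = src.copy()
          tree = cKDTree(dst)
          R_tot, t_tot = np.eye(3), np.zeros(3)
          for _ in range(iters):
              _, idx = tree.query(cur)          # nearest CT point per camera point
              match = dst[idx]
              mu_s, mu_d = cur.mean(0), match.mean(0)
              H = (cur - mu_s).T @ (match - mu_d)
              U, _, Vt = np.linalg.svd(H)       # Kabsch rigid alignment
              D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
              R = Vt.T @ D @ U.T
              t = mu_d - R @ mu_s
              cur = cur @ R.T + t
              R_tot, t_tot = R @ R_tot, R @ t_tot + t
          return cur, R_tot, t_tot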

  3. Comparison of central corneal thickness measurements by rotating Scheimpflug camera, ultrasonic pachymetry, and scanning-slit corneal topography.

    PubMed

    Amano, Shiro; Honda, Norihiko; Amano, Yuki; Yamagami, Satoru; Miyai, Takashi; Samejima, Tomokazu; Ogata, Miyuki; Miyata, Kazunori

    2006-06-01

    To compare central corneal thickness measurements and their reproducibility when taken by a rotating Scheimpflug camera, ultrasonic pachymetry, and scanning-slit corneal topography/pachymetry. Experimental study. Seventy-four eyes of 64 subjects without ocular abnormalities other than cataract. Corneal thickness measurements were compared among the 3 methods in 54 eyes of 54 subjects. Two sets of measurements were repeated by a single examiner for each pachymetry in another 10 eyes of 5 subjects, and the intraexaminer repeatability was assessed as the absolute difference of the first and second measurements. Two experienced examiners took one measurement for each pachymetry in another 10 eyes of 5 subjects, and the interexaminer reproducibility was assessed as the absolute difference of the 2 measurements of the first and second examiners. Central corneal thickness measurements by the 3 methods, absolute difference of the first and second measurements by a single examiner, absolute difference of the 2 measurements by 2 examiners, and relative amount of variation. The average measurements of central corneal thickness by a rotating Scheimpflug camera, scanning-slit topography, and ultrasonic pachymetry were 538+/-31.3 microm, 541+/-40.7 microm, and 545+/-31.3 microm, respectively. There were no statistically significant differences in the measurement results among the 3 methods (P = 0.569, repeated-measures analysis of variance). There was a significant linear correlation between the rotating Scheimpflug camera and ultrasonic pachymetry (r = 0.908, P<0.0001), rotating Scheimpflug camera and scanning-slit topography (r = 0.930, P<0.0001), and ultrasonic pachymetry and scanning-slit topography (r = 0.887, P<0.0001). Ultrasonic pachymetry had the smallest intraexaminer variability, and scanning-slit topography had the largest intraexaminer variability among the 3 methods. There were similar variations in interexaminer reproducibility among the 3 methods. Mean corneal thicknesses were comparable among rotating Scheimpflug camera, ultrasonic pachymetry, and scanning-slit topography with the acoustic equivalent correction factor. The measurements of the 3 instruments had significant linear correlations with one another, and all methods had highly satisfactory measurement repeatability.

  4. Assessing the Potential of Low-Cost 3D Cameras for the Rapid Measurement of Plant Woody Structure

    PubMed Central

    Nock, Charles A; Taugourdeau, Olivier; Delagrange, Sylvain; Messier, Christian

    2013-01-01

    Detailed 3D plant architectural data have numerous applications in plant science, but many existing approaches for 3D data collection are time-consuming and/or require costly equipment. Recently, there has been rapid growth in the availability of low-cost, 3D cameras and related open source software applications. 3D cameras may provide measurements of key components of plant architecture such as stem diameters and lengths, however, few tests of 3D cameras for the measurement of plant architecture have been conducted. Here, we measured Salix branch segments ranging from 2–13 mm in diameter with an Asus Xtion camera to quantify the limits and accuracy of branch diameter measurement with a 3D camera. By scanning at a variety of distances we also quantified the effect of scanning distance. In addition, we also test the sensitivity of the program KinFu for continuous 3D object scanning and modeling as well as other similar software to accurately record stem diameters and capture plant form (<3 m in height). Given its ability to accurately capture the diameter of branches >6 mm, Asus Xtion may provide a novel method for the collection of 3D data on the branching architecture of woody plants. Improvements in camera measurement accuracy and available software are likely to further improve the utility of 3D cameras for plant sciences in the future. PMID:24287538

  5. Underwater Inspection of Navigation Structures with an Acoustic Camera

    DTIC Science & Technology

    2013-08-01

    the camera with a slow angular speed while recording the images. 5. After the scanning has been performed, review recorded data to determine the... (Core x86) or newer; 2 GB RAM; 120 GB disc space. Operating system requirements: Windows XP, Vista, Windows 7, 32/64 bit. Java requirements: Sun Java JDK, Version 1.6, Update 16 or newer, for installation. Limitations and tips for proper scanning: best results are achieved when scanning in

  6. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.

  7. Design and fabrication of an angle-scanning based platform for the construction of surface plasmon resonance biosensor

    NASA Astrophysics Data System (ADS)

    Hu, Jiandong; Cao, Baiqiong; Wang, Shun; Li, Jianwei; Wei, Wensong; Zhao, Yuanyuan; Hu, Xinran; Zhu, Juanhua; Jiang, Min; Sun, Xiaohui; Chen, Ruipeng; Ma, Liuzheng

    2016-03-01

    A sensing system for an angle-scanning optical surface plasmon resonance (SPR) biosensor has been designed around a laser line generator with an embedded P-polarizer, which serves as the excitation source for producing the surface plasmon wave. In this system, the beam emitted by the laser line generator is swept in angle using a variable-speed direct current (DC) motor. The light reflected from a prism coated with a 50 nm Au film is captured by an area CCD array controlled by a personal computer (PC) over a universal serial bus (USB) interface; the photoelectric signals from this high-speed digital camera are digitized by a 16-bit A/D converter before being transferred to the PC. A key advantage of this SPR biosensing platform is that it performs label-free, real-time biomolecular analysis without moving the area CCD array to follow the laser line generator. It also provides a low-cost SPR platform with an improved detection range for the measurement of bioanalytes. The SPR curve displayed on the PC screen is formed from the image on the area CCD array, and the platform's sensing response to bulk refractive index was calibrated using ethanol solutions at volume fractions of 5%, 10%, 15%, 20%, and 25% to validate the performance of the angle-scanning SPR biosensing platform. The sensor detected changes in the refractive index of the ethanol solutions with high linearity (correlation coefficient 0.9842). The enhanced detection range results from the geometric relationship between the laser line generator and the right-angle prism, which allows direct quantification of samples over a wide range of concentrations.
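    Extracting the resonance angle from such an SPR curve is typically done by locating the reflectance minimum and refining it with a local parabola fit; below is a small Python sketch on a synthetic dip, where the Lorentzian shape, noise level and window width are all assumptions.

      import numpy as np

      angles = np.linspace(60, 75, 400)                        # incidence angle, deg
      refl = 1 - 0.85 / (1 + ((angles - 67.3) / 0.8) ** 2)     # synthetic SPR dip
      refl += 0.005 * np.random.default_rng(0).standard_normal(angles.size)

      i = refl.argmin()
      a, b, c = np.polyfit(angles[i-3:i+4], refl[i-3:i+4], 2)  # local parabola
      theta_spr = -b / (2 * a)                                 # vertex = resonance angle
      print(f"resonance angle ~ {theta_spr:.3f} deg")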

  8. Determining fast orientation changes of multi-spectral line cameras from the primary images

    NASA Astrophysics Data System (ADS)

    Wohlfeil, Jürgen

    2012-01-01

    Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It requires no additional camera hardware or onboard processing capabilities, only the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras, which carry a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help, a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of different conditions and achieved precise and reliable results.
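    The first stage, finding homologous points between spectral bands, reduces in one dimension to locating the lag of peak cross-correlation between corresponding lines. The toy Python sketch below assumes a pure shift; real imagery needs 2D matching and sub-pixel refinement.

      import numpy as np

      def line_shift(line_a, line_b):
          """Integer shift of line_a relative to line_b via cross-correlation."""
          a = line_a - line_a.mean()
          b = line_b - line_b.mean()
          corr = np.correlate(a, b, mode="full")
          return corr.argmax() - (b.size - 1)

      line = np.sin(np.linspace(0, 20, 512)) + 0.1 * np.random.default_rng(0).standard_normal(512)
      print(line_shift(np.roll(line, 7), line))   # -> 7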

  9. RESTORATION OF ATMOSPHERICALLY DEGRADED IMAGES. VOLUME 3.

    DTIC Science & Technology

    AERIAL CAMERAS, LASERS, ILLUMINATION, TRACKING CAMERAS, DIFFRACTION, PHOTOGRAPHIC GRAIN, DENSITY, DENSITOMETERS, MATHEMATICAL ANALYSIS, OPTICAL SCANNING, SYSTEMS ENGINEERING, TURBULENCE, OPTICAL PROPERTIES, SATELLITE TRACKING SYSTEMS.

  10. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in medical endoscopic contexts, such as endoscopic surgical robotics or micro invasive surgery.
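    The regulation principle can be sketched as a simple proportional control loop, shown below in Python: the slave's supply voltage is nudged until its measured line period matches the master's. The gain, voltage range and voltage-to-period slope are invented for illustration; the actual core runs in FPGA logic.

      import numpy as np

      K_P = 0.002                      # proportional gain, volts per tick of error
      V_MIN, V_MAX = 1.6, 2.0          # allowed sensor supply range, volts

      def line_period(v):              # assumed plant: period falls as voltage rises
          return 2000.0 - 400.0 * (v - 1.8) + 0.5 * np.random.default_rng().standard_normal()

      v, target = 1.75, 2000.0         # slave supply and master's line period (ticks)
      for frame in range(50):
          err = line_period(v) - target        # period too long -> raise voltage
          v = float(np.clip(v + K_P * err, V_MIN, V_MAX))
      print(f"settled supply: {v:.3f} V")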

  11. Image synchronization for 3D application using the NanEye sensor

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal form factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of smaller than 3 mm diameter 3D stereo vision equipment in medical endoscopic contexts, such as endoscopic surgical robotics or micro invasive surgery.

  12. Digital methods of recording color television images on film tape

    NASA Astrophysics Data System (ADS)

    Krivitskaya, R. Y.; Semenov, V. M.

    1985-04-01

    Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic-crystal face plates is still the most effective for high fidelity. This method was improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlace-to-linewise conversion and the mechanical equipment, and lengthens exposure time while shortening recording time. The latest, image-transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The nine-step procedure begins by preprocessing the total color television signal, reducing the noise level and time errors, followed by frame-frequency conversion and setting the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of the R, G, B signals and colorimetric matching of the TV camera and film tape, the simultaneous R, G, B signals are converted from interlaced scanning to sequential triads of color-separation frames with linewise scanning at triple frequency. The color-separation signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve signal quality and simplify the control of processes, not requiring stabilization of circuits, the image processing itself is still analog.

  13. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    IR cameras are currently used widely in optoelectronic tracking, optoelectronic measurement, fire control and optoelectronic countermeasures, but the output timing of most IR cameras applied in engineering projects is complex, and the timing documents supplied by the manufacturers are not sufficiently detailed. Because downstream continuous image transmission and image processing systems need the camera's detailed timing, a timing measurement system for IR cameras was designed, and a detailed measurement procedure was carried out for the cameras in use. The system combines FPGA programming with online observation using the SignalTap tool, yielding the precise timing of the IR camera's output signal and supplying detailed documentation to the image transmission and image processing systems. The measurement system consists of a CameraLink input interface, an LVDS input interface, an FPGA, and a CameraLink output interface, of which the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted; because image processing and image memory cards usually take CameraLink as their input, the system's output interface was designed as CameraLink. The system thus measures an IR camera's timing while also acting as an interface converter for some cameras. Within the FPGA, the timing measurement program, pixel clock modification, SignalTap file configuration and SignalTap online observation are integrated to realize the precise measurement. The measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines per frame and pixels per line, and computes the image's line offset and row offset. The system accurately measures the timing of the cameras applied in the project and reports the concrete parameters fval, lval, pixclk, line offset and row offset. Experiments show that the system obtains precise timing measurements and works stably, laying a foundation for the downstream systems.

  14. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping.

    PubMed

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-12-31

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable.
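    Under the ideal spherical model, projecting LiDAR geometry into the panoramic image takes only a change to angular coordinates. A minimal Python sketch follows; the image dimensions are assumptions, and the paper's rigorous model adds per-sensor interior orientation on top of this.

      import numpy as np

      def sphere_project(X, width=8192, height=4096):
          """X: (N,3) points in the panoramic camera frame -> (N,2) pixels."""
          x, y, z = X[:, 0], X[:, 1], X[:, 2]
          az = np.arctan2(y, x)                         # azimuth in [-pi, pi)
          el = np.arcsin(z / np.linalg.norm(X, axis=1)) # elevation
          col = (az + np.pi) / (2 * np.pi) * width
          row = (np.pi / 2 - el) / np.pi * height
          return np.column_stack([col, row])

      # A 3D line segment (e.g. from the LiDAR cloud) projects to a curve:
      seg = np.linspace([5.0, -2.0, 1.0], [5.0, 2.0, 1.0], 50)
      pix = sphere_project(seg)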

  15. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping

    PubMed Central

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-01-01

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable. PMID:28042855

  16. Infrared needle mapping to assist biopsy procedures and training.

    PubMed

    Shar, Bruce; Leis, John; Coucher, John

    2018-04-01

    A computed tomography (CT) biopsy is a radiological procedure which involves using a needle to withdraw tissue or a fluid specimen from a lesion of interest inside a patient's body. The needle is progressively advanced into the patient's body, guided by the most recent CT scan. CT guided biopsies invariably expose patients to high dosages of radiation, due to the number of scans required whilst the needle is advanced. This study details the design of a novel method to aid biopsy procedures using infrared cameras. Two cameras are used to image the biopsy needle area, from which the proposed algorithm computes an estimate of the needle endpoint, which is projected onto the CT image space. This estimated position may be used to guide the needle between scans, and results in a reduction in the number of CT scans that need to be performed during the biopsy procedure. The authors formulate a 2D augmentation system which compensates for camera pose, and show that multiple low-cost infrared imaging devices provide a promising approach.
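    The geometric core of a two-camera needle mapper is ray triangulation: the estimated tip is the midpoint of the closest approach between the rays back-projected from each camera. A minimal Python sketch, assuming non-parallel rays and that the camera centres and ray directions come out of the pose-compensation step:

      import numpy as np

      def ray_midpoint(c1, d1, c2, d2):
          """Closest point between rays c1+s*d1 and c2+t*d2 (non-parallel)."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          b = d1 @ d2
          r = c2 - c1
          s = (r @ d1 - b * (r @ d2)) / (1 - b * b)
          t = (b * (r @ d1) - r @ d2) / (1 - b * b)
          return (c1 + s * d1 + c2 + t * d2) / 2

      tip = ray_midpoint(np.array([0.0, 0, 0]), np.array([0.1, 0.05, 1]),
                         np.array([0.3, 0, 0]), np.array([-0.1, 0.05, 1]))
      print(tip)    # -> [0.15, 0.075, 1.5], where the two rays meet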

  17. Confocal non-line-of-sight imaging based on the light-cone transform.

    PubMed

    O'Toole, Matthew; Lindell, David B; Wetzstein, Gordon

    2018-03-15

    How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.

  18. TASS - The Amateur Sky Survey

    NASA Astrophysics Data System (ADS)

    Droege, T. F.; Albertson, C.; Gombert, G.; Gutzwiller, M.; Molhant, N. W.; Johnson, H.; Skvarc, J.; Wickersham, R. J.; Richmond, M. W.; Rybski, P.; Henden, A.; Beser, N.; Pittinger, M.; Kluga, B.

    1997-05-01

    As a non-astronomer watching Shoemaker/Levy 9 crash into Jupiter through postings on sci.astro, it occurred to me that it might be fun to build a comet finding machine. After wild speculations on how such a device might be built - I considered a 26" x 40" Fresnel lens and a string of PIN diodes -- postings to sci.astro brought me down to earth. I quickly made contact with both professionals and amateurs and found that there was interesting science to be done with an all sky survey. After several prototype drift scan cameras were built using various CCDs, I determined the real problem was software. How does one get the software written for an all sky survey? Willie Sutton could tell you, "Go where the programmers are." Our strategy has been to build a bunch of drift scan cameras and just give them away (without software) to programmers found on the Internet. This author reports more success by this technique than when he had a business and hired and paid programmers at a cost of a million or so a year. To date, 22 drift scan cameras have been constructed. Most of these are operated as triplets spaced 15 degrees apart in Right Ascension and with I, V, I filters. The cameras use 135 mm focal-length, f/2.8 camera lenses for a plate scale of 14 arc seconds per pixel and reach magnitude 13. With 512 pixels across the drift scan direction and running through the night, a triplet will collect 200 Mb of data on three overlapping areas of 3 x 120 degrees each. To date four of the triplets and one single have taken data. Production has started on 25 second-generation cameras using 2k x 2k devices and a barn door mount.
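    The quoted 14 arcsec/pixel plate scale fixes the drift-scan clocking: a star on the celestial equator drifts one pixel in about a second, slowing by cos(declination) away from the equator. A quick Python check (the sidereal rate constant is standard; the rest follows from the abstract):

      import numpy as np

      plate_scale = 14.0                    # arcsec per pixel
      sidereal = 15.041                     # arcsec of RA per second of time
      for dec in (0.0, 30.0, 60.0):
          rate = sidereal * np.cos(np.radians(dec)) / plate_scale
          print(f"dec {dec:4.0f} deg: {rate:.2f} line transfers/s")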

  19. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The algorithm images two windows and locates the line centroid in each; knowing that these points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
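    The image-to-ground relationship the algorithm exploits can be sketched as a ground-plane back-projection. In the Python fragment below, the focal length, principal point, camera height and tilt are invented placeholders for the robot's actual calibration.

      import numpy as np

      def pixel_to_ground(u, v, f=600.0, cx=320.0, cy=240.0, h=1.0, tilt_deg=30.0):
          """Pixel (u, v) -> ground point (x right, z forward), in metres.
          Camera axes: x right, y down, z forward; camera h metres above the
          ground, pitched down by tilt_deg; ground plane below the camera."""
          t = np.radians(tilt_deg)
          d = np.array([(u - cx) / f, (v - cy) / f, 1.0])  # viewing ray, camera frame
          wy = d[1] * np.cos(t) + d[2] * np.sin(t)         # downward component
          wz = -d[1] * np.sin(t) + d[2] * np.cos(t)        # forward component
          s = h / wy                                       # stretch ray to the ground
          return np.array([s * d[0], s * wz])

      # Two window centroids -> two ground points -> line heading for steering
      p1, p2 = pixel_to_ground(300, 400), pixel_to_ground(340, 260)
      angle = np.degrees(np.arctan2(p2[0] - p1[0], p2[1] - p1[1]))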

  20. ARGon³: "3D appearance robot-based gonioreflectometer" at PTB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoepe, A.; Atamas, T.; Huenerhoff, D.

    At the Physikalisch-Technische Bundesanstalt, the National Metrology Institute of Germany, a new facility for measuring visual appearance-related quantities has been built up. The acronym ARGon³ stands for "3D appearance robot-based gonioreflectometer". Compared to standard gonioreflectometers, there are two main new features within this setup. First, a photometric luminance camera with a spatial resolution of 28 μm on the device under test (DUT) enables spatially high-resolved measurements of luminance and color coordinates. Second, a line-scan CCD-camera mounted to a spectrometer provides measurements of the radiance factor, respectively the bidirectional reflectance distribution function, in the full V(λ) range (360 nm-830 nm) with arbitrary angles of irradiation and detection relative to the surface normal, on a time scale of about 2 min. First goniometric measurements of diffuse reflection within 3D-space above the DUT, with subsequent colorimetric representation of the obtained data, are presented for special effect pigments based on the interference effect.

  1. Comparison of three coding strategies for a low cost structure light scanner

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2014-12-01

    Coded structured light is widely used for 3D scanning, with different coding strategies adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low-cost structured light scanner for under €100. To reach this price, the projector and the video camera must be as cheap as possible, which leads to problems with light coding: a very cheap projector cannot generate complex intensity patterns, and even if it could, a very cheap camera could not capture them reliably. Based on Gray code, three different strategies were implemented and compared, called phase-shift, line-shift, and bit-shift, respectively. The bit-shift Gray code is the contribution of this paper: a simple, stable light pattern is used to generate dense (mean point spacing < 0.4 mm) and accurate (mean error < 0.1 mm) results. Full algorithm details and examples are presented in the paper.
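    All three strategies share the Gray-code backbone: each projector column is identified by the bit string it receives across the pattern series, and Gray coding guarantees that adjacent columns differ in a single bit, so one mis-thresholded bit displaces a pixel by at most one column. A minimal Python sketch, where the 10-bit depth is an illustrative choice:

      def gray_encode(n):                  # binary -> reflected Gray code
          return n ^ (n >> 1)

      def gray_decode(g):                  # Gray code -> binary
          n = 0
          while g:
              n ^= g
              g >>= 1
          return n

      bits = 10                            # 2**10 = 1024 addressable columns
      column = 417
      code = gray_encode(column)
      # The k-th projected pattern carries bit k of the code; a camera pixel
      # thresholds the pattern series to recover the same bit string.
      observed = [(code >> k) & 1 for k in range(bits - 1, -1, -1)]
      g = int("".join(map(str, observed)), 2)
      print(gray_decode(g))                # -> 417: the pixel's projector column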

  2. High temporal and spatial resolution studies of bone cells using real-time confocal reflection microscopy.

    PubMed

    Boyde, A; Vesely, P; Gray, C; Jones, S J

    1994-01-01

    Chick and rat bone-derived cells were mounted in sealed coverslip-covered chambers; individual osteoclasts (but also osteoblasts) were selected and studied at 37 degrees C using three different types of high-speed scanning confocal microscopes: (1) A Noran Tandem Scanning Microscope (TSM) was used with a low light level, cooled CCD camera for image transfer to a Noran TN8502 frame store-based image analysing computer to make time lapse movie sequences using 0.1 s exposure periods, thus losing some of the advantage of the high frame rate of the TSM. Rapid focus adjustment using computer controlled piezo drivers permitted two or more focus planes to be imaged sequentially: thus (with additional light-source shuttering) the reflection confocal image could be alternated with the phase contrast image at a different focus. Individual cells were followed for up to 5 days, suggesting no significant irradiation problem. (2) Exceptional temporal and spatial resolution is available in video rate laser confocal scanning microscopes (VRCSLMs). We used the Noran Odyssey unitary beam VRCSLM with an argon ion laser at 488 nm and acousto-optic deflection (AOD) on the line axis: this instrument is truly and adjustably confocal in the reflection mode. (3) We also used the Lasertec 1LM11 line scan instrument, with an He-Ne laser at 633 nm, and AOD for the frame scan. We discuss the technical problems and merits of the different approaches. The VRCSLMs documented rapid, real-time oscillatory motion: all the methods used show rapid net movement of organelles within bone cells. The interference reflection mode gives particularly strong contrasts in confocal instruments. Phase contrast and other interference methods used in the microscopy of living cells can be used simultaneously in the TSM.

  3. Out of lab calibration of a rotating 2D scanner for 3D mapping

    NASA Astrophysics Data System (ADS)

    Koch, Rainer; Böttcher, Lena; Jahrsdörfer, Maximilian; Maier, Johannes; Trommer, Malte; May, Stefan; Nüchter, Andreas

    2017-06-01

    Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Stereo cameras and time-of-flight (TOF) cameras are one way to meet this requirement, but they suffer from drawbacks that make proper mapping difficult, so costly 3D laser scanners are often applied instead. An inexpensive alternative for building a 3D representation is to use a 2D laser scanner and rotate its scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans, so the scanner pose of each line scan must be determined, along with parameters obtained from a calibration, to generate the 3D point cloud. Using external sensor systems to determine these calibration parameters is a common method, but it is costly and difficult when the robot needs to be calibrated outside the lab. This work therefore presents a calibration method for a rotating 2D laser scanner that uses a dedicated hardware setup to identify the required parameters. The setup is light, small, and easy to transport, so an out-of-lab calibration is possible. Additionally, a theoretical model was created to test the algorithm and analyse the impact of the scanner accuracy. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo motor, and a control unit. The calibration rig consists of a hemisphere with a circular plate mounted in its interior. The algorithm must be provided with a dataset from a single rotation of the laser scanner, and to achieve a proper calibration result the scanner needs to be located at the centre of the hemisphere. By means of geometric formulas, the algorithm determines the individual deviations of the mounted laser scanner, solving the formulas in an iterative process to minimize errors. The calibration algorithm was first tested against an ideal hemisphere model created in Matlab. Then the laser scanner was mounted in different ways, with modified scanner positions and rotation axes, and every deviation was compared with the algorithm's results. Several measurement settings were tested repeatedly with the 3D scanner system and the calibration rig. The results show that the range accuracy of the laser scanner is the most critical factor: it drives the required size of the hemisphere and the achievable calibration accuracy.
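    The geometric heart of such a hemisphere-based calibration can be illustrated with a linear least-squares sphere fit: scan points on the hemisphere should lie on a sphere of known radius, so the fitted centre exposes the scanner's offset. The Python sketch below is a simplification of the paper's iterative per-parameter solve, and the synthetic data are illustrative.

      import numpy as np

      def fit_sphere(P):
          """Best-fit sphere to points P (N,3): |p|^2 = 2 p.c + (r^2 - |c|^2)."""
          A = np.column_stack([2.0 * P, np.ones(len(P))])
          b = (P ** 2).sum(axis=1)
          sol, *_ = np.linalg.lstsq(A, b, rcond=None)
          c = sol[:3]
          return c, np.sqrt(sol[3] + c @ c)

      rng = np.random.default_rng(0)
      dirs = rng.normal(size=(2000, 3))
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
      offset = np.array([0.02, 0.00, 0.01])                 # scanner mis-placement
      pts = offset + 0.5 * dirs + 0.001 * rng.normal(size=dirs.shape)
      c, r = fit_sphere(pts)
      print(c, r)   # centre ~ offset, radius ~ 0.5 m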

  4. Infrared spectro-polarimeter on the Solar Flare Telescope at NAOJ/Mitaka

    NASA Astrophysics Data System (ADS)

    Sakurai, Takashi; Hanaoka, Yoichiro; Arai, Takehiko; Hagino, Masaoki; Kawate, Tomoko; Kitagawa, Naomasa; Kobiki, Toshihiko; Miyashita, Masakuni; Morita, Satoshi; Otsuji, Ken'ichi; Shinoda, Kazuya; Suzuki, Isao; Yaji, Kentaro; Yamasaki, Takayuki; Fukuda, Takeo; Noguchi, Motokazu; Takeyama, Norihide; Kanai, Yoshikazu; Yamamuro, Tomoyasu

    2018-05-01

    An infrared spectro-polarimeter installed on the Solar Flare Telescope at the Mitaka headquarters of the National Astronomical Observatory of Japan is described. The new spectro-polarimeter observes the full Sun via slit scans performed at two wavelength bands, one near 1565 nm for a Zeeman-sensitive spectral line of Fe I and the other near 1083 nm for He I and Si I lines. The full Stokes profiles are recorded; the Fe I and Si I lines give information on photospheric vector magnetic fields, and the helium line is suitable for deriving chromospheric magnetic fields. The infrared detector we are using is an InGaAs camera with 640 × 512 pixels and a read-out speed of 90 frames per second. The solar disk is covered by two swaths (the northern and southern hemispheres) of 640 pixels each. The final magnetic maps are made of 1200 × 1200 pixels with a pixel size of 1.8 arcsec. We have been carrying out regular observations since 2010 April, and have provided full-disk, full-Stokes maps, at the rate of a few maps per day, on the internet.

  5. High-speed, random-access fluorescence microscopy: I. High-resolution optical recording with voltage-sensitive dyes and ion indicators.

    PubMed

    Bullen, A; Patel, S S; Saggau, P

    1997-07-01

    The design and implementation of a high-speed, random-access, laser-scanning fluorescence microscope configured to record fast physiological signals from small neuronal structures with high spatiotemporal resolution is presented. The laser-scanning capability of this nonimaging microscope is provided by two orthogonal acousto-optic deflectors under computer control. Each scanning point can be randomly accessed and has a positioning time of 3-5 microseconds. Sampling time is also computer-controlled and can be varied to maximize the signal-to-noise ratio. Acquisition rates up to 200k samples/s at 16-bit digitizing resolution are possible. The spatial resolution of this instrument is determined by the minimal spot size at the level of the preparation (i.e., 2-7 microns). Scanning points are selected interactively from a reference image collected with differential interference contrast optics and a video camera. Frame rates up to 5 kHz are easily attainable. Intrinsic variations in laser light intensity and scanning spot brightness are overcome by an on-line signal-processing scheme. Representative records obtained with this instrument by using voltage-sensitive dyes and calcium indicators demonstrate the ability to make fast, high-fidelity measurements of membrane potential and intracellular calcium at high spatial resolution (2 microns) without any temporal averaging.
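
    The quoted numbers are mutually consistent, as a quick back-of-the-envelope check (a sketch, not from the paper) shows: with N recording sites visited per frame, the frame rate is 1/(N × (positioning time + dwell time)).

        # Effective frame rate of a random-access scanner: each of the
        # n_sites costs a positioning time plus a dwell (sampling) time.
        def frame_rate(n_sites, t_position_s, t_dwell_s):
            return 1.0 / (n_sites * (t_position_s + t_dwell_s))

        # e.g. 20 sites, 4 us positioning, 6 us dwell -> 5 kHz frame rate,
        # i.e. 100k samples/s, within the quoted 200k samples/s budget
        print(frame_rate(20, 4e-6, 6e-6))  # 5000.0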

  6. High-speed, random-access fluorescence microscopy: I. High-resolution optical recording with voltage-sensitive dyes and ion indicators.

    PubMed Central

    Bullen, A; Patel, S S; Saggau, P

    1997-01-01

    The design and implementation of a high-speed, random-access, laser-scanning fluorescence microscope configured to record fast physiological signals from small neuronal structures with high spatiotemporal resolution is presented. The laser-scanning capability of this nonimaging microscope is provided by two orthogonal acousto-optic deflectors under computer control. Each scanning point can be randomly accessed and has a positioning time of 3-5 microseconds. Sampling time is also computer-controlled and can be varied to maximize the signal-to-noise ratio. Acquisition rates up to 200k samples/s at 16-bit digitizing resolution are possible. The spatial resolution of this instrument is determined by the minimal spot size at the level of the preparation (i.e., 2-7 microns). Scanning points are selected interactively from a reference image collected with differential interference contrast optics and a video camera. Frame rates up to 5 kHz are easily attainable. Intrinsic variations in laser light intensity and scanning spot brightness are overcome by an on-line signal-processing scheme. Representative records obtained with this instrument by using voltage-sensitive dyes and calcium indicators demonstrate the ability to make fast, high-fidelity measurements of membrane potential and intracellular calcium at high spatial resolution (2 microns) without any temporal averaging. PMID:9199810

  7. Low-cost printing of computerised tomography (CT) images where there is no dedicated CT camera.

    PubMed

    Tabari, Abdulkadir M

    2007-01-01

    Many developing countries still rely on conventional hard copy images to transfer information among physicians. We have developed a low-cost alternative method of printing computerised tomography (CT) scan images where there is no dedicated camera. A digital camera is used to photograph images from the CT scan screen monitor. The images are then transferred to a PC via a USB port, before being printed on glossy paper using an inkjet printer. The method can be applied to other imaging modalities like ultrasound and MRI and appears worthy of emulation elsewhere in the developing world where resources and technical expertise are scarce.

  8. On-line, continuous monitoring in solar cell and fuel cell manufacturing using spectral reflectance imaging

    DOEpatents

    Sopori, Bhushan; Rupnowski, Przemyslaw; Ulsh, Michael

    2016-01-12

    A monitoring system 100 comprising a material transport system 104 providing for the transportation of a substantially planar material 102, 107 through the monitoring zone 103 of the monitoring system 100. The system 100 also includes a line camera 106 positioned to obtain multiple line images across a width of the material 102, 107 as it is transported through the monitoring zone 103. The system 100 further includes an illumination source 108 providing for the illumination of the material 102, 107 transported through the monitoring zone 103 such that light reflected in a direction normal to the substantially planar surface of the material 102, 107 is detected by the line camera 106. A data processing system 110 is also provided in digital communication with the line camera 106. The data processing system 110 is configured to receive data output from the line camera 106 and further configured to calculate and provide substantially contemporaneous information relating to a quality parameter of the material 102, 107. Also disclosed are methods of monitoring a quality parameter of a material.
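
    As an illustration of the acquisition principle (a hypothetical sketch, not the patented implementation), successive line images are stacked into a 2D reflectance map as the material is transported past the camera, from which a quality parameter can be computed contemporaneously:

        import numpy as np

        def acquire_reflectance_map(read_line, n_lines):
            """Stack successive line-camera readouts into a 2D map while
            the material moves through the monitoring zone.

            read_line -- callable returning one 1D reflectance profile
                         (hypothetical stand-in for the camera driver)
            """
            img = np.stack([read_line() for _ in range(n_lines)])
            # a simple per-line quality parameter: mean spectral reflectance
            return img, img.mean(axis=1)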

  9. One-click scanning of large-size documents using mobile phone camera

    NASA Astrophysics Data System (ADS)

    Liu, Sijiang; Jiang, Bo; Yang, Yuanjie

    2016-07-01

    Currently, mobile apps for document scanning do not provide convenient operations for tackling large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of the document, our approach automatically extracts several key frames by optical flow analysis. Then, based on the key frames, a mobile GPU-based image stitching method is adopted to generate a complete document image with high detail. No extra manual intervention is required in the process, and experimental results show that our app performs well, demonstrating convenience and practicability for daily life.
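
    A minimal sketch of the key-frame selection stage, assuming OpenCV (the motion threshold and the Farneback flow choice are illustrative assumptions; the paper's GPU stitching step is not shown):

        import cv2
        import numpy as np

        def extract_key_frames(video_path, motion_threshold=8.0):
            """Pick key frames wherever the accumulated optical-flow motion
            since the last key frame exceeds a threshold (in pixels)."""
            cap = cv2.VideoCapture(video_path)
            ok, frame = cap.read()
            prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            keys, travelled = [frame], 0.0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                travelled += np.linalg.norm(flow, axis=2).mean()
                if travelled > motion_threshold:
                    keys.append(frame)
                    travelled = 0.0
                prev = gray
            cap.release()
            return keys  # key frames to feed into the stitching stage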

  10. High-speed adaptive optics line scan confocal retinal imaging for human eye

    PubMed Central

    Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Purpose: Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. Methods: A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results: The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions: We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458

  11. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  12. Comparison of central corneal thickness measurement using ultrasonic pachymetry, rotating Scheimpflug camera, and scanning-slit topography.

    PubMed

    Sedaghat, Mohammad Reza; Daneshvar, Ramin; Kargozar, Abbas; Derakhshan, Akbar; Daraei, Mona

    2010-12-01

    To evaluate and compare central corneal thickness measurements using a rotating Scheimpflug camera, scanning-slit topography, and ultrasound pachymetry in virgin, healthy corneas. Prospective, observational, cross-sectional study. Central corneal thickness in 157 healthy eyes of 157 patients without ocular abnormalities other than refractive errors was measured, in sequential order, once with the rotating Scheimpflug camera and scanning-slit topography and 3 times with ultrasound pachymetry as the last part of the examination. All measurements were performed by a single experienced examiner. The results from scanning-slit topography are given with and without correction for the "acoustic correction factor" of 0.92. The average central corneal thickness measured by rotating Scheimpflug imaging, scanning-slit pachymetry, and ultrasound was 537.15 ± 32.98 μm, 542.06 ± 39.04 μm, and 544.07 ± 34.75 μm, respectively. The mean differences between modalities were 6.92 μm between rotating Scheimpflug and ultrasound (P < .0001), 2.01 μm between corrected scanning-slit and ultrasound (P = .204), and 4.91 μm between corrected scanning-slit and rotating Scheimpflug imaging (P = .001). According to Bland-Altman analysis, the highest agreement was between ultrasonic and rotating Scheimpflug pachymetry. In the assessment of normal corneas, rotating Scheimpflug topography measures central corneal thickness values in close agreement with ultrasound pachymetry.

  13. SU-F-BRB-05: Collision Avoidance Mapping Using Consumer 3D Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardan, R; Popple, R

    2015-06-15

    Purpose: To develop a fast and economical method of scanning a patient’s full body contour for use in collision avoidance mapping without the use of ionizing radiation. Methods: Two consumer level 3D cameras used in electronic gaming were placed in a CT simulator room to scan a phantom patient set up in a high collision probability position. A registration pattern and computer vision algorithms were used to transform the scan into the appropriate coordinate systems. The cameras were then used to scan the surface of a gantry in the treatment vault. Each scan was converted into a polygon mesh for collision testing in a general purpose polygon interference algorithm. All clinically relevant transforms were applied to the gantry and patient support to create a map of all possible collisions. The map was then tested for accuracy by physically testing the collisions with the phantom in the vault. Results: The scanning fidelity of both the gantry and patient was sufficient to produce a collision prediction accuracy of 97.1% with 64620 geometry states tested in 11.5 s. The total scanning time including computation, transformation, and generation was 22.3 seconds. Conclusion: Our results demonstrate an economical system to generate collision avoidance maps. Future work includes testing the speed of the framework in real-time collision avoidance scenarios. Research partially supported by a grant from Varian Medical Systems.
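
    A sketch of how such a collision map can be generated from the scanned meshes, using the open-source trimesh library with its python-fcl collision backend (the file names, the 10-degree step, and the rotation axis are hypothetical; the authors' framework is not described at code level):

        import numpy as np
        import trimesh  # collision checks require the optional python-fcl backend

        # hypothetical meshes standing in for the camera-scanned surfaces
        gantry = trimesh.load('gantry_scan.ply')
        patient = trimesh.load('patient_scan.ply')

        manager = trimesh.collision.CollisionManager()
        manager.add_object('patient', patient)

        collision_map = {}
        for gantry_deg in range(0, 360, 10):        # candidate gantry poses
            T = trimesh.transformations.rotation_matrix(
                np.radians(gantry_deg), [0, 1, 0])  # rotation about isocenter axis
            collision_map[gantry_deg] = manager.in_collision_single(
                gantry, transform=T)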

  14. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  15. A novel simultaneous streak and framing camera without principle errors

    NASA Astrophysics Data System (ADS)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access has been developed; complete information of this kind is important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10^6 fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136% to -0.277% for streak records. Test data have verified the performance of the camera quantitatively. The camera, which gains frames and streak simultaneously, parallax-free and on an identical time base, is characterized by a plane optical system at oblique incidence (as opposed to a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.

  16. Data annotation, recording and mapping system for the US open skies aircraft

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, B.W.; Goede, W.F.; Farmer, R.G.

    1996-11-01

    This paper discusses the system developed by Northrop Grumman for the Defense Nuclear Agency (DNA), US Air Force, and the On-Site Inspection Agency (OSIA) to comply with the data annotation and reporting provisions of the Open Skies Treaty. This system, called the Data Annotation, Recording and Mapping System (DARMS), has been installed on the US OC-135 and meets or exceeds all annotation requirements for the Open Skies Treaty. The Open Skies Treaty, which will enter into force in the near future, allows any of the 26 signatory countries to fly fixed wing aircraft with imaging sensors over any of the other treaty participants, upon very short notice, and with no restricted flight areas. Sensor types presently allowed by the treaty are: optical framing and panoramic film cameras; video cameras ranging from analog PAL color television cameras to the more sophisticated digital monochrome and color line scanning or framing cameras; infrared line scanners; and synthetic aperture radars. Each sensor type has specific performance parameters which are limited by the treaty, as well as specific annotation requirements which must be achieved upon full entry into force. DARMS supports U.S. compliance with the Opens Skies Treaty by means of three subsystems: the Data Annotation Subsytem (DAS), which annotates sensor media with data obtained from sensors and the aircraft's avionics system; the Data Recording System (DRS), which records all sensor and flight events on magnetic media for later use in generating Treaty mandated mission reports; and the Dynamic Sensor Mapping Subsystem (DSMS), which provides observers and sensor operators with a real-time moving map displays of the progress of the mission, complete with instantaneous and cumulative sensor coverages. This paper will describe DARMS and its subsystems in greater detail, along with the supporting avionics sub-systems. 7 figs.

  17. Application of gamma imaging techniques for the characterisation of position sensitive gamma detectors

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Didierjean, F.; Duchêne, G.; Filliger, M.; Gerl, J.; Kojouharov, I.; Li, G.; Pietralla, N.; Schaffner, H.; Sigward, M.-H.

    2017-11-01

    A device to characterize position-sensitive germanium detectors has been implemented at GSI. The main component of this so-called scanning table is a gamma camera capable of producing online 2D images of the scanned detector by means of a PET technique. To calibrate the gamma camera, Compton imaging is employed. The 2D data can be processed further offline to obtain depth information. Of main interest is the response of the scanned detector in terms of the digitized pulse shapes from the preamplifier; this is an important input for the pulse-shape analysis algorithms used by gamma-tracking arrays in gamma spectroscopy. To validate the scanning table, a comparison of its results with those of a second scanning table implemented at the IPHC Strasbourg is envisaged. For this purpose a pixelated germanium detector has been scanned.

  18. Parallelised photoacoustic signal acquisition using a Fabry-Perot sensor and a camera-based interrogation scheme

    NASA Astrophysics Data System (ADS)

    Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.

    2018-02-01

    Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can result in long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on a sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison is made between the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) a sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.

  19. Beam line shielding calculations for an Electron Accelerator Mo-99 production facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mocko, Michal

    2016-05-03

    The purpose of this study is to evaluate the photon and neutron fields in and around the latest beam line design for the Mo-99 production facility. The radiation doses to the beam line components (quadrupoles, dipoles, beam stops, and the linear accelerator) are calculated in the present report. The beam line design assumes placement of two cameras, infrared (IR) and optical transition radiation (OTR), for continuous monitoring of the beam spot on target during irradiation. The cameras will be placed off the beam axis, offset in the vertical direction. We explored typical shielding arrangements for the cameras and report the resulting neutron and photon dose fields.

  20. Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data

    NASA Astrophysics Data System (ADS)

    Hung, C. H.; Chang, W. C.; Chen, L. C.

    2016-06-01

    With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a low-cost and efficient way to update such databases. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of the cameras. However, precise orientation parameters are not always available for light amateur cameras, because precision GPS and IMU units are costly and heavy. To automate database updating, correspondences between object vector data and the image may be established to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part utilized line features in orientation modeling; the line-based orientation modeling was performed by integrating line parametric equations into the collinearity condition equations. The experiment data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1-pixel accuracy may be reached, which is equivalent to 0.12 m in the object space.
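
    The back-projection step can be illustrated with a bare pinhole/collinearity sketch in NumPy (sign and axis conventions are assumptions, and no lens distortion is modelled; the paper works with a full photogrammetric model):

        import numpy as np

        def back_project(points_world, R, C, f_mm, pixel_mm, pp_px):
            """Project 3D building-model vertices into the image via the
            collinearity equations (simple pinhole model).

            R  -- 3x3 rotation, world -> camera, from the GPS/IMU attitude
            C  -- camera position in world coordinates
            """
            p_cam = (points_world - C) @ R.T
            # collinearity: x = -f * Xc / Zc, y = -f * Yc / Zc (image plane, mm)
            xy_mm = -f_mm * p_cam[:, :2] / p_cam[:, 2:3]
            return xy_mm / pixel_mm + pp_px   # convert to pixel coordinates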

  1. 2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup

    NASA Astrophysics Data System (ADS)

    Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.

    2017-10-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10^20 m^-3 and temperatures up to 20 eV. Broadband spectral measurements show that the visible emission spectra are due solely to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a long-pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so that both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
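
    Since the abstract mentions Python scripts for the ratio maps, the final combination step might look like the following minimal sketch (alignment and intensity calibration are assumed already applied; the floor parameter is a hypothetical guard against division by zero, not from the paper):

        import numpy as np

        def balmer_ratios(d_alpha, d_beta, d_gamma, floor=1.0):
            """Combine aligned, intensity-calibrated channel images into
            Balmer line-ratio images (a sketch of the dual-camera pipeline,
            not the Proto-MPEX scripts themselves)."""
            d_beta = np.maximum(d_beta, floor)    # avoid divide-by-zero
            d_gamma = np.maximum(d_gamma, floor)
            return d_alpha / d_beta, d_beta / d_gamma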

  2. 3D Modeling of Interior Building Environments and Objects from Noisy Sensor Suites

    DTIC Science & Technology

    2015-05-14

    The interior environment of a building is scanned by a custom hardware system, which provides raw laser and camera sensor readings used to develop 3D models of interior building environments.

  3. Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects

    DOEpatents

    Lu, Shin-Yee

    1998-01-01

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.

  4. Image system for three dimensional, 360{degree}, time sequence surface mapping of moving objects

    DOEpatents

    Lu, S.Y.

    1998-12-22

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.
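
    The underlying triangulation reduces to intersecting a camera pixel's viewing ray with the known 3D plane of one projected light line. A minimal sketch, assuming the pre-calibrated geometry is expressed in one common frame (a generic structured-light construction, not the patent's specific solver):

        import numpy as np

        def triangulate_stripe_point(ray_dir, cam_origin, plane_point, plane_normal):
            """Intersect a camera viewing ray with the 3D plane of one
            projected light line to recover the surface point."""
            t = np.dot(plane_point - cam_origin, plane_normal) \
                / np.dot(ray_dir, plane_normal)
            return cam_origin + t * ray_dir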

  5. Use of digital micromirror devices as dynamic pinhole arrays for adaptive confocal fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Pozzi, Paolo; Wilding, Dean; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel

    2018-02-01

    In this work, we present a new confocal laser scanning microscope capable of performing sensorless wavefront optimization in real time. The device is a parallelized laser scanning microscope in which the excitation light is structured into a lattice of spots by a spatial light modulator, while a deformable mirror provides aberration correction and scanning. A binary DMD is positioned in an image plane of the detection optical path, acting as a dynamic array of reflective confocal pinholes imaged by a high-performance CMOS camera. A second camera detects images of the light rejected by the pinholes for sensorless aberration correction.

  6. Multivariate image analysis of laser-induced photothermal imaging used for detection of caries tooth

    NASA Astrophysics Data System (ADS)

    El-Sherif, Ashraf F.; Abdel Aziz, Wessam M.; El-Sharkawy, Yasser H.

    2010-08-01

    Time-resolved photothermal imaging has been investigated to characterize teeth for the purpose of discriminating between normal and carious areas of the hard tissue using a thermal camera. Ultrasonic thermoelastic waves were generated in the hard tissue by the absorption of fiber-coupled Q-switched Nd:YAG laser pulses operating at 1064 nm, in conjunction with a laser-induced photothermal technique used to detect the thermal radiation waves for diagnosis of the human tooth. The concepts behind the use of photothermal techniques for off-line detection of carious tooth features were presented by our group in earlier work. This paper illustrates the application of multivariate image analysis (MIA) techniques to detect the presence of caries. MIA is used to rapidly detect the presence and quantity of common carious tooth features as the teeth are scanned by high-resolution color (RGB) thermal cameras. Multivariate principal component analysis is used to decompose the acquired three-channel tooth images into a two-dimensional principal component (PC) space. Masking score-point clusters in the score space and highlighting the corresponding pixels in the image space of the two dominant PCs enables isolation of caries defect pixels based on contrast and color information. The technique provides a qualitative result that can be used for early-stage caries detection. The proposed technique can potentially be used on-line, in real time, to prescreen for caries through vision-based systems such as a real-time thermal camera. Experimental results on a large number of extracted teeth, as well as a thermal image panorama of a volunteer's teeth, are investigated and presented.
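
    The PC-decomposition step can be sketched in a few lines, assuming scikit-learn (a generic illustration of per-pixel PCA on a three-channel image, not the authors' pipeline):

        import numpy as np
        from sklearn.decomposition import PCA

        def pc_score_image(rgb_image, n_components=2):
            """Decompose an RGB (three-channel) image into principal-component
            score images; defect pixels can then be isolated by masking
            clusters in the 2D score space."""
            h, w, _ = rgb_image.shape
            pixels = rgb_image.reshape(-1, 3).astype(float)
            scores = PCA(n_components=n_components).fit_transform(pixels)
            return scores.reshape(h, w, n_components)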

  7. A novel near real-time laser scanning device for geometrical determination of pleural cavity surface.

    PubMed

    Kim, Michele M; Zhu, Timothy C

    2013-02-02

    During HPPH-mediated pleural photodynamic therapy (PDT), it is critical to determine the anatomic geometry of the pleural surface quickly, as there may be movement during treatment resulting in changes to the cavity. We have developed a laser scanning device for this purpose, which has the potential to obtain the surface geometry in real time. A red diode laser with a holographic template to create a pattern, and a camera with auto-focusing abilities, are used to scan the cavity. In conjunction with a calibration against a known surface, we can use triangulation to reconstruct the surface. Using a chest phantom, we are able to obtain a 360-degree scan of the interior in under 1 minute. The chest phantom scan was compared to an existing CT scan to determine its accuracy. The laser-camera separation can be determined through the calibration with 2 mm accuracy. The device is best suited to environments on the scale of a chest cavity (between 10 cm and 40 cm). This technique has the potential to produce the cavity geometry in real time during treatment, enabling PDT treatment dosage to be determined with greater accuracy. Work is ongoing to build a miniaturized device that moves the light source and camera via a fiber-optic bundle commonly used for endoscopy, with increased accuracy.

  8. Attitude identification for SCOLE using two infrared cameras

    NASA Technical Reports Server (NTRS)

    Shenhar, Joram

    1991-01-01

    An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.

  9. Infrared-enhanced TV for fire detection

    NASA Technical Reports Server (NTRS)

    Hall, J. R.

    1978-01-01

    Closed-circuit television is superior to conventional smoke or heat sensors for detecting fires in large open spaces. Single TV camera scans entire area, whereas many conventional sensors and maze of interconnecting wiring might be required to get same coverage. Camera is monitored by person who would trip alarm if fire were detected, or electronic circuitry could process camera signal for fully-automatic alarm system.

  10. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    We designed a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photos of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. The method adds a digital camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. Experiments prove that this method is reliable.
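
    The linear core of the DLT step can be sketched as follows in NumPy (the distortion model and the iterative refinement the paper adds are omitted; at least six non-coplanar 3D-2D correspondences are assumed):

        import numpy as np

        def solve_dlt(xyz, uv):
            """Estimate the 11 DLT parameters from known 3D target points
            and their measured image coordinates (linear step only)."""
            A = []
            for (X, Y, Z), (u, v) in zip(xyz, uv):
                A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
                A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
            # the null vector of A (smallest singular value) gives L1..L12
            _, _, vt = np.linalg.svd(np.asarray(A, float))
            L = vt[-1]
            return (L / L[-1])[:11]    # normalise so that L12 = 1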

  11. Gallbladder radionuclide scan (image)

    MedlinePlus

    ... gallbladder radionuclide scan is performed by injecting a tracer (radioactive chemical) into the bloodstream. A gamma camera ... detect the gamma rays being emitted from the tracer, and the image of where the tracer is ...

  12. Buried mine detection using fractal geometry analysis to the LWIR successive line scan data image

    NASA Astrophysics Data System (ADS)

    Araki, Kan

    2012-06-01

    We have engaged in research on buried mine/IED detection by remote sensing using a LWIR camera. An IR image of ground containing buried objects can be regarded as a superimposed pattern: thermal scattering that depends on the ground surface roughness, vegetation canopy, and sunlight, plus radiation due to the various heat interactions caused by differences in the specific heat, size, and burial depth of the objects and the local temperature of their surrounding environment. In this cumbersome environment, we introduce fractal geometry for analyzing an IR image. Clutter patterns due to these complex elements often have a low-order fractal (Hausdorff) dimension, whereas target patterns tend toward a higher-order fractal dimension in terms of the information dimension. The random-shuffle surrogate method or the Fourier-transform surrogate method is used to evaluate the fractal statistics, by shuffling the time-sequence data or the phase of the spectrum. Fractal interpolation of each line scan was also applied to improve the signal-processing performance, avoiding division by zero and enhancing the information content of the data. Some results of target extraction using the relationship between low- and high-order fractal dimensions are presented.
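
    For reference, a box-counting estimate of the fractal (Hausdorff) dimension of a binary detection mask can be sketched as follows (a standard construction, not the paper's exact estimator; the mask is assumed non-empty and at least 2×2):

        import numpy as np

        def box_counting_dimension(mask):
            """Estimate the box-counting dimension of a binary image by
            counting occupied boxes over a dyadic range of box sizes."""
            n = 2 ** int(np.log2(min(mask.shape)))
            mask = mask[:n, :n]
            sizes, counts = [], []
            size = n
            while size >= 2:
                boxes = mask.reshape(n // size, size, n // size, size).any(axis=(1, 3))
                sizes.append(size)
                counts.append(boxes.sum())
                size //= 2
            # dimension = slope of log(count) versus log(1 / box size)
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                                  np.log(np.array(counts)), 1)
            return slope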

  13. Homography-based multiple-camera person-tracking

    NASA Astrophysics Data System (ADS)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.

  14. Surface scanning through a cylindrical tank of coupling fluid for clinical microwave breast imaging exams

    PubMed Central

    Pallone, Matthew J.; Meaney, Paul M.; Paulsen, Keith D.

    2012-01-01

    Purpose: Microwave tomographic image quality can be improved significantly with prior knowledge of the breast surface geometry. The authors have developed a novel laser scanning system capable of accurately recovering surface renderings of breast-shaped phantoms immersed within a cylindrical tank of coupling fluid which resides completely external to the tank (and the aqueous environment) and overcomes the challenges associated with the optical distortions caused by refraction from the air, tank wall, and liquid bath interfaces. Methods: The scanner utilizes two laser line generators and a small CCD camera mounted concentrically on a rotating gantry about the microwave imaging tank. Various calibration methods were considered for optimizing the accuracy of the scanner in the presence of the optical distortions including traditional ray tracing and image registration approaches. In this paper, the authors describe the construction and operation of the laser scanner, compare the efficacy of several calibration methods—including analytical ray tracing and piecewise linear, polynomial, locally weighted mean, and thin-plate-spline (TPS) image registrations—and report outcomes from preliminary phantom experiments. Results: The results show that errors in calibrating camera angles and position prevented analytical ray tracing from achieving submillimeter accuracy in the surface renderings obtained from our scanner configuration. Conversely, calibration by image registration reliably attained mean surface errors of less than 0.5 mm depending on the geometric complexity of the object scanned. While each of the image registration approaches outperformed the ray tracing strategy, the authors found global polynomial methods produced the best compromise between average surface error and scanner robustness. Conclusions: The laser scanning system provides a fast and accurate method of three dimensional surface capture in the aqueous environment commonly found in microwave breast imaging. Optical distortions imposed by the imaging tank and coupling bath diminished the effectiveness of the ray tracing approach; however, calibration through image registration techniques reliably produced scans of submillimeter accuracy. Tests of the system with breast-shaped phantoms demonstrated the successful implementation of the scanner for the intended application. PMID:22755695

  15. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
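
    The pairwise orthogonal Procrustes step reduces to the classic SVD-based rigid alignment; a minimal NumPy sketch without the RANSAC loop (variable names are hypothetical):

        import numpy as np

        def pairwise_extrinsics(pts_a, pts_b):
            """Rigid transform (R, t) mapping 3D head/feet positions in
            camera A's frame onto the same points in camera B's frame,
            via the orthogonal Procrustes (Kabsch) solution."""
            ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
            H = (pts_a - ca).T @ (pts_b - cb)
            U, _, Vt = np.linalg.svd(H)
            # reflection guard keeps R a proper rotation
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, cb - R @ ca      # each point: b ≈ R @ a + t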

  16. Synchronous scan-projection lithography on overall circumference of fine pipes with a diameter of 2 mm

    NASA Astrophysics Data System (ADS)

    Horiuchi, Toshiyuki; Furuhata, Takahiro; Muro, Hideyuki

    2016-06-01

    The scan-projection exposure of small-diameter pipe surfaces was investigated using a newly developed prototype exposure system. It is necessary to secure a very large depth of focus for printing thick resist patterns on round pipe surfaces with a roughness larger than that of semiconductor wafers. For this reason, a camera lens with a low numerical aperture of 0.089 was used as a projection lens, and the momentary exposure area was limited by a narrow slit with a width of 800 µm. Thus, patterns on a flat reticle were replicated on a pipe surface by linearly moving the reticle and rotating the pipe synchronously. By using a reticle with inclined line-and-space patterns, helical patterns with a width of 30 µm were successfully replicated on stainless-steel pipes with an outer diameter of 2 mm and coated with a 10-µm-thick negative resist. The patterns replicated at the start and stop edges were smoothly stitched seamlessly.

  17. High-resolution imaging optomechatronics for precise liquid crystal display module bonding automated optical inspection

    NASA Astrophysics Data System (ADS)

    Ni, Guangming; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong

    2018-01-01

    With the development of the liquid crystal display (LCD) module industry, LCD modules are becoming more precise and larger, which places harsh imaging requirements on automated optical inspection (AOI). Here, we report high-resolution, clearly focused imaging optomechatronics for precise LCD module bonding AOI. The system achieves high-resolution imaging for LCD module bonding inspection using a line scan camera (LSC) triggered by a linear optical encoder, with self-adaptive focusing over the whole large imaging region using the LSC and a laser displacement sensor, which reduces the requirements on machining, assembly, and motion control of AOI devices. Results show that this system can directly achieve clearly focused imaging for AOI inspection of large LCD module bonding with 0.8 μm image resolution and a 2.65 mm scan imaging width, with no theoretical limit on the scanned extent. All of this is significant for AOI inspection in the LCD module industry and in other fields that require imaging large regions at high resolution.

  18. Geometric Calibration and Validation of Kompsat-3A AEISS-A Camera

    PubMed Central

    Seo, Doocheon; Oh, Jaehong; Lee, Changno; Lee, Donghan; Choi, Haejin

    2016-01-01

    Kompsat-3A, which was launched on 25 March 2015, is a sister spacecraft of the Kompsat-3 developed by the Korea Aerospace Research Institute (KARI). Kompsat-3A’s AEISS-A (Advanced Electronic Image Scanning System-A) camera is similar to Kompsat-3’s AEISS but it was designed to provide PAN (Panchromatic) resolution of 0.55 m, MS (multispectral) resolution of 2.20 m, and TIR (thermal infrared) at 5.5 m resolution. In this paper we present the geometric calibration and validation work of Kompsat-3A that was completed last year. A set of images over the test sites was taken for two months and was utilized for the work. The workflow includes the boresight calibration, CCDs (charge-coupled devices) alignment and focal length determination, the merge of two CCD lines, and the band-to-band registration. Then, the positional accuracies without any GCPs (ground control points) were validated for hundreds of test sites across the world using various image acquisition modes. In addition, we checked the planimetric accuracy by bundle adjustments with GCPs. PMID:27783054

  19. Agent-based station for on-line diagnostics by self-adaptive laser Doppler vibrometry

    NASA Astrophysics Data System (ADS)

    Serafini, S.; Paone, N.; Castellini, P.

    2013-12-01

    A self-adaptive diagnostic system based on laser vibrometry is proposed for quality control of mechanical defects by vibration testing; it is developed for appliances at the end of an assembly line, but its characteristics are generally suited to testing most types of electromechanical products. It consists of a laser Doppler vibrometer, equipped with scanning mirrors and a camera, which implements self-adaptive behaviour to optimize the measurement. The system is conceived as a Quality Control Agent (QCA) and is part of a Multi Agent System that supervises the whole production line. The QCA behaviour is defined so as to minimize measurement uncertainty during on-line tests and to compensate for target mis-positioning under the guidance of a vision system. The best measurement conditions are reached by maximizing the amplitude of the optical Doppler beat signal (signal quality), thereby minimizing uncertainty. In this paper, the optimization strategy for measurement enhancement, achieved with the downhill simplex (Nelder-Mead) algorithm, and its effect on signal quality are discussed. Tests on a washing machine under controlled operating conditions allow the efficacy of the method to be evaluated; a significant reduction of noise on the vibration velocity spectra is observed. Results from on-line tests are presented, which demonstrate the potential of the system for industrial quality control.
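
    The optimization step maps naturally onto an off-the-shelf simplex routine; a sketch assuming SciPy (signal_quality is a hypothetical callable wrapping the mirror steering and vibrometer read-out, not part of the paper):

        import numpy as np
        from scipy.optimize import minimize

        def optimise_spot(signal_quality, xy0):
            """Steer the scanning mirrors to the beam position that maximises
            the optical Doppler beat amplitude, using the Nelder-Mead
            (downhill simplex) algorithm."""
            res = minimize(lambda xy: -signal_quality(xy), np.asarray(xy0),
                           method='Nelder-Mead',
                           options={'xatol': 1e-3, 'fatol': 1e-3})
            return res.x   # mirror angles giving the best signal quality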

  20. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    The indoor Gothic apse provides a complex environment for virtualization using imaging techniques, due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time consumed in the field, for instance immersive virtual tours. In order to verify the usefulness of the method, we applied it to the apse, since it is considered one of the most complex elements of Gothic churches, and the approach could be extended to the whole building.

  1. Optimization and verification of image reconstruction for a Compton camera towards application as an on-line monitor for particle therapy

    NASA Astrophysics Data System (ADS)

    Taya, T.; Kataoka, J.; Kishimoto, A.; Tagawa, L.; Mochizuki, S.; Toshito, T.; Kimura, M.; Nagao, Y.; Kurita, K.; Yamaguchi, M.; Kawachi, N.

    2017-07-01

    Particle therapy is an advanced cancer therapy that uses a feature known as the Bragg peak, in which particle beams suddenly lose their energy near the end of their range. The Bragg peak enables particle beams to damage tumors effectively. To achieve precise therapy, the demand for accurate and quantitative imaging of the beam irradiation region or dosage during therapy has increased. The most common method of particle range verification is imaging of annihilation gamma rays by positron emission tomography. Not only 511-keV gamma rays but also prompt gamma rays are generated during therapy; therefore, the Compton camera is expected to be used as an on-line monitor for particle therapy, as it can image these gamma rays in real time. Proton therapy, one of the most common particle therapies, uses a proton beam of approximately 200 MeV, which has a range of ~25 cm in water. As gamma rays are emitted along the path of the proton beam, quantitative evaluation of the reconstructed images of diffuse sources becomes crucial, but it is far from being fully developed for Compton camera imaging at present. In this study, we first quantitatively evaluated reconstructed Compton camera images of uniformly distributed diffuse sources, and then confirmed that our Compton camera obtained 3% (1σ) and 5% (1σ) uniformity for line and plane sources, respectively. Based on this quantitative study, we demonstrated on-line gamma imaging during proton irradiation. Through these studies, we show that the Compton camera is suitable for future use as an on-line monitor for particle therapy.

  2. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development, and testing of a charge injection device (CID) camera using a 244 x 248 element array are described. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.

  3. Photogrammetry of Apollo 15 photography, part C

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.; Derick, J. L.

    1972-01-01

    In the Apollo 15 mission, a mapping camera system, a 61 cm optical-bar high-resolution panoramic camera, and a laser altimeter were used. The panoramic camera is described as having several distortion sources: the cylindrical shape of the negative film surface, the scanning action of the lens, the image motion compensator, and the spacecraft motion. Film products were processed on a specifically designed analytical plotter.

  4. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.

  5. Tomographic Small-Animal Imaging Using a High-Resolution Semiconductor Camera

    PubMed Central

    Kastis, GA; Wu, MC; Balzer, SJ; Wilson, DW; Furenlid, LR; Stevenson, G; Barber, HB; Barrett, HH; Woolfenden, JM; Kelly, P; Appleby, M

    2015-01-01

    We have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64 x 64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm x 5.3 cm x 3.7 cm tungsten-aluminum housing. The detector is a 2.5 cm x 2.5 cm x 0.15 cm slab of CdZnTe connected to a 64 x 64 multiplexer readout via indium-bump bonding. The collimator is 7 mm thick, with a 0.38 mm pitch that matches the detector pixel pitch. We obtained a series of projections by rotating the object in front of the camera. The axis of rotation was vertical and about 1.5 cm away from the collimator face. Mouse holders were made out of acrylic plastic tubing to facilitate rotation and the administration of gas anesthetic. Acquisition times were varied from 60 sec to 90 sec per image for a total of 60 projections at an equal spacing of 6 degrees between projections. We present tomographic images of a line phantom and mouse bone scan and assess the properties of the system. The reconstructed images demonstrate spatial resolution on the order of 1–2 mm. PMID:26568676

  6. Physical and engineering aspect of carbon beam therapy

    NASA Astrophysics Data System (ADS)

    Kanai, Tatsuaki; Kanematsu, Nobuyuki; Minohara, Shinichi; Yusa, Ken; Urakabe, Eriko; Mizuno, Hideyuki; Iseki, Yasushi; Kanazawa, Mitsutaka; Kitagawa, Atsushi; Tomitani, Takehiro

    2003-08-01

    The conformal irradiation system of HIMAC has been upgraded for a clinical trial using a layer-stacking technique, developed to localize the irradiation dose to the target volume more effectively than the present system does. With dynamic control of the beam-modifying devices (a pair of wobbler magnets, a multileaf collimator, and a range shifter) during irradiation, more conformal radiotherapy can be achieved. The system, which has to be adequately safe for patient irradiation, was constructed and tested from the viewpoints of safety and the quality of the dose localization realized. A secondary beam line has been constructed for the use of radioactive beams in heavy-ion radiotherapy. A spot-scanning method has been adopted for the beam delivery system of the radioactive beam. Dose distributions of the spot beam were measured and analyzed taking into account the aberration of the beam optics. Distributions of the stopped positron-emitter beam can be observed by PET. A pencil beam of the positron emitter, about 1 mm in size, can also be used to measure ranges of the test beam in patients using a positron camera. The positron camera, consisting of a pair of Anger-type scintillation detectors, has been developed for this verification before treatment. The wash-out effect of the positron emitter was examined using the installed positron camera. In this report, the present status of the HIMAC irradiation system is described in detail.

  7. Exploration of Mars by Mariner 9 - Television sensors and image processing.

    NASA Technical Reports Server (NTRS)

    Cutts, J. A.

    1973-01-01

    Two cameras equipped with selenium-sulfur slow-scan vidicons were used in the orbital reconnaissance of Mars by the U.S. spacecraft Mariner 9, and the performance characteristics of these devices are presented. Digital image-processing techniques have been widely applied in the analysis of images of Mars and its satellites. Photometric and geometric distortion corrections, image detail enhancement, and transformation to standard map projections have been routinely employed. More specialized applications included picture differencing, limb profiling, solar-lighting corrections, noise removal, line plots, and computer mosaics. Information on enhancements, as well as important picture-geometry information, was stored in a master library. Display of the library data in graphic or numerical form was accomplished by a data-management computer program.

  8. A Three-Line Stereo Camera Concept for Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Sandau, Rainer; Hilbert, Stefan; Venus, Holger; Walter, Ingo; Fang, Wai-Chi; Alkalai, Leon

    1997-01-01

    This paper presents a low-weight stereo camera concept for planetary exploration. The camera uses three CCD lines within the image plane of one single objective. The main features of the camera include: focal length 90 mm, FOV 18.5 deg, IFOV 78 µrad, convergence angles ±10 deg, radiometric dynamics 14 bit, weight 2 kg, and power consumption 12.5 W. From an orbit altitude of 250 km the ground pixel size is 20 m x 20 m and the swath width is 82 km. The CCD line data are buffered in the camera's internal mass memory of 1 Gbit. After radiometric correction and application-dependent preprocessing, the data are compressed and ready for downlink. Due to the aggressive application of advanced technologies in the areas of microelectronics and innovative optics, the low mass and power budgets of 2 kg and 12.5 W are achieved while still maintaining high performance. The design of the proposed light-weight camera is also general-purpose enough to be applicable to other planetary missions such as the exploration of Mars, Mercury, and the Moon. Moreover, it is an example of excellent international collaboration on advanced technology concepts developed at DLR, Germany, and NASA's Jet Propulsion Laboratory, USA.
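    The quoted ground pixel size and swath follow directly from the IFOV, FOV, and orbit altitude; a quick arithmetic check in Python (small-angle approximation for the pixel footprint):

        import math

        ifov_rad = 78e-6       # instantaneous field of view, 78 microradians
        altitude_m = 250e3     # 250 km orbit
        fov_deg = 18.5         # across-track field of view

        ground_pixel = ifov_rad * altitude_m                          # 19.5 m, i.e. ~20 m
        swath = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))  # ~81.4 km, i.e. ~82 km
        print(ground_pixel, swath)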

  9. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a priori knowledge about the most probable position. Image pixels are classified as court-line pixels if they pass several tests, including color and local-texture constraints. A Hough transform is applied to extract line elements, forming a set of court-line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes them using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal-area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, and shadows.
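    The line-candidate stage (court-line pixel classification followed by a Hough transform) can be sketched in Python with OpenCV; the thresholds and function name below are illustrative assumptions, not the authors' parameters:

        import cv2
        import numpy as np

        def court_line_candidates(frame_bgr, val_thresh=180, sat_thresh=60):
            """Classify bright, low-saturation pixels as court-line pixels,
            then extract line elements with a probabilistic Hough transform."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            hue, sat, val = cv2.split(hsv)
            mask = ((val > val_thresh) & (sat < sat_thresh)).astype(np.uint8) * 255
            lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180,
                                    threshold=80, minLineLength=50, maxLineGap=5)
            return lines  # candidates to match against the court model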

  10. Subsampling phase retrieval for rapid thermal measurements of heated microstructures.

    PubMed

    Taylor, Lucas N; Talghader, Joseph J

    2016-07-15

    A subsampling technique for real-time phase retrieval of high-speed thermal signals is demonstrated on heated metal lines such as those found in microelectronic interconnects. The thermal signals were produced by applying a current through aluminum resistors deposited on soda-lime-silica glass, and the resulting refractive-index changes were measured using a Mach-Zehnder interferometer with a microscope objective and a high-speed camera. The temperatures of the resistors were measured both by the phase-retrieval method and by monitoring the resistance of the aluminum lines. The phase-analysis method is at least 60× faster than the state of the art while maintaining a small spatial phase noise of 16 nm, comparable to the state of the art. For slowly varying signals, the system is able to perform absolute phase measurements over time, distinguishing temperature changes as small as 2 K. With angular-scanning or structured-illumination improvements, the system could also perform fast thermal tomography.

  11. Clever imaging with SmartScan

    NASA Astrophysics Data System (ADS)

    Tchernykh, Valerij; Dyblenko, Sergej; Janschek, Klaus; Seifart, Klaus; Harnisch, Bernd

    2005-08-01

    The cameras commonly used for Earth observation from satellites require high attitude stability during the image acquisition. For some types of cameras (high-resolution "pushbroom" scanners in particular), instantaneous attitude changes of even less than one arcsecond result in significant image distortion and blurring. Especially problematic are the effects of high-frequency attitude variations originating from micro-shocks and vibrations produced by the momentum and reaction wheels, mechanically activated coolers, and steering and deployment mechanisms on board. The resulting high attitude-stability requirements for Earth-observation satellites are one of the main reasons for their complexity and high cost. The novel SmartScan imaging concept, based on an opto-electronic system with no moving parts, offers the promise of high-quality imaging with only moderate satellite attitude stability. SmartScan uses real-time recording of the actual image motion in the focal plane of the camera during frame acquisition to correct the distortions in the image. Exceptional real-time performances with subpixel-accuracy image-motion measurement are provided by an innovative high-speed onboard opto-electronic correlation processor. SmartScan will therefore allow pushbroom scanners to be used for hyper-spectral imaging from satellites and other space platforms not primarily intended for imaging missions, such as micro- and nano-satellites with simplified attitude control, low-orbiting communications satellites, and manned space stations.

  12. Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-01-01

    The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 to characterize camera resolution for high-consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric of camera resolution, and proposes a quantitative, standards-based methodology: measuring the camera-system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
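    For reference, an MTF measurement reduces to the Fourier transform of the system's line spread function. A minimal Python sketch, assuming the LSF has already been extracted (e.g. from a slit image or a differentiated edge response):

        import numpy as np

        def mtf_from_lsf(lsf, pixel_pitch_mm):
            """Modulation transfer function from a measured line spread
            function (e.g. a slit image or a differentiated edge response)."""
            lsf = lsf / lsf.sum()                    # normalize to unit area
            mtf = np.abs(np.fft.rfft(lsf))           # magnitude of the OTF
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
            return freqs, mtf / mtf[0]               # MTF(0) = 1 by construction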

  13. Inspecting rapidly moving surfaces for small defects using CNN cameras

    NASA Astrophysics Data System (ADS)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire-drawing processes, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected in real time for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm. Up to now, complex eddy-current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz on line cameras, far beyond what is available. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image-processing systems by an order of magnitude.
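    The scale of the camera-to-computer bottleneck follows from the numbers quoted above; a back-of-the-envelope check in Python (the two samples per feature is an assumed Nyquist-style margin):

        surface_speed = 10.0       # m/s, the wire speed quoted above
        feature_size = 100e-6      # m, the smallest defect to resolve
        samples_per_feature = 2    # assumed Nyquist-style sampling margin

        required_line_rate = samples_per_feature * surface_speed / feature_size
        print(required_line_rate)  # 200,000 lines/s, i.e. 200 kHz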

  14. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques, which use a 2D camera for the manual selection of sample points, we use a 3D time-of-flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  15. Microwave Scanning System Correlations

    DTIC Science & Technology

    2010-08-11

    The following equipment is needed for each of the individual scanning systems: Handheld Scanner Equipment List: 1. Dell Netbook (with the proper software installed by Evisive); 2. Bluetooth USB port transmitter; 3. Handheld Probe; 4. USB to mini-USB Converter (links camera to netbook).

  16. Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan

    NASA Astrophysics Data System (ADS)

    Pichette, Julien; Charle, Wouter; Lambrechts, Andy

    2017-02-01

    Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient, and faster hyperspectral cameras. Line-scan cameras are typically used in remote sensing or for conveyor-belt applications, but translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a line-scan sensor to enable fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 over the spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a line-scan system with the convenience of a snapshot camera.

  17. Performance prediction of optical image stabilizer using SVM for shaker-free production line

    NASA Astrophysics Data System (ADS)

    Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo

    2016-04-01

    Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under hand-shake conditions. However, compared to a non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
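    A minimal sketch of such a pass/fail predictor with scikit-learn; the feature set, synthetic data, and hyperparameters below are illustrative assumptions rather than the authors' production setup:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import recall_score
        from sklearn.svm import SVC

        # X: per-module features (gyro noise spectral density, hall/actuator
        # linearity, cross-axis movement); y: shaker-table pass/fail labels.
        # Synthetic stand-in data; real features come from the module tests.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel='rbf', C=1.0).fit(X_tr, y_tr)
        print("recall:", recall_score(y_te, clf.predict(X_te)))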

  18. Camera artifacts in IUE spectra

    NASA Technical Reports Server (NTRS)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a median image of the sky background for each IUE camera.

  19. Applications and requirements for MEMS scanner mirrors

    NASA Astrophysics Data System (ADS)

    Wolter, Alexander; Hsu, Shu-Ting; Schenk, Harald; Lakner, Hubert K.

    2005-01-01

    Micro scanning mirrors are quite versatile MEMS devices for deflecting a laser beam or a shaped beam from another light source. The most exciting application is certainly in laser-scanned displays. Laser television, home cinema, and data projectors will display the most brilliant colors, exceeding even plasma, OLED, and CRT displays. Devices for front and rear projection will have advantages in size, weight, and price. These advantages will be even more important in near-eye virtual displays such as head-mounted displays or viewfinders in digital cameras, and potentially in UMTS handsets. Optical pattern generation by scanning a modulated beam over an area can also be used in a number of other applications: laser printers, direct writing of photoresist for printed circuit boards, laser marking, and, with higher laser power, laser ablation or material processing. Scanning a continuous laser beam over a printed pattern and analyzing the scattered reflection is the principle of barcode reading in 1D and 2D. This principle also works for the identification of signatures, coins, bank notes, vehicles, and other objects. With a focused white-light or RGB beam, even full-color imaging with high resolution is possible from an amazingly small device. The form factor is also very interesting for application in endoscopes. Further applications are light curtains for intrusion control and the generation of arbitrary line patterns for triangulation. Scanning a measurement beam extends point measurements to 1D or 2D scans; automotive LIDAR (laser radar) and scanning confocal microscopy are just two examples. Last but not least, there is the field of beam steering, e.g., for all-optical fiber switches or positioning of read/write heads in optical storage devices. The variety of possible applications also brings a variety of specifications. This publication discusses various applications and their requirements.

  20. Software Graphical User Interface For Analysis Of Images

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn

    1992-01-01

    CAMTOOL software provides a graphical interface between a Sun Microsystems workstation and the Eikonix Model 1412 digitizing camera system. The camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: workstation display screen, magnetic-tape drive, or hard disk. Written in C.

  1. Progress in passive submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1×2 m² and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden-threat detection and illustrate possible application scenarios.

  2. Dynamic light scattering microscopy

    NASA Astrophysics Data System (ADS)

    Dzakpasu, Rhonda

    An optical microscope technique, dynamic light scattering microscopy (DLSM), that images dynamically scattered light fluctuation decay rates is introduced. Using physical optics we show theoretically that, within the optical resolution of the microscope, relative motions between scattering centers are sufficient to produce significant phase variations, resulting in interference intensity fluctuations in the image plane. The time scale for these intensity fluctuations is predicted. The spatial coherence distance, defining the average distance between constructive and destructive interference in the image plane, is calculated and compared with the pixel size. We experimentally tested DLSM on polystyrene latex nanospheres and living macrophage cells. In order to record these rapid fluctuations on a slow progressive-scan CCD camera, we used a thin laser line of illumination on the sample such that only a single column of pixels in the CCD camera is illuminated. This allowed the rate of the column-by-column readout transfer process to serve as the acquisition rate of the camera, increasing the data acquisition rate by at least an order of magnitude compared with conventional CCD camera rates defined in frames/s. Analysis of the observed fluctuations provides information on the rates of motion of the scattering centers. These rates, acquired from each position on the sample, are used to create a spatial map of the fluctuation decay rates. Our experiments show that with this technique we are able to achieve a good signal-to-noise ratio and can monitor fast intensity fluctuations on the order of milliseconds. DLSM appears to provide dynamic information about fast motions within cells at a sub-optical-resolution scale and provides a new kind of spatial contrast.
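    A decay-rate map of this kind can be estimated position by position from the recorded column time series. A minimal Python sketch, assuming an exponential intensity autocorrelation (array names are illustrative):

        import numpy as np

        def fluctuation_decay_rates(stack, dt):
            """Per-position intensity-fluctuation decay rates.

            stack : (T, H) array of successive readouts of the single
                    illuminated pixel column (time x position)
            dt    : readout interval in seconds
            """
            x = stack - stack.mean(axis=0)
            # Lag-1 autocorrelation per position; for an exponential decay
            # exp(-G*t), rho(dt) = exp(-G*dt), so G = -ln(rho)/dt.
            rho = (x[1:] * x[:-1]).mean(axis=0) / x.var(axis=0)
            return -np.log(np.clip(rho, 1e-6, 1.0)) / dt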

  3. Continuous modulations of femtosecond laser-induced periodic surface structures and scanned line-widths on silicon by polarization changes.

    PubMed

    Han, Weina; Jiang, Lan; Li, Xiaowei; Liu, Pengjun; Xu, Le; Lu, YongFeng

    2013-07-01

    Large-area, uniform laser-induced periodic surface structures (LIPSS) have wide potential industrial applications. The continuity and processing precision of LIPSS are mainly determined by the scanning intervals of adjacent scanning lines. Therefore, continuous modulation of LIPSS and scanned line-widths within one laser scanning pass is of great significance. This study proposes that, by varying the laser (800 nm, 50 fs, 1 kHz) polarization direction, LIPSS and the scanned line-widths on a silicon (111) surface can be continuously modulated with high precision. It shows that the scanned line-width reaches its maximum when the polarization direction is perpendicular to the scanning direction. As an application example, the experiments show that large-area, uniform LIPSS can be fabricated by controlling the scanning intervals based on the one-pass scanned line-widths. The simulation shows that the initially formed LIPSS structures induce directional surface plasmon polariton (SPP) scattering along the laser polarization direction, which strengthens the subsequently anisotropic LIPSS fabrication. The simulation results are in good agreement with the experiments, and both support the conclusion that the LIPSS and scanned line-widths can be continuously modulated.

  4. High-speed potato grading and quality inspection based on a color vision system

    NASA Astrophysics Data System (ADS)

    Noordam, Jacco C.; Otten, Gerwoud W.; Timmermans, Toine J. M.; van Zwol, Bauke H.

    2000-03-01

    A high-speed machine vision system for the quality inspection and grading of potatoes has been developed. The vision system grades potatoes on size, shape and external defects such as greening, mechanical damages, rhizoctonia, silver scab, common scab, cracks and growth cracks. A 3-CCD line-scan camera inspects the potatoes in flight as they pass under the camera. The use of mirrors to obtain a 360-degree view of the potato and the lack of product holders guarantee a full view of the potato. To achieve the required capacity of 12 tons/hour, 11 SHARC Digital Signal Processors perform the image processing and classification tasks. The total capacity of the system is about 50 potatoes/sec. The color segmentation procedure uses Linear Discriminant Analysis (LDA) in combination with a Mahalanobis distance classifier to classify the pixels. The procedure for the detection of misshapen potatoes uses a Fourier based shape classification technique. Features such as area, eccentricity and central moments are used to discriminate between similar colored defects. Experiments with red and yellow skin-colored potatoes have shown that the system is robust and consistent in its classification.
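    The color-segmentation stage (LDA projection plus a Mahalanobis-distance classifier) might look like the following Python sketch; the function names and training data are illustrative assumptions:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def fit_pixel_classifier(rgb_samples, labels):
            """Project labeled RGB pixels with LDA, then store each class's
            centroid and inverse covariance in the discriminant space."""
            lda = LinearDiscriminantAnalysis().fit(rgb_samples, labels)
            z = lda.transform(rgb_samples)
            stats = {}
            for c in np.unique(labels):
                zc = z[labels == c]
                cov = np.atleast_2d(np.cov(zc, rowvar=False))
                stats[c] = (zc.mean(axis=0), np.linalg.inv(cov))
            return lda, stats

        def classify_pixels(lda, stats, rgb_pixels):
            """Assign each pixel to the class with the smallest Mahalanobis
            distance in the LDA space."""
            z = lda.transform(rgb_pixels)
            classes = sorted(stats)
            d = [np.einsum('ij,jk,ik->i', z - m, P, z - m)
                 for m, P in (stats[c] for c in classes)]
            return np.asarray(classes)[np.argmin(np.stack(d), axis=0)]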

  5. Assessment of Risk Reduction for Lymphedema Following Sentinel Lymph Noded Guided Surgery for Primary Breast Cancer

    DTIC Science & Technology

    2006-10-01

    Patients with breast cancer underwent scanning with a hybrid camera which combined a dual-head SPECT camera and a low-dose, single-slice CT scanner (Hawkeye®, GE Medical Systems). The study investigated this novel approach, which combines the output of the dual-head SPECT camera and the low-dose, single-slice CT scanner. The device is widely available in the cardiology community and has the potential to ...

  6. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    PubMed

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera from the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth-estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.

  7. Hardwood lumber scanning tests to determine NHLA lumber grades

    Treesearch

    Philip A. Araman; Ssang-Mook Lee; A. Lynn Abbott; Matthew F. Winn

    2011-01-01

    This paper concerns the scanning and grading of kiln-dried hardwood lumber. A prototype system is described that uses laser sources and a video camera to scan boards. The system automatically detects defects and wane, grades the boards, and then searches for higher-value boards within the original board. The goal is to derive maximum commercial value based on current...

  8. Re-scan confocal microscopy: scanning twice for better resolution.

    PubMed

    De Luca, Giulia M R; Breedijk, Ronald M P; Brandt, Rick A J; Zeelenberg, Christiaan H C; de Jong, Babette E; Timmermans, Wendy; Azar, Leila Nahidi; Hoebe, Ron A; Stallinga, Sjoerd; Manders, Erik M M

    2013-01-01

    We present a new super-resolution technique, Re-scan Confocal Microscopy (RCM), based on standard confocal microscopy extended with an optical (re-scanning) unit that projects the image directly on a CCD camera. This new microscope has improved lateral resolution and strongly improved sensitivity while maintaining the sectioning capability of a standard confocal microscope. This simple technology is typically useful for biological applications where the combination of high resolution and high sensitivity is required.

  9. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3 mm of distance error and 2.5 degrees of orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
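    The pose-recovery step can be illustrated with OpenCV, substituting a plain RANSAC PnP solve for the paper's constrained bundle adjustment, so treat this Python sketch as a simplified stand-in (names are illustrative):

        import cv2
        import numpy as np

        def camera_pose(model_points, image_points, K, dist_coeffs=None):
            """Recover camera pose from 2D-3D matches between a video frame
            and the 3D virtual model (RANSAC PnP as a simplified stand-in
            for a constrained bundle adjustment)."""
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                np.asarray(model_points, dtype=np.float32),
                np.asarray(image_points, dtype=np.float32),
                K, dist_coeffs)
            R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
            return R, tvec, inliers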

  10. Thermal Scanning of Dental Pulp Chamber by Thermocouple System and Infrared Camera during Photo Curing of Resin Composites.

    PubMed

    Hamze, Faeze; Ganjalikhan Nasab, Seyed Abdolreza; Eskandarizadeh, Ali; Shahravan, Arash; Akhavan Fard, Fatemeh; Sinaee, Neda

    2018-01-01

    Due to the thermal hazard during composite restorations, this study was designed to monitor pulp temperature with a thermocouple and an infrared camera while photo-polymerizing different composites. A mesio-occluso-distal (MOD) cavity was prepared in an extracted tooth and a K-type thermocouple was fixed in its pulp chamber. Subsequently, 1 mm increments of each composite were inserted (four composite types were incorporated) and photo-polymerized employing either LED or QTH systems for 60 sec while the temperature was recorded at 10 sec intervals. Ultimately, the same tooth was hemisected bucco-lingually and the amalgam was removed. The same composite curing procedure was repeated while the thermogram was recorded using an infrared camera. Thereafter, the data were analyzed by repeated-measures ANOVA followed by Tukey's HSD post hoc test for multiple comparisons (α=0.05). The pulp temperature increased significantly during photo-polymerization (P=0.000), while there was no significant difference between the results recorded by the thermocouple and the infrared camera (P>0.05). Moreover, different composite materials and LCUs led to similar outcomes (P>0.05). Although various composites have significantly different chemical compositions, they lead to similar pulpal thermal changes. Moreover, both the infrared camera and the thermocouple record comparable measurements of dental pulp temperature.

  11. Studies of coronal lines with electronic cameras during the eclipse of 7 march 1970.

    PubMed

    Fort, B

    1970-12-01

    The experimental design described here allows us to study, with 2-Å bandpass filters, the brightness distribution of the green coronal line, the two infrared lines of Fe XIII, and the neighboring coronal continuum. For the first time in an eclipse expedition, electrostatic cameras derived from the Lallemand type were used; full advantage was taken of their speed, especially in the near-infrared spectral range, and of their good photometric qualities. They permit the measurement of the intensity and polarization of the lines in the corona to a height of 1.25 solar radii above the limb of the sun, with a spatial resolution ≥ (10″)².

  12. Dynamics of the formation of an aureole in the bursting of soap films

    NASA Astrophysics Data System (ADS)

    Liang, N. Y.; Chan, C. K.; Choi, H. J.

    1996-10-01

    The thickness profiles of the aureole created in the bursting of vertical soap films are studied by a fast line-scan charge-coupled-device camera. Detailed dynamics of the aureole are reported. Phenomena of wavelike motions of the bursting rim and detachments of the aureole from the bursting film are also observed. We find that the stability of the aureole increases with surfactant concentration and is sensitive to the type of surfactant used. The concentration dependence suggests that the interaction of micelles might be important in the bursting process. Furthermore, the surfactant monolayer in the aureole is found to be highly compressed and behaves like a rigid film. Existing theories of aureole formation cannot account for all the observed phenomena.

  13. Saturated Imaging for Inspecting Transparent Aesthetic Defects in a Polymeric Polarizer with Black and White Stripes.

    PubMed

    Yu, Cilong; Chen, Peibing; Zhong, Xiaopin; Pan, Xizhou; Deng, Yuanlong

    2018-05-07

    Machine vision systems have been widely used in industrial production lines because of their automation and contactless inspection mode. In polymeric polarizers, extremely slight transparent aesthetic defects are difficult to detect and characterize through conventional illumination. To inspect such defects rapidly and accurately, a saturated imaging technique was proposed, which innovatively uses the characteristics of saturated light in imaging by adjusting the light intensity, exposure time, and camera gain. An optical model of defect was established to explain the theory by simulation. Based on the optimum experimental conditions, active two-step scanning was conducted to demonstrate the feasibility of this detection scheme, and the proposed method was found to be efficient for real-time and in situ inspection of defects in polymer films and products.

  14. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
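    The 25-candidate search described above amounts to testing all integer axial/lateral shifts within ±2 pixels. A CPU-side Python sketch of the same logic (the GPU version parallelizes this across pixels and shifts):

        import numpy as np

        def btm_corrected_angiogram(frame1, frame2, max_shift=2):
            """Squared-difference angiography with bulk-tissue-motion
            correction: test all integer axial/lateral shifts within
            +/-max_shift (25 candidates for max_shift=2) and keep the
            one minimizing the summed pixel values."""
            best, best_sum = None, np.inf
            for dz in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    shifted = np.roll(np.roll(frame2, dz, axis=0), dx, axis=1)
                    diff = (frame1 - shifted) ** 2
                    total = diff.sum()
                    if total < best_sum:
                        best, best_sum = diff, total
            return best  # BTM-corrected angiographic image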

  15. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

    The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear-array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a same-size 3D model. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments were performed to test the proposed method.

  16. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  17. Scalable wide-field optical coherence tomography-based angiography for in vivo imaging applications

    PubMed Central

    Xu, Jingjiang; Wei, Wei; Song, Shaozhen; Qi, Xiaoli; Wang, Ruikang K.

    2016-01-01

    Recent advances in optical coherence tomography (OCT)-based angiography have demonstrated a variety of biomedical applications in the diagnosis and therapeutic monitoring of diseases with vascular involvement. While promising, its imaging field of view (FOV) is still limited (typically less than 9 mm2), which slows down its clinical acceptance. In this paper, we report a high-speed spectral-domain OCT operating at 1310 nm that enables a wide FOV of up to 750 mm2. Using the optical microangiography (OMAG) algorithm, we are able to map vascular networks within living biological tissues. Thanks to a 2,048-pixel line-scan InGaAs camera operating at a 147 kHz scan rate, the system delivers a ranging depth of ~7.5 mm and provides wide-field OCT-based angiography in a single data acquisition. We implement two imaging modes (i.e., wide-field mode and high-resolution mode) in the OCT system, which gives a highly scalable FOV with flexible lateral resolution. We demonstrate scalable wide-field vascular imaging of multiple fingernail beds in humans and of the whole brain in mice with the skull left intact in a single 3D scan, promising new opportunities for wide-field OCT-based angiography in many clinical applications. PMID:27231630

  18. Graphic Arts: Book Two. Process Camera, Stripping, and Platemaking.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The second of a three-volume set of instructional materials for a course in graphic arts, this manual consists of 10 instructional units dealing with the process camera, stripping, and platemaking. Covered in the individual units are the process camera and darkroom photography, line photography, half-tone photography, other darkroom techniques,…

  19. Graphic Arts: Process Camera, Stripping, and Platemaking. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on camera work, stripping, and plate making in the printing process. Eight units of instruction cover the following topics: (1) the process camera and darkroom equipment; (2) line photography; (3) halftone photography; (4) other darkroom techniques; (5)…

  20. A scanning PIV method for fine-scale turbulence measurements

    NASA Astrophysics Data System (ADS)

    Lawson, John M.; Dawson, James R.

    2014-12-01

    A hybrid technique is presented that combines scanning PIV with tomographic reconstruction to make spatially and temporally resolved measurements of the fine-scale motions in turbulent flows. The technique uses one or two high-speed cameras to record particle images as a laser sheet is rapidly traversed across a measurement volume. This is combined with a fast method for tomographic reconstruction of the particle field for use in conjunction with PIV cross-correlation. The method was tested numerically using DNS data and with experiments in a large mixing tank that produces axisymmetric homogeneous turbulence. A parametric investigation identifies the important parameters of a scanning PIV set-up and provides guidance to the interested experimentalist in achieving the best accuracy. Optimal sheet spacings and thicknesses are reported, and it was found that accurate results could be obtained at quite low scanning speeds. The two-camera method is the most robust to noise, permitting accurate measurements of the velocity gradients and direct determination of the dissipation rate.

  1. Flexible mobile robot system for smart optical pipe inspection

    NASA Astrophysics Data System (ADS)

    Kampfer, Wolfram; Bartzke, Ralf; Ziehl, Wolfgang

    1998-03-01

    Damage to pipes can be inspected and graded using TV technology available on the market. Remotely controlled vehicles carry a TV camera through pipes; thus, depending on the experience and capability of the operator, diagnostic failures cannot be avoided. The classification of damage requires knowledge of the exact geometrical dimensions of the damage, such as the width and depth of cracks, fractures, and defective connections. Within the framework of a joint R&D project, a sensor-based pipe inspection system named RODIAS has been developed with two partners from industry and a research institute. It consists of a remotely controlled mobile robot which carries intelligent sensors for on-line sewerage inspection. The sensor suite is based on a 3D optical sensor and a laser distance sensor. The laser distance sensor is integrated into the optical system of the camera and can measure the distance between camera and object. The angle of view can be determined from the position of the pan-and-tilt unit. With coordinate transformations it is possible to calculate the spatial coordinates of every point in the video image, so the geometry of an object can be described exactly. The company Optimess has developed TriScan32, special software for pipe condition classification. The user can start complex measurements of profiles, pipe displacements, or crack widths simply by pressing a push-button. The measuring results are stored together with other data, such as verbal damage descriptions and digitized images, in a database.

  2. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix; this offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature-point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental-matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves the robustness and accuracy of the fundamental-matrix estimation. Finally, we perform an experiment computing the relationship of a pair of stereo cameras to demonstrate the accurate performance of the algorithm.
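    Steps (i)-(iii) map directly onto standard OpenCV calls; the Python sketch below is a plain version that omits the paper's regional weighted normalization (names are illustrative):

        import cv2
        import numpy as np

        def recalibrate_extrinsics(pts1, pts2, K1, K2):
            """Steps (i)-(iii) with plain OpenCV calls: F from matches,
            E from F and the fixed intrinsics, then R and t from E."""
            pts1 = np.asarray(pts1, dtype=np.float32)
            pts2 = np.asarray(pts2, dtype=np.float32)
            F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)   # (i)
            E = K2.T @ F @ K1                                             # (ii)
            R1, R2, t = cv2.decomposeEssentialMat(E)                      # (iii)
            return R1, R2, t  # four (R, +/-t) hypotheses; pick by cheirality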

  3. Field Demonstration of Electro-Scan Defect Location Technology for Condition Assessment of Wastewater Collection Systems

    EPA Science Inventory

    The purpose of the field demonstration program is to gather technically reliable cost and performance information on selected condition assessment technologies under defined field conditions. The selected technologies include zoom camera, electro-scan (FELL-41), and a multi-sens...

  4. Re-scan confocal microscopy: scanning twice for better resolution

    PubMed Central

    De Luca, Giulia M.R.; Breedijk, Ronald M.P.; Brandt, Rick A.J.; Zeelenberg, Christiaan H.C.; de Jong, Babette E.; Timmermans, Wendy; Azar, Leila Nahidi; Hoebe, Ron A.; Stallinga, Sjoerd; Manders, Erik M.M.

    2013-01-01

    We present a new super-resolution technique, Re-scan Confocal Microscopy (RCM), based on standard confocal microscopy extended with an optical (re-scanning) unit that projects the image directly on a CCD camera. This new microscope has improved lateral resolution and strongly improved sensitivity while maintaining the sectioning capability of a standard confocal microscope. This simple technology is typically useful for biological applications where the combination of high resolution and high sensitivity is required. PMID:24298422

  5. Mapping gray-scale image to 3D surface scanning data by ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jones, Peter R. M.

    1997-03-01

    The extraction and location of feature points from range imaging is an important but difficult task in machine-vision-based measurement systems. Some feature points cannot be detected from purely geometric characteristics, particularly in measurement tasks related to the human body. The Loughborough Anthropometric Shadow Scanner (LASS) is a whole-body surface scanner based on a structured-light technique. Certain applications of LASS require accurate location of anthropometric landmarks in the scanned data. This is sometimes impossible from the raw data alone because some landmarks do not appear in the scanned data, and their identification has to resort to the surface texture of the scanned object. Modifications to LASS were made to allow gray-scale images to be captured before or after the object is scanned. A two-dimensional gray-scale image must then be mapped onto the scanned data to acquire the 3D coordinates of a landmark. The method for mapping 2D images onto the scanned data is based on the collinearity conditions and a ray-tracing method. If the camera center and the image coordinates are known, the corresponding object point must lie on a ray starting from the camera center and passing through the image coordinate. By intersecting the ray with the scanned surface of the object, the 3D coordinates of a point can be solved. Experimentation has demonstrated the feasibility of the method.
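    In code, the mapping reduces to casting a ray from the camera center through the pixel and intersecting it with the scanned surface mesh. A minimal Python sketch using the Moller-Trumbore test against a single triangle (helper names are illustrative):

        import numpy as np

        def ray_through_pixel(camera_center, pixel_world):
            """Unit ray from the camera center through a pixel position
            expressed in world coordinates (the collinearity condition)."""
            d = pixel_world - camera_center
            return d / np.linalg.norm(d)

        def intersect_triangle(origin, direction, v0, v1, v2, eps=1e-9):
            """Moller-Trumbore ray/triangle intersection against one facet
            of the scanned surface; returns the 3D hit point or None."""
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = e1 @ p
            if abs(det) < eps:          # ray parallel to the triangle
                return None
            inv = 1.0 / det
            s = origin - v0
            u = (s @ p) * inv
            if not 0.0 <= u <= 1.0:
                return None
            q = np.cross(s, e1)
            v = (direction @ q) * inv
            if v < 0.0 or u + v > 1.0:
                return None
            t = (e2 @ q) * inv
            return origin + t * direction if t > eps else None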

  6. Nimbus Satellite Data Rescue Project for Sea Ice Extent: Data Processing

    NASA Astrophysics Data System (ADS)

    Campbell, G. G.; Sandler, M.; Moses, J. F.; Gallaher, D. W.

    2011-12-01

    Early Nimbus satellites collected both visible and infrared observations of the Earth at high resolution. Nimbus I operated in September 1964, Nimbus II from April to November 1966, and Nimbus III from May 1969 to November 1969. We will discuss our procedures for recovering this data into a modern digital archive useful for scientific analysis. The Advanced Vidicon Camera System (AVCS) data were transmitted as an analog signal proportional to the brightness detected by a video camera and archived on black-and-white film. At NSIDC we are scanning and digitizing the film images using equipment derived from the motion picture industry. The High Resolution Infrared Radiance (HRIR) data were originally recorded in 36-bit words on 7-track digital tapes. The HRIR data were recently recovered from the tapes, and TAP (a tape file format from 1966) files were placed in EOSDIS archives for online access. The most interesting parts of the recovery project were the additional processing required to rectify and navigate the raw digital files. One of the artifacts we needed to identify and remove was the fiducial marks representing latitude and longitude lines added to the film for users in the 1960s. The IR data recording inserted an artificial random jitter in the alignment of individual scan lines. We will describe our procedures to navigate, remap, detect noise, and remove artifacts in the data. Beyond cleaning up the HRIR swath data and the AVCS picture data, we are remapping the data into standard grids for comparisons in time. A first run of all the Nimbus 2 HRIR data into EASE grids in NetCDF format has been completed; this turned up interesting problems of overlaps and missing data. Some of these processes require extensive computer resources, and we have established methods for using the 'Elastic Compute Cloud' facility at Amazon.com to run the many processes in parallel. In addition, we have set up procedures at the University of Colorado to monitor the ongoing scanning and simple quality control of more than 200,000 pictures. Preliminary results from the September 1964, 1966, and 1969 data analyses will be discussed in this presentation. Our scientific use of the data will focus on recovering the sea-ice extent around the poles. We especially welcome new users interested in the meteorology from 50N to 50S in the 1960s. Lessons and examples from the scanning and quality-control procedures will be highlighted in the presentation. Illustrations will include mapped and reformatted data. When the project is finished, a public archive covering September 1964, April to November 1966, and May to December 1969 will be available for general use.

  7. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darne, C; Robertson, D; Alsanea, F

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately and in near real time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific-complementary-metal-oxide-semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Selection of fixed-focal-length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal-length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned-spot proton beams in three dimensions.

  8. The AR Sandbox: Augmented Reality in Geoscience Education

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.; Reed, S.; Hsi, S.; Yikilmaz, M. B.; Schladow, G.; Segale, H.; Chan, L.

    2016-12-01

    The AR Sandbox is a combination of a physical box full of sand, a 3D (depth) camera such as a Microsoft Kinect, a data projector, and a computer running open-source software, creating a responsive and interactive system to teach geoscience concepts in formal or informal contexts. As one or more users shape the sand surface to create planes, hills, or valleys, the 3D camera scans the surface in real-time, the software creates a dynamic topographic map including elevation color maps and contour lines, and the projector projects that map back onto the sand surface such that real and projected features match exactly. In addition, users can add virtual water to the sandbox, which realistically flows over the real surface driven by a real-time fluid flow simulation. The AR Sandbox can teach basic geographic and hydrologic skills and concepts such as reading topographic maps, interpreting contour lines, formation of watersheds, flooding, or surface wave propagation in a hands-on and explorative manner. AR Sandbox installations in more than 150 institutions have shown high audience engagement and long dwell times of often 20 minutes and more. In a more formal context, the AR Sandbox can be used in field trip preparation, and can teach advanced geoscience skills such as extrapolating 3D sub-surface shapes from surface expression, via advanced software features such as the ability to load digital models of real landscapes and guiding users towards recreating them in the sandbox. Blueprints, installation instructions, and the open-source AR Sandbox software package are available at http://arsandbox.org .

  9. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtin, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV, and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E allows a relatively high (30%) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with ≤10 e-/pixel/second dark current, ≤25 e- read noise, a gain of 2.0, and ≤0.1% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain, and residual non-linearity.

  10. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter

    NASA Technical Reports Server (NTRS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with 10 e-/pixel/second dark current, 25 e- read noise, a gain of 2.0 +/- 0.5 and 1.0 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera; dark current, read noise, camera gain and residual non-linearity.

  11. NAOMI instrument: a product line of compact and versatile cameras designed for HR and VHR missions in Earth observation

    NASA Astrophysics Data System (ADS)

    Luquet, Ph.; Brouard, L.; Chinal, E.

    2017-11-01

    Astrium has developed a product line of compact and versatile instruments for HR and VHR missions in Earth Observation. These cameras consist of a Silicon Carbide Korsch-type telescope, a focal plane with one or several retina modules - including five-line CCDs, optical filters and front-end electronics - and the instrument main electronics. Several versions have been developed with telescope pupil diameters from 200 mm up to 650 mm, covering a large range of GSD (from 2.5 m down to sub-metric) and swath (from 10 km up to 30 km) and compatible with different types of platform. Nine cameras have already been manufactured for five different programs: ALSAT2 (Algeria), SSOT (Chile), SPOT6 & SPOT7 (France), KRS (Kazakhstan) and VNREDSat (Vietnam). Two of them have already been launched and are delivering high quality images.

  12. Development and characterization of a monoclonal antibody to human embryonal carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khazaeli, M.B.; Beierwaltes, W.H.; Pitt, G.S.

    1987-06-01

    A monoclonal anti-testicular carcinoma antibody was obtained via the somatic cell fusion technique by immunization of BALB/c mice with a freshly prepared single cell suspension from a patient with testicular embryonal carcinoma with choriocarcinoma components. The hybridoma supernates were screened against the testicular carcinoma cells used in the immunization as well as normal mononuclear white blood cells isolated from the same patient. An antibody (5F9) was selected which bound to fresh tumor cells from two patients with embryonal testicular carcinoma and failed to bind to fresh tumor cells from 24 patients (2 seminoma, 2 melanoma, 3 neck, 2 esophageal, 1 ovarian, 3 colon, 1 prostate, 2 breast, 1 liposarcoma, 3 endometrial, 1 kidney, 1 adrenal, 1 larynx and 1 bladder tumors) or cell suspensions prepared from normal liver, lung, spleen, ovary, testes, kidney, red blood cells or white blood cells. The antibody was tested for its binding to several well established cancer cell lines, and was found to bind to the BeWo human choriocarcinoma and two human embryonal carcinoma cell lines. The antibody did not react with 22 other cell lines or with hCG. The antibody was labeled with 131I and injected into nude mice bearing BeWo tumors and evaluated for tumor localization by performing whole body scans with a gamma camera 5 days later. Six mice injected with the antibody showed positive tumor localization without the need for background subtraction, while six mice injected with MOPC-21, a murine myeloma immunoglobulin, demonstrated much less tumor localization. Tissue distribution studies performed after scanning showed specific tumor localization (8:1 tumor:muscle) for the monoclonal antibody and no specific localization for MOPC-21.

  13. Procurement specification color graphic camera system

    NASA Technical Reports Server (NTRS)

    Prow, G. E.

    1980-01-01

    The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster-scan color computer terminal into permanent, high resolution photographic prints and transparencies. The images displayed will usually be remotely sensed LANDSAT scenes.

  14. Nondestructive evaluation using dipole model analysis with a scan type magnetic camera

    NASA Astrophysics Data System (ADS)

    Lee, Jinyi; Hwang, Jiseong

    2005-12-01

    Large structures such as nuclear power, thermal power, and chemical and petroleum refining plants are drawing interest with regard to the economics of extending component life, given the harsh environment created by high pressure, high temperature, and fatigue, the need to secure safety against corrosion, and components exceeding their designated life span. Therefore, technology that accurately calculates and predicts the degradation and defects of aging materials is extremely important. Among the different methods available, nondestructive testing using magnetic methods is effective in predicting and evaluating defects on or near the surface of ferromagnetic structures. It is important to estimate the distribution of magnetic field intensity for magnetic methods applicable to industrial nondestructive evaluation. A magnetic camera provides the distribution of a quantitative magnetic field with a homogeneous lift-off and spatial resolution. It is possible to interpret the distribution of the magnetic field when a dipole model is introduced. This study proposed an algorithm for nondestructive evaluation using dipole model analysis with a scan type magnetic camera. Numerical and experimental considerations of the quantitative evaluation of several sizes and shapes of cracks using magnetic field images from the magnetic camera were examined.
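
    The dipole model mentioned above is simple enough to sketch directly. The Python fragment below evaluates the z-component of a point dipole's field on a measurement plane at constant lift-off, which is essentially the quantity a magnetic camera images; the grid, moment and lift-off values are illustrative assumptions, not taken from the paper.

      import numpy as np

      MU0 = 4e-7 * np.pi   # vacuum permeability [T*m/A]

      def dipole_bz(mx, my, mz, x, y, lift_off):
          """z-component of a magnetic dipole field sampled on a plane.

          A crack in a magnetized plate is often approximated by one or
          more dipoles; the magnetic camera measures the field on a plane
          at a constant lift-off above the surface.
          """
          r = np.stack([x, y, np.full_like(x, lift_off)], axis=-1)
          rn = np.linalg.norm(r, axis=-1, keepdims=True)
          rhat = r / rn
          m = np.array([mx, my, mz])
          m_dot_rhat = (rhat @ m)[..., None]
          B = MU0 / (4 * np.pi) * (3 * rhat * m_dot_rhat - m) / rn**3
          return B[..., 2]

      # Field image on a 20 mm x 20 mm plane at 1 mm lift-off for a
      # z-oriented dipole (values are illustrative only).
      xs, ys = np.meshgrid(np.linspace(-0.01, 0.01, 64),
                           np.linspace(-0.01, 0.01, 64))
      bz = dipole_bz(0.0, 0.0, 1e-6, xs, ys, lift_off=1e-3)
      print(bz.shape, bz.max())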

  15. Spectral survey of helium lines in a linear plasma device for use in HELIOS imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, H. B., E-mail: rayhb@ornl.gov; Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831; Biewer, T. M.

    2016-11-15

    Fast visible cameras and a filterscope are used to examine the visible light emission from Oak Ridge National Laboratory's Proto-MPEX. The filterscope has been configured to perform helium line ratio measurements using emission lines at 667.9, 728.1, and 706.5 nm. The measured lines should be mathematically inverted and the ratios compared to a collisional radiative model (CRM) to determine Te and ne. Increasing the number of measurement chords through the plasma improves the inversion calculation and the subsequent Te and ne localization. For the filterscope, one spatial chord measurement requires three photomultiplier tubes (PMTs) connected to pellicle beam splitters. Multiple, fast visible cameras with narrowband filters are an alternate technique for performing these measurements with superior spatial resolution. Each camera contains millions of pixels; each pixel is analogous to one filterscope PMT. The data can then be inverted and the ratios compared to the CRM to determine 2-dimensional “images” of Te and ne in the plasma. An assessment is made in this paper of the candidate He I emission lines for an imaging technique.
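
    As a rough illustration of the line-ratio technique described above, the sketch below matches a measured pair of helium line ratios against a precomputed CRM grid by nearest-neighbor search. The grid ranges and the placeholder tables are assumptions; a real analysis would use actual CRM output and interpolation rather than random placeholders.

      import numpy as np

      # Hypothetical CRM grid (placeholders, not real model output).
      te_grid = np.linspace(2.0, 20.0, 50)   # electron temperature [eV]
      ne_grid = np.logspace(17, 20, 50)      # electron density [m^-3]

      # crm_r1[i, j], crm_r2[i, j]: model ratios 728.1/706.5 and
      # 667.9/728.1 at (te_grid[i], ne_grid[j]).
      rng = np.random.default_rng(0)
      crm_r1 = rng.random((50, 50))          # placeholder table
      crm_r2 = rng.random((50, 50))          # placeholder table

      def infer_te_ne(i_668, i_728, i_706):
          """Nearest-grid-point match of measured ratios against the CRM."""
          r1, r2 = i_728 / i_706, i_668 / i_728
          cost = (crm_r1 - r1) ** 2 + (crm_r2 - r2) ** 2
          i, j = np.unravel_index(np.argmin(cost), cost.shape)
          return te_grid[i], ne_grid[j]

      print(infer_te_ne(1.0, 0.6, 0.8))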

  16. Active solution of homography for pavement crack recovery with four laser lines.

    PubMed

    Xu, Guan; Chen, Fang; Wu, Guangwei; Li, Xiaotao

    2018-05-08

    An active solution method of the homography, derived from four laser lines, is proposed to recover the pavement cracks captured by the camera to real-dimension cracks in the pavement plane. The measurement system, including a camera and four laser projectors, captures the projection laser points on the 2D reference in different positions. The projection laser points are reconstructed in the camera coordinate system. Then, the laser lines are initialized and optimized from the projection laser points. Moreover, the plane-indicated Plücker matrices of the optimized laser lines are employed to model the laser projection points of the laser lines on the pavement. The image-pavement homography is actively determined from the solutions of the perpendicular feet of the projection laser points. The pavement cracks are recovered by the active solution of the homography in the experiments. The recovery accuracy of the active solution method is verified with a 2D reference of known dimensions. The test case with a measurement distance of 700 mm and a relative angle of 8° achieves the smallest recovery error of 0.78 mm in the experimental investigations, which indicates the method's application potential in vision-based pavement inspection.
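
    Once the image-pavement homography has been determined, recovering metric crack dimensions is a standard projective mapping. The sketch below shows that final step only, not the paper's active laser-based solution of the homography itself: image points are pushed through an assumed 3x3 matrix and a crack's polyline length is summed in millimetres. The matrix and pixel coordinates are illustrative.

      import numpy as np

      def apply_homography(H, pts):
          """Map Nx2 image points to the pavement plane (homogeneous divide)."""
          pts_h = np.hstack([pts, np.ones((len(pts), 1))])
          mapped = pts_h @ H.T
          return mapped[:, :2] / mapped[:, 2:3]

      def crack_length_mm(H, crack_pixels):
          """Polyline length of a detected crack after recovery to the plane."""
          pav = apply_homography(H, crack_pixels)
          return np.sum(np.linalg.norm(np.diff(pav, axis=0), axis=1))

      # Illustrative homography (a pure 0.5 mm/pixel scaling) and a
      # hypothetical crack traced as image pixel coordinates.
      H = np.diag([0.5, 0.5, 1.0])
      crack = np.array([[100, 200], [140, 260], [190, 300]], dtype=float)
      print(crack_length_mm(H, crack))   # length in mm under the assumed H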

  17. Thermal Scanning of Dental Pulp Chamber by Thermocouple System and Infrared Camera during Photo Curing of Resin Composites

    PubMed Central

    Hamze, Faeze; Ganjalikhan Nasab, Seyed Abdolreza; Eskandarizadeh, Ali; Shahravan, Arash; Akhavan Fard, Fatemeh; Sinaee, Neda

    2018-01-01

    Introduction: Due to the thermal hazard during composite restorations, this study was designed to scan the pulp temperature with a thermocouple and an infrared camera during photo polymerization of different composites. Methods and Materials: A mesio-occluso-distal (MOD) cavity was prepared in an extracted tooth and a K-type thermocouple was fixed in its pulp chamber. Subsequently, 1-mm increments of each composite (four composite types were incorporated) were inserted and photo polymerized employing either LED or QTH systems for 60 sec while the temperature was recorded at 10-sec intervals. Ultimately, the same tooth was hemisected bucco-lingually and the amalgam was removed. The same composite curing procedure was repeated while the thermogram was recorded using an infrared camera. Thereafter, the data were analyzed by repeated-measures ANOVA followed by Tukey’s HSD post hoc test for multiple comparisons (α=0.05). Results: The pulp temperature was significantly increased (repeated measures) during photo polymerization (P=0.000), while there was no significant difference between the results recorded by the thermocouple and the infrared camera (P>0.05). Moreover, different composite materials and LCUs led to similar outcomes (P>0.05). Conclusion: Although various composites have significantly different chemical compositions, they lead to similar pulp thermal changes. Moreover, both the infrared camera and the thermocouple record parallel results of dental pulp temperature. PMID:29707014

  18. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy.

    PubMed Central

    Viles, C L; Sieracki, M E

    1992-01-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. PMID:1610183
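
    A minimal sketch of the kind of adaptive edge-strength criterion described above: each bright blob is thresholded at a fraction of its own peak Sobel edge strength rather than at a single global level. This is a simplified stand-in for the authors' operator, with all parameter values assumed.

      import numpy as np
      from scipy import ndimage

      def cell_boundaries(img, sigma=1.0, strength_frac=0.5):
          """Adaptive edge-strength segmentation of small fluorescent cells.

          Smooths the image, computes the Sobel gradient magnitude, finds
          bright blobs, and keeps each blob's pixels whose edge strength
          exceeds a fraction of that blob's own peak edge strength; a
          per-cell criterion instead of a single global threshold.
          """
          sm = ndimage.gaussian_filter(img.astype(float), sigma)
          edge = np.hypot(ndimage.sobel(sm, axis=0), ndimage.sobel(sm, axis=1))
          blobs, n = ndimage.label(sm > sm.mean() + 2 * sm.std())
          mask = np.zeros(img.shape, dtype=bool)
          for i in range(1, n + 1):
              region = blobs == i
              mask |= region & (edge >= strength_frac * edge[region].max())
          return mask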

  19. Cameras Monitor Spacecraft Integrity to Prevent Failures

    NASA Technical Reports Server (NTRS)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  20. Hand-eye calibration using a target registration error model.

    PubMed

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
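
    At the core of such a calibration is a registration between homologous point-line pairs: after the hand-eye transform, each tracked stylus-tip position should fall on the back-projected ray of its image observation. The sketch below poses that as a least-squares problem over an axis-angle rotation and translation; it omits the TRE-guided measurement placement, and the data layout and names are assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      def residuals(x, pts_tracker, rays_cam):
          """Point-to-line distances under a candidate hand-eye transform.

          x: 6-vector (axis-angle rotation, then translation) mapping
          tracker coordinates into camera coordinates.
          pts_tracker: Nx3 tracked stylus-tip positions (the points).
          rays_cam: Nx3 unit directions of the back-projected image rays
          through the camera origin (the lines).
          """
          rvec, t = x[:3], x[3:]
          angle = np.linalg.norm(rvec)
          if angle < 1e-12:
              R = np.eye(3)
          else:
              k = rvec / angle   # Rodrigues rotation formula
              K = np.array([[0, -k[2], k[1]],
                            [k[2], 0, -k[0]],
                            [-k[1], k[0], 0]])
              R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K
          p = pts_tracker @ R.T + t
          along = np.sum(p * rays_cam, axis=1, keepdims=True)
          return (p - along * rays_cam).ravel()   # perpendicular components

      # With N measured (point, ray) pairs, the calibration would be:
      # sol = least_squares(residuals, np.zeros(6), args=(pts, rays))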

  1. Assessment of a Microsoft Kinect-based 3D scanning system for taking body segment girth measurements: a comparison to ISAK and ISO standards.

    PubMed

    Clarkson, Sean; Wheat, Jon; Heller, Ben; Choppin, Simon

    2016-01-01

    Use of anthropometric data to infer sporting performance is increasing in popularity, particularly within elite sport programmes. Measurement typically follows standards set by the International Society for the Advancement of Kinanthropometry (ISAK). However, such techniques are time consuming, which reduces their practicality. Schranz et al. recently suggested 3D body scanners could replace current measurement techniques; however, current systems are costly. Recent interest in natural user interaction has led to a range of low-cost depth cameras capable of producing 3D body scans, from which anthropometrics can be calculated. A scanning system comprising 4 depth cameras was used to scan 4 cylinders, representative of the body segments. Girth measurements were calculated from the 3D scans and compared to gold standard measurements. Requirements of a Level 1 ISAK practitioner were met in all 4 cylinders, and ISO standards for scan-derived girth measurements were met in the 2 larger cylinders only. A fixed measurement bias was identified that could be corrected with a simple offset factor. Further work is required to determine comparable performance across a wider range of measurements performed upon living participants. Nevertheless, findings of the study suggest such a system offers many advantages over current techniques, having a range of potential applications.
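
    A girth measurement can be derived from a 3D scan by slicing the point cloud at a given height and taking the perimeter of the slice's convex hull, which behaves like a tape measure for convex segments such as the cylinders used above. The sketch below illustrates this under assumed units and slab thickness; it is not the system's actual measurement code.

      import numpy as np
      from scipy.spatial import ConvexHull

      def girth(points, height, slab=0.005):
          """Approximate segment girth from a 3D scan (units: metres).

          Takes all scan points within +/- slab of the given height,
          projects them to the horizontal plane, and returns the
          perimeter of their convex hull.
          """
          z = points[:, 2]
          slice_xy = points[np.abs(z - height) < slab][:, :2]
          hull = ConvexHull(slice_xy)
          ring = slice_xy[hull.vertices]   # hull vertices in CCW order
          return np.sum(np.linalg.norm(np.roll(ring, -1, axis=0) - ring, axis=1))

      # Synthetic cylinder of radius 0.15 m: expect ~ 2*pi*0.15 = 0.942 m.
      t = np.random.rand(20000) * 2 * np.pi
      pts = np.column_stack([0.15 * np.cos(t), 0.15 * np.sin(t),
                             np.random.rand(20000)])
      print(girth(pts, height=0.5))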

  2. SU-F-T-235: Optical Scan Based Collision Avoidance Using Multiple Stereotactic Cameras During Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardan, R; Popple, R; Dobelbower, M

    Purpose: To demonstrate the ability to quickly generate an accurate collision avoidance map using multiple stereotactic cameras during simulation. Methods: Three Kinect stereotactic cameras were placed in the CT simulation room and optically calibrated to the DICOM isocenter. Immediately before scanning, the patient was optically imaged to generate a 3D polygon mesh, which was used to calculate the collision avoidance area using our previously developed framework. The mesh was visually compared to the CT scan body contour to ensure accurate coordinate alignment. To test the accuracy of the collision calculation, the patient and machine were physically maneuvered in the treatment room to the calculated collision boundaries. Results: The optical scan and collision calculation took 38.0 seconds and 2.5 seconds to complete, respectively. The collision prediction accuracy was determined using a receiver operating curve (ROC) analysis, where the true positive, true negative, false positive and false negative values were 837, 821, 43, and 79 points, respectively. The ROC accuracy was 93.1% over the sampled collision space. Conclusion: We have demonstrated a framework which is fast and accurate for predicting collision avoidance for treatment and which can be run during the normal simulation process. Because of its speed, the system could be used to add a layer of safety with a negligible impact on the normal patient simulation experience. This information could be used during treatment planning to explore the feasible geometries when optimizing plans. Research supported by Varian Medical Systems.
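
    The reported 93.1% accuracy follows directly from the stated confusion counts, since accuracy here is (TP + TN) / (TP + TN + FP + FN):

      # Accuracy from the reported confusion counts
      tp, tn, fp, fn = 837, 821, 43, 79
      accuracy = (tp + tn) / (tp + tn + fp + fn)   # 1658 / 1780
      print(f"{accuracy:.1%}")                     # -> 93.1%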

  3. Molecular Shocks Associated with Massive Young Stars: CO Line Images with a New Far-Infrared Spectroscopic Camera on the Kuiper Airborne Observatory

    NASA Technical Reports Server (NTRS)

    Watson, Dan M.

    1997-01-01

    Under the terms of our contract with NASA Ames Research Center, the University of Rochester (UR) offers the following final technical report on grant NAG 2-958, Molecular shocks associated with massive young stars: CO line images with a new far-infrared spectroscopic camera, given for implementation of the UR Far-Infrared Spectroscopic Camera (FISC) on the Kuiper Airborne Observatory (KAO), and use of this camera for observations of star-formation regions. Two KAO flights in FY 1995, the final year of KAO operations, were awarded to this program, conditional upon a technical readiness confirmation which was given in January 1995. The funding period covered in this report is 1 October 1994 - 30 September 1996. The project was supported with $30,000, and no funds remained at the conclusion of the project.

  4. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  5. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  6. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  7. ETR COMPRESSOR BUILDING, TRA643. CAMERA FACES NORTH. AIR HEATERS LINE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ETR COMPRESSOR BUILDING, TRA-643. CAMERA FACES NORTH. AIR HEATERS LINE UP AGAINST WALL, TO BE USED IN CONNECTION WITH ETR EXPERIMENTS. EACH HAD A HEAT OUTPUT OF 8 MILLION BTU PER HOUR, OPERATED AT 1260 DEGREES F. AND A PRESSURE OF 320 PSI. NOTE METAL WALLS AND ROOF. INL NEGATIVE NO. 56-3709. R.G. Larsen, Photographer, 11/13/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  8. Monte Carlo simulations of medical imaging modalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estes, G.P.

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  9. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  10. Counting neutrons with a commercial S-CMOS camera

    NASA Astrophysics Data System (ADS)

    Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega

    2018-01-01

    It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, together with off-line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We report on this ongoing work, which results from the collaboration between ESS Bilbao and the ILL. The main progress to be reported concerns the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, all the processing must happen on-line, to avoid the accumulation of large amounts of image data to be analyzed off-line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it into a reasonable "neutron impact" data flow, as from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather high, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with, and results will be shown for different setups improving the light collection on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be efficient in their recognition of neutron signals and in their rejection of noise signals (internal and external to the camera), but also simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we give an overview of the part of the road that has already been walked.

  11. An evaluation of new high resolution image collection and processing techniques for estimating shrub cover and detecting landscape changes associated with military training in arid lands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, D.J.; Ostler, W.K.

    2000-02-01

    Research funded by the US Department of Defense, US Department of Energy, and the US Environmental Protection Agency as part of Project CS-1131 of the Strategic Environmental Research and Development Program evaluated novel techniques for collecting high-resolution images in the Mojave Desert using helicopters, helium-filled blimps, kites, and hand-held telescoping poles at heights from 1 to 150 meters. Several camera types, lenses, films, and digital techniques were evaluated on the basis of their ability to correctly estimate canopy cover of shrubs. A high degree of accuracy was obtained with photo scales of 1:4,000 or larger and flatbed scanning rates from films or prints of 300 lines per inch or larger. Smaller scale images were of value in detecting retrospective changes in cover of large shrubs, but failed to detect smaller shrubs. Excellent results were obtained using inexpensive 35-millimeter cameras and new super-fine grain films such as Kodak's Royal Gold™ (ASA 100) film, or megapixel digital cameras. New image-processing software, such as SigmaScan Pro™, makes it possible to accurately measure areas up to 1 hectare in size for total cover and density in 10 minutes, compared to several hours or days of field work. In photographs with scales of 1:1,000 and 1:2,000, it was possible to detect cover and density of up to four dominant shrub species. Canopy cover and other parameters such as width, length, Feret diameter, and shape factors can be nearly instantaneously measured for each individual shrub, yielding size distribution histograms and other statistical data on plant community structure. Use of the technique is being evaluated in a four-year study of military training impacts at Fort Irwin, California, and results compared with image processing using conventional aerial photography and satellite imagery, including the new 1-meter pixel IKONOS images. The technique is a valuable new emerging tool to accurately assess vegetation structure and landscape changes due to military or other land-use disturbances.

  12. EAARL coastal topography--Alligator Point, Louisiana, 2010

    USGS Publications Warehouse

    Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Wright, C.W.; Brock, J.C.; Nagle, D.B.; Vivekanandan, Saisudha; Fredericks, Xan; Barras, J.A.

    2012-01-01

    This project provides highly detailed and accurate datasets of a portion of Alligator Point, Louisiana, acquired on March 5 and 6, 2010. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the National Aeronautics and Space Administration (NASA) Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations.

  13. New concept high-speed and high-resolution color scanner

    NASA Astrophysics Data System (ADS)

    Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya

    2003-05-01

    We have developed a new concept high-speed and high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, approximately 12 million pixels of image data can be captured. This high resolution imaging capability allows various uses such as OCR, color document reading, and document camera applications. The scan time is only about 3 seconds for a letter size sheet. Blinkscan scans documents placed "face up" on its scan stage, without any special illumination. Using Blinkscan, a high-resolution color document can be easily input into a PC at high speed, so a paperless system can be built easily. The device is small, and since its occupancy area is also small, it can be set on an individual desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing. Currently, several hundred Blinkscan units are shipping, mainly for receptionist operations at banks and securities firms. We show the high-speed and high-resolution architecture of Blinkscan. A comparison of operation time with conventional image capture devices makes the advantage of Blinkscan clear. An image evaluation for a variety of environmental conditions, such as geometric distortions or non-uniformity of brightness, is also presented.

  14. An evaluation of inexpensive methods for root image acquisition when using rhizotrons.

    PubMed

    Mohamed, Awaz; Monnier, Yogan; Mao, Zhun; Lobet, Guillaume; Maeght, Jean-Luc; Ramel, Merlin; Stokes, Alexia

    2017-01-01

    Belowground processes play an essential role in ecosystem nutrient cycling and the global carbon budget cycle. Quantifying fine root growth is crucial to the understanding of ecosystem structure and function and in predicting how ecosystems respond to climate variability. A better understanding of root system growth is necessary, but choosing the best method of observation is complex, especially in the natural soil environment. Here, we compare five methods of root image acquisition using inexpensive technology that is currently available on the market: flatbed scanner, handheld scanner, manual tracing, a smartphone application scanner and a time-lapse camera. Using the five methods, root elongation rate (RER) was measured for three months on roots of hybrid walnut (Juglans nigra × Juglans regia L.) in rhizotrons installed in agroforests. When all methods were compared together, there were no significant differences in relative cumulative root length. However, the time-lapse camera and the manual tracing method significantly overestimated the relative mean diameter of roots compared to the three scanning methods. The smartphone scanning application was found to perform best overall when considering image quality and ease of use in the field. The automatic time-lapse camera was useful for measuring RER over several months without any human intervention. Our results show that inexpensive scanning and automated methods provide correct measurements of root elongation and length (but not diameter when using the time-lapse camera). These methods are capable of detecting fine roots to a diameter of 0.1 mm and can therefore be selected by the user depending on the data required.

  15. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.

  16. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE PAGES

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    2015-12-10

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.
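
    A much-simplified sketch of the line-by-line alignment idea follows: each slow-scan line of one image is cross-correlated against the corresponding line of the orthogonal-scan image (rotated back to the same orientation) to estimate a lateral shift, which is then undone. The paper fits contrast variation along the lines rather than using plain cross-correlation, and it handles nonlinear drift; this integer-shift version only conveys the structure of the approach.

      import numpy as np

      def line_shifts(img, ref):
          """Per-line lateral shift of img relative to ref, by cross-correlation.

          img: image whose slow-scan lines drift; ref: the orthogonal-scan
          image, already rotated back to the same orientation.
          """
          shifts = np.empty(img.shape[0], dtype=int)
          for i, (a, b) in enumerate(zip(img, ref)):
              c = np.correlate(a - a.mean(), b - b.mean(), mode="full")
              shifts[i] = c.argmax() - (len(a) - 1)
          return shifts

      def apply_shifts(img, shifts):
          """Undo the estimated integer shift of each line."""
          out = np.empty_like(img)
          for i, s in enumerate(shifts):
              out[i] = np.roll(img[i], -s)
          return out

      # With a scan pair img0, img90 (fast-scan directions orthogonal):
      # corrected = apply_shifts(img0, line_shifts(img0, np.rot90(img90, -1)))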

  17. Integration of Geodata in Documenting Castle Ruins

    NASA Astrophysics Data System (ADS)

    Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.

    2016-06-01

    Textured three-dimensional models are currently one of the standard methods of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are the digital camera and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows modelling and visualization of 3D models of historical structures. An additional aspect of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery, and presents an analysis of the accuracy of the described methods. The paper describes results obtained from raw data consisting of a point cloud measured using terrestrial laser scanning acquired from a Leica ScanStation2 and digital imagery taken using a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.

  18. Sidelooking laser altimeter for a flight simulator

    NASA Technical Reports Server (NTRS)

    Webster, L. D. (Inventor)

    1983-01-01

    An improved laser altimeter for a flight simulator, which allows measurement of the height of the simulator probe above the terrain directly below the probe tip, is described. A laser beam is directed from the probe at an angle theta to the horizontal to produce a beam spot on the terrain. The angle theta that the laser beam makes with the horizontal is varied so as to bring the beam spot into coincidence with a plumb line coaxial with the longitudinal axis of the probe. A television altimeter camera observes the beam spot and has a raster line aligned with the plumb line. A spot detector circuit coupled to the output of the TV camera monitors the position of the beam spot relative to the plumb line.

  19. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
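
    The PU/Central Unit split maps naturally onto a process-per-camera design. The sketch below, using Python's multiprocessing module, spawns one Processing Unit per camera and has the Central Unit collect per-frame detection results from a shared queue; the frame count and the stubbed acquisition/detection are assumptions, since the paper does not publish code.

      import multiprocessing as mp

      def processing_unit(cam_id, result_queue):
          """One PU: acquire frames from its camera and run detection.
          Acquisition and 2D object detection are stubbed out here."""
          for frame_no in range(100):        # stand-in for the acquisition loop
              detections = []                # stand-in for the detection module
              result_queue.put((cam_id, frame_no, detections))

      def central_unit(num_cameras):
          """Supervisor: spawn one PU per camera and collect their results."""
          queue = mp.Queue()
          pus = [mp.Process(target=processing_unit, args=(cid, queue))
                 for cid in range(num_cameras)]
          for p in pus:
              p.start()
          for _ in range(100 * num_cameras):
              cam_id, frame_no, detections = queue.get()
              # fuse or log the per-camera detections here
          for p in pus:
              p.join()

      if __name__ == "__main__":
          central_unit(num_cameras=4)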

  20. Remote Attitude Measurement Techniques.

    DTIC Science & Technology

    1982-12-01

    television camera). The incident illumination produces a non-uniformity on the scanned side of the sensitive material which can be modeled as an...to compute the probabilistic attitude matrix. Fourth, the experiment will be conducted with the television camera mounted on a machinist's table, such... the optical axis does not necessarily pass through the center of the lens assembly and impact the center pixel in the active region of

  1. Active learning in camera calibration through vision measurement application

    NASA Astrophysics Data System (ADS)

    Li, Xiaoqin; Guo, Jierong; Wang, Xianchun; Liu, Changqing; Cao, Binfang

    2017-08-01

    Since cameras are increasingly used in scientific applications as well as in applications requiring precise visual information, effective calibration of such cameras is becoming more important. There are many reasons why measurements of objects are not accurate. The largest is that the lens has distortion. Another detrimental influence on evaluation accuracy is caused by perspective distortions in the image, which occur whenever the camera cannot be mounted perpendicularly to the objects to be measured. Overall, it is very important for students to understand how to correct lens distortions, that is, camera calibration. If the camera is calibrated and the images are rectified, it is possible to obtain undistorted measurements in world coordinates. This paper presents how students can develop a sense of active learning for the mathematical camera model beyond the theoretical scientific basics. The authors present theoretical and practical lectures with the goal of deepening the students' understanding of the mathematical models of area scan cameras and having them build a practical vision measurement process by themselves.
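
    A worked example of the lens-distortion correction such a lecture would build up to is sketched below, using the common polynomial radial model x_u = x_d (1 + k1 r^2 + k2 r^4) in normalized coordinates. The focal length, principal point and coefficients are illustrative, and applying the polynomial directly to distorted coordinates is only an approximation to the true inverse, adequate for mild distortion.

      import numpy as np

      def undistort_points(pts, k1, k2, fc, cc):
          """Correct radial lens distortion with the polynomial model.

          pts: Nx2 pixel coordinates; fc: focal lengths (fx, fy);
          cc: principal point (cx, cy); k1, k2: radial coefficients.
          """
          xn = (pts - cc) / fc                     # pixels -> normalized coords
          r2 = np.sum(xn**2, axis=1, keepdims=True)
          xu = xn * (1 + k1 * r2 + k2 * r2**2)     # approximate inverse model
          return xu * fc + cc                      # back to pixels

      # Illustrative camera: 800-pixel focal length, principal point
      # (640, 360), mild barrel distortion.
      pts = np.array([[1000.0, 500.0], [640.0, 360.0]])
      print(undistort_points(pts, k1=-0.12, k2=0.01,
                             fc=np.array([800.0, 800.0]),
                             cc=np.array([640.0, 360.0])))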

  2. On-line dimensional measurement of small components on the eyeglasses assembly line

    NASA Astrophysics Data System (ADS)

    Rosati, G.; Boschetti, G.; Biondi, A.; Rossi, A.

    2009-03-01

    Dimensional measurement of the subassemblies at the beginning of the assembly line is a crucial process for the eyeglasses industry, since even small manufacturing errors in the components can lead to very visible defects on the final product. For this reason, all subcomponents of the eyeglass are verified before beginning the assembly process, either with 100% inspection or on a statistical basis. Inspection is usually performed by human operators, with high costs and a degree of repeatability which is not always satisfactory. This paper presents a novel on-line measuring system for dimensional verification of small metallic subassemblies for the eyeglasses industry. The machine vision system proposed, which was designed to be used at the beginning of the assembly line, could also be employed in Statistical Process Control (SPC) by the manufacturer of the subassemblies. The automated system proposed is based on artificial vision, and exploits two CCD cameras and an anthropomorphic robot to inspect and manipulate the subcomponents of the eyeglass. Each component is recognized by the first camera in a fairly large workspace, picked up by the robot and placed in the small vision field of the second camera, which performs the measurement process. Finally, the part is palletized by the robot. The system can be easily taught by the operator by simply placing the template object in the vision field of the measurement camera (for dimensional data acquisition) and then instructing the robot via the Teaching Control Pendant within the vision field of the first camera (for pick-up transformation acquisition). The major problem we dealt with is that the shape and dimensions of the subassemblies can vary over quite a wide range, while different positionings of the same component can look very similar to one another. For this reason, a specific shape recognition procedure was developed. In the paper, the whole system is presented together with the first experimental lab results.

  3. The HRSC on Mars Express: Mert Davies' Involvement in a Novel Planetary Cartography Experiment

    NASA Astrophysics Data System (ADS)

    Oberst, J.; Waehlisch, M.; Giese, B.; Scholten, F.; Hoffmann, H.; Jaumann, R.; Neukum, G.

    2002-12-01

    Mert Davies was a team member of the HRSC (High Resolution Stereo Camera) imaging experiment (PI: Gerhard Neukum) on ESA's Mars Express mission. This pushbroom camera is equipped with 9 forward- and backward-looking CCD lines, 5184 samples each, mounted in parallel, perpendicular to the spacecraft velocity vector. Flight image data with resolutions of up to 10m/pix (from an altitude of 250 km) will be acquired line by line as the spacecraft moves. This acquisition strategy will result in 9 separate almost completely overlapping image strips, each of them having more than 27,000 image lines, typically. [HRSC is also equipped with a superresolution channel for imaging of selected targets at up to 2.3 m/pixel]. The combined operation of the nadir and off-nadir CCD lines (+18.9°, 0°, -18.9°) gives HRSC a triple-stereo capability for precision mapping of surface topography and for modelling of spacecraft orbit- and camera pointing errors. The goals of the camera are to obtain accurate control point networks, Digital Elevation Models (DEMs) in Mars-fixed coordinates, and color orthoimages at global (100% of the surface will be covered with resolutions better than 30m/pixel) and local scales. With his long experience in all aspects of planetary geodesy and cartography, Mert Davies was involved in the preparations of this novel Mars imaging experiment which included: (a) development of a ground data system for the analysis of triple-stereo images, (b) camera testing during airborne imaging campaigns, (c) re-analysis of the Mars control point network, and generation of global topographic orthoimage maps on the basis of MOC images and MOLA data, (d) definition of the quadrangle scheme for a new topographic image map series 1:200K, (e) simulation of synthetic HRSC imaging sequences and their photogrammetric analysis. Mars Express is scheduled for launch in May of 2003. We miss Mert very much!

  4. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates.

    PubMed

    Zhang, Hao; Li, Xianqi; Chen, Yunmei; Park, Jewook; Li, An-Ping; Zhang, X-G

    2017-01-01

    We present an image postprocessing framework for the Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates while preserving the features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large scale images and four steps for atomic scale images. For large scale images, we first apply to each line an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). Numerical results on measurements from a copper(111) surface indicate the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic scale images, an additional first step to remove strong line-by-line background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data.
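
    The ranking-map step for atomic-scale images is easy to state concretely: each pixel is replaced by its rank in the image's intensity ordering, which suppresses residual line-to-line intensity fluctuations while preserving lattice contrast. A minimal sketch (the surrounding pipeline, including the "rubber band" CIAFA step, is not reproduced here):

      import numpy as np

      def ranking_map(img):
          """Replace each pixel by its normalized rank in the intensity ordering."""
          flat = img.ravel()
          ranks = np.empty(flat.size, dtype=float)
          ranks[np.argsort(flat)] = np.arange(flat.size)
          return (ranks / (flat.size - 1)).reshape(img.shape)

      # atomic_final = ranking_map(postprocessed_image)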

  5. Color line scan camera technology and machine vision: requirements to consider

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.

    1997-08-01

    Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new cameras and scanner technologies itself underscores. In the future, the movement from monochrome imaging to color will hasten as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, certain requirements are placed on the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features becomes even greater when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray level image processing, due to the three times greater data volume per image; the same has applied to the three times greater memory needed. Advancements in computers, memory and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, image analysis of color images can in fact be easier and faster than with a similar gray level image, because of the greater information per pixel. Color machine vision sets new requirements for lighting, too: high intensity, white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.

  6. Economical Video Monitoring of Traffic

    NASA Technical Reports Server (NTRS)

    Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.

    1986-01-01

    Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. The lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.

  7. Jungle pipeline inspected for corrosion by camera pig

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-03-01

    Acting on a suspicion that internal corrosion could be affecting the integrity of the Cambai-to-Simpans Y line - a 14-in. OD, 36-mile natural gas pipeline in Indonesia's South Sumatra region - Pertamina elected to use GEO Pipeline Services's camera pig to photographically inspect the inner pipe wall's condition. As a result of this inspection, the Indonesian company was able to obtain more than 400 high-resolution photographs surveying the interior of the line, in addition to precise measurements of the corrosion in these areas.

  8. Fluorescence molecular imaging system with a novel mouse surface extraction method and a rotary scanning scheme

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing

    2015-03-01

    We have developed a new fluorescence molecular tomography (FMT) imaging system, in which we utilize a phase shifting method to extract the mouse surface geometry optically and a rotary laser scanning approach to excite fluorescence molecules and acquire fluorescent measurements over the whole mouse body. Nine fringe patterns with a phase shift of 2π/9 are projected onto the mouse surface by a projector. The fringe patterns are captured using a webcam to calculate a phase map that is converted to the geometry of the mouse surface with our algorithms. We used a DigiWarp approach to warp a finite element mesh of a standard digital mouse to the measured mouse surface, so the tedious and time-consuming procedure from point cloud to mesh is avoided. Experimental results indicate that the proposed method is accurate, with errors less than 0.5 mm. In the FMT imaging system, the mouse is placed inside a conical mirror and scanned with a line-pattern laser that is mounted on a rotation stage. After being reflected by the conical mirror, the emitted fluorescence photons travel through the central hole of the rotation stage and the band pass filters in a motorized filter wheel, and are collected by a CCD camera. Phantom experiments show that the proposed FMT imaging system can reconstruct the target accurately.
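
    The phase-shifting step has a standard closed form: for N fringe images with phase shifts delta_k = 2*pi*k/N, the wrapped phase is phi = atan2(-sum_k I_k sin(delta_k), sum_k I_k cos(delta_k)), with sign conventions varying between implementations. A sketch for the nine-pattern case described above, where the grab_frame call is a placeholder for the webcam capture:

      import numpy as np

      def phase_map(frames):
          """Wrapped phase from N phase-shifted fringe images.

          frames: array of shape (N, H, W), where frame k was captured
          with a fringe pattern shifted by 2*pi*k/N.
          """
          n = len(frames)
          delta = 2 * np.pi * np.arange(n) / n
          s = np.tensordot(np.sin(delta), frames, axes=1)
          c = np.tensordot(np.cos(delta), frames, axes=1)
          return np.arctan2(-s, c)   # wrapped to (-pi, pi]; unwrap before use

      # Nine patterns shifted by 2*pi/9, as in the setup described above:
      # phi = phase_map(np.stack([grab_frame(k) for k in range(9)]))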

  9. Use of three-dimensional computer graphic animation to illustrate cleft lip and palate surgery.

    PubMed

    Cutting, C; Oliker, A; Haring, J; Dayan, J; Smith, D

    2002-01-01

    Three-dimensional (3D) computer animation is not commonly used to illustrate surgical techniques. This article describes the surgery-specific processes that were required to produce animations to teach cleft lip and palate surgery. Three-dimensional models were created using CT scans of two Chinese children with unrepaired clefts (one unilateral and one bilateral). We programmed several custom software tools, including an incision tool, a forceps tool, and a fat tool. Three-dimensional animation was found to be particularly useful for illustrating surgical concepts. Positioning the virtual "camera" made it possible to view the anatomy from angles that are impossible to obtain with a real camera. Transparency allows the underlying anatomy to be seen during surgical repair while maintaining a view of the overlaying tissue relationships. Finally, the representation of motion allows modeling of anatomical mechanics that cannot be done with static illustrations. The animations presented in this article can be viewed on-line at http://www.smiletrain.org/programs/virtual_surgery2.htm. Sophisticated surgical procedures are clarified with the use of 3D animation software and customized software tools. The next step in the development of this technology is the creation of interactive simulators that recreate the experience of surgery in a safe, digital environment. Copyright 2003 Wiley-Liss, Inc.

  10. Development and application of 3-D foot-shape measurement system under different loads

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-03-01

    The 3-D foot-shape measurement system under different loads, based on the laser-line-scanning principle, was designed and a model of the measurement system was developed. 3-D foot-shape measurements without blind areas under different loads and automatic extraction of foot parameters are achieved with the system. A global calibration method for the CCD cameras, using a one-axis motion unit in the measurement system and specialized calibration kits, is presented. Errors caused by the nonlinearity of the CCD cameras and other devices, and by the installation of the one-axis motion platform, the laser plane and the toughened glass plane, can be eliminated by using a nonlinear coordinate mapping function and the Powell optimization method in calibration. Foot measurements under different loads were conducted for 170 participants, and statistical foot-parameter measurement results for male and female participants under the non-weight condition, as well as changes of foot parameters under the half-body-weight, full-body-weight and over-body-weight conditions compared with the non-weight condition, are presented. 3-D foot-shape measurement under different loads makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers and athletes.
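
    The abstract's pairing of a nonlinear coordinate mapping with Powell's method can be sketched as a derivative-free fit. Below, a hypothetical quadratic-polynomial mapping from sensor to world coordinates is fit to calibration correspondences with scipy's Powell minimizer; the polynomial form and parameter count are assumptions, not the authors' actual mapping function.

      import numpy as np
      from scipy.optimize import minimize

      def map_points(params, uv):
          """Hypothetical nonlinear mapping from sensor (u, v) to world
          (x, y): a quadratic polynomial for each output coordinate."""
          a, b = params[:6], params[6:]
          u, v = uv[:, 0], uv[:, 1]
          basis = np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])
          return np.column_stack([basis @ a, basis @ b])

      def calibrate(uv_meas, xy_true):
          """Fit the mapping with Powell's derivative-free method.

          uv_meas: Nx2 measured sensor points from the calibration kit;
          xy_true: Nx2 corresponding known world coordinates.
          """
          cost = lambda p: np.sum((map_points(p, uv_meas) - xy_true) ** 2)
          return minimize(cost, x0=np.zeros(12), method="Powell").x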

  11. Do it yourself remote sensing: Generating an inexpensive, high tech, real science lake mapping project for the classroom

    NASA Technical Reports Server (NTRS)

    Metzger, Stephen M.

    1993-01-01

    The utilization of modest equipment and software revealed bottom contours and water column conditions of a dynamic water body. Classroom discussions of field techniques and equipment capabilities, followed by exercises with the data sets in cause-and-effect analysis, all contributed to participatory education in the process of science. This project is presented as a case study of the value of engaging secondary and collegiate level students in planning, executing and appraising a real world investigation which they can directly relate to. A 1 km wide bay, experiencing marsh inflow, along an 8 km long lake situated 120 km north of Ottawa, Canada, on the glaciated Canadian Precambrian Shield was mapped in midsummer for submerged topography, bottom composition, temperature profile, turbidity, dissolved oxygen and biota distribution. Low level aerial photographs scanned into image processing software are permitting spatial classification of bottom variations in biology and geology. Instrumentation consisted of a portable sport fishing SONAR depth finder, an electronic lead line multiprobe with photocell, thermistor and dissolved oxygen sensors, a selective depth water sampler, a portable pH meter, an underwater camera mounted on a home-made platform with a bottom-contact trigger, and a disposable underwater camera for shallow survey work. Sampling transects were referenced using a Brunton hand transit triangulating several shore markers.

  12. Surveying the Newly Digitized Apollo Metric Images for Highland Fault Scarps on the Moon

    NASA Astrophysics Data System (ADS)

    Williams, N. R.; Pritchard, M. E.; Bell, J. F.; Watters, T. R.; Robinson, M. S.; Lawrence, S.

    2009-12-01

    The presence and distribution of thrust faults on the Moon have major implications for lunar formation and thermal evolution. For example, thermal history models for the Moon imply that most of the lunar interior was initially hot. As the Moon cooled over time, some models predict global-scale thrust faults should form as stress builds from global thermal contraction. Large-scale thrust fault scarps with lengths of hundreds of kilometers and maximum relief of up to a kilometer or more, like those on Mercury, are not found on the Moon; however, relatively small-scale linear and curvilinear lobate scarps with maximum lengths typically around 10 km have been observed in the highlands [Binder and Gunga, Icarus, v63, 1985]. These small-scale scarps are interpreted to be thrust faults formed by contractional stresses with relatively small maximum (tens of meters) displacements on the faults. These narrow, low relief landforms could only be identified in the highest resolution Lunar Orbiter and Apollo Panoramic Camera images and under the most favorable lighting conditions. To date, the global distribution and other properties of lunar lobate faults are not well understood. The recent micron-resolution scanning and digitization of the Apollo Mapping Camera (Metric) photographic negatives [Lawrence et al., NLSI Conf. #1415, 2008; http://wms.lroc.asu.edu/apollo] provides a new dataset to search for potential scarps. We examined more than 100 digitized Metric Camera image scans, and from these identified 81 images with favorable lighting (incidence angles between about 55 and 80 deg.) to manually search for features that could be potential tectonic scarps. Previous surveys based on Panoramic Camera and Lunar Orbiter images found fewer than 100 lobate scarps in the highlands; in our Apollo Metric Camera image survey, we have found additional regions with one or more previously unidentified linear and curvilinear features on the lunar surface that may represent lobate thrust fault scarps. In this presentation we review the geologic characteristics and context of these newly-identified, potentially tectonic landforms. The lengths and relief of some of these linear and curvilinear features are consistent with previously identified lobate scarps. Most of these features are in the highlands, though a few occur along the edges of mare and/or crater ejecta deposits. In many cases the resolution of the Metric Camera frames (~10 m/pix) is not adequate to unequivocally determine the origin of these features. Thus, to assess if the newly identified features have tectonic or other origins, we are examining them in higher-resolution Panoramic Camera (currently being scanned) and Lunar Reconnaissance Orbiter Camera Narrow Angle Camera images [Watters et al., this meeting, 2009].

  13. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  14. Flexibility and utility of pre-processing methods in converting STXM setups for ptychography - Final Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fromm, Catherine

    2015-08-20

    Ptychography is an advanced diffraction-based imaging technique that can achieve resolution of 5 nm and below. It is done by scanning a sample through a beam of focused x-rays using discrete yet overlapping scan steps. Scattering data is collected on a CCD camera, and the phase of the scattered light is reconstructed with sophisticated iterative algorithms. Because the experimental setup is similar, ptychography setups can be created by retrofitting existing STXM beamlines with new hardware. The other challenge comes in the reconstruction of the collected scattering images. Scattering data must be adjusted and packaged with experimental parameters to calibrate the reconstruction software. The necessary pre-processing of data prior to reconstruction is unique to each beamline setup, and even to the optical alignments used on that particular day. Pre-processing software must be developed to be flexible and efficient in order to allow experimenters appropriate control and freedom in the analysis of their hard-won data. This paper describes the implementation of pre-processing software which successfully connects data collection steps to reconstruction steps, letting the user accomplish accurate and reliable ptychography.
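
    As a rough illustration of the kind of pre-processing such software performs (a sketch, not the code described in this paper; the function names, frame layout, and parameter set are hypothetical), each CCD frame is typically dark-corrected, cropped around the beam center, and bundled with the experimental parameters the reconstruction needs:

        import numpy as np

        def preprocess_frame(frame, dark, center, size):
            # Subtract the detector dark frame, clip negative counts, and
            # crop a square window around the beam center.
            corrected = frame.astype(np.float64) - dark
            corrected[corrected < 0] = 0.0
            cy, cx = center
            h = size // 2
            return corrected[cy - h:cy + h, cx - h:cx + h]

        def package_scan(frames, dark, center, size, energy_eV,
                         pixel_size_m, detector_distance_m, scan_positions_m):
            # Bundle corrected frames with the beamline parameters that
            # calibrate the reconstruction (keys are illustrative only).
            stack = np.stack([preprocess_frame(f, dark, center, size) for f in frames])
            return {"data": stack,
                    "energy_eV": energy_eV,
                    "pixel_size_m": pixel_size_m,
                    "detector_distance_m": detector_distance_m,
                    "scan_positions_m": scan_positions_m}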

  15. Single shot laser speckle based 3D acquisition system for medical applications

    NASA Astrophysics Data System (ADS)

    Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young

    2018-06-01

    The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts, such as laser line profiling or structured light scanning, require a series of images/frames. Movement of the patients during the scanning process often leads to inaccurate measurements due to the sequential image acquisition. Single-shot structured light techniques are robust to motion, but their prevalent challenges are low point density and algorithmic complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern and using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that realize stereo correspondence by digital correlation of the projected speckle patterns, the proposed system employs the KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers and sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
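
    The KLT correspondence step can be illustrated with OpenCV's pyramidal Lucas-Kanade tracker; this is a minimal sketch under assumed file names and parameter values, not the authors' implementation:

        import cv2

        left_img = cv2.imread("left_speckle.png", cv2.IMREAD_GRAYSCALE)
        right_img = cv2.imread("right_speckle.png", cv2.IMREAD_GRAYSCALE)

        # Pick well-textured speckle points in the left image.
        pts_left = cv2.goodFeaturesToTrack(left_img, maxCorners=5000,
                                           qualityLevel=0.01, minDistance=5)

        # "Track" each point into the right image with pyramidal Lucas-Kanade.
        pts_right, status, err = cv2.calcOpticalFlowPyrLK(
            left_img, right_img, pts_left, None, winSize=(21, 21), maxLevel=3)

        # Keep successfully matched pairs; these feed stereo triangulation.
        ok = status.flatten() == 1
        good_left, good_right = pts_left[ok], pts_right[ok]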

  16. Line-scanning, stage scanning confocal microscope

    NASA Astrophysics Data System (ADS)

    Carucci, John A.; Stevenson, Mary; Gareau, Daniel

    2016-03-01

    We created a line-scanning, stage-scanning confocal microscope as part of a new procedure: video-assisted micrographic surgery (VAMS). The need for rapid pathological assessment of the tissue on the surface of skin excisions is very large, since there are 3.5 million new skin cancers diagnosed annually in the United States. The new design presented here is a confocal microscope without any scanning optics. Instead, a line is focused in space and the sample, which is flattened, is physically translated such that the line scans across its face in a direction perpendicular to the line itself. The line is 6 mm long and the stage is capable of scanning 50 mm, hence the field of view is quite large. The theoretical diffraction-limited resolution is 0.7 µm lateral and 3.7 µm axial. However, in this preliminary report, we present initial results that are a factor of 5-7 poorer in resolution. The results are encouraging because they demonstrate that the linear array detector measures sufficient signal from fluorescently labeled tissue, and also demonstrate the large field of view achievable with VAMS.

  17. Ultrahigh speed Spectral / Fourier domain OCT ophthalmic imaging at 70,000 to 312,500 axial scans per second

    PubMed Central

    Potsaid, Benjamin; Gorczynska, Iwona; Srinivasan, Vivek J.; Chen, Yueli; Jiang, James; Cable, Alex; Fujimoto, James G.

    2009-01-01

    We demonstrate ultrahigh speed spectral / Fourier domain optical coherence tomography (OCT) using an ultrahigh speed CMOS line scan camera at rates of 70,000 - 312,500 axial scans per second. Several design configurations are characterized to illustrate trade-offs between acquisition speed, resolution, imaging range, sensitivity and sensitivity roll-off performance. Ultrahigh resolution OCT with 2.5 - 3.0 micron axial image resolution is demonstrated at ∼ 100,000 axial scans per second. A high resolution spectrometer design improves sensitivity roll-off and imaging range performance, trading off imaging speed to 70,000 axial scans per second. Ultrahigh speed imaging at >300,000 axial scans per second with standard image resolution is also demonstrated. Ophthalmic OCT imaging of the normal human retina is investigated. The high acquisition speeds enable dense raster scanning to acquire densely sampled volumetric three dimensional OCT (3D-OCT) data sets of the macula and optic disc with minimal motion artifacts. Imaging with ∼ 8 - 9 micron axial resolution at 250,000 axial scans per second, a 512 × 512 × 400 voxel volumetric 3D-OCT data set can be acquired in only ∼ 1.3 seconds. Orthogonal registration scans are used to register OCT raster scans and remove residual axial eye motion, resulting in 3D-OCT data sets which preserve retinal topography. Rapid repetitive imaging over small volumes can visualize small retinal features without motion induced distortions and enables volume registration to remove eye motion. Cone photoreceptors in some regions of the retina can be visualized without adaptive optics or active eye tracking. Rapid repetitive imaging of 3D volumes also provides dynamic volumetric information (4D-OCT) which is shown to enhance visualization of retinal capillaries and should enable functional imaging. Improvements in the speed and performance of 3D-OCT volumetric imaging promise to enable earlier diagnosis and improved monitoring of disease progression and response to therapy in ophthalmology, as well as have a wide range of research and clinical applications in other areas. PMID:18795054
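
    The quoted volume acquisition time can be sanity-checked from the scan rate alone; a back-of-the-envelope check (the ~1.3 s figure in the abstract includes scanner flyback and other overhead):

        a_scans = 512 * 512       # one axial scan per A-line of a 512 x 512 x 400 voxel volume
        rate_hz = 250_000         # axial scans per second
        print(a_scans / rate_hz)  # ~1.05 s of pure acquisition; ~1.3 s with overhead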

  18. Hand–eye calibration using a target registration error model

    PubMed Central

    Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M.

    2017-01-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand–eye calibration between the camera and the tracking system. The authors introduce the concept of ‘guided hand–eye calibration’, where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand–eye calibration as a registration problem between homologous point–line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera. PMID:29184657

  19. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data is taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera then contains only the information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
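
    The underlying bandwidth-reduction idea (transmit only defect candidates, not whole frames) can be sketched in software; the camera's FPGA does this in hardware on the live stream, and the threshold and event format below are hypothetical:

        import numpy as np

        def reduce_line(line, line_no, threshold=200):
            # Emit (line, column, value) events only where a pixel departs
            # from the expected web background; everything else is dropped,
            # so the output bus carries a small fraction of the raw video.
            cols = np.flatnonzero(line > threshold)
            return [(line_no, int(c), int(line[c])) for c in cols]

        # Example: a 2048-pixel scan line with two bright defect pixels.
        line = np.full(2048, 120, dtype=np.uint8)
        line[[512, 1700]] = 255
        print(reduce_line(line, line_no=0))   # [(0, 512, 255), (0, 1700, 255)]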

  20. A modular approach to detection and identification of defects in rough lumber

    Treesearch

    Sang Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt

    2001-01-01

    This paper describes a prototype scanning system that can automatically identify several important defects on rough hardwood lumber. The scanning system utilizes 3 laser sources and an embedded-processor camera to capture and analyze profile and gray-scale images. The modular approach combines the detection of wane (the curved sides of a board, possibly containing...

  1. An augmented-reality edge enhancement application for Google Glass.

    PubMed

    Hwang, Alex D; Peli, Eli

    2014-08-01

    Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearer. The enhanced central vision can be naturally integrated with scanning. The Google Glass camera's lens distortions were corrected by image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processes were implemented to achieve near real-time performance. The impact of the contrast enhancements was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why improvements were observed only in the diffuser-film condition (simulating low vision). With the benefit of see-through augmented-reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
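
    As an illustrative sketch of edge-overlay enhancement in the same spirit (Canny edges painted back onto the camera frame with OpenCV; the thresholds and file names are assumptions, and the paper's own pipeline additionally applies the distortion correction and 3D transformations described above):

        import cv2

        frame = cv2.imread("glass_camera_frame.png")    # hypothetical input frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Detect edges and overlay them in white to boost local contrast.
        edges = cv2.Canny(gray, 50, 150)
        enhanced = frame.copy()
        enhanced[edges > 0] = (255, 255, 255)
        cv2.imwrite("glass_enhanced.png", enhanced)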

  2. Cheap streak camera based on the LD-S-10 intensifier tube

    NASA Astrophysics Data System (ADS)

    Dashevsky, Boris E.; Krutik, Mikhail I.; Surovegin, Alexander L.

    1992-01-01

    Basic properties of a new streak camera and its test results are reported. To intensify images on its screen, we employed modular G1 tubes, the LD-A-1.0 and LD-A-0.33, enabling magnification of 1.0 and 0.33, respectively. If necessary, the LD-A-0.33 tube may be substituted by any other image intensifier of the LDA series, the choice to be determined by the size of the CCD matrix with fiber-optical windows. The reported camera employs a 12.5-mm-long CCD strip consisting of 1024 pixels, each 12 × 500 µm in size. Registered radiation was imaged on a 5 × 0.04 mm slit diaphragm tightly connected with the LD-S-10 fiber-optical input window. Electrons escaping the cathode are accelerated in a 5 kV electric field and focused onto a phosphor screen covering a fiber-optical plate as they travel between deflection plates. Sensitivity of the latter was 18 V/mm, which implies that the total deflecting voltage was 720 V per 40 mm of the screen surface, since reversed-polarity scan pulses of +360 V and -360 V were applied across the deflection plates. The streak camera provides full scan times over the screen of 15, 30, 50, 100, 250, and 500 ns. Timing of the electrically or optically driven camera was done using a 10 ns step-controlled-delay (0-500 ns) circuit.

  3. Commissioning and Characterization of a Dedicated High-Resolution Breast PET Camera

    DTIC Science & Technology

    2014-02-01

    aim to achieve 1 mm³ resolution using a unique detector design that is able to measure annihilation radiation coming from the PET tracer in 3... undergoing a regular staging PET/CT. We will image with the novel two-panel system after the standard PET/CT scan, in order not to interfere with the... Resolution Breast PET Camera. PRINCIPAL INVESTIGATOR: Arne Vandenbroucke, Ph.D. CONTRACTING ORGANIZATION: Stanford University

  4. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
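
    The convex-hulling step maps directly onto standard tools; a minimal sketch with SciPy, assuming a segmented point cloud in metres and a nominal tissue density (the file name and density value are assumptions):

        import numpy as np
        from scipy.spatial import ConvexHull

        points = np.loadtxt("thigh_segment.xyz")   # (N, 3) points for one body segment
        hull = ConvexHull(points)

        volume_m3 = hull.volume                    # convex-hull volume of the segment
        mass_kg = 1050.0 * volume_m3               # assumed uniform density ~1050 kg/m^3
        print(volume_m3, mass_kg)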

  5. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.

  6. Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Hastedt, H.; Ekkel, T.; Luhmann, T.

    2016-06-01

    The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with a normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle (fish-eye) lens offers new perspectives in UAV projects. In these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry. The GoPro Hero4 is evaluated using different acquisition modes, and we investigate to what extent the standard calibration approaches in OpenCV or Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Different calibration setups and processing procedures are therefore assessed and discussed. Additionally, a pre-correction of the initial distortion by GoPro Studio and its application to photogrammetric purposes is evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. We analyse to what extent a pre-calibration and pre-correction of a GoPro Hero4 will reinforce the reliability and accuracy of a flight scenario.
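
    One of the standard approaches evaluated here, OpenCV's fish-eye model, can be exercised as in the following sketch (the inputs are per-image chessboard detections; shapes and flags follow OpenCV's fisheye API, but this is not the paper's exact procedure):

        import cv2
        import numpy as np

        def calibrate_fisheye(obj_pts, img_pts, image_size):
            # obj_pts: list of (N, 1, 3) float64 board coordinates per image
            # img_pts: list of (N, 1, 2) float64 corners from
            #          cv2.findChessboardCorners on each calibration image
            K = np.zeros((3, 3))
            D = np.zeros((4, 1))
            flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
            rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
                obj_pts, img_pts, image_size, K, D, flags=flags)
            return rms, K, D   # undistort later with cv2.fisheye.undistortImage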

  7. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates

    PubMed Central

    Zhang, Hao; Li, Xianqi; Park, Jewook; Li, An-Ping

    2017-01-01

    We present an image postprocessing framework for the Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates while preserving features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large-scale images and four steps for atomic-scale images. For large-scale images, we first apply an image registration method to each line to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic-scale images, an additional first step to remove line-by-line strong background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data. PMID:29362664
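
    The per-line forward/backward registration can be illustrated with a plain FFT cross-correlation (a sketch of the general idea only; the paper's CIAFA step is a separate, more elaborate algorithm):

        import numpy as np

        def align_lines(forward, backward):
            # The backward scan is acquired right-to-left, so reverse it,
            # then find the integer shift that best matches the forward line
            # via circular cross-correlation computed with FFTs.
            b = backward[::-1]
            f0 = forward - forward.mean()
            b0 = b - b.mean()
            xcorr = np.fft.ifft(np.fft.fft(f0) * np.conj(np.fft.fft(b0))).real
            lag = int(np.argmax(xcorr))
            if lag > len(f0) // 2:       # map circular index to signed shift
                lag -= len(f0)
            return np.roll(b, lag), lag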

  8. Absolute orbit determination using line-of-sight vector measurements between formation flying spacecraft

    NASA Astrophysics Data System (ADS)

    Ou, Yangwei; Zhang, Hongbo; Li, Bin

    2018-04-01

    The purpose of this paper is to show that absolute orbit determination can be achieved based on spacecraft formation. The relative position vectors expressed in the inertial frame are used as measurements. In this scheme, an optical camera is applied to measure the relative line-of-sight (LOS) angles, i.e., the azimuth and elevation. LIDAR (Light Detection And Ranging) or radar is used to measure the range, and we assume that high-accuracy inertial attitude is available. When more deputies are included in the formation, the formation configuration is optimized from the perspective of Fisher information theory. Considering the limitation on the field of view (FOV) of cameras, the visibility of spacecraft and the installation of cameras are investigated. In simulations, an extended Kalman filter (EKF) is used to estimate the position and velocity. The results show that the navigation accuracy can be enhanced by using more deputies and that the installation of cameras significantly affects the navigation performance.
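
    The measurement model behind such a filter is compact: the camera provides two LOS angles and the LIDAR/radar provides range. A sketch under an assumed azimuth/elevation convention (the paper's exact frame definitions may differ):

        import numpy as np

        def los_measurement(r_chief, r_deputy):
            # Relative position in the inertial frame (assumes high-accuracy
            # inertial attitude, as stated in the abstract).
            rel = r_deputy - r_chief
            rng = np.linalg.norm(rel)
            azimuth = np.arctan2(rel[1], rel[0])
            elevation = np.arcsin(rel[2] / rng)
            return azimuth, elevation, rng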

  9. A Structured Light Sensor System for Tree Inventory

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong; Zemek, Michael C.

    2000-01-01

    Tree inventory refers to the measurement and estimation of marketable wood volume in a piece of land or forest for purposes such as investment or loan applications. Existing techniques rely on trained surveyors conducting measurements manually using simple optical or mechanical devices, and hence are time-consuming, subjective and error-prone. The advance of computer vision techniques makes it possible to conduct automatic measurements that are more efficient, objective and reliable. This paper describes 3D measurements of tree diameters using a uniquely designed ensemble of two line laser emitters rigidly mounted on a video camera. The proposed laser camera system relies on a fixed distance between two parallel laser planes and the projections of the laser lines to calculate tree diameters. Performance of the laser camera system is further enhanced by fusion of information induced from the structured lighting with that contained in the video images. A comparison is made between the laser camera sensor system and a stereo vision system previously developed for measurement of tree diameters.
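
    One plausible reading of the geometry, as a sketch (the paper's actual computation may differ): the known separation between the two parallel laser planes fixes a metric scale at the trunk, converting pixel measurements to diameters.

        def tree_diameter_mm(trunk_width_px, line_gap_px, plane_gap_mm):
            # The two laser lines on the trunk are images of planes a known
            # distance apart, so their pixel spacing gives a mm-per-pixel
            # scale at the trunk (assumes trunk and lines at similar depth).
            mm_per_px = plane_gap_mm / line_gap_px
            return trunk_width_px * mm_per_px

        print(tree_diameter_mm(trunk_width_px=310, line_gap_px=95,
                               plane_gap_mm=100.0))   # ~326 mm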

  10. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system.

    PubMed

    Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji

    2016-04-01

    Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. A cylindrical plastic scintillator block and a CCD camera were installed in a black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each energy beam was determined by a difference-of-Gaussian (DOG) method and by the 80% distal dose of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.
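
    A minimal sketch of DoG edge detection on a depth-light profile (SciPy; the smoothing widths and sign convention are assumptions, not the authors' calibrated values):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def dog_range_index(profile, sigma_narrow=2.0, sigma_wide=4.0):
            # Difference of two Gaussian-smoothed copies acts as a band-pass
            # edge detector; the extremum along the beam axis marks the
            # distal edge of the scintillation light, i.e., the range.
            dog = (gaussian_filter1d(profile, sigma_narrow)
                   - gaussian_filter1d(profile, sigma_wide))
            return int(np.argmin(dog))   # sign convention assumed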

  11. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saotome, Naoya, E-mail: naosao@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed in a black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each energy beam was determined by a difference-of-Gaussian (DOG) method and by the 80% distal dose of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.

  12. Atmospheric scanning electron microscope for correlative microscopy.

    PubMed

    Morrison, Ian E G; Dennison, Clare L; Nishiyama, Hidetoshi; Suga, Mitsuo; Sato, Chikara; Yarwood, Andrew; O'Toole, Peter J

    2012-01-01

    The JEOL ClairScope is the first truly correlative scanning electron and optical microscope. An inverted scanning electron microscope (SEM) column allows electron images of wet samples to be obtained in ambient conditions in a biological culture dish, via a silicon nitride film window in the base. A standard inverted optical microscope positioned above the dish holder can be used to take reflected light and epifluorescence images of the same sample, under atmospheric conditions that permit biochemical modifications. For SEM, the open dish allows successive staining operations to be performed without moving the holder. The standard optical color camera used for fluorescence imaging can be exchanged for a high-sensitivity monochrome camera to detect low-intensity fluorescence signals, and also cathodoluminescence emission from nanophosphor particles. If these particles are applied to the sample at a suitable density, they can greatly assist the task of perfecting the correlation between the optical and electron images.

  13. Verification of image orthorectification techniques for low-cost geometric inspection of masonry arch bridges

    NASA Astrophysics Data System (ADS)

    González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro

    2012-07-01

    A low-cost image orthorectification tool based on the utilization of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated on three different bridges by comparison with laser scanning data. The surveying process is very delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.

  14. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, D.P.

    1998-05-12

    A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.

  15. Multi-Angle Snowflake Camera Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shkurko, Konstantin; Garrett, T.; Gaustad, K

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: image processing relies on the OpenCV library, and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on which variables are computed.
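
    The fallspeed computation described above reduces to a single division; a sketch using the 32 mm array separation from the text:

        ARRAY_SEPARATION_M = 0.032   # vertical spacing of the IR trigger arrays

        def fallspeed_m_per_s(t_upper_s, t_lower_s):
            # Time of flight between the upper and lower triggering arrays.
            return ARRAY_SEPARATION_M / (t_lower_s - t_upper_s)

        print(fallspeed_m_per_s(0.000, 0.032))   # 1.0 m/s for a 32 ms traverse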

  16. Digital video system for on-line portal verification

    NASA Astrophysics Data System (ADS)

    Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott

    1990-07-01

    A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.

  17. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking in a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. A low-volume array of such penetrator cameras could be deployed from an aerobot or a spacecraft onto a comet or asteroid. A system of 20 of these penetrators could be designed and built in a 1- to 2-kg mass envelope. Possible future modifications of the camera penetrators, such as the addition of a chemical spray device, would allow the study of simple chemical reactions of reagents sprayed at the landing site by looking at the color changes. Zoom lenses also could be added for future use.

  18. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal plates with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  19. Value of automatic patient motion detection and correction in myocardial perfusion imaging using a CZT-based SPECT camera.

    PubMed

    van Dijk, Joris D; van Dalen, Jorn A; Mouden, Mohamed; Ottervanger, Jan Paul; Knollema, Siert; Slump, Cornelis H; Jager, Pieter L

    2018-04-01

    Correction of motion has become feasible on cadmium-zinc-telluride (CZT)-based SPECT cameras during myocardial perfusion imaging (MPI). Our aim was to quantify the motion and to determine the value of automatic correction using commercially available software. We retrospectively included 83 consecutive patients who underwent stress-rest MPI CZT-SPECT and invasive fractional flow reserve (FFR) measurement. Eight-minute stress acquisitions were reformatted into 1.0- and 20-second bins to detect respiratory motion (RM) and patient motion (PM), respectively. RM and PM were quantified and scans were automatically corrected. Total perfusion deficit (TPD) and SPECT interpretation-normal, equivocal, or abnormal-were compared between the noncorrected and corrected scans. Scans with a changed SPECT interpretation were compared with FFR, the reference standard. Average RM was 2.5 ± 0.4 mm and maximal PM was 4.5 ± 1.3 mm. RM correction influenced the diagnostic outcomes in two patients based on TPD changes ≥7% and in nine patients based on changed visual interpretation. In only four of these patients, the changed SPECT interpretation corresponded with FFR measurements. Correction for PM did not influence the diagnostic outcomes. Respiratory motion and patient motion were small. Motion correction did not appear to improve the diagnostic outcome and, hence, the added value seems limited in MPI using CZT-based SPECT cameras.

  20. Developments on a SEM-based X-ray tomography system: Stabilization scheme and performance evaluation

    NASA Astrophysics Data System (ADS)

    Gomes Perini, L. A.; Bleuet, P.; Filevich, J.; Parker, W.; Buijsse, B.; Kwakman, L. F. Tz.

    2017-06-01

    Recent improvements in a SEM-based X-ray tomography system are described. In this type of equipment, X-rays are generated through the interaction between a highly focused electron beam and a geometrically confined anode target. Unwanted long-term drifts of the e-beam can lead to loss of X-ray flux or decreased spatial resolution in images. To circumvent this issue, a closed-loop control using FFT-based image correlation is integrated into the acquisition routine, in order to provide in-line drift correction. The X-ray detection system consists of a state-of-the-art scientific CMOS camera (indirect detection), featuring high quantum efficiency (~60%) and low read-out noise (~1.2 electrons). The system performance is evaluated in terms of resolution, detectability, and scanning times for applications covering three different scientific fields: microelectronics, technical textile, and material science.
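
    The FFT-based image correlation behind the in-line drift correction can be sketched as follows (integer-pixel shift between a reference image and the current image; subpixel refinement and the control loop itself are omitted):

        import numpy as np

        def drift_estimate(ref, img):
            # Circular cross-correlation via FFTs; the peak location gives
            # the translation of img relative to ref.
            xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
            dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
            # Map circular indices to signed shifts.
            if dy > ref.shape[0] // 2:
                dy -= ref.shape[0]
            if dx > ref.shape[1] // 2:
                dx -= ref.shape[1]
            return int(dy), int(dx)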

  1. Solid state electro-optic color filter and iris

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The electro-optic properties of lanthanum-modified lead zirconate titanate (PLZT) ferroelectric ceramic material are evaluated for use as a variable density and/or spectral filter in conjunction with a television scanning system. Emphasis was placed on the development of techniques and procedures for processing the PLZT disks and for applying efficient electrode structures. A number of samples were processed using different combinations of cleaning, electrode material, and deposition process. Best overall performance resulted from the direct evaporation of gold over chrome electrodes. A ruggedized mounting holder assembly was designed, fabricated, and tested. The assembly provides electrical contacts, high voltage protection, and support for the fragile PLZT disk, and permits mounting and optical alignment of the associated polarizers. Operational measurements of a PLZT sample mounted in the holder assembly were performed in conjunction with a television camera and the associated drive circuits. The data verified elimination of the observed white-line effect.

  2. Remote sensing and spectral analysis of plumes from ocean dumping in the New York Bight Apex

    NASA Technical Reports Server (NTRS)

    Johnson, R. W.

    1980-01-01

    The application of the remote sensing techniques of aerial photography and multispectral scanning in the qualitative and quantitative analysis of plumes from ocean dumping of waste materials is investigated in the New York Bight Apex. Plumes resulting from the dumping of acid waste and sewage sludge were observed by Ocean Color Scanner at an altitude of 19.7 km and by Modular Multispectral Scanner and mapping camera at an altitude of 3.0 km. Results of the qualitative analysis of multispectral and photographic data for the mapping, location, and identification of pollution features without concurrent sea truth measurements are presented which demonstrate the usefulness of in-scene calibration. Quantitative distributions of the suspended solids in sewage sludge released in spot and line dumps are also determined by a multiple regression analysis of multispectral and sea truth data.

  3. Adaptive DFT-based Interferometer Fringe Tracking

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.

  4. Adaptive DFT-Based Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
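
    A non-optimized sketch of the sliding-window DFT idea (the IOTA code is optimized, recursive ANSI C; the window length and carrier bin here are assumptions): the phase of the fringe-carrier bin, tracked along the scan, follows the optical path difference.

        import numpy as np

        def fringe_phase(scan, carrier_bin, window=64):
            # Slide a window along the interferogram and record the DFT
            # phase at the carrier bin; the unwrapped phase tracks the OPD.
            phases = []
            for i in range(len(scan) - window):
                seg = scan[i:i + window] * np.hanning(window)
                phases.append(np.angle(np.fft.rfft(seg)[carrier_bin]))
            return np.unwrap(phases)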

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sels, Seppe, E-mail: Seppe.Sels@uantwerpen.be; Ribbens, Bart; Mertens, Luc

    Scanning laser Doppler vibrometers (LDV) are used to measure full-field vibration shapes of products and structures. In most commercially available scanning laser Doppler vibrometer systems the user manually draws a grid of measurement locations on a 2D camera image of the product. The determination of the correct physical measurement locations can be a time consuming and diffcult task. In this paper we present a new methodology for product testing and quality control that integrates 3D imaging techniques with vibration measurements. This procedure allows to test prototypes in a shorter period because physical measurements locations will be located automatically. The proposedmore » methodology uses a 3D time-of-flight camera to measure the location and orientation of the test-object. The 3D image of the time-of-flight camera is then matched with the 3D-CAD model of the object in which measurement locations are pre-defined. A time of flight camera operates strictly in the near infrared spectrum. To improve the signal to noise ratio in the time-of-flight measurement, a time-of-flight camera uses a band filter. As a result of this filter, the laser spot of most laser vibrometers is invisible in the time-of-flight image. Therefore a 2D RGB-camera is used to find the laser-spot of the vibrometer. The laser spot is matched to the 3D image obtained by the time-of-flight camera. Next an automatic calibration procedure is used to aim the laser at the (pre)defined locations. Another benefit from this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. Secondly the orientation of the CAD model is known with respect to the laser beam. This information can be used to find the direction of the measured vibration relatively to the surface of the object. With this direction, the vibration measurements can be compared more precisely with numerical experiments.« less

  6. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.

    2010-11-01

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency-Domain Holography (FDH) probe-reference pair that co-propagates with the "bubble". Frequency-Domain Tomography (FDT) generalizes the FDSC by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (temporal multiplexing and angular multiplexing) improve data storage and processing capability, enabling a compact FDT system with a single spectrometer.

  7. Automated grading, upgrading, and cuttings prediction of surfaced dry hardwood lumber

    Treesearch

    Sang-Mook Lee; Phil Araman; A.Lynn Abbott; Matthew F. Winn

    2010-01-01

    This paper concerns the scanning, sawing, and grading of kiln-dried hardwood lumber. A prototype system is described that uses laser sources and a video camera to scan boards. The system automatically detects defects and wane, searches for optimal sawing solutions, and then estimates the grades of the boards that would result. The goal is to derive maximum commercial...

  8. A Flexile and High Precision Calibration Method for Binocular Structured Light Scanning System

    PubMed Central

    Yuan, Jianying; Wang, Qiong; Li, Bailin

    2014-01-01

    3D (three-dimensional) structured light scanning systems are widely used in the fields of reverse engineering, quality inspection, and so forth. Camera calibration is the key to scanning precision. Currently, a finely machined 2D (two-dimensional) or 3D calibration reference object is usually required for high calibration precision; such objects are difficult to handle and costly. In this paper, a novel calibration method is proposed that uses a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. We obtain initial intrinsic parameters from the images. Initial extrinsic parameters in projective space are estimated with the factorization method and then upgraded to Euclidean space using the orthogonality of the rotation matrix and the rank-3 constraint on the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and has the same precision level as results obtained using a delicate artificial reference object, but the hardware cost is very low compared with current calibration methods used in 3D structured light scanning systems. PMID:25202736

  9. Design and evaluation of a filter spectrometer concept for facsimile cameras

    NASA Technical Reports Server (NTRS)

    Kelly, W. L., IV; Jobson, D. J.; Rowland, C. W.

    1974-01-01

    The facsimile camera is an optical-mechanical scanning device which was selected as the imaging system for the Viking '75 lander missions to Mars. A concept which uses an interference filter-photosensor array to integrate a spectrometric capability with the basic imagery function of this camera was proposed for possible application to future missions. This paper is concerned with the design and evaluation of critical electronic circuits and components that are required to implement this concept. The feasibility of obtaining spectroradiometric data is demonstrated, and the performance of a laboratory model is described in terms of spectral range, angular and spectral resolution, and noise-equivalent radiance.

  10. Economical Emission-Line Mapping: ISM Properties of Nearby Protogalaxy Analogs

    NASA Astrophysics Data System (ADS)

    Monkiewicz, Jacqueline A.

    2017-01-01

    Optical emission line imaging can produce a wealth of information about the conditions of the interstellar medium, but a full set of custom emission-line filters for a professional-grade telescope camera can cost many thousands of dollars. A cheaper alternative is to use commercially-produced 2-inch narrow-band astrophotography filters. In order to use these standardized filters with professional-grade telescope cameras, custom filter mounts must be manufactured for each individual filter wheel. These custom filter adaptors are produced by 3-D printing rather than standard machining, which further lowers the total cost.I demonstrate the feasibility of this technique with H-alpha, H-beta, and [OIII] emission line mapping of the low metallicity star-forming galaxies IC10 and NGC 1569, taken with my astrophotography filter set on three different 2-meter class telescopes in Southern Arizona.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishii, H.; Fujino, H.; Bian, Z.

    In this study, two types of marker-based tracking methods for Augmented Reality have been developed. One is a method which employs line-shaped markers and the other is a method which employs circular-shaped markers. These two methods recognize the markers by means of image processing and calculate the relative position and orientation between the markers and the camera in real time. The line-shaped markers are suitable for pasting in buildings such as NPPs where many pipes and tanks exist. The circular-shaped markers are suitable for the case where there are many obstacles and it is difficult to use line-shaped markers because the obstacles hide part of the line-shaped markers. Both methods can extend the maximum distance between the markers and the camera compared to legacy marker-based tracking methods. (authors)

  12. INFIBRA: machine vision inspection of acrylic fiber production

    NASA Astrophysics Data System (ADS)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  13. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high-speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
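
    As a simplified illustration of mapping by matrix multiplication (a standard direct-linear-transformation fit of a single 3x4 projection matrix; the paper's multiplane model uses four matrices and also compensates for lens distortion):

        import numpy as np

        def fit_projection_matrix(xyz, uv):
            # Fit P (3x4) with [u, v, 1] ~ P @ [x, y, z, 1]; each point
            # pair contributes two homogeneous linear equations.
            rows = []
            for (x, y, z), (u, v) in zip(xyz, uv):
                X = [x, y, z, 1.0]
                rows.append(X + [0.0] * 4 + [-u * c for c in X])
                rows.append([0.0] * 4 + X + [-v * c for c in X])
            _, _, vt = np.linalg.svd(np.asarray(rows))
            return vt[-1].reshape(3, 4)    # null-space vector, up to scale

        def project(P, point_3d):
            h = P @ np.append(np.asarray(point_3d, dtype=float), 1.0)
            return h[:2] / h[2]            # 2D image coordinates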

  14. Wavelength-Filter-Based Spectrally Calibrated Wavenumber Linearization in 1.3 μm Spectral-Domain Optical Coherence Tomography.

    PubMed

    Wijesinghe, Ruchire Eranga Henry; Cho, Nam Hyun; Park, Kibeom; Shin, Yongseung; Kim, Jeehyun

    2013-12-01

    In this study, we demonstrate an enhanced spectral calibration method for 1.3 μm spectral-domain optical coherence tomography (SD-OCT). The calibration method using a wavelength filter simplifies the SD-OCT system, and the axial resolution and the overall speed of the OCT system can be dramatically improved as well. An externally connected wavelength filter is utilized to obtain the relation between wavenumber and pixel position. During the calibration process the wavelength filter is placed after a broadband source by connecting it through an optical circulator. The filtered spectrum, with a narrow line width of 0.5 nm, is detected by a line-scan camera. The method does not require a filter or a software recalibration algorithm for imaging, as it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. One of the main drawbacks of SD-OCT, the broadening of the point spread functions (PSFs) with increasing imaging depth, can be compensated by increasing the wavenumber-linearization order. The sensitivity of the system was measured at 99.8 dB at an imaging depth of 2.1 mm, compared with the uncompensated case.
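
    The resampling step the abstract alludes to is compact in code. The following is a generic sketch of wavenumber linearization, assuming a calibrated wavelength-per-pixel array; it is not the authors' exact pipeline. The spectrum is interpolated onto a grid linear in k = 2π/λ and Fourier transformed into a depth profile (A-scan).

    ```python
    import numpy as np

    def k_linearize(spectrum, wavelengths_nm):
        """Resample a spectrum from the camera's pixel grid onto a grid
        that is linear in wavenumber k = 2*pi/lambda."""
        k = 2.0 * np.pi / wavelengths_nm          # k at each detector pixel
        k_lin = np.linspace(k.min(), k.max(), k.size)
        # np.interp needs ascending x; k decreases with wavelength, so flip
        return np.interp(k_lin, k[::-1], spectrum[::-1])

    def a_scan(spectrum, wavelengths_nm):
        resampled = k_linearize(spectrum, wavelengths_nm)
        resampled -= resampled.mean()             # suppress the DC term
        return np.abs(np.fft.rfft(resampled))     # depth-resolved amplitude
    ```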

  15. Scene Segmentation For Autonomous Robotic Navigation Using Sequential Laser Projected Structured Light

    NASA Astrophysics Data System (ADS)

    Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.

    1987-01-01

    Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three-dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three-dimensional texture of various regions of the scene. A method is presented in which scanned, laser-projected lines of structured light, viewed by a stereoscopically located single video camera, result in an image in which the three-dimensional characteristics of the scene are represented by the discontinuity of the projected lines. This image is conducive to processing with simple regional operators to classify regions as pathway or background. The design of some operators and application methods, and their demonstration on sample images, are presented. This method provides a rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher-speed and more reliable robotic or autonomous navigation in unstructured environments.

  16. Automatic detection system of shaft part surface defect based on machine vision

    NASA Astrophysics Data System (ADS)

    Jiang, Lixing; Sun, Kuoyuan; Zhao, Fulai; Hao, Xiangyang

    2015-05-01

    Surface physical damage detection is an important part of shaft part quality inspection, and the traditional detection methods rely mostly on human visual identification, which suffers from low efficiency and poor reliability. In order to improve the automation level of shaft part quality inspection and help establish the relevant industry quality standards, a machine vision inspection system connected to a microcontroller unit (MCU) was designed to detect surface defects on shaft parts. The system adopts a monochrome line-scan digital camera and uses dark-field, forward illumination to acquire images with high contrast. After image filtering and enhancement, the images are segmented into binary images using the maximum between-class variance (Otsu) method; the main contours are then extracted based on aspect-ratio and area criteria; next, the coordinates of the centre of gravity of each defect area, i.e. the locating-point coordinates, are calculated; finally, the defect areas are marked by a coding pen communicating with the MCU. Experiments show that no defect was missed and the false alarm rate was lower than 5%, demonstrating that the designed system meets the demands of on-line, real-time inspection of shaft parts.
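
    The described chain maps almost one-to-one onto standard OpenCV calls. Below is a minimal sketch under assumed parameter values (the area and aspect-ratio thresholds are illustrative, not the paper's):

    ```python
    import cv2

    def locate_defects(gray, min_area=50, max_aspect=10.0):
        """Filter, Otsu-threshold, and return defect centroids
        (locating-point coordinates) from a line-scan image."""
        blurred = cv2.medianBlur(gray, 5)                    # image filtering
        _, binary = cv2.threshold(blurred, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        points = []
        for c in contours:
            area = cv2.contourArea(c)
            x, y, w, h = cv2.boundingRect(c)
            aspect = max(w, h) / max(1, min(w, h))
            if area >= min_area and aspect <= max_aspect:    # contour criteria
                m = cv2.moments(c)
                if m["m00"] > 0:                             # centre of gravity
                    points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return points
    ```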

  17. Spherical Images for Cultural Heritage: Survey and Documentation with the Nikon KM360

    NASA Astrophysics Data System (ADS)

    Gottardi, C.; Guerra, F.

    2018-05-01

    The work presented here focuses on the analysis of the potential of spherical images acquired with specific cameras for the documentation and three-dimensional reconstruction of Cultural Heritage. Nowadays, thanks to the introduction of cameras able to generate panoramic images automatically, without requiring stitching software to join different photos, spherical images allow the documentation of spaces in an extremely fast and efficient way. In this particular case, the Nikon Key Mission 360 spherical camera was tested on the Tolentini cloister, which used to be part of the convent of the adjacent church and is now the location of the Iuav University of Venice. The aim of the research is to test the acquisition of spherical images with the KM360 and to compare the obtained photogrammetric models with data acquired from a laser scanning survey, in order to assess the metric accuracy and the level of detail achievable with this particular camera. This work is part of a wider research project that the Photogrammetry Laboratory of the Iuav University of Venice has been dealing with in the last few months; the final aim of this research project will not only be the comparison between 3D models obtained from spherical images and laser scanning survey techniques, but also the examination of their reliability and accuracy with respect to previous methods of generating spherical panoramas. At the end of the research work, we would like to obtain an operational procedure for spherical cameras applied to the metric survey and documentation of Cultural Heritage.

  18. Effect of camera resolution and bandwidth on facial affect recognition.

    PubMed

    Cruz, Mario; Cruz, Robyn Flaum; Krupinski, Elizabeth A; Lopez, Ana Maria; McNeeley, Richard M; Weinstein, Ronald S

    2004-01-01

    This preliminary study explored the effect of camera resolution and bandwidth on facial affect recognition, an important process and clinical variable in mental health service delivery. Sixty medical students and mental health-care professionals were recruited and randomized to four different combinations of commonly used teleconferencing camera resolutions and bandwidths: (1) a one-chip charge-coupled device (CCD) camera, commonly used for VHS-grade taping and in teleconferencing systems costing less than $4,000, with a resolution of 280 lines, at a bandwidth of 128 kilobits per second (kbps); (2) VHS and 768 kbps; (3) a three-chip CCD camera, commonly used for Betacam (Beta) grade taping and in teleconferencing systems costing more than $4,000, with a resolution of 480 lines, at 128 kbps; and (4) Betacam and 768 kbps. The subjects were asked to identify four facial affects dynamically presented on videotape by an actor and actress, presented via a video monitor at 30 frames per second. Two-way analysis of variance (ANOVA) revealed a significant interaction effect for camera resolution and bandwidth (p = 0.02) and a significant main effect for camera resolution (p = 0.006), but no main effect for bandwidth was detected. Post hoc testing of interaction means, using the Tukey Honestly Significant Difference (HSD) test and the critical difference (CD) at the 0.05 alpha level (CD = 1.71), revealed that subjects in the VHS/768 kbps (M = 7.133) and VHS/128 kbps (M = 6.533) conditions were significantly better at recognizing the displayed facial affects than those in the Betacam/768 kbps (M = 4.733) or Betacam/128 kbps (M = 6.333) conditions. Camera resolution and bandwidth combinations differ in their capacity to influence facial affect recognition. For service providers, this study's results support the use of VHS cameras with either 768 kbps or 128 kbps bandwidths for facial affect recognition, compared to Betacam cameras. The authors argue that the results of this study are a consequence of the VHS camera resolution/bandwidth combinations' ability to improve signal detection (i.e., facial affect recognition) by subjects in comparison to the Betacam camera resolution/bandwidth combinations.

  19. Design of a CAN bus interface for photoelectric encoder in the spaceflight camera

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Wan, Qiu-hua; She, Rong-hong; Zhao, Chang-hai; Jiang, Yong

    2009-05-01

    In order to make a photoelectric encoder usable in a spaceflight camera that adopts the CAN bus as its communication method, a CAN bus interface for the photoelectric encoder is designed in this paper. The CAN bus interface hardware circuit of the photoelectric encoder consists of the CAN bus controller SJA1000, the CAN bus transceiver TJA1050, and a single-chip microcontroller. The CAN bus interface control software is written in C. A ten-meter shielded twisted-pair line is used as the transmission medium in the spaceflight camera, and the transfer rate is 600 kbps. The experiments show that the photoelectric encoder with a CAN bus interface offers better reliability, real-time performance, transfer rate, and transfer distance, overcoming the communication-line shortcomings of classical photoelectric encoder systems. The system works well in an automatic measuring and control system.
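
    For illustration, the framing of encoder data onto the bus can be sketched with the python-can library rather than the paper's C firmware for the SJA1000; the arbitration ID, the 24-bit payload layout, and the channel name are assumptions (the 600 kbps bit rate is configured on the interface itself).

    ```python
    import can

    # socketcan channel assumed already configured at 600 kbps
    bus = can.interface.Bus(bustype="socketcan", channel="can0")

    def send_position(position: int, encoder_id: int = 0x101) -> None:
        """Pack a 24-bit angular position into one standard CAN frame."""
        msg = can.Message(arbitration_id=encoder_id,
                          data=position.to_bytes(3, "big"),
                          is_extended_id=False)
        bus.send(msg)

    send_position(0x03A7F2)   # hypothetical encoder reading
    ```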

  20. Cytotoxicity Test Based on Human Cells Labeled with Fluorescent Proteins: Fluorimetry, Photography, and Scanning for High-Throughput Assay.

    PubMed

    Kalinina, Marina A; Skvortsov, Dmitry A; Rubtsova, Maria P; Komarova, Ekaterina S; Dontsova, Olga A

    2018-06-01

    High- and medium-throughput assays are now routine methods for drug screening and toxicology investigations on mammalian cells. However, a simple and cost-effective analysis of cytotoxicity that can be carried out with commonly used laboratory equipment is still required. The developed cytotoxicity assays are based on human cell lines stably expressing eGFP, tdTomato, mCherry, or Katushka2S fluorescent proteins. Red fluorescent proteins exhibit a higher signal-to-noise ratio, due to less interference from medium autofluorescence, in comparison to green fluorescent protein. Measurements have been performed on a fluorescence scanner, a plate fluorimeter, and a camera photodocumentation system. For a 96-well plate assay, the sensitivity per well and the measurement duration were 250 cells and 15 min for the scanner, 500 cells and 2 min for the plate fluorimeter, and 1000 cells and less than 1 min for camera detection. These sensitivities are similar to those of commonly used MTT (tetrazolium dye) assays. Neither the scanner nor the camera had previously been applied to cytotoxicity evaluation. An image-processing scheme for the high-resolution scanner is proposed that significantly reduces the number of control wells, even for a library containing fluorescent substances. The suggested cytotoxicity assay has been verified by measuring the cytotoxicity of several well-known cytotoxic drugs and further applied to test a set of novel bacteriotoxic compounds in a medium-throughput format. The fluorescent signal of living cells is detected without disturbing them or adding any reagents, thus allowing investigation of time-dependent cytotoxicity effects on the same sample of cells. A fast, simple, and cost-effective assay is suggested for cytotoxicity evaluation based on mammalian cells expressing fluorescent proteins and commonly used laboratory equipment.

  1. Serious Gaming Technologies Support Human Factors Investigations of Advanced Interfaces for Semi-Autonomous Vehicles

    DTIC Science & Technology

    2006-06-01

    conventional camera vs. thermal imager vs. night vision; camera field of view (narrow, wide, panoramic); keyboard + mouse vs. joystick control vs...motorised platform which could scan the immediate area, producing a 360° panorama of "stitched-together" digital pictures. The picture file, together with...VBS was used to automate the process of creating a QuickTime panorama (.mov or .qt), which includes the initial retrieval of the images, the

  2. Choice of range-energy relationship for the analysis of electron-beam-induced-current line scans

    NASA Astrophysics Data System (ADS)

    Luke, Keung L.

    1994-07-01

    The electron range in a material is an important parameter in the analysis of electron-beam-induced-current (EBIC) line scans. Both the Kanaya-Okayama (KO) and Everhart-Hoff (EH) range-energy relationships have been widely used by investigators for this purpose. Although the KO range is significantly larger than the EH range, no study has been done to examine the effect of choosing one range over the other on the values of the surface recombination velocity S(sub T) and minority-carrier diffusion length L evaluated from EBIC line scans. Such a study has been carried out, focusing on two major questions: (1) when the KO range is used in different reported methods to evaluate either or both S(sub T) and L from EBIC line scans, how different are their values in comparison to those obtained using the EH range? (2) From the EBIC line scans of a given material, is there a way to discriminate between the KO and EH ranges to decide which should be used to analyze these scans? Answers to these questions are presented to assist investigators in extracting more reliable values of S(sub T) and/or L and in finding the right range to use in the analysis of these line scans.
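
    For readers comparing the two ranges, the commonly quoted closed forms can be evaluated directly; the constants below are the frequently cited values and should be checked against the editions the paper uses (E in keV, density ρ in g/cm³, atomic weight A in g/mol, atomic number Z; ranges in micrometres).

    ```python
    def kanaya_okayama_range(E_keV, A, Z, rho):
        """Kanaya-Okayama range in micrometres (commonly quoted form)."""
        return 0.0276 * A * E_keV**1.67 / (Z**0.889 * rho)

    def everhart_hoff_range(E_keV, rho):
        """Everhart-Hoff range in micrometres (commonly quoted form)."""
        return 0.0398 * E_keV**1.75 / rho

    # Silicon (A = 28.09 g/mol, Z = 14, rho = 2.33 g/cm^3) at 20 keV:
    print(kanaya_okayama_range(20.0, 28.09, 14, 2.33))  # ~4.7 um
    print(everhart_hoff_range(20.0, 2.33))              # ~3.2 um (KO > EH)
    ```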

  3. Comparison of line-peak and line-scanning excitation in two-color laser-induced-fluorescence thermometry of OH.

    PubMed

    Kostka, Stanislav; Roy, Sukesh; Lakusta, Patrick J; Meyer, Terrence R; Renfro, Michael W; Gord, James R; Branam, Richard

    2009-11-10

    Two-line laser-induced-fluorescence (LIF) thermometry is commonly employed to generate instantaneous planar maps of temperature in unsteady flames. The use of line scanning to extract the ratio of integrated intensities is less common because it precludes instantaneous measurements. Recent advances in the energy output of high-speed, ultraviolet, optical parametric oscillators have made possible the rapid scanning of molecular rovibrational transitions and, hence, the potential to extract information on gas-phase temperatures. In the current study, two-line OH LIF thermometry is performed in a well-calibrated reacting flow for the purpose of comparing the relative accuracy of various line-pair selections from the literature and quantifying the differences between peak-intensity and spectrally integrated line ratios. Investigated are the effects of collisional quenching, laser absorption, and the integration width for partial scanning of closely spaced lines on the measured temperatures. Data from excitation scans are compared with theoretical line shapes, and experimentally derived temperatures are compared with numerical predictions that were previously validated using coherent anti-Stokes-Raman scattering. Ratios of four pairs of transitions in the A²Σ⁺ ← X²Π (1,0) band of OH are collected in an atmospheric-pressure, near-adiabatic hydrogen-air flame over a wide range of equivalence ratios, from 0.4 to 1.4. It is observed that measured temperatures based on the ratio of the Q1(14)/Q1(5) transition lines result in the best accuracy and that line scanning improves the measurement accuracy by as much as threefold at low-equivalence-ratio, low-temperature conditions. These results provide a comprehensive analysis of the procedures required to ensure accurate two-line LIF measurements in reacting flows over a wide range of conditions.
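
    The underlying two-line relation is a Boltzmann-fraction ratio: with lower-state energies E1 and E2, the signal ratio varies as R = C·exp(−(E1 − E2)/(kB·T)), so temperature follows from the measured ratio once the constant C has been calibrated. A generic sketch, not the paper's calibrated procedure (all input numbers are illustrative):

    ```python
    import numpy as np

    K_B = 0.6950348   # Boltzmann constant in cm^-1 per kelvin

    def temperature_from_ratio(R, C, E1_cm, E2_cm):
        """Invert R = C * exp(-(E1 - E2) / (kB * T)) for T (E in cm^-1)."""
        return (E2_cm - E1_cm) / (K_B * np.log(R / C))

    # Illustrative lower-state energies for a high-J / low-J line pair:
    print(temperature_from_ratio(R=0.18, C=2.0, E1_cm=3500.0, E2_cm=500.0))
    # -> about 1790 K for these inputs
    ```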

  4. 3D surface scan of biological samples with a Push-broom Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Kincaid, Russell; Hruska, Zuzana; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2013-08-01

    The food industry is always on the lookout for sensing technologies for rapid and nondestructive inspection of food products. Hyperspectral imaging technology integrates both imaging and spectroscopy into unique imaging sensors. Its application for food safety and quality inspection has made significant progress in recent years. Specifically, hyperspectral imaging has shown its potential for surface contamination detection in many food related applications. Most existing hyperspectral imaging systems use pushbroom scanning which is generally used for flat surface inspection. In some applications it is desirable to be able to acquire hyperspectral images on circular objects such as corn ears, apples, and cucumbers. Past research describes inspection systems that examine all surfaces of individual objects. Most of these systems did not employ hyperspectral imaging. These systems typically utilized a roller to rotate an object, such as an apple. During apple rotation, the camera took multiple images in order to cover the complete surface of the apple. The acquired image data lacked the spectral component present in a hyperspectral image. This paper discusses the development of a hyperspectral imaging system for a 3-D surface scan of biological samples. The new instrument is based on a pushbroom hyperspectral line scanner using a rotational stage to turn the sample. The system is suitable for whole surface hyperspectral imaging of circular objects. In addition to its value to the food industry, the system could be useful for other applications involving 3-D surface inspection.
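
    The acquisition loop for such a rotational pushbroom scan is conceptually simple. The sketch below uses hypothetical hardware callbacks (grab_line, rotate_to) to show how the hypercube is assembled, one spatial-spectral line per stage position:

    ```python
    import numpy as np

    def scan_surface(grab_line, rotate_to, n_steps=360):
        """grab_line() -> (spatial_pixels, bands) array from the pushbroom
        camera; rotate_to(deg) positions the rotational stage."""
        lines = []
        for step in range(n_steps):
            rotate_to(step * 360.0 / n_steps)   # advance the sample rotation
            lines.append(grab_line())           # one line of the hypercube
        # axes: (rotation angle, position along the line, wavelength band)
        return np.stack(lines, axis=0)
    ```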

  5. Analysis of filament statistics in fast camera data on MAST

    NASA Astrophysics Data System (ADS)

    Farley, Tom; Militello, Fulvio; Walkden, Nick; Harrison, James; Silburn, Scott; Bradley, James

    2017-10-01

    Coherent filamentary structures have been shown to play a dominant role in turbulent cross-field particle transport [D'Ippolito 2011]. An improved understanding of filaments is vital in order to control scrape-off layer (SOL) density profiles and thus control first-wall erosion, impurity flushing and coupling of radio frequency heating in future devices. The Elzar code [T. Farley, 2017 in prep.] is applied to MAST data. The code uses information about the magnetic equilibrium to calculate the intensity of light emission along field lines as seen in the camera images, as a function of the field lines' radial and toroidal locations at the mid-plane. In this way a `pseudo-inversion' of the intensity profiles in the camera images is achieved from which filaments can be identified and measured. In this work, a statistical analysis of the intensity fluctuations along field lines in the camera field of view is performed using techniques similar to those typically applied in standard Langmuir probe analyses. These filament statistics are interpreted in terms of the theoretical ergodic framework presented by F. Militello & J.T. Omotani, 2016, in order to better understand how time averaged filament dynamics produce the more familiar SOL density profiles. This work has received funding from the RCUK Energy programme (Grant Number EP/P012450/1), from Euratom (Grant Agreement No. 633053) and from the EUROfusion consortium.

  6. Design and verification of the miniature optical system for small object surface profile fast scanning

    NASA Astrophysics Data System (ADS)

    Chi, Sheng; Lee, Shu-Sheng; Huang, Jen-Yu; Lai, Ti-Yu; Jan, Chia-Ming; Hu, Po-Chi

    2016-04-01

    As optical technologies have progressed, various commercial 3D surface contour scanners have come on the market. Most of them are used for reconstructing the surface profile of molds or mechanical objects larger than 50 mm×50 mm×50 mm, with a scanning system size of about 300 mm×300 mm×100 mm. Few optical systems have been commercialized for fast surface-profile scanning of small objects, less than 10 mm×10 mm×10 mm in size. Therefore, a miniature optical system has been designed and developed in this work for this purpose. Since the most common scanning method for such systems is line-scan technology, we have developed a pseudo-phase-shifting digital projection technique that combines projected fringes with a phase reconstruction method. A projector was used to project digital fringe patterns onto the object, and the fringe intensity images of the reference plane and of the sample object were recorded by a CMOS camera. The phase difference between the plane and the object can be calculated from the fringe images, and the surface profile of the object was reconstructed using these phase differences. The traditional phase-shifting method relies on a PZT actuator or a precisely controlled motor to adjust the light source or grating, which is one of the limitations on high-speed scanning. Compared with the traditional optical setup, we utilized a micro projector to project the digital fringe patterns on the sample. This reduced the phase-shifting processing time, and the controlled phase differences between the shifted phases became more precise. In addition, an optical path design based on a portable scanning device was used to minimize the size and reduce the number of system components. A screwdriver section of about 7 mm×5 mm×5 mm was scanned and its surface profile successfully restored. The experimental results showed that the measurement area of the system can be smaller than 10 mm×10 mm, the precision reached ±10 μm, and the scanning time for each surface of an object was less than 15 seconds. This demonstrates that the system has the potential to serve as a fast scanner for small-object surface profiles.
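
    The fringe arithmetic behind such a system is the standard four-step phase-shifting recovery; the paper's digital projection changes how the shifts are produced, not this math. A generic numpy sketch, with the triangulation geometry bundled into an assumed calibrated scale factor:

    ```python
    import numpy as np

    def wrapped_phase(I1, I2, I3, I4):
        """Wrapped phase from four fringe images with 90-degree steps."""
        return np.arctan2(I4.astype(float) - I2, I1.astype(float) - I3)

    def height_map(phase_object, phase_reference, scale):
        """Height is proportional to the unwrapped object/reference phase
        difference; 'scale' encodes the (assumed calibrated) geometry."""
        dphi = np.unwrap(phase_object - phase_reference, axis=1)
        return scale * dphi
    ```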

  7. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming

    2010-11-04

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index 'bubble' in fused silica glass, supplementing a conventional Frequency-Domain Holographic (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency-Domain Tomography (FDT) generalizes the FDSC by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact FDT system with a single spectrometer.

  8. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table, and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation result, and the boundary is determined using the slope of the contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the imaging grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ spread of the imaging gray difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
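
    The evaluation rule reduces to a few lines of numpy; a direct sketch of the σ-based criterion described above, with the 5% bound taken as a fraction of full scale:

    ```python
    import numpy as np

    def imaging_consistency_ok(left, right, full_scale=255.0):
        """Grayscale difference D(x, y) between corresponding left/right
        pixels; pass if the 3-sigma spread stays within 5% of full scale."""
        D = left.astype(float) - right.astype(float)
        sigma = D.std()
        return 3.0 * sigma <= 0.05 * full_scale, sigma
    ```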

  9. Performance characterization of UV science cameras developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-07-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-α and to detect the Hanle effect in the line core. Due to the nature of Lyman-α polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. The CLASP cameras were designed to operate with ≤ 10 e-/pixel/second dark current, ≤ 25 e- read noise, a gain of 2.0 ± 0.5, and ≤ 1.0% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.
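
    Numbers like the gain and read noise quoted above are typically obtained from a mean-variance (photon-transfer) analysis; the sketch below shows that generic method, not the CLASP test procedure itself. For a shot-noise-limited detector the variance of a flat field grows linearly with its mean, and the gain in e-/DN is the reciprocal of the slope.

    ```python
    import numpy as np

    def gain_and_read_noise(flat_pairs):
        """flat_pairs: (img_a, img_b) flat-field pairs at several light
        levels; differencing each pair removes fixed-pattern noise."""
        means, variances = [], []
        for a, b in flat_pairs:
            means.append((a.mean() + b.mean()) / 2.0)
            variances.append(np.var(a.astype(float) - b.astype(float)) / 2.0)
        slope, intercept = np.polyfit(means, variances, 1)
        gain = 1.0 / slope                          # e- per DN
        read_noise_e = gain * np.sqrt(max(intercept, 0.0))
        return gain, read_noise_e
    ```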

  10. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  11. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  12. Voyager spacecraft images of Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Birnbaum, M. M.

    1982-01-01

    The Voyager imaging system is described, noting that it is made up of a narrow-angle and a wide-angle TV camera, each in turn consisting of optics, a filter wheel and shutter assembly, a vidicon tube, and an electronics subsystem. The narrow-angle camera has a focal length of 1500 mm; its field of view is 0.42 deg and its focal ratio is f/8.5. For the wide-angle camera, the focal length is 200 mm, the field of view 3.2 deg, and the focal ratio f/3.5. Images are exposed by each camera through one of eight filters in the filter wheel onto the photoconductive surface of a magnetically focused and deflected vidicon having a diameter of 25 mm. The vidicon storage surface (target) is a selenium-sulfur film having an active area of 11.14 x 11.14 mm; it holds a frame consisting of 800 lines with 800 picture elements per line. Pictures of Jupiter, Saturn, and their moons are presented, with short descriptions given of the areas being viewed.

  13. 3D scan line method for identifying void fabric of granular materials

    NASA Astrophysics Data System (ADS)

    Theocharis, Alexandros I.; Vairaktaris, Emmanouil; Dafalias, Yannis F.

    2017-06-01

    Among the methods for measuring the void phase of porous or fractured media, the scan line approach is a simplified "graphical" method, mainly used in image-processing-related procedures. In soil mechanics, the application of the scan line method relates to soil fabric, which is important in characterizing the anisotropic mechanical response of soils. Void fabric is of particular interest, since graphical approaches are well defined experimentally and most of them, like the scan line method, can also be easily used in numerical experiments. This is in contrast to definitions of fabric based on contact normal vectors, which are extremely difficult to determine, especially in physical experiments. The scan line method was proposed by Oda et al [1] and implemented again by Ghedia and O'Sullivan [2]. A modified method based on DEM analysis, instead of image measurements of fabric, has previously been proposed and implemented by the authors in a 2D scheme [3-4]. In this work, a 3D extension of the modified scan line definition is presented using PFC 3D®. The results clearly show trends similar to the 2D case and the same behaviour of fabric anisotropy.
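
    As a toy illustration of the scan line idea (not the authors' modified DEM-based definition), one can walk grid lines through a binary voxel array and compare mean void-intercept lengths across directions; differences between directions indicate void-fabric anisotropy.

    ```python
    import numpy as np

    def mean_void_intercept(voxels, axis):
        """Mean length of void (0) runs along all grid lines parallel to
        `axis` in a binary solid(1)/void(0) voxel array."""
        moved = np.moveaxis(voxels, axis, -1)
        lengths = []
        for line in moved.reshape(-1, moved.shape[-1]):
            run = 0
            for v in line:
                if v == 0:
                    run += 1          # extend the current void intercept
                elif run:
                    lengths.append(run)
                    run = 0
            if run:
                lengths.append(run)   # intercept touching the boundary
        return np.mean(lengths) if lengths else 0.0

    rng = np.random.default_rng(0)
    sample = (rng.random((40, 40, 40)) < 0.6).astype(int)   # toy packing
    print([mean_void_intercept(sample, ax) for ax in range(3)])
    ```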

  14. Intershot Analysis of Flows in DIII-D

    NASA Astrophysics Data System (ADS)

    Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.

    2016-10-01

    Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resultant line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integrals of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determine the relative weights of the component matrices used in the final flow inversion matrix. Serial processing has been used for the 800x600 pixel image from the lower-divertor-viewing flow camera. The full cross-section-viewing camera will require parallel processing of its 2160x2560 pixel image. We will discuss using a POSIX thread pool and a Tesla K40c GPU in the processing of these data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.
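
    The inversion step can be sketched as a damped linear least-squares solve over the weighted, pre-calculated response matrices; the matrix names, weights, and damping value below are placeholders, not the Elzar/DIII-D implementation.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    def invert_flow(G_u, G_v, w_u, w_v, measured_phase, damp=1e-2):
        """G_u, G_v: (n_pixels x n_cells) line-integral response matrices
        for the two flow unit vectors; w_u, w_v: equilibrium-dependent
        per-cell weights; measured_phase: flattened phase image."""
        A = G_u @ np.diag(w_u) + G_v @ np.diag(w_v)    # final inversion matrix
        solution = lsqr(A, measured_phase, damp=damp)  # damped least squares
        return solution[0]                             # per-cell parallel flow
    ```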

  15. 40 CFR 451.21 - Effluent limitations attainable by the application of the best practicable control technology...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...

  16. 40 CFR 451.21 - Effluent limitations attainable by the application of the best practicable control technology...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...

  17. 40 CFR 451.21 - Effluent limitations attainable by the application of the best practicable control technology...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...

  18. Failure Analysis of Heavy-Ion-Irradiated Schottky Diodes

    NASA Technical Reports Server (NTRS)

    Casey, Megan C.; Lauenstein, Jean-Marie; Wilcox, Edward P.; Topper, Alyson D.; Campola, Michael J.; Label, Kenneth A.

    2017-01-01

    In this work, we use high- and low-magnification optical microscope images, infrared camera images, and scanning electron microscope images to identify and describe the failure locations in heavy-ion-irradiated Schottky diodes.

  19. 640x480 PtSi Stirling-cooled camera system

    NASA Astrophysics Data System (ADS)

    Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.

    1992-09-01

    A Stirling-cooled 3-5 μm camera system has been developed. The camera employs a monolithic 640 x 480 PtSi-MOS focal plane array. The camera system achieves an NEDT of 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad, the vertical and horizontal Minimum Resolvable Temperature (MRT) are in the range of 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height, independent of background level and position within the frame.

  20. Spectral imaging using consumer-level devices and kernel-based regression.

    PubMed

    Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko

    2016-06-01

    Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using a Nikon D80 RGB camera and a Dell Vostro 2520 laptop screen as the light source. Estimations were performed without using the spectral characteristics of the devices, emphasizing simplicity for the training sets and the estimation-model optimization. Spectral and color error images are shown for the estimations, using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where in the latter case the median heuristic approach for model optimization and a link function for bounded estimation were evaluated. Results suggest modest requirements for a training set and show that all estimation models have markedly improved accuracy with respect to the ΔE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) relative to the weak-training-set (Digital ColorChecker SG) case, when small representative training data were used in the estimation.
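
    The first-degree inhomogeneous polynomial kernel case can be reproduced in outline with scikit-learn's kernel ridge regression; the data shapes and regularization below are placeholders, and the paper's actual training sets and model optimization differ.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # X: camera responses (e.g. RGB under three display illuminations);
    # Y: reflectance spectra sampled at 5 nm over 400-700 nm. Placeholders:
    X_train = np.random.rand(24, 9)
    Y_train = np.random.rand(24, 61)

    # first-degree inhomogeneous polynomial kernel: k(x, y) = x.y + 1
    model = KernelRidge(alpha=1e-4, kernel="polynomial", degree=1, coef0=1.0)
    model.fit(X_train, Y_train)
    estimated_spectra = model.predict(X_train)   # substitute test responses
    ```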

  1. High-resolution CCD imaging alternatives

    NASA Astrophysics Data System (ADS)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost- and time-saving advantages over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners: a CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interline CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in the spatial gaps between adjacent sensors.

  2. Color line-scan technology in industrial applications

    NASA Astrophysics Data System (ADS)

    Lemstrom, Guy F.

    1995-10-01

    Color machine vision opens new possibilities for industrial on-line quality control applications. With color machine vision it is possible to detect different colors and shades, perform color separation and spectroscopic applications, and at the same time make the same measurements as with gray-scale technology, such as geometrical measurements of dimensions, shape, texture, etc. Combining these technologies in a color line-scan camera brings machine vision to new dimensions, enabling new applications and new areas in the machine vision business. Quality and process control requirements in industry grow more demanding every day. Color machine vision can be the solution for many simple tasks that have not been realized with gray-scale technology; the inability to detect or measure colors has been one reason why machine vision has not been used in quality control as much as it could have been. Color machine vision has attracted growing enthusiasm in industrial machine vision applications. Potential areas of industry include food, wood, mining and minerals, printing, paper, glass, plastic, recycling, etc., with tasks ranging from simple measurement to total process and quality control. Color machine vision is not only for measuring colors: it can also be used for contrast enhancement, object detection, background removal, and structure detection and measurement. Color or spectral separation opens many new ways of working out machine vision applications. It is only a question of how to use the benefits of having two or more data values per measured pixel, instead of only one as with traditional gray-scale technology. There are already plenty of potential applications that can be realized with color vision, and it will add performance to many traditional gray-scale applications in the near future. Most importantly, color machine vision offers a new way of working out applications where machine vision has not been applied before.

  3. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with a gain of 2.0 ± 0.5, less than or equal to 25 e- readout noise, less than or equal to 10 e-/second/pixel dark current, and less than 0.1 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: system gain, dark current, read noise, and residual non-linearity.

  4. LabVIEW control software for scanning micro-beam X-ray fluorescence spectrometer.

    PubMed

    Wrobel, Pawel; Czyzycki, Mateusz; Furman, Leszek; Kolasinski, Krzysztof; Lankosz, Marek; Mrenca, Alina; Samek, Lucyna; Wegrzynek, Dariusz

    2012-05-15

    A confocal micro-beam X-ray fluorescence microscope was constructed. The system was assembled from commercially available components - a low-power X-ray tube source, polycapillary X-ray optics, and a silicon drift detector - controlled by in-house-developed LabVIEW software. A video camera coupled to an optical microscope is utilized to display the area excited by the X-ray beam. The camera image calibration and scan area definition software were also based entirely on LabVIEW code. Presently, the main area of application of the newly constructed spectrometer is 2-dimensional mapping of element distributions in environmental, biological, and geological samples with micrometer spatial resolution. The hardware and the developed software can already handle volumetric 3-D confocal scans. In this work, the front-panel graphical user interface as well as the communication protocols between hardware components are described. Two applications of the spectrometer, to homogeneity testing of titanium layers and to imaging of various types of grains in air particulate matter collected on membrane filters, are presented.

  5. Low cost 3D scanning process using digital image processing

    NASA Astrophysics Data System (ADS)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper presents the design and construction of a low-cost 3D scanner, able to digitize solid objects through contactless data acquisition using active object reflection. 3D scanners are used in different applications such as science, engineering, and entertainment; they are classified into contact and contactless scanners, where the latter are the most commonly used but are expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed depending on the 3-dimensional surface of the solid. Using digital image processing, the deformation detected by the camera is analyzed, allowing the 3D coordinates to be determined by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan performed. The obtained results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool that can be used for many applications, mainly by engineering students.
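
    The triangulation described here reduces to locating the laser stripe in each image row and converting its horizontal displacement into depth. A sketch under an assumed calibrated geometry (the baseline and focal-length constants are not from the paper):

    ```python
    import numpy as np

    def laser_column_per_row(image_red):
        """Column of peak laser brightness in every row (red channel)."""
        return np.argmax(image_red, axis=1).astype(float)

    def depth_from_displacement(cols, ref_cols, baseline_mm, focal_px):
        """Simple triangulation: depth ~ baseline * focal / displacement,
        with the stripe position on a flat reference as the zero level."""
        disparity_px = np.abs(cols - ref_cols) + 1e-6   # avoid divide-by-zero
        return baseline_mm * focal_px / disparity_px    # per-row depth (mm)
    ```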

  6. EAARL Coastal Topography - Northeast Barrier Islands 2007: Bare Earth

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Yates, Xan; Bonisteel, Jamie M.

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced collaboratively by the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the northeast coastal barrier islands in New York and New Jersey, acquired April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  7. EAARL Topography - Natchez Trace Parkway 2007: First Surface

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Natchez Trace Parkway in Mississippi, acquired on September 14, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  8. EAARL Topography - Vicksburg National Military Park 2008: Bare Earth

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Vicksburg National Military Park in Mississippi, acquired on March 6, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  9. EAARL Coastal Topography - Northeast Barrier Islands 2007: First Surface

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Yates, Xan; Bonisteel, Jamie M.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced collaboratively by the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the northeast coastal barrier islands in New York and New Jersey, acquired April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  10. EAARL Topography-Vicksburg National Military Park 2007: First Surface

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first-surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Vicksburg National Military Park in Mississippi, acquired on September 12, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  11. EAARL Coastal Topography--Cape Canaveral, Florida, 2009: First Surface

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Plant, Nathaniel; Wright, C.W.; Nagle, D.B.; Serafin, K.S.; Klipp, E.S.

    2011-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Kennedy Space Center, FL. This project provides highly detailed and accurate datasets of a portion of the eastern Florida coastline beachface, acquired on May 28, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations.

  12. EAARL Coastal Topography - Sandy Hook 2007

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of Gateway National Recreation Area's Sandy Hook Unit in New Jersey, acquired on May 16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for pre-survey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  13. Clinical evaluation of pixellated NaI:Tl and continuous LaBr3:Ce, compact scintillation cameras for breast tumors imaging

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.

    2007-02-01

    The principal limiting factor in the clinical acceptance of scintimammography is certainly its low sensitivity for cancers sized <1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on a Lanthanum tri-Bromide Cerium-doped crystal (LaBr3:Ce), which has demonstrated superior imaging performance with respect to the dedicated scintillation γ-camera that was previously developed. The proposed detector consists of a continuous LaBr3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. A one-centimeter-thick crystal was chosen to increase crystal detection efficiency. In this paper, we propose a comparison and evaluation between the lanthanum γ-camera and a Multi-PSPMT camera based on discrete NaI(Tl) pixels, previously developed under the "IMI" Italian project for technological transfer of INFN. A phantom study was carried out to test both cameras before introducing them in clinical trials. High-resolution scans produced by the LaBr3:Ce camera showed higher tumor contrast, with more detailed imaging of the uptake area, than the pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera, the Signal-to-Noise Ratio (SNR) was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.

  14. Study of optical techniques for the Ames unitary wind tunnel. Part 5: Infrared imagery

    NASA Technical Reports Server (NTRS)

    Lee, George

    1992-01-01

    A survey of infrared thermography for aerodynamics was made. Particular attention was paid to boundary layer transition detection. IR thermography flow visualization of 2-D and 3-D separation was surveyed. Heat transfer measurements and surface temperature measurements were also covered. Comparisons of several commercial IR cameras were made. The use of a recently purchased IR camera in the Ames Unitary Plan Wind Tunnels was studied. Optical access for these facilities and the methods to scan typical models were investigated.

  15. Analysis of the effect on optical equipment caused by solar position in target flight measure

    NASA Astrophysics Data System (ADS)

    Zhu, Shun-hua; Hu, Hai-bin

    2012-11-01

    Optical equipment is widely used to measure flight parameters in target flight performance tests, but the equipment is sensitive to the sun's rays. To prevent direct sunlight from shining into the optical equipment's camera lens while measuring target flight parameters, the angle between the observation direction and the line connecting the camera lens and the sun must be kept sufficiently large. This article introduces a method for calculating the solar azimuth and altitude relative to the optical equipment at any time and at any place on Earth, a model of the equipment's observation direction, and a model for calculating the angle between the observation direction and the equipment-sun line. The article also presents simulations of the effect of solar position on the optical equipment at different times, dates, and months and for different target flight directions.

  16. Rocket studies of solar corona and transition region. [X-Ray spectrometer/spectrograph telescope]

    NASA Technical Reports Server (NTRS)

    Acton, L. W.; Bruner, E. C., Jr.; Brown, W. A.; Nobles, R. A.

    1979-01-01

    The XSST (X-Ray Spectrometer/Spectrograph Telescope) rocket payload launched by a Nike-boosted Black Brant was designed to provide high-spectral-resolution coronal soft X-ray line information on a spectrographic plate, as well as time-resolved photoelectric records of pre-selected lines and spectral regions. These spectral data are obtained from a 1 x 10 arc second solar region defined by the paraboloidal telescope of the XSST. The transition region camera provided full-disc images in selected spectral intervals originating in lower temperature zones than the emitting regions accessible to the XSST. An H-alpha camera system allowed referencing the measurements to chromospheric temperatures and altitudes. Payload flight and recovery information is provided along with X-ray photoelectric and UV flight data, transition camera results and a summary of the anomalies encountered. Instrument mechanical stability and spectrometer pointing direction are also examined.

  17. A Novel Technique for Precision Geometric Correction of Jitter Distortion for the Europa Imaging System and Other Rolling-Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Shepherd, M.; Sides, S. C.

    2018-04-01

    We use simulated images to demonstrate a novel technique for mitigating geometric distortions caused by platform motion ("jitter") as two-dimensional image sensors are exposed and read out line by line ("rolling shutter"). The results indicate that the Europa Imaging System (EIS) on NASA's Europa Clipper can likely meet its scientific goals requiring 0.1-pixel precision. We are therefore adapting the software used to demonstrate and test rolling shutter jitter correction to become part of the standard processing pipeline for EIS. The correction method will also apply to other rolling-shutter cameras, provided they have the operational flexibility to read out selected "check lines" at chosen times during the systematic readout of the frame area.
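
    As a rough sketch of how check lines can drive the correction, the code below (assumed names; not the EIS pipeline) interpolates cross-track jitter measured at a few check-line readout times to every image row and resamples each row to undo its shift. It assumes pure cross-track translation and a constant, known line readout time.

```python
import numpy as np

def correct_rolling_shutter(image, check_rows, check_offsets_px, line_time_s):
    """Undo jitter in a rolling-shutter frame using offsets measured at a
    few 'check lines'. Sketch only: pure cross-track translation assumed."""
    rows = np.arange(image.shape[0])
    row_times = rows * line_time_s                     # each row's readout time
    check_times = np.asarray(check_rows) * line_time_s
    # Jitter (pixels of cross-track shift) interpolated to every row time.
    offsets = np.interp(row_times, check_times, np.asarray(check_offsets_px))
    cols = np.arange(image.shape[1])
    corrected = np.empty_like(image, dtype=float)
    for r in rows:
        # A row whose scene shifted by +d satisfies observed[x] = true[x - d],
        # so the corrected row samples the observed row at x + d
        # (np.interp clamps at the image edges).
        corrected[r] = np.interp(cols + offsets[r], cols, image[r])
    return corrected
```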

  18. Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography

    USGS Publications Warehouse

    Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.

    1972-01-01

    Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.

  19. A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  20. Pothole Detection System Using a Black-box Camera.

    PubMed

    Jo, Youngtae; Ryu, Seungki

    2015-11-19

    Aging roads and poor road-maintenance systems result in a large number of potholes, whose numbers increase over time. Potholes jeopardize road safety and transportation efficiency, and they are often a contributing factor in car accidents. To address the problems associated with potholes, their locations and sizes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection efforts. Recent automatic detection systems, such as those based on vibration or laser scanning, are insufficient to detect potholes correctly and inexpensively, owing to the unstable detection of vibration-based methods and the high costs of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work within the embedded computing environments of black-box cameras. Experimental results with the proposed system show that potholes can be detected accurately in real time.

  1. A smartphone photogrammetry method for digitizing prosthetic socket interiors.

    PubMed

    Hernandez, Amaia; Lemaire, Edward

    2017-04-01

    Prosthetic CAD/CAM systems require accurate 3D limb models; however, difficulties arise when working from the person's socket, since current 3D scanners have difficulty scanning socket interiors. While dedicated scanners exist, they are expensive and the cost may be prohibitive for a limited number of scans per year. A low-cost and accessible photogrammetry method for socket interior digitization is proposed, using a smartphone camera and cloud-based photogrammetry services. Fifteen two-dimensional images of the socket's interior are captured using a smartphone camera, and a 3D model is generated using cloud-based software. Linear measurements were compared between the sockets and the related 3D models. 3D reconstruction accuracy averaged 2.6 ± 2.0 mm and 0.086 ± 0.078 L, which was less accurate than models obtained by high-quality 3D scanners. However, this method provides a viable 3D digital socket reproduction that is accessible and low-cost, after processing in prosthetic CAD software. Clinical relevance: The described method provides a low-cost and accessible means to digitize a socket interior for use in prosthetic CAD/CAM systems, employing a smartphone camera and cloud-based photogrammetry software.

  2. Research and application on imaging technology of line structure light based on confocal microscopy

    NASA Astrophysics Data System (ADS)

    Han, Wenfeng; Xiao, Zexin; Wang, Xiaofen

    2009-11-01

    In 2005, the theory of line structure light confocal microscopy was first put forward in China by Xingyu Gao and Zexin Xiao at the Institute of Opt-mechatronics of Guilin University of Electronic Technology. Although the lateral resolution of line confocal microscopy can only reach or approach that of traditional dot confocal microscopy, it has two advantages over the dot approach. First, by substituting line scanning for dot scanning, plane imaging requires only one-dimensional scanning, which greatly improves imaging speed and simplifies the scanning mechanism. Second, light throughput is greatly improved by substituting a detection slit (hairline) for the detection pinhole, so a low-illumination CCD can be used directly to collect images instead of a photoelectric intensifier. In order to apply line confocal microscopy in a practical system, and based on further research on its theory, an imaging technology of line structure light is put forward under the condition that confocal microscopy is implemented. Its validity and reliability are verified by experiments.

  3. Unmanned aerial vehicles (UAVs) for surveying marine fauna: a dugong case study.

    PubMed

    Hodgson, Amanda; Kelly, Natalie; Peel, David

    2013-01-01

    Aerial surveys of marine mammals are routinely conducted to assess and monitor species' habitat use and population status. In Australia, dugongs (Dugong dugon) are regularly surveyed and long-term datasets have formed the basis for defining habitat of high conservation value and risk assessments of human impacts. Unmanned aerial vehicles (UAVs) may facilitate more accurate, human-risk free, and cheaper aerial surveys. We undertook the first Australian UAV survey trial in Shark Bay, western Australia. We conducted seven flights of the ScanEagle UAV, mounted with a digital SLR camera payload. During each flight, ten transects covering a 1.3 km² area frequently used by dugongs were flown at 500, 750 and 1000 ft. Image (photograph) capture was controlled via the Ground Control Station and the capture rate was scheduled to achieve a prescribed 10% overlap between images along transect lines. Images were manually reviewed post hoc for animals and scored according to sun glitter, Beaufort Sea state and turbidity. We captured 6243 images, 627 containing dugongs. We also identified whales, dolphins, turtles and a range of other fauna. Of all possible dugong sightings, 95% (CI = 90%, 98%) were subjectively classed as 'certain' (unmistakably dugongs). Neither our dugong sighting rate nor our ability to identify dugongs with certainty was affected by UAV altitude. Turbidity was the only environmental variable significantly affecting the dugong sighting rate. Our results suggest that UAV systems may not be limited by sea state conditions in the same manner as sightings from manned surveys. The overlap between images proved valuable for detecting animals that were masked by sun glitter in the corners of images, and identifying animals initially captured at awkward body angles. This initial trial of a basic camera system has successfully demonstrated that the ScanEagle UAV has great potential as a tool for marine mammal aerial surveys.
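
    The prescribed 10% along-track overlap implies a simple capture-interval calculation under a nadir pinhole-camera model. The sketch below is illustrative only; the survey's actual camera geometry and scheduling parameters are not given in the abstract.

```python
import math

def capture_interval_s(altitude_m, fov_deg, ground_speed_ms, overlap=0.10):
    """Seconds between shots so consecutive nadir images overlap by
    `overlap` along track (pinhole-camera footprint model; sketch only)."""
    # Along-track ground footprint of a single image.
    footprint_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    # Ground advance allowed between exposures at the prescribed overlap.
    advance_m = footprint_m * (1.0 - overlap)
    return advance_m / ground_speed_ms

# e.g., 500 ft (~152 m) altitude, 40-degree along-track FOV, 25 m/s:
print(round(capture_interval_s(152.0, 40.0, 25.0), 2), "s between frames")
```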

  4. Unmanned Aerial Vehicles (UAVs) for Surveying Marine Fauna: A Dugong Case Study

    PubMed Central

    Hodgson, Amanda; Kelly, Natalie; Peel, David

    2013-01-01

    Aerial surveys of marine mammals are routinely conducted to assess and monitor species’ habitat use and population status. In Australia, dugongs (Dugong dugon) are regularly surveyed and long-term datasets have formed the basis for defining habitat of high conservation value and risk assessments of human impacts. Unmanned aerial vehicles (UAVs) may facilitate more accurate, human-risk free, and cheaper aerial surveys. We undertook the first Australian UAV survey trial in Shark Bay, western Australia. We conducted seven flights of the ScanEagle UAV, mounted with a digital SLR camera payload. During each flight, ten transects covering a 1.3 km² area frequently used by dugongs were flown at 500, 750 and 1000 ft. Image (photograph) capture was controlled via the Ground Control Station and the capture rate was scheduled to achieve a prescribed 10% overlap between images along transect lines. Images were manually reviewed post hoc for animals and scored according to sun glitter, Beaufort Sea state and turbidity. We captured 6243 images, 627 containing dugongs. We also identified whales, dolphins, turtles and a range of other fauna. Of all possible dugong sightings, 95% (CI = 90%, 98%) were subjectively classed as ‘certain’ (unmistakably dugongs). Neither our dugong sighting rate nor our ability to identify dugongs with certainty was affected by UAV altitude. Turbidity was the only environmental variable significantly affecting the dugong sighting rate. Our results suggest that UAV systems may not be limited by sea state conditions in the same manner as sightings from manned surveys. The overlap between images proved valuable for detecting animals that were masked by sun glitter in the corners of images, and identifying animals initially captured at awkward body angles. This initial trial of a basic camera system has successfully demonstrated that the ScanEagle UAV has great potential as a tool for marine mammal aerial surveys. PMID:24223967

  5. Scan-Line Methods in Spatial Data Systems

    DTIC Science & Technology

    1990-09-04

    algorithms in detail to show some of the implementation issues. Data Compression: Storage and transmission times can be reduced by using compression... goes through the data. Luckily, there are good one-directional compression algorithms, such as run-length coding, in which each scan line can be... independently compressed. These are the algorithms to use in a parallel scan-line system. Data compression is usually only used for long-term storage of
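
    Run-length coding of independently compressed scan lines is easy to sketch; because each line is self-contained, lines can be encoded and decoded in parallel, which is the property the report exploits. A minimal illustration (not the report's implementation):

```python
def rle_encode(line):
    """Run-length encode one scan line as (value, count) pairs.
    Each line compresses independently, so lines can be processed
    in parallel."""
    runs = []
    for v in line:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

line = [0, 0, 0, 7, 7, 1, 1, 1, 1]
assert rle_decode(rle_encode(line)) == line
```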

  6. On-line 3-dimensional confocal imaging in vivo.

    PubMed

    Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M

    2000-09-01

    In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on-line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and incorporation of easy-to-use, on-line, and interactive features should help to improve the clinical utility of this technology.

  7. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a space-based infrared sensor imaging model of a ballistic target treated as a point source above the planet's atmosphere. It then simulates the infrared imaging of the exoatmospheric ballistic target from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target image.

  8. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    The proposed system stabilizes a jittering television image and/or measures the jitter to extract information on the motions of objects in the image. In an alternative version, the system controls lateral motion of the camera to generate stereoscopic views for measuring distances to objects. In another version, the motion of the camera is controlled to keep an object in view. The heart of the system is a digital image-data processor called the "jitter-miser", which includes a frame buffer and logic circuits to correct for jitter in the image. Signals from motion sensors on the camera are sent to the logic circuits and processed into corrections for motion along and across the line of sight.

  9. The continuing importance of thyroid scintigraphy in the era of high-resolution ultrasound.

    PubMed

    Meller, J; Becker, W

    2002-08-01

    At the molecular level, the uptake of radioiodine and pertechnetate is proportional to the expression of the thyroidal sodium/iodine symporter (NIS). Qualitative and quantitative scintigraphic evaluation of the thyroid is performed with a gamma camera fitted with an on-line computer system and enables determination of the iodine uptake or the technetium uptake (TCTU) as an iodine clearance equivalent. Despite new molecular genetic insights into congenital hypothyroidism, the iodine-123 or pertechnetate scan remains the most accurate test for the detection of ectopic thyroid tissue. Following the identification of specific mutations of the genes coding for the NIS, thyroid peroxidase and pendrin, the discharge test has lost its role in establishing the diagnosis of inherited dyshormonogenesis, but it is still of value in the assessment of defect severity. In PDS mutations the test can be used to establish the diagnosis of syndromic disease. Quantitative pertechnetate scintigraphy is the most sensitive and specific technique for the diagnosis and quantification of thyroid autonomy. The method has proved to be valuable in risk stratification of spontaneous or iodine-induced hyperthyroidism, in the estimation of the target volume prior to radioiodine therapy and in the evaluation of therapeutic success after definitive treatment. In iodine deficiency areas the thyroid scan remains indispensable for the functional characterisation of a thyroid nodule and is still a first-line diagnostic procedure in cases of suspected thyroid malignancy. This is especially of importance in patients with Graves' disease, among whom a relatively high prevalence of cancer has been found in cold thyroid nodules. While determination of the TCTU is without any value in the differentiation between autoimmune thyroiditis and Graves' disease in most cases, it is of substantial importance in the differentiation between hyperthyroid autoimmune thyroiditis and Graves' disease.

  10. Combined hostile fire and optics detection

    NASA Astrophysics Data System (ADS)

    Brännlund, Carl; Tidström, Jonas; Henriksson, Markus; Sjöqvist, Lars

    2013-10-01

    Snipers and other optically guided weapon systems are serious threats in military operations. We have studied a SWIR (Short Wave Infrared) camera-based system with the capability to detect and locate snipers both before and after a shot over a large field of view. The high-frame-rate SWIR camera allows resolution of the temporal profile of muzzle flashes, the infrared signature associated with the ejection of the bullet from the rifle. The capability to detect and discriminate sniper muzzle flashes with this system was verified by FOI in earlier studies. In this work we have extended the system by adding a laser channel for optics detection: a laser diode with a slit-shaped beam profile is scanned over the camera field of view to detect retro-reflections from optical sights. The optics detection system has been tested at various distances up to 1.15 km, showing the feasibility of detecting rifle scopes in full daylight. The high-speed camera makes it possible to discriminate false alarms by analyzing the temporal data. The intensity variation caused by atmospheric turbulence enables discrimination of small sights from larger reflectors due to aperture averaging, even though the targets cover only a single pixel. It is shown that optics detection can be integrated with muzzle flash detection by adding a scanning rectangular laser slit. The overall optics detection capability from continuous surveillance of a relatively large field of view looks promising. This type of multifunctional system may become an important tool to detect snipers before and after a shot.

  11. Measurement of vibration using phase only correlation technique

    NASA Astrophysics Data System (ADS)

    Balachandar, S.; Vipin, K.

    2017-08-01

    A novel method for the measurement of vibration is proposed and demonstrated. The proposed experiment is based on laser triangulation and consists of a line laser, the object under test, and a high-speed camera remotely controlled by software. The experiment involves launching a line-laser probe beam perpendicular to the axis of the vibrating object; the reflected probe beam is recorded by the high-speed camera. The dynamic position of the laser line in the camera plane is governed by the magnitude and frequency of the vibrating test object. Using the phase correlation technique, the maximum distance travelled by the probe beam in the CCD plane is measured in pixels using MATLAB, and the actual displacement of the object in millimeters is obtained by calibration. Using the displacement data over time, other vibration-associated quantities such as acceleration, velocity and frequency are evaluated. Preliminary results of the proposed method are reported for accelerations from 1 g to 3 g and frequencies from 6 Hz to 26 Hz; the results closely match theoretical values. The advantages of the proposed method are that it is non-destructive and that, using the phase correlation algorithm, subpixel displacements in the CCD plane can be measured with high accuracy.
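
    The phase-correlation step follows the standard normalized cross-power-spectrum formulation; the authors' MATLAB implementation is not shown, so the subpixel refinement below is an assumption. A minimal Python sketch:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the (dy, dx) translation that maps `ref` onto `moved` by
    phase-only correlation with parabolic subpixel refinement (sketch)."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    R /= np.abs(R) + 1e-12                  # keep phase, drop amplitude
    corr = np.fft.ifft2(R).real             # delta-like peak at the shift
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c, p):
        # Parabola through the peak and its two (circular) neighbors.
        n = len(c)
        ym, y0, yp = c[(p - 1) % n], c[p], c[(p + 1) % n]
        d = ym - 2.0 * y0 + yp
        return p + (0.5 * (ym - yp) / d if d != 0 else 0.0)

    dy, dx = refine(corr[:, px], py), refine(corr[py, :], px)
    ny, nx = corr.shape
    if dy > ny / 2: dy -= ny                # wrap to signed shifts
    if dx > nx / 2: dx -= nx
    return dy, dx
```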

  12. Technologies for Positioning and Placement of Underwater Structures

    DTIC Science & Technology

    2000-03-01

    for imaging the bottom immediately before placement of the structure. c. Use passive sensors (such as tiltmeters, inclinometers, and gyrocompasses)... [Table-of-contents fragment; recoverable section headings: Acoustic Sensors; Multibeam and Side-Scan Sonar Transducers; Video Camera; Passive Sensors.]

  13. High speed television camera system processes photographic film data for digital computer analysis

    NASA Technical Reports Server (NTRS)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virador, Patrick R.G.

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses, for the first time, the problem of fully-3D tomographic reconstruction using a septa-less, stationary (i.e., no rotation or linear motion), rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled, which leads to missing information. The author presents new Fourier Methods based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of response (LORs) between the measured interaction points instead of rebinning the events into predefined crystal-face LORs, which is the only other method proposed thus far to handle DOI information. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width, evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest-neighbor smoothing in 2D in the radial bins, (b) employ a semi-iterative procedure in order to estimate the unsampled data, and (c) mash the in-plane projections, i.e. 2D data, with the projection data from the first oblique angles, which are then used to reconstruct the preliminary image in the 3D Reprojection Projection algorithm. The author presents reconstructed images of point sources and extended sources in both 2D and 3D. The images show that the camera is anticipated to eliminate radial elongation and produce artifact-free and essentially spatially isotropic images throughout the entire FOV. It has a resolution of 1.50 ± 0.75 mm FWHM near the center, 2.25 ± 0.75 mm FWHM in the bulk of the FOV, and 3.00 ± 0.75 mm FWHM near the edge and corners of the FOV.

  15. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle to carry the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For the lug pose acquisition, four laser lines are projected on both lug and plate, and the projected lines are detected by the camera. For robust detection of the projected lines against the illumination change, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out by two stages: the top view alignment and the side view alignment. The top view alignment is to detect the coarse lug pose relatively far from the lug, and the side view alignment is to detect the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. By this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007

  16. Electron imaging with an EBSD detector.

    PubMed

    Wright, Stuart I; Nowell, Matthew M; de Kloe, René; Camus, Patrick; Rampton, Travis

    2015-01-01

    Electron Backscatter Diffraction (EBSD) has proven to be a useful tool for characterizing the crystallographic orientation aspects of microstructures at length scales ranging from tens of nanometers to millimeters in the scanning electron microscope (SEM). With the advent of high-speed digital cameras for EBSD use, it has become practical to use the EBSD detector as an imaging device similar to a backscatter (or forward-scatter) detector. Using the EBSD detector in this manner enables images exhibiting topographic, atomic density, and orientation contrast to be obtained at rates similar to slow scanning in the conventional SEM manner. The high-speed acquisition is achieved through extreme binning of the camera, enough to result in a 5 × 5 pixel pattern. At such high binning, the captured patterns are not suitable for indexing. However, no indexing is required for using the detector as an imaging device. Rather, a 5 × 5 array of images is formed by essentially using each pixel in the 5 × 5 pixel pattern as an individual scattered-electron detector. The images can be formed at traditional EBSD scanning rates by recording the image data during a scan, or through post-processing of patterns recorded at each point in the scan. Such images lend themselves to correlative analysis of the image data with the usual orientation data provided by EBSD and with chemical data obtained simultaneously via X-Ray Energy Dispersive Spectroscopy (XEDS).
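
    Forming the 5 × 5 array of images amounts to an axis permutation over the stack of extremely binned patterns. A minimal sketch, assuming the patterns are stored as a (rows, cols, 5, 5) array (names are illustrative, not from the paper):

```python
import numpy as np

def virtual_detector_images(patterns):
    """Turn a map of 5x5-binned EBSD patterns into 25 scattered-electron
    images: images[i, j] is the scan-area image formed by treating
    pattern pixel (i, j) as an independent detector."""
    p = np.asarray(patterns, dtype=float)   # shape (rows, cols, 5, 5)
    return p.transpose(2, 3, 0, 1)          # shape (5, 5, rows, cols)

# e.g., a 200 x 300 point scan:
imgs = virtual_detector_images(np.random.rand(200, 300, 5, 5))
assert imgs.shape == (5, 5, 200, 300)
```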

  17. A Cost-Effective Method for Crack Detection and Measurement on Concrete Surface

    NASA Astrophysics Data System (ADS)

    Sarker, M. M.; Ali, T. A.; Abdelfatah, A.; Yehia, S.; Elaksher, A.

    2017-11-01

    Crack detection and measurement in the surface of concrete structures is currently carried out manually or through Non-Destructive Testing (NDT) such as imaging or scanning. Recent developments in depth (stereo) cameras present an opportunity for cost-effective, reliable crack detection and measurement. This study evaluated the feasibility of the new inexpensive depth camera (ZED) for crack detection and measurement. This depth camera, with its lightweight and portable nature, produces a 3D data file of the imaged surface. The ZED camera was used to image a concrete surface, and the 3D file was processed to detect and analyse cracks. This article describes the outcome of the experiment carried out with the ZED camera as well as the processing tools used for crack detection and analysis. The crack properties of interest were length, orientation, and width. The use of the ZED camera allowed for distinction between surface and concrete cracks. The ZED's high-resolution capability and point-cloud capture technology helped in generating dense 3D data in low-lighting conditions. The results showed the ability of the ZED camera to capture the depth changes between surface (render) cracks and cracks that form in the concrete itself.

  18. Design and Development of a Low-Cost Aerial Mobile Mapping System for Multi-Purpose Applications

    NASA Astrophysics Data System (ADS)

    Acevedo Pardo, C.; Farjas Abadía, M.; Sternberg, H.

    2015-08-01

    The research project with the working title "Design and development of a low-cost modular Aerial Mobile Mapping System" was formed during the last year as the result of numerous discussions and considerations with colleagues from the HafenCity University Hamburg, Department Geomatics. The aim of the project is to design a sensor platform which can be embedded preferentially on a UAV, but which can also be integrated on any adaptable vehicle. The system should perform direct scanning of surfaces with a laser scanner, supported by sensors for determining the position and attitude of the platform. The modular design allows its extension with other sensors such as multispectral cameras, digital cameras, or multiple-camera systems.

  19. Two-dimensional simulation and modeling in scanning electron microscope imaging and metrology research.

    PubMed

    Postek, Michael T; Vladár, András E; Lowney, Jeremiah R; Keery, William J

    2002-01-01

    Traditional Monte Carlo modeling of the electron beam-specimen interactions in a scanning electron microscope (SEM) produces information about electron beam penetration and output signal generation at either a single beam-landing location, or multiple landing positions. If the multiple landings lie on a line, the results can be graphed in a line scan-like format. Monte Carlo results formatted as line scans have proven useful in providing one-dimensional information about the sample (e.g., linewidth). When used this way, this process is called forward line scan modeling. In the present work, the concept of image simulation (or the first step in the inverse modeling of images) is introduced where the forward-modeled line scan data are carried one step further to construct theoretical two-dimensional (2-D) micrographs (i.e., theoretical SEM images) for comparison with similar experimentally obtained micrographs. This provides an ability to mimic and closely match theory and experiment using SEM images. Calculated and/or measured libraries of simulated images can be developed with this technique. The library concept will prove to be very useful in the determination of dimensional and other properties of simple structures, such as integrated circuit parts, where the shape of the features is preferably measured from a single top-down image or a line scan. This paper presents one approach to the generation of 2-D simulated images and presents some suggestions as to their application to critical dimension metrology.
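
    For a feature that does not vary along the slow-scan axis, the forward step from a modeled line scan to a simulated 2-D micrograph can be as simple as replicating the profile row by row and adding noise; the paper's simulations handle more general cases. A sketch under that assumption, with illustrative names:

```python
import numpy as np

def simulated_micrograph(line_profile, n_rows, noise_sigma=0.02, seed=0):
    """Stack a forward-modeled Monte Carlo line scan into a synthetic 2-D
    SEM micrograph, assuming the feature is uniform along the slow-scan
    axis (every row shares the same profile) plus additive noise."""
    rng = np.random.default_rng(seed)
    rows = np.tile(np.asarray(line_profile, dtype=float), (n_rows, 1))
    return rows + rng.normal(0.0, noise_sigma, size=rows.shape)
```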

  20. Display system employing acousto-optic tunable filter

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor)

    1995-01-01

    An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with a RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.

  1. Display system employing acousto-optic tunable filter

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor)

    1993-01-01

    An acousto-optic tunable filter (AOTF) is employed to generate a display by driving the AOTF with a RF electrical signal comprising modulated red, green, and blue video scan line signals and scanning the AOTF with a linearly polarized, pulsed light beam, resulting in encoding of color video columns (scan lines) of an input video image into vertical columns of the AOTF output beam. The AOTF is illuminated periodically as each acoustically-encoded scan line fills the cell aperture of the AOTF. A polarizing beam splitter removes the unused first order beam component of the AOTF output and, if desired, overlays a real world scene on the output plane. Resolutions as high as 30,000 lines are possible, providing holographic display capability.

  2. A New Era in Solar Thermal-IR Astronomy: the NSO Array Camera (NAC) on the McMath-Pierce Telescope

    NASA Astrophysics Data System (ADS)

    Ayres, T.; Penn, M.; Plymate, C.; Keller, C.

    2008-09-01

    The U.S. National Solar Observatory Array Camera (NAC) is a cryogenically cooled 1K×1K InSb "Aladdin" array that recently became operational at the McMath-Pierce facility on Kitt Peak, a high dry site in the southwest U.S. (Arizona). The new camera is similar to those already incorporated into instruments on nighttime telescopes, and has unprecedented sensitivity, low noise, and excellent cosmetics compared with the Amber Engineering (AE) device it replaces. (The latter was scavenged from a commercial surveillance camera in the 1990's: only 256×256 format, high noise, and annoying flatfield structure.) The NAC focal plane is maintained at 30 K by a mechanical closed-cycle helium cooler, dispensing with the cumbersome pumped solid-N2 40 K system used previously with the AE camera. The NAC linearity has been verified for exposures as short as 1 ms, although latency in the data recording holds the maximum frame rate to about 8 Hz (in "streaming mode"). The camera is run in tandem with the Infrared Adaptive Optics (IRAO) system. Utilizing a 37-actuator deformable mirror, IRAO can, under moderate seeing conditions, correct the telescope image to the diffraction limit longward of 2.3 µm (if a suitable high-contrast target is available: the IR granulation has proven too bland to reliably track). IRAO also provides fine control over the solar image for spatial scanning in long-slit mode with the 14 m vertical "Main" spectrograph (MS). A 1'×1' area scan, with 0.5" steps orthogonal to the slit direction, requires less than half a minute, much shorter than p-mode and granulation evolution time scales. A recent engineering test run, in April 2008, utilized NAC/IRAO/MS to capture the fundamental (4.6 µm) and first-overtone (2.3 µm) rovibrational bands of CO, including maps of quiet regions, drift scans along the equatorial limbs (to measure the off-limb molecular emissions), and imaging of a fortuitous small sunspot pair, a final gasp, perhaps, of Cycle 23. Future work with the NAC will emphasize pathfinding toward the next generation of IR imaging spectrometers for the Advanced Technology Solar Telescope, whose 4 m aperture finally will bring sorely needed high spatial resolution to daytime infrared astronomy. In the meantime, the NAC is available to qualified solar physicists from around the world to conduct forefront research in the 1-5 µm region, on the venerable, but infrared friendly, McMath-Pierce telescope.

  3. Robust estimation of simulated urinary volume from camera images under bathroom illumination.

    PubMed

    Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji

    2016-08-01

    General uroflowmetry methods involve a risk of nosocomial infection, or time and effort for the recording. Medical institutions therefore need to measure voided volume simply and hygienically. A multiple cylindrical model that can estimate fluid flow rate from images photographed with a camera was proposed in an earlier study. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple cylindrical model. However, large amounts of noise arise in extracting the liquid region because the illumination varies when measurements are performed in the bathroom, so the estimation error becomes very large. In other words, the specifications of the previous study's camera setup regarding shutter type and frame rate were too strict. In this study, we relax those specifications to achieve flow rate estimation with a general-purpose camera. In order to determine an appropriate approximate curve, we propose a binarizing method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of the proposed method for flow rate estimation.
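
    A rough sketch of the two ideas named above, with hypothetical parameter names and thresholds: each scan row is binarized against its own background-subtraction residual statistics (tolerating uneven bathroom illumination), and RANSAC fits a robust curve, here a parabola, to the extracted liquid-region points.

```python
import numpy as np

def liquid_mask(frame, background, k=3.0):
    """Binarize a frame by row-wise background subtraction: each scan row
    is thresholded against its own residual mean and spread (sketch)."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    mu = diff.mean(axis=1, keepdims=True)
    sd = diff.std(axis=1, keepdims=True)
    return diff > mu + k * sd

def ransac_parabola(xs, ys, iters=200, tol=2.0, seed=0):
    """Robustly fit y = a*x^2 + b*x + c with RANSAC (minimal sketch)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(xs), 3, replace=False)   # minimal sample
        coeffs = np.polyfit(xs[idx], ys[idx], 2)
        resid = np.abs(np.polyval(coeffs, xs) - ys)
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = coeffs, inliers
    return best
```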

  4. Can We Use Low-Cost 360 Degree Cameras to Create Accurate 3D Models?

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  5. [Microinjection Monitoring System Design Applied to MRI Scanning].

    PubMed

    Xu, Yongfeng

    2017-09-30

    A microinjection monitoring system applied to MRI scanning was introduced. A micro camera probe is inserted into the main magnet for real-time video monitoring of the injection tube terminal. A program based on LabVIEW was created to analyze and process the real-time video information, and the feedback signal is used for intelligent control of the modified injection pump. The real-time monitoring system makes injection practical even though the injection device is located away from the sample, which sits inside the magnet room and is not directly visible. A 9.4 T MRI scanning experiment showed that the system works stably in an ultra-high field and does not affect the MRI scans.

  6. KSC-02pd1131

    NASA Image and Video Library

    2002-07-10

    KENNEDY SPACE CENTER, FLA. -- Scott Minnick, with United Space Alliance, places a fiber-optic camera inside the flow line on Endeavour. Minnick wears a special viewing apparatus that sees where the camera is going. The inspection is the result of small cracks being discovered on the LH2 Main Propulsion System (MPS) flow liners in other orbiters. Endeavour is next scheduled to fly on mission STS-113.

  7. KSC-02pd1128

    NASA Image and Video Library

    2002-07-10

    KENNEDY SPACE CENTER, FLA. -- Scott Minnick, with United Space Alliance, places a fiber-optic camera inside the flow line on Endeavour. Minnick wears a special viewing apparatus that sees where the camera is going. The inspection is the result of small cracks being discovered on the LH2 Main Propulsion System (MPS) flow liners in other orbiters. Endeavour is next scheduled to fly on mission STS-113.

  8. Orbiter Camera Payload System

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Components for an orbiting camera payload system (OCPS) include the large format camera (LFC), a gas supply assembly, and ground test, handling, and calibration hardware. The LFC, a high resolution large format photogrammetric camera for use in the cargo bay of the space transport system, is also adaptable to use on an RB-57 aircraft or on a free flyer satellite. Carrying 4000 feet of film, the LFC is usable over the visible to near IR, at V/h rates of from 11 to 41 milliradians per second, overlap of 10, 60, 70 or 80 percent and exposure times of from 4 to 32 milliseconds. With a 12 inch focal length it produces a 9 by 18 inch format (long dimension in line of flight) with full format low contrast resolution of 88 lines per millimeter (AWAR), full format distortion of less than 14 microns and a complement of 45 Reseau marks and 12 fiducial marks. Weight of the OCPS as supplied, fully loaded is 944 pounds and power dissipation is 273 watts average when in operation, 95 watts in standby. The LFC contains an internal exposure sensor, or will respond to external command. It is able to photograph starfields for inflight calibration upon command.

  9. Femtosecond laser for cavity preparation in enamel and dentin: ablation efficiency related factors.

    PubMed

    Chen, H; Li, H; Sun, Yc; Wang, Y; Lü, Pj

    2016-02-11

    To study the effects of laser fluence (laser energy density), scanning line spacing and ablation depth on the efficiency of a femtosecond laser for three-dimensional ablation of enamel and dentin. A diode-pumped, thin-disk femtosecond laser (wavelength 1025 nm, pulse width 400 fs) was used for the ablation of enamel and dentin. The laser spot was guided in a series of overlapping parallel lines on enamel and dentin surfaces to form a three-dimensional cavity. The depth and volume of the ablated cavity were then measured under a 3D measurement microscope to determine the ablation efficiency. Different values of fluence, scanning line spacing and ablation depth were used to assess the effects of each variable on ablation efficiency. Ablation efficiencies for enamel and dentin were maximized at different laser fluences and scanning line spacings, and decreased with increases in laser fluence beyond the optimum, with increases in scanning line spacing beyond the spot diameter, or with increases in ablation depth. Laser fluence, scanning line spacing and ablation depth all significantly affected femtosecond laser ablation efficiency. Reasonable control of each of these parameters will improve future clinical application.
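
    The abstract does not define its efficiency metric; one plausible definition, ablated volume per joule of delivered pulse energy, can be computed as follows (all names and numbers are illustrative, not from the paper):

```python
def ablation_efficiency_mm3_per_J(volume_mm3, fluence_J_cm2,
                                  spot_area_cm2, n_pulses):
    """Ablated volume per joule of delivered laser energy: one plausible
    efficiency definition, assumed for illustration only."""
    delivered_J = fluence_J_cm2 * spot_area_cm2 * n_pulses
    return volume_mm3 / delivered_J

# e.g., 0.02 mm^3 removed by 10,000 pulses at 3 J/cm^2, 30-um spot:
spot_area_cm2 = 3.1416 * (15e-4) ** 2      # radius 15 um in cm
print(ablation_efficiency_mm3_per_J(0.02, 3.0, spot_area_cm2, 10_000))
```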

  10. Scanning and storage of electrophoretic records

    DOEpatents

    McKean, Ronald A.; Stiegman, Jeff

    1990-01-01

    An electrophoretic record that includes at least one gel separation is mounted for motion laterally of the separation record. A light source is positioned to illuminate at least a portion of the record, and a linear array camera is positioned to have a field of view of the illuminated portion of the record and orthogonal to the direction of record motion. The elements of the linear array are scanned at increments of motion of the record across the field of view to develop a series of signals corresponding to intensity of light at each element at each scan increment.
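
    The acquisition reduces to stacking one linear-array exposure per increment of record motion. A minimal sketch of that assembly step, with illustrative names (not from the patent):

```python
import numpy as np

def assemble_record_image(scan_lines):
    """Stack successive linear-array exposures into a 2-D image of the
    electrophoretic record: one row per increment of record motion."""
    return np.vstack([np.asarray(line, dtype=float) for line in scan_lines])

# e.g., 500 motion increments scanned with a 2048-element array:
image = assemble_record_image(np.random.rand(500, 2048))
assert image.shape == (500, 2048)
```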

  11. Environmental performance evaluation of an advanced-design solid-state television camera

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The development of an advanced-design black-and-white solid-state television camera which can survive exposure to space environmental conditions was undertaken. A 380 x 488 element buried-channel CCD is utilized as the image sensor to ensure compatibility with 525-line transmission and display equipment. Specific camera design approaches selected for study and analysis included: (1) component and circuit sensitivity to temperature; (2) circuit board thermal and mechanical design; and (3) CCD temperature control. Preferred approaches were determined and integrated into the final design for two deliverable solid-state TV cameras. One of these cameras was subjected to environmental tests to determine stress limits for exposure to vibration, shock, acceleration, and temperature-vacuum conditions. These tests indicate performance at the design goal limits can be achieved for most of the specified conditions.

  12. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction

    PubMed Central

    Berveglieri, Adilson; Liang, Xinlian; Honkavaara, Eija

    2017-01-01

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468
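    The cylinder-fitting comparison above reduces, for a single horizontal slice of a trunk, to fitting a circle to 2D points. A minimal sketch of one standard approach (the algebraic Kasa fit; the paper does not state which fitting method was used, and the data here are synthetic):

```python
import numpy as np

def fit_circle(x: np.ndarray, y: np.ndarray):
    """Algebraic (Kasa) circle fit: minimizes ||x^2 + y^2 + D*x + E*y + F||."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Synthetic slice: noisy points on a 30 cm diameter trunk cross-section.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
x = 0.15 * np.cos(theta) + rng.normal(0, 0.005, 200)
y = 0.15 * np.sin(theta) + rng.normal(0, 0.005, 200)
cx, cy, r = fit_circle(x, y)
print(f"estimated diameter: {2 * r:.3f} m")   # ~0.30 m
```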

  13. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.

    PubMed

    Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija

    2017-12-02

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.

  14. WE-EF-BRA-01: A Dual-Use Optical Tomography System for Small Animal Radiation Research Platform (SARRP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, K; Bin, Z; Wong, J

    Purpose: We develop a novel dual-use configuration for a tri-modality CBCT/bioluminescence tomography (BLT)/fluorescence tomography (FT) imaging system with the SARRP that can function as a standalone system for longitudinal imaging research and on-board the SARRP to guide irradiation. BLT provides radiation guidance for soft-tissue targets, while FT offers functional information allowing mechanistic investigations. Methods: The optical assembly includes a CCD camera, lens, filter wheel, 3-way mirrors, a scanning fiber system, and a light-tight enclosure. The rotating mirror system directs the optical signal from the animal surface to the camera at multiple projections over 180 degrees. The fiber-laser system serves as the external light source for the FT application. Multiple filters are used for multispectral imaging to enhance localization accuracy using BLT. SARRP CBCT provides anatomical information and a geometric mesh for BLT/FT reconstruction. To facilitate dual use, the 3-way mirror system is cantilevered in front of the camera. The entire optical assembly is driven by a 1D linear stage to dock onto an independent mouse support bed for standalone application. After completion of on-board optical imaging, the system is retracted from the SARRP to allow irradiation of the mouse. Results: A tissue-simulating phantom and a mouse model with a luminescent light source are used to demonstrate the function of the dual-use optical system. Feasibility data have been obtained with a manual-docking prototype. The center of mass of the light source determined in a living mouse with on-board BLT is within 1±0.2 mm of that with CBCT. The performance of the motorized system is expected to be the same and will be presented. Conclusion: We anticipate that the motorized dual-use system will provide a significant efficiency gain over our manual-docking, off-line system. By also supporting off-line longitudinal studies independent of the SARRP, the dual-use system is a highly efficient and cost-effective platform to facilitate optical imaging for pre-clinical radiation research. The work is supported by NIH R01CA158100 and Xstrahl Ltd. Drs. John Wong and Iulian Iordachita receive royalty payments from a licensing agreement between Xstrahl Ltd and Johns Hopkins University. John Wong also has a consultant agreement with Xstrahl Ltd.

  15. Line drawing Scientific Instrument Module and lunar orbital science package

    NASA Technical Reports Server (NTRS)

    1970-01-01

    A line drawing of the Scientific Instrument Module (SIM) with its lunar orbital science package. The SIM will be mounted in a previously vacant sector of the Apollo Service Module. It will carry specialized cameras and instrumentation for gathering lunar orbit scientific data.

  16. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
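    The forward model behind the coded-aperture idea is compact enough to sketch: each of T frames is modulated by a shifted binary mask and the detector integrates the sum into one coded snapshot (recovering the frames then requires a sparse reconstruction solver, omitted here). All array sizes and the random mask below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of coded-aperture temporal compression (CACTI-style).
rng = np.random.default_rng(1)
T, H, W = 8, 64, 64
video = rng.random((T, H, W))                      # stand-in (x, y, t) volume
code = (rng.random((H, W)) > 0.5).astype(float)    # binary coded aperture

# Physically translating the aperture gives each frame a shifted code.
codes = np.stack([np.roll(code, shift=t, axis=0) for t in range(T)])
snapshot = (codes * video).sum(axis=0)             # what the detector integrates

print(snapshot.shape)                              # (64, 64): T frames -> one image
```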

  17. Evaluation of Retinal and Choroidal Thickness by Swept-Source Optical Coherence Tomography: Repeatability and Assessment of Artifacts

    PubMed Central

    Mansouri, Kaweh; Medeiros, Felipe A.; Tatham, Andrew J.; Marchase, Nicholas; Weinreb, Robert N.

    2017-01-01

    PURPOSE To determine the repeatability of automated retinal and choroidal thickness measurements with swept-source optical coherence tomography (SS OCT) and the frequency and type of scan artifacts. DESIGN Prospective evaluation of new diagnostic technology. METHODS Thirty healthy subjects were recruited prospectively and underwent imaging with a prototype SS OCT instrument. Undilated scans of 54 eyes of 27 subjects (mean age, 35.1 ± 9.3 years) were obtained. Each subject had 4 SS OCT protocols repeated 3 times: 3-dimensional (3D) 6 × 6-mm raster scans of the optic disc and macula, radial, and line scans. Automated measurements were obtained through segmentation software. Interscan repeatability was assessed by intraclass correlation coefficients (ICCs). RESULTS ICCs for choroidal measurements were 0.92, 0.98, 0.80, and 0.91, respectively, for 3D macula, 3D optic disc, radial, and line scans. ICCs for retinal measurements were 0.39, 0.49, 0.71, and 0.69, respectively. Artifacts were present in up to 9% of scans. Signal loss because of blinking was the most common artifact on 3D scans (optic disc scan, 7%; macula scan, 9%), whereas segmentation failure occurred in 4% of radial and 3% of line scans. When scans with image artifacts were excluded, ICCs for choroidal thickness increased to 0.95, 0.99, 0.87, and 0.93 for 3D macula, 3D optic disc, radial, and line scans, respectively. ICCs for retinal thickness increased to 0.88, 0.83, 0.89, and 0.76, respectively. CONCLUSIONS Improved repeatability of automated choroidal and retinal thickness measurements was found with SS OCT after correction of scan artifacts. Recognition of scan artifacts is important for correct interpretation of SS OCT measurements. PMID:24531020
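    The paper does not state which ICC variant was used; a minimal sketch of one common choice, ICC(2,1) (two-way random effects, absolute agreement), computed from a subjects-by-scans matrix, with synthetic data standing in for the measurements:

```python
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """data: n_subjects x k_scans matrix. Returns ICC(2,1)."""
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # scans
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - mse) / (ms_rows + (k - 1) * mse + k * (ms_cols - mse) / n)

# Example: 27 subjects, 3 repeated thickness scans with small noise.
rng = np.random.default_rng(2)
truth = rng.normal(300, 30, size=(27, 1))          # per-subject thickness, um
scans = truth + rng.normal(0, 5, size=(27, 3))     # 3 repeats
print(f"ICC(2,1) = {icc_2_1(scans):.2f}")          # near 1 for low noise
```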

  18. Point Cloud Analysis for Uav-Borne Laser Scanning with Horizontally and Vertically Oriented Line Scanners - Concept and First Results

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Müller, M. S.; Hillemann, M.; Reydel, N.; Hinz, S.; Jutzi, B.

    2017-08-01

    In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surrounding of the UAV. In this regard, using a line scanner which scans along the vertical direction and perpendicular to the flight direction results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus a low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. Thereby, we use a downsampling of the original point cloud and different neighborhood types to extract fundamental geometric features which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.
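    The eigenvalue-based geometric features commonly used for this kind of linear/planar/volumetric scene interpretation can be sketched briefly; the paper does not list its exact feature definitions, so the linearity/planarity/sphericity formulas and parameters below are one standard choice, not necessarily the authors':

```python
import numpy as np

def geometric_features(points: np.ndarray, k: int = 20) -> np.ndarray:
    """points: N x 3. Returns N x 3 array [linearity, planarity, sphericity]."""
    feats = np.empty((len(points), 3))
    for i, p in enumerate(points):
        # Brute-force k nearest neighbors (use a KD-tree for real data).
        idx = np.argsort(((points - p) ** 2).sum(axis=1))[:k]
        cov = np.cov(points[idx].T)
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(lam, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats

pts = np.random.default_rng(3).random((500, 3))
print(geometric_features(pts, k=15)[:3])
```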

  19. Inspection of thick welded joints using laser-ultrasonic SAFT.

    PubMed

    Lévesque, D; Asaumi, Y; Lord, M; Bescond, C; Hatanaka, H; Tagami, M; Monchalin, J-P

    2016-07-01

    The detection of defects in thick butt joints in the early phase of multi-pass arc welding would be very valuable to reduce the cost and time of reworking. As a non-contact method, the laser-ultrasonic technique (LUT) has the potential for the automated inspection of welds, ultimately online during manufacturing. In this study, testing has been carried out using LUT combined with the synthetic aperture focusing technique (SAFT) on 25- and 50-mm-thick butt-welded steel joints, both completed and partially welded. EDM slits 2 or 3 mm in height were inserted at different depths in the multi-pass welding process to simulate a lack of fusion. Line scans transverse to the weld are performed with the generation and detection laser spots superimposed directly on the surface of the weld bead. A CCD line camera is used to simultaneously acquire the surface profile for correction in the SAFT processing. All artificial defects, as well as real defects, are visualized in the investigated thick butt weld specimens, either completed or partially welded after a given number of passes. The results obtained clearly show the potential of using LUT with SAFT for the automated inspection of arc welds or hybrid laser-arc welds during manufacturing.
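    At its core, SAFT is delay-and-sum focusing: each A-scan contributes to an image pixel at the sample index matching the two-way travel time from that scan position to the pixel. A minimal sketch (flat-surface geometry, so without the paper's profile correction; the grid sizes and wave speed are illustrative):

```python
import numpy as np

def saft(bscan: np.ndarray, scan_x: np.ndarray, img_x: np.ndarray,
         img_z: np.ndarray, c: float, dt: float) -> np.ndarray:
    """bscan: n_positions x n_samples; c: wave speed (m/s); dt: sample period (s)."""
    n_pos, n_samp = bscan.shape
    image = np.zeros((len(img_z), len(img_x)))
    for i, z in enumerate(img_z):
        for j, x in enumerate(img_x):
            dist = np.hypot(scan_x - x, z)            # path from each scan position
            idx = np.round(2 * dist / c / dt).astype(int)
            ok = idx < n_samp
            image[i, j] = bscan[np.where(ok)[0], idx[ok]].sum()
    return image

# Illustrative parameters for steel (longitudinal c ~ 5900 m/s).
bscan = np.random.default_rng(4).random((64, 1024))
img = saft(bscan, scan_x=np.linspace(0, 0.05, 64),
           img_x=np.linspace(0, 0.05, 50),
           img_z=np.linspace(0.005, 0.05, 50), c=5900.0, dt=1e-8)
print(img.shape)
```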

  20. Line of sight pointing technology for laser communication system between aircrafts

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Liu, Yunqing; Song, Yansong

    2017-12-01

    In space optical communications, it is important to achieve efficient line of sight (LOS) pointing. Errors in position (latitude, longitude, and altitude), attitude angles (pitch, yaw, and roll), and installation angles among different coordinate systems are usually unavoidable when assembling and operating an aircraft optical communication terminal. These errors lead to pointing errors and make it difficult for the LOS system to point at the other terminal to establish a communication link. The LOS pointing technology of an aircraft optical communication system has been researched using a transformation matrix between the coordinate systems of two aircraft terminals. A method of LOS calibration has been proposed to reduce the pointing error. In a flight test, a successful 144-km link was established between two aircraft. The position and attitude angles of the aircraft, obtained from a double-antenna GPS/INS system, were used to calculate the pointing angles in azimuth and elevation. The size of the field of uncertainty (FOU) and the pointing accuracy are analyzed based on error theory, and the FOU was also measured using an observation camera installed next to the optical LOS. Our results show that the FOU of aircraft optical communication is 10 mrad without a filter, which forms the basis for the acquisition strategy and scanning time.
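    The geometric core of the pointing calculation, ignoring the attitude and installation errors that the paper's calibration addresses, is converting two GPS positions into an azimuth/elevation pair. A minimal sketch using standard WGS84 geodetic-to-ECEF and ECEF-to-ENU conversions (the positions are invented for illustration):

```python
import numpy as np

A, E2 = 6378137.0, 6.69437999014e-3          # WGS84 semi-major axis, e^2

def geodetic_to_ecef(lat, lon, alt):
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
    return np.array([(n + alt) * np.cos(lat) * np.cos(lon),
                     (n + alt) * np.cos(lat) * np.sin(lon),
                     (n * (1 - E2) + alt) * np.sin(lat)])

def pointing_angles(own, target):
    """own/target: (lat_deg, lon_deg, alt_m). Returns azimuth, elevation in deg."""
    d = geodetic_to_ecef(*target) - geodetic_to_ecef(*own)
    lat, lon = np.radians(own[0]), np.radians(own[1])
    # Rotate the ECEF difference into local East-North-Up axes.
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.array([-np.sin(lat) * np.cos(lon),
                      -np.sin(lat) * np.sin(lon), np.cos(lat)])
    up = np.array([np.cos(lat) * np.cos(lon),
                   np.cos(lat) * np.sin(lon), np.sin(lat)])
    e, n, u = d @ east, d @ north, d @ up
    return np.degrees(np.arctan2(e, n)), np.degrees(np.arctan2(u, np.hypot(e, n)))

az, el = pointing_angles((30.0, 120.0, 9000.0), (30.8, 121.0, 10000.0))
print(f"azimuth {az:.2f} deg, elevation {el:.2f} deg")
```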

  1. Multiple Velocity Profile Measurements in Hypersonic Flows using Sequentially-Imaged Fluorescence Tagging

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Inmian, Jennifer A.; Jones, Stephen B.; Ivey, Christopher B.; Goyne, Christopher P.

    2010-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to perform velocity measurements in hypersonic flows by generating multiple tagged lines which fluoresce as they convect downstream. For each laser pulse, a single interline, progressive-scan intensified CCD camera was used to obtain separate images of the initial undelayed and delayed NO molecules that had been tagged by the laser. The CCD configuration allowed for sub-microsecond acquisition of both images, resulting in sub-microsecond temporal resolution as well as sub-mm spatial resolution (0.5-mm x 0.7-mm). Axial velocity was determined by applying a cross-correlation analysis to the horizontal shift of individual tagged lines. Systematic errors were quantified, including the contribution of gating/exposure duration errors and the influence of collision rate on fluorescence, to establish the temporal uncertainty. The spatial uncertainty depended upon the analysis technique and the signal-to-noise ratio of the acquired profiles. This investigation focused on two hypersonic flow experiments: (1) a reaction control system (RCS) jet on an Orion Crew Exploration Vehicle (CEV) wind tunnel model and (2) a 10-degree half-angle wedge containing a 2-mm tall, 4-mm wide cylindrical boundary layer trip. The experiments were performed at the NASA Langley Research Center's 31-inch Mach 10 wind tunnel.
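    The cross-correlation step can be sketched in a few lines: estimate the pixel shift of a tagged-line intensity profile between the undelayed and delayed images, refine it to sub-pixel precision, and convert to velocity. The pixel pitch and delay below are assumed values for illustration, not the experiment's:

```python
import numpy as np

def line_shift_px(profile0: np.ndarray, profile1: np.ndarray) -> float:
    """Shift of profile1 relative to profile0, with parabolic sub-pixel peak."""
    p0 = profile0 - profile0.mean()
    p1 = profile1 - profile1.mean()
    corr = np.correlate(p1, p0, mode="full")
    k = int(np.argmax(corr))
    if 0 < k < len(corr) - 1:                      # 3-point parabola refinement
        denom = corr[k - 1] - 2 * corr[k] + corr[k + 1]
        k = k + 0.5 * (corr[k - 1] - corr[k + 1]) / denom
    return k - (len(profile0) - 1)

x = np.arange(256.0)
line0 = np.exp(-((x - 100.0) / 4.0) ** 2)          # tagged line, undelayed image
line1 = np.exp(-((x - 112.3) / 4.0) ** 2)          # same line in delayed image
shift = line_shift_px(line0, line1)
PIXEL_M, DT_S = 50e-6, 1e-6                        # assumed pitch and delay
print(f"shift {shift:.2f} px -> u = {shift * PIXEL_M / DT_S:.0f} m/s")
```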

  2. Application of polarization in high speed, high contrast inspection

    NASA Astrophysics Data System (ADS)

    Novak, Matthew J.

    2017-08-01

    Industrial optical inspection often requires high speed and high throughput of materials. Engineers use a variety of techniques to handle these inspection needs. Some examples include line scan cameras, high speed multi-spectral and laser-based systems. High-volume manufacturing presents different challenges for inspection engineers. For example, manufacturers produce some components in quantities of millions per month, per week or even per day. Quality control of so many parts requires creativity to achieve the measurement needs. At times, traditional vision systems lack the contrast to provide the data required. In this paper, we show how dynamic polarization imaging captures high contrast images. These images are useful for engineers to perform inspection tasks in some cases where optical contrast is low. We will cover basic theory of polarization. We show how to exploit polarization as a contrast enhancement technique. We also show results of modeling for a polarization inspection application. Specifically, we explore polarization techniques for inspection of adhesives on glass.

  3. Femtoelectron-Based Terahertz Imaging of Hydration State in a Proton Exchange Membrane Fuel Cell

    NASA Astrophysics Data System (ADS)

    Buaphad, P.; Thamboon, P.; Kangrang, N.; Rhodes, M. W.; Thongbai, C.

    2015-08-01

    Imbalanced water management in a proton exchange membrane (PEM) fuel cell significantly reduces cell performance and durability. Visualization of water distribution and transport can provide greater insight toward optimization of the PEM fuel cell. In this work, we are interested in water flooding issues occurring in the flow channels on the cathode side of the PEM fuel cell. The sample cell was fabricated with a transparent acrylic window allowing light access, and the process of flooding formation was observed in situ via a CCD camera. We then explore the potential use of terahertz (THz) imaging, consisting of a femtoelectron-based THz source and off-angle reflective-mode imaging, to identify the presence of water in the sample cell. We present simulations of two hydration states (water and non-water areas), which are in agreement with the THz image results. A line-scan plot is utilized for quantitative analysis and for defining the spatial resolution of the image. Implementing metal mesh filtering can improve the spatial resolution of our THz imaging system.

  4. High sensitive volumetric imaging of renal microcirculation in vivo using ultrahigh sensitive optical microangiography

    NASA Astrophysics Data System (ADS)

    Zhi, Zhongwei; Jung, Yeongri; Jia, Yali; An, Lin; Wang, Ruikang K.

    2011-03-01

    We present a non-invasive, label-free imaging technique called ultrahigh sensitive optical microangiography (UHS-OMAG) for highly sensitive volumetric imaging of renal microcirculation. The UHS-OMAG imaging system is based on spectral domain optical coherence tomography (SD-OCT), which uses a CCD camera with a 47,000 A-lines/s scan rate to achieve an imaging speed of 150 frames per second, so that acquiring a 3D image takes only ~7 seconds. The technique, capable of measuring slow blood flow down to 4 um/s, is sensitive enough to image capillary networks, such as the peritubular capillaries and glomeruli within the renal cortex. We show the superior performance of UHS-OMAG in providing depth-resolved volumetric images of the rich renal microcirculation. We monitored the dynamics of the renal microvasculature during renal ischemia and reperfusion. An obvious reduction of renal microvascular density due to renal ischemia was visualized and quantitatively analyzed. This technique can be helpful for the assessment of chronic kidney disease (CKD), which relates to abnormal microvasculature.

  5. Experimental Verification of the Kruskal-Shafranov Stability Limit in Line-Tied Partial Toroidal Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oz, E.; Myers, C. E.; Yamada, M.

    2011-07-19

    The stability properties of partial toroidal flux ropes are studied in detail in the laboratory, motivated by ubiquitous arched magnetic structures found on the solar surface. The flux ropes studied here are magnetized arc discharges formed between two electrodes in the Magnetic Reconnection Experiment (MRX) [Yamada et al., Phys. Plasmas, 4, 1936 (1997)]. The three dimensional evolution of these flux ropes is monitored by a fast visible light framing camera, while their magnetic structure is measured by a variety of internal magnetic probes. The flux ropes are consistently observed to undergo large-scale oscillations as a result of an external kink instability. Using detailed scans of the plasma current, the guide field strength, and the length of the flux rope, we show that the threshold for kink stability is governed by the Kruskal-Shafranov limit for a flux rope that is held fixed at both ends (i.e., q_a = 1).

  6. Experimental verification of the Kruskal-Shafranov stability limit in line-tied partial-toroidal plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oz, E.; Myers, C. E.; Yamada, M.

    2011-10-15

    The stability properties of partial-toroidal flux ropes are studied in detail in the laboratory, motivated by ubiquitous arched magnetic structures found on the solar surface. The flux ropes studied here are magnetized arc discharges formed between two electrodes in the Magnetic Reconnection Experiment (MRX) [Yamada et al., Phys. Plasmas 4, 1936 (1997)]. The three dimensional evolution of these flux ropes is monitored by a fast visible light framing camera, while their magnetic structure is measured by a variety of internal magnetic probes. The flux ropes are consistently observed to undergo large-scale oscillations as a result of an external kink instability. Using detailed scans of the plasma current, the guide field strength, and the length of the flux rope, we show that the threshold for kink stability is governed by the Kruskal-Shafranov limit for a flux rope that is held fixed at both ends (i.e., q_a = 1).
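    The edge safety factor that sets this threshold can be written down directly: for a rope of minor radius a, length L, current I, and guide field Bz, the edge poloidal field is Btheta(a) = mu0*I/(2*pi*a) and q_a = 2*pi*a*Bz/(L*Btheta(a)), with kink onset at q_a = 1. A minimal sketch with illustrative (not MRX-measured) parameters:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def safety_factor(a_m: float, L_m: float, bz_T: float, i_A: float) -> float:
    """Edge safety factor q_a for a line-tied flux rope."""
    b_theta = MU0 * i_A / (2 * np.pi * a_m)        # edge poloidal field
    return 2 * np.pi * a_m * bz_T / (L_m * b_theta)

# Raising the current drives q_a below the Kruskal-Shafranov limit q_a = 1.
for current in (5e3, 10e3, 20e3):
    q = safety_factor(a_m=0.05, L_m=0.5, bz_T=0.1, i_A=current)
    print(f"I = {current/1e3:4.0f} kA -> q_a = {q:.2f}")
```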

  7. A pepper-pot emittance meter for low-energy heavy-ion beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kremers, H. R.; Beijers, J. P. M.; Brandenburg, S.

    2013-02-15

    A novel emittance meter has been developed to measure the four-dimensional, transverse phase-space distribution of a low-energy ion beam using the pepper-pot technique. A characteristic feature of this instrument is that the pepper-pot plate, which has a linear array of holes in the vertical direction, is scanned horizontally through the ion beam. This has the advantage that the emittance can also be measured at locations along the beam line where the beam has a large horizontal divergence. A set of multi-channel plates, a scintillation screen, and a CCD camera is used as a position-sensitive ion detector, allowing a large range of beam intensities to be handled. This paper describes the design, construction, and operation of the instrument as well as the data analysis used to reconstruct the four-dimensional phase-space distribution of an ion beam. Measurements on a 15 keV He+ beam are used as an example.
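    Once hole positions x and reconstructed divergences x' are extracted per beamlet sample, the standard rms-emittance estimate is eps_rms = sqrt(<x^2><x'^2> - <x x'>^2) using centered, intensity-weighted moments. A minimal sketch on synthetic samples (the distributions are invented, and this is not the instrument's full 4D analysis):

```python
import numpy as np

def rms_emittance(x_mm: np.ndarray, xp_mrad: np.ndarray,
                  weights: np.ndarray) -> float:
    """Centered, weighted rms emittance in mm*mrad."""
    w = weights / weights.sum()
    x = x_mm - (w * x_mm).sum()
    xp = xp_mrad - (w * xp_mrad).sum()
    return np.sqrt((w * x**2).sum() * (w * xp**2).sum()
                   - ((w * x * xp).sum()) ** 2)

# Synthetic beamlet samples with correlated divergence.
rng = np.random.default_rng(5)
x = rng.normal(0, 2.0, 2000)                         # mm
xp = 0.5 * x + rng.normal(0, 1.0, 2000)              # mrad
print(f"eps_rms = {rms_emittance(x, xp, np.ones_like(x)):.2f} mm*mrad")  # ~2.0
```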

  8. Detection of an Atmosphere Around the Super-Earth 55 Cancri e

    NASA Astrophysics Data System (ADS)

    Tsiaras, A.; Rocchetto, M.; Waldmann, I. P.; Venot, O.; Varley, R.; Morello, G.; Damiano, M.; Tinetti, G.; Barton, E. J.; Yurchenko, S. N.; Tennyson, J.

    2016-04-01

    We report the analysis of two new spectroscopic observations in the near-infrared of the super-Earth 55 Cancri e, obtained with the WFC3 camera on board the Hubble Space Telescope. 55 Cancri e orbits so close to its parent star that temperatures much higher than 2000 K are expected on its surface. Given the brightness of 55 Cancri, the observations were obtained in scanning mode, adopting a very long scanning length and a very high scanning speed. We use our specialized pipeline to take into account systematics introduced by these observational parameters when coupled with the geometrical distortions of the instrument. We measure the transit depth per wavelength channel with an average relative uncertainty of 22 ppm per visit and find modulations that depart from a straight line model with a 6σ confidence level. These results suggest that 55 Cancri e is surrounded by an atmosphere, which is probably hydrogen-rich. Our fully Bayesian spectral retrieval code, T-REx, has identified HCN to be the most likely molecular candidate able to explain the features at 1.42 and 1.54 μm. While additional spectroscopic observations in a broader wavelength range in the infrared will be needed to confirm the HCN detection, we discuss here the implications of such a result. Our chemical model, developed with combustion specialists, indicates that relatively high mixing ratios of HCN may be caused by a high C/O ratio. This result suggests this super-Earth is a carbon-rich environment even more exotic than previously thought.

  9. Swept Line Electron Beam Annealing of Ion Implanted Semiconductors.

    DTIC Science & Technology

    1982-07-01

    ...of my research to the mainstream of technology. The techniques used for beam processing are distinguished by their beam source and method by... raster scanned CW lasers (CWL), pulsed ion beams (PI), area pulsed electron beams (PEE), raster scanned (RSEB) or multi-scanned electron beams (MSEB)... Continuous wave lasers and multi-scanned or swept-line electron beams are the most likely candidates where high quality or tailored profiles are required.

  10. Improved Real-Time Scan Matching Using Corner Features

    NASA Astrophysics Data System (ADS)

    Mohamed, H. A.; Moussa, A. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, Abu B.

    2016-06-01

    The automation of unmanned vehicle operation has gained a lot of research attention in the last few years because of its numerous applications. Vehicle localization is more challenging in indoor environments, where absolute positioning measurements (e.g., GPS) are typically unavailable. Laser range finders are among the most widely used sensors that help unmanned vehicles localize themselves in indoor environments. Typically, automatic real-time matching of successive scans is performed either explicitly or implicitly by any localization approach that utilizes laser range finders. Many established approaches such as Iterative Closest Point (ICP), Iterative Matching Range Point (IMRP), Iterative Dual Correspondence (IDC), and Polar Scan Matching (PSM) handle the scan matching problem in an iterative fashion, which significantly affects the time consumption. Furthermore, solution convergence is not guaranteed, especially in cases of sharp maneuvers or fast movement. This paper proposes an automated real-time scan matching algorithm where the matching process is initialized using detected corners. This initialization step aims to increase the convergence probability and to limit the number of iterations needed to reach convergence. The corner detection is preceded by line extraction from the laser scans. To evaluate the probability of line availability in indoor environments, various data sets, offered by different research groups, have been tested; the mean numbers of extracted lines per scan for these data sets range from 4.10 to 8.86 lines of more than 7 points. The set of all intersections between extracted lines is detected as corners, regardless of the physical intersection of these line segments in the scan. To account for the uncertainties of the detected corners, the covariance of the corners is estimated using the extracted line variances. The detected corners are used to estimate the transformation parameters between successive scans using least squares, as sketched below. These estimated transformation parameters are used to calculate an adjusted initialization for the scan matching process. The presented method can be employed solely to match successive scans and can also be used to aid other established iterative methods to achieve more effective and faster convergence. The performance and time consumption of the proposed approach are compared with the ICP algorithm alone, without initialization, in different scenarios such as static periods, fast straight movement, and sharp maneuvers.
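    A minimal sketch of the least-squares step on matched corners, here solved in closed form via SVD (Kabsch-style); the paper does not specify its solver, so treat this as one standard realization:

```python
import numpy as np

def rigid_transform_2d(src: np.ndarray, dst: np.ndarray):
    """src, dst: N x 2 matched corners. Returns rotation R (2x2), translation t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])    # guard against reflection
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

# Corners from scan k, rotated 10 degrees and shifted for scan k+1.
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(6).random((8, 2)) * 5
dst = src @ R_true.T + np.array([0.4, -0.2])
R, t = rigid_transform_2d(src, dst)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)   # ~10 deg, [0.4, -0.2]
```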

  11. Gamma-camera 18F-FDG PET in diagnosis and staging of patients presenting with suspected lung cancer and comparison with dedicated PET.

    PubMed

    Oturai, Peter S; Mortensen, Jann; Enevoldsen, Henriette; Eigtved, Annika; Backer, Vibeke; Olesen, Knud P; Nielsen, Henrik W; Hansen, Hanne; Stentoft, Poul; Friberg, Lars

    2004-08-01

    It is not clear whether high-quality coincidence gamma-PET (gPET) cameras can provide clinical data comparable with data obtained with dedicated PET (dPET) cameras in the primary diagnostic work-up of patients with suspected lung cancer. This study focuses on 2 main issues: direct comparison between foci resolved with the 2 different PET scanners and the diagnostic accuracy compared with final diagnosis determined by the combined information from all other investigations and clinical follow-up. Eighty-six patients were recruited to this study through a routine diagnostic program. They all had changes on their chest radiographs, suggesting malignant lung tumor. In addition to the standard diagnostic program, each patient had 2 PET scans that were performed on the same day. After administration of 419 MBq (range = 305-547 MBq) (18)F-FDG, patients were scanned in a dedicated PET scanner about 1 h after FDG administration and in a dual-head coincidence gamma-camera about 3 h after tracer injection. Images from the 2 scans were evaluated in a blinded set-up and compared with the final outcome. Malignant intrathoracic disease was found in 52 patients, and 47 patients had primary lung cancers. dPET detected all patients as having malignancies (sensitivity, 100%; specificity, 50%), whereas gPET missed one patient (sensitivity, 98%; specificity, 56%). For evaluating regional lymph node involvement, sensitivity and specificity rates were 78% and 84% for dPET and 61% and 90% for gPET, respectively. When comparing the 2 PET techniques with clinical tumor stage (TNM), full agreement was obtained in 64% of the patients (Cohen's kappa = 0.56). Comparing categorization of the patients into clinical relevant stages (no malignancy/malignancy suitable for treatment with curative intent/nontreatable malignancy), resulted in full agreement in 81% (Cohen's kappa = 0.71) of patients. Comparing results from a recent generation of gPET cameras obtained about 2 h later than those of dPET, there was a fairly good agreement with regard to detecting primary lung tumors but slightly reduced sensitivity in detecting smaller malignant lesions such as lymph nodes. Depending on the population to be investigated, and if dPET is not available, gPET might provide significant diagnostic information in patients in whom lung cancer is suspected.

  12. Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

    3D terrain relief acquisition is important for a large part of the geosciences. Several methods have been developed to digitize terrain, such as total stations, LiDAR, GNSS, or photogrammetry. To digitize road (or rail track) sides over long sections, mobile spatial imaging systems or UAVs are commonly used. In this project, we compare a still fairly new method, the SfM on-motion technique, with traditional terrain digitizing techniques (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems, and total stations). The SfM on-motion technique generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed 3.2 meters above the ground. Three of them have a GNSS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K-resolution fisheye videos were also used to extract 8.3-megapixel non-geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the SfM on-motion technique are compared with results from classical SfM photogrammetry on a 500-meter-long alpine track. They were also compared with mobile laser scanning data on the same road section. First results seem to indicate that slope structures are well observable down to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of a few meters is much better than the altimetric (Z) accuracy; there is a Z-coordinate shift of a few tens of meters between the GoPro cameras and the Garmin camera, which makes it necessary to give greater freedom to the altimetric coordinates in the processing software. Benefits of this low-cost SfM on-motion method are: 1) a simple setup to use in the field (easy to switch between vehicle types such as car, train, bike, etc.), 2) low cost, and 3) automatic georeferencing of 3D point clouds. The main disadvantages are: 1) results that are less accurate than those from a LiDAR system, 2) heavy image processing, and 3) a short acquisition distance.

  13. A new adaptive light beam focusing principle for scanning light stimulation systems.

    PubMed

    Bitzer, L A; Meseth, M; Benson, N; Schmechel, R

    2013-02-01

    In this article, a novel principle to achieve optimal focusing conditions, or rather the smallest possible beam diameter, for scanning light stimulation systems is presented. It is based on the following methodology: first, a reference point on a camera sensor is introduced where optimal focusing conditions are adjusted, and the distance between the light-focusing optic and the reference point is determined using a laser displacement sensor. In a second step, this displacement sensor is used to map the topography of the sample under investigation. Finally, the actual measurement is conducted using optimal focusing conditions at each measurement point on the sample surface, determined from the height difference between the camera sensor and the sample topography. This principle is independent of the measurement values, the optical or electrical properties of the sample, the light source used, or the selected wavelength. Furthermore, the samples can be tilted, rough, bent, or of different surface materials. In the following, the principle is implemented using an optical beam-induced current system, but in principle it can be applied to any other scanning light stimulation system. Measurements demonstrating its operation are shown, using a polycrystalline silicon solar cell.

  14. Note: Simple hysteresis parameter inspector for camera module with liquid lens

    NASA Astrophysics Data System (ADS)

    Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung

    2010-05-01

    A method to inspect the hysteresis parameter is presented in this article. The hysteresis of the whole camera module with a liquid lens can be measured, rather than merely that of a single lens. Because the variation in focal length influences image quality, we propose utilizing the sharpness of images captured from the camera module for hysteresis evaluation. Experiments reveal that the profile of sharpness hysteresis corresponds to the characteristic contact angle of the liquid lens. Therefore, it can be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection process takes only 20 s to complete. Thus, compared with other instruments, this inspection method is more suitable for integration into mass production lines for online quality assurance.

  15. Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera.

    PubMed

    Sels, Seppe; Bogaerts, Boris; Vanlanduit, Steve; Penne, Rudi

    2018-05-08

    Currently, galvanometric scanning systems (like the one used in a scanning laser Doppler vibrometer) rely on a planar calibration procedure between a two-dimensional (2D) camera and the laser galvanometric scanning system to automatically aim a laser beam at a particular point on an object. In the case of nonplanar or moving objects, this calibration is not sufficiently accurate anymore. In this work, a three-dimensional (3D) calibration procedure that uses a 3D range sensor is proposed. The 3D calibration is valid for all types of objects and retains its accuracy when objects are moved between subsequent measurement campaigns. The proposed 3D calibration uses a Non-Perspective-n-Point (NPnP) problem solution. The 3D range sensor is used to calculate the position of the object under test relative to the laser galvanometric system. With this extrinsic calibration, the laser galvanometric scanning system can automatically aim a laser beam to this object. In experiments, the mean accuracy of aiming the laser beam on an object is below 10 mm for 95% of the measurements. This achieved accuracy is mainly determined by the accuracy and resolution of the 3D range sensor. The new calibration method is significantly better than the original 2D calibration method, which in our setup achieves errors below 68 mm for 95% of the measurements.

  16. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    PubMed Central

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-01-01

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach. PMID:26151203

  17. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. The deployment aims at adopting the least number of cameras to cover the whole scene, which may have obstacles to occlude the line of sight, with expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826

  18. CCD Camera Detection of HIV Infection.

    PubMed

    Day, John R

    2017-01-01

    Rapid and precise quantification of the infectivity of HIV is important for molecular virologic studies, as well as for measuring the activities of antiviral drugs and neutralizing antibodies. An indicator cell line, a CCD camera, and image-analysis software are used to quantify HIV infectivity. The cells of the P4R5 line, which express the receptors for HIV infection as well as β-galactosidase under the control of the HIV-1 long terminal repeat, are infected with HIV and then incubated 2 days later with X-gal to stain the infected cells blue. Digital images of monolayers of the infected cells are captured using a high-resolution CCD video camera and a macro video zoom lens. A software program was developed to process the images and count the blue-stained foci of infection. The described method allows for rapid quantification of infected cells over a wide range of viral inocula, with reproducibility and accuracy, at relatively low cost.
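    A minimal sketch of the counting idea (not the published pipeline): threshold blue-dominant pixels, then count connected components above a minimum size. The color thresholds and minimum size are illustrative assumptions, and the test image is synthetic:

```python
import numpy as np
from scipy import ndimage

def count_foci(rgb: np.ndarray, min_pixels: int = 20) -> int:
    """rgb: H x W x 3 uint8 image of the stained monolayer. Returns focus count."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (b > 100) & (b > r + 30) & (b > g + 30)   # illustrative thresholds
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int((sizes >= min_pixels).sum())

# Synthetic test image: gray background with two blue spots.
img = np.full((200, 200, 3), 180, dtype=np.uint8)
for cy, cx in [(50, 60), (140, 120)]:
    yy, xx = np.ogrid[:200, :200]
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 8 ** 2] = (40, 40, 200)
print(count_foci(img))   # 2
```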

  19. Autonomous pedestrian localization technique using CMOS camera sensors

    NASA Astrophysics Data System (ADS)

    Chun, Chanwoo

    2014-09-01

    We present a pedestrian localization technique that does not need infrastructure. The proposed angle-only measurement method requires specially manufactured shoes. Each shoe has two CMOS cameras and two markers, such as LEDs, attached on the inward side. The line of sight (LOS) angles toward the two markers on the forward shoe are measured using the two cameras on the rear shoe. Our simulation results show that a pedestrian wearing this device while walking through a shopping mall can be accurately guided to the front of a destination store located 100 m away, if the floor plan of the mall is available.
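    The angle-only geometry reduces to intersecting two rays: each camera on the rear shoe contributes one LOS angle to a marker, and with the known inter-camera baseline the marker position follows. A minimal 2D sketch (the baseline and angles are invented values, not from the paper):

```python
import numpy as np

def triangulate(cam1, cam2, angle1, angle2):
    """cam1/cam2: (x, y) camera positions; angles: LOS directions in radians."""
    d1 = np.array([np.cos(angle1), np.sin(angle1)])
    d2 = np.array([np.cos(angle2), np.sin(angle2)])
    # Solve cam1 + s*d1 = cam2 + t*d2 for the ray parameters s, t.
    A = np.column_stack([d1, -d2])
    s, _ = np.linalg.solve(A, np.asarray(cam2) - np.asarray(cam1))
    return np.asarray(cam1) + s * d1

marker = triangulate(cam1=(0.0, 0.0), cam2=(0.08, 0.0),   # assumed 8 cm baseline
                     angle1=np.radians(75), angle2=np.radians(100))
print(marker)   # marker position ahead of the rear shoe, in meters
```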

  20. Real-Time External Respiratory Motion Measuring Technique Using an RGB-D Camera and Principal Component Analysis †

    PubMed Central

    Wijenayake, Udaya; Park, Soon-Yong

    2017-01-01

    Accurate tracking and modeling of internal and external respiratory motion in the thoracic and abdominal regions of the human body is a highly discussed topic in external beam radiotherapy treatment. Errors in target/normal tissue delineation and dose calculation, and the increased exposure of healthy tissue to high radiation doses, are some of the adverse effects of inaccurate tracking of respiratory motion. Many related works have been introduced for respiratory motion modeling, but a majority of them depend heavily on radiography/fluoroscopy imaging, wearable markers, or surgical node implanting techniques. We, in this article, propose a new respiratory motion tracking approach that exploits the advantages of an RGB-D camera. First, we create a patient-specific respiratory motion model using principal component analysis (PCA), removing the spatial and temporal noise of the input depth data. Then, this model is utilized for real-time external respiratory motion measurement with high accuracy. Additionally, we introduce a marker-based depth frame registration technique to limit the measuring area to an anatomically consistent region, which helps to handle patient movements during treatment. We achieved a 0.97 correlation compared to a spirometer and a 0.53 mm average error considering a laser line scanning result as the ground truth. As future work, we will use this accurate measurement of external respiratory motion to generate a correlated motion model that describes the movements of internal tumors. PMID:28792468
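    The PCA denoising idea can be sketched compactly: vectorize the depth frames, keep only the leading principal components (dominated by the respiratory signal), and reconstruct. The component count and the synthetic breathing data are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def pca_denoise(frames: np.ndarray, n_components: int = 2) -> np.ndarray:
    """frames: T x H x W depth sequence. Returns PCA-denoised sequence."""
    T = frames.shape[0]
    X = frames.reshape(T, -1)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    Xr = U[:, :n_components] * S[:n_components] @ Vt[:n_components] + mean
    return Xr.reshape(frames.shape)

# Synthetic chest region: sinusoidal breathing plus sensor noise.
t = np.linspace(0, 10, 100)
breath = 5.0 * np.sin(2 * np.pi * 0.25 * t)        # mm-scale breathing signal
noise = np.random.default_rng(7).normal(0, 2, (100, 32, 32))
frames = 800 + breath[:, None, None] + noise
print(pca_denoise(frames).std(axis=(1, 2))[:3])    # spatial noise suppressed
```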

  1. A Fresh Cadaver Study on Indocyanine Green Fluorescence Lymphography: A New Whole-Body Imaging Technique for Investigating the Superficial Lymphatics.

    PubMed

    Shinaoka, Akira; Koshimune, Seijiro; Yamada, Kiyoshi; Kumagishi, Kanae; Suami, Hiroo; Kimata, Yoshihiro; Ohtsuka, Aiji

    2018-05-01

    Identification of the lymphatic system in cadavers is painstaking because lymphatic vessels have very thin walls and are transparent. Selection of appropriate contrast agents is a key factor for successfully visualizing the lymphatics. In this study, the authors introduce a new imaging technique of lymphatic mapping in the whole bodies of fresh cadavers. Ten fresh human cadavers were used for this study. The authors injected 0.1 ml of indocyanine green fluorescence solution subcutaneously at multiple spots along the watershed lines between lymphatic territories and hand and foot regions. After the body was scanned by the near-infrared camera system, fluorescent tissues were harvested and histologic examination was performed under the microscope equipped with the infrared camera system to confirm that they were the lymphatics. Subcutaneously injected indocyanine green was immediately transported into the lymphatic vessels after gentle massage on the injection points. Sweeping massage along the lymphatic vessels facilitated indocyanine green transport inside the lymphatic vessel to move toward the lymph nodes. The lymphatic system was visualized well in the whole body. Histologic examinations confirmed that indocyanine green was detected in the lymphatic lumens specifically, even when located far from the injected points. The lymphatic system could be visualized in whole-body fresh cadavers, as in living bodies, using indocyanine green fluorescence lymphography. Compatibility of indocyanine green lymphography would facilitate the use of cadaveric specimens for macroscopic and microscopic analyses.

  2. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either a certain relationship between the visible colors, which is not valid for SWIR imaging, or the presence of one color band with a higher sampling rate compared to the rest of the bands, which does not conform to our spectral filter pattern. We will discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.
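    A simple baseline (not the paper's edge-guided or super-resolution methods) makes the 3x3-mosaic setting concrete: each band is sampled on a sparse lattice and interpolated by normalized box convolution. Since a 3x3 window covers exactly one full pattern period, every window contains one sample of each band:

```python
import numpy as np
from scipy import ndimage

def demosaic_3x3(mosaic: np.ndarray) -> np.ndarray:
    """mosaic: H x W raw frame with a repeating 3x3 filter pattern.
    Returns H x W x 9 band estimates via normalized box convolution."""
    H, W = mosaic.shape
    yy, xx = np.mgrid[:H, :W]
    bands = np.empty((H, W, 9))
    kernel = np.ones((3, 3))
    for band in range(9):
        mask = ((yy % 3) * 3 + (xx % 3) == band).astype(float)
        num = ndimage.convolve(mosaic * mask, kernel, mode="mirror")
        den = ndimage.convolve(mask, kernel, mode="mirror")
        bands[..., band] = num / den               # den >= 1 everywhere
    return bands

raw = np.random.default_rng(8).random((90, 120))
print(demosaic_3x3(raw).shape)                     # (90, 120, 9)
```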

  3. Design of noise barrier inspection system for high-speed railway

    NASA Astrophysics Data System (ADS)

    Liu, Bingqian; Shao, Shuangyun; Feng, Qibo; Ma, Le; Cholryong, Kim

    2016-10-01

    Damage to noise barriers greatly reduces the transportation safety of high-speed railways. In this paper, an online noise barrier inspection system based on laser vision is proposed for the safety of high-speed railways. The inspection system, consisting mainly of a fast camera and a line laser, is installed in the first carriage of the high-speed CIT (Composited Inspection Train). A laser line is projected onto the surface of the noise barriers, and images of the light line are captured by the camera while the train runs at high speed. The distance between the inspection system and the noise barrier can then be obtained from the laser triangulation principle, as sketched below. The results of field tests show that the proposed system meets the requirements of high speed and high accuracy for capturing the contour distortion of the noise barriers.
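    A minimal sketch of the triangulation relation (the paper's calibrated geometry is not given, so the following plane/ray model and all numbers are assumptions): with camera and laser separated by baseline B and the laser plane tilted by theta relative to the optical axis, a pixel offset u maps to a viewing angle phi = atan(u/f), and the range is Z = B / (tan(theta) + tan(phi)):

```python
import numpy as np

def range_from_pixel(u_px: float, f_px: float, baseline_m: float,
                     theta_rad: float) -> float:
    """Distance along the optical axis to the illuminated surface point."""
    phi = np.arctan2(u_px, f_px)          # viewing angle of the laser spot
    return baseline_m / (np.tan(theta_rad) + np.tan(phi))

# Assumed geometry: 10 cm baseline, 20 deg laser tilt, 2000 px focal length.
for u in (0.0, 50.0, 100.0):
    z = range_from_pixel(u, 2000.0, 0.10, np.radians(20))
    print(f"u = {u:5.1f} px -> Z = {z:.3f} m")
```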

  4. Pixel Dynamics Analysis of Photospheric Spectral Data

    DTIC Science & Technology

    2014-11-13

    ...absorption lines centered at 6301.5 Å and 6302.5 Å; the two smaller absorption lines are telluric lines. The analysis is carried out for a range of... cadence and consist of 251 scan lines. These two new sets of SOLIS VSM data also revealed more inconsistent instrument movements between scans, forcing us... SOLIS VSM instrument. The wavelength range shows two photospheric absorption lines, Fe I 6301.5 Å and Fe I 6302.5 Å, and two smaller telluric lines.

  5. Improved Scanners for Microscopic Hyperspectral Imaging

    NASA Technical Reports Server (NTRS)

    Mao, Chengye

    2009-01-01

    Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary either to move the specimen relative to the optical assembly that includes the microscope or else to move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. In one version, the window would be a slit, the CCD would contain a one-dimensional array of pixels, and the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion. The image built up by scanning in this case would be an ordinary (non-spectral) image. In another version, the optics of which are depicted in the lower part of the figure, the spatial window would be a slit, the CCD would contain a two-dimensional array of pixels, the slit image would be refocused onto the CCD by a relay-lens pair consisting of a collimating and a focusing lens, and a prism-grating-prism optical spectrometer would be placed between the collimating and focusing lenses. Consequently, the image on the CCD would be spatially resolved along the slit axis and spectrally resolved along the axis perpendicular to the slit. As in the first-mentioned version, the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion.
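    The pushbroom acquisition described above assembles a data cube one slit frame at a time; a minimal sketch of the bookkeeping (the frame shape, scan count, and read_frame callback are assumptions standing in for the camera readout):

```python
import numpy as np

def assemble_cube(read_frame, n_scan_positions: int) -> np.ndarray:
    """read_frame(i) -> 2D array (n_slit_pixels, n_bands) at scan step i.
    Stacking frames over scan positions yields a (scan, slit, wavelength) cube."""
    frames = [read_frame(i) for i in range(n_scan_positions)]
    return np.stack(frames, axis=0)

# Stand-in for the CCD readout while the objective lens steps across the scene.
rng = np.random.default_rng(9)
cube = assemble_cube(lambda i: rng.random((256, 64)), n_scan_positions=100)
print(cube.shape)   # (100, 256, 64)
```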

  6. An Augmented-Reality Edge Enhancement Application for Google Glass

    PubMed Central

    Hwang, Alex D.; Peli, Eli

    2014-01-01

    Purpose Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to Glass wearers. The enhanced central vision can be naturally integrated with scanning. Methods Google Glass's camera lens distortions were corrected by image warping. Since the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angles are off by 10°, the warped camera image had to go through a series of 3D transformations to minimize parallax errors before the final projection to the Glass's see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancements was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. Results For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with the diffuser film. The performance boost is limited by the Glass camera's performance; the authors assume this accounts for why performance improvements were observed only in the diffuser-film condition (simulating low vision). Conclusions Improvements were measured with simulated visual impairments. With the benefit of see-through augmented-reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration. PMID:24978871
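    A minimal sketch of the edge-enhancement idea in general terms (not the Glass implementation, whose warping and parallax correction are omitted): extract a Sobel gradient magnitude and brighten those pixels to raise perceived contrast. The gain and normalization are illustrative choices:

```python
import numpy as np
from scipy import ndimage

def edge_enhanced(gray: np.ndarray, gain: float = 2.0) -> np.ndarray:
    """gray: H x W float image in [0, 1]. Returns edge-boosted image in [0, 1]."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    edges = np.hypot(gx, gy)
    edges /= max(edges.max(), 1e-9)                # normalize edge strength
    return np.clip(gray + gain * edges, 0.0, 1.0)  # overlay edges on the view

frame = np.random.default_rng(10).random((120, 160))
print(edge_enhanced(frame).shape)
```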

  7. Fisheye Multi-Camera System Calibration for Surveying Narrow and Complex Architectures

    NASA Astrophysics Data System (ADS)

    Perfetti, L.; Polari, C.; Fassi, F.

    2018-05-01

    Narrow spaces and passages are not a rare encounter in cultural heritage; the shape and extent of those areas pose a serious challenge to any technique one may choose to survey their 3D geometry, especially techniques that rely on stationary instrumentation such as terrestrial laser scanning. The ratio between spatial extension and cross-section width of many corridors and staircases can easily lead to distortion/drift of the 3D reconstruction because of the propagation of uncertainty. This paper investigates the use of fisheye photogrammetry to produce the 3D reconstruction of such spaces and presents some tests to constrain the degrees of freedom of the photogrammetric network, thereby containing the drift of long datasets as well. The idea is to employ a multi-camera system composed of several fisheye cameras and to implement distance and relative-orientation constraints, as well as pre-calibration of the internal parameters of each camera, within the bundle adjustment. For the beginning of this investigation, we used the NCTech iSTAR panoramic camera as a rigid multi-camera system. The case study of the Amedeo Spire of the Milan Cathedral, which encloses a spiral staircase, is the stage for all the tests. Comparisons have been made between the results obtained with the multi-camera configuration, the auto-stitched equirectangular images, and a dataset obtained with a monocular fisheye configuration using a full-frame DSLR. Results show improved accuracy, down to millimetres, using the rigidly constrained multi-camera system.

  8. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangulation scanning method and the structure of a 3D-face measurement system are introduced. In the presented system, a line laser source was selected as the optical probe signal so that one profile line is scanned at a time. A CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging section were calibrated with a machine-vision method, and the triangulation structure parameters were calibrated using finely spaced parallel wires. The CCD image sensor and the line laser indicator were mounted on a linear motor carriage, which sweeps the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, a single CCD image sensor cannot capture the complete image of the laser line; in this system, two CCD image sensors were therefore placed symmetrically on either side of the laser indicator, so the structure effectively contains two laser triangulation measurement units. Another novel design choice is that three laser indicators were arranged to reduce the scanning time, since it is difficult for a person to remain still for a long period. The 3D data were calculated after scanning, and further data processing includes 3D coordinate refinement, mesh calculation, and surface display. Experiments show that this system has a simple structure, high scanning speed, and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
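
    The depth recovery underlying such a scanner is a ray-plane intersection: the calibrated camera back-projects each detected laser pixel into a ray, which is intersected with the calibrated laser light plane. A minimal sketch, assuming a pinhole camera with intrinsic matrix K and a laser plane n·X = d expressed in camera coordinates (the variable names are illustrative, not from the paper):

      import numpy as np

      def triangulate_laser_point(u, v, K, plane_n, plane_d):
          # u, v           : pixel coordinates of the detected laser line
          # K              : 3x3 intrinsic matrix from the lens calibration
          # plane_n, plane_d : laser plane n . X = d from the structure calibration
          # Returns the 3-D surface point in camera coordinates.
          ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # back-projected ray
          t = plane_d / np.dot(plane_n, ray)               # scale along the ray
          return t * ray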

  9. Screening of adulterants in powdered foods and ingredients using line-scan Raman chemical imaging.

    USDA-ARS?s Scientific Manuscript database

    A newly developed line-scan Raman imaging system using a 785 nm line laser was used to authenticate powdered foods and ingredients. The system was used to collect hyperspectral Raman images in the range of 102–2865 wavenumber from three representative food powders mixed with selected adulterants eac...

  10. Endoscopic laser range scanner for minimally invasive, image guided kidney surgery

    NASA Astrophysics Data System (ADS)

    Friets, Eric; Bieszczad, Jerry; Kynor, David; Norris, James; Davis, Brynmor; Allen, Lindsay; Chambers, Robert; Wolf, Jacob; Glisson, Courtenay; Herrell, S. Duke; Galloway, Robert L.

    2013-03-01

    Image guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes implementation of an endoscopic laser range scanner (eLRS) system for accurate, intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which scans a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates is determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds. Lower resolution scans of 27,500 points are acquired in one second. Measurement accuracy of the eLRS, determined through scanning of reference planar and spherical phantoms, is estimated to be 0.38 +/- 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images, onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility has been established in an explanted perfused porcine kidney experiment.
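
    The registration step can be illustrated with a vanilla iterative-closest-point loop (the paper uses a modified ICP variant whose details are not given here); each iteration matches scan points to their nearest model points and solves for the least-squares rigid transform:

      import numpy as np
      from scipy.spatial import cKDTree

      def rigid_fit(P, Q):
          # Least-squares rotation R and translation t mapping P onto Q (Kabsch).
          Pc, Qc = P - P.mean(0), Q - Q.mean(0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:        # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = Q.mean(0) - R @ P.mean(0)
          return R, t

      def icp(scan_pts, model_pts, iters=30):
          # Align eLRS surface points to the preoperative surface model.
          tree = cKDTree(model_pts)
          P = scan_pts.copy()
          for _ in range(iters):
              _, idx = tree.query(P)      # closest-point correspondences
              R, t = rigid_fit(P, model_pts[idx])
              P = P @ R.T + t
          return P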

  11. Evaluation of a hyperspectral image database for demosaicking purposes

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed-Chaker; Süsstrunk, Sabine

    2011-01-01

    We present a study on the applicability of hyperspectral images to evaluate color filter array (CFA) design and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipeline and to compare two different scenarios: evaluating the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluating the performance of demosaicking algorithms applied on the final sRGB color rendered image. The second scenario is the most frequently used one in the literature, because CFA designs and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipeline with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using CMSE, CPSNR, s-CIELAB and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, differs significantly depending on whether the mosaicking/demosaicking is applied to camera raw values or to already rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.
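
    Of the metrics listed, CPSNR is the simplest to state; one common definition pools the squared error over all three color channels before taking the logarithm, as in this sketch:

      import numpy as np

      def cpsnr(ref, test, peak=255.0):
          # Color PSNR: one MSE over all RGB channels jointly.
          err = ref.astype(np.float64) - test.astype(np.float64)
          mse = np.mean(err ** 2)
          return 10.0 * np.log10(peak ** 2 / mse)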

  12. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  13. Static omnidirectional stereoscopic display system

    NASA Astrophysics Data System (ADS)

    Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.

    1999-11-01

    A unique three-camera stereoscopic omnidirectional viewing system based on the periscopic panoramic camera described in the 11/98 SPIE proceedings (AM13) is presented. The three panoramic cameras are combined equilaterally so that each leg of the triangle approximates the human inter-ocular spacing, allowing each panoramic camera to view 240° of the panoramic scene, the most counterclockwise 120° being the left-eye field and the other 120° segment being the right-eye field. Field definition may be by green/red filtration or by time discrimination of the video signal: in the first instance a two-color spectacle is used in viewing the display, and in the second instance LCD goggles are used to differentiate the right/left fields. Radially scanned vidicons or re-mapped CCDs may be used. The display consists of three vertically stacked 120° segments of the panoramic field of view with two fields per frame, Field A being the left-eye display and Field B the right-eye display.

  14. Bringing SARA to School.

    ERIC Educational Resources Information Center

    Gavin, Thomas A.

    2000-01-01

    Well-designed problem-solving plans have something metal detectors and security cameras lack: proof of success. SARA, an acronym for Scanning, Analysis, Response, and Assessment, was shown to increase school safety in districts in Charlotte, North Carolina, and St. Petersburg, Florida. Program workings are explained. (MLH)

  15. Thyroid Scan and Uptake

    MedlinePlus

    ... minutes prior to the test. When it is time for the imaging to begin, you will lie down on a moveable examination table with your head tipped backward and neck extended. The gamma camera will then take a series of images, capturing images of the thyroid gland ...

  16. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors

    PubMed Central

    Everding, Lukas; Conradt, Jörg

    2018-01-01

    In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment with a fixed frequency: most prominently, the same features have to be found in consecutive frames, and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasi-continuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer; hence it is suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets showing artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
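
    The core geometric observation, that a line moving at constant image velocity traces a plane in x-y-t space, suggests a straightforward fit: the plane normal is the least-variance direction of the event cloud. A minimal sketch (not the authors’ exact estimator):

      import numpy as np

      def fit_event_plane(events):
          # events: (N, 3) array of [x, y, t] for events attributed to one line.
          # Returns (centroid, normal) of the best-fit plane in x-y-t space.
          c = events.mean(axis=0)
          _, _, Vt = np.linalg.svd(events - c)
          normal = Vt[-1]                 # direction of least variance
          return c, normal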

  17. Digital adaptive optics line-scanning confocal imaging system.

    PubMed

    Liu, Changgeng; Kim, Myung K

    2015-01-01

    A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide-star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize confocality at the sensor end; the width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with hardware such as the Shack–Hartmann wavefront sensor and deformable mirror, and with the closed-loop feedback adopted in conventional adaptive optics confocal imaging systems, thus reducing optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea.

  18. Novel Phased Array Scanning Employing A Single Feed Without Using Individual Phase Shifters

    NASA Technical Reports Server (NTRS)

    Host, Nicholas K.; Chen, Chi-Chih; Volakis, John L.; Miranda, Felix A.

    2012-01-01

    Phased arrays afford many advantages over mechanically steered systems. However, they are also more complex, heavy, and most of all costly. The high cost mainly originates from the complex feeding structure. This paper proposes a novel feeding scheme to eliminate all phase shifters and achieve scanning via one-dimensional motion. Beam scanning is achieved via a series fed array incorporating feeding transmission lines whose wave velocity can be mechanically adjusted. Along with the line design, ideal element impedances to be used in conjunction with the line are derived. Practical designs are shown which achieve scanning to +/-30deg from boresight. Finally, a prototype is fabricated and measured, demonstrating the concept.
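
    The scanning principle can be sketched numerically: in a series-fed array the transmission line sets a progressive phase -βd between elements, so mechanically changing the line's wave velocity v (and hence β = 2πf/v) moves the array-factor peak, which sits where k₀d·sinθ - βd is a multiple of 2π. The parameter values below are illustrative, not taken from the paper:

      import numpy as np

      c0 = 3e8  # speed of light, m/s

      def array_factor(theta, f, d, v_line, n_elems=16):
          # Uniform-amplitude series-fed line array; v_line is the
          # (mechanically adjustable) wave velocity of the feed line.
          k0 = 2 * np.pi * f / c0
          beta = 2 * np.pi * f / v_line
          n = np.arange(n_elems)[:, None]
          psi = k0 * d * np.sin(theta)[None, :] - beta * d
          return np.abs(np.exp(1j * n * psi).sum(axis=0)) / n_elems

      theta = np.radians(np.linspace(-90, 90, 721))
      af = array_factor(theta, f=10e9, d=0.015, v_line=0.7 * c0)
      print("beam peak at %.1f deg" % np.degrees(theta[af.argmax()]))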

  19. Calibration Methods for a 3D Triangulation Based Camera

    NASA Astrophysics Data System (ADS)

    Schulz, Ulrike; Böhnke, Kay

    A sensor in a camera takes a gray level image (1536 x 512 pixels), which is reflected by a reference body. The reference body is illuminated by a linear laser line. This gray level image can be used for a 3D calibration. The following paper describes how a calibration program calculates the calibration factors. The calibration factors serve to determine the size of an unknown reference body.

  20. ENGINEERING TEST REACTOR (ETR) BUILDING, TRA642. CONTEXTUAL VIEW, CAMERA FACING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ENGINEERING TEST REACTOR (ETR) BUILDING, TRA-642. CONTEXTUAL VIEW, CAMERA FACING EAST. VERTICAL METAL SIDING. ROOF IS SLIGHTLY ELEVATED AT CENTER LINE FOR DRAINAGE. WEST SIDE OF ETR COMPRESSOR BUILDING, TRA-643, PROJECTS TOWARD LEFT AT FAR END OF ETR BUILDING. INL NEGATIVE NO. HD46-37-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  1. Radio astronomy Explorer B antenna aspect processor

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Novello, J.; Reeves, C. C.

    1972-01-01

    The antenna aspect system used on the Radio Astronomy Explorer B spacecraft is described. This system consists of two facsimile cameras, a data encoder, and a data processor. Emphasis is placed on the discussion of the data processor, which contains a data compressor and a source encoder. With this compression scheme a compression ratio of 8 is achieved on a typical line of camera data. These compressed data are then convolutionally encoded.

  2. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    DTIC Science & Technology

    2018-02-15

    [Fragmentary figure captions from the report: ground-truth creation based on marked building feature points in two views 50 frames apart; epipolar lines for point correspondences between two views, and a similar assessment between one camera and all other cameras within the dataset (BA4S).]

  3. The Video Collaborative Localization of a Miner's Lamp Based on Wireless Multimedia Sensor Networks for Underground Coal Mines.

    PubMed

    You, Kaiming; Yang, Wei; Han, Ruisong

    2015-09-29

    Based on wireless multimedia sensor networks (WMSNs) deployed in an underground coal mine, a miner's-lamp video collaborative localization algorithm was proposed to locate miners in scenes with insufficient illumination and bifurcated underground tunnel structures. In a bifurcation area, several camera nodes are deployed along the longitudinal direction of the tunnels, forming a collaborative cluster wirelessly to monitor and locate miners in underground tunnels. Cap-lamps are regarded as the identifying feature of miners in poorly illuminated underground tunnels, which means that miners can be identified by detecting their cap-lamps. A miner's lamp projects mapping points on the imaging planes of the collaborative cameras, and the coordinates of the mapping points are calculated by the collaborative cameras. Then, multiple straight lines between the positions of the collaborative cameras and their corresponding mapping points are established. To find the three-dimensional (3D) coordinates of the miner's lamp, a least-squares method is proposed to obtain the optimal intersection of the multiple straight lines. Tests were carried out both in a corridor and in a realistic underground tunnel scenario, which show that the proposed algorithm has good effectiveness, robustness, and localization accuracy under real-world underground tunnel conditions.
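
    The least-squares intersection of the camera-to-lamp rays has a closed form: summing the projectors onto each line's normal space gives a small linear system for the point minimizing total squared distance to all lines. A sketch (variable names are illustrative):

      import numpy as np

      def intersect_lines(origins, directions):
          # origins   : (N, 3) camera positions in the collaborative cluster
          # directions: (N, 3) vectors from each camera toward its mapping
          #             point of the miner's cap-lamp
          # Returns the point minimizing summed squared distance to all lines.
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for o, d in zip(origins, directions):
              d = d / np.linalg.norm(d)
              M = np.eye(3) - np.outer(d, d)   # projector normal to the line
              A += M
              b += M @ o
          return np.linalg.solve(A, b)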

  4. Improved TDEM formation using fused ladar/digital imagery from a low-cost small UAV

    NASA Astrophysics Data System (ADS)

    Khatiwada, Bikalpa; Budge, Scott E.

    2017-05-01

    Formation of a Textured Digital Elevation Model (TDEM) has been useful in many applications in the fields of agriculture, disaster response, terrain analysis, and more. Use of a low-cost small UAV system with a texel camera (fused lidar/digital imagery) can significantly reduce the cost compared to conventional aircraft-based methods. This paper continues the work reported in a previous paper by Bybee and Budge and reports improvements in performance. A UAV fitted with a texel camera is flown at a fixed height above the terrain, and swaths of texel image data of the terrain below are taken continuously. Each texel swath has one or more lines of lidar data surrounded by a narrow strip of EO data. Texel swaths are taken such that there is some overlap from one swath to its adjacent swath. The GPS/IMU fitted on the camera also gives coarse knowledge of attitude and position. Using this coarse knowledge and the information from the texel image, the error in the camera position and attitude is reduced, which helps in producing an accurate TDEM. This paper improves on the original work by using multiple lines of lidar data per swath. The final results are shown and analyzed for numerical accuracy.

  5. Fabrication of large dual-polarized multichroic TES bolometer arrays for CMB measurements with the SPT-3G camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Posada, C. M.; Ade, P. A. R.; Ahmed, Z.

    2015-08-11

    This work presents the procedures used by Argonne National Laboratory to fabricate large arrays of multichroic transition-edge sensor (TES) bolometers for cosmic microwave background (CMB) measurements. These detectors will be assembled into the focal plane for the SPT-3G camera, the third-generation CMB camera to be installed in the South Pole Telescope. The complete SPT-3G camera will have approximately 2690 pixels, for a total of 16,140 TES bolometric detectors. Each pixel is comprised of a broad-band sinuous antenna coupled to a Nb microstrip line. In-line filters are used to define the different band-passes before the millimeter-wavelength signal is fed to the respective Ti/Au TES bolometers. There are six TES bolometer detectors per pixel, which allow for measurements of three band-passes (95 GHz, 150 GHz and 220 GHz) and two polarizations. The steps involved in the monolithic fabrication of these detector arrays are presented here in detail. Patterns are defined using a combination of stepper and contact lithography. The misalignment between layers is kept below 200 nm. The overall fabrication involves a total of 16 processes, including reactive and magnetron sputtering, reactive ion etching, inductively coupled plasma etching and chemical etching.

  6. SU-C-213-04: Application of Depth Sensing and 3D-Printing Technique for Total Body Irradiation (TBI) Patient Measurement and Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M; Suh, T; Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul

    2015-06-15

    Purpose: To develop and validate an innovative method of using depth-sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion-tracking cameras and integrated depth sensing was used to scan a RANDO™ phantom arranged in a TBI treatment booth to detect and store the 3D surface in a point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to measurements extracted from CT scan images. The thickness, source-to-surface distance, and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve a uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding, and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error for thickness measurements at each section of the phantom compared to CT was 0.44 ± 0.268 cm. These errors resulted in approximately 2% error in the dose calculation and 0.4 mm tungsten thickness deviation for the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed the overall dose difference was within 3%. Conclusion: Motion cameras and depth-sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of compensator production and ensured a more accurate treatment delivery.

  7. Streak camera based SLR receiver for two color atmospheric measurements

    NASA Technical Reports Server (NTRS)

    Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael

    1993-01-01

    To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated to a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtoseconds/pixel. The streak camera is configured to operate with 3 spatial channels; two of these support green (532 nm) and UV (355 nm), while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy, such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics, were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields; each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned across the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data is processed using an iterative three-point running average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse pair separations are determined using half-max and centroid type analysis. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data. Data aggregation using the normal-point approach has provided accurate data fitting and is found to be much more convenient than using the full-rate single-shot data. The systematic errors from this model have been found to be less than 3 ps for normal points.
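
    Two of the processing steps lend themselves to compact illustration: the iterated three-point running average used to remove high-frequency structure, and a centroid-type estimate of pulse timing. The baseline handling below is an assumption made for the sketch, not a detail from the paper:

      import numpy as np

      def smooth_waveform(w, iterations=20):
          # Iterated three-point running average (10 to 30 passes in the paper).
          w = w.astype(np.float64).copy()
          for _ in range(iterations):
              w[1:-1] = (w[:-2] + w[1:-1] + w[2:]) / 3.0
          return w

      def centroid_time(w, t):
          # Centroid estimate of pulse arrival time; the simple minimum
          # subtraction is a crude baseline-removal assumption.
          w = w - w.min()
          return np.sum(t * w) / np.sum(w)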

  8. An Innovative Procedure for Calibration of Strapdown Electro-Optical Sensors Onboard Unmanned Air Vehicles

    PubMed Central

    Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio

    2010-01-01

    This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559

  9. Solid state television camera (CCD-buried channel)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  10. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  11. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  12. Novel automatic detection of pleura and B-lines (comet-tail artifacts) on in vivo lung ultrasound scans

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Hansen, Kristoffer Lindskov; Møller Sørensen, Hasse; Hemmsen, Martin Christian; Ewertsen, Caroline; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-04-01

    This paper presents a novel automatic method for detection of B-lines (comet-tail artifacts) in lung ultrasound scans. B-lines are the artifacts most commonly used for analyzing pulmonary edema. They appear as laser-like vertical beams that arise from the pleural line and spread down, without fading, to the edge of the screen; an increase in their number is associated with the presence of edema. All the scans used in this study were acquired using a BK3000 ultrasound scanner (BK Ultrasound, Denmark) driving a 192-element 5.5 MHz wide linear transducer (10L2W, BK Ultrasound). The dynamic received focus technique was employed to generate the sequences. Six subjects, three patients after major surgery and three normal subjects, were each scanned once, and six ultrasound sequences, each containing 50 frames, were acquired. The proposed algorithm was applied to all 300 in-vivo lung ultrasound images. The pleural line is first segmented on each image, and then the B-line artifacts spreading down from the pleural line are detected and overlaid on the image. The resulting 300 images showed that the mean lateral distance between B-lines detected on images acquired from patients decreased by 20% compared with that of normal subjects. The method can therefore serve as the basis for automatically and qualitatively characterizing the distribution of B-lines.
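
    As a crude, heavily simplified illustration of the pipeline (segment the pleural line, then find bright columns that persist below it), and explicitly not the paper's algorithm:

      import numpy as np

      def detect_b_lines(img, min_extent=0.8):
          # img: 2-D gray-scale lung ultrasound image, rows = depth.
          # Naive assumptions: the pleural line is the brightest row band,
          # and B-lines are bright columns reaching the bottom of the image.
          row_energy = img.mean(axis=1)
          pleura = int(row_energy.argmax())        # pleural-line depth (naive)
          below = img[pleura:, :]
          col_mean = below.mean(axis=0)
          col_extent = (below > below.mean()).mean(axis=0)
          mask = (col_mean > np.percentile(col_mean, 90)) \
                 & (col_extent > min_extent)
          return pleura, np.flatnonzero(mask)      # candidate B-line columns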

  13. Line-scan macro-scale Raman chemical imaging for authentication of powdered foods and ingredients

    USDA-ARS?s Scientific Manuscript database

    Adulteration and fraud for powdered foods and ingredients are rising food safety risks that threaten consumers’ health. In this study, a newly developed line-scan macro-scale Raman imaging system using a 5 W 785 nm line laser as excitation source was used to authenticate the food powders. The system...

  14. Line-scan spatially offset Raman spectroscopy for inspecting subsurface food safety and quality

    USDA-ARS?s Scientific Manuscript database

    This paper presented a method for subsurface food inspection using a newly developed line-scan spatially offset Raman spectroscopy (SORS) technique. A 785 nm laser was used as a Raman excitation source. The line-shape SORS data was collected in a wavenumber range of 0–2815 cm-1 using a detection mod...

  15. Measurement of total-body cobalt-57 vitamin B12 absorption with a gamma camera.

    PubMed

    Cardarelli, J A; Slingerland, D W; Burrows, B A; Miller, A

    1985-08-01

    Previously described techniques for the measurement of the absorption of [57Co]vitamin B12 by total-body counting have required an iron room equipped with scanning or multiple detectors. The present study uses simplifying modifications which make the technique more available and include the use of static geometry, the measurement of body thickness to correct for attenuation, a simple formula to convert the capsule-in-air count to a 100% absorption count, and finally the use of an adequately shielded gamma camera, obviating the need for an iron room.

  16. Space acquired photography

    USGS Publications Warehouse

    ,

    2008-01-01

    Interested in a photograph of the first space walk by an American astronaut, or the first photograph from space of a solar eclipse? Or maybe your interest is in a specific geologic, oceanic, or meteorological phenomenon? The U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center is making photographs of the Earth taken from space available for search, download, and ordering. These photographs were taken by Gemini mission astronauts with handheld cameras or by the Large Format Camera that flew on space shuttle Challenger in October 1984. Space photographs are distributed by EROS only as high-resolution scanned or medium-resolution digital products.

  17. Soft X-ray and XUV imaging with a charge-coupled device /CCD/-based detector

    NASA Technical Reports Server (NTRS)

    Loter, N. G.; Burstein, P.; Krieger, A.; Ross, D.; Harrison, D.; Michels, D. J.

    1981-01-01

    A soft X-ray/XUV imaging camera which uses a thinned, back-illuminated, all-buried channel RCA CCD for radiation sensing has been built and tested. The camera is a slow-scan device which makes possible frame integration if necessary. The detection characteristics of the device have been tested over the 15-1500 eV range. The response was linear with exposure up to 0.2-0.4 erg/sq cm; saturation occurred at greater exposures. Attention is given to attempts to resolve single photons with energies of 1.5 keV.

  18. Electronic recording of holograms with applications to holographic displays

    NASA Technical Reports Server (NTRS)

    Claspy, P. C.; Merat, F. L.

    1979-01-01

    The paper describes an electronic heterodyne recording technique which uses electrooptic modulation to introduce a sinusoidal phase shift between the object and reference waves. The resulting temporally modulated holographic interference pattern is scanned by a commercial image dissector camera, and the rejection of the self-interference terms is accomplished by heterodyne detection at the camera output. The electrical signal representing this processed hologram can then be used to modify the properties of a liquid crystal light valve or a similar device. Such display devices transform the displayed interference pattern into a phase-modulated wave front, rendering a three-dimensional image.

  19. Boundary Layer Transition Detection on a Rotor Blade Using Rotating Mirror Thermography

    NASA Technical Reports Server (NTRS)

    Heineck, James T.; Schuelein, Erich; Raffel, Markus

    2014-01-01

    Laminar-to-turbulent transition on a rotor blade in hover has been imaged using an area-scan infrared camera. A new method for tracking a blade using a rotating mirror was employed. The mirror axis of rotation roughly corresponded to the rotor axis of rotation, and the mirror rotational frequency was one-half that of the rotor. This permitted the use of cameras whose integration time would otherwise have been too long to prevent image blur due to the motion of the blade. This article shows the use of this method for a rotor blade at different collective pitch angles.

  20. Real-time continuous-wave terahertz line scanner based on a compact 1 × 240 InGaAs Schottky barrier diode array detector.

    PubMed

    Han, Sang-Pil; Ko, Hyunsung; Kim, Namje; Lee, Won-Hui; Moon, Kiwon; Lee, Il-Min; Lee, Eui Su; Lee, Dong Hun; Lee, Wangjoo; Han, Seong-Tae; Choi, Sung-Wook; Park, Kyung Hyun

    2014-11-17

    We demonstrate real-time continuous-wave terahertz (THz) line-scanned imaging based on a 1 × 240 InGaAs Schottky barrier diode (SBD) array detector with a scan velocity of 25 cm/s, a scan line length of 12 cm, and a pixel size of 0.5 × 0.5 mm². Foreign substances, such as a paper clip with a spatial resolution of approximately 1 mm that is hidden under a cracker, are clearly detected by this THz line-scanning system. The system consists of the SBD array detector, a 200-GHz gyrotron source, a conveyor system, and several optical components such as a high-density polyethylene cylindrical lens, metal cylindrical mirror, and THz wire-grid polarizer. Using the THz polarizer, the signal-to-noise ratio of the SBD array detector improves because the quality of the source beam is enhanced.

  1. EAARL Topography - George Washington Birthplace National Monument 2008

    USGS Publications Warehouse

    Brock, John C.; Nayegandhi, Amar; Wright, C. Wayne; Stevens, Sara; Yates, Xan

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) and first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the George Washington Birthplace National Monument in Virginia, acquired on March 26, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  2. EAARL Coastal Topography - Northern Gulf of Mexico, 2007: First Surface

    USGS Publications Warehouse

    Smith, Kathryn E.L.; Nayegandhi, Amar; Wright, C. Wayne; Bonisteel, Jamie M.; Brock, John C.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) elevation data were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. The project provides highly detailed and accurate datasets of select barrier islands and peninsular regions of Louisiana, Mississippi, Alabama, and Florida, acquired June 27-30, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  3. EAARL Coastal Topography-Pearl River Delta 2008: Bare Earth

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Miner, Michael D.; Yates, Xan; Bonisteel, Jamie M.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the University of New Orleans (UNO), Pontchartrain Institute for Environmental Sciences (PIES), New Orleans, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Pearl River Delta in Louisiana and Mississippi, acquired March 9-11, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  4. EAARL Coastal Topography-Pearl River Delta 2008: First Surface

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Miner, Michael D.; Yates, Xan; Bonisteel, Jamie M.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the University of New Orleans (UNO), Pontchartrain Institute for Environmental Sciences (PIES), New Orleans, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Pearl River Delta in Louisiana and Mississippi, acquired March 9-11, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  5. EAARL Topography - Jean Lafitte National Historical Park and Preserve 2006

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) and bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Jean Lafitte National Historical Park and Preserve in Louisiana, acquired on September 22, 2006. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  6. EAARL Coastal Topography - Northern Gulf of Mexico, 2007: Bare Earth

    USGS Publications Warehouse

    Smith, Kathryn E.L.; Nayegandhi, Amar; Wright, C. Wayne; Bonisteel, Jamie M.; Brock, John C.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. The purpose of this project is to provide highly detailed and accurate datasets of select barrier islands and peninsular regions of Louisiana, Mississippi, Alabama, and Florida, acquired on June 27-30, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  7. EAARL Submerged Topography - U.S. Virgin Islands 2003

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived submerged topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), South Florida-Caribbean Network, Miami, FL; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate bathymetric datasets of a portion of the U.S. Virgin Islands, acquired on April 21, 23, and 30, May 2, and June 14 and 17, 2003. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
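
    The sub-meter georeferencing of each laser sample described in these records combines the GPS-derived platform position, the IMU attitude, and the per-pulse scan angle and range. A minimal flat-earth version of that chain is sketched below; the local east-north-up frame, the rotation convention, and the neglect of lever arms and boresight angles are all simplifying assumptions, not the EAARL processing chain.

        import numpy as np

        def rotation_zyx(roll, pitch, yaw):
            """Body-to-local rotation matrix from roll/pitch/yaw in radians."""
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            return rz @ ry @ rx

        def georeference_sample(platform_enu, rpy, scan_angle, slant_range):
            """Place one laser sample in a local east-north-up (ENU) frame.

            platform_enu : GPS-derived platform position, meters (ENU).
            rpy          : (roll, pitch, yaw) from the IMU, radians.
            scan_angle   : across-track mirror angle, radians.
            slant_range  : measured range to the surface, meters.
            Lever arms and boresight misalignments are ignored here.
            """
            # laser direction in the body frame: nominally straight down,
            # tilted across-track by the scan angle
            d_body = np.array([np.sin(scan_angle), 0.0, -np.cos(scan_angle)])
            d_local = rotation_zyx(*rpy) @ d_body
            return np.asarray(platform_enu, dtype=float) + slant_range * d_local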

  8. EAARL Coastal Topography-Cape Hatteras National Seashore, North Carolina, Post-Nor'Ida, 2009: Bare Earth

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Fredericks, Xan; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Stevens, Sara

    2011-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the National Park Service Southeast Coast Network's Cape Hatteras National Seashore in North Carolina, acquired post-Nor'Ida (November 2009 nor'easter) on November 27 and 29 and December 1, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  9. EAARL coastal topography and imagery–Western Louisiana, post-Hurricane Rita, 2005: First surface

    USGS Publications Warehouse

    Bonisteel-Cormier, Jamie M.; Wright, Wayne C.; Fredericks, Alexandra M.; Klipp, Emily S.; Nagle, Doug B.; Sallenger, Asbury H.; Brock, John C.

    2013-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, Virginia. This project provides highly detailed and accurate datasets of a portion of the Louisiana coastline beachface, acquired post-Hurricane Rita on September 27-28 and October 2, 2005. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the National Aeronautics and Space Administration (NASA) Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Website.

  10. EAARL Coastal Topography-Maryland and Delaware, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Vivekanandan, Saisudha; Nayegandhi, Amar; Sallenger, A.H.; Wright, C.W.; Brock, J.C.; Nagle, D.B.; Klipp, E.S.

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL. This project provides highly detailed and accurate datasets of a portion of the eastern Maryland and Delaware coastline beachface, acquired post-Nor'Ida (November 2009 nor'easter) on November 28 and 30, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  11. EAARL Coastal Topography-Eastern Louisiana Barrier Islands, Post-Hurricane Gustav, 2008: First Surface

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Nagle, D.B.; Vivekanandan, Saisudha; Fredericks, Xan

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the eastern Louisiana barrier islands, acquired post-Hurricane Gustav (September 2008 hurricane) on September 6 and 7, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  12. EAARL coastal topography-Cape Hatteras National Seashore, North Carolina, post-Nor'Ida, 2009: first surface

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the National Park Service Southeast Coast Network's Cape Hatteras National Seashore in North Carolina, acquired post-Nor'Ida (November 2009 nor'easter) on November 27 and 29 and December 1, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  13. EAARL Coastal Topography-Mississippi and Alabama Barrier Islands, Post-Hurricane Gustav, 2008

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Nagle, D.B.; Klipp, E.S.; Vivekanandan, Saisudha; Fredericks, Xan; Segura, Martha

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Mississippi and Alabama barrier islands, acquired post-Hurricane Gustav (September 2008 hurricane) on September 8, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  14. EAARL Coastal Topography and Imagery-Assateague Island National Seashore, Maryland and Virginia, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Klipp, E.S.; Vivekanandan, Saisudha; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Assateague Island National Seashore in Maryland and Virginia, acquired post-Nor'Ida (November 2009 nor'easter) on November 28 and 30, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  15. EAARL Coastal Topography-Fire Island National Seashore, New York, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Nayegandhi, Amar; Vivekanandan, Saisudha; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Bonisteel-Cormier, J.M.; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Fire Island National Seashore in New York, acquired post-Nor'Ida (November 2009 nor'easter) on December 4, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  16. EAARL coastal topography and imagery-Fire Island National Seashore, New York, 2009

    USGS Publications Warehouse

    Vivekanandan, Saisudha; Klipp, E.S.; Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Fire Island National Seashore in New York, acquired on July 9 and August 3, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  17. EAARL Coastal Topography-Chandeleur Islands, Louisiana, 2010: Bare Earth

    USGS Publications Warehouse

    Nayegandhi, Amar; Bonisteel-Cormier, Jamie M.; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Nagle, David B.; Vivekanandan, Saisudha; Yates, Xan; Klipp, Emily S.

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and submerged topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Chandeleur Islands, acquired March 3, 2010. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  18. EAARL Coastal Topography-Eastern Florida, Post-Hurricane Jeanne, 2004: First Surface

    USGS Publications Warehouse

    Fredericks, Xan; Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Klipp, E.S.; Nagle, D.B.

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the eastern Florida coastline beachface, acquired post-Hurricane Jeanne (September 2004 hurricane) on October 1, 2004. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  19. EAARL Coastal Topography-Sandy Hook Unit, Gateway National Recreation Area, New Jersey, Post-Nor'Ida, 2009

    USGS Publications Warehouse

    Nayegandhi, Amar; Vivekanandan, Saisudha; Brock, J.C.; Wright, C.W.; Bonisteel-Cormier, J.M.; Nagle, D.B.; Klipp, E.S.; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Sandy Hook Unit of Gateway National Recreation Area in New Jersey, acquired post-Nor'Ida (November 2009 nor'easter) on December 4, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  20. EAARL Coastal Topography and Imagery-Naval Live Oaks Area, Gulf Islands National Seashore, Florida, 2007

    USGS Publications Warehouse

    Nagle, David B.; Nayegandhi, Amar; Yates, Xan; Brock, John C.; Wright, C. Wayne; Bonisteel, Jamie M.; Klipp, Emily S.; Segura, Martha

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) topography, first-surface (FS) topography, and canopy-height (CH) datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Science Center, St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Naval Live Oaks Area in Florida's Gulf Islands National Seashore, acquired June 30, 2007. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.

  1. EAARL Coastal Topography - Fire Island National Seashore 2007

    USGS Publications Warehouse

    Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.

    2008-01-01

    These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) and bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of Fire Island National Seashore in New York, acquired on April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for pre-survey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  2. EAARL Coastal Topography-Assateague Island National Seashore, 2008: Bare Earth

    USGS Publications Warehouse

    Bonisteel, Jamie M.; Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Klipp, Emily S.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Assateague Island National Seashore in Maryland and Virginia, acquired March 24-25, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for pre-survey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  3. EAARL Coastal Topography-Assateague Island National Seashore, 2008: First Surface

    USGS Publications Warehouse

    Bonisteel, Jamie M.; Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Klipp, Emily S.

    2009-01-01

    These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Assateague Island National Seashore in Maryland and Virginia, acquired March 24-25, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for pre-survey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.

  4. Nanotribology Investigations of Solid and Liquid Lubricants Using Scanned Probe Microscopies

    DTIC Science & Technology

    2000-01-28

    Kai Rose, postdoctoral fellow (external fellowship support; supplies on AFOSR); Ernesto Joselevich, postdoctoral fellow (external fellowship) ... "...scale friction measurements," European Semiconductor, July/August 1997; I. Amato, "Candid Cameras for the Nanoworld," Science 276, 1982-1985 (1997).

  5. Sampling and Analysis of Impact Crater Residues Found on the Wide Field Planetary Camera-2 Radiator

    NASA Astrophysics Data System (ADS)

    Anz-Meador, P. D.; Liou, J.-C.; Ross, D.; Robinson, G. A.; Opiela, J. N.; Kearsley, A. T.; Grime, G. W.; Colaux, J. L.; Jeynes, C.; Palitsin, V. V.; Webb, R. P.; Griffin, T. J.; Reed, B. B.; Gerlach, L.

    2013-08-01

    After nearly 16 years in low Earth orbit (LEO), the Wide Field Planetary Camera-2 (WFPC2) was recovered from the Hubble Space Telescope (HST) in May 2009, during the 12-day shuttle mission designated STS-125. The WFPC-2 radiator had been struck by approximately 700 impactors producing crater features 300 μm and larger in size. Following optical inspection in 2009, agreement was reached in 2011 on a joint NASA-ESA study of crater residues. Over 480 impact features were extracted at NASA Johnson Space Center's (JSC) Space Exposed Hardware clean-room and curation facility during 2012 and were shared between NASA and ESA. We describe analyses conducted using scanning electron microscopy (SEM) with energy-dispersive X-ray spectrometry (EDX), performed by NASA at JSC's Astromaterials Research and Exploration Science (ARES) Division and for ESA at the Natural History Museum (NHM), together with ion beam analysis (IBA) using a scanned proton microbeam at the University of Surrey Ion Beam Centre (IBC).

  6. Vision-guided gripping of a cylinder

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition phase verifies the presence of a cylinder in the camera field of view; an initial pose estimate is then computed, and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge from the recognition phase that there is a cylinder in the image, and knowing the radius of the cylinder, four of the six pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.
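
    As a rough illustration of why a known radius pins down pose parameters with little computation: the two limb (occluding edge) lines of the cylinder correspond to rays grazing its surface, so half their angular separation a satisfies sin(a) = R/d, which yields the distance d to the axis, while the mean ray gives the bearing; the image-plane slope of the limb lines supplies an orientation angle. The sketch below assumes a simple pinhole model with the cylinder axis roughly vertical in the image; it is an assumption-laden illustration, not the authors' estimator.

        import numpy as np

        def cylinder_range_and_bearing(x_left, x_right, f_px, radius_m):
            """Range and bearing to a cylinder of known radius from its limb edges.

            x_left, x_right : image x-coordinates (pixels) of the two limb
                              (occluding edge) lines of the cylinder.
            f_px            : pinhole focal length in pixels.
            radius_m        : known cylinder radius in meters.

            The grazing rays to the two limbs are tangent to the cylinder,
            so half their angular separation a satisfies sin(a) = R / d,
            giving the distance d to the axis; the mean ray gives the
            bearing. Illustrative pinhole-model sketch only.
            """
            t1 = np.arctan2(x_left, f_px)
            t2 = np.arctan2(x_right, f_px)
            half = abs(t2 - t1) / 2.0
            distance = radius_m / np.sin(half)     # tangent-ray geometry
            bearing = (t1 + t2) / 2.0              # direction to the axis
            return distance, bearing

        # Example: limbs at -40 px and +60 px, f = 800 px, radius = 5 cm
        print(cylinder_range_and_bearing(-40, 60, 800, 0.05))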

  7. Sampling and Analysis of Impact Crater Residues Found on the Wide Field Planetary Camera-2 Radiator

    NASA Technical Reports Server (NTRS)

    Kearsley, A. T.; Grime, G. W.; Colaux, J. L.; Jeynes, C.; Palitsin, V. V.; Webb, R. P.; Griffin, T. J.; Reed, B. B.; Anz-Meador, P. D.; Liou, J.-C.

    2013-01-01

    After nearly 16 years in low Earth orbit (LEO), the Wide Field Planetary Camera-2 (WFPC2) was recovered from the Hubble Space Telescope (HST) in May 2009, during the 12-day shuttle mission designated STS-125. The WFPC-2 radiator had been struck by approximately 700 impactors producing crater features 300 microns and larger in size. Following optical inspection in 2009, agreement was reached in 2011 on a joint NASA-ESA study of crater residues. Over 480 impact features were extracted at NASA Johnson Space Center's (JSC) Space Exposed Hardware clean-room and curation facility during 2012 and were shared between NASA and ESA. We describe analyses conducted using scanning electron microscopy (SEM) with energy-dispersive X-ray spectrometry (EDX), performed by NASA at JSC's Astromaterials Research and Exploration Science (ARES) Division and for ESA at the Natural History Museum (NHM), together with ion beam analysis (IBA) using a scanned proton microbeam at the University of Surrey Ion Beam Centre (IBC).

  8. Combining near-field scanning optical microscopy with spectral interferometry for local characterization of the optical electric field in photonic structures.

    PubMed

    Trägårdh, Johanna; Gersen, Henkjan

    2013-07-15

    We show how a combination of near-field scanning optical microscopy with crossed-beam spectral interferometry allows a local measurement of the spectral phase and amplitude of light propagating in photonic structures. The method only requires measurement at the single point of interest and at a reference point, to correct for the relative phase of the interferometer branches, in order to retrieve the dispersion properties of the sample. Furthermore, since the measurement is performed in the spectral domain, the spectral phase and amplitude can be retrieved from a single camera frame, here in 70 ms for a signal power of less than 100 pW, limited by the dynamic range of the 8-bit camera. The method is substantially faster than most previous time-resolved NSOM methods, which are based on time-domain interferometry, and this also reduces problems with drift. We demonstrate how the method can be used to measure the refractive index and group velocity in a waveguide structure.

  9. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins.

    PubMed

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas

    2009-06-23

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.

  10. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins

    PubMed Central

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N. Samba; Gopalaswamy, Arjun M.; Karanth, K. Ullas

    2009-01-01

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin. PMID:19324633

  11. Photothermal camera port accessory for microscopic thermal diffusivity imaging

    NASA Astrophysics Data System (ADS)

    Escola, Facundo Zaldívar; Kunik, Darío; Mingolo, Nelly; Martínez, Oscar Eduardo

    2016-06-01

    The design of a scanning photothermal accessory is presented, which can be attached to the camera port of commercial microscopes to measure thermal diffusivity maps with micrometer resolution. The device is based on the thermal expansion recovery technique, which measures the defocusing of a probe beam due to the curvature induced by the local heat delivered by a focused pump beam. The beam delivery and collection optics are built using optical fiber technology, resulting in a robust optical system that provides collinear pump and probe beams without any alignment adjustment necessary. The quasiconfocal configuration for the signal collection using the same optical fiber sets very restrictive conditions on the positioning and alignment of the optical components of the scanning unit, and a detailed discussion of the design equations is presented. The alignment procedure is carefully described, resulting in a system so robust and stable that no further alignment is necessary for day-to-day use, making it a tool that can be used for routine quality control, operated by a trained technician.

  12. Cancer diagnosis using a conventional x-ray fluorescence camera with a cadmium-telluride detector

    NASA Astrophysics Data System (ADS)

    Sato, Eiichi; Enomoto, Toshiyuki; Hagiwara, Osahiko; Abudurexiti, Abulajiang; Sato, Koetsu; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun

    2011-10-01

    X-ray fluorescence (XRF) analysis is useful for mapping various atoms in objects. Bremsstrahlung X-rays are selected using a 3.0 mm-thick aluminum filter, and these rays are absorbed by indium, cerium and gadolinium atoms in objects. Then XRF is produced from the objects, and photons are detected by a cadmium-telluride detector. The Kα photons are discriminated using a multichannel analyzer, and the number of photons is counted by a counter card. The objects are moved and scanned by an x-y stage in conjunction with a two-stage controller, and X-ray images obtained by atomic mapping are shown on a personal computer monitor. The scan steps of the x and y axes were both 2.5 mm, and the photon-counting time per mapping point was 0.5 s. We carried out atomic mapping using the X-ray camera, and Kα photons from cerium and gadolinium atoms were produced from cancerous regions in nude mice.
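
    The raster acquisition described here is straightforward to picture in code. Below is a minimal sketch of the scan-and-count loop under the stated parameters (2.5 mm steps, 0.5 s counting per point); the `stage` and `counter` objects are hypothetical stand-ins for the two-stage controller and counter card, not a real driver API.

```python
import numpy as np

def acquire_xrf_map(stage, counter, nx=20, ny=20, step_mm=2.5, dwell_s=0.5):
    """Raster-scan the x-y stage and record K-alpha photon counts per point.

    `stage` and `counter` are hypothetical driver objects standing in for
    the two-stage controller and counter card described in the abstract;
    nx and ny are illustrative map dimensions.
    """
    image = np.zeros((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            stage.move_to(x_mm=ix * step_mm, y_mm=iy * step_mm)
            image[iy, ix] = counter.count(duration_s=dwell_s)
    return image
```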

  13. Conventional X-ray fluorescence camera with a cadmium-telluride detector and its application to cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Enomoto, Toshiyuki; Sato, Eiichi; Abderyim, Purkhet; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Watanabe, Manabu; Nagao, Jiro; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun

    2011-04-01

    X-ray fluorescence (XRF) analysis is useful for mapping various molecules in objects. Bremsstrahlung X-rays are selected using a 3.0-mm-thick aluminum filter, and these rays are absorbed by iodine, cerium, and gadolinium molecules in objects. Next, XRF is produced from the objects, and photons are detected by a cadmium-telluride detector. The Kα photons are discriminated using a multichannel analyzer, and the number of photons is counted by a counter card. The objects are moved and scanned by an x-y stage in conjunction with a two-stage controller, and X-ray images obtained by molecular mapping are shown on a personal computer monitor. The scan steps of the x and y axes were both 2.5 mm, and the photon-counting time per mapping point was 0.5 s. We carried out molecular mapping using the X-ray camera, and Kα photons from cerium and gadolinium molecules were produced from cancerous regions in nude mice.

  14. Automated optical testing of LWIR objective lenses using focal plane array sensors

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik; Domagalski, Christian; Peter, Frank; Heinisch, Josef; Dumitrescu, Eugen

    2012-10-01

    The image quality of today's state-of-the-art IR objective lenses is constantly improving, while at the same time the market for thermography and vision grows strongly. Because of increasing demands on the quality of IR optics and increasing production volumes, the standards for image quality testing rise and tests need to be performed in shorter time. Most high-precision MTF testing equipment for the IR spectral bands in use today relies on the scanning slit method, which scans a 1D detector over a pattern in the image generated by the lens under test, followed by image analysis to extract performance parameters. The disadvantages of this approach are that it is relatively slow, it requires highly trained operators to align the sample, and the number of parameters that can be extracted is limited. In this paper we present lessons learned from the R&D process on using focal plane array (FPA) sensors for testing of long-wave IR (LWIR, 8-12 μm) optics. Factors that need to be taken into account when switching from scanning slit to FPAs include the thermal background from the environment, the low scene contrast in the LWIR, the need for advanced image processing algorithms to pre-process camera images for analysis, and camera artifacts. Finally, we discuss two measurement systems for LWIR lens characterization that we recently developed with different target applications: 1) a fully automated system suitable for production testing and metrology that uses uncooled microbolometer cameras to automatically measure MTF (on-axis and at several off-axis positions) and parameters like EFL, FFL, autofocus curves, image plane tilt, etc., for LWIR objectives with an EFL between 1 and 12 mm; the measurement cycle time for one sample is typically between 6 and 8 s. 2) a high-precision research-grade system, again using an uncooled LWIR camera as detector, that is very simple to align and operate; a wide range of lens parameters (MTF, EFL, astigmatism, distortion, etc.) can be easily and accurately measured with this system.

  15. Feasibility Study of Compton Cameras for X-ray Fluorescence Computed Tomography with Humans

    PubMed Central

    Vernekohl, Don; Ahmad, Moiz; Chinn, Garry; Xing, Lei

    2017-01-01

    X-ray fluorescence imaging is a promising imaging technique able to depict the spatial distributions of low amounts of molecular agents in vivo. Currently, the translation of the technique to preclinical and clinical applications is hindered by long scanning times, as objects are scanned with flux-limited narrow pencil beams. The study presents a novel imaging approach combining x-ray fluorescence imaging with Compton imaging. Compton cameras leverage the imaging performance of XFCT and eliminate the need for pencil-beam excitation. The study examines the potential of this new imaging approach on the basis of Monte-Carlo simulations. In the work, it is first shown that the particular option of slice/fan-beam x-ray excitation has advantages in image reconstruction, in terms of processing time and image quality, compared to traditional volumetric Compton imaging. In a second experiment, the feasibility of the approach for clinical applications with tracer agents made from gold nano-particles is examined in a simulated lung scan scenario. The high energy of characteristic x-ray photons from gold is advantageous for deep tissue penetration and gives lower angular blurring in the Compton camera. It is found that Doppler broadening in the first detector stage of the Compton camera makes the largest contribution to the angular blurring, physically limiting the spatial resolution. Following the analysis of the results from the spatial resolution test, resolutions in the order of one centimeter are achievable with the approach in the center of the lung. The concept of Compton imaging allows scattered photons to be distinguished to some extent from x-ray fluorescent photons based on their difference in emission position. The results predict that molecular sensitivities down to 240 pM/l for 5 mm diameter lesions at 15 mGy for 50 nm diameter gold nano-particles are achievable. A 45-fold speed-up in data acquisition compared to traditional pencil beam XFCT could be achieved for lung imaging at the cost of a small sensitivity decrease. PMID:27845933

  16. Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.

    DTIC Science & Technology

    1981-02-01

    Camera parameters recovered from the report fragment: camera line rate 732.4 lines/s; 2048 total pixels per line (1728 video, 314 blank, 4 line number (binary), 2 run number (BCD)); pixel resolution 8 bits. The image processor system consists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, and an FT-1 function key-trackball module.

  17. A neutral-beam profile monitor with a phosphor screen and a high-sensitivity camera for the J-PARC KOTO experiment

    NASA Astrophysics Data System (ADS)

    Matsumura, T.; Kamiji, I.; Nakagiri, K.; Nanjo, H.; Nomura, T.; Sasao, N.; Shinkawa, T.; Shiomi, K.

    2018-03-01

    We have developed a beam-profile monitor (BPM) system to align the collimators for the neutral beam-line at the Hadron Experimental Facility of J-PARC. The system is composed of a phosphor screen and a CCD camera coupled to an image intensifier mounted on a remote-control X-Y stage. The design and detailed performance studies of the BPM are presented. The monitor has a spatial resolution of better than 0.6 mm and a deviation from linearity of less than 1%. These results indicate that the BPM system meets the requirements to define collimator-edge positions for beam-line tuning. Confirmation using the neutral beam for the KOTO experiment is also presented.

  18. Space Shuttle Orbiter Digital Outer Mold Line Scanning

    NASA Technical Reports Server (NTRS)

    Campbell, Charles H.; Wilson, Brad; Pavek, Mike; Berger, Karen

    2012-01-01

    The Space Shuttle Orbiters Discovery and Endeavour have been digitally scanned to produce post-flight configuration outer mold line surfaces. Very detailed scans of the windward side of these vehicles provide resolution of the detailed tile step and gap geometry, as well as the reinforced carbon-carbon nose cap and leading edges. Lower resolution scans of the upper surface provide definition of the crew cabin windows, wing upper surfaces, payload bay doors, orbital maneuvering system pods, and the vertical tail. The process for acquisition of these digital scans, as well as post-processing of the very large data set, is described.

  19. High-speed light field camera and frequency division multiplexing for fast multi-plane velocity measurements.

    PubMed

    Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen

    2015-09-21

    Non-intrusive, fast 3D measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel allows one measurement and, thus, planar measurements at high data rates are possible. While scanning is a standard technique to add the third dimension, the volumetric data is then not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired at a measurement rate of 0.5 kHz with a single high-speed camera.
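
    The frequency multiplexing idea can be illustrated with per-pixel lock-in demodulation: each measurement plane is tagged with its own carrier frequency, and mixing with that carrier followed by low-pass filtering recovers that plane's signal while suppressing crosstalk. The sketch below is a crude stand-in for the instrument's actual processing chain; the carrier and sampling frequencies are illustrative only.

```python
import numpy as np

def lockin_demodulate(signal, fs, f_carrier, n_periods=10):
    """Recover the slowly varying amplitude of one frequency channel.

    Mix with the carrier, then low-pass by averaging over (roughly) an
    integer number of carrier periods; a minimal demultiplexing filter.
    """
    t = np.arange(signal.shape[-1]) / fs
    mixed = signal * np.exp(-2j * np.pi * f_carrier * t)
    n = int(round(fs / f_carrier)) * n_periods
    return 2.0 * np.abs(mixed[..., :n].mean(axis=-1))

# Two measurement planes tagged at 5 kHz and 7 kHz, sampled at 100 kHz:
fs = 100e3
t = np.arange(10000) / fs
pixel = 0.8 * np.cos(2 * np.pi * 5e3 * t) + 0.3 * np.cos(2 * np.pi * 7e3 * t)
print(lockin_demodulate(pixel, fs, 5e3))   # ~0.8 (plane 1)
print(lockin_demodulate(pixel, fs, 7e3))   # ~0.3 (plane 2)
```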

  20. Data Acquisition System of Nobeyama MKID Camera

    NASA Astrophysics Data System (ADS)

    Nagai, M.; Hisamatsu, S.; Zhai, G.; Nitta, T.; Nakai, N.; Kuno, N.; Murayama, Y.; Hattori, S.; Mandal, P.; Sekimoto, Y.; Kiuchi, H.; Noguchi, T.; Matsuo, H.; Dominjon, A.; Sekiguchi, S.; Naruse, M.; Maekawa, J.; Minamidani, T.; Saito, M.

    2018-05-01

    We are developing a superconducting camera based on microwave kinetic inductance detectors (MKIDs) to observe 100-GHz continuum with the Nobeyama 45-m telescope. A data acquisition (DAQ) system for the camera has been designed to operate the MKIDs with the telescope. This system is required to connect the telescope control system (COSMOS) to the readout system of the MKIDs (MKID DAQ), which employs the frequency-sweeping probe scheme. The DAQ system is also required to record the reference signal of the beam switching for demodulation by the analysis pipeline, in order to suppress the sky fluctuation. The system has to be able to merge and save all data acquired both by the camera and by the telescope, including the cryostat temperature and pressure and the telescope pointing. A collection of software which implements these functions and works as a TCP/IP server on a workstation was developed. The server accepts commands and observation scripts from COSMOS and then issues commands to MKID DAQ to configure and start data acquisition. We commissioned the MKID camera on the Nobeyama 45-m telescope and obtained successful scan signals from the atmosphere and from the Moon.

  1. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carries no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images to point cloud data, by mounting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The free movement is a key advantage for augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, generated by reverse-projecting the 3D point cloud data to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
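
    A minimal sketch of the reverse projection step is given below, assuming the ideal distortion-free pinhole camera the abstract describes. The intrinsic matrix `K` and the exterior orientation `R`, `t` are assumed known, all names are hypothetical, and a z-buffer keeps the nearest point per pixel.

```python
import numpy as np

def synthetic_image(points, intensities, K, R, t, width, height):
    """Project a 3D point cloud into a synthetic camera view (z-buffered).

    K is the 3x3 intrinsic matrix; R, t map world points into the camera
    frame; an ideal distortion-free pinhole camera is assumed.
    """
    cam = points @ R.T + t                # world -> camera coordinates
    in_front = cam[:, 2] > 0
    cam = cam[in_front]
    vals = intensities[in_front]
    uvw = cam @ K.T                       # pinhole projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (0 <= u) & (u < width) & (0 <= v) & (v < height)
    img = np.zeros((height, width))
    depth = np.full((height, width), np.inf)
    for ui, vi, zi, ci in zip(u[ok], v[ok], cam[ok, 2], vals[ok]):
        if zi < depth[vi, ui]:            # keep the nearest point per pixel
            depth[vi, ui] = zi
            img[vi, ui] = ci
    return img
```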

  2. Dynamic quantification of canopy structure to characterize early plant vigour in wheat genotypes

    PubMed Central

    Duan, T.; Chapman, S.C.; Holland, E.; Rebetzke, G.J.; Guo, Y.; Zheng, B.

    2016-01-01

    Early vigour is an important physiological trait to improve establishment, water-use efficiency, and grain yield for wheat. Phenotyping large numbers of lines is challenging due to the fast growth and development of wheat seedlings. Here we developed a new photo-based workflow to monitor dynamically the growth and development of the wheat canopy of two wheat lines with a contrasting early vigour trait. Multiview images were taken using a ‘vegetation stress’ camera at 2 d intervals from emergence to the sixth leaf stage. Point clouds were extracted using the Multi-View Stereo and Structure From Motion (MVS-SFM) algorithm, and segmented into individual organs using the Octree method, with leaf midribs fitted using local polynomial function. Finally, phenotypic parameters were calculated from the reconstructed point cloud including: tiller and leaf number, plant height, Haun index, phyllochron, leaf length, angle, and leaf elongation rate. There was good agreement between the observed and estimated leaf length (RMSE = 8.6 mm, R² = 0.98, n = 322) across both lines. Significant contrasts of phenotyping parameters were observed between the two lines and were consistent with manual observations. The early vigour line had fewer tillers (2.4±0.6) and larger leaves (308.0±38.4 mm and 17.1±2.7 mm for leaf length and width, respectively). While the phyllochron of both lines was quite similar, the non-vigorous line had a greater Haun index (more leaves on the main stem) on any date, as the vigorous line had slower development of its first two leaves. The workflow presented in this study provides an efficient method to phenotype individual plants using a low-cost camera (an RGB camera is also suitable) and could be applied in phenotyping for applications in both simulation modelling and breeding. The rapidity and accuracy of this novel method can characterize the results of specific selection criteria (e.g. width of leaf three, number of tillers, rate of leaf appearance) that have been or can now be utilized to breed for early leaf growth and tillering in wheat. PMID:27312669

  3. Dynamic quantification of canopy structure to characterize early plant vigour in wheat genotypes.

    PubMed

    Duan, T; Chapman, S C; Holland, E; Rebetzke, G J; Guo, Y; Zheng, B

    2016-08-01

    Early vigour is an important physiological trait to improve establishment, water-use efficiency, and grain yield for wheat. Phenotyping large numbers of lines is challenging due to the fast growth and development of wheat seedlings. Here we developed a new photo-based workflow to monitor dynamically the growth and development of the wheat canopy of two wheat lines with a contrasting early vigour trait. Multiview images were taken using a 'vegetation stress' camera at 2 d intervals from emergence to the sixth leaf stage. Point clouds were extracted using the Multi-View Stereo and Structure From Motion (MVS-SFM) algorithm, and segmented into individual organs using the Octree method, with leaf midribs fitted using local polynomial function. Finally, phenotypic parameters were calculated from the reconstructed point cloud including: tiller and leaf number, plant height, Haun index, phyllochron, leaf length, angle, and leaf elongation rate. There was good agreement between the observed and estimated leaf length (RMSE = 8.6 mm, R² = 0.98, n = 322) across both lines. Significant contrasts of phenotyping parameters were observed between the two lines and were consistent with manual observations. The early vigour line had fewer tillers (2.4±0.6) and larger leaves (308.0±38.4 mm and 17.1±2.7 mm for leaf length and width, respectively). While the phyllochron of both lines was quite similar, the non-vigorous line had a greater Haun index (more leaves on the main stem) on any date, as the vigorous line had slower development of its first two leaves. The workflow presented in this study provides an efficient method to phenotype individual plants using a low-cost camera (an RGB camera is also suitable) and could be applied in phenotyping for applications in both simulation modelling and breeding. The rapidity and accuracy of this novel method can characterize the results of specific selection criteria (e.g. width of leaf three, number of tillers, rate of leaf appearance) that have been or can now be utilized to breed for early leaf growth and tillering in wheat. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  4. Investigation of Phototriangulation Accuracy with Using of Various Techniques Laboratory and Field Calibration

    NASA Astrophysics Data System (ADS)

    Chibunichev, A. G.; Kurkov, V. M.; Smirnov, A. V.; Govorov, A. V.; Mikhalin, V. A.

    2016-10-01

    Aerial survey technology based on unmanned aerial vehicles (UAVs) is becoming more popular. UAVs cannot carry professional aerial survey cameras, so consumer digital cameras are used instead. Such cameras usually have a rolling, lamellar, or global shutter. Quite often manufacturers and users of such aerial systems do not perform camera calibration and rely on self-calibration techniques instead; however, this approach has not been confirmed by extensive theoretical and practical research. In this paper we compare the results of phototriangulation based on laboratory, test-field, and self-calibration. For our investigations we use the Zaoksky test area, an experimental field providing a dense network of targeted and natural control points. Racurs PHOTOMOD and Agisoft PhotoScan software were used in the evaluation. The results of the investigations, conclusions, and practical recommendations are presented in this article.

  5. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability, including high spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera was proposed. The system setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. The cantilever beam, with an artificial crack, was excited, and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work are discussed.
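
    Motion magnification in its simplest linear (Eulerian) form band-passes each pixel's intensity time series around the expected modal frequency, amplifies the filtered component, and adds it back. The sketch below shows that idea only; it is not necessarily the exact algorithm used in the study, and the band edges and gain are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_motion(frames, fs, f_lo, f_hi, alpha=20.0):
    """Linear Eulerian motion magnification (simplified sketch).

    frames: array (T, H, W) of grayscale video; each pixel's time series
    is band-passed around the expected modal frequency band [f_lo, f_hi],
    and the filtered component is amplified by alpha and added back.
    """
    frames = np.asarray(frames, dtype=float)
    b, a = butter(2, [f_lo, f_hi], btype="band", fs=fs)
    bandpassed = filtfilt(b, a, frames, axis=0)   # temporal filter per pixel
    return frames + alpha * bandpassed
```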

  6. A high resolution IR/visible imaging system for the W7-X limiter

    NASA Astrophysics Data System (ADS)

    Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.

    2016-11-01

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and observed surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m²) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow-band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot spots in the IR are also seen to be bright in C-III light.

  7. A high resolution IR/visible imaging system for the W7-X limiter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G. A., E-mail: wurden@lanl.gov; Dunn, J. P.; Stephey, L. A.

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and observed surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m²) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow-band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot spots in the IR are also seen to be bright in C-III light.
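
    The between-shot calorimetry quoted in both records follows from a simple bulk heat balance. As a rough illustration with assumed round numbers (the tile mass and temperature rise below are not values from the paper):

```latex
% E = m * c_p * dT, with illustrative values for a graphite tile:
\[
  E = m\,c_p\,\Delta T
    \approx 2\,\mathrm{kg} \times 0.71\,\mathrm{kJ\,kg^{-1}\,K^{-1}} \times 20\,\mathrm{K}
    \approx 28\,\mathrm{kJ},
\]
```

    which is consistent in magnitude with the ~30 kJ/tile reported.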

  8. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. The camera includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
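
    The per-ROI readout model can be pictured with a small data structure: each ROI carries its own position and size, both of which may change from frame to frame. The sketch below uses a full frame array purely for illustration; in the real camera only the ROI pixels are read off the CCD, and all names here are hypothetical.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ROI:
    """A readout subwindow; position and size may change every frame."""
    x: int
    y: int
    width: int
    height: int

def read_rois(frame, rois):
    """Extract only the pixels of each region of interest from a frame.

    Stands in for the per-ROI readout the camera performs on-chip;
    `frame` is a full 2D array here for illustration only.
    """
    return [frame[r.y:r.y + r.height, r.x:r.x + r.width].copy() for r in rois]

# Track two targets: ROIs can be re-positioned between frames.
frame = np.random.randint(0, 4096, size=(1024, 1024))
windows = read_rois(frame, [ROI(100, 200, 64, 64), ROI(700, 50, 32, 32)])
```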

  9. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line-by-line and in real time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally updated statistics enable the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With satisfactory denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
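
    The published source code is linked above; the sketch below is reconstructed from the abstract's description rather than from that repository. It keeps running image statistics with a Welford-style update and estimates the noise covariance from differences between consecutive lines (a common shift-difference estimator, assumed here), recomputing the MNF basis as each line arrives.

```python
import numpy as np
from scipy.linalg import eigh

class LineMNF:
    """Line-by-line MNF denoising with incrementally updated statistics
    (a condensed sketch, assuming a shift-difference noise estimator)."""

    def __init__(self, bands, keep):
        self.keep = keep                         # high-SNR components kept
        self.n = 0
        self.mean = np.zeros(bands)
        self.scatter = np.zeros((bands, bands))  # running image scatter
        self.nscatter = np.zeros((bands, bands)) # running noise scatter
        self.prev = None

    def push(self, line):
        """Update statistics with one line of shape (pixels, bands)."""
        for x in line:                           # Welford-style update
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.scatter += np.outer(d, x - self.mean)
        if self.prev is not None:                # line-to-line differences
            diff = line - self.prev
            self.nscatter += diff.T @ diff / 2.0
        self.prev = line

    def denoise(self, line):
        """Project one line onto the current high-SNR MNF components."""
        bands = self.mean.size
        cov = self.scatter / max(self.n - 1, 1)
        ncov = self.nscatter / max(self.n, 1) + 1e-9 * np.eye(bands)
        w, v = eigh(cov, ncov)                   # generalized eigenproblem
        v = v[:, ::-1]                           # descending SNR order
        scores = (line - self.mean) @ v
        scores[:, self.keep:] = 0                # drop noisy components
        return scores @ np.linalg.inv(v) + self.mean
```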

  10. Comparison of 3d Reconstruction Services and Terrestrial Laser Scanning for Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Rasztovits, S.; Dorninger, P.

    2013-07-01

    Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, free web services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image-based 3D modelling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and image series from different digital cameras. Two different web services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of the generation of the models is given, considering interactive and processing time costs.

  11. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning, and educational purposes. For 3D measurement of urban structures, generally airborne laser scanning sensors or multi-view satellite images are used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive, and trained persons are needed to use them for point cloud acquisition. A potentially effective 3D model can instead be generated with a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages, and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions, and scientific findings might be insightful for future studies in fast, easy, and low-cost 3D urban model generation.
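
    Once the smartphone-derived and TLS point clouds are registered in a common frame, a geometric comparison of this kind reduces to nearest-neighbour distances between the clouds. A minimal sketch, with registration assumed already done:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference, test):
    """Nearest-neighbour distance from each test point to the reference cloud.

    Both clouds (N x 3 arrays) are assumed already registered in a common
    frame; the distances form a difference model for judging geometric
    accuracy, e.g. via their mean or a high percentile.
    """
    tree = cKDTree(reference)
    d, _ = tree.query(test, k=1)
    return d   # e.g. np.mean(d), np.percentile(d, 95) as summary figures
```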

  12. Airburst height computation method of Sea-Impact Test

    NASA Astrophysics Data System (ADS)

    Kim, Jinho; Kim, Hyungsup; Chae, Sungwoo; Park, Sungho

    2017-05-01

    This paper describes how to measure the airburst height of projectiles and rockets. In general, the airburst height can be determined using the triangulation method or images from a camera installed on the radar. These previous methods have limitations when the missiles impact the sea surface. To apply the triangulation method, the cameras should be installed so that the lines of sight intersect at angles from 60 to 120 degrees, and there may be no suitable observation towers on which to install the optical system. If the range of the missile is more than 50 km, the images from the radar's camera can be useless. This paper proposes a method to measure the airburst height of a sea-impact projectile using a single camera. The camera is installed on an island near the impact area, and the height is computed from the camera's position and attitude and the sea level. To demonstrate the proposed method, its results are compared with those from the previous method.
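
    Under a flat-sea assumption, the single-camera geometry reduces to two angles read from the calibrated image (via the camera's attitude) together with the camera's known height above sea level: the depression angle to the sea-surface impact point fixes the horizontal range, and the elevation angle to the burst then gives its height. The sketch below illustrates this reading of the abstract; the numeric values are invented for the example.

```python
import math

def airburst_height(cam_height_m, depression_deg, burst_elev_deg):
    """Single-camera airburst height over a flat sea (small-area approx).

    cam_height_m: camera height above sea level;
    depression_deg: depression angle from horizontal to the sea-surface
        impact point (fixes the horizontal range);
    burst_elev_deg: elevation angle from horizontal to the burst
        (negative if the burst appears below the horizon).
    """
    range_m = cam_height_m / math.tan(math.radians(depression_deg))
    return cam_height_m + range_m * math.tan(math.radians(burst_elev_deg))

# Camera 150 m up; splash 0.9 deg below horizontal; burst 0.4 deg above:
print(airburst_height(150.0, 0.9, 0.4))   # ≈ 217 m above sea level
```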

  13. JPRS Report, Science & Technology, Japan, 27th Aircraft Symposium

    DTIC Science & Technology

    1990-10-29

    screen; the relative attitude is then determined. 2) Video Sensor System: specific patterns (grapple target, etc.) drawn on the target spacecraft, or the entire target spacecraft, are imaged by camera. Navigation information is obtained by on-board image processing, such as extraction of contours. A standard figure called the "grapple target", located in the vicinity of the grapple fixture on the target spacecraft, is imaged by camera. Contour lines and ...

  14. Autonomous detection of crowd anomalies in multiple-camera surveillance feeds

    NASA Astrophysics Data System (ADS)

    Nordlöf, Jonas; Andersson, Maria

    2016-10-01

    A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as a feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.

  15. Graphic design of pinhole cameras

    NASA Technical Reports Server (NTRS)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
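
    The optimization balances geometric blur, which grows with pinhole diameter, against diffraction blur, which grows as the pinhole shrinks. For reference, the classical closed-form optimum is shown below; this is a standard result, not necessarily the graphic construction used in this paper, and the example wavelength and focal length are illustrative.

```latex
% Geometric spot ~ d; diffraction (Airy) spot ~ 2.44 * lambda * f / d.
% Equating the two gives the classical optimum pinhole diameter:
\[
  d_{\mathrm{opt}} \approx \sqrt{2.44\,\lambda f}\,,
  \qquad\text{e.g. } \lambda = 550\,\mathrm{nm},\ f = 0.25\,\mathrm{m}
  \ \Rightarrow\ d_{\mathrm{opt}} \approx 0.58\,\mathrm{mm}.
\]
```

    (Different sharpness criteria put the constant between roughly 1.5 and 2 in d = c√(λf).)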

  16. Human detection and motion analysis at security points

    NASA Astrophysics Data System (ADS)

    Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.

    2003-08-01

    This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real time.

  17. Using the scanning electron microscope on the production line to assure quality semiconductors

    NASA Technical Reports Server (NTRS)

    Adolphsen, J. W.; Anstead, R. J.

    1972-01-01

    The use of the scanning electron microscope to detect metallization defects introduced during batch processing of semiconductor devices is discussed. A method of determining metallization integrity was developed which culminates in a procurement specification using the scanning microscope on the production line as a quality control tool. Batch process control of the metallization operation is monitored early in the manufacturing cycle.

  18. A new scanning system for alpha decay events as calibration sources for range-energy relation in nuclear emulsion

    NASA Astrophysics Data System (ADS)

    Yoshida, J.; Kinbara, S.; Mishina, A.; Nakazawa, K.; Soe, M. K.; Theint, A. M. M.; Tint, K. T.

    2017-03-01

    A new scanning system named "Vertex picker" has been developed to rapidly collect alpha-decay events, which serve as calibration sources for the range-energy relation in nuclear emulsion. A computer-controlled optical microscope scans emulsion layers exhaustively, and a high-speed, high-resolution camera takes micrographs. Dedicated image processing picks out vertex-like shapes. Practical alpha-decay searches were demonstrated on emulsion sheets from the KEK-PS E373 experiment: nearly 28 alpha-decay events per hour were detected in eye-check work on a PC monitor. This yield is nearly 20 times higher than with the conventional eye-scan method. The speed and quality are acceptable for the coming new experiment, J-PARC E07.

  19. Acquisition parameters optimization of a transmission electron forward scatter diffraction system in a cold-field emission scanning electron microscope for nanomaterials characterization.

    PubMed

    Brodusch, Nicolas; Demers, Hendrix; Trudeau, Michel; Gauvin, Raynald

    2013-01-01

    Transmission electron forward scatter diffraction (t-EFSD) is a new technique providing high-resolution crystallographic information on thin specimens by using a conventional electron backscatter diffraction (EBSD) system in a scanning electron microscope. In this study, the impact of tilt angle, working distance, and detector distance on Kikuchi pattern quality was investigated in a cold-field emission scanning electron microscope (CFE-SEM). We demonstrated that t-EFSD is applicable for tilt angles ranging from -20° to -40°. The working distance (WD) should be optimized for each material by choosing the WD at which the EBSD camera screen illumination is highest, as the number of detected electrons on the screen depends directly on the scattering angle. To take advantage of the best performance of the CFE-SEM, the EBSD camera should be close to the sample and oriented towards the bottom to increase forward-scattered electron collection efficiency. However, specimen chamber cluttering and beam/mechanical drift are important limitations in the CFE-SEM used in this work. Finally, the importance of t-EFSD in materials science characterization is illustrated through three examples of phase identification and orientation mapping. © Wiley Periodicals, Inc.

  20. 3D Scanning of Live Pigs System and its Application in Body Measurements

    NASA Astrophysics Data System (ADS)

    Guo, H.; Wang, K.; Su, W.; Zhu, D. H.; Liu, W. L.; Xing, Ch.; Chen, Z. R.

    2017-09-01

    The shape of a live pig is an important indicator of its health and value, whether for breeding or for carcass quality. This paper implements a prototype system for 3D scanning of the body surface of a single live pig, based on two consumer depth cameras and utilizing 3D point cloud data. The cameras are calibrated in advance to have a common coordinate system. A live 3D point cloud stream of a moving single pig is obtained by two Xtion Pro Live sensors from different viewpoints simultaneously. A novel detection method is proposed and applied to automatically detect the frames containing pigs with the correct posture from the point cloud stream, according to the geometric characteristics of the pig's shape. The proposed method is incorporated in a hybrid scheme that serves as the preprocessing step in a body-measurement framework for pigs. Experimental results show the portability of our scanning system and the effectiveness of our detection method. Furthermore, the updated point cloud preprocessing software for livestock body measurements can be downloaded freely from https://github.com/LiveStockShapeAnalysis by the livestock industry and research community, and can be used for monitoring livestock growth status.
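
    Since the two depth cameras are calibrated in advance to a common coordinate system, per-frame fusion amounts to a pair of rigid transforms followed by concatenation. A minimal sketch, with the 4x4 calibration matrices assumed to come from that prior calibration step:

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b, T_a, T_b):
    """Merge two depth-camera point clouds into one common frame.

    cloud_a, cloud_b: N x 3 point arrays from the two sensors;
    T_a, T_b: 4x4 homogeneous transforms from the advance calibration,
    mapping each camera's frame to the shared coordinate system.
    """
    def apply(T, pts):
        return pts @ T[:3, :3].T + T[:3, 3]   # rotate, then translate
    return np.vstack([apply(T_a, cloud_a), apply(T_b, cloud_b)])
```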
