Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that produced them. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of source camera identification with five cameras. The results show that this is a viable approach with high accuracy. We also present results on how the error rates may change with images captured at various optical zoom levels, as zooming is commonly available in digital cameras.
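The lens-distortion fingerprint described above rests on the standard two-coefficient polynomial model of radial distortion. A minimal sketch in plain Python (the coefficient values below are illustrative, not taken from the paper; fitted (k1, k2) pairs are the kind of per-camera parameters one could feed to an SVM):

```python
def radial_distortion(x, y, k1, k2, cx=0.0, cy=0.0):
    """Map an undistorted point to its distorted position using the common
    two-coefficient polynomial model: r_d = r * (1 + k1*r^2 + k2*r^4)."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

# Points far from the optical centre are displaced more than near ones,
# which is what makes (k1, k2) usable as a per-camera fingerprint.
near = radial_distortion(0.1, 0.0, k1=-0.2, k2=0.05)
far = radial_distortion(0.8, 0.0, k1=-0.2, k2=0.05)
```

With barrel distortion (k1 < 0), `far` is pulled noticeably toward the centre while `near` barely moves; zooming changes the effective (k1, k2), which is why the paper examines error rates across zoom levels.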
Source Camera Identification and Blind Tamper Detections for Images
2007-04-24
measures and image quality measures in the camera identification problem were studied in conjunction with a KNN classifier to identify the feature sets... shots varying from nature scenes to close-ups of people. We experimented with the KNN classifier (K=5) as well as an SVM algorithm... on Acoustics, Speech and Signal Processing (ICASSP), France, May 2006, vol. 5, pp. 401-404. [9] H. Farid and S. Lyu, "Higher-order wavelet statistics
A reference estimator based on composite sensor pattern noise for source device identification
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Chang-Tsun; Guan, Yu
2014-02-01
It has been proved that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within the framework of this application. Most previous works built the reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue sky images. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images from which to estimate the reference SPN. In order to deal with the aforementioned problems, a novel reference estimator is proposed in this work. Experimental results show that our proposed method achieves better performance than the methods based on the averaged reference SPN, especially when few reference images are used.
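The conventional baseline this paper improves on is a pixel-wise average of per-image noise residuals. A minimal sketch in plain Python (the toy residuals below are illustrative):

```python
def averaged_reference_spn(residuals):
    """Conventional reference SPN estimate: the pixel-wise mean of noise
    residuals extracted from many images taken by the same camera."""
    n = len(residuals)
    h, w = len(residuals[0]), len(residuals[0][0])
    return [[sum(r[i][j] for r in residuals) / n for j in range(w)]
            for i in range(h)]

# Averaging suppresses content that varies across images while the fixed
# sensor pattern survives: two residuals sharing a pattern [1.0, 2.0]
# plus opposite-signed contamination average back to the pattern.
r1 = [[1.0 + 0.5, 2.0 - 0.5]]
r2 = [[1.0 - 0.5, 2.0 + 0.5]]
ref = averaged_reference_spn([r1, r2])
```

With only a handful of residuals the contamination does not average out, which is exactly the regime the paper's estimator targets.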
Performance comparison of denoising filters for source camera identification
NASA Astrophysics Data System (ADS)
Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.
2011-02-01
Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
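The residual extraction that this filter comparison is built on can be sketched in plain Python; a trivial 3x3 mean filter stands in here for the sophisticated denoisers (including the 3D filter) actually compared in the paper:

```python
def mean_filter3(img):
    """Toy 3x3 mean filter, a stand-in for the denoisers under comparison."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - 1), min(h, i + 2))
                    for b in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def prnu_residual(img):
    """PRNU estimate: the content minus its denoised version."""
    den = mean_filter3(img)
    return [[img[i][j] - den[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]

# A perfectly flat image leaves a zero residual; a single bright pixel
# (mimicking sensor non-uniformity) survives in the residual.
flat = prnu_residual([[10.0] * 4 for _ in range(4)])
bumped = [[10.0] * 4 for _ in range(4)]
bumped[1][1] = 19.0
res = prnu_residual(bumped)
```

The choice of denoiser governs how much scene detail leaks into the residual, which is precisely what the paper's performance comparison measures.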
Space-based infrared sensors of space target imaging effect analysis
NASA Astrophysics Data System (ADS)
Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang
2018-02-01
Target identification is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a space-based infrared sensor imaging model of a ballistic target as a point source above the atmosphere; it then simulates the infrared imaging of the exo-atmospheric ballistic target from two aspects, the space-based sensor camera parameters and the target characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target.
Tiwari, Mayank; Gupta, Bhupendra
2018-04-01
For source camera identification (SCI), photo response non-uniformity (PRNU) has been widely used as the fingerprint of the camera. The PRNU is extracted from the image by applying a de-noising filter and then taking the difference between the original image and the de-noised image. However, it is observed that intensity-based features and high-frequency details (edges and texture) of the image affect the quality of the extracted PRNU. This affects the correlation calculation and creates problems in SCI. To solve this problem, we propose a weighting function based on image features. We experimentally identified how image features (intensity and high-frequency content) affect the estimated PRNU, and then developed a weighting function that gives higher weights to image regions that yield reliable PRNU and comparatively lower weights to regions that do not. Experimental results show that the proposed weighting function improves the accuracy of SCI to a great extent. Copyright © 2018 Elsevier B.V. All rights reserved.
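The idea of down-weighting unreliable regions can be sketched in plain Python. The functional form and thresholds below are assumptions for illustration, not the paper's actual weighting function: they only encode the stated observation that very dark, near-saturated, and highly textured (large-gradient) pixels yield unreliable PRNU:

```python
def prnu_weight(intensity, gradient, dark=5.0, sat=250.0):
    """Illustrative weighting: zero weight in clipped (very dark or
    near-saturated) regions, and weight decaying with local gradient
    magnitude, since edges and texture contaminate the residual."""
    if intensity <= dark or intensity >= sat:
        return 0.0
    return 1.0 / (1.0 + gradient)
```

Such weights would then multiply the residual pixels before the correlation with the reference PRNU, so smooth mid-tone regions dominate the match score.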
Forensic use of photo response non-uniformity of imaging sensors and a counter method.
Dirik, Ahmet Emir; Karaküçük, Ahmet
2014-01-13
Analogous to use of bullet scratches in forensic science, the authenticity of a digital image can be verified through the noise characteristics of an imaging sensor. In particular, photo-response non-uniformity noise (PRNU) has been used in source camera identification (SCI). However, this technique can be used maliciously to track or inculpate innocent people. To impede such tracking, PRNU noise should be suppressed significantly. Based on this motivation, we propose a counter forensic method to deceive SCI. Experimental results show that it is possible to impede PRNU-based camera identification for various imaging sensors while preserving the image quality.
NASA Astrophysics Data System (ADS)
Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo
2018-06-01
We developed a pinhole-type gamma camera, using a compact detector module with a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole-type semiconductor gamma camera, we adopted three methods: a signal processing method that sets the discrimination level lower, a high-sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera we had previously developed for high dose rate fields. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on a simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of coincident triple gamma-ray detection, which is necessary for the source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for the point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is being sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.
Attitude identification for SCOLE using two infrared cameras
NASA Technical Reports Server (NTRS)
Shenhar, Joram
1991-01-01
An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
On Biometrics With Eye Movements.
Zhang, Youming; Juhola, Martti
2017-09-01
Eye movements are a relatively novel data source for biometric identification. When video cameras applied to eye tracking become smaller and more efficient, this data source could offer interesting opportunities for the development of eye movement biometrics. In this paper, we study primarily biometric identification as seen as a classification task of multiple classes, and secondarily biometric verification considered as binary classification. Our research is based on the saccadic eye movement signal measurements from 109 young subjects. In order to test the data measured, we use a procedure of biometric identification according to the one-versus-one (subject) principle. In a development from our previous research, which also involved biometric verification based on saccadic eye movements, we now apply another eye movement tracker device with a higher sampling frequency of 250 Hz. The results obtained are good, with correct identification rates at 80-90% at their best.
Near-UV Sources in the Hubble Ultra Deep Field: The Catalog
NASA Technical Reports Server (NTRS)
Gardner, Jonathan P.; Voyrer, Elysse; de Mello, Duilia F.; Siana, Brian; Quirk, Cori; Teplitz, Harry I.
2009-01-01
The catalog from the first high resolution U-band image of the Hubble Ultra Deep Field, taken with Hubble's Wide Field Planetary Camera 2 through the F300W filter, is presented. We detect 96 U-band objects and compare and combine this catalog with a Great Observatories Origins Deep Survey (GOODS) B-selected catalog that provides B, V, i, and z photometry, spectral types, and photometric redshifts. We have also obtained Far-Ultraviolet (FUV, 1614 Angstroms) data with Hubble's Advanced Camera for Surveys Solar Blind Channel (ACS/SBC) and with the Galaxy Evolution Explorer (GALEX). We detected 31 sources with ACS/SBC, 28 with GALEX/FUV, and 45 with GALEX/NUV. The methods of observations, image processing, object identification, catalog preparation, and catalog matching are presented.
A modular approach to detection and identification of defects in rough lumber
Sang Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt
2001-01-01
This paper describes a prototype scanning system that can automatically identify several important defects on rough hardwood lumber. The scanning system utilizes 3 laser sources and an embedded-processor camera to capture and analyze profile and gray-scale images. The modular approach combines the detection of wane (the curved sides of a board, possibly containing...
A Subterranean Camera Trigger for Identifying Predators Excavating Turtle Nests
Thomas J. Maier; Michael N. Marchand; Richard M. DeGraaf; John A. Litvaitis
2002-01-01
Predation is the predominant source of nest mortality for most North American turtle species, including populations that are in decline (Brooks et al. 1992; Congdon et al. 2000). The identification of nest predators---crucial to understanding predator-prey relationships---has been previously accomplished largely by use of techniques that rely on the availability of...
An integrated port camera and display system for laparoscopy.
Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E
2010-05-01
In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
Low-cost real-time automatic wheel classification system
NASA Astrophysics Data System (ADS)
Shabestari, Behrouz N.; Miller, John W. V.; Wedding, Victoria
1992-11-01
This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires the identification of the wheel type, which was previously performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, a frame grabber, and a 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. The system correctly identifies wheels at rates of up to 30 wheels per minute regardless of rotational orientation in the camera's field of view. Correct classification can even be achieved if a portion of the wheel is blocked from the camera. Significant cost savings have been achieved by a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.
Lock-in imaging with synchronous digital mirror demodulation
NASA Astrophysics Data System (ADS)
Bush, Michael G.
2010-04-01
Lock-in imaging enables high contrast imaging in adverse conditions by exploiting a modulated light source and homodyne detection. We report results on a patent pending lock-in imaging system fabricated from commercial-off-theshelf parts utilizing standard cameras and a spatial light modulator. By leveraging the capabilities of standard parts we are able to present a low cost, high resolution, high sensitivity camera with applications in search and rescue, friend or foe identification (IFF), and covert surveillance. Different operating modes allow the same instrument to be utilized for dual band multispectral imaging or high dynamic range imaging, increasing the flexibility in different operational settings.
Camera calibration: active versus passive targets
NASA Astrophysics Data System (ADS)
Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli
2011-11-01
Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
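The phase-shift coding mentioned above can be illustrated with the standard 4-step decode (the offset and amplitude values in the simulation below are made up; the decode itself is the textbook formula):

```python
import math

def decode_phase(i0, i1, i2, i3):
    """Standard 4-step phase-shift decoding: four intensities captured
    under sinusoidal patterns shifted by 0, 90, 180, and 270 degrees
    recover the phase regardless of ambient offset and amplitude."""
    return math.atan2(i3 - i1, i0 - i2)

# Simulated camera pixel: ambient offset 100, modulation amplitude 40,
# true phase 1.0 rad; both offset and amplitude cancel in the decode.
phi = 1.0
samples = [100 + 40 * math.cos(phi + k * math.pi / 2) for k in range(4)]
```

The recovered phase encodes which display pixel a camera pixel observes, giving dense world-to-image correspondences without any mark detection, which is why the localization accuracy is largely insensitive to defocus.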
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.
Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der
2017-01-01
The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
Improved photo response non-uniformity (PRNU) based source camera identification.
Cooper, Alan J
2013-03-10
The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
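The retention stage described above, keeping only pixels with a high probability of genuine PRNU bias, can be sketched in plain Python. The selection rule below (retain the largest residual magnitudes) is an assumption for illustration, not the paper's exact two-stage criterion:

```python
def retain_strong_pixels(residual, keep_fraction=1 / 3):
    """Illustrative enhancement stage: keep only the pixels whose
    residual magnitude is largest, zeroing the rest, so weak and
    contaminated pixels do not dilute the correlation."""
    flat = sorted((abs(v) for row in residual for v in row), reverse=True)
    k = max(1, int(len(flat) * keep_fraction))
    thresh = flat[k - 1]
    return [[v if abs(v) >= thresh else 0.0 for v in row]
            for row in residual]

res = [[5.0, 0.1, 0.2], [0.3, -4.0, 0.1]]
enhanced = retain_strong_pixels(res)
```

Zeroing the unreliable majority sharpens the separation between matching and non-matching correlation distributions, which is the effect the paper reports.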
Forensics for flatbed scanners
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Franz, Elke; Winkler, Antje
2007-02-01
Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, the utilization of image sensor noise for identifying the origin of scanned images seems to be possible. As characterization of flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners which we expect to influence the identification. This was confirmed by extensive tests: Identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and down scaling as examples for such particularities of flatbed scanners on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
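The sensor line reference pattern considered above exploits the scanner's geometry: a linear sensor sweeps the page, so the same line noise recurs in every scanned row. A minimal sketch in plain Python (toy residual values, illustrative only):

```python
def line_reference_pattern(residual):
    """Average a noise residual down its rows: because a flatbed scanner's
    single sensor line produces every row, row-averaging cancels scene
    content and leaves an estimate of the line's fixed noise pattern."""
    h, w = len(residual), len(residual[0])
    return [sum(residual[i][j] for i in range(h)) / h for j in range(w)]

pattern = line_reference_pattern([[1.0, 2.0], [3.0, 4.0], [2.0, 3.0]])
```

An array reference pattern, by contrast, keeps the full 2D residual; the article's tests compare both characterizations against the particularities of scanner processing such as flatfielding and down-scaling.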
Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype
NASA Astrophysics Data System (ADS)
Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille
2012-06-01
The domain of low-light imaging systems is progressing very fast, thanks to the evolution of detection and electronic multiplication technologies such as the emCCD (electron-multiplying CCD) and the ebCMOS (electron-bombarded CMOS). We present an ebCMOS camera system that is able to track, every 2 ms, more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack-Hartmann) used in adaptive optics. The multiple-target tracking designed and implemented on a rugged workstation is described. The results and performance of the system on identification and tracking are presented and discussed.
Single camera photogrammetry system for EEG electrode identification and localization.
Baysal, Uğur; Sengül, Gökhan
2010-04-01
In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are simultaneously implemented. A rotating 2 MP digital camera, about 20 cm above the subject's head, is used, and the images are acquired at predefined stop points separated azimuthally at equal angular displacements. In order to realize full automation, the electrodes have been labeled by colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested by using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined with three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine having about 6.5 μm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.
The Hubble Space Telescope: UV, Visible, and Near-Infrared Pursuits
NASA Technical Reports Server (NTRS)
Wiseman, Jennifer
2010-01-01
The Hubble Space Telescope continues to push the limits on world-class astrophysics. Cameras including the Advanced Camera for Surveys and the new panchromatic Wide Field Camera 3, which was installed during last year's successful servicing mission (SM4), offer imaging from near-infrared through ultraviolet wavelengths. Spectroscopic studies of sources from black holes to exoplanet atmospheres are making great advances through the versatile use of STIS, the Space Telescope Imaging Spectrograph. The new Cosmic Origins Spectrograph, also installed last year, is the most sensitive UV spectrograph to fly in space and is uniquely suited to address particular scientific questions on galaxy halos, the intergalactic medium, and the cosmic web. With these outstanding capabilities on HST come complex needs for laboratory astrophysics support, including atomic and line identification data. I will provide an overview of Hubble's current capabilities and the scientific programs and goals that particularly benefit from the studies of laboratory astrophysics.
NASA Astrophysics Data System (ADS)
Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Clarke, S. D.; Pozzi, S. A.
2017-01-01
In this work we present a technique for isolating the gamma-ray and neutron energy spectra from multiple radioactive sources localized in an image. Image reconstruction algorithms for radiation scatter cameras typically focus on improving image quality. However, with scatter cameras being developed for non-proliferation applications, there is a need for not only source localization but also source identification. This work outlines a modified stochastic origin ensembles algorithm that provides localized spectra for all pixels in the image. We demonstrated the technique by performing three experiments with a dual-particle imager that measured various gamma-ray and neutron sources simultaneously. We showed that we could isolate the peaks from 22Na and 137Cs and that the energy resolution is maintained in the isolated spectra. To evaluate the spectral isolation of neutrons, a 252Cf source and a PuBe source were measured simultaneously and the reconstruction showed that the isolated PuBe spectrum had a higher average energy and a greater fraction of neutrons at higher energies than the 252Cf. Finally, spectrum isolation was used for an experiment with weapons grade plutonium, 252Cf, and AmBe. The resulting neutron and gamma-ray spectra showed the expected characteristics that could then be used to identify the sources.
A Framework for People Re-Identification in Multi-Camera Surveillance Systems
ERIC Educational Resources Information Center
Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud
2017-01-01
People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…
21 CFR 886.1120 - Ophthalmic camera.
Code of Federal Regulations, 2010 CFR
2010-04-01
... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Ophthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...
Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles
Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.
2017-01-01
Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (i.e., bridges, buildings, etc.) is often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experiments are carried out to validate the proposed approach, and the results demonstrate its efficacy and significant potential. PMID:28891985
High frequency modal identification on noisy high-speed camera data
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-01-01
Vibration measurements using optical full-field systems based on high-speed footage are typically heavily burdened by noise, as the displacement amplitudes of the vibrating structures are often very small (in the range of micrometers, depending on the structure). The modal information is troublesome to measure as the structure's response is close to, or below, the noise level of the camera-based measurement system. This paper demonstrates modal parameter identification for such noisy measurements. It is shown that by using the Least-Squares Complex-Frequency method combined with the Least-Squares Frequency-Domain method, identification at high frequencies is still possible. By additionally incorporating a more precise sensor to identify the eigenvalues, a hybrid accelerometer/high-speed camera mode shape identification is possible even below the noise floor. An accelerometer measurement is used to identify the eigenvalues, while the camera measurement is used to produce the full-field mode shapes close to 10 kHz. The identified modal parameters improve the quality of the measured modal data and serve as a reduced model of the structure's dynamics.
Minimum Requirements for Taxicab Security Cameras.
Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene
2014-07-01
The taxicab industry's homicide rate is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced significant reductions in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras to support effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators evaluated these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case-scenario image quality thresholds were suggested: XGA-format resolution, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve camera facial identification capability.
MAXI/GSC 7-year Source Catalog
NASA Astrophysics Data System (ADS)
Ueda, Y.; Kawamuro, T.; Hori, T.; Shidatsu, M.; Tanimoto, A.; MAXI Team
2017-10-01
Monitor of All-sky X-ray Image (MAXI) on the International Space Station has been continuously observing the X-ray sky since its launch in 2009. The MAXI survey has achieved the best sensitivity in the 4-10 keV band as an all sky X-ray mission, and is complementary to the ROSAT all sky survey (<2 keV) and hard X-ray (>10 keV) surveys performed with Swift and INTEGRAL. Here we present the latest source catalog of MAXI/Gas Slit Camera (GSC) constructed from the first 7-year data, which is an extension of the 37-month catalog of the high Galactic-latitude sky (Hiroi et al. 2013). We summarize statistical properties of the X-ray sources and results of cross identification with other catalogs.
21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...
21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...
21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...
21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...
21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...
Identification of Active Galactic Nuclei through HST optical variability in the GOODS South field
NASA Astrophysics Data System (ADS)
Pouliasis, Ektoras; Georgantopoulos; Bonanos, A.; HCV Team
2016-08-01
This work aims to identify AGN in the GOODS South deep field through optical variability; the method can easily identify low-luminosity AGN. In particular, we use images in the z-band obtained with the Hubble Space Telescope ACS/WFC camera over 5 epochs separated by ~45 days. Aperture photometry was performed using SExtractor to extract the lightcurves. Several variability indices, such as the median absolute deviation, excess variance, and sigma, were applied to automatically identify the variable sources. After removing artifacts, stars and supernovae from the selected variable sample and keeping only those sources with known photometric or spectroscopic redshift, the optical variability was compared to variability at other wavelengths (X-rays, mid-IR, radio). This multi-wavelength study provides important constraints on the structure and properties of the AGN and their relation to their hosts. This work is part of the validation of the Hubble Catalog of Variables (HCV) project, launched at the National Observatory of Athens by ESA, which aims to identify all sources (pointlike and extended) showing variability, based on the Hubble Source Catalog (HSC, Whitmore et al. 2015). HSC version 1 was released in February 2015 and includes 80 million sources imaged with the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR cameras.
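The variability indices named in this record (median absolute deviation, excess variance) are standard lightcurve statistics. A minimal sketch of how such indices flag variable sources, assuming a single flux lightcurve with photometric errors; the data and thresholds are illustrative, not the HCV project's actual cuts:

```python
import numpy as np

def variability_indices(flux, flux_err):
    """Median absolute deviation and normalized excess variance
    of one lightcurve (flux per epoch, with photometric errors)."""
    med = np.median(flux)
    mad = np.median(np.abs(flux - med))  # robust spread around the median
    mean = np.mean(flux)
    # Variance beyond what the measurement errors alone explain,
    # normalized by the squared mean flux:
    excess_var = (np.var(flux, ddof=1) - np.mean(flux_err**2)) / mean**2
    return mad, excess_var

# Illustrative 5-epoch lightcurve with two bright excursions:
flux = np.array([1.00, 1.90, 0.95, 1.95, 1.05])
err = np.full(5, 0.05)
mad, xs = variability_indices(flux, err)
is_variable = mad > 0.05 or xs > 0.01  # illustrative thresholds
```

In practice each index is computed for every source in the catalog, and sources exceeding a calibrated threshold are passed to the artifact-rejection stage described above.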
Application of infrared uncooled cameras in surveillance systems
NASA Astrophysics Data System (ADS)
Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.
2013-10-01
The recent necessity to protect military bases, convoys and patrols gave serious impetus to the development of multi-sensor security systems for perimeter protection. One of the most important devices used in such systems is the IR camera. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, and simultaneously decreases the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities of detecting a human being. Commercially available IR cameras capable of achieving the desired ranges are compared. The required spatial resolution for detection, recognition and identification is calculated. Detection ranges are simulated using a new model for predicting target acquisition performance, based on the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model ties range performance to image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification ranges were calculated, and the results for devices with selected technical specifications were compared and discussed.
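The TTP metric's predecessor mentioned in this record, the Johnson criteria, makes the link between range and resolution easy to illustrate. A hedged sketch, with one resolvable cycle spanning two detector pixels and the classic N50 cycle counts; the target and sensor parameters below are illustrative assumptions, not values from the paper:

```python
# Classic Johnson-criteria cycle counts for 50% task probability:
N50 = {"detection": 1.0, "recognition": 4.0, "identification": 8.0}

def johnson_range(critical_dim_m, focal_len_m, pixel_pitch_m, n50_cycles):
    """Range (m) at which a target of the given critical dimension
    subtends n50_cycles, one cycle spanning two detector pixels."""
    cycle_angle = 2.0 * pixel_pitch_m / focal_len_m  # radians per cycle
    return critical_dim_m / (n50_cycles * cycle_angle)

# Standing person (critical dimension ~0.75 m) viewed by an uncooled
# microbolometer with 17 um pitch behind a 50 mm lens (assumed values):
ranges = {task: johnson_range(0.75, 0.050, 17e-6, n)
          for task, n in N50.items()}
```

Because identification demands eight times as many cycles on target as detection, its range is one eighth of the detection range for the same sensor, which is why the three ranges quoted in such analyses differ so sharply.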
Minimum Requirements for Taxicab Security Cameras*
Zeng, Shengke; Amandus, Harlan E.; Amendola, Alfred A.; Newbraugh, Bradley H.; Cantis, Douglas M.; Weaver, Darlene
2015-01-01
Problem The taxicab industry's homicide rate is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced significant reductions in taxicab driver homicides. Methods Minimum technical requirements and a standard test protocol for taxicab security cameras to support effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators evaluated these face photographs and voted on the minimum technical requirements for taxicab security cameras. Results Five worst-case-scenario image quality thresholds were suggested: XGA-format resolution, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. Practical Applications These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve camera facial identification capability. PMID:26823992
Dual multispectral and 3D structured light laparoscope
NASA Astrophysics Data System (ADS)
Clancy, Neil T.; Lin, Jianyu; Arya, Shobhit; Hanna, George B.; Elson, Daniel S.
2015-03-01
Intraoperative feedback on tissue function, such as blood volume and oxygenation, would be useful to the surgeon in cases where current clinical practice relies on subjective measures, such as identification of ischaemic bowel or tissue viability during anastomosis formation. Also, tissue surface profiling may be used to detect and identify certain pathologies, as well as diagnosing aspects of tissue health such as gut motility. In this paper a dual modality laparoscopic system is presented that combines multispectral reflectance and 3D surface imaging. White light illumination from a xenon source is detected by a laparoscope-mounted fast filter wheel camera to assemble a multispectral image (MSI) cube. Surface shape is then calculated using a spectrally-encoded structured light (SL) pattern detected by the same camera and triangulated using an active stereo technique. Images of porcine small bowel were acquired during open surgery. Tissue reflectance spectra were acquired and blood volume was calculated at each spatial pixel across the bowel wall and mesentery. SL features were segmented and identified using a `normalised cut' algorithm and the colour vector of each spot. Using the 3D geometry defined by the camera coordinate system the multispectral data could be overlaid onto the surface mesh. Dual MSI and SL imaging has the potential to provide augmented views to the surgeon supplying diagnostic information related to blood supply health and organ function. Future work on this system will include filter optimisation to reduce noise in tissue optical property measurement, and minimising spot identification errors in the SL pattern.
Airborne multispectral identification of individual cotton plants using consumer-grade cameras
USDA-ARS?s Scientific Manuscript database
Although multispectral remote sensing using consumer-grade cameras has successfully identified fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants. The imaging sensors of consumer-grade cameras are based on a Bayer patter...
Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F
2014-09-01
Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplied CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min(-1)) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate the identification and analysis of local Ca(2+) events. It features an intuitive graphical user interface and runs under Matlab and open-source Python. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time-sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing amplitudes and kinetics of events. Copyright © 2014 Elsevier Ltd. All rights reserved.
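The detection strategy this record describes (spatio-temporal noise filtering, thresholding against a fluctuating baseline, sub-pixel localization) can be sketched as follows. This is an illustrative reimplementation of the general idea, not the published routine; the smoothing widths and threshold are assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_events(stack, sigma_xy=1.0, sigma_t=1.0, thresh=5.0):
    """Detect local events in a (time, y, x) fluorescence stack.

    Smooths in space and time, z-scores each pixel's trace against a
    robust (median/MAD) baseline, thresholds, and returns the
    sub-pixel (t, y, x) centroid of each suprathreshold region.
    """
    smooth = ndimage.gaussian_filter(stack.astype(float),
                                     sigma=(sigma_t, sigma_xy, sigma_xy))
    baseline = np.median(smooth, axis=0)
    # 1.4826 * MAD estimates the noise sigma robustly, so transient
    # events do not inflate their own detection threshold.
    noise = 1.4826 * np.median(np.abs(smooth - baseline), axis=0) + 1e-9
    z = (smooth - baseline) / noise
    labels, n = ndimage.label(z > thresh)
    return ndimage.center_of_mass(z, labels, range(1, n + 1))
```

The intensity-weighted centroid is what gives sub-pixel release-site coordinates; a production pipeline would add the event-editing and kinetics-export stages the abstract mentions.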
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a presentation classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts.
The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based. Furthermore, this paper aims to investigate the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Leon X-1, the First Chandra Source
NASA Technical Reports Server (NTRS)
Weisskopf, Martin C.; Aldcroft, Tom; Cameron, Robert A.; Gandhi, Poshak; Foellmi, Cedric; Elsner, Ronald F.; Patel, Sandeep K.; O'Dell, Stephen L.
2004-01-01
Here we present an analysis of the first photons detected with the Chandra X-ray Observatory and an identification of the brightest source in the field, which we named Leon X-1 to honor the momentous contributions of the Chandra Telescope Scientist, Leon Van Speybroeck. The observation took place immediately following the opening of the last door protecting the X-ray telescope. We discuss the unusual operational conditions as the first extra-terrestrial X-ray photons reflected from the telescope onto the ACIS camera. One bright source was apparent to the team at the control center, and the small collection of photons that appeared on the monitor was sufficient to indicate that the telescope had survived the launch and was approximately in focus, even prior to any checks and subsequent adjustments.
Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification.
Su, Chi; Yang, Fan; Zhang, Shiliang; Tian, Qi; Davis, Larry Steven; Gao, Wen
2018-05-01
We propose Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) to address the problem of person re-identification across multiple cameras. Re-identifications on different cameras are considered as related tasks, which allows the shared information among the tasks to be explored to improve re-identification accuracy. The MTL-LORAE framework integrates low-level features with mid-level attributes as the descriptions for persons. To improve the accuracy of such descriptions, we introduce the low-rank attribute embedding, which maps the original binary attributes into a continuous space, utilizing the correlative relationship between each pair of attributes. In this way, inaccurate attributes are rectified and missing attributes are recovered. The resulting objective function is constructed with an attribute embedding error and a quadratic loss concerning class labels. It is solved by an alternating optimization strategy. The proposed MTL-LORAE is tested on four datasets and is validated to outperform the existing methods by significant margins.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image...fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The...moving target detection and classification. SUBJECT TERMS: Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera
Methods for identification of images acquired with digital cameras
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki
2001-02-01
From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
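One of the cues this record lists, noise introduced by the pixel arrays, underlies the sensor pattern noise (PRNU) attribution methods that followed. A minimal sketch of the general idea, assuming a simple Gaussian denoiser; real systems use stronger wavelet denoising and formal statistical tests:

```python
import numpy as np
from scipy import ndimage

def noise_residual(img, sigma=2.0):
    """Image minus a denoised version of itself; the remainder is
    dominated by the sensor's pixel-level pattern noise."""
    return img - ndimage.gaussian_filter(img, sigma)

def camera_reference(images):
    """Average the residuals of many images from one camera to
    estimate its pattern-noise fingerprint."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized cross-correlation of two residual patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# A questioned image is attributed to the candidate camera whose
# reference pattern correlates most strongly with the image's residual.
```

Averaging many residuals suppresses scene content and shot noise while the fixed per-pixel pattern accumulates, which is why the fingerprint survives across otherwise unrelated photographs.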
Bifocal Stereo for Multipath Person Re-Identification
NASA Astrophysics Data System (ADS)
Blott, G.; Heipke, C.
2017-11-01
This work presents an approach for the task of person re-identification by exploiting bifocal stereo cameras. Existing monocular person re-identification approaches show a decreasing working distance when the image resolution is increased to obtain higher re-identification performance. We propose a novel 3D multipath bifocal approach, containing a rectilinear lens with a larger focal length for long-range distances and a fish-eye lens with a smaller focal length for the near range. The person re-identification performance is at least on par with 2D re-identification approaches, but the working distance is increased, and on average 10% higher re-identification performance is achieved in the overlapping field of view compared to a single camera. In addition, the 3D information from the overlapping field of view is exploited to resolve potential 2D ambiguities.
USDA-ARS?s Scientific Manuscript database
Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications have not been well documented in related ...
Passive Infrared Thermographic Imaging for Mobile Robot Object Identification
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Fehlman, W. L.
2010-02-01
The usefulness of thermal infrared imaging as a mobile robot sensing modality is explored, and a set of thermal-physical features used to characterize passive thermal objects in outdoor environments is described. Objects that extend laterally beyond the thermal camera's field of view, such as brick walls, hedges, picket fences, and wood walls as well as compact objects that are laterally within the thermal camera's field of view, such as metal poles and tree trunks, are considered. Classification of passive thermal objects is a subtle process since they are not a source for their own emission of thermal energy. A detailed analysis is included of the acquisition and preprocessing of thermal images, as well as the generation and selection of thermal-physical features from these objects within thermal images. Classification performance using these features is discussed, as a precursor to the design of a physics-based model to automatically classify these objects.
NASA Astrophysics Data System (ADS)
Sojasi, Saeed; Yousefi, Bardia; Liaigre, Kévin; Ibarra-Castanedo, Clemente; Beaudoin, Georges; Maldague, Xavier P. V.; Huot, François; Chamberland, Martin
2017-05-01
Hyperspectral imaging (HSI) in the long-wave infrared spectrum (LWIR) provides spectral and spatial information concerning the emissivity of the surface of materials, which can be used for mineral identification. For this, an endmember, which is the purest form of a mineral, is used as a reference. All pure minerals have specific spectral profiles across the electromagnetic spectrum, which can be thought of as the mineral's fingerprint. The main goal of this paper is the identification of minerals by LWIR hyperspectral imaging using a machine learning scheme. The hyperspectral data are recorded from the energy emitted by the mineral's surface: solar energy is the energy source in remote sensing, while a heating element is the source employed in laboratory experiments. Our work contains three main steps. The first involves obtaining the spectral signatures of pure (single) minerals with a hyperspectral camera in the long-wave infrared (7.7 to 11.8 μm), which measures the emitted radiance from the minerals' surface. The second concerns feature extraction by applying the continuous wavelet transform (CWT), and finally we use a support vector machine classifier with radial basis functions (SVM-RBF) for the classification/identification of minerals. The overall classification accuracy of our work is 90.23 ± 2.66%. In conclusion, the CWT's ability to capture the information in a signal makes it a good marker for the classification and identification of mineral substances.
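The CWT-plus-SVM pipeline this record describes can be sketched end to end. This is a toy version under stated assumptions: the Ricker wavelet and the scale set are arbitrary choices, synthetic Gaussian emission features stand in for real endmember spectra, and scikit-learn's SVC provides the RBF classifier:

```python
import numpy as np
from sklearn.svm import SVC

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet on `points` samples, width `a`."""
    x = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (x / a) ** 2) * np.exp(-0.5 * (x / a) ** 2)

def cwt_features(spectrum, widths=(1, 2, 4, 8)):
    """CWT of a 1-D spectrum: one wavelet convolution per scale,
    concatenated into a single feature vector."""
    rows = [np.convolve(spectrum,
                        ricker(min(10 * w, len(spectrum)), w),
                        mode="same")
            for w in widths]
    return np.concatenate(rows)

# Hypothetical training flow: rows of X are CWT feature vectors of
# labelled endmember spectra, y their mineral labels.
# clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
```

Each scale of the transform responds to emissivity features of a different spectral width, so the concatenated vector encodes both the position and the shape of a mineral's diagnostic bands.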
Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination
NASA Astrophysics Data System (ADS)
Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel
2012-06-01
The capability to track individuals in CCTV cameras is important for surveillance and forensics alike. However, it is laborious to do over multiple cameras. Therefore, an automated system is desirable. In the literature several methods have been proposed, but their robustness against varying viewpoints and illumination is limited, hence their performance in realistic settings is also limited. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains the variety of viewpoints and illumination needed to allow benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.
People counting and re-identification using fusion of video camera and laser scanner
NASA Astrophysics Data System (ADS)
Ling, Bo; Olivera, Santiago; Wagley, Raj
2016-05-01
We present a system for people counting and re-identification that can be used by transit and homeland security agencies. Under an FTA SBIR program, we have developed a preliminary system for transit passenger counting and re-identification using a laser scanner and a video camera. The laser scanner is used to identify the locations of a passenger's head and shoulders in an image, a challenging task in crowded environments. It can also estimate the passenger's height without prior calibration. Various color models have been applied to form color signatures. Finally, using a statistical fusion and classification scheme, passengers are counted and re-identified.
"X-Ray Transients in Star-Forming Regions" and "Hard X-Ray Emission from X-Ray Bursters"
NASA Technical Reports Server (NTRS)
Halpern, Jules P.; Kaaret, Philip
1999-01-01
This grant funded work on the analysis of data obtained with the Burst and Transient Experiment (BATSE) on the Compton Gamma-Ray Observatory. The goal of the work was to search for hard x-ray transients in star forming regions using the all-sky hard x-ray monitoring capability of BATSE. Our initial work led to the discovery of a hard x-ray transient, GRO J1849-03. Follow-up observations of this source made with the Wide Field Camera on BeppoSAX showed that the source should be identified with the previously known x-ray pulsar GS 1843-02, which itself is identified with the x-ray source X1845-024 originally discovered with the SAS-3 satellite. Our identification of the source and measurement of the outburst recurrence time led to the identification of the source as a Be/X-ray binary with a spin period of 94.8 s and an orbital period of 241 days. The funding was used primarily for partial salary and travel support for John Tomsick, then a graduate student at Columbia University. John Tomsick, now Dr. Tomsick, received his Ph.D. from Columbia University in July 1999, based partially on results obtained under this investigation. He is now a postdoctoral research scientist at the University of California, San Diego.
Goñi Gironés, E; Vicente García, F; Serra Arbeloa, P; Estébanez Estébanez, C; Calvo Benito, A; Rodrigo Rincón, I; Camarero Salazar, A; Martínez Lozano, M E
2013-01-01
To define the sentinel node identification rate in breast cancer, its chronological evolution, and the influence of the introduction of a portable gamma camera. A retrospective study was conducted using a prospective database of 754 patients who had undergone a sentinel lymph node biopsy between January 2003 and December 2011. A mixed technique was used in the starting period; subsequently the radiotracer was administered intra-peritumorally the day before surgery. Until October 2009, excision of the sentinel node was guided by a probe; after that date, a portable gamma camera was introduced for intrasurgical detection. The SN was biopsied in 725 of the 754 patients studied, for an overall identification rate of 96.2%. By year of surgery, the identification rate was 93.5% in 2003, 88.7% in 2004, 94.3% in 2005, 95.7% in 2006, 93.3% in 2007, 98.8% in 2008, 97.1% in 2009 and 99.1% in 2010 and 2011. There was a significant difference of 4.6% in the identification rate before and after the incorporation of the portable gamma camera (95% CI of the difference 2-7.2%, P = 0.0037). The overall identification rate exceeds the level recommended by current guidelines, and an improvement in this parameter was observed over the study period. These data suggest that the incorporation of a portable gamma camera played an important role. Copyright © 2013 Elsevier España, S.L. and SEMNIM. All rights reserved.
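The before/after comparison reported in this record is a two-proportion problem. A sketch of how such a difference, its confidence interval and p-value are computed; the counts below are hypothetical, chosen only to be consistent with the overall 725/754 rate, since the abstract does not give the per-period numbers:

```python
import math
from scipy.stats import norm

def two_proportion_test(x1, n1, x2, n2, alpha=0.05):
    """z-test and Wald confidence interval for p2 - p1."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    # Pooled standard error for the hypothesis test:
    p = (x1 + x2) / (n1 + n2)
    se_pool = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - norm.cdf(abs(diff / se_pool)))
    # Unpooled standard error for the confidence interval:
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    zc = norm.ppf(1 - alpha / 2)
    return diff, (diff - zc * se, diff + zc * se), p_value

# Hypothetical split: 523/550 nodes identified before the portable
# gamma camera, 202/204 after (totals match the reported 725/754).
diff, ci, p = two_proportion_test(523, 550, 202, 204)
```

A confidence interval for the difference that excludes zero, as in the abstract's 2-7.2%, is what justifies calling the improvement significant.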
Wind Tunnel Tests of the Space Shuttle Foam Insulation with Simulated Debonded Regions
1981-04-01
set identification number Gage sensitivity Calculated gage sensitivity S2 = S1 * f(TGE) Material specimen identification designation Free-stream... Color motion pictures (2 cameras) and pre- and posttest color stills recorded any changes in the samples. The movie cameras were operated at... The oblique shock wave generated by the wedge reduces the free-stream Mach number to the desired local Mach number. Since the free-stream
A compact neutron scatter camera for field deployment
Goldsmith, John E. M.; Gerling, Mark D.; Brennan, James S.
2016-08-23
Here, we describe a very compact (0.9 m high, 0.4 m diameter, 40 kg) battery-operable neutron scatter camera designed for field deployment. Unlike most other systems, the sixteen liquid-scintillator detection cells are arranged to provide omnidirectional (4π) imaging with sensitivity comparable to a conventional two-plane system. Although designed primarily to operate as a neutron scatter camera for localizing energetic neutron sources, it also functions as a Compton camera for localizing gamma sources. In addition to describing the radionuclide source localization capabilities of this system, we demonstrate how it provides neutron spectra that can distinguish plutonium metal from plutonium oxide sources, in addition to the easier task of distinguishing AmBe from fission sources.
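The neutron spectra such systems produce rest on recovering each neutron's energy from its time of flight between scatter cells. A minimal sketch of that kinematic step (non-relativistic, adequate for fission-energy neutrons; the cell spacing and timing values below are illustrative, not the instrument's):

```python
# Classical kinetic energy of a neutron from its time of flight
# between two detection cells (non-relativistic approximation).
NEUTRON_MASS_MEV = 939.565   # neutron rest mass, MeV/c^2
C_M_PER_NS = 0.299792458     # speed of light, m/ns

def neutron_energy_mev(distance_m, tof_ns):
    """Neutron kinetic energy (MeV) from flight distance and time."""
    beta = (distance_m / tof_ns) / C_M_PER_NS   # v/c
    return 0.5 * NEUTRON_MASS_MEV * beta ** 2
```

For a 0.3 m cell separation, a roughly 22 ns flight time corresponds to a ~1 MeV neutron; shorter flight times map to higher energies.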
Content-based image exploitation for situational awareness
NASA Astrophysics Data System (ADS)
Gains, David
2008-04-01
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
Automated Meteor Fluxes with a Wide-Field Meteor Camera Network
NASA Technical Reports Server (NTRS)
Blaauw, R. C.; Campbell-Brown, M. D.; Cooke, W.; Weryk, R. J.; Gill, J.; Musci, R.
2013-01-01
Within NASA, the Meteoroid Environment Office (MEO) is charged to monitor the meteoroid environment in near-Earth space for the protection of satellites and spacecraft. The MEO has recently established a two-station system to calculate automated meteor fluxes in the millimeter-size range. The cameras each consist of a 17 mm focal length Schneider lens on a Watec 902H2 Ultimate CCD video camera, producing a 21.7 x 16.3 degree field of view. This configuration has a red-sensitive limiting meteor magnitude of about +5. The stations are located in the southeastern USA, 31.8 kilometers apart, and are aimed at a location 90 km above a point 50 km equidistant from each station, which optimizes the common volume. Both single-station and double-station fluxes are found, each having benefits; more meteors will be detected in a single camera than will be seen in both cameras, producing a better determined flux, but double-station detections allow for unambiguous shower associations and permit speed/orbit determinations. Video from the cameras is fed into Linux computers running the ASGARD (All Sky and Guided Automatic Real-time Detection) software, created by Rob Weryk of the University of Western Ontario Meteor Physics Group. ASGARD performs the meteor detection/photometry, and invokes the MILIG and MORB codes to determine the trajectory, speed, and orbit of the meteor. A subroutine in ASGARD allows for approximate shower identification in single-station meteors. The ASGARD output is used in routines to calculate the flux in units of #/sq km/hour. The flux algorithm employed here differs from others currently in use in that it does not assume a single height for all meteors observed in the common camera volume. In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the radiant of the active shower or sporadic source. The flux per height interval is summed to obtain the total meteor flux.
As ASGARD also computes the meteor mass from the photometry, a mass flux can also be calculated. Weather conditions in the southeastern United States are seldom ideal, which introduces the difficulty of a variable sky background. First, a weather algorithm indicates whether sky conditions are clear enough to calculate fluxes, at which point a limiting magnitude algorithm is employed. The limiting magnitude algorithm performs a fit of stellar magnitudes versus camera intensities. The stellar limiting magnitude is derived from this fit and easily converted to a limiting meteor magnitude for the active shower or sporadic source.
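The per-height-interval summation described above reduces, for a single observing interval, to a sketch like the following; the counts and collecting areas are illustrative, not MEO values:

```python
def total_meteor_flux(counts, areas_km2, hours):
    """Total meteor flux (#/km^2/hr) summed over height intervals.

    counts    : meteors detected in each height interval
    areas_km2 : collecting area of each interval (in the MEO scheme
                these depend on the radiant of the active shower or
                sporadic source, not on a single assumed height)
    hours     : effective clear-sky observing time
    """
    if len(counts) != len(areas_km2):
        raise ValueError("one count per height interval required")
    # Flux per interval is n_i / (A_i * T); the total is the sum.
    return sum(n / (a * hours) for n, a in zip(counts, areas_km2))
```

For example, 2 meteors over 100 km² plus 4 meteors over 200 km² in 2 hours gives a total flux of 0.02 per km² per hour.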
Thermographic Nondestructive Evaluation of the Space Shuttle Main Engine Nozzle
NASA Technical Reports Server (NTRS)
Walker, James L.; Lansing, Matthew D.; Russell, Samuel S.; Caraccioli, Paul; Whitaker, Ann F. (Technical Monitor)
2000-01-01
The methods and results presented in this summary address the thermographic identification of interstitial leaks in the Space Shuttle Main Engine nozzles. A highly sensitive digital infrared camera is used to record the minute cooling effects associated with a leak source, such as a crack or pinhole, hidden within the nozzle wall by observing the inner "hot wall" surface as the nozzle is pressurized. These images are enhanced by digitally subtracting a thermal reference image taken before pressurization, greatly diminishing background noise. The method provides a nonintrusive way of localizing the tube that is leaking and the exact leak source position to within a very small axial distance. Many of the factors that influence the inspectability of the nozzle are addressed, including pressure rate, peak pressure, gas type, ambient temperature and surface preparation.
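The background-suppression step described above is a plain frame subtraction followed by thresholding for cooling. A minimal sketch, assuming simple 2D image arrays (the array names and threshold are illustrative):

```python
import numpy as np

def leak_enhanced(pressurized, reference):
    """Subtract a pre-pressurization thermal reference frame so that
    only pressure-induced temperature changes (leak signatures)
    remain; the static background cancels out."""
    return pressurized.astype(np.float64) - reference.astype(np.float64)

def leak_candidates(diff, threshold):
    """Pixel coordinates whose cooling exceeds `threshold` counts
    (cooling appears as a negative difference)."""
    return np.argwhere(diff < -threshold)
```

A pixel that cools by 5 counts against an otherwise unchanged scene is the only candidate returned, which is the essence of how the reference subtraction localizes the leak.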
Improved camera for better X-ray powder photographs
NASA Technical Reports Server (NTRS)
Parrish, W.; Vajda, I. E.
1969-01-01
Camera obtains powder-type photographs of single crystals or polycrystalline powder specimens. X-ray diffraction photographs of a powder specimen are characterized by improved resolution and greater intensity. A reasonably good powder pattern of small samples can be produced for identification purposes.
A novel method for detecting light source for digital images forensic
NASA Astrophysics Data System (ADS)
Roy, A. K.; Mitra, S. K.; Agrawal, R.
2011-06-01
Image manipulation has been practiced for centuries. These manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with some limitation. Very rarely has any method tried to capitalize on the way the image was taken by the camera. We propose a new method that is based on light and its shade, as light and shade are the fundamental input resources that may carry all the information of the image. The proposed method measures the direction of the light source and uses this light-based technique for identification of any intentional partial manipulation in the digital image. The method was tested on known manipulated images and correctly identified the light sources. The light source of an image is measured in terms of angle. The experimental results show the robustness of the methodology.
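The abstract does not spell out how the light-source angle is estimated. A common approach in the forensics literature is a Lambertian least-squares fit: with surface normals N and observed intensities I, solve I = ρ(N·L) + ambient for the light direction L. The sketch below is that generic technique under an assumed Lambertian model, not necessarily the authors' exact formulation:

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares light direction under a Lambertian model,
    I = rho * (N . L) + ambient.

    normals     : (n, 3) unit surface normals
    intensities : (n,) observed brightness values
    Solves the linear system for (rho * L, ambient) and returns the
    unit light-direction vector.
    """
    A = np.hstack([normals, np.ones((len(normals), 1))])
    x, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    L = x[:-1]                       # rho * L; scale drops out below
    return L / np.linalg.norm(L)
```

Regions lit by the same source should yield consistent angles; a spliced-in region fitted separately will disagree, which is the manipulation cue.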
Far-ultraviolet stellar photometry: A field in Orion
NASA Astrophysics Data System (ADS)
Schmidt, Edward G.; Carruthers, George R.
1993-12-01
Far-ultraviolet photometry for 625 objects in Orion is presented. These data were extracted from electrographic camera images obtained during sounding rocket flights in 1975 and 1982. The 1975 images were centered close to the belt of Orion while the 1982 images were centered approximately 9 deg further north. One hundred and fifty stars fell in the overlapping region and were observed with both cameras. Sixty-eight percent of the objects were tentatively identified with known stars using the SIMBAD database while another 24% are blends of objects too close together to separate with our resolution. As in previous studies, the majority of the identified ultraviolet sources are early-type stars. However, there are a significant number for which no such identification was possible, and we suggest that these are interesting objects which should be further investigated. Seven stars were found which were bright in the ultraviolet but faint in the visible. We suggest that some of these are nearby white dwarfs.
Narrowband infrared emitters for combat ID
NASA Astrophysics Data System (ADS)
Pralle, Martin U.; Puscasu, Irina; Daly, James; Fallon, Keith; Loges, Peter; Greenwald, Anton; Johnson, Edward
2007-04-01
There is a strong desire to create narrowband infrared light sources as personnel beacons for application in infrared Identify Friend or Foe (IFF) systems. This demand has grown dramatically in recent years with the reports of friendly fire casualties in Afghanistan and Iraq. ICx Photonics' photonic crystal enhanced (PCE(TM)) infrared emitter technology affords the possibility of creating narrowband IR light sources tuned to specific IR wavebands (near 1-2 microns, mid 3-5 microns, and long 8-12 microns), making it an ideal solution for infrared IFF. This technology is based on a metal-coated 2D photonic crystal of air holes in a silicon substrate. Upon thermal excitation, the photonic crystal modifies the emitted radiation, yielding narrowband IR light with a center wavelength commensurate with the periodicity of the lattice. We have integrated this technology with microhotplate MEMS devices to yield 15 mW IR light sources in the 3-5 micron waveband with wall plug efficiencies in excess of 10%, 2 orders of magnitude more efficient than conventional IR LEDs. We have further extended this technology into the LWIR with a light source that produces 9 mW of 8-12 micron light at an efficiency of 8%. Viewing distances >500 meters were observed with fielded camera technologies, ideal for ground-to-ground troop identification. When grouped into an emitter panel, the viewing distances were extended to 5 miles, ideal for ground-to-air identification.
Sweatt, William C.
1998-01-01
A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.
An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera
NASA Astrophysics Data System (ADS)
Kumar, K. S. Chidanand; Bhowmick, Brojeshwar
A driver drowsiness identification system has been proposed that generates alarms when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect drowsiness of a driver in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking pupils. The face region is first determined using the Euler number and template matching. Pupils are then located in the face region. In subsequent frames of video, pupils are tracked in order to find whether the eyes are open or closed. If the eyes are closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
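The closed-for-several-consecutive-frames rule reduces to a simple run counter over per-frame eye states. A minimal sketch; the threshold value is an assumption, not taken from the paper:

```python
def drowsiness_alarm(eye_open_per_frame, closed_frames_threshold=15):
    """Per-frame alarm decisions for a drowsiness detector.

    eye_open_per_frame      : iterable of booleans, one per video frame
                              (True = eyes detected open)
    closed_frames_threshold : consecutive closed frames that count as
                              falling asleep (assumed value)
    Returns a list of booleans: True where the alarm should fire.
    """
    closed_run = 0
    alarms = []
    for is_open in eye_open_per_frame:
        # A single open frame resets the run; a blink never alarms.
        closed_run = 0 if is_open else closed_run + 1
        alarms.append(closed_run >= closed_frames_threshold)
    return alarms
```

With a threshold of 3, a run of 4 closed frames fires on its 3rd and 4th frames, a single open frame resets the counter, and a fresh 3-frame run fires again.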
Modified algorithm for mineral identification in LWIR hyperspectral imagery
NASA Astrophysics Data System (ADS)
Yousefi, Bardia; Sojasi, Saeed; Liaigre, Kévin; Ibarra Castanedo, Clemente; Beaudoin, Georges; Huot, François; Maldague, Xavier P. V.; Chamberland, Martin
2017-05-01
The applications of hyperspectral infrared imagery in different fields of research are significant and growing. It is mainly used in remote sensing for target detection, vegetation detection, urban area categorization, astronomy and geological applications. The geological applications of this technology mainly consist of mineral identification using airborne or satellite imagery. We address a quantitative and qualitative assessment of mineral identification under laboratory conditions. We strive to identify nine different mineral grains (Biotite, Diopside, Epidote, Goethite, Kyanite, Scheelite, Smithsonite, Tourmaline, Quartz). A hyperspectral camera in the Long Wave Infrared (LWIR, 7.7-11.8 μm) with a LW-macro lens providing a spatial resolution of 100 μm, an infragold plate, and a heating source are the instruments used in the experiment. The proposed algorithm clusters all the pixel-spectra into different categories. Then the best representatives of each cluster are chosen and compared with the ASTER spectral library of JPL/NASA through spectral comparison techniques, such as the Spectral Angle Mapper (SAM) and Normalized Cross Correlation (NCC). The results of the algorithm indicate significant computational efficiency (more than 20 times faster) as compared to previous algorithms and have shown a promising performance for mineral identification.
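The two spectral-comparison measures named above have standard definitions: SAM is the angle between two spectra viewed as vectors, and NCC is their mean-removed, normalized correlation. A minimal sketch of both:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper: angle (radians) between two spectra.
    Insensitive to overall brightness scaling (0 for a scaled copy)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def normalized_cross_correlation(a, b):
    """NCC between two spectra: 1.0 for a perfect (affine) match,
    -1.0 for a perfectly anti-correlated spectrum."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.dot(a0, b0) / (np.linalg.norm(a0) * np.linalg.norm(b0)))
```

A cluster representative would be scored against each library spectrum with these functions, and the library mineral with the smallest angle (or largest NCC) taken as the identification.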
Saletti, Dominique
2017-01-01
Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505
A DWARF NOVA IN THE GLOBULAR CLUSTER M13
DOE Office of Scientific and Technical Information (OSTI.GOV)
Servillat, M.; Van den Berg, M.; Grindlay, J.
Dwarf novae (DNe) in globular clusters (GCs) seem to be rare, with only 13 detections in the 157 known Galactic GCs. We report the identification of a new DN in M13, the 14th DN identified in a GC to date. Using the 2 m Faulkes Telescope North, we conducted a search for stars in M13 that show variability over a year (2005-2006) on timescales of days and months. This led to the detection of one DN showing several outbursts. A Chandra X-ray source is coincident with this DN and shows both a spectrum and variability consistent with that expected from a DN, thus supporting the identification. We searched for a counterpart in Hubble Space Telescope Advanced Camera for Surveys/Wide Field Camera archived images and found at least 11 candidates, of which we could characterize only the 7 brightest, including one with a 3σ Hα excess and a faint blue star. The detection of one DN when more could have been expected likely indicates that our knowledge of the global Galactic population of cataclysmic variables is too limited. The proportion of DNe may be lower than found in catalogs, or they may have a much smaller mean duty cycle (≈1%) as proposed by some population synthesis models and recent observations in the field.
The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.
ERIC Educational Resources Information Center
McCain, Thomas A.; Wakshlag, Jacob J.
The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight video taped television newscasts which were subsequently presented to eight…
NASA Technical Reports Server (NTRS)
Cooke, William J.
2013-01-01
In the summer of 2008, the NASA Meteoroid Environments Office (MEO) began to establish a video fireball network, based on the following objectives: (1) determine the speed distribution of cm-size meteoroids, (2) determine the major sources of cm-size meteoroids (showers/sporadic sources), (3) characterize meteor showers (numbers, magnitudes, trajectories, orbits), (4) determine the size at which showers dominate the meteor flux, (5) discriminate between re-entering space debris and meteors, and (6) locate meteorite falls. In order to achieve the above with the limited resources available to the MEO, it was necessary that the network function almost fully autonomously, with very little required from humans in the areas of upkeep or analysis. With this in mind, the camera design and, most importantly, the ASGARD meteor detection software were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN), as NASA has a cooperative agreement with Western's Meteor Physics Group. 15 cameras have been built, and the network now consists of 8 operational cameras, with at least 4 more slated for deployment in calendar year 2013. The goal is to have 15 systems, distributed in two or more groups. Every morning, a server automatically generates an email and a web page (http://fireballs.ndc.nasa.gov) containing an automated analysis of the previous night's events. This analysis provides the following for each meteor: UTC date and time, speed, start and end locations (longitude, latitude, altitude), radiant, shower identification, light curve (meteor absolute magnitude as a function of time), photometric mass, orbital elements, and Tisserand parameter. Radiant/orbital plots and various histograms (number versus speed, time, etc.) are also produced. After more than four years of operation, over 5,000 multi-station fireballs have been observed, 3 of which potentially dropped meteorites.
A database containing data on all these events, including the videos and calibration information, has been developed and is being modified to include data from the SOMN and other camera networks.
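Among the per-meteor quantities listed above, the Tisserand parameter (conventionally taken with respect to Jupiter, and used to separate asteroidal from cometary orbits) has a simple closed form in the orbital elements. A sketch of that calculation; the Jupiter semi-major axis constant is a standard value, not taken from the source:

```python
from math import sqrt, cos, radians

A_JUPITER_AU = 5.204   # Jupiter's semi-major axis, AU

def tisserand_jupiter(a_au, e, inc_deg):
    """Tisserand parameter with respect to Jupiter:
    T_J = a_J/a + 2 * sqrt((a/a_J) * (1 - e^2)) * cos(i).
    T_J > 3 is typically asteroidal; T_J < 2 is typically
    Halley-type cometary."""
    return (A_JUPITER_AU / a_au
            + 2.0 * sqrt((a_au / A_JUPITER_AU) * (1.0 - e * e))
            * cos(radians(inc_deg)))
```

A circular, coplanar orbit at Jupiter's distance gives exactly T_J = 3, a main-belt-like orbit gives T_J > 3, and a retrograde long-period orbit gives T_J well below 2.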
Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments
Mossel, Annette
2015-01-01
In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
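The abstract does not detail the 3D point reconstruction step; a common core of outside-in stereo tracking is linear (DLT) triangulation of a matched marker observation from two calibrated cameras. The sketch below is that generic technique under assumed normalized pixel coordinates, not the paper's specific pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point.

    P1, P2 : 3x4 projection matrices of the two cameras
    x1, x2 : 2-vector observations of the same point in each image
    Builds the homogeneous system A*X = 0 from x*(P[2].X) - P[row].X = 0
    and solves it with SVD; returns the point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous point
    return X[:3] / X[3]
```

With one camera at the origin and one translated 1 m along x, a point projected into both views is recovered exactly; in practice the SVD solution degrades gracefully with pixel noise, which matters at the 100 m ranges reported above.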
Optical Verification Laboratory Demonstration System for High Security Identification Cards
NASA Technical Reports Server (NTRS)
Javidi, Bahram
1997-01-01
Document fraud, including unauthorized duplication of identification cards and credit cards, is a serious problem facing the government, banks, businesses, and consumers. In addition, counterfeit products such as computer chips and compact discs are arriving on our shores in great numbers. With the rapid advances in computers, CCD technology, image processing hardware and software, printers, scanners, and copiers, it is becoming increasingly easy to reproduce pictures, logos, symbols, paper currency, or patterns. These problems have stimulated an interest in research, development and publications in security technology. Some ID cards, credit cards and passports currently use holograms as a security measure to thwart copying. The holograms are inspected by the human eye. In theory, the hologram cannot be reproduced by an unauthorized person using commercially-available optical components; in practice, however, technology has advanced to the point where the holographic image can be acquired from a credit card (photographed or captured by a CCD camera) and a new hologram synthesized using commercially-available optical components or hologram-producing equipment. Therefore, a pattern that can be read by a conventional light source and a CCD camera can be reproduced. An optical security and anti-copying device that provides significant security improvements over existing security technology was demonstrated. The system can be applied for security verification of credit cards, passports, and other IDs so that they cannot easily be reproduced. We have used a new scheme of complex phase/amplitude patterns that cannot be seen and cannot be copied by an intensity-sensitive detector such as a CCD camera. A random phase mask is bonded to a primary identification pattern which could also be phase encoded. The pattern could be a fingerprint, a picture of a face, or a signature.
The proposed optical processing device is designed to identify both the random phase mask and the primary pattern [1-3]. We have demonstrated experimentally an optical processor for security verification of objects, products, and persons. This demonstration is very important to encourage industries to consider the proposed system for research and development.
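The key property claimed above, that a pure phase mask carries no information visible to an intensity detector, can be illustrated numerically. The matched-filter inner product below is a drastic simplification of the optical correlator, intended only to show why the correct phase key is required; it is not the authors' processor:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random phase mask is a unit-amplitude complex field: every pixel
# has |exp(i*phi)| = 1, so a CCD recording |field|^2 sees a uniform
# image and cannot copy the key.
phase = rng.uniform(0.0, 2.0 * np.pi, size=(32, 32))
mask = np.exp(1j * phase)
intensity = np.abs(mask) ** 2          # what an intensity detector sees

def verify(field, key):
    """Normalized matched-filter score in [0, 1]:
    1.0 when the presented field matches the stored phase key,
    near zero for an independent random phase mask."""
    score = np.vdot(key, field) / (np.linalg.norm(key) * np.linalg.norm(field))
    return abs(score)
```

The recorded intensity is exactly 1 everywhere (no recoverable pattern), the correct key scores 1.0, and an independent random mask scores close to zero.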
Detection of Nuclear Sources by UAV Teleoperation Using a Visuo-Haptic Augmented Reality Interface
Micconi, Giorgio; Caselli, Stefano; Benassi, Giacomo; Zambelli, Nicola; Bettelli, Manuele
2017-01-01
A visuo-haptic augmented reality (VHAR) interface is presented enabling an operator to teleoperate an unmanned aerial vehicle (UAV) equipped with a custom CdZnTe-based spectroscopic gamma-ray detector in outdoor environments. The task is to localize nuclear radiation sources, whose location is unknown to the user, without the close exposure of the operator. The developed detector also enables identification of the localized nuclear sources. The aim of the VHAR interface is to increase the situation awareness of the operator. The user teleoperates the UAV using a 3DOF haptic device that provides an attractive force feedback around the location of the most intense detected radiation source. Moreover, a fixed camera on the ground observes the environment where the UAV is flying. A 3D augmented reality scene is displayed on a computer screen accessible to the operator. Multiple types of graphical overlays are shown, including sensor data acquired by the nuclear radiation detector, a virtual cursor that tracks the UAV and geographical information, such as buildings. Experiments performed in a real environment are reported using an intense nuclear source. PMID:28961198
Detection of Nuclear Sources by UAV Teleoperation Using a Visuo-Haptic Augmented Reality Interface.
Aleotti, Jacopo; Micconi, Giorgio; Caselli, Stefano; Benassi, Giacomo; Zambelli, Nicola; Bettelli, Manuele; Zappettini, Andrea
2017-09-29
A visuo-haptic augmented reality (VHAR) interface is presented enabling an operator to teleoperate an unmanned aerial vehicle (UAV) equipped with a custom CdZnTe-based spectroscopic gamma-ray detector in outdoor environments. The task is to localize nuclear radiation sources, whose location is unknown to the user, without the close exposure of the operator. The developed detector also enables identification of the localized nuclear sources. The aim of the VHAR interface is to increase the situation awareness of the operator. The user teleoperates the UAV using a 3DOF haptic device that provides an attractive force feedback around the location of the most intense detected radiation source. Moreover, a fixed camera on the ground observes the environment where the UAV is flying. A 3D augmented reality scene is displayed on a computer screen accessible to the operator. Multiple types of graphical overlays are shown, including sensor data acquired by the nuclear radiation detector, a virtual cursor that tracks the UAV and geographical information, such as buildings. Experiments performed in a real environment are reported using an intense nuclear source.
NASA Astrophysics Data System (ADS)
Chapin, Edward L.; Pope, Alexandra; Scott, Douglas; Aretxaga, Itziar; Austermann, Jason E.; Chary, Ranga-Ram; Coppin, Kristen; Halpern, Mark; Hughes, David H.; Lowenthal, James D.; Morrison, Glenn E.; Perera, Thushara A.; Scott, Kimberly S.; Wilson, Grant W.; Yun, Min S.
2009-10-01
We present results from a multiwavelength study of 29 sources (false detection probabilities <5 per cent) from a survey of the Great Observatories Origins Deep Survey-North (GOODS-N) field at 1.1mm using the Astronomical Thermal Emission Camera (AzTEC). Comparing with existing 850μm Submillimetre Common-User Bolometer Array (SCUBA) studies in the field, we examine differences in the source populations selected at the two wavelengths. The AzTEC observations uniformly cover the entire survey field to a 1σ depth of ~1mJy. Searching deep 1.4GHz Very Large Array (VLA) and Spitzer 3-24μm catalogues, we identify robust counterparts for 21 1.1mm sources, and tentative associations for the remaining objects. The redshift distribution of AzTEC sources is inferred from available spectroscopic and photometric redshifts. We find a median redshift of z = 2.7, somewhat higher than z = 2.0 for 850μm selected sources in the same field, and our lowest redshift identification lies at a spectroscopic redshift z = 1.1460. We measure the 850μm to 1.1mm colour of our sources and do not find evidence for '850μm dropouts', which can be explained by the low signal-to-noise ratio of the observations. We also combine these observed colours with spectroscopic redshifts to derive the range of dust temperatures T, and dust emissivity indices β for the sample, concluding that existing estimates T ~ 30K and β ~ 1.75 are consistent with these new data.
X-ray detectors at the Linac Coherent Light Source.
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; Carron, Sebastian; Dragone, Angelo; Freytag, Dietrich; Haller, Gunther; Hart, Philip; Hasi, Jasmine; Herbst, Ryan; Herrmann, Sven; Kenney, Chris; Markovic, Bojan; Nishimura, Kurtis; Osier, Shawn; Pines, Jack; Reese, Benjamin; Segal, Julie; Tomada, Astrid; Weaver, Matt
2015-05-01
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.
X-ray detectors at the Linac Coherent Light Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.
X-ray detectors at the Linac Coherent Light Source
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; ...
2015-04-21
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.
X-ray detectors at the Linac Coherent Light Source
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; Carron, Sebastian; Dragone, Angelo; Freytag, Dietrich; Haller, Gunther; Hart, Philip; Hasi, Jasmine; Herbst, Ryan; Herrmann, Sven; Kenney, Chris; Markovic, Bojan; Nishimura, Kurtis; Osier, Shawn; Pines, Jack; Reese, Benjamin; Segal, Julie; Tomada, Astrid; Weaver, Matt
2015-01-01
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced. PMID:25931071
Lunar Reconnaissance Orbiter Camera (LROC) instrument overview
Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.
2010-01-01
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.
NASA Astrophysics Data System (ADS)
Narayanan, V. L.
2017-12-01
For the first time, high speed imaging of lightning from a few isolated tropical thunderstorms is reported from India. The recordings were made from Tirupati (13.6°N, 79.4°E, 180 m above mean sea level) during the summer months with a digital camera capable of recording high speed videos at up to 480 fps. At 480 fps, each individual video file is recorded for 30 s, resulting in 14400 deinterlaced images per video file. An automatic processing algorithm is developed for quick identification and analysis of the lightning events, which will be discussed in detail. Preliminary results indicating different types of phenomena associated with lightning, such as stepped leaders, dart leaders, and luminous channels corresponding to continuing currents and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud and cloud-air discharges will also be displayed. This indicates that although high speed cameras of a few thousand fps are preferred for detailed studies of lightning, moderate-range CMOS-sensor-based digital cameras can provide important information as well. The lightning imaging activity presented here was initiated as an amateur effort, and plans are currently underway to propose a suite of supporting instruments to conduct coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are in such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings made with affordable digital cameras will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.
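The automatic event-identification step mentioned above is not detailed in the abstract; a minimal frame-differencing sketch of how such a detector could work (the function name, window length and threshold factor are illustrative assumptions, not the paper's algorithm) is:

```python
# Toy lightning-frame detector: a frame whose mean brightness jumps
# well above a trailing-window baseline is flagged as an event.

def detect_events(frame_means, k=3.0, window=5):
    """Return indices of frames whose mean brightness exceeds
    k times the average of the preceding `window` frames."""
    events = []
    for i in range(window, len(frame_means)):
        baseline = sum(frame_means[i - window:i]) / window
        if frame_means[i] > k * baseline:
            events.append(i)
    return events

# Dark sky near 1.0, with two bright lightning frames at indices 5 and 10.
means = [1.0, 1.1, 0.9, 1.0, 1.0, 9.5, 1.2, 1.0, 1.0, 1.0, 8.7]
print(detect_events(means))  # [5, 10]
```

On 14400 deinterlaced frames per 30 s file, a scan like this reduces manual review to a handful of candidate frames.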
Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Edstrom Jr., D.; Ruan, J.
2016-10-09
We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.
Human tracking over camera networks: a review
NASA Astrophysics Data System (ADS)
Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang
2017-12-01
In recent years, automated human tracking over camera networks is getting essential for video surveillance. The tasks of tracking human over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potentials for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.
Optical, Near-IR, and X-Ray Observations of SN 2015J and Its Host Galaxy
NASA Astrophysics Data System (ADS)
Nucita, A. A.; De Paolis, F.; Saxton, R.; Testa, V.; Strafella, F.; Read, A.; Licchelli, D.; Ingrosso, G.; Convenga, F.; Boutsia, K.
2017-12-01
SN 2015J was discovered on 2015 April 27th and is classified as an SN IIn. At first, it appeared to be an orphan SN candidate, i.e., without any clear identification of its host galaxy. Here, we present an analysis of the observations carried out by the VLT 8 m class telescope with the FORS2 camera in the R band and the Magellan telescope (6.5 m) equipped with the IMACS Short-Camera (V and I filters) and the FourStar camera (Ks filter). We show that SN 2015J resides in what appears to be a very compact galaxy, establishing a relation between the SN event and its natural host. We also present and discuss archival and new X-ray data centered on SN 2015J. At the time of the supernova explosion, Swift/XRT observations were made and a weak X-ray source was detected at the location of SN 2015J. Almost one year later, the same source was unambiguously identified during serendipitous observations by Swift/XRT and XMM-Newton, clearly showing an enhancement of the 0.3-10 keV band flux by a factor ≃ 30 with respect to the initial state. Swift/XRT observations show that the source is still active in the X-rays at a level of ≃ 0.05 counts s⁻¹. The unabsorbed X-ray luminosity derived from the XMM-Newton slew and Swift observations, L_X ≃ 5 × 10⁴¹ erg s⁻¹, places SN 2015J among the brightest young supernovae in X-rays. Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA, with ESO Telescopes at the La Silla-Paranal Observatory under program ID 298.D-5016(A), and with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. We also acknowledge the use of public data from the Swift data archive.
Waste inspection tomography (WIT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardi, R.T.
1996-12-31
WIT is a self-sufficient mobile semitrailer for nondestructive evaluation and nondestructive assay of nuclear waste drums using x-ray and gamma-ray tomography. The recently completed Phase I included the design, fabrication, and initial testing of all WIT subsystems installed on-board the trailer. Initial test results include 2 MeV digital radiography, computed tomography, Anger camera imaging, single photon emission computed tomography, gamma-ray spectroscopy, collimated gamma scanning, and active and passive computed tomography using a 1.4 mCi source of ¹⁶⁶Ho. These techniques were initially demonstrated on a 55-gallon phantom drum with 3 simulated waste matrices of combustibles, heterogeneous metals, and cement using check sources of gamma active isotopes such as ¹³⁷Cs and ¹³³Ba with 9-250 μCi activities. Waste matrix identification, isotopic identification, and attenuation-corrected gamma activity determination were demonstrated nondestructively and noninvasively in Phase I. Currently ongoing Phase II involves DOE site field test demonstrations at LLNL, RFETS, and INEL with real nuclear waste drums. Current WIT experience includes 55 gallon drums of cement, graphite, sludge, glass, metals, and combustibles. Thus far WIT has inspected drums with 0-20 gms of ²³⁹Pu.
The Legal Implications of Surveillance Cameras
ERIC Educational Resources Information Center
Steketee, Amy M.
2012-01-01
The nature of school security has changed dramatically over the last decade. Schools employ various measures, from metal detectors to identification badges to drug testing, to promote the safety and security of staff and students. One of the increasingly prevalent measures is the use of security cameras. In fact, the U.S. Department of Education…
Sweatt, W.C.
1998-09-08
A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.
Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.
2016-01-01
Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects
Lambers, Martin; Kolb, Andreas
2017-01-01
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.
Bulczak, David; Lambers, Martin; Kolb, Andreas
2017-12-22
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
Camera sensor arrangement for crop/weed detection accuracy in agronomic images.
Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo
2013-04-02
In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.
A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.
2003-06-01
The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.
The electromagnetic interference of mobile phones on the function of a γ-camera.
Javadi, Hamid; Azizmohammadi, Zahra; Mahmoud Pashazadeh, Ali; Neshandar Asli, Isa; Moazzeni, Taleb; Baharfar, Nastaran; Shafiei, Babak; Nabipour, Iraj; Assadi, Majid
2014-03-01
The aim of the present study is to evaluate whether or not the electromagnetic field generated by mobile phones interferes with the function of a SPECT γ-camera during data acquisition. We tested the effects of 7 models of mobile phones on 1 SPECT γ-camera. The mobile phones were tested when making a call, in ringing mode, and in standby mode. The γ-camera function was assessed during data acquisition from a planar source and a point source of Tc with activities of 10 mCi and 3 mCi, respectively. A significant visual decrease in count number was considered to be electromagnetic interference (EMI). The percentage of induced EMI with the γ-camera per mobile phone was in the range of 0% to 100%. The incidence of EMI was mainly observed in the first seconds of ringing and then mitigated in the following frames. Mobile phones are portable sources of electromagnetic radiation, and there is interference potential with the function of SPECT γ-cameras leading to adverse effects on the quality of the acquired images.
Opportunistic traffic sensing using existing video sources (phase II).
DOT National Transportation Integrated Search
2017-02-01
The purpose of the project reported on here was to investigate methods for automatic traffic sensing using traffic surveillance : cameras, red light cameras, and other permanent and pre-existing video sources. Success in this direction would potentia...
Identification of handwriting by using the genetic algorithm (GA) and support vector machine (SVM)
NASA Astrophysics Data System (ADS)
Zhang, Qigui; Deng, Kai
2016-12-01
As portable digital cameras and camera phones become increasingly popular, there is an equally pressing need to identify and store handwritten characters photographed at any time. In this paper, a genetic algorithm (GA) and a support vector machine (SVM) are used for the identification of handwriting. Compared with conventional parameter-optimization methods, this technique overcomes two defects: first, such methods easily become trapped in a local optimum; second, searching for the best parameters over a large range reduces the efficiency of classification and prediction. As the experimental results suggest, GA-SVM achieves a higher recognition rate.
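As a rough illustration of how a GA can search SVM-style hyperparameters such as C and gamma, here is a minimal sketch in which a synthetic fitness function stands in for cross-validation accuracy; the fitness surface, population size and mutation scale are assumptions for the demo, not the paper's settings:

```python
import random

# Toy genetic search over (log2 C, log2 gamma). The synthetic fitness
# peaks at C = 2^3, gamma = 2^-2; in the paper's setting, fitness would
# be the SVM's validation accuracy on handwriting features.

def fitness(c_exp, g_exp):
    return 1.0 / (1.0 + (c_exp - 3) ** 2 + (g_exp + 2) ** 2)

def ga_search(pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    # Initial population spread over typical SVM grid-search ranges.
    pop = [(rng.uniform(-5, 15), rng.uniform(-15, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            children.append((                   # midpoint crossover + mutation
                (a[0] + b[0]) / 2 + rng.gauss(0, 0.5),
                (a[1] + b[1]) / 2 + rng.gauss(0, 0.5)))
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

best = ga_search()
print(round(best[0], 1), round(best[1], 1))  # near 3 and -2
```

Unlike an exhaustive grid search, the population concentrates sampling around promising regions, which is the efficiency argument the abstract makes.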
Integrating motion-detection cameras and hair snags for wolverine identification
Audrey J. Magoun; Clinton D. Long; Michael K. Schwartz; Kristine L. Pilgrim; Richard E. Lowell; Patrick Valkenburg
2011-01-01
We developed an integrated system for photographing a wolverine's (Gulo gulo) ventral pattern while concurrently collecting hair for microsatellite DNA genotyping. Our objectives were to 1) test the system on a wild population of wolverines using an array of camera and hair-snag (C&H) stations in forested habitat where wolverines were known to occur, 2)...
Sensor noise camera identification: countering counter-forensics
NASA Astrophysics Data System (ADS)
Goljan, Miroslav; Fridrich, Jessica; Chen, Mo
2010-01-01
In camera identification using sensor noise, the camera that took a given image can be determined with high certainty by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim. We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called "triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of conditions. This test is then extended to the case when none of the images that the attacker used to create the fake fingerprint are available to the victim but the victim has at least two forged images to analyze. We demonstrate the test's performance experimentally and investigate its limitations. The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
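The fingerprint matching that this attack-and-defense work builds on can be sketched at toy scale: estimate a camera's fingerprint as the mean of per-image noise residuals and match a candidate image by normalized correlation. This illustrates the principle only, not the authors' full PRNU pipeline:

```python
# Minimal sketch of sensor-fingerprint matching: fingerprint = mean of
# noise residuals; decision = normalized correlation against a residual.

def estimate_fingerprint(residuals):
    """Average noise residuals (each a flat list of floats)."""
    n = len(residuals)
    return [sum(r[i] for r in residuals) / n for i in range(len(residuals[0]))]

def normalized_correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

# Toy residuals: camera A carries a fixed pattern [+,-,+,-] plus noise.
cam_a = [[1.1, -0.9, 1.0, -1.1], [0.9, -1.1, 1.0, -0.9]]
fp = estimate_fingerprint(cam_a)
same = normalized_correlation(fp, [1.0, -1.0, 1.0, -1.0])
other = normalized_correlation(fp, [-1.0, 1.0, -1.0, 1.0])
print(same > 0.9 and other < -0.9)  # prints True
```

The attack in the paper exploits exactly this averaging step: anyone with enough of camera A's images can compute `fp` themselves and add it to a foreign image, which is why the triangle test is needed.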
The Raptor Real-Time Processing Architecture
NASA Astrophysics Data System (ADS)
Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.
The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
Raptor -- Mining the Sky in Real Time
NASA Astrophysics Data System (ADS)
Galassi, M.; Borozdin, K.; Casperson, D.; McGowan, K.; Starr, D.; White, R.; Wozniak, P.; Wren, J.
2004-06-01
The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback...) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as "open source" software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
Minimalist identification system based on venous map for security applications
NASA Astrophysics Data System (ADS)
Jacinto G., Edwar; Martínez S., Fredy; Martínez S., Fernando
2015-07-01
This paper proposes a technique and an algorithm used to build a device for identifying people through the processing of a low resolution camera image. The infrared channel is the only information needed: sensing the blood's response at the proper wavelength yields a preliminary snapshot of the vascular map of the back of the hand. The software uses this information to extract the characteristics of the user in a limited area (region of interest, ROI), unique for each user, which is applicable to biometric access control devices. Recognition prototypes of this kind are normally expensive, but in this case (minimalist design) the biometric equipment uses only a low cost camera and an adapted matrix of IR emitters to construct an economical and versatile prototype, without neglecting the high level of effectiveness that characterizes this kind of identification method.
Frequency identification of vibration signals using video camera image data.
Jeng, Yih-Nen; Wu, Chia-Hung
2012-10-16
This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
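The non-physical modes induced by insufficient frame rates are a form of aliasing; under the standard sampling model (an assumption here, the abstract does not give the authors' exact formula), the apparent frequency of an under-sampled sinusoidal vibration can be predicted as:

```python
def aliased_frequency(f_true, frame_rate):
    """Apparent frequency (Hz) of a sinusoid of f_true Hz sampled at
    frame_rate fps: the true frequency folded to the nearest multiple
    of the sampling rate."""
    k = round(f_true / frame_rate)
    return abs(f_true - k * frame_rate)

# A 70 Hz vibration filmed at 60 fps shows up as a false 10 Hz mode;
# a 25 Hz vibration is below the Nyquist limit and appears unchanged.
print(aliased_frequency(70.0, 60.0))  # 10.0
print(aliased_frequency(25.0, 60.0))  # 25.0
```

A prediction like this lets spurious spectral peaks be flagged and excluded, which matches the paper's strategy of predicting and removing the false modes rather than relying on a faster camera.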
Frequency Identification of Vibration Signals Using Video Camera Image Data
Jeng, Yih-Nen; Wu, Chia-Hung
2012-01-01
This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Borowka, Karsten; Winkler, Antje
2010-01-01
The analysis of lateral chromatic aberration forms another ingredient for a well equipped toolbox of an image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method to analyse lateral chromatic aberration and presents a new approach to estimate lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models including 43 devices, the characteristic of lateral chromatic aberration is investigated on a large scale. The reported results point to general difficulties that have to be considered in real world investigations.
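Lateral chromatic aberration is commonly modeled as a radial scaling of one color channel relative to another about the optical center, and the scale factor has a closed-form least-squares estimate from matched feature radii. The one-parameter model and the data below are illustrative assumptions, not the paper's estimator:

```python
# Fit r_red = alpha * r_green (a line through the origin) by least squares:
# alpha = sum(g*r) / sum(g*g). An alpha != 1 indicates lateral chromatic
# aberration, whose magnitude can serve as a lens/camera feature.

def estimate_alpha(r_green, r_red):
    """Least-squares radial scale between matched feature radii."""
    num = sum(g * r for g, r in zip(r_green, r_red))
    den = sum(g * g for g in r_green)
    return num / den

r_g = [10.0, 50.0, 120.0, 300.0]        # radii in the green channel (pixels)
r_r = [x * 1.002 for x in r_g]          # simulated 0.2% channel magnification
print(round(estimate_alpha(r_g, r_r), 4))  # 1.002
```

In practice the optical center is itself unknown and displacements are sub-pixel, which is where the runtime cost the paper addresses comes from.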
Suresh, R
2017-08-01
Pertinent marks on fired cartridge cases, such as those from the firing pin, breech face, extractor, and ejector, are used for firearm identification. A non-standard semiautomatic pistol and four .22 rim fire cartridges (head stamp KF) were used for a known-source comparison study. Two test-fired cartridge cases were examined under a stereomicroscope. The characteristic marks were captured by a digital camera, and comparative analysis of the striation marks was done using different tools available in Microsoft Word (Windows 8) on a computer system. The similarities of the striation marks thus obtained are highly convincing for identifying the firearm. In this paper, an effort has been made to study and compare the striation marks of two fired cartridge cases using a stereomicroscope, a digital camera and a computer system. A comparison microscope was not used in this study. The method described in this study is simple, cost effective, portable for field work, and can be carried in a crime scene vehicle to facilitate immediate on-the-spot examination. The findings may be highly helpful to the forensic community, law enforcement agencies and students. Copyright © 2017 Elsevier B.V. All rights reserved.
Acquisition of gamma camera and physiological data by computer.
Hack, S N; Chang, M; Line, B R; Cooper, J A; Robeson, G H
1986-11-01
We have designed, implemented, and tested a new Research Data Acquisition System (RDAS) that permits a general purpose digital computer to acquire signals from both gamma camera sources and physiological signal sources concurrently. This system overcomes the limited multi-source, high speed data acquisition capabilities found in most clinically oriented nuclear medicine computers. The RDAS can simultaneously input signals from up to four gamma camera sources with a throughput of 200 kHz per source and from up to eight physiological signal sources with an aggregate throughput of 50 kHz. Rigorous testing has found the RDAS to exhibit acceptable linearity and timing characteristics. In addition, flood images obtained by this system were compared with flood images acquired by a commercial nuclear medicine computer system. National Electrical Manufacturers Association performance standards of the flood images were found to be comparable.
Engelen, Thijs; Winkel, Beatrice MF; Rietbergen, Daphne DD; KleinJan, Gijs H; Vidal-Sicart, Sergi; Olmos, Renato A Valdés; van den Berg, Nynke S; van Leeuwen, Fijs WB
2015-01-01
Accurate pre- and intraoperative identification of the sentinel node (SN) forms the basis of the SN biopsy procedure. Gamma tracing technologies such as a gamma probe (GP), a 2D mobile gamma camera (MGC) or 3D freehandSPECT (FHS) can be used to provide the surgeon with radioguidance to the SN(s). We reasoned that integrated use of these technologies results in the generation of a “hybrid” modality that combines the best that the individual radioguidance technologies have to offer. The sensitivity and resolvability of both 2D-MGC and 3D-FHS-MGC were studied in a phantom setup (at various source-detector depths and using varying injection site-to-SN distances), and in ten breast cancer patients scheduled for SN biopsy. Acquired 3D-FHS-MGC images were overlaid with the position of the phantom/patient. This augmented-reality overview image was then used for navigation to the hotspot/SN in virtual-reality using the GP. Obtained results were compared to conventional gamma camera lymphoscintigrams. Resolution of 3D-FHS-MGC allowed identification of the SNs at a minimum injection site (100 MBq)-to-node (1 MBq; 1%) distance of 20 mm, up to a source-detector depth of 36 mm in 2D-MGC and up to 24 mm in 3D-FHS-MGC. A clinically relevant dose of approximately 1 MBq was clearly detectable up to a depth of 60 mm in 2D-MGC and 48 mm in 3D-FHS-MGC. In all ten patients at least one SN was visualized on the lymphoscintigrams with a total of 12 SNs visualized. 3D-FHS-MGC identified 11 of 12 SNs and allowed navigation to all these visualized SNs; in one patient with two axillary SNs located closely to each other (11 mm), 3D-FHS-MGC was not able to distinguish the two SNs. In conclusion, high sensitivity detection of SNs at an injection site-to-node distance of 20 mm-and-up was possible using 3D-FHS-MGC. In patients, 3D-FHS-MGC showed highly reproducible images as compared to the conventional lymphoscintigrams. PMID:26069857
Discovery of the near-infrared counterpart to the luminous neutron-star low-mass X-ray binary GX 3+1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van den Berg, Maureen; Fridriksson, Joel K.; Homan, Jeroen
2014-10-01
Using the High Resolution Camera on board the Chandra X-ray Observatory, we have measured an accurate position for the bright persistent neutron star X-ray binary and atoll source GX 3+1. At a location that is consistent with this new position, we have discovered the near-infrared (NIR) counterpart to GX 3+1 in images taken with the PANIC and FourStar cameras on the Magellan Baade Telescope. The identification of this K_s = 15.8 ± 0.1 mag star as the counterpart is based on the presence of a Br γ emission line in an NIR spectrum taken with the Folded-port InfraRed Echelette spectrograph on the Baade Telescope. The absolute magnitude derived from the best available distance estimate to GX 3+1 indicates that the mass donor in the system is not a late-type giant. We find that the NIR light in GX 3+1 is likely dominated by the contribution from a heated outer accretion disk. This is similar to what has been found for the NIR flux from the brighter class of Z sources, but unlike the behavior of atolls fainter (L_X ≈ 10^36-10^37 erg s^-1) than GX 3+1, where optically thin synchrotron emission from a jet probably dominates the NIR flux.
Hinken, David; Schinke, Carsten; Herlufsen, Sandra; Schmidt, Arne; Bothe, Karsten; Brendel, Rolf
2011-03-01
We report in detail on the luminescence imaging setup developed over the last few years in our laboratory. In this setup, the luminescence emission of silicon solar cells or silicon wafers is analyzed quantitatively. Charge carriers are excited electrically (electroluminescence) using a power supply for carrier injection, or optically (photoluminescence) using a laser as the illumination source. The luminescence emission arising from the radiative recombination of the stimulated charge carriers is measured spatially resolved using a camera. We give details of the various components, including the cameras, the optical filters for electro- and photo-luminescence, the semiconductor laser and the four-quadrant power supply. We compare a silicon charge-coupled device (CCD) camera, a back-illuminated silicon CCD camera with electron-multiplier gain, and a complementary metal-oxide-semiconductor indium gallium arsenide camera. For the detection of the luminescence emission of silicon, we analyze the dominant noise sources along with the signal-to-noise ratio of all three cameras at different operating conditions.
Lessons Learned from Crime Caught on Camera
Bernasco, Wim
2018-01-01
Objectives: The widespread use of camera surveillance in public places offers criminologists the opportunity to systematically and unobtrusively observe crime, their main subject matter. The purpose of this essay is to inform the reader of current developments in research on crimes caught on camera. Methods: We address the importance of direct observation of behavior and review criminological studies that used observational methods, with and without cameras, including the ones published in this issue. We also discuss the uses of camera recordings in other social sciences and in biology. Results: We formulate six key insights that emerge from the literature and make recommendations for future research. Conclusions: Camera recordings of real-life crime are likely to become part of the criminological tool kit that will help us better understand the situational and interactional elements of crime. Like any source, it has limitations that are best addressed by triangulation with other sources. PMID:29472728
Intelligent person identification system using stereo camera-based height and stride estimation
NASA Astrophysics Data System (ADS)
Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo
2005-05-01
In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold value of the YCbCr color model. By carrying out correlation between the segmented face area and the right input image, the location coordinates of the target face are acquired, and these values are used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated vertical distance and the angles of the pan and tilt, the target's real position in world space can be acquired, and from it the height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
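The distance-then-height computation described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the disparity-based distance formula, and the head/feet tilt-angle geometry are all assumptions for the example.

```python
import math

def target_distance(baseline_m, focal_px, disparity_px):
    """Stereo triangulation: distance to the target from the pixel
    disparity between the left and right images (pinhole model)."""
    return baseline_m * focal_px / disparity_px

def target_height(camera_height_m, distance_m, tilt_head_rad, tilt_feet_rad):
    """Height of a person from the camera height, the horizontal
    distance, and the tilt angles to the head and the feet."""
    top = camera_height_m + distance_m * math.tan(tilt_head_rad)
    bottom = camera_height_m + distance_m * math.tan(tilt_feet_rad)
    return top - bottom
```

With a 10 cm baseline, a 500 px focal length and a 25 px disparity, the distance comes out to 2 m; the height then follows from the two tilt angles. Stride length could be estimated the same way from two foot positions in consecutive frames.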
Continuous Mapping of Tunnel Walls in a GNSS-Denied Environment
NASA Astrophysics Data System (ADS)
Chapman, Michael A.; Min, Cao; Zhang, Deijin
2016-06-01
The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining and other infrastructure) has grown and as the age and subsequent deterioration of these structures have introduced structural degradations and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued with various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting conditions, dust and poor surface textures for feature identification and extraction. A tunnel mapping system using alternate sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging, or visual odometry, to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, image contrast and lack of salient features. The sensors employed include forward-looking high-resolution digital frame cameras coupled with auxiliary light sources. In addition, a high-frequency lidar system and a thermal imager are included to offer three-dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls. Continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless precise tunnel maps.
ERIC Educational Resources Information Center
Beverly, Robert E.; Young, Thomas J.
Two hundred forty college undergraduates participated in a study of the effect of camera angle on an audience's perceptual judgments of source credibility, dominance, attraction, and homophily. The subjects were divided into four groups and each group was shown a videotape presentation in which sources had been videotaped according to one of four…
Evidence for a Population of High-Redshift Submillimeter Galaxies from Interferometric Imaging
NASA Astrophysics Data System (ADS)
Younger, Joshua D.; Fazio, Giovanni G.; Huang, Jia-Sheng; Yun, Min S.; Wilson, Grant W.; Ashby, Matthew L. N.; Gurwell, Mark A.; Lai, Kamson; Peck, Alison B.; Petitpas, Glen R.; Wilner, David J.; Iono, Daisuke; Kohno, Kotaro; Kawabe, Ryohei; Hughes, David H.; Aretxaga, Itziar; Webb, Tracy; Martínez-Sansigre, Alejo; Kim, Sungeun; Scott, Kimberly S.; Austermann, Jason; Perera, Thushara; Lowenthal, James D.; Schinnerer, Eva; Smolčić, Vernesa
2007-12-01
We have used the Submillimeter Array to image a flux-limited sample of seven submillimeter galaxies, selected by the AzTEC camera on the JCMT at 1.1 mm, in the COSMOS field at 890 μm with ~2" resolution. All of the sources (two radio-bright and five radio-dim) are detected as single point sources at high significance (>6 σ), with positions accurate to ~0.2" that enable counterpart identification at other wavelengths observed with similarly high angular resolution. All seven have IRAC counterparts, but only two have secure counterparts in deep HST ACS imaging. As compared to the two radio-bright sources in the sample, and those in previous studies, the five radio-dim sources in the sample (1) have systematically higher submillimeter-to-radio flux ratios, (2) have lower IRAC 3.6-8.0 μm fluxes, and (3) are not detected at 24 μm. These properties, combined with size constraints at 890 μm (θ ≲ 1.2"), suggest that the radio-dim submillimeter galaxies represent a population of very dusty starbursts, with physical scales similar to local ultraluminous infrared galaxies, with an average redshift higher than radio-bright sources.
Development of compact Compton camera for 3D image reconstruction of radioactive contamination
NASA Astrophysics Data System (ADS)
Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.
2017-11-01
The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images of two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.
Development of an all-in-one gamma camera/CCD system for safeguard verification
NASA Astrophysics Data System (ADS)
Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo
2014-12-01
For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array of CsI(Na) pixelated scintillation crystals with a pixel size of 2 × 2 × 6 mm³, and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was performed using a Co-57 point source located 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope
NASA Technical Reports Server (NTRS)
Zissa, D. E.
1984-01-01
Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per square arc-second extended source. A 23rd visual magnitude per square arc-second background was assumed. The models predicted signal-to-noise ratios of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours of integration.
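The integration-time estimates above follow the standard CCD photon-counting noise model: signal over the quadrature sum of the shot noise from signal, background and dark current plus the read noise. A minimal sketch, with all rates purely illustrative (the paper's actual throughput, background and read-noise values are not reproduced here):

```python
import math

def ccd_snr(signal_rate, background_rate, dark_rate, read_noise, t, n_pix=1):
    """CCD signal-to-noise ratio after integration time t (seconds).
    Rates are in electrons per second (per pixel for background/dark);
    read_noise is in electrons RMS per pixel. Illustrative model only."""
    S = signal_rate * t
    noise = math.sqrt(S + (background_rate + dark_rate) * t * n_pix
                      + n_pix * read_noise ** 2)
    return S / noise
```

In the shot-noise-limited regime the SNR grows as the square root of the integration time, which is why reaching SNR ≈ 10 on the fainter extended source takes several times longer than on the point source.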
Remote sensing technologies are a class of instrument and sensor systems that include laser imageries, imaging spectrometers, and visible to thermal infrared cameras. These systems have been successfully used for gas phase chemical compound identification in a variety of field e...
Identification and Quantification Soil Redoximorphic Features by Digital Image Processing
USDA-ARS?s Scientific Manuscript database
Soil redoximorphic features (SRFs) have provided scientists and land managers with insight into relative soil moisture for approximately 60 years. The overall objective of this study was to develop a new method of SRF identification and quantification from soil cores using a digital camera and imag...
NASA Astrophysics Data System (ADS)
Wolszczak, Piotr; Łygas, Krystian; Litak, Grzegorz
2018-07-01
This study investigates dynamic responses of a nonlinear vibration energy harvester. The nonlinear mechanical resonator consists of a flexible beam moving like an inverted pendulum between amplitude limiters. It is coupled with a piezoelectric converter, and excited kinematically. Consequently, the mechanical energy input is converted into the electrical power output on the loading resistor included in an electric circuit attached to the piezoelectric electrodes. The curvature of beam mode shapes as well as deflection of the whole beam are examined using a high speed camera. The visual identification results are compared with the voltage output generated by the piezoelectric element for corresponding frequency sweeps and analyzed by the Hilbert transform.
The Endockscope Using Next Generation Smartphones: "A Global Opportunity".
Tse, Christina; Patel, Roshan M; Yoon, Renai; Okhunov, Zhamshid; Landman, Jaime; Clayman, Ralph V
2018-06-02
The Endockscope combines a smartphone, a battery-powered flashlight, and a fiberoptic cystoscope, allowing for mobile videocystoscopy. We compared conventional videocystoscopy to the Endockscope paired with next-generation smartphones in an ex-vivo porcine bladder model to evaluate its image quality. The Endockscope consists of a three-dimensional (3D) printed attachment that connects a smartphone to a flexible fiberoptic cystoscope, plus a 1000-lumen light-emitting diode (LED) cordless light source. Video recordings of porcine cystoscopy with a fiberoptic flexible cystoscope (Storz) were captured for each mobile device (iPhone 6, iPhone 6S, iPhone 7, Samsung S8, and Google Pixel) and for the high-definition H3-Z versatile camera (HD) set-up with both the LED light source and the xenon light (XL) source. Eleven faculty urologists, blinded to the modality used, evaluated each video for image quality/resolution, brightness, color quality, sharpness, overall quality, and acceptability for diagnostic use. When comparing the Endockscope coupled to a Galaxy S8, iPhone 7, or iPhone 6S with the LED portable light source to the HD camera with XL, there were no statistically significant differences in any metric. 82% and 55% of evaluators considered the iPhone 7 + LED light source and the iPhone 6S + LED light source, respectively, appropriate for diagnostic purposes, as compared to 100% who considered the HD camera with XL appropriate. The iPhone 6 and Google Pixel coupled with the LED source were both inferior to the HD camera with XL in all metrics. The Endockscope system with an LED light source coupled with either an iPhone 7 or Samsung S8 (total cost: $750) is comparable to conventional videocystoscopy with a standard camera and XL light source (total cost: $45,000).
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in the thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step, while the second step processes patches in an image. Results show that the proposed method gives satisfactory results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
Directional Unfolded Source Term (DUST) for Compton Cameras.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean
2018-03-01
A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.
A mobile light source for carbon/nitrogen cameras
NASA Astrophysics Data System (ADS)
Trower, W. P.; Karev, A. I.; Melekhin, V. N.; Shvedunov, V. I.; Sobenin, N. P.
1995-05-01
The pulsed light source for carbon/nitrogen cameras developed to image concealed narcotics/explosives is described. This race-track microtron will produce 40 mA pulses of 70 MeV electrons, have minimal size and weight, and maximal ruggedness and reliability, so that it can be transported on a truck.
United States Homeland Security and National Biometric Identification
2002-04-09
security number. Biometrics is the use of unique individual traits such as fingerprints, iris patterns, voice recognition, and facial recognition to ... technology to control access onto their military bases using a Defense Manpower Management Command-developed software application. ... Facial recognition systems ... installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are
Observation of Planetary Motion Using a Digital Camera
ERIC Educational Resources Information Center
Meyn, Jan-Peter
2008-01-01
A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8[superscript m] apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…
"Stereo Compton cameras" for the 3-D localization of radioisotopes
NASA Astrophysics Data System (ADS)
Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.
2014-11-01
The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there is a particularly urgent need to develop "gamma cameras" that can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of two MPPC arrays located at both ends of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that detection efficiency reaches up to 0.54%, or more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays using two Compton cameras, thus enabling the 3-D positional measurement of radioactive isotopes for the first time. From simulation data for one point source, we verified that the source position and the distance to it could typically be determined to within 2 meters' accuracy, and from simulation data for two point sources we also confirmed that the two sources are clearly separated by event selection.
WPSS: watching people security services
NASA Astrophysics Data System (ADS)
Bouma, Henri; Baan, Jan; Borsboom, Sander; van Zon, Kasper; Luo, Xinghan; Loke, Ben; Stoeller, Bram; van Kuilenburg, Hans; Dijk, Judith
2013-10-01
To improve security, the number of surveillance cameras is rapidly increasing. However, the number of human operators remains limited and only a selection of the video streams are observed. Intelligent software services can help to find people quickly, evaluate their behavior and show the most relevant and deviant patterns. We present a software platform that contributes to the retrieval and observation of humans and to the analysis of their behavior. The platform consists of mono- and stereo-camera tracking, re-identification, behavioral feature computation, track analysis, behavior interpretation and visualization. This system is demonstrated in a busy shopping mall with multiple cameras and different lighting conditions.
Decomposed Photo Response Non-Uniformity for Digital Forensic Analysis
NASA Astrophysics Data System (ADS)
Li, Yue; Li, Chang-Tsun
The last few years have seen the application of Photo Response Non-Uniformity noise (PRNU), a unique stochastic fingerprint of image sensors, to various types of digital forensic investigations such as source device identification and integrity verification. In this work we proposed a new way of extracting the PRNU noise pattern, called Decomposed PRNU (DPRNU), by exploiting the difference between the physical and artificial color components of photos taken by digital cameras that use a Color Filter Array for interpolating artificial components from physical ones. Experimental results presented in this work have shown the superiority of the proposed DPRNU over the commonly used version. We also proposed a new performance metric, Corrected Positive Rate (CPR), to evaluate the performance of the common PRNU and the proposed DPRNU.
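The generic PRNU workflow that DPRNU builds on (denoise each image, keep the residual, average residuals into a sensor fingerprint, then correlate a query residual against the fingerprint) can be sketched as below. This is a simplified stand-in, not the DPRNU method itself: a mean filter replaces the wavelet denoiser used in the PRNU literature, and the array sizes are arbitrary.

```python
import numpy as np

def noise_residual(img, k=3):
    """Residual after a crude k-by-k mean-filter denoise (a stand-in for
    the wavelet denoising typically used for PRNU extraction)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smooth /= k * k
    return img - smooth

def ncc(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In use, the fingerprint is the average of residuals from many images taken by the same camera; a query image is attributed to that camera when the correlation between its residual and the fingerprint exceeds a decision threshold.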
General theory of remote gaze estimation using the pupil center and corneal reflections.
Guestrin, Elias Daniel; Eizenman, Moshe
2006-06-01
This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.
Hanada, Takashi; Katsuta, Shoichi; Yorozu, Atsunori; Maruyama, Koichi
2009-01-01
When using an HDR remote afterloading brachytherapy unit, results of treatment can be greatly influenced by both the source position and the treatment time. The purpose of this study is to obtain information on the source of the HDR remote afterloading unit, such as its position and time structure, with the use of a simple system consisting of a plastic scintillator block and a charge-coupled device (CCD) camera. The CCD camera was used for recording images of scintillation luminescence at a fixed rate of 30 frames per second in real time. The source position and time structure were obtained by analyzing the recorded images. For a preset source-step interval of 5 mm, the measured value of the source position was 5.0 ± 1.0 mm, with a pixel resolution of 0.07 mm in the recorded images. For a preset transit time of 30 s, the measured value was 30.0 ± 0.6 s, when the time resolution of the CCD camera was 1/30 s. This system enabled us to obtain the source dwell time and movement time. Therefore, parameters such as the 192Ir source position, transit time, dwell time, and movement time at each dwell position can be determined quantitatively using this plastic scintillator-CCD camera system. PACS number: 87.53.Jw
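The dwell-time/movement-time bookkeeping described above amounts to classifying successive camera frames by whether the imaged source position changed, at the camera's 1/30 s time resolution. A hedged sketch; the function name, the per-frame position trace, and the motion tolerance are all assumed for illustration:

```python
def dwell_and_transit_times(positions_mm, fps=30.0, move_tol_mm=0.1):
    """Split a per-frame source-position trace into dwell time (source
    stationary between frames) and movement time, each a multiple of
    the frame period 1/fps. Illustrative, not the paper's analysis."""
    dwell = move = 0.0
    for prev, cur in zip(positions_mm, positions_mm[1:]):
        if abs(cur - prev) <= move_tol_mm:
            dwell += 1.0 / fps
        else:
            move += 1.0 / fps
    return dwell, move
```

For example, a trace that sits at one dwell position for three frames, steps 5 mm, and sits for one more frame yields 0.1 s of dwell time and one frame period of movement time.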
Measuring SO2 ship emissions with an ultraviolet imaging camera
NASA Astrophysics Data System (ADS)
Prata, A. J.
2014-05-01
Over the last few years fast-sampling ultraviolet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where measurements of SO2 emissions from cruise ships were made, and at the port of Rotterdam, Netherlands, measuring emissions from more than 10 different container and cargo ships. In all cases SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) from a single camera. Despite the ease of use and ability to determine SO2 emission rates from the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that currently the technology needs further development to serve as a method to monitor ship emissions for regulatory purposes. A dual-camera system or a single, dual-filter camera is required in order to properly correct for the effects of particulates in ship plumes.
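The emission-rate retrieval described above is, in essence, an integral of the SO2 column density across a transect through the plume, multiplied by the plume speed. A minimal sketch under assumed units (column densities in g m^-2, a uniform pixel footprint, and an independently known plume speed); this is the generic UV-camera formula, not the paper's specific retrieval code:

```python
def emission_rate(column_densities_g_m2, pixel_width_m, plume_speed_m_s):
    """SO2 mass emission rate (kg/s) from a column-density profile
    sampled across the plume: integrate along the transect, then
    multiply by the plume advection speed."""
    integrated_g_per_m = sum(c * pixel_width_m for c in column_densities_g_m2)
    return integrated_g_per_m * plume_speed_m_s / 1000.0  # g/s -> kg/s
```

With a profile of [1, 2, 1] g m^-2 over 10 m pixels and a 5 m/s plume speed, the rate is 0.2 kg/s, which sits at the upper end of the ship-emission range quoted in the abstract.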
Gabrieli, Francesca; Dooley, Kathryn A; Zeibel, Jason G; Howe, James D; Delaney, John K
2018-06-18
Microscale mid-infrared (mid-IR) imaging spectroscopy is used for the mapping of chemical functional groups. The extension to macroscale imaging requires that either the mid-IR radiation reflected off or that emitted by the object be greater than the radiation from the thermal background. Reflectance spectra can be obtained using an active IR source to increase the amount of radiation reflected off the object, but rapid heating of greater than 4 °C can occur, which is a problem for paintings. Rather than using an active source, by placing a highly reflective tube between the painting and camera and introducing a low temperature source, thermal radiation from the room can be reduced, allowing the IR radiation emitted by the painting to dominate. Thus, emissivity spectra of the object can be recovered. Using this technique, mid-IR emissivity image cubes of paintings were collected at high collection rates with a low-noise, line-scanning imaging spectrometer, which allowed pigments and paint binders to be identified and mapped. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Do, Trong Hop; Yoo, Myungsik
2018-01-01
This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.
Imaging fall Chinook salmon redds in the Columbia River with a dual-frequency identification sonar
Tiffan, K.F.; Rondorf, D.W.; Skalicky, J.J.
2004-01-01
We tested the efficacy of a dual-frequency identification sonar (DIDSON) for imaging and enumeration of fall Chinook salmon Oncorhynchus tshawytscha redds in a spawning area below Bonneville Dam on the Columbia River. The DIDSON uses sound to form near-video-quality images and has the advantages of imaging in zero-visibility water and possessing a greater detection range and field of view than underwater video cameras. We suspected that the large size and distinct morphology of a fall Chinook salmon redd would facilitate acoustic imaging if the DIDSON was towed near the river bottom so as to cast an acoustic shadow from the tailspill over the redd pocket. We tested this idea by observing 22 different redds with an underwater video camera, spatially referencing their locations, and then navigating to them while imaging them with the DIDSON. All 22 redds were successfully imaged with the DIDSON. We subsequently conducted redd searches along transects to compare the number of redds imaged by the DIDSON with the number observed using an underwater video camera. We counted 117 redds with the DIDSON and 81 redds with the underwater video camera. Only one of the redds observed with the underwater video camera was not also documented by the DIDSON. In spite of the DIDSON's high cost, it may serve as a useful tool for enumerating fall Chinook salmon redds in conditions that are not conducive to underwater videography.
The Role of Counterintelligence in the European Theater of Operations During World War II
1993-06-04
revolvers, Minox cameras, portable typewriters, 48 fingerprint cameras, latent fingerprint kits, handcuffs, and listening and recording devices.13 This...Comments from the detachments indicated that the fingerprint equipment, and listening and recording devices were of little use. However, the revolvers...40-49. 138 Moulage* 2 Fingerprinting 2 Latent Fingerprinting 3 System of Identification 1 Codes and Ciphers 1 Handwriting Comparison 2 Documentary
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG-compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components from JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVMs) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
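As a hedged illustration of the feature construction (not the authors' exact pipeline, which applies four directional Markov processes to the JPEG 2-D arrays of both Y and Cb), a single horizontal transition-probability matrix with an assumed threshold T = 4 might be sketched as:

```python
import numpy as np

def transition_matrix(arr, T=4):
    """One-directional Markov transition-probability matrix of a
    horizontal difference array, with differences clipped to [-T, T]."""
    d = arr[:, :-1].astype(int) - arr[:, 1:].astype(int)  # horizontal difference array
    d = np.clip(d, -T, T) + T                             # shift values into 0..2T
    m = np.zeros((2 * T + 1, 2 * T + 1))
    for a, b in zip(d[:, :-1].ravel(), d[:, 1:].ravel()):
        m[a, b] += 1                                      # count transitions a -> b
    rows = m.sum(axis=1, keepdims=True)
    return np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)

# All (2T+1)^2 = 81 matrix elements become one feature vector for the SVM.
features = transition_matrix(np.eye(8) * 255).ravel()
```

In the full method, one such matrix per direction and per component would be concatenated before training the multi-class SVM.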
Effects of camera location on the reconstruction of 3D flare trajectory with two cameras
NASA Astrophysics Data System (ADS)
Özsaraç, Seçkin; Yeşilkaya, Muhammed
2015-05-01
Flares are used as valuable electronic warfare assets for the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point, which is seen by multiple cameras, is a common problem. Camera placement, camera calibration, corresponding pixel determination between the images of different cameras and also the triangulation algorithm affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on the flare trajectory estimation performance by simulations. Firstly, the 3D trajectory of a flare and also of the aircraft, which dispenses the flare, are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras are tracking the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and also the 3D position of the flare, image plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we have used two sources of error. One is used to model the uncertainties in the determination of the camera view vectors, i.e., the orientations of the cameras are measured with noise. The second noise source is used to model the imperfections of the corresponding pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated using the corresponding pixel indices, view vector and also the FOV of the cameras by triangulation. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
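The triangulation step can be sketched as follows; this least-squares (midpoint) formulation is one common choice, not necessarily the paper's exact algorithm, and the camera positions and view vectors below are illustrative assumptions:

```python
import numpy as np

def triangulate(c1, v1, c2, v2):
    """Least-squares triangulation: the point minimizing the summed squared
    distance to two rays (camera position c, unit view vector v)."""
    def proj(v):
        return np.eye(3) - np.outer(v, v)   # projector onto the plane normal to v
    A = proj(v1) + proj(v2)
    b = proj(v1) @ c1 + proj(v2) @ c2
    return np.linalg.solve(A, b)            # singular only if the rays are parallel

c1, c2 = np.zeros(3), np.array([2.0, 0.0, 0.0])
v1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # ray from c1 through (1, 1, 0)
v2 = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2)  # ray from c2 through (1, 1, 0)
flare = triangulate(c1, v1, c2, v2)            # -> approximately [1, 1, 0]
```

With noisy view vectors, as modeled in the paper, the returned point is the one closest to both perturbed rays, which is why camera placement (ray intersection geometry) drives the estimation error.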
TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope
NASA Astrophysics Data System (ADS)
Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.
Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle helium cryo-cooled imaging camera equipped with a Raytheon 512×512 pixels InSb Aladdin III Quadrant focal plane array (FPA) having sensitivity to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on the calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera has been released to the Indian and Belgian astronomical community for science observations since 2017 May. The camera offers a field-of-view (FoV) of ~86.5″ × 86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond, with the best seeing of ~0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands comparable to other 4 m class telescopes available world-wide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λcen ~ 3.59 μm). Hence, the camera could be a good complementary instrument to observe the bright nbL-band sources that are saturated in the Spitzer-Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.
Surveyor 3: Bacterium isolated from lunar retrieved television camera
NASA Technical Reports Server (NTRS)
Mitchell, F. J.; Ellis, W. L.
1972-01-01
Microbial analysis was the first of several studies of the retrieved camera and was performed immediately after the camera was opened. The emphasis of the analysis was placed upon isolating microorganisms that could be potentially pathogenic for man. Every step in the retrieval of the Surveyor 3 television camera was analyzed for possible contamination sources, including camera contact by the astronauts, ingassing in the lunar and command modules during the mission or at splashdown, and handling during quarantine, disassembly, and analysis at the Lunar Receiving Laboratory.
Babcock, Hazen P
2018-01-29
This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.
Maui Space Surveillance System Satellite Categorization Laboratory
NASA Astrophysics Data System (ADS)
Deiotte, R.; Guyote, M.; Kelecy, T.; Hall, D.; Africano, J.; Kervin, P.
The MSSS satellite categorization laboratory is a fusion of robotics and digital imaging processes that aims to decompose satellite photometric characteristics and behavior in a controlled setting. By combining a robot, light source and camera to acquire non-resolved images of a model satellite, detailed photometric analyses can be performed to extract relevant information about shape features, elemental makeup, and ultimately attitude and function. Using the laboratory setting, a detailed analysis can be done on any type of material or design and the results cataloged in a database that will facilitate object identification by "curve-fitting" individual elements in the basis set to observational data that might otherwise be unidentifiable. In the current laboratory, an ST-Robotics five-degree-of-freedom robotic arm, a collimated light source and a non-focused Apogee camera have all been integrated into a MATLAB-based software package that facilitates automatic data acquisition and analysis. Efforts to date have been aimed at construction of the lab as well as validation and verification of simple geometric objects. Simple tests on spheres, cubes and simple satellites show promising results that could lead to a much better understanding of non-resolvable space object characteristics. This paper presents a description of the laboratory configuration and validation test results with emphasis on the non-resolved photometric characteristics for a variety of object shapes, spin dynamics and orientations. The future vision, utility and benefits of the laboratory to the SSA community as a whole are also discussed.
Selections from 2017: Hubble Survey Explores Distant Galaxies
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-12-01
Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. CANDELS Multi-Wavelength Catalogs: Source Identification and Photometry in the CANDELS COSMOS Survey Field. Published January 2017. Main takeaway: A publication led by Hooshang Nayyeri (UC Irvine and UC Riverside) early this year details a catalog of sources built using the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS), a survey carried out by cameras on board the Hubble Space Telescope. The catalog lists the properties of 38,000 distant galaxies visible within the COSMOS field, a two-square-degree equatorial field explored in depth to answer cosmological questions. Why it's interesting: [Figure: illustration showing the three-dimensional map of the dark matter distribution in the COSMOS field. Adapted from NASA/ESA/R. Massey (California Institute of Technology).] The depth and resolution of the CANDELS observations are useful for addressing several major science goals, including the following: studying the most distant objects in the universe at the epoch of reionization in the cosmic dawn; understanding galaxy formation and evolution during the peak epoch of star formation in the cosmic high noon; and studying star formation from deep ultraviolet observations and cosmology from supernova observations. Why CANDELS is a major endeavor: CANDELS is the largest multi-cycle treasury program ever approved on the Hubble Space Telescope, using over 900 orbits between 2010 and 2013 with two cameras on board the spacecraft to study galaxy formation and evolution throughout cosmic time. The CANDELS images are all publicly available, and the new catalog represents an enormous source of information about distant objects in our universe. Citation: H. Nayyeri et al. 2017 ApJS 228 7. doi:10.3847/1538-4365/228/1/7
An optical watermarking solution for color personal identification pictures
NASA Astrophysics Data System (ADS)
Tan, Yi-zhou; Liu, Hai-bo; Huang, Shui-hua; Sheng, Ben-jian; Pan, Zhong-ming
2009-11-01
This paper presents a new approach for embedding authentication information into images on printed materials based on an optical projection technique. Our experimental setup consists of two parts: one is a common camera, and the other is an LCD projector, which projects a pattern onto a person's body (especially the face). The pattern, generated by a computer, acts as an illumination light source with a sinusoidal distribution, and it is also the watermark signal. For a color image, the watermark is embedded into the blue channel. While we take pictures (256×256, 512×512, and 567×390 pixels, respectively), an invisible mark is embedded directly into the magnitude coefficients of the Discrete Fourier Transform (DFT) at the moment of exposure. Both optical and digital correlation are suitable for detecting this type of watermark. The decoded watermark is a set of concentric circles or sectors in the DFT domain (middle-frequency region), which is robust to photographing, printing and scanning. Unlawful actors may modify or replace the original photograph to make fake passports (driver's licenses and so on). Experiments show that it is difficult to forge certificates in which a watermark has been embedded by our projector-camera combination, which is based on an analogue watermarking method rather than a classical digital one.
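The paper embeds the mark optically at exposure time; as a purely digital sketch of the underlying idea (a ring of boosted DFT magnitude in the blue channel, with assumed radius and strength values), one might write:

```python
import numpy as np

def embed_ring(blue, radius=60.0, width=2.0, strength=40.0):
    """Boost the DFT magnitude of a blue channel on a mid-frequency annulus.
    Detection would correlate the magnitude spectrum with the same ring,
    a pattern that survives rotation of the printed photo."""
    F = np.fft.fftshift(np.fft.fft2(blue.astype(float)))
    h, w = blue.shape
    y, x = np.ogrid[:h, :w]
    ring = np.abs(np.hypot(y - h / 2, x - w / 2) - radius) < width
    F[ring] += strength * np.exp(1j * np.angle(F[ring]))  # raise magnitude, keep phase
    marked = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
    return np.clip(marked, 0, 255)
```

Because only mid-frequency magnitudes are touched, the mark stays invisible in the spatial domain yet detectable after a print-and-scan cycle, which mirrors the robustness claim above.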
Intensity distribution of the x ray source for the AXAF VETA-I mirror test
NASA Technical Reports Server (NTRS)
Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann
1992-01-01
The X-ray generator for the AXAF VETA-I mirror test is an electron-impact X-ray source with various anode materials. The source sizes of different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30-micrometer-diameter pinhole for imaging the source and a Microchannel Plate Imaging Detector with 25 micrometers FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measuring the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured with different operating parameters. During the VETA-I test, microscope pictures were taken of each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in the pictures, and they agree with the results from the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size 528 meters away, the angular size seen by VETA is less than 0.17 arcsec, which is small compared to the on-ground VETA angular resolution (0.5 arcsec required, 0.22 arcsec measured). Even so, the results show that the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on-ground and predicted in-orbit VETA Point Response Function.
Grubsky, Victor; Romanoov, Volodymyr; Shoemaker, Keith; Patton, Edward Matthew; Jannson, Tomasz
2016-02-02
A Compton tomography system comprises an x-ray source configured to produce a planar x-ray beam. The beam irradiates a slice of an object to be imaged, producing Compton-scattered x-rays. The Compton-scattered x-rays are imaged by an x-ray camera. Translation of the object with respect to the source and camera or vice versa allows three-dimensional object imaging.
Context-based handover of persons in crowd and riot scenarios
NASA Astrophysics Data System (ADS)
Metzler, Jürgen
2015-02-01
In order to control riots in crowds, it is helpful to get ringleaders under control and pull them out of the crowd once one has become an offender. A great support in achieving these tasks is the capability of observing the crowd and the ringleaders automatically using cameras. It also allows a better conservation of evidence in riot control. A ringleader who has become an offender should be tracked across and recognized by several cameras, regardless of whether overlapping camera fields of view exist or not. We propose a context-based approach for the handover of persons between different camera fields of view. This approach can be applied for overlapping as well as for non-overlapping fields of view, so that a fast and accurate identification of individual persons in camera networks is feasible. Within the scope of this paper, the approach is applied to a handover of persons between single images without having any temporal information. It is particularly developed for semiautomatic video editing and a handover of persons between cameras in order to improve conservation of evidence. The approach has been developed on a dataset collected during a Crowd and Riot Control (CRC) training of the German armed forces. It consists of three different levels of escalation. First, the crowd started with a peaceful demonstration. Later, there were violent protests, and third, the riot escalated and offenders bumped into the chain of guards. One result of the work is a reliable context-based method for person re-identification between single images of different camera fields of view in crowd and riot scenarios. Furthermore, a qualitative assessment shows that the use of contextual information can additionally support this task. It can decrease the time needed for handover and the number of confusions, which supports the conservation of evidence in crowd and riot scenarios.
Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur
2018-01-01
Open source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and already have a significant impact. Microscopy, with its focus on visually appealing contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and for organizing interactive sessions for students or faculty of other departments. The cost of the original equipment manufacturer (OEM) camera systems in bringing this useful technology to all locations is a limiting factor. To avoid this, we have used low-cost technologies like the Raspberry Pi, Mobile High-Definition Link, and 3D printing for adapters to create portable camera systems. Adopting these open source technologies enabled any binocular or trinocular microscope to be connected to a projector or HD television at a fraction of the cost of the OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, have also provided the added advantage of portability, thus providing the much-needed flexibility at various teaching locations.
Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.
Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D
2016-12-01
Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.
Visual identification system for homeland security and law enforcement support
NASA Astrophysics Data System (ADS)
Samuel, Todd J.; Edwards, Don; Knopf, Michael
2005-05-01
This paper describes the basic configuration for a visual identification system (VIS) for Homeland Security and law enforcement support. Security and law enforcement systems with an integrated VIS will accurately and rapidly provide identification of vehicles or containers that have entered, exited or passed through a specific monitoring location. The VIS system stores all images and makes them available for recall for approximately one week. Images of alarming vehicles will be archived indefinitely as part of the alarming vehicle's or cargo container's record. Depending on user needs, the digital imaging information will be provided electronically to the individual inspectors, supervisors, and/or control center at the customer's office. The key components of the VIS are the high-resolution cameras that capture images of vehicles, lights, presence sensors, image cataloging software, and image recognition software. In addition to the cameras, the physical integration and network communications of the VIS components with the balance of the security system and client must be ensured.
Optical touch sensing: practical bounds for design and performance
NASA Astrophysics Data System (ADS)
Bläßle, Alexander; Janbek, Bebart; Liu, Lifeng; Nakamura, Kanna; Nolan, Kimberly; Paraschiv, Victor
2013-02-01
Touch-sensitive screens are used in many applications ranging in size from smartphones and tablets to display walls and collaborative surfaces. In this study, we consider optical touch sensing, a technology best suited for large-scale touch surfaces. Optical touch sensing utilizes cameras and light sources placed along the edge of the display. Within this framework, we first find a number of cameras sufficient for identifying a convex polygon touching the screen, using a continuous light source on the boundary of a circular domain. We then find the number of cameras necessary to distinguish between two circular objects in a circular or rectangular domain. Finally, we use MATLAB to simulate the polygonal mesh formed from distributing cameras and light sources on a circular domain. Using this, we compute the number of polygons in the mesh and the maximum polygon area to give us information about the accuracy of the configuration. We close with a summary and conclusions, and pointers to possible future research directions.
Portable multispectral fluorescence imaging system for food safety applications
NASA Astrophysics Data System (ADS)
Lefcourt, Alan M.; Kim, Moon S.; Chen, Yud-Ren
2004-03-01
Fluorescence can be a sensitive method for detecting food contaminants. Of particular interest is detection of fecal contamination, as feces are the source of many pathogenic organisms. Feces generally contain chlorophyll a and related compounds due to ingestion of plant materials, and these compounds can readily be detected using fluorescence techniques. Described is a fluorescence-imaging system consisting primarily of a UV light source, an intensified camera with a six-position filter wheel, and software for controlling the system and automatically analyzing the resulting images. To validate the system, orchard apples artificially contaminated with dairy feces were used in a "hands-on" public demonstration. The contamination sites were easily identified using automated edge detection and threshold detection algorithms. In addition, by applying feces to apples and then washing sets of apples at hourly intervals, it was determined that five hours was the minimum contact time that allowed identification of the contamination site after the apples were washed. There are many potential uses for this system, including studying the efficacy of apple washing systems.
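A minimal sketch of the threshold-detection step, assuming the fluorescence image is a 2-D intensity array and a hypothetical mean-plus-k-sigma rule (the system's actual algorithm and parameters are not specified here):

```python
import numpy as np

def contamination_mask(fluor, k=3.0):
    """Flag pixels whose fluorescence exceeds mean + k * std of the image.
    Connected regions of True pixels mark candidate contamination sites."""
    return fluor > fluor.mean() + k * fluor.std()
```

In practice such a mask would be combined with the edge-detection pass mentioned above to outline each contamination site.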
MEANS FOR VISUALIZING FLUID FLOW PATTERNS
Lynch, F.E.; Palmer, L.D.; Poppendick, H.F.; Winn, G.M.
1961-05-16
An apparatus is given for determining both the absolute and relative velocities of a phosphorescent fluid flowing through a transparent conduit. The apparatus includes a source for exciting a narrow transverse band of the fluid to phosphorescence, detecting means such as a camera located downstream from the exciting source to record the shape of the phosphorescent band as it passes, and a timer to measure the time elapsed between operation of the exciting source and operation of the camera.
Facial recognition trial: biometric identification of non-compliant subjects using CCTV
NASA Astrophysics Data System (ADS)
Best, Tim
2007-10-01
LogicaCMG were provided with an opportunity to deploy a facial recognition system in a realistic scenario. Twelve cameras were installed at an international airport covering all entrances to the immigration hall. The evaluation took place over several months with numerous adjustments to both the hardware (i.e., cameras, servers and capture cards) and software. The learning curve has been very steep, but a stage has now been reached where both LogicaCMG and the client are confident that, subject to the right environmental conditions (lighting and camera location), an effective system can be defined with a high probability of successful detection of the target individual, with minimal false alarms. To the best of our knowledge, a >90% detection rate of non-compliant subjects 'at range' has not been achieved anywhere else. This puts this location at the forefront of capability in this area. The results achieved demonstrate that, given optimised conditions, it is possible to achieve long-range biometric identification of a non-compliant subject with a high rate of success.
Streak camera imaging of single photons at telecom wavelength
NASA Astrophysics Data System (ADS)
Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine
2018-01-01
Streak cameras are powerful tools for the temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled lithium niobate waveguides. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.
Sensors for isolation of anti-cancer compounds found within marine invertebrates
NASA Astrophysics Data System (ADS)
Wiegand, Gordon; LaRue, Amanda
2015-05-01
Highly evolved bacteria living within immobile marine animals are being targeted as a source of antitumor pharmaceuticals. This paper describes two electro-optical sensor systems developed for identifying species of tunicates and the actinobacteria that live within them. The two stages of identification are 1) a benthic survey apparatus to locate species and 2) a laboratory-housed cell analysis platform used to classify their bacterial microbiome. Marine optics sampling: there are over 3,000 species of tunicates that thrive in diverse habitats. We use a system of cameras, GPS and a GPS/photo integration application on a PC laptop to compile a time/location stamp for each image taken during the dive survey. A shape-map of the x/y coordinates of the photos is stored for later identification and sampling. Flow cytometers/cell sorters housed at the Medical University of South Carolina and the University of Maryland have been modified to produce low-noise, high-signal waveforms used for bacteria analysis. We strive to describe the salient contrasts between these two fundamentally different sensor systems. Accents are placed on analog transducers and initial-step sensing systems and output.
Noise and sensitivity of x-ray framing cameras at Nike (abstract)
NASA Astrophysics Data System (ADS)
Pawley, C. J.; Deniz, A. V.; Lehecka, T.
1999-01-01
X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable us to capture images with a frame rate 1,000 times faster than using conventional methods such as CCD (charge-coupled device) cameras. The application of this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution for capturing the output voltage signal that carries grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We employed a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices for accelerating the calculation speed of identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
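The image-reconstruction step (cutting the digitized 1-D signal into 2-D frames) can be sketched as follows; the line and frame sizes here are illustrative assumptions, not the system's actual parameters:

```python
import numpy as np

SAMPLES_PER_LINE = 512   # assumed ADC samples per spectrally encoded line
LINES_PER_FRAME = 256    # assumed lines per reconstructed frame

def reconstruct_frames(stream):
    """Cut a continuous 1-D digitized STEAM signal into 2-D grayscale frames,
    discarding any trailing partial frame."""
    frame_len = SAMPLES_PER_LINE * LINES_PER_FRAME
    n = stream.size // frame_len
    return stream[: n * frame_len].reshape(n, LINES_PER_FRAME, SAMPLES_PER_LINE)
```

In the prototype this reshaping and the object-finding run on the FPGA at line rate; the GPU then classifies only the frames that contain a detected particle.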
ARNICA, the Arcetri Near-Infrared Camera
NASA Astrophysics Data System (ADS)
Lisi, F.; Baffa, C.; Bilotti, V.; Bonaccini, D.; del Vecchio, C.; Gennari, S.; Hunt, L. K.; Marcucci, G.; Stanga, R.
1996-04-01
ARNICA (ARcetri Near-Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 microns that the Arcetri Observatory has designed and built for the Infrared Telescope TIRGO located at Gornergrat, Switzerland. We describe the mechanical and optical design of the camera, and report on the astronomical performance of ARNICA as measured during the commissioning runs at the TIRGO (December, 1992 to December 1993), and an observing run at the William Herschel Telescope, Canary Islands (December, 1993). System performance is defined in terms of efficiency of the camera+telescope system and camera sensitivity for extended and point-like sources. (SECTION: Astronomical Instrumentation)
A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)
NASA Astrophysics Data System (ADS)
Wan, Chao; Yuan, Fuh-Gwo
2017-04-01
In this paper, a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (i.e., piezoelectric sensors or accelerometers) or non-contact sensors (i.e., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability including higher spatial resolution, remote sensing and low cost. In this study, a damage detection method based on a high-speed camera was proposed. The system setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. The cantilever beam, with an artificial crack, was excited, and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work will be discussed.
Person re-identification over camera networks using multi-task distance metric learning.
Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng
2014-08-01
Person re-identification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct re-identification tasks on different camera pairs overlooks the differences in camera settings. At the same time, manually labeling people in images from surveillance videos is very time-consuming; in most existing person re-identification data sets, only one image of a person is collected from each of only two cameras, so directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting on insufficiently labeled data. In this paper, we reformulate person re-identification in a camera network as a multi-task distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These metrics are different but related, and are learned jointly with a regularization term that alleviates over-fitting. Building on this formulation, we further present a novel multi-task maximally collapsing metric learning (MtMCML) model for person re-identification in a camera network. Experimental results demonstrate that formulating person re-identification over camera networks as a multi-task distance metric learning problem improves performance, and that the proposed MtMCML works substantially better than other current state-of-the-art person re-identification methods.
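The Mahalanobis metrics at the heart of the method above all share the same functional form; only the learned matrix differs per camera pair. A minimal sketch of the distance itself (the matrix M here is a placeholder for a learned positive semi-definite metric, not a trained model from the paper):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y).

    In the multi-task setting, each camera pair holds its own M,
    regularized jointly so the metrics stay related.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(d @ M @ d)

# With M = I the metric reduces to the squared Euclidean distance.
x, y = np.array([1.0, 2.0]), np.array([3.0, 5.0])
print(mahalanobis_sq(x, y, np.eye(2)))  # 13.0
```

A learned M re-weights and correlates feature dimensions, so that appearance differences caused by a specific camera pair (illumination, viewpoint) count less than identity-relevant differences.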
CAMERA: An integrated strategy for compound spectra extraction and annotation of LC/MS data sets
Kuhl, Carsten; Tautenhahn, Ralf; Böttcher, Christoph; Larson, Tony R.; Neumann, Steffen
2013-01-01
Liquid chromatography coupled to mass spectrometry is routinely used for metabolomics experiments. In contrast to the fairly routine and automated data acquisition steps, subsequent compound annotation and identification require extensive manual analysis and thus form a major bottleneck in data interpretation. Here we present CAMERA, a Bioconductor package integrating algorithms to extract compound spectra, annotate isotope and adduct peaks, and propose the accurate compound mass even in highly complex data. To evaluate the algorithms, we compared the annotation of CAMERA against a manually defined annotation for a mixture of known compounds spiked into a complex matrix at different concentrations. CAMERA successfully extracted accurate masses for 89.7% and 90.3% of the annotatable compounds in positive and negative ion mode, respectively. Furthermore, we present a novel annotation approach that combines spectral information of data acquired in opposite ion modes to further improve the annotation rate. We demonstrate the utility of CAMERA in two different, easily adoptable plant metabolomics experiments, where the application of CAMERA drastically reduced the amount of manual analysis. PMID:22111785
Darmanis, Spyridon; Toms, Andrew; Durman, Robert; Moore, Donna; Eyres, Keith
2007-07-01
To reduce the operating time in computer-assisted navigated total knee replacement (TKR) by improving communication between the infrared camera and the trackers placed on the patient. The innovation involves placing a routinely used laser pointer on top of the camera, so that the infrared camera focuses precisely on the trackers located on the knee being operated on. A prospective randomized study was performed involving 40 patients divided into two groups, A and B. Both groups underwent navigated TKR, but for group B patients a laser pointer was used to improve the targeting capabilities of the camera. Without the laser pointer, the camera had to be moved a mean of 9.2 times in order to identify the trackers. With the introduction of the laser pointer, this was reduced to 0.9 times. Accordingly, the additional mean time required without the laser pointer was 11.6 minutes. Time delays are a major problem in computer-assisted surgery, and our technical suggestion can contribute towards reducing the delays associated with this particular application.
Improving Photometric Calibration of Meteor Video Camera Systems.
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
2017-09-01
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
Improving Photometric Calibration of Meteor Video Camera Systems
NASA Technical Reports Server (NTRS)
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
2017-01-01
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
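The zero-point computation implied by the two records above reduces to averaging m_ref + 2.5 log10(flux) over reference stars with known (here, synthetic) magnitudes. A minimal sketch, with the median as an assumed robust estimator (the MEO pipeline details are not given in the abstract):

```python
import numpy as np

def zero_point(catalog_mags, instrumental_fluxes):
    """Photometric zero point from reference stars.

    For each star, m_cat = ZP - 2.5 log10(flux); the zero point is a
    robust average of m_cat + 2.5 log10(flux) over all reference stars.
    """
    zp = np.asarray(catalog_mags) + 2.5 * np.log10(np.asarray(instrumental_fluxes))
    return float(np.median(zp))

# Stars with known magnitudes and measured counts yield a consistent ZP;
# this synthetic frame was generated with ZP = 20.
mags = [8.0, 9.0, 10.0]
fluxes = [10 ** ((20.0 - m) / 2.5) for m in mags]
print(round(zero_point(mags, fluxes), 6))  # 20.0
```

With the zero point in hand, a meteor's magnitude follows directly from its measured flux as ZP - 2.5 log10(flux), which is where the quoted 0.05-0.10 mag zero-point accuracy enters the photometric error budget.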
Peterson, S W; Robertson, D; Polf, J
2011-01-01
In this work, we investigate the use of a three-stage Compton camera to measure secondary prompt gamma rays emitted from patients treated with proton beam radiotherapy. The purpose of this study was (1) to develop an optimal three-stage Compton camera specifically designed to measure prompt gamma rays emitted from tissue and (2) to determine the feasibility of using this optimized Compton camera design to measure and image prompt gamma rays emitted during proton beam irradiation. The three-stage Compton camera was modeled in Geant4 as three high-purity germanium detector stages arranged in parallel-plane geometry. Initially, an isotropic gamma source ranging from 0 to 15 MeV was used to determine the lateral width and thickness of the detector stages that provided the optimal detection efficiency. Then, the gamma source was replaced by a proton beam irradiating a tissue phantom to calculate the overall efficiency of the optimized camera for detecting emitted prompt gammas. The overall calculated efficiencies varied from ~10^-6 to 10^-3 prompt gammas detected per proton incident on the tissue phantom for several variations of the optimal camera design studied. Based on the overall efficiency results, we believe it feasible that a three-stage Compton camera could detect a sufficient number of prompt gammas to allow measurement and imaging of prompt gamma emission during proton radiotherapy. PMID:21048295
NASA Technical Reports Server (NTRS)
Franke, John M.; Rhodes, David B.; Jones, Stephen B.; Dismond, Harriet R.
1992-01-01
A technique for synchronizing a pulsed light source to charge-coupled device cameras is presented. The technique permits the use of pulsed light sources for continuous as well as stop-action flow visualization. The technique has eliminated the need to provide separate lighting systems at facilities requiring continuous and stop-action viewing or photography.
Game theoretic approach for cooperative feature extraction in camera networks
NASA Astrophysics Data System (ADS)
Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco
2016-07-01
Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.
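The Nash bargaining solution mentioned above can be illustrated with a toy two-camera model: each camera saves energy in proportion to the share of the overlapping view it does not have to analyse, and the bargaining outcome maximizes the product of the two savings over the disagreement point (each camera analysing the whole overlap alone). The utility model below is our illustrative assumption, not the paper's formulation:

```python
import numpy as np

def nash_split(c1, c2, grid=1001):
    """Nash bargaining split of a shared field of view.

    Camera i pays energy c_i per unit of overlap area it analyses; the
    disagreement point is both cameras analysing everything (cost c_i
    each). We grid-search the share s assigned to camera 1 that
    maximizes the product of the two energy savings.
    """
    best_s, best_p = 0.0, -1.0
    for s in np.linspace(0.0, 1.0, grid):
        gain1 = c1 - c1 * s          # energy camera 1 saves
        gain2 = c2 - c2 * (1 - s)    # energy camera 2 saves
        p = gain1 * gain2
        if p > best_p:
            best_p, best_s = p, s
    return best_s

# In this simple model the product c1*c2*s*(1-s) is maximized at an
# even split regardless of the per-camera costs.
print(nash_split(1.0, 1.0))  # 0.5
```

Richer utility models (accounting for per-camera battery state or unequal analysis accuracy) shift the bargaining point away from the even split; the grid search generalizes unchanged.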
Reticle stage based linear dosimeter
Berger, Kurt W [Livermore, CA
2007-03-27
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Reticle stage based linear dosimeter
Berger, Kurt W.
2005-06-14
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Harbour surveillance with cameras calibrated with AIS data
NASA Astrophysics Data System (ADS)
Palmieri, F. A. N.; Castaldo, F.; Marino, G.
The inexpensive availability of surveillance cameras, easily connected in network configurations, suggests the deployment of this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing fusion centers with information that can be added to data coming from radar, lidar, AIS, etc. Camera systems that are used as localizers, however, must be properly calibrated in changing scenarios where there is often limited choice of the positions at which they are deployed. Automatic Identification System (AIS) data, which include position, course, and vessel identity and are freely available through inexpensive receivers for some of the vessels appearing within the field of view, provide the opportunity to achieve proper camera calibration for the localization of vessels not equipped with AIS transponders. In this paper we assume a pinhole model for the camera geometry and propose computing the perspective matrices from AIS positional data. Images obtained from the calibrated cameras are then matched, and pixel association is utilized for localization of other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information of easily identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using the camera matrices and pixel matching. Localization accuracy is experimentally evaluated as a function of target distance from the sensors.
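Under the stated pinhole assumption, estimating a perspective matrix from AIS positions and their pixel observations is a standard Direct Linear Transform (DLT) problem. A minimal sketch (the paper's exact estimation procedure is not detailed in the abstract):

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 pinhole projection matrix by the DLT.

    world_pts: (N, 3) AIS-derived positions; image_pts: (N, 2) pixel
    positions. Needs N >= 6 non-degenerate (non-coplanar) points.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector of the smallest
    # singular value, reshaped to 3x4 (defined up to scale).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3-D point with matrix P and dehomogenize."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]

# Recover a known projection matrix from synthetic correspondences.
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 20.],
                   [0., 0., 1., 5.]])
world = np.array([[0, 0, 10], [1, 0, 12], [0, 1, 11], [1, 1, 13],
                  [2, 1, 15.5], [1, 2, 14.2], [3, 0, 16.7]], float)
image = np.array([project(P_true, X) for X in world])
P_est = dlt_projection_matrix(world, image)
```

Once each camera's matrix is known, a vessel matched across two calibrated views can be localized by triangulating the two back-projected rays.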
NASA Technical Reports Server (NTRS)
1992-01-01
This document describes the Advanced Imaging System CCD based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as a part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed and complete schematics and source code listings are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jozsef, G
Purpose: To build a test device for HDR afterloaders capable of checking source positions, times at positions, and estimating the activity of the source. Methods: A catheter is taped on a plastic scintillation sheet. When a source travels through the catheter, the scintillator sheet lights up around the source. The sheet is monitored with a video camera, which records the movement of the light spot. The center of the spot on each image in the video provides the source location, and the time stamps of the images provide the dwell time the source spends in each location. Finally, the brightness of the light spot is related to the activity of the source. A code was developed to remove noise, calibrate the scale of the image to centimeters, eliminate the distortion caused by the oblique view angle, identify the boundaries of the light spot, transform the image into binary form, and detect and calculate the source motion, positions, and times. The images are much less noisy if the camera is shielded, which requires that the light spot be monitored in a mirror rather than directly. The whole assembly is covered from external light and has a size of approximately 17 × 35 × 25 cm (H × L × W). Results: A cheap camera in black-and-white mode proved to be sufficient with a plastic scintillator sheet. The best images were produced by a 3 mm thick sheet with a ZnS:Ag surface coating. The shielding of the camera decreased the noise but could not eliminate it. A test run even in noisy conditions resulted in approximately 1 mm and 1 sec difference from the planned positions and dwell times. Activity tests are in progress. Conclusion: The proposed method is feasible. It might simplify the monthly QA process of HDR brachytherapy units.
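The spot-location step described above (binarize the frame, then locate the bright region) can be sketched as an intensity-weighted centroid; using a plain threshold in place of the full noise-removal and distortion-correction chain is our simplification:

```python
import numpy as np

def spot_centroid(frame, threshold):
    """Locate a scintillation spot as the intensity centroid of all
    pixels above a fixed threshold. Returns (row, col) or None if no
    pixel exceeds the threshold."""
    mask = frame > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = frame[rows, cols].astype(float)
    return (float(np.average(rows, weights=weights)),
            float(np.average(cols, weights=weights)))

# A synthetic bright spot centered at (12, 30) on a dark frame.
frame = np.zeros((64, 64))
frame[11:14, 29:32] = 200.0
print(spot_centroid(frame, 50))  # (12.0, 30.0)
```

Tracking the centroid across time-stamped frames yields the dwell positions and dwell times; a pixel-to-centimeter calibration (as in the abstract) converts the centroid to physical source coordinates.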
Driver face recognition as a security and safety feature
NASA Astrophysics Data System (ADS)
Vetter, Volker; Giefing, Gerd-Juergen; Mai, Rudolf; Weisser, Hubert
1995-09-01
We present a driver face recognition system for comfortable access control and individual settings of automobiles. The primary goals are the prevention of car thefts and of heavy accidents caused by unauthorized use (joy-riders), as well as an increase in safety through optimal settings, e.g. of the mirrors and the seat position. The person sitting in the driver's seat is observed automatically by a small video camera in the dashboard. All the driver has to do is behave cooperatively, i.e. look into the camera. A classification system validates his access. Only after a positive identification can the car be used, and the driver-specific environment (e.g. seat position, mirrors, etc.) may be set up to ensure the driver's comfort and safety. The driver identification system has been integrated in a Volkswagen research car. Recognition results are presented.
Sensor network based vehicle classification and license plate identification system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frigo, Janette Rose; Brennan, Sean M; Rosten, Edward J
Typically, for energy efficiency and scalability purposes, sensor networks have been used in the context of environmental and traffic monitoring applications in which operations at the sensor level are not computationally intensive. But increasingly, sensor network applications require data- and compute-intensive sensors such as video cameras and microphones. In this paper, we describe the design and implementation of two such systems: a vehicle classifier based on acoustic signals and a license plate identification system using a camera. The systems are implemented in an energy-efficient manner to the extent possible using commercially available hardware, the Mica motes and the Stargate platform. Our experience in designing these systems leads us to consider an alternate, more flexible, modular, low-power mote architecture that uses a combination of FPGAs, specialized embedded processing units, and sensor data acquisition systems.
Benmiloud, Fares; Rebaudet, Stanislas; Varoquaux, Arthur; Penaranda, Guillaume; Bannier, Marie; Denizot, Anne
2018-01-01
The clinical impact of intraoperative autofluorescence-based identification of parathyroids using a near-infrared camera remains unknown. In a before-and-after controlled study, we compared all patients who underwent total thyroidectomy by the same surgeon during Period 1 (January 2015 to January 2016) without near-infrared (near-infrared- group) and those operated on during Period 2 (February 2016 to September 2016) using a near-infrared camera (near-infrared+ group). In parallel, we also compared all patients who underwent surgery without near-infrared during those same periods by another surgeon in the same unit (control groups). Main outcomes included postoperative hypocalcemia, parathyroid identification, autotransplantation, and inadvertent resection. The near-infrared+ group displayed significantly lower postoperative hypocalcemia rates (5.2%) than the near-infrared- group (20.9%; P < .001). Compared with the near-infrared- patients, the near-infrared+ group exhibited an increased mean number of identified parathyroids and reduced parathyroid autotransplantation rates, although no difference was observed in inadvertent resection rates. Parathyroids were identified via near-infrared before they were visualized by the surgeon in 68% of patients. In the control groups, parathyroid identification improved significantly from Period 1 to Period 2, although autotransplantation, inadvertent resection and postoperative hypocalcemia rates did not differ. Near-infrared use during total thyroidectomy significantly reduced postoperative hypocalcemia, improved parathyroid identification and reduced their autotransplantation rate. Copyright © 2017 Elsevier Inc. All rights reserved.
Automatic identification and location technology of glass insulator self-shattering
NASA Astrophysics Data System (ADS)
Huang, Xinbo; Zhang, Huiying; Zhang, Ye
2017-11-01
Insulators are among the most important components of transmission lines, and their integrity is vital to ensure the safe operation of transmission lines under complex and harsh operating conditions. Glass insulators often self-shatter, but the available identification methods are inefficient and unreliable. Therefore, an automatic identification and localization technology for self-shattered glass insulators is proposed, which consists of cameras installed on tower-mounted video monitoring devices or unmanned aerial vehicles, a 4G/OPGW network, and a monitoring center, where the identification and localization algorithm is embedded into the expert software. First, images of insulators are captured by the cameras and processed to identify the region of the insulator string with the presented insulator-string identification algorithm. Second, according to the characteristics of the insulator string image, a mathematical model of the insulator string is established to estimate the direction and length of the sliding blocks. Third, local binary pattern histograms of the template and the sliding block are extracted, by which a self-shattered insulator can be recognized and located. Finally, a series of experiments is carried out to verify the effectiveness of the algorithm. For single-insulator images, Ac, Pr, and Rc of the algorithm are 94.5%, 92.38%, and 96.78%, respectively. For double-insulator images, Ac, Pr, and Rc are 90.00%, 86.36%, and 93.23%, respectively.
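The third step above compares local binary pattern (LBP) histograms of a template and a sliding block. A minimal sketch using the basic 8-neighbour LBP and a chi-square distance (the paper's exact descriptor variant and matching rule are not specified in the abstract):

```python
import numpy as np

def lbp_histogram(img):
    """Normalized 256-bin histogram of basic 8-neighbour LBP codes.

    Each interior pixel gets a byte whose bits record whether each of
    its 8 neighbours is >= the center pixel.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= ((nb >= center).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between normalized histograms; small values
    mean the template and the sliding block match."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))
```

A sliding block whose chi-square distance to the intact-cap template exceeds a threshold would be flagged as a candidate self-shattered position.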
Computational cameras for moving iris recognition
NASA Astrophysics Data System (ADS)
McCloskey, Scott; Venkatesha, Sharath
2015-05-01
Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.
Projector-Camera Systems for Immersive Training
2006-01-01
average to a sequence of 100 captured distortion-corrected images. The OpenCV library [OpenCV] was used for camera calibration. To correct for...rendering application [Treskunov, Pair, and Swartout, 2004]. It was transposed to take into account different matrix conventions between OpenCV and...Screen Imperfections. Proc. Workshop on Projector-Camera Systems (PROCAMS), Nice, France, IEEE. OpenCV: Open Source Computer Vision. [Available
Camera Test on Curiosity During Flight to Mars
2012-05-07
An in-flight camera check produced this out-of-focus image when NASA's Mars Science Laboratory spacecraft turned on illumination sources that are part of the Curiosity rover's Mars Hand Lens Imager (MAHLI) instrument.
Radiation source with shaped emission
Kubiak, Glenn D.; Sweatt, William C.
2003-05-13
Employing a source of radiation, such as an electric discharge source, that is equipped with a capillary region configured into some predetermined shape, such as an arc or slit, can significantly improve the amount of flux delivered to the lithographic wafers while maintaining high efficiency. The source is particularly suited for photolithography systems that employ a ringfield camera. The invention permits the condenser, which delivers critical illumination to the reticle, to be simplified from five or more reflective elements to a total of three or four reflective elements, thereby increasing condenser efficiency. It maximizes the flux delivered and maintains a high coupling efficiency. This architecture couples EUV radiation from the discharge source into a ringfield lithography camera.
NASA Astrophysics Data System (ADS)
Shinoj, V. K.; Murukeshan, V. M.; Hong, Jesmond; Baskaran, M.; Aung, Tin
2015-07-01
Noninvasive medical imaging techniques have generated great interest and show high potential in the research and development of ocular imaging and follow-up procedures. It is well known that angle-closure glaucoma is one of the major ocular diseases/conditions that cause blindness. The identification and treatment of this disease rely primarily on angle assessment techniques. In this paper, we illustrate a probe-based imaging approach to obtain images of the angle region of the eye. The proposed probe consists of a micro CCD camera and LED/NIR laser light sources, which are configured at the distal end to enable imaging of the iridocorneal region inside the eye. With this proposed dual-modal probe, imaging is performed under light (white visible LED on) and dark (NIR laser light source alone) conditions, and the angle region is discernible in both cases. Imaging with NIR sources is of major significance in anterior chamber imaging, since it avoids pupil constriction due to bright light and thereby the artificial alteration of the anterior chamber angle. The proposed methodology and developed scheme are expected to find potential application in glaucoma detection and diagnosis.
Plenoptic camera wavefront sensing with extended sources
NASA Astrophysics Data System (ADS)
Jiang, Pengzhi; Xu, Jieping; Liang, Yonghui; Mao, Hongjun
2016-09-01
The wavefront sensor is used in adaptive optics to detect atmospheric distortion, which feeds back to the deformable mirror to compensate for this distortion. Different from the Shack-Hartmann sensor, which has been widely used with point sources, the plenoptic camera wavefront sensor has been proposed in recent years as an alternative wavefront sensor adequate for extended objects. In this paper, plenoptic camera wavefront sensing with extended sources is discussed systematically. Simulations are performed to investigate the wavefront measurement error and the closed-loop performance of the plenoptic sensor. The results show that there are an optimal lenslet size and an optimal number of pixels that yield the best performance. The RMS of the resulting corrected wavefront in the closed-loop adaptive optics system is less than 108 nm (0.2λ) when D/r0 ≤ 10 and the magnitude M ≤ 5. Our investigation indicates that the plenoptic sensor operates efficiently on extended sources in the closed-loop adaptive optics system.
OPTICAL IMAGES AND SOURCE CATALOG OF AKARI NORTH ECLIPTIC POLE WIDE SURVEY FIELD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Yiseul; Im, Myungshin; Lee, Induk
2010-09-15
We present the source catalog and the properties of the B-, R-, and I-band images obtained to support the AKARI North Ecliptic Pole Wide (NEP-Wide) survey. The NEP-Wide is an AKARI infrared imaging survey of the north ecliptic pole covering a 5.8 deg^2 area over 2.5-6 μm wavelengths. The optical imaging data were obtained at the Maidanak Observatory in Uzbekistan using the Seoul National University 4k x 4k Camera on the 1.5 m telescope. These images cover 4.9 deg^2 where no deep optical imaging data are available. Our B-, R-, and I-band data reach depths of ~23.4, ~23.1, and ~22.3 mag (AB) at 5σ, respectively. The source catalog contains 96,460 objects in the R band, and the astrometric accuracy is about 0.15 arcsec at 1σ in each of the R.A. and decl. directions. These photometric data will be useful for many studies, including identification of optical counterparts of the infrared sources detected by AKARI, analysis of their spectral energy distributions from optical through infrared, and the selection of interesting objects to understand obscured galaxy evolution.
System for critical infrastructure security based on multispectral observation-detection module
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław
2013-10-01
Recent terrorist attacks, and the possibility of such actions in the future, have forced the development of security systems for critical infrastructures that embrace sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on the construction of a ring with two-zone fencing and illuminated visual cameras, is being displaced by multisensor systems that consist of: visible technology (day/night cameras registering the optical contrast of a scene), thermal technology (inexpensive bolometric cameras recording the thermal contrast of a scene), and active ground radars at microwave and millimetre wavelengths that record and detect reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the technical conditions of installation and the parameters of the sensors. This procedure enables us to construct a system with correlated range, resolution, field of view, and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of cameras, automatically guiding cameras to an object detected by the radar, tracking the object, and localizing the object on a digital map, as well as for target identification and alerting. Based on a "plug and play" architecture, the system provides flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process, and obtain detailed information about detected intruders on a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification, and alarm triggering.
The paper presents the structure and some elements of a critical infrastructure protection solution based on a modular multisensor security system. The system description focuses mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.
Monitoring the spatial and temporal evolution of slope instability with Digital Image Correlation
NASA Astrophysics Data System (ADS)
Manconi, Andrea; Glueer, Franziska; Loew, Simon
2017-04-01
The identification and monitoring of ground deformation is important for an appropriate analysis and interpretation of unstable slopes. Displacements are usually monitored with in-situ techniques (e.g., extensometers, inclinometers, geodetic leveling, tachymeters and D-GPS), and/or active remote sensing methods (e.g., LiDAR and radar interferometry). In particular situations, however, the choice of an appropriate monitoring system is constrained by site-specific conditions. Slope areas can be very remote and/or affected by rapid surface changes, and are thus hardly accessible and often unsafe for field installations. In many cases the use of remote sensing approaches may also be hindered by unsuitable acquisition geometries, poor spatial resolution and revisit times, and/or high costs. The increasing availability of digital imagery acquired from terrestrial photo and video cameras nowadays provides an additional source of data. Such imagery can be exploited to visually identify changes in the scene occurring over time, but also to quantify the evolution of surface displacements. Image processing analyses such as Digital Image Correlation (also known as pixel-offset or feature-tracking) have been demonstrated to provide a suitable alternative for detecting and monitoring surface deformation at high spatial and temporal resolutions. However, a number of intrinsic limitations have to be considered when dealing with optical imagery acquisition and processing, including the effects of light conditions, shadowing, and/or meteorological variables. Here we propose an algorithm to automatically select and process images acquired from time-lapse cameras. We aim to maximize the results obtainable from large datasets of digital images acquired under different light and meteorological conditions, and to retrieve accurate information on the evolution of surface deformation. 
We show a successful application of our approach in the Swiss Alps, more specifically in the Great Aletsch area, where slope instability was recently reactivated by progressive glacier retreat. At this location, time-lapse cameras have been installed during the last two years, ranging from low-cost, low-resolution webcams to more expensive high-resolution reflex cameras. Our results confirm that time-lapse cameras provide quantitative and accurate measurements of surface deformation evolution over space and time, especially in situations where other monitoring instruments fail.
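The pixel-offset matching at the heart of Digital Image Correlation can be sketched with phase correlation between two co-registered frames: the displacement appears as the peak of the inverse-transformed, phase-only cross-power spectrum. This is a minimal illustration on synthetic data, not the authors' processing chain; the 64x64 image and the (5, 12) pixel shift are made up for the example.

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the integer pixel offset of `cur` relative to `ref`
    via phase correlation (a basic form of Digital Image Correlation)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(cur)
    cps = F2 * np.conj(F1)
    cps /= np.abs(cps)              # keep phase only
    corr = np.fft.ifft2(cps).real   # delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices past half the image size to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# synthetic "before" frame and a frame displaced by (5, 12) pixels
rng = np.random.default_rng(0)
before = rng.random((64, 64))
after = np.roll(before, (5, 12), axis=(0, 1))
dy, dx = phase_correlation_shift(before, after)
print(int(dy), int(dx))  # -> 5 12
```

In practice the correlation is computed per image patch and refined to sub-pixel precision; this sketch only recovers a whole-frame integer offset.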
NASA Astrophysics Data System (ADS)
Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.
2017-02-01
In the development of new near-infrared (NIR) fluorescence dyes for image-guided surgery, there is a need for new NIR-sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to present clinical systems that are optimized only for ICG. To test alternative camera systems, a setup was developed to mimic fluorescence light in a tissue phantom and measure sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate, creating a uniform, intensity-controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue of controlled thickness could be placed on the spot to mimic a fluorescent `cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image-guided surgery. The image of the spot obtained with each camera was captured and analyzed using ImageJ software. Enhanced CCD night-vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue; however, there was no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled in gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved useful for camera testing and can be used for the development and quality control of new NIR fluorescence guided surgery equipment.
Pulsed x-ray sources for characterization of gated framing cameras
NASA Astrophysics Data System (ADS)
Filip, Catalin V.; Koch, Jeffrey A.; Freeman, Richard R.; King, James A.
2017-08-01
Gated X-ray framing cameras are used to measure important characteristics of inertial confinement fusion (ICF) implosions, such as size and symmetry, with 50 ps time resolution in two dimensions. A pulsed source of hard (>8 keV) X-rays would be a valuable calibration device, for example for gain-droop measurements of the variation in sensitivity of the gated strips. We have explored the requirements for such a source and a variety of options that could meet these requirements. We find that a small-size dense plasma focus machine could be a practical single-shot X-ray source for this application if timing uncertainties can be overcome.
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.
2015-05-01
How can we design cameras that image selectively across the Full Electro-Magnetic (FEM) spectrum? Without selective imaging, we cannot use ordinary tourist cameras, for example, to see through fire, smoke, or other obscurants that create a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and display in color spectra for the Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down-shifting Planck spectra at each pixel and time.
NASA Astrophysics Data System (ADS)
Viegas, Jaime; Mayeh, Mona; Srinivasan, Pradeep; Johnson, Eric G.; Marques, Paulo V. S.; Farahi, Faramarz
2017-02-01
In this work, a silicon oxynitride-on-silica refractometer is presented, based on sub-wavelength coupled arrayed waveguide interference, and capable of low-cost, high-resolution, large-scale deployment. The sensor has an experimental spectral sensitivity as high as 3200 nm/RIU, covering refractive indices ranging from 1 (air) up to 1.43 (oils). The sensor readout can be performed by standard spectrometer techniques or by pattern projection onto a camera, followed by optical pattern recognition. Positive identification of the refractive index of an unknown species is obtained by pattern cross-correlation with a look-up calibration table based algorithm. Given the lower contrast between core and cladding in such devices, higher mode overlap with single-mode fiber is achieved, leading to larger coupling efficiency and more relaxed alignment requirements compared to the silicon photonics platform. Also, the optical transparency of the sensor in the visible range allows operation with light sources and camera detectors in the visible range, at much lower capital cost for a complete sensor system. Furthermore, the choice of refractive indices of the core and cladding in the sensor head with integrated readout allows the fabrication of the same device in polymers, for mass-production replication of disposable sensors.
A visual surveillance system for person re-identification
NASA Astrophysics Data System (ADS)
El-Alfy, Hazem; Muramatsu, Daigo; Teranishi, Yuuichi; Nishinaga, Nozomu; Makihara, Yasushi; Yagi, Yasushi
2017-03-01
We address the problem of autonomous surveillance for person re-identification. This is an active research area, where most recent work focuses on the open challenges of re-identification independently of the prerequisite detection and tracking steps. In this paper, we are interested in designing a complete surveillance system, joining all the pieces of the puzzle together. We start by collecting our own dataset from multiple cameras. Then, we automate the process of detecting and tracking human subjects in the scenes, followed by performing the re-identification task. We evaluate the recognition performance of our system, report its strengths, discuss open challenges and suggest ways to address them.
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, in particular considering the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can meet the requirements of robot binocular stereo vision.
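As a hedged illustration of the distortion terms such a calibration must account for, the sketch below applies the Brown-Conrady model used by OpenCV (radial coefficients k1, k2 and decentering/tangential coefficients p1, p2) to normalized image coordinates. The coefficient values are made up for the example and are not from the paper.

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial + decentering (tangential) lens distortion, as in
    OpenCV's camera model, to normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# with all coefficients zero the point is unchanged
print(distort(0.5, 0.0, 0, 0, 0, 0))   # -> (0.5, 0.0)
# a positive k1 pushes the point radially outward (pincushion distortion)
print(distort(0.5, 0.0, 0.1, 0, 0, 0))
```

Calibration solves the inverse problem: given many checkerboard corner observations, it estimates k1, k2, p1, p2 together with the intrinsic matrix by minimizing reprojection error.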
NASA Astrophysics Data System (ADS)
Aretxaga, Itziar
2015-08-01
The combination of short- and long-wavelength deep (sub-)mm surveys can effectively be used to identify high-redshift sub-millimeter galaxies (z>4). With star formation rates in excess of 500 Msun/yr, these bright (sub-)mm sources have been identified as the progenitors of massive elliptical galaxies undergoing rapid growth. With this purpose in mind, we are surveying a 20 sq. arcmin field within the Extended Groth Strip with the 1.1mm AzTEC camera mounted on the Large Millimeter Telescope, which overlaps with the deep 450/850um SCUBA-2 Cosmology Legacy Survey and the CANDELS deep NIR imaging. The improved beamsize of the LMT (8") over previous surveys aids the identification of the most prominent optical/IR counterparts. We discuss the high-redshift candidates found.
Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera
NASA Astrophysics Data System (ADS)
Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.
2016-04-01
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.
2004-02-02
This is a three-dimensional stereo anaglyph of an image taken by the front hazard-identification camera onboard NASA Mars Exploration Rover Opportunity, showing the rover arm in its extended position. 3D glasses are necessary to view this image.
Travtek Evaluation Task C3: Camera Car Study
DOT National Transportation Integrated Search
1998-11-01
A "biometric" technology is an automatic method for the identification, or identity verification, of an individual based on physiological or behavioral characteristics. The primary objective of the study summarized in this tech brief was to make reco...
Eccentricity error identification and compensation for high-accuracy 3D optical measurement
He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z
2016-01-01
The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement systems. The identification and compensation of the systematic eccentricity error of the circular target caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require knowledge of the geometric parameters of the measurement system regarding the target and camera. Therefore, the proposed approach is very flexible in practical applications; in particular, it is also applicable when only one image of a single target is available. Experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation. PMID:26900265
Eccentricity error identification and compensation for high-accuracy 3D optical measurement.
He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z
2013-07-01
The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement systems. The identification and compensation of the systematic eccentricity error of the circular target caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require knowledge of the geometric parameters of the measurement system regarding the target and camera. Therefore, the proposed approach is very flexible in practical applications; in particular, it is also applicable when only one image of a single target is available. Experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation.
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes
Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung
2016-01-01
Wilderness search and rescue entails performing a wide range of work in complex environments and over large regions. Given the concerns inherent in large regions with a limited distribution of rescue resources, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.
Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung
2016-10-25
Wilderness search and rescue entails performing a wide range of work in complex environments and over large regions. Given the concerns inherent in large regions with a limited distribution of rescue resources, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.
Measuring SO2 ship emissions with an ultra-violet imaging camera
NASA Astrophysics Data System (ADS)
Prata, A. J.
2013-11-01
Over the last few years, fast-sampling ultra-violet (UV) imaging cameras have been developed for measuring SO2 emissions from industrial sources (e.g. power plants; typical fluxes ~1-10 kg s-1) and natural sources (e.g. volcanoes; typical fluxes ~10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and fluxes. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and fluxes of SO2 (typical fluxes ~0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the fluxes and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated, and fluxes determined by measuring ship plume speeds simultaneously using the camera or by using surface wind speed data from an independent source. Accuracies were compromised in some cases by the presence of particulates in some ship emissions and by the restriction to single-filter UV imagery, a requirement for fast sampling (>10 Hz) from a single camera. Typical accuracies ranged from 10-30% in path concentration and 10-40% in flux estimation. Despite the ease of use and the ability to determine SO2 fluxes with the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method of monitoring ship emissions for regulatory purposes.
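The flux retrieval described above reduces to integrating the SO2 column (path concentration) along a transect across the plume and multiplying by the plume speed. A hedged numerical sketch; the column density, pixel size and speed values are illustrative, not from the field campaigns:

```python
import numpy as np

def so2_flux(column_kg_m2, pixel_width_m, plume_speed_m_s):
    """Flux (kg/s) = plume speed x integral of SO2 column density
    along a transect perpendicular to the plume axis."""
    return plume_speed_m_s * np.sum(column_kg_m2 * pixel_width_m)

# illustrative transect: 10 pixels of 0.5 m each, uniform 0.02 kg/m^2 column
columns = np.full(10, 0.02)
print(so2_flux(columns, 0.5, 5.0))  # ~0.5 kg/s
```

The speed factor is exactly where the two options in the abstract enter: it can come from tracking plume features between frames with the camera itself, or from independent surface wind data.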
Imaging using a supercontinuum laser to assess tumors in patients with breast carcinoma
NASA Astrophysics Data System (ADS)
Sordillo, Laura A.; Sordillo, Peter P.; Alfano, R. R.
2016-03-01
The supercontinuum laser light source has many advantages over other light sources, including broad spectral range. Transmission images of paired normal and malignant breast tissue samples from two patients were obtained using a Leukos supercontinuum (SC) laser light source with wavelengths in the second and third NIR optical windows and an IR- CCD InGaAs camera detector (Goodrich Sensors Inc. high response camera SU320KTSW-1.7RT with spectral response between 900 nm and 1,700 nm). Optical attenuation measurements at the four NIR optical windows were obtained from the samples.
Polarized fluorescence for skin cancer diagnostic with a multi-aperture camera
NASA Astrophysics Data System (ADS)
Kandimalla, Haripriya; Ramella-Roman, Jessica C.
2008-02-01
Polarized fluorescence has shown promising results in the assessment of skin cancer margins. Researchers have used tetracycline and cross-polarization imaging for nonmelanoma skin cancer demarcation, as well as investigating endogenous skin polarized fluorescence. In this paper we present a new instrument for polarized fluorescence imaging, able to calculate the full fluorescence Stokes vector in one snapshot. The core of our system is a multi-aperture camera constructed with a two-by-two lenslet array. Three of the lenses have polarizing elements in front of them, oriented at 0°, +45° and 90° with respect to the light source polarization. A flash lamp combined with a polarizer parallel to the source-camera-sample plane and a UV filter is used as the excitation source. A blue filter in front of the camera system is used to collect only the fluorescent emission of interest and filter out the incident light. In-vitro tests of endogenous and exogenous polarized fluorescence on collagen-rich material such as bovine tendon were performed and the Stokes vector of the polarized fluorescence calculated. The system has the advantage of eliminating movement artifacts by collecting the different polarization states and the Stokes vector in a single snapshot.
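Given the Malus-law relation I(θ) = ½(S0 + S1 cos 2θ + S2 sin 2θ), the three polarizer-filtered sub-images at 0°, 45° and 90° determine the linear Stokes components pixel by pixel. A minimal sketch, assuming negligible circular polarization (which three linear analyzers alone cannot measure); the intensity values are illustrative:

```python
import numpy as np

def linear_stokes(i0, i45, i90):
    """Recover S0, S1, S2 from intensities measured behind polarizers
    at 0, 45 and 90 degrees (works per pixel on full image arrays)."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal vs vertical preference
    s2 = 2 * i45 - (i0 + i90)  # +45 vs -45 preference
    return s0, s1, s2

# fully horizontally polarized light: I0 = 1, I45 = 0.5, I90 = 0
s0, s1, s2 = linear_stokes(np.array(1.0), np.array(0.5), np.array(0.0))
dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarization
print(s0, s1, s2, dolp)  # -> 1.0 1.0 0.0 1.0
```

Because the three sub-images are captured through one lenslet array in a single exposure, the components come from the same instant, which is what suppresses movement artifacts.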
Photogrammetry of Apollo 15 photography, part C
NASA Technical Reports Server (NTRS)
Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.; Derick, J. L.
1972-01-01
In the Apollo 15 mission, a mapping camera system, a 61 cm optical bar high-resolution panoramic camera, and a laser altimeter were used. The panoramic camera is described as having several distortion sources, such as the cylindrical shape of the negative film surface, the scanning action of the lens, the image motion compensator, and the spacecraft motion. Film products were processed on a specially designed analytical plotter.
X-ray topography as a process control tool in semiconductor and microcircuit manufacture
NASA Technical Reports Server (NTRS)
Parker, D. L.; Porter, W. A.
1977-01-01
A bent wafer camera, designed to identify crystal lattice defects in semiconductor materials, was investigated. The camera makes use of conventional X-ray topographs and an innovative slightly bent wafer which allows rays from the point source to strike all portions of the wafer simultaneously. In addition to being utilized in solving production process control problems, this camera design substantially reduces the cost per topograph.
Shearing interference microscope for step-height measurements.
Trịnh, Hưng-Xuân; Lin, Shyh-Tsong; Chen, Liang-Chia; Yeh, Sheng-Lih; Chen, Chin-Sheng; Hoang, Hong-Hai
2017-05-01
A shearing interference microscope using a Savart prism as the shear plate is proposed for inspecting step-heights. The light beam propagates through the Savart prism and microscopic system to illuminate the sample; it then turns back to re-pass through the Savart prism and microscopic system to generate a shearing interference pattern on the camera. Two measurement modes, phase-shifting and phase-scanning, can be utilized to determine the depths of the step-heights on the sample. The first mode, which employs a narrowband source, is based on the five-step phase-shifting algorithm and has a measurement range of a quarter-wavelength. The second mode, which adopts a broadband source, is based on peak-intensity identification technology and has a measurement range of up to a few micrometres. This paper introduces the configuration and measurement theory of this microscope, describes a setup used to implement it, and presents experimental results obtained with that setup. The results not only verify the validity but also confirm the high measurement repeatability of the proposed microscope. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Intraoperative near-infrared autofluorescence imaging of parathyroid glands.
Ladurner, Roland; Sommerey, Sandra; Arabi, Nora Al; Hallfeldt, Klaus K J; Stepp, Herbert; Gallwas, Julia K S
2017-08-01
To identify parathyroid glands intraoperatively by exposing their autofluorescence using near-infrared light. Fluorescence imaging was carried out during minimally invasive and open parathyroid and thyroid surgery. After identification, the parathyroid glands as well as the surrounding tissue were exposed to near-infrared (NIR) light with a wavelength of 690-770 nm using a modified Karl Storz near-infrared/indocyanine green (NIR/ICG) endoscopic system. Parathyroid tissue was expected to show near-infrared autofluorescence, captured in the blue channel of the camera. Whenever possible the visual identification of parathyroid tissue was confirmed histologically. In preliminary investigations, using the original NIR/ICG endoscopic system we noticed considerable interference of light in the blue channel overlying the autofluorescence. Therefore, we modified the light source by interposing additional filters. In a second series, we investigated 35 parathyroid glands from 25 patients. Twenty-seven glands were identified correctly based on NIR autofluorescence. Regarding the extent of autofluorescence, there were no noticeable differences between parathyroid adenomas, hyperplasia and normal parathyroid glands. In contrast, thyroid tissue, lymph nodes and adipose tissue revealed no substantial autofluorescence. Parathyroid tissue is characterized by showing autofluorescence in the near-infrared spectrum. This effect can be used to distinguish parathyroid glands from other cervical tissue entities.
Forming images with thermal neutrons
NASA Astrophysics Data System (ADS)
Vanier, Peter E.; Forman, Leon
2003-01-01
Thermal neutrons passing through air have scattering lengths of about 20 meters. At greater distances, the majority of neutrons emanating from a moderated source will scatter multiple times in the air before being detected, and will not retain information about the location of the source, except that their density will fall off somewhat faster than 1/r². However, a significant fraction of the neutrons will travel 20 meters or more without scattering and can be used to create an image of the source. A few years ago, a proof-of-principle "camera" was demonstrated that could produce images of a scene containing sources of thermalized neutrons and could locate a source comparable in strength to an improvised nuclear device at ranges over 60 meters. The instrument makes use of a coded aperture with a uniformly redundant array of openings, analogous to those used in x-ray and gamma cameras. The detector is a position-sensitive He-3 proportional chamber, originally used for neutron diffraction. A neutron camera has many features in common with cameras designed for non-focusable photons, as well as some important differences. Potential applications include detecting nuclear smuggling, locating non-metallic land mines, assaying nuclear waste, and surveying for health physics purposes.
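Coded-aperture imaging reconstructs the scene by correlating the detector shadowgram with a decoding array matched to the mask. A hedged one-dimensional sketch using a length-11 quadratic-residue mask as a toy stand-in for the camera's uniformly redundant array; the source position and intensity are made up for the example:

```python
import numpy as np

p = 11                                   # prime mask length
qr = {(i * i) % p for i in range(1, p)}  # quadratic residues mod 11
A = np.array([1 if i in qr else 0 for i in range(p)])  # open/closed mask
G = 2 * A - 1                            # matched decoding array

scene = np.zeros(p)
scene[7] = 100.0                         # point source at position 7

# shadowgram: each detector bin sums the scene seen through a shifted mask
D = np.array([np.dot(scene, np.roll(A, -k)) for k in range(p)])
# decoding: correlate the shadowgram with the decoding array
O = np.array([np.dot(D, np.roll(G, -j)) for j in range(p)])
print(int(np.argmax(O)))  # -> 7 (source location recovered)
```

The flat-sidelobe property of the quadratic-residue difference set makes the decoded image a sharp peak on a uniform background; the real instrument uses the same idea with a two-dimensional array and a position-sensitive detector.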
Patient position alters attenuation effects in multipinhole cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timmins, Rachel; Ruddy, Terrence D.; Wells, R. Glenn, E-mail: gwells@ottawaheart.ca
2015-03-15
Purpose: Dedicated cardiac cameras offer improved sensitivity over conventional SPECT cameras. Sensitivity gains are obtained by large numbers of detectors and novel collimator arrangements such as an array of multiple pinholes that focus on the heart. Pinholes lead to variable amounts of attenuation as a source is moved within the camera field of view. This study evaluated the effects of this variable attenuation on myocardial SPECT images. Methods: Computer simulations were performed for a set of nine point sources distributed in the left ventricular wall (LV). Sources were placed at the location of the heart in both an anthropomorphic and a water-cylinder computer phantom. Sources were translated in x, y, and z by up to 5 cm from the center. Projections were simulated with and without attenuation and the changes in attenuation were compared. An LV with an inferior wall defect was also simulated in both phantoms over the same range of positions. Real camera data were acquired on a Discovery NM530c camera (GE Healthcare, Haifa, Israel) for five min in list-mode using an anthropomorphic phantom (DataSpectrum, Durham, NC) with 100 MBq of Tc-99m in the LV. Images were taken over the same range of positions as the simulations and were compared based on the summed perfusion score (SPS), defect width, and apparent defect uptake for each position. Results: Point sources in the water phantom showed absolute changes in attenuation of ≤8% over the range of positions and relative changes of ≤5% compared to the apex. In the anthropomorphic computer simulations, the absolute change increased to 20%. The changes in relative attenuation caused a change in SPS of <1.5 for the water phantom but up to 4.2 in the anthropomorphic phantom. Changes were larger for axial than for transverse translations. These results were supported by SPS changes of up to six seen in the physical anthropomorphic phantom for axial translations. Defect width was also seen to significantly increase. 
The position-dependent changes were removed with attenuation correction. Conclusions: Translation of a source relative to a multipinhole camera caused only small changes in homogeneous phantoms, with SPS changing <1.5. Inhomogeneous attenuating media cause much larger changes to occur when the source is translated. Changes in SPS of up to six were seen in an anthropomorphic phantom for axial translations. Attenuation correction removes the position-dependent changes in attenuation.
Building Security into Schools.
ERIC Educational Resources Information Center
Kosar, John E.; Ahmed, Faruq
2000-01-01
Offers tips for redesigning safer school sites; installing and implementing security technologies (closed-circuit television cameras, door security hardware, electronic security panels, identification cards, metal detectors, and panic buttons); educating students and staff about security functions; and minimizing costs via a comprehensive campus…
Quantitative Measurements of X-ray Intensity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haugh, M. J., Schneider, M.
This chapter describes the characterization of several X-ray sources and their use in calibrating different types of X-ray cameras at National Security Technologies, LLC (NSTec). The cameras are employed in experimental plasma studies at Lawrence Livermore National Laboratory (LLNL), including the National Ignition Facility (NIF). The sources provide X-rays in the energy range from several hundred eV to 110 keV. The key to this effort is measuring the X-ray beam intensity accurately and in a manner traceable to international standards. This is accomplished using photodiodes of several types that are calibrated using radioactive sources and a synchrotron source, with methods and materials that are traceable to the U.S. National Institute of Standards and Technology (NIST). The accreditation procedures are described. The chapter begins with an introduction to the fundamental concepts of X-ray physics. The types of X-ray sources that are used for device calibration are described. The next section describes the photodiode types that are used for measuring X-ray intensity: power-measuring photodiodes, energy-dispersive photodiodes, and cameras comprising photodiodes as pixel elements. Following their description, the methods used to calibrate the primary detectors (the power-measuring photodiodes and the energy-dispersive photodiodes), as well as the method used to obtain traceability to international standards, are described. The X-ray source beams can then be measured using the primary detectors. The final section describes the use of the calibrated X-ray beams to calibrate X-ray cameras. Many of the references are web sites that provide databases, explanations of the data and how it was generated, and data calculations for specific cases. Several general reference books related to the major topics are included. Papers expanding some subjects are cited.
NASA Astrophysics Data System (ADS)
Taya, T.; Kataoka, J.; Kishimoto, A.; Tagawa, L.; Mochizuki, S.; Toshito, T.; Kimura, M.; Nagao, Y.; Kurita, K.; Yamaguchi, M.; Kawachi, N.
2017-07-01
Particle therapy is an advanced cancer therapy that uses a feature known as the Bragg peak, in which particle beams suddenly lose their energy near the end of their range. The Bragg peak enables particle beams to damage tumors effectively. To achieve precise therapy, the demand for accurate and quantitative imaging of the beam irradiation region or dosage during therapy has increased. The most common method of particle range verification is imaging of annihilation gamma rays by positron emission tomography. Not only 511-keV gamma rays but also prompt gamma rays are generated during therapy; therefore, the Compton camera is expected to be used as an on-line monitor for particle therapy, as it can image these gamma rays in real time. Proton therapy, one of the most common particle therapies, uses a proton beam of approximately 200 MeV, which has a range of ~25 cm in water. As gamma rays are emitted along the path of the proton beam, quantitative evaluation of the reconstructed images of diffuse sources becomes crucial, but it is far from being fully developed for Compton camera imaging at present. In this study, we first quantitatively evaluated reconstructed Compton camera images of uniformly distributed diffuse sources, and then confirmed that our Compton camera obtained 3% (1σ) and 5% (1σ) uniformity for line and plane sources, respectively. Based on this quantitative study, we demonstrated on-line gamma imaging during proton irradiation. Through these studies, we show that the Compton camera is suitable for future use as an on-line monitor for particle therapy.
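The 3% and 5% (1σ) uniformity figures quoted above can be read as the relative standard deviation of the reconstructed voxel intensities over the source region. A minimal sketch of that metric (the precise definition is our assumption; the abstract does not spell it out):

```python
import numpy as np

def uniformity_1sigma(profile):
    """Relative 1-sigma non-uniformity of a reconstructed source profile:
    standard deviation of the voxel intensities divided by their mean."""
    profile = np.asarray(profile, dtype=float)
    return profile.std(ddof=0) / profile.mean()

# A nearly flat reconstructed line-source profile with small fluctuations.
rng = np.random.default_rng(0)
profile = 1.0 + 0.03 * rng.standard_normal(200)
print(f"uniformity: {uniformity_1sigma(profile):.3f}")  # ~0.03, i.e. ~3% (1 sigma)
```

For a plane source the same statistic would simply be taken over a 2-D region of the reconstructed image rather than a 1-D profile.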
Deep-Sea Video Cameras Without Pressure Housings
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2004-01-01
Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (~0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal-oxide-semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures.
If power-supply regulators or filter capacitors were needed, these could be attached in chip form directly onto the back of, and potted with, the imager chip. Because CMOS imagers dissipate little power, the potting would not result in overheating. To minimize the cost of the camera, a fixed lens could be fabricated as part of the plastic case. For improved optical performance at greater cost, an adjustable glass achromatic lens would be mounted in a reservoir that would be filled with transparent oil and subject to the full hydrostatic pressure, and the reservoir would be mounted on the case to position the lens in front of the image sensor. The lens would be adjusted for focus by use of a motor inside the reservoir (oil-filled motors already exist).
External Mask Based Depth and Light Field Camera
2013-12-08
laid out in the previous light field cameras. A good overview of the sampling of the plenoptic function can be found in the survey work by Wetzstein et...view is shown in Figure 6. 5. Applications High spatial resolution depth and light fields are a rich source of information about the plenoptic ...http://www.pelicanimaging.com/. [4] E. Adelson and J. Wang. Single lens stereo with a plenoptic camera. Pattern Analysis and Machine Intelligence
Measurement of the timing behaviour of off-the-shelf cameras
NASA Astrophysics Data System (ADS)
Schatz, Volker
2017-04-01
This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity at the O(10^-3) to O(10^-2) level during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
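The procedure described above, summing and normalising pixels while sweeping the delay of a short light pulse, traces out the overlap between illumination and the exposure window; the edges of the resulting curve locate the trigger delay and the exposure time. A toy model of that idea (all timing values are hypothetical, not taken from the paper):

```python
import numpy as np

def overlap_signal(delay, pulse_width, trig_delay, exposure):
    """Fraction of a rectangular light pulse (start=delay, width=pulse_width)
    falling inside the exposure window [trig_delay, trig_delay + exposure]."""
    lo = max(delay, trig_delay)
    hi = min(delay + pulse_width, trig_delay + exposure)
    return max(0.0, hi - lo) / pulse_width

# Sweep the pulse delay; the half-maximum crossings of the resulting curve
# mark the start and end of the exposure window (offset by half a pulse).
delays = np.linspace(0, 100, 1001)  # microseconds (hypothetical units)
sig = np.array([overlap_signal(d, 2.0, 20.0, 50.0) for d in delays])
rise = delays[np.argmax(sig > 0.5)]
fall = delays[len(sig) - np.argmax(sig[::-1] > 0.5) - 1]
print(f"inferred trigger delay ~ {rise:.1f} us, exposure ~ {fall - rise:.1f} us")
```

A real measurement would replace `overlap_signal` with the normalised sum of recorded pixel values, and the non-ideal effects reported in the abstract would appear as departures from this ideal trapezoid.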
User-assisted visual search and tracking across distributed multi-camera networks
NASA Astrophysics Data System (ADS)
Raja, Yogesh; Gong, Shaogang; Xiang, Tao
2011-11-01
Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
2007-06-01
of SNR, she incorporated the effects that an InGaAs photovoltaic detector have in producing the signal along with the photon, Johnson, and shot noises ...the photovoltaic FPA detector modeled? • What detector noise sources limit the computed signal? 3.1 Modeling Methodology Two aspects in the IR camera...Another shot noise source in photovoltaic detectors is dark current. This current represents the current flowing in the detector when no optical radiation
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
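The end-to-end calibration amounts to tabulating the measured output signal against known input brightness and then inverting that curve to photometer unknown objects. A schematic sketch with an invented monotonic response (the real curve, including behaviour beyond pixel saturation, must of course come from the artificial-star measurements):

```python
import numpy as np

# Hypothetical camera response: individual pixels saturate, but the
# frame-integrated signal keeps growing monotonically with brightness.
def integrated_response(brightness, sat=255.0, k=0.02):
    """Toy monotonic response of the frame-integrated signal vs brightness."""
    return sat * (1.0 - np.exp(-k * brightness)) + 0.5 * np.sqrt(brightness)

# Calibration: tabulate the curve with the artificial variable star ...
cal_b = np.linspace(1, 5000, 500)
cal_s = integrated_response(cal_b)

# ... then invert by interpolation to photometer an unknown object.
def brightness_from_signal(signal):
    return np.interp(signal, cal_s, cal_b)

measured = integrated_response(3000.0)
print(f"recovered brightness: {brightness_from_signal(measured):.0f}")
```

The key point matching the abstract is that the inversion works as long as the integrated signal remains monotonic in brightness, even well past the saturation level of individual pixels.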
NASA Astrophysics Data System (ADS)
O'Keefe, Eoin S.
2005-10-01
As thermal imaging technology matures and ownership costs decrease, there is a trend to equip a greater proportion of airborne surveillance vehicles used by security and defence forces with both visible band and thermal infrared cameras. These cameras are used for tracking vehicles on the ground, to aid in pursuit of villains in vehicles and on foot, while also assisting in the direction and co-ordination of emergency service vehicles as the occasion arises. These functions rely on unambiguous identification of police and other emergency service vehicles. In the visible band this is achieved by dark markings with high contrast (light) backgrounds on the roof of vehicles. When there is no ambient lighting, for example at night, thermal imaging is used to track both vehicles and people. In the thermal IR, the visible markings are not obvious. At the wavelengths at which thermal imagers operate, either 3-5 microns or 8-12 microns, dark and light coloured materials have similarly low reflectivity. To maximise the usefulness of IR airborne surveillance, a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared is needed. In this paper we discuss the design, application and operation of some vehicle and personnel marking materials and show airborne IR and visible imagery of materials in use.
NASA Astrophysics Data System (ADS)
Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy
2013-10-01
The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.
Use of wildlife webcams - Literature review and annotated bibliography
Ratz, Joan M.; Conk, Shannon J.
2010-01-01
The U.S. Fish and Wildlife Service National Conservation Training Center requested a literature review product that would serve as a resource to natural resource professionals interested in using webcams to connect people with nature. The literature review focused on the effects on the public of viewing wildlife through webcams and on information regarding installation and use of webcams. We searched the peer reviewed, published literature for three topics: wildlife cameras, virtual tourism, and technological nature. Very few publications directly addressed the effect of viewing wildlife webcams. The review of information on installation and use of cameras yielded information about many aspects of the use of remote photography, but not much specifically regarding webcams. Aspects of wildlife camera use covered in the literature review include: camera options, image retrieval, system maintenance and monitoring, time to assemble, power source, light source, camera mount, frequency of image recording, consequences for animals, and equipment security. Webcam technology is relatively new, and more publications regarding its use are needed. Future research should specifically study the effect that viewing wildlife through webcams has on the viewers' conservation attitudes, behaviors, and sense of connectedness to nature.
A near-Infrared SETI Experiment: Alignment and Astrometric precision
NASA Astrophysics Data System (ADS)
Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan
2016-06-01
Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument aiming to search for fast nanosecond laser pulses has been commissioned on the Nickel 1-m telescope at Lick Observatory. The NIROSETI instrument makes use of an optical guide camera, a SONY ICX694 CCD from PointGrey, to align our selected sources onto two 200 µm near-infrared Avalanche Photo Diodes (APD) with a field-of-view of 2.5"x2.5" each. These APD detectors operate at very fast bandwidths and are able to detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, static optical distortion solution, and relative orientation with respect to the APD detectors. We determined the guide camera plate scale as 55.9 ± 2.7 milliarcseconds/pixel and a magnitude limit of 18.15 mag (+1.07/-0.58) in V-band. We will present the full distortion solution of the guide camera, orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.
Raspberry Pi camera with intervalometer used as crescograph
NASA Astrophysics Data System (ADS)
Albert, Stefan; Surducan, Vasile
2017-12-01
The intervalometer is an attachment or facility on a photo-camera that operates the shutter regularly at set intervals over a period. Professional cameras with built-in intervalometers are expensive and quite difficult to find. The Canon CHDK open source operating system allows intervalometer implementation on Canon cameras only; however, finding a Canon camera with a near infra-red (NIR) photographic lens at an affordable price is impossible. On experiments requiring several cameras (used to measure growth in plants - the crescographs - but also for coarse evaluation of the water content of leaves), the costs of the equipment are often over budget. Using two Raspberry Pi modules, each equipped with a low cost NIR camera and a WIFI adapter (for downloading pictures stored on the SD card), and some freely available software, we have implemented two low budget intervalometer cameras. The shooting interval, the number of pictures to be taken, image resolution and some other parameters can be fully programmed. Cameras have been in use continuously for three months (July-October 2017) in a relevant environment (outside), proving the concept functionality.
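The core of a software intervalometer is a capture loop scheduled against absolute times, so that timing drift does not accumulate over months of operation. A minimal sketch (the capture callback is a stand-in; on a Raspberry Pi it might shell out to the `raspistill` command, which is a Raspbian tool, not something described in the abstract):

```python
import time

def intervalometer(capture, interval_s, n_frames,
                   clock=time.monotonic, sleep=time.sleep):
    """Trigger capture(index) every interval_s seconds, n_frames times,
    scheduling against absolute deadlines so per-shot drift cannot add up."""
    t0 = clock()
    for i in range(n_frames):
        capture(i)
        next_shot = t0 + (i + 1) * interval_s
        delay = next_shot - clock()
        if delay > 0:
            sleep(delay)

# Example with a stand-in capture function that just records frame indices.
frames = []
intervalometer(frames.append, interval_s=0.01, n_frames=3)
print(frames)  # [0, 1, 2]
```

Anchoring each deadline to `t0` rather than to the previous shot is the standard way to keep a months-long time-lapse from slowly drifting.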
Optical design of portable nonmydriatic fundus camera
NASA Astrophysics Data System (ADS)
Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang
2016-03-01
The fundus camera is a simple and widely used piece of medical equipment for screening and diagnosis of retinal disease. Early fundus cameras expanded the pupil with a mydriatic to increase the amount of incoming light, which leaves patients feeling vertiginous and with blurred vision; nonmydriatic operation is the trend in fundus cameras. A desktop fundus camera is not easy to carry and is only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm for imaging, and 808 nm for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to restrain the stray light from the cornea center. The camera is focused by repositioning the CCD along the optical axis. The diopter range is between -20 m-1 and +20 m-1.
George, Edward V.; Oster, Yale; Mundinger, David C.
1990-01-01
Deep UV projection lithography can be performed using an e-beam pumped solid excimer UV source, a mask, and a UV reduction camera. The UV source produces deep UV radiation in the range 1700-1300 Å using xenon, krypton or argon; shorter wavelengths of 850-650 Å can be obtained using neon or helium. A thin solid layer of the gas is formed on a cryogenically cooled plate and bombarded with an e-beam to cause fluorescence. The UV reduction camera utilizes multilayer mirrors having high reflectivity at the UV wavelength and images the mask onto a resist coated substrate at a preselected demagnification. The mask can be formed integrally with the source as an emitting mask.
NASA Astrophysics Data System (ADS)
Madrigal, Carlos A.; Restrepo, Alejandro; Branch, John W.
2016-09-01
3D reconstruction of small objects is used in applications of surface analysis, forensic analysis and tissue reconstruction in medicine. In this paper, we propose a strategy for the 3D reconstruction of small objects and the identification of some superficial defects. We applied a technique of projection of structured light patterns, specifically sinusoidal fringes, and a phase unwrapping algorithm. A CMOS camera was used to capture images and a DLP digital light projector for synchronous projection of the sinusoidal pattern onto the objects. We implemented a calibration technique based on a 2D flat pattern, so the intrinsic and extrinsic parameters of the camera and the DLP were defined. Experimental tests were performed on samples of artificial teeth, coal particles, welding defects and surfaces tested with Vickers indentation. Areas of less than 5 cm were studied. The objects were reconstructed in 3D with densities of about one million points per sample. In addition, the steps of 3D description, identification of primitives, training and classification were implemented to recognize defects such as holes, cracks, rough textures and bumps. We found that pattern recognition strategies are useful when quality supervision of surfaces has enough points to evaluate the defective region, because the identification of defects in small objects is a demanding visual inspection activity.
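Sinusoidal fringe projection typically recovers a wrapped phase map from a small set of phase-shifted images before unwrapping. A sketch of the common four-step formula (the abstract does not state which phase-shifting scheme was actually used, so this is an illustrative choice):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each:
    I_k = A + B*cos(phi + k*pi/2), so
    i3 - i1 = 2B*sin(phi) and i0 - i2 = 2B*cos(phi)."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: one pixel row with a known phase ramp.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
imgs = [5.0 + 2.0 * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*imgs)
print(np.allclose(recovered, phi))  # True
```

The recovered phase is wrapped to (-pi, pi]; the phase-unwrapping step mentioned in the abstract removes the 2-pi discontinuities before the phase map is converted to height.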
An integration time adaptive control method for atmospheric composition detection of occultation
NASA Astrophysics Data System (ADS)
Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin
2018-01-01
When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. Over the roughly 180 seconds of an occultation, the intensity of sunlight transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric paths, and the change can be as fast as a factor of 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration time control method for occultation is presented. The method takes the distribution of gray values in the image as the reference variable and applies integral PID control in its incremental (velocity) form to the integration time adaptation problem of high frequency imaging. Automatic control of the integration time over the large dynamic range of the occultation can thus be achieved.
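A closed-loop controller of this kind drives a gray-level statistic of the sun image toward a target by rescaling the integration time each frame. A toy sketch (the gains, the multiplicative update, and the brightness model are all illustrative assumptions, not the paper's parameters):

```python
def exposure_pid(target, kp=0.4, ki=0.1, kd=0.05):
    """Closed-loop integration-time controller: a measured gray level
    (e.g. the median of the sun image) is driven toward `target` by
    scaling the exposure time multiplicatively each frame."""
    integral, prev_err = 0.0, 0.0
    def step(measured_gray, exposure):
        nonlocal integral, prev_err
        err = (target - measured_gray) / target   # normalised error
        integral += err
        deriv = err - prev_err
        prev_err = err
        return exposure * (1.0 + kp * err + ki * integral + kd * deriv)
    return step

# Simulated occultation: scene brightness rises 5% per frame, and the
# controller shortens the exposure to keep the image near mid-gray.
ctrl = exposure_pid(target=128.0)
exposure, brightness = 1.0, 200.0
for _ in range(40):
    gray = min(255.0, brightness * exposure)   # sensor clips at 255
    exposure = ctrl(gray, exposure)
    brightness *= 1.05
print(f"final gray ~ {min(255.0, brightness * exposure):.0f}")
```

The integral term is what lets the loop track a steadily ramping light level with vanishing error, which matches the sustained, rapid intensity change during the occultation.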
Vector-Based Ground Surface and Object Representation Using Cameras
2009-12-01
representations and it is a digital data structure used for the representation of a ground surface in geographical information systems ( GIS ). Figure...Vision API library, and the OpenCV library. Also, the Posix thread library was utilized to quickly capture the source images from cameras. Both
Sheibley, Rich W.; Josberger, Edward G.; Chickadel, Chris
2010-01-01
The input of freshwater and associated nutrients into Lynch Cove and lower Hood Canal (fig. 1) from sources such as groundwater seeps, small streams, and ephemeral creeks may play a major role in the nutrient loading and hydrodynamics of this low dissolved-oxygen (hypoxic) system. These dispersed sources exhibit a high degree of spatial variability. However, few in-situ measurements of groundwater seepage rates and nutrient concentrations are available and thus may not adequately represent the large spatial variability of groundwater discharge in the area. As a result, our understanding of these processes and their effect on hypoxic conditions in Hood Canal is limited. To determine the spatial variability and relative intensity of these sources, the U.S. Geological Survey Washington Water Science Center collaborated with the University of Washington Applied Physics Laboratory to obtain thermal infrared (TIR) images of the nearshore and intertidal regions of Lynch Cove at or near low tide. In the summer, cool freshwater discharges from seeps and streams, flows across the exposed, sun-warmed beach, and out on the warm surface of the marine water. These temperature differences are readily apparent in aerial thermal infrared imagery that we acquired during the summers of 2008 and 2009. When combined with co-incident video camera images, these temperature differences allow identification of the location, the type, and the relative intensity of the sources.
Data Mining and Information Technology: Its Impact on Intelligence Collection and Privacy Rights
2007-11-26
sources include: Cameras - Digital cameras (still and video ) have been improving in capability while simultaneously dropping in cost at a rate...citizen is caught on camera 300 times each day.5 The power of extensive video coverage is magnified greatly by the nascent capability for voice and...software on security videos and tracking cell phone usage in the local area. However, it would only return the names and data of those who
Overview of Digital Forensics Algorithms in DSLR Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and improvements in digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera, and the improvement of image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
NASA Astrophysics Data System (ADS)
Gicquel, Adeline; Vincent, Jean-Baptiste; Sierks, Holger; Rose, Martin; Agarwal, Jessica; Deller, Jakob; Guettler, Carsten; Hoefner, Sebastian; Hofmann, Marc; Hu, Xuanyu; Kovacs, Gabor; Oklay Vincent, Nilda; Shi, Xian; Tubiana, Cecilia; Barbieri, Cesare; Lamy, Phylippe; Rodrigo, Rafael; Koschny, Detlef; Rickman, Hans; OSIRIS Team
2016-10-01
Images of the nucleus and the coma (gas and dust) of comet 67P/Churyumov-Gerasimenko have been acquired by the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) camera system since March 2014 using both the wide angle camera (WAC) and the narrow angle camera (NAC). We are using the NAC camera to study the bright outburst observed on July 29th, 2015 in the southern hemisphere. The NAC camera's wavelength range spans 250-1000 nm with a combination of 12 filters. High spatial resolution is needed to localize the source point of the outburst on the surface of the nucleus. At the time of the observations, the heliocentric distance was 1.25 AU and the distance between the spacecraft and the comet was 126 km. We aim to understand the physics leading to such outgassing: is the jet associated with the outburst controlled by the micro-topography, or by suddenly exposed ice? We are using the Direct Simulation Monte Carlo (DSMC) method to study the gas flow close to the nucleus. The goal of the DSMC code is to reproduce the opening angle of the jet and constrain the outgassing ratio between the outburst source and the local region. The results of this model will be compared to the images obtained with the NAC camera.
Impact of laser phase and amplitude noises on streak camera temporal resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wlotzko, V., E-mail: wlotzko@optronis.com; Optronis GmbH, Ludwigstrasse 2, 77694 Kehl; Uhring, W.
2015-09-15
Streak cameras are now reaching sub-picosecond temporal resolution. In cumulative acquisition mode, this resolution does not entirely rely on the electronic or the vacuum tube performances but also on the light source characteristics. The light source, usually an actively mode-locked laser, is affected by phase and amplitude noises. In this paper, the theoretical effects of such noises on the synchronization of the streak system are studied in synchroscan and triggered modes. More precisely, the contribution of band-pass filters, delays, and time walk is ascertained. Methods to compute the resulting synchronization jitter are depicted. The results are verified by measurement with a streak camera combined with a Ti:Al2O3 solid state laser oscillator and also a fiber oscillator.
NASA Astrophysics Data System (ADS)
Darwiesh, M.; El-Sherif, Ashraf F.; El-Ghandour, Hatem; Aly, Hussein A.; Mokhtar, A. M.
2011-03-01
Optical imaging systems are widely used in applications including tracking for portable scanners; input pointing devices for laptop computers, cell phones, and cameras; fingerprint-identification scanners; optical navigation for target tracking; and optical computer mice. We present experimental work measuring and analyzing the laser speckle pattern (LSP) produced by different optical sources (various color LEDs, a 3 mW diode laser, and a 10 mW He-Ne laser) on different operating surfaces (Gabor hologram diffusers), and how these affect the performance of optical imaging systems in terms of speckle size and signal-to-noise ratio. Here the signal is represented by the patches of speckle that carry information, and the noise by the remaining part of the selected image. Theoretical and experimental colorimetry studies of the optical sources are also presented, relating the signal-to-noise ratio to the different diffusers for each light source. Color correction is applied to the color images captured by the optical imaging system to produce realistic images: a suitable gray scale containing most of the informative data is selected by calculating accurate Red-Green-Blue (RGB) color components from the measured spectra of the light sources and the color matching functions of the International Telecommunication Union (ITU-R 709) for CRT phosphors (Sony Trinitron model). The source-surface coupling is discussed, and we conclude that for a given source the performance of the optical imaging system varies from worst to best depending on the operating surface.
The sensor/surface coupling was studied for the case of the He-Ne laser. The speckle size ranged from 4.59 to 4.62 μm, slightly different or approximately the same for all produced diffusers, consistent with the fact that speckle size is independent of the illuminated surface. However, the calculated signal-to-noise ratio took different values, ranging from 0.71 to 0.92 across the diffusers. This means that the surface texture affects the performance of the optical sensor, because all images were captured under the same conditions: the same source (He-Ne laser), the same set-up distances, and the same sensor (CCD camera).
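Speckle size is commonly estimated from the width of the autocovariance of the intensity pattern. A sketch of that estimator on synthetic fully developed speckle (the abstract does not say which estimator was actually used, so this is one standard choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_size_px(intensity):
    """Mean speckle diameter in pixels, taken as the full width at half
    maximum of the central row of the normalised autocovariance of the
    intensity pattern (computed via the Wiener-Khinchin theorem)."""
    i = intensity - intensity.mean()
    ac = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(i)) ** 2).real)
    row = ac[ac.shape[0] // 2]
    row = row / row.max()
    return float(np.count_nonzero(row >= 0.5))

# Synthetic fully developed speckle: random phase over a circular pupil.
n, r = 256, 16
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (x**2 + y**2 <= r**2).astype(float)
field = np.fft.fft2(pupil * np.exp(2j * np.pi * rng.random((n, n))))
speckle = np.abs(field) ** 2
print(f"estimated speckle size: {speckle_size_px(speckle):.0f} px")
```

With a physical pixel pitch, the pixel count converts directly to a size in micrometres, which is how figures like the 4.59-4.62 μm above would be obtained.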
Remote identification of individual volunteer cotton plants
USDA-ARS?s Scientific Manuscript database
Although airborne multispectral remote sensing can identify fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants that can similarly provide habitat for boll weevils. However, when consumer-grade cameras are used, each pix...
Quantification of Soil Redoximorphic Features by Standardized Color Identification
USDA-ARS?s Scientific Manuscript database
Photography has been a welcome tool in assisting to document and convey qualitative soil information. Greater availability of digital cameras with increased information storage capabilities has promoted novel uses of this technology in investigations of water movement patterns, organic matter conte...
Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program
NASA Astrophysics Data System (ADS)
Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier
2005-04-01
Data acquisition and processing for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze quality-control data for gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality-control parameters, the program includes (1) for gamma cameras, a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); and (2) for PET systems, three quality controls recently defined by the French Medical Physicist Society (SFPM): spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (via a Point Spread Function, PSF, acquisition) makes it possible to compute the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are gathered in a toolbox distributed as a free ImageJ plugin that will soon be downloadable from the Internet. The program also offers the possibility of saving the uniformity quality-control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality-control routine.
Finally, this toolkit is an easy and robust tool for performing quality control on gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer, and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
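The MTF computation that the toolkit derives from a PSF acquisition is, in essence, a normalized Fourier transform magnitude. A minimal one-dimensional sketch follows; this is not the ImageJ plugin's actual code, and the sample PSF values are made up.

```python
import cmath

def mtf_from_psf(psf):
    """Modulation transfer function as the magnitude of the discrete
    Fourier transform of a 1-D point spread function, normalized so
    that the zero-frequency (DC) component equals 1."""
    n = len(psf)
    spectrum = []
    for k in range(n):
        s = sum(psf[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
        spectrum.append(abs(s))
    dc = spectrum[0]  # sum of the PSF samples
    return [m / dc for m in spectrum]

# Illustrative Gaussian-like PSF sampled on a small grid (sums to 1)
psf = [0.05, 0.25, 0.40, 0.25, 0.05]
mtf = mtf_from_psf(psf)  # mtf[0] == 1.0; higher frequencies attenuate
```

A broader PSF (worse spatial resolution) yields a faster-falling MTF, which is why the two quantities can be reported from the same point-source acquisition.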
1996-01-01
used to locate and characterize a magnetic dipole source, and this finding accelerated the development of superconducting tensor gradiometers for... superconducting magnetic field gradiometer, two-color infrared camera, synthetic aperture radar, and a visible spectrum camera. The combination of these... Pieter Hoekstra, Blackhawk GeoSciences... Prediction for UXO Shape and Orientation Effects on Magnetic
NASA Astrophysics Data System (ADS)
Williams, David J.; Wadsworth, Winthrop; Salvaggio, Carl; Messinger, David W.
2006-08-01
Undiscovered gas leaks, known as fugitive emissions, in chemical plants and refinery operations can impact regional air quality and represent a loss of product for industry. Surveying a facility for potential gas leaks can be a daunting task, and industrial leak detection and repair programs can be expensive to administer. An efficient, accurate and cost-effective method for detecting and quantifying gas leaks would both save industries money by identifying production losses and improve regional air quality. Specialized thermal video systems have proven effective in rapidly locating gas leaks; these systems, however, do not have the spectral resolution for compound identification. Passive FTIR spectrometers can be used for gas compound identification, but using these systems for facility surveys is problematic due to their small field of view. A hybrid approach has been developed that utilizes the thermal video system to locate gas plumes using real-time visualization of the leaks, coupled with the high-spectral-resolution FTIR spectrometer for compound identification and quantification. The prototype hybrid video/spectrometer system uses a Stirling-cooled thermal camera operating in the MWIR (3-5 μm) with an additional notch filter set at around 3.4 μm, which allows the visualization of gas compounds that absorb in this narrow spectral range, such as alkane hydrocarbons. This camera is positioned alongside a portable, high-speed passive FTIR spectrometer, which has a spectral range of 2-25 μm and operates at 4 cm⁻¹ resolution. This system uses a 10 cm telescope foreoptic with an onboard blackbody for calibration. The two units are optically aligned using a turning mirror on the spectrometer's telescope together with the video camera's output.
Tidal amplitude and fish abundance in the mouth region of a small estuary.
Becker, A; Whitfield, A K; Cowley, P D; Cole, V J; Taylor, M D
2016-09-01
Using an acoustic underwater camera (Dual Frequency IDentification SONar, DIDSON), the abundance and direction of movement of fishes > 80 mm total length (LT) in the mouth of a small South African estuary during spring and neap tidal cycles were observed. While the sizes of fishes recorded were consistent across both tide cycles, the number of fishes passing the camera was significantly greater during the smaller neap tides. Schooling behaviour was more pronounced for fishes that were travelling into the estuary compared to fishes swimming towards the ocean. © 2016 The Fisheries Society of the British Isles.
Videogrammetric Model Deformation Measurement Technique
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tian-Shu
2001-01-01
The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.
Film annotation system for a space experiment
NASA Technical Reports Server (NTRS)
Browne, W. R.; Johnson, S. S.
1989-01-01
This microprocessor system was designed to control and annotate a Nikon 35 mm camera for the purpose of obtaining photographs and data at predefined time intervals. The single STD BUS interface card was designed so that it can be used either in a stand-alone application with minimum features or installed in an STD BUS computer for maximum features. This control system also allows the exposure of twenty-eight alphanumeric characters across the bottom of each photograph. The data contain such information as camera identification, frame count, user-defined text, and time to 0.01 second.
Three dimensional identification card and applications
NASA Astrophysics Data System (ADS)
Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao
2016-10-01
The three-dimensional identification card, with a three-dimensional personal image displayed and stored for personal identification, is expected to be the advanced version of the present two-dimensional identification card [1]. A three-dimensional ID card uses three-dimensional optical techniques: the personal image on the card is displayed in three dimensions, so that a three-dimensional personal face can be seen. The card also stores the three-dimensional face information in an embedded electronic chip; this information might be recorded using two-channel cameras and can be displayed on a computer as three-dimensional images for personal identification. The three-dimensional ID card might be one interesting direction for updating the present two-dimensional card in the future, with potential wide use in airport customs, at the entrances of hotels, schools and universities, as a passport for online banking, for registration in online games, etc.
Development of a table tennis robot for ball interception using visual feedback
NASA Astrophysics Data System (ADS)
Parnichkun, Manukid; Thalagoda, Janitha A.
2016-07-01
This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), simplified in such a way that the system is able to perform the task within the bounded limit. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is long, it is not possible to intercept a table tennis ball using image processing alone; therefore, a projectile motion model is employed to predict the final destination of the ball.
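The projectile-motion prediction step can be sketched as follows, assuming a drag-free ballistic model and two position fixes from the vision system; the function name and the numbers are illustrative, not the authors' implementation.

```python
G = 9.81  # gravitational acceleration (m/s^2)

def predict_landing(p0, p1, dt, z_plane):
    """Given two (x, y, z) positions observed dt seconds apart,
    estimate the velocity by finite differences and return
    (x, y, t): where and when the drag-free ballistic trajectory
    next reaches the height z_plane."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    vz = (p1[2] - p0[2]) / dt
    # Solve z_plane = z1 + vz*t - 0.5*G*t^2 for the positive root t
    a, b, c = -0.5 * G, vz, p1[2] - z_plane
    disc = b * b - 4 * a * c
    t = (-b - disc ** 0.5) / (2 * a)  # later (positive) root since a < 0
    return p1[0] + vx * t, p1[1] + vy * t, t

# Ball at 0.5 m height moving horizontally at 10 m/s, dropping
x, y, t = predict_landing((0.0, 0.0, 0.55), (0.1, 0.0, 0.5), 0.01, 0.0)
```

Predicting once from two early fixes, rather than tracking frame by frame, is what lets a slow vision pipeline still position the paddle in time.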
Evaluation of modified portable digital camera for screening of diabetic retinopathy.
Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi
2009-01-01
To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White-light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to ×2.4, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60°) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique for acquiring fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.
Enhancing swimming pool safety by the use of range-imaging cameras
NASA Astrophysics Data System (ADS)
Geerardyn, D.; Boulanger, S.; Kuijk, M.
2015-05-01
Drowning causes the death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization.1 Currently, most swimming pools use only lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are being integrated; however, these systems have to be mounted underwater, mostly as a replacement for the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater while keeping the large field of view and minimizing occlusions. We have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that water is transparent to visible light but less transparent to infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbation. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera, and our own Time-of-Flight system, which uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to its timing, our Time-of-Flight camera is theoretically able to minimize the influence of reflections from a partially reflecting surface. Current commercial cameras can be improved by combining a post-acquisition filter that compensates for the perturbations with a light source of shorter wavelength to enlarge the depth range. As a result, we conclude that low-cost range imagers, given such a post-processing filter and light source, can increase swimming pool safety.
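The ranging principle of a pulsed Time-of-Flight camera is d = c·t/2 for a round-trip time t; underwater, the effective speed of light is reduced by the refractive index (n ≈ 1.33 for water), so the same echo delay corresponds to a shorter physical range. A minimal sketch, not the authors' system code:

```python
C = 299_792_458.0  # vacuum speed of light (m/s)

def tof_distance(round_trip_seconds, n=1.0):
    """Pulsed time-of-flight range: d = c * t / (2 * n),
    where n is the refractive index of the medium
    (n = 1.0 for air, n ~ 1.33 for water)."""
    return C * round_trip_seconds / (2.0 * n)

d_air = tof_distance(20e-9)          # a 20 ns round trip: ~3 m in air
d_water = tof_distance(20e-9, 1.33)  # the same delay maps to a shorter range in water
```

A ceiling-mounted system imaging a submerged swimmer sees a mixed air/water path, which is one reason raw depth readings need the perturbation-compensating post-filter the abstract describes.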
Multispectral imaging system for contaminant detection
NASA Technical Reports Server (NTRS)
Poole, Gavin H. (Inventor)
2003-01-01
An automated inspection system for detecting digestive contaminants on food items as they are being processed for consumption includes a conveyor for transporting the food items, a light sealed enclosure which surrounds a portion of the conveyor, with a light source and a multispectral or hyperspectral digital imaging camera disposed within the enclosure. Operation of the conveyor, light source and camera are controlled by a central computer unit. Light reflected by the food items within the enclosure is detected in predetermined wavelength bands, and detected intensity values are analyzed to detect the presence of digestive contamination.
Extended spectrum SWIR camera with user-accessible Dewar
NASA Astrophysics Data System (ADS)
Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva
2017-02-01
Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd-concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to a 3 micron cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid-nitrogen pour-filled version was developed first, followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented, along with images captured with band-pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility and can be opened and modified in a standard laboratory environment. This modular approach gives users the flexibility to swap internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field programmable gate array (FPGA) that also performs on-board non-uniformity corrections and bad pixel replacement, and directly drives any standard HDMI display.
The ideal subject distance for passport pictures.
Verhoff, Marcel A; Witzel, Carsten; Kreutz, Kerstin; Ramsthaler, Frank
2008-07-04
In an age of global combat against terrorism, the recognition and identification of people in document images is of increasing significance. Experiments and calculations have shown that the camera-to-subject distance - not the focal length of the lens - can have a significant effect on facial proportions. Modern passport pictures should be able to function as reference images for automatic and manual picture comparisons, which requires a defined subject distance. It is completely unclear which subject distance is ideal for recognition of the actual person when taking passport photographs. We show here that the camera-to-subject distance perceived as ideal depends on the face being photographed, even though a distance of 2 m was most frequently preferred. So far the problem of the ideal camera-to-subject distance for faces has only been approached through technical calculations; we have, for the first time, answered this question experimentally with a double-blind experiment. Even if there is apparently no ideal camera-to-subject distance valid for every face, 2 m can be proposed as ideal for the taking of passport pictures, and a first step would be the specification of a camera-to-subject distance for passport pictures within the standards. From an anthropological point of view, it would be interesting to find out which facial features lead to a preference for a shorter camera-to-subject distance and which for a longer one.
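The geometric effect of subject distance on facial proportions follows directly from pinhole projection: a feature sitting p metres closer to the camera than the reference plane of the face is magnified by d/(d − p) relative to that plane. The sketch below uses an illustrative 5 cm protrusion; the numbers are not from the paper.

```python
def feature_scale_ratio(subject_distance, protrusion=0.05):
    """Relative magnification of a feature `protrusion` metres closer
    to the camera than the reference plane, under pinhole projection.
    Projected size scales as 1/z, so the ratio is d / (d - p)."""
    return subject_distance / (subject_distance - protrusion)

close = feature_scale_ratio(0.5)  # ~11% enlargement of near features at 0.5 m
far = feature_scale_ratio(2.0)   # ~2.6% at the frequently preferred 2 m
```

The ratio tends to 1 as distance grows, which is why distortion of facial proportions, not focal length, is the quantity the standard would need to pin down.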
AIRWAY IDENTIFICATION WITHIN PLANAR GAMMA CAMERA IMAGES USING COMPUTER MODELS OF LUNG MORPHOLOGY
The quantification of inhaled aerosols could be improved if a more comprehensive assessment of their spatial distribution patterns among lung airways were obtained. A common technique for quantifying particle deposition in human lungs is with planar gamma scintigraphy. However, t...
Color and Contour Based Identification of Stem of Coconut Bunch
NASA Astrophysics Data System (ADS)
Kannan Megalingam, Rajesh; Manoharan, Sakthiprasad K.; Reddy, Rajesh G.; Sriteja, Gone; Kashyap, Ashwin
2017-08-01
Vision is a key component of artificial intelligence and automated robotics, and sensors or cameras are the sight organs of a robot: only through them can it locate itself or identify the shape of a regular or irregular object. This paper presents a method for identifying an object based on color and contour recognition using a camera, through digital image processing techniques, for robotic applications. To identify the contour, a shape-matching technique is used, which takes input data from the provided database and checks for a shape match by iterating through each contour of the thresholded image. The color is identified on the HSV scale by approximating the desired range of values from the database. The HSV data, together with the contour iteration, are used to identify a quadrilateral, which is the required contour. This algorithm could also be used in a non-deterministic plane, using HSV values exclusively.
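The HSV range check at the core of the color-identification step can be sketched with Python's standard library; the hue and saturation thresholds below are illustrative stand-ins, not the values from the authors' database.

```python
import colorsys

def in_hsv_range(rgb, h_range, s_min=0.3, v_min=0.2):
    """Return True if an RGB pixel (0-255 per channel) falls inside
    the given hue range (0-1 scale) with sufficient saturation and
    value; thresholds here are illustrative."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    lo, hi = h_range
    return lo <= h <= hi and s >= s_min and v >= v_min

# A saturated green pixel passes a green hue window; a brownish one does not
green_hit = in_hsv_range((40, 200, 60), (0.20, 0.45))
brown_hit = in_hsv_range((120, 80, 40), (0.20, 0.45))
```

In a full pipeline, pixels passing this test would be grouped into a binary mask whose contours are then iterated and shape-matched as the abstract describes.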
NASA Astrophysics Data System (ADS)
Grasser, R.; Peyronneaudi, Benjamin; Yon, Kevin; Aubry, Marie
2015-10-01
CILAS, a subsidiary of Airbus Defence and Space, develops, manufactures and sells laser-based optronics equipment for defense and homeland security applications. Part of its activity concerns active systems for threat detection, recognition and identification. Active surveillance and active imaging systems are often required to achieve identification capability for long-range observation in adverse conditions. To ease the deployment of active imaging systems, which are often complex and expensive, CILAS proposes a new concept. It consists of two apparatus working together: on one side, a patented versatile laser platform provides high peak power laser illumination for long-range observation; on the other side, a small camera add-on works as a fast optical switch that selects only photons with a specific time of flight. The association of the versatile illumination platform and the fast optical switch presents itself as an independent body, the so-called "flash module", giving virtually any passive observation system gated active imaging capability in the NIR and SWIR.
Submm/mm galaxy counterpart identification using a characteristic density distribution
NASA Astrophysics Data System (ADS)
Alberts, Stacey; Wilson, Grant W.; Lu, Yu; Johnson, Seth; Yun, Min S.; Scott, Kimberly S.; Pope, Alexandra; Aretxaga, Itziar; Ezawa, Hajime; Hughes, David H.; Kawabe, Ryohei; Kim, Sungeun; Kohno, Kotaro; Oshima, Tai
2013-05-01
We present a new submm/mm galaxy counterpart identification technique which builds on the use of Spitzer Infrared Array Camera (IRAC) colours as discriminators between likely counterparts and the general IRAC galaxy population. Using 102 radio- and Submillimeter Array-confirmed counterparts to AzTEC sources across three fields [Great Observatories Origins Deep Survey-North, -South and Cosmic Evolution Survey (COSMOS)], we develop a non-parametric IRAC colour-colour characteristic density distribution, which, when combined with positional uncertainty information via likelihood ratios, allows us to rank all potential IRAC counterparts around submillimetre galaxies (SMGs) and calculate the significance of each ranking via the reliability factor. We report all robust and tentative radio counterparts to SMGs, the first such list available for AzTEC/COSMOS, as well as the highest ranked IRAC counterparts for all AzTEC SMGs in these fields as determined by our technique. We demonstrate that the technique is free of radio bias and thus applicable regardless of radio detections. For observations made with a moderate beam size (˜18 arcsec), this technique identifies ˜85 per cent of SMG counterparts. For much larger beam sizes (≳30 arcsec), we report identification rates of 33-49 per cent. Using simulations, we demonstrate that this technique is an improvement over using positional information alone for observations with facilities such as AzTEC on the Large Millimeter Telescope and Submillimeter Common User Bolometer Array 2 on the James Clerk Maxwell Telescope.
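The reliability ranking described above follows the usual likelihood-ratio formalism (in the style of Sutherland & Saunders): each candidate's reliability is its likelihood ratio divided by the sum over all candidates plus the chance that the true counterpart went undetected. A schematic sketch with made-up numbers, not the paper's actual pipeline:

```python
def reliability(likelihood_ratios, q=0.8):
    """Reliability R_j = L_j / (sum_i L_i + (1 - Q)) for each candidate
    counterpart, where L_i combines positional and colour information
    and Q is the prior fraction of true counterparts that are
    detectable; q = 0.8 here is illustrative."""
    total = sum(likelihood_ratios) + (1.0 - q)
    return [L / total for L in likelihood_ratios]

# One strong candidate and two weak ones around a single SMG
rel = reliability([5.0, 0.5, 0.1])  # first candidate dominates
```

The (1 − Q) term keeps the reliabilities from summing to 1 when the true counterpart may be missing, which is what lets the method flag SMGs with no robust identification.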
George, E.V.; Oster, Y.; Mundinger, D.C.
1990-12-25
Deep UV projection lithography can be performed using an e-beam-pumped solid excimer UV source, a mask, and a UV reduction camera. The UV source produces deep UV radiation in the range 1700-1300 Å using xenon, krypton or argon; shorter wavelengths of 850-650 Å can be obtained using neon or helium. A thin solid layer of the gas is formed on a cryogenically cooled plate and bombarded with an e-beam to cause fluorescence. The UV reduction camera utilizes multilayer mirrors having high reflectivity at the UV wavelength and images the mask onto a resist-coated substrate at a preselected demagnification. The mask can be formed integrally with the source as an emitting mask. 6 figs.
Flexible nuclear medicine camera and method of using
Dilmanian, F.A.; Packer, S.; Slatkin, D.N.
1996-12-10
A nuclear medicine camera and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera includes a flexible frame containing a window, a photographic film, and a scintillation screen, with or without a gamma-ray collimator. The frame flexes for following the contour of the examination site on the patient, with the window being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film and the radiation source inside the patient. The frame is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms. 11 figs.
Time-resolved X-ray excited optical luminescence using an optical streak camera
NASA Astrophysics Data System (ADS)
Ward, M. J.; Regier, T. Z.; Vogt, J. M.; Gordon, R. A.; Han, W.-Q.; Sham, T. K.
2013-03-01
We report the development of a time-resolved XEOL (TR-XEOL) system that employs an optical streak camera. We have conducted TR-XEOL experiments at the Canadian Light Source (CLS) operating in single bunch mode with a 570 ns dark gap and 35 ps electron bunch pulse, and at the Advanced Photon Source (APS) operating in top-up mode with a 153 ns dark gap and 33.5 ps electron bunch pulse. To illustrate the power of this technique we measured the TR-XEOL of solid-solution nanopowders of gallium nitride - zinc oxide, and for the first time have been able to resolve near-band-gap (NBG) optical luminescence emission from these materials. Herein we will discuss the development of the streak camera TR-XEOL technique and its application to the study of these novel materials.
Multi-channel automotive night vision system
NASA Astrophysics Data System (ADS)
Lu, Gang; Wang, Li-jun; Zhang, Yi
2013-09-01
A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The source contains a thermoelectric cooler (TEC), can be synchronized with camera focusing, and has automatic light intensity adjustment, thereby ensuring image quality. The compositional principle of the system is described in detail; on this basis, beam collimation, the LD driving and LD temperature control of the near-infrared laser source, and the four-channel image processing display are discussed. The system can be used for driver assistance, blind-spot information (BLIS), parking assistance, and alarm systems, both day and night.
Compact fluorescence and white-light imaging system for intraoperative visualization of nerves
NASA Astrophysics Data System (ADS)
Gray, Dan; Kim, Evgenia; Cotero, Victoria; Staudinger, Paul; Yazdanfar, Siavash; tan Hehir, Cristina
2012-02-01
Fluorescence image guided surgery (FIGS) allows intraoperative visualization of critical structures, with applications spanning neurology, cardiology and oncology. An unmet clinical need is prevention of iatrogenic nerve damage, a major cause of post-surgical morbidity. Here we describe the advancement of FIGS imaging hardware, coupled with a custom nerve-labeling fluorophore (GE3082), to bring FIGS nerve imaging closer to clinical translation. The instrument is comprised of a 405nm laser and a white light LED source for excitation and illumination. A single 90 gram color CCD camera is coupled to a 10mm surgical laparoscope for image acquisition. Synchronization of the light source and camera allows for simultaneous visualization of reflected white light and fluorescence using only a single camera. The imaging hardware and contrast agent were evaluated in rats during in situ surgical procedures.
A Multi-Camera System for Bioluminescence Tomography in Preclinical Oncology Research
Lewis, Matthew A.; Richer, Edmond; Slavine, Nikolai V.; Kodibagkar, Vikram D.; Soesbe, Todd C.; Antich, Peter P.; Mason, Ralph P.
2013-01-01
Bioluminescent imaging (BLI) of cells expressing luciferase is a valuable noninvasive technique for investigating molecular events and tumor dynamics in the living animal. Current usage is often limited to planar imaging, but tomographic imaging can enhance the usefulness of this technique in quantitative biomedical studies by allowing accurate determination of tumor size and attribution of the emitted light to a specific organ or tissue. Bioluminescence tomography based on a single camera with source rotation or mirrors to provide additional views has previously been reported. We report here in vivo studies using a novel approach with multiple rotating cameras that, when combined with image reconstruction software, provides the desired representation of point source metastases and other small lesions. Comparison with MRI validated the ability to detect lung tumor colonization in mouse lung. PMID:26824926
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Jorge, Jorge M.
1998-01-01
The early evaluation of the visual status of human infants is of critical importance. It is essential to the development of the child's visual system that she perceives clear, focused retinal images; furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool that is rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes: it is refracted by the ocular media, strikes the retina (in or out of focus), reflects off it, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation, and an optomechanical system also allows eccentric illumination. The light source is of the flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time, and image processing routines are applied for image enhancement and feature extraction.
Replogle, William C.; Sweatt, William C.
2001-01-01
A photolithography system that employs a condenser that includes a series of aspheric mirrors on one side of a small, incoherent source of radiation producing a series of beams is provided. Each aspheric mirror images the quasi point source into a curved line segment. A relatively small arc of the ring image is needed by the camera; all of the beams are so manipulated that they all fall onto this same arc needed by the camera. Also, all of the beams are aimed through the camera's virtual entrance pupil. The condenser includes a correcting mirror for reshaping a beam segment which improves the overall system efficiency. The condenser efficiently fills the larger radius ringfield created by today's advanced camera designs. The system further includes (i) means for adjusting the intensity profile at the camera's entrance pupil or (ii) means for partially shielding the illumination imaging onto the mask or wafer. The adjusting means can, for example, change at least one of: (i) partial coherence of the photolithography system, (ii) mask image illumination uniformity on the wafer or (iii) centroid position of the illumination flux in the entrance pupil. A particularly preferred adjusting means includes at least one vignetting mask that covers at least a portion of the at least two substantially equal radial segments of the parent aspheric mirror.
Slit-Slat Collimator Equipped Gamma Camera for Whole-Mouse SPECT-CT Imaging
NASA Astrophysics Data System (ADS)
Cao, Liji; Peter, Jörg
2012-06-01
A slit-slat collimator is developed for a gamma camera intended for small-animal imaging (mice). The tungsten housing of a roof-shaped collimator forms a slit opening, and the slats are made of lead foils separated by sparse polyurethane material. Alignment of the collimator with the camera's pixelated crystal is performed by adjusting a micrometer screw while monitoring a Co-57 point source for maximum signal intensity. For SPECT, the collimator forms a cylindrical field-of-view enabling whole mouse imaging with transaxial magnification and constant on-axis sensitivity over the entire axial direction. As the gamma camera is part of a multimodal imaging system incorporating also x-ray CT, five parameters corresponding to the geometric displacements of the collimator as well as to the mechanical co-alignment between the gamma camera and the CT subsystem are estimated by means of bimodal calibration sources. To illustrate the performance of the slit-slat collimator and to compare its performance to a single pinhole collimator, a Derenzo phantom study is performed. Transaxial resolution along the entire long axis is comparable to a pinhole collimator of same pinhole diameter. Axial resolution of the slit-slat collimator is comparable to that of a parallel beam collimator. Additionally, data from an in-vivo mouse study are presented.
Plume propagation direction determination with SO2 cameras
NASA Astrophysics Data System (ADS)
Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich
2017-03-01
SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements is a time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density but also the distance between the camera and the volcanic plume has to be precisely known, because cameras only measure angular extents of objects while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction, and thus the camera-plume distance, is not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This source of error is independent of the frequently quoted (approximate) compensation between apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known locations of the SO2 source (i.e. the volcanic vent) and the camera, the camera-plume distance can be determined. Besides determining the plume propagation direction, and thus the wind direction in the plume region, directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes.
In addition to theoretical studies we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise (up to a factor of 2 error reduction) SO2 fluxes. We conclude that studies on SO2 flux variability become more reliable by excluding the possible influences of propagation direction variations.
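The geometric idea — intersecting the camera's viewing ray with the plume axis defined by the vent position and the propagation direction — can be sketched in a few lines. This is a simplified 2-D (map-plane) sketch with made-up coordinates, not the authors' implementation:

```python
import numpy as np

def camera_plume_distance(camera, vent, plume_dir, view_dir):
    """Distance from the camera to the plume axis along a viewing direction.

    All quantities are 2-D horizontal (map-plane) vectors in metres.
    The plume axis is the ray from the vent along `plume_dir`; the
    camera looks along `view_dir`.  Solves
        vent + t * plume_dir == camera + s * view_dir
    for s, the camera-plume distance.
    """
    d = np.asarray(plume_dir, float) / np.linalg.norm(plume_dir)
    v = np.asarray(view_dir, float) / np.linalg.norm(view_dir)
    # 2x2 linear system: [d, -v] [t, s]^T = camera - vent
    A = np.column_stack((d, -v))
    t, s = np.linalg.solve(A, np.asarray(camera, float) - np.asarray(vent, float))
    return s

# Camera 5 km south of the vent, looking due north; plume drifts due east:
dist = camera_plume_distance(camera=(0, -5000), vent=(0, 0),
                             plume_dir=(1, 0), view_dir=(0, 1))  # 5000 m
```

Since the flux scales with the plume's spatial extent, a relative error in this distance translates directly into a relative flux error, which is why a misjudged propagation direction can exceed the 50 % error quoted above.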
32 CFR 813.4 - Combat camera operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., historical, and other significant customers. (e) Sourcing COMCAM forces. See AFMAN 10-401 for specific... Force VI personnel who assist supported commands in determining COMCAM and VI requirements and sourcing...
The Identification of EGRET Sources with Flat-Spectrum Radio Sources
NASA Astrophysics Data System (ADS)
Mattox, J. R.; Schachter, J.; Molnar, L.; Hartman, R. C.; Patnaik, A. R.
1997-05-01
We present a method to assess the reliability of the identification of EGRET sources with extragalactic radio sources. We verify that EGRET is detecting the blazar class of active galactic nuclei (AGNs). However, many published identifications are found to be questionable. We provide a table of 42 blazars that we expect to be robust identifications of EGRET sources. This includes one previously unidentified EGRET source, the lensed AGN PKS 1830-210, near the direction of the Galactic center. We provide the best available positions for 16 more radio sources that are also potential identifications for previously unidentified EGRET sources. All high Galactic latitude EGRET sources (|b| > 3°) that demonstrate significant variability can be identified with flat-spectrum radio sources. This suggests that EGRET is not detecting any type of AGN other than blazars. This identification method has been used to establish with 99.998% confidence that the peak γ-ray flux of a blazar is correlated with its average 5 GHz radio flux. An even better correlation is seen between γ-ray flux and the 2.29 GHz flux density measured with VLBI at the base of the radio jet. Also, using high-confidence identifications, we find that the radio sources identified with EGRET sources have greater correlated VLBI flux densities than the parent population of flat radio spectrum sources.
NASA Astrophysics Data System (ADS)
Yun, Min S.; Scott, K. S.; Guo, Yicheng; Aretxaga, I.; Giavalisco, M.; Austermann, J. E.; Capak, P.; Chen, Yuxi; Ezawa, H.; Hatsukade, B.; Hughes, D. H.; Iono, D.; Johnson, S.; Kawabe, R.; Kohno, K.; Lowenthal, J.; Miller, N.; Morrison, G.; Oshima, T.; Perera, T. A.; Salvato, M.; Silverman, J.; Tamura, Y.; Williams, C. C.; Wilson, G. W.
2012-02-01
We report the results of the counterpart identification and a detailed analysis of the physical properties of the 48 sources discovered in our deep 1.1-mm wavelength imaging survey of the Great Observatories Origins Deep Survey-South (GOODS-S) field using the AzTEC instrument on the Atacama Submillimeter Telescope Experiment. One or more robust or tentative counterpart candidates are found for 27 and 14 AzTEC sources, respectively, by employing deep radio continuum, Spitzer/Multiband Imaging Photometer for Spitzer and Infrared Array Camera, and Large APEX Bolometer Camera 870 μm data. Five of the sources (10 per cent) have two robust counterparts each, supporting the idea that these galaxies are strongly clustered and/or heavily confused. Photometric redshifts and star formation rates (SFRs) are derived by analysing ultraviolet (UV)-to-optical and infrared (IR)-to-radio spectral energy distributions (SEDs). The median redshift of z_med ≈ 2.6 is similar to other earlier estimates, but we show that 80 per cent of the AzTEC-GOODS sources are at z ≥ 2, with a significant high-redshift tail (20 per cent at z ≥ 3.3). Rest-frame UV and optical properties of AzTEC sources are extremely diverse, spanning 10 mag in the i- and K-band photometry (a factor of 10^4 in flux density) with median values of i = 25.3 and K = 22.6, and a broad range of red colour (i - K = 0-6) with an average value of i - K ≈ 3. These AzTEC sources are some of the most luminous galaxies in the rest-frame optical bands at z ≥ 2, with inferred stellar masses M* = (1-30) × 10^10 M⊙ and UV-derived SFRs of SFR_UV ≳ 10^(1-3) M⊙ yr^(-1). The IR-derived SFR, 200-2000 M⊙ yr^(-1), is independent of z or M*. The resulting specific star formation rates, SSFR ≈ 1-100 Gyr^(-1), are 10-100 times higher than those of similar-mass galaxies at z = 0, and they extend the previously observed rapid rise in the SSFR with redshift to z = 2-5. These galaxies have SFRs high enough to have built up their entire stellar mass within their Hubble time.
We find only marginal evidence for an active galactic nucleus (AGN) contribution to the near-IR and mid-IR SEDs, even among the X-ray detected sources, and the derived M* and SFR show little dependence on the presence of an X-ray bright AGN.
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
LAMOST CCD camera-control system based on RTS2
NASA Astrophysics Data System (ADS)
Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng
2018-05-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system; however, all previous RTS2 applications were developed for small telescopes. This paper focuses on the implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested, and multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and it provides a reference solution for full RTS2 introduction to the LAMOST observatory control system.
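The master-slave mapping described above can be illustrated with a minimal sketch. The class and method names (`VirtualCamera`, `FakeCCD`, `expose`) are invented for illustration and do not reflect the actual RTS2 device protocol:

```python
class VirtualCamera:
    """Master device that mirrors each command to many real-camera slaves.

    A minimal sketch of the master-slave paradigm described above; the
    real RTS2 module speaks the RTS2 device protocol rather than
    calling Python methods directly.
    """
    def __init__(self, slaves):
        self.slaves = list(slaves)

    def expose(self, seconds):
        # Fan the exposure command out and collect per-camera status,
        # so that one failing CCD does not abort the other 31.
        results = {}
        for cam in self.slaves:
            try:
                results[cam.name] = cam.expose(seconds)
            except Exception as err:
                results[cam.name] = err
        return results

class FakeCCD:
    """Stand-in for one of LAMOST's 32 real CCD cameras."""
    def __init__(self, name):
        self.name = name
        self.log = []
    def expose(self, seconds):
        self.log.append(("expose", seconds))
        return "ok"

ccds = [FakeCCD(f"ccd{i:02d}") for i in range(32)]
master = VirtualCamera(ccds)
status = master.expose(1.5)   # one command, 32 camera responses
```

The per-slave error isolation is the point of the virtualized layer: the observatory sees a single camera device while failures stay local to individual CCDs.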
Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images
D, Wang John; M, Solo-Gabriele Helena; M, Abdelzaher Amir; E, Fleming Lora
2010-01-01
Enterococci are used nationwide as a water quality indicator at marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated their respective source functions by developing a counting methodology for individuals in order to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of human and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the largest source of enterococci relative to humans and birds. PMID:20381094
Gemini photographs of the world: A complete index
NASA Technical Reports Server (NTRS)
Giddings, L. E.
1977-01-01
The most authoritative catalogs of photographs of all Gemini missions are assembled. Included for all photographs are JSC (Johnson Space Center) identification number, percent cloud cover, geographical area in sight, and miscellaneous information. In addition, details are given on cameras, filters, films, and other technical details.
ERIC Educational Resources Information Center
Bushweller, Kevin
1994-01-01
Profiles Floyd Wiggins, Jr., veteran school security chief for Richmond (Virginia) Public Schools. Besides a security force, the district uses hand-held metal-detectors and police-dog raids in its secondary schools and is considering use of student identification cards, security video cameras, and a larger parent volunteer force. Wiggins feels…
PN-CCD camera for XMM: performance of high time resolution/bright source operating modes
NASA Astrophysics Data System (ADS)
Kendziorra, Eckhard; Bihler, Edgar; Grubmiller, Willy; Kretschmar, Baerbel; Kuster, Markus; Pflueger, Bernhard; Staubert, Ruediger; Braeuninger, Heinrich W.; Briel, Ulrich G.; Meidinger, Norbert; Pfeffermann, Elmar; Reppin, Claus; Stoetter, Diana; Strueder, Lothar; Holl, Peter; Kemmer, Josef; Soltau, Heike; von Zanthier, Christoph
1997-10-01
The pn-CCD camera is developed as one of the focal plane instruments for the European Photon Imaging Camera (EPIC) on board the X-ray Multi-Mirror (XMM) mission to be launched in 1999. The detector consists of four quadrants of three pn-CCDs each, which are integrated on one silicon wafer. Each CCD has 200 by 64 pixels (150 micrometers by 150 micrometers) with 280 micrometers depletion depth. One CCD of a quadrant is read out at a time, while the four quadrants can be processed independently of each other. In standard imaging mode the CCDs are read out sequentially every 70 ms. Observations of point sources brighter than 1 mCrab will be affected by photon pile-up. However, special operating modes can be used to observe bright sources up to 150 mCrab in timing mode with 30 microseconds time resolution, and very bright sources up to several Crab in burst mode with 7 microseconds time resolution. We have tested one quadrant of the EPIC pn-CCD camera at line energies from 0.52 keV to 17.4 keV at the long beam test facility Panter in the focus of the qualification mirror module for XMM. In order to test the time resolution of the system, a mechanical chopper was used to periodically modulate the beam intensity; pulse periods down to 0.7 ms were generated. This paper describes the performance of the pn-CCD detector in timing and burst readout modes with special emphasis on energy and time resolution.
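Why the fast readout modes matter can be illustrated with a back-of-envelope Poisson model of pile-up (this is a toy model with an assumed count rate, not the EPIC calibration): the fraction of occupied frames that contain two or more photons collapses when the frame time drops from 70 ms to 30 μs.

```python
import math

def pileup_fraction(count_rate_hz, frame_time_s):
    """Fraction of occupied frames affected by pile-up, assuming Poisson
    arrivals: of the frames containing at least one photon, the share
    that contain two or more."""
    lam = count_rate_hz * frame_time_s   # mean photons per frame
    p0 = math.exp(-lam)                  # P(0 photons)
    p1 = lam * p0                        # P(exactly 1 photon)
    return (1.0 - p0 - p1) / (1.0 - p0)

# Same assumed source rate, 70 ms imaging frames vs 30 us timing mode:
imaging = pileup_fraction(100.0, 70e-3)   # almost every frame piled up
timing = pileup_fraction(100.0, 30e-6)    # pile-up becomes negligible
```

The same arithmetic explains the 1 mCrab imaging-mode limit quoted above: shortening the effective integration per readout is the only way to keep the per-frame photon count well below one for bright sources.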
Development, characterization, and modeling of a tunable filter camera
NASA Astrophysics Data System (ADS)
Sartor, Mark Alan
1999-10-01
This paper describes the development, characterization, and modeling of a Tunable Filter Camera (TFC). The TFC is a new multispectral instrument with electronically tuned spectral filtering and low-light-level sensitivity. It represents a hybrid between hyperspectral and multispectral imaging spectrometers that incorporates advantages from each, addressing issues such as complexity, cost, lack of sensitivity, and adaptability. These capabilities allow the TFC to be applied to low-altitude video surveillance for real-time spectral and spatial target detection and image exploitation. Described herein are the theory and principles of operation for the TFC, which includes a liquid crystal tunable filter, an intensified CCD, and a custom apochromatic lens. The results of proof-of-concept testing and characterization of two prototype cameras are included, along with a summary of the design analyses for the development of a multiple-channel system. A significant result of this effort was the creation of a system-level model, which was used to facilitate development and predict performance; it includes models for the liquid crystal tunable filter and intensified CCD. Such modeling was necessary in the design of the system and is useful for evaluation of the system in remote-sensing applications. Also presented are characterization data from component testing, which included quantitative results for linearity, signal-to-noise ratio (SNR), and radiometric response. These data were used to help refine and validate the model. For a pre-defined source, the spatial and spectral response and the noise of the camera system can now be predicted. The innovation that sets this development apart is the fact that this instrument has been designed for integrated, multi-channel operation for the express purpose of real-time detection/identification in low-light-level conditions. Many of the requirements for the TFC were derived from this mission.
In order to provide background for the design requirements for the TFC development, the mission and principles of operation behind the multi-channel system will be reviewed. Given the combination of the flexibility, simplicity, and sensitivity, the TFC and its multiple-channel extension can play a significant role in the next generation of remote-sensing instruments.
[A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].
Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki
2016-03-01
A quality assurance (QA) system that simultaneously quantifies the position and duration of a 192Ir source (dwell position and time) was developed, and its performance was evaluated for high-dose-rate brachytherapy. This QA system uses a web camera to verify and quantify the dwell position and time. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. The user verifies the source position from the web camera in real time, while the source position and duration are quantified from the movie using in-house software based on a template-matching technique. This QA system allows verification of the absolute position in real time and simultaneous quantification of dwell position and time. Verification of the system showed that the mean step-size error was 0.31±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points in three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions, independent of the step size.
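The template-matching step can be sketched with a toy 1-D normalized cross-correlation. The function, template, and pixel scale below are illustrative stand-ins for the in-house software, which works on the 2-D camera frames:

```python
import numpy as np

def locate_source(profile, template, mm_per_px, origin_mm):
    """Locate a source in a 1-D intensity profile by normalized
    cross-correlation with a template, returning its position in mm.

    A toy stand-in for the in-house template-matching software:
    slide the zero-mean, unit-variance template over the profile and
    keep the best-matching offset.
    """
    t = (template - template.mean()) / template.std()
    best, best_score = 0, -np.inf
    for i in range(len(profile) - len(t) + 1):
        w = profile[i:i + len(t)]
        s = w.std()
        if s == 0:
            continue                      # flat window: no signal here
        score = np.dot((w - w.mean()) / s, t)
        if score > best_score:
            best, best_score = i, score
    centre_px = best + len(t) / 2.0
    return origin_mm + centre_px * mm_per_px

template = np.array([0.2, 1.0, 0.2])      # assumed source point-spread shape
frame = np.zeros(200)
frame[120:123] = template                 # synthetic frame, source at px 121.5
pos = locate_source(frame, template, mm_per_px=0.4, origin_mm=1425.0)
```

Repeating this per frame at 30 fps gives the dwell positions directly, and the dwell time at each position follows from counting consecutive frames with an unchanged position.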
Space telescope low scattered light camera - A model
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.; Kuper, T. G.; Shack, R. V.
1982-01-01
A design approach for a camera to be used with the space telescope is given. Camera optics relay the system pupil onto an annular Gaussian ring apodizing mask to control scattered light. One- and two-dimensional models of ripple on the primary mirror were calculated. Scattered light calculations using ripple amplitudes between λ/20 and λ/200, with spatial correlations of the ripple across the primary mirror between 0.2 and 2.0 centimeters, indicate that the detection of an object a billion times fainter than a bright source in the field is possible. Detection of a Jovian-type planet in orbit about Alpha Centauri with a camera on the space telescope may be possible.
Leak Location and Classification in the Space Shuttle Main Engine Nozzle by Infrared Testing
NASA Technical Reports Server (NTRS)
Russell, Samuel S.; Walker, James L.; Lansing, Mathew
2003-01-01
The Space Shuttle Main Engine (SSME) nozzle is composed of cooling tubes brazed to the inside of a conical structural jacket. Because of the geometry, there are regions that cannot be inspected for leaks using the bubble-solution and low-pressure method. The temperature change due to escaping gas is detectable on the surface of the nozzle under the correct conditions. The methods and results presented in this summary address the thermographic identification of leaks in the Space Shuttle Main Engine nozzles. A highly sensitive digital infrared camera is used to record the minute temperature change associated with a leak source, such as a crack or pinhole, hidden within the nozzle wall by observing the inner "hot wall" surface as the nozzle is pressurized. These images are enhanced by digitally subtracting a thermal reference image taken before pressurization, greatly diminishing background noise. The method provides a nonintrusive way of localizing the leaking tube and the exact leak source position to within a very small axial distance. Many of the factors that influence the inspectability of the nozzle are addressed, including pressure rate, peak pressure, gas type, ambient temperature, and surface preparation.
Sensing and data classification for a robotic meteorite search
NASA Astrophysics Data System (ADS)
Pedersen, Liam; Apostolopoulos, Dimi; Whittaker, William L.; Benedix, Gretchen; Rousch, Ted
1999-01-01
Upcoming missions to Mars and the Moon call for highly autonomous robots with the capability to perform intra-site exploration, reason about their scientific finds, and perform comprehensive on-board analysis of the data collected. An ideal case for testing such technologies and robot capabilities is the robotic search for Antarctic meteorites. The successful identification and classification of meteorites depends on sensing modalities and intelligent evaluation of acquired data. Data from color imagery and spectroscopic measurements are used to identify terrestrial rocks and distinguish them from meteorites. However, because of the large number of rocks and the high cost and delay of using some of the sensors, it is necessary to eliminate as many meteorite candidates as possible using cheap long-range sensors, such as color cameras; more resource-consuming sensors are held in reserve for the more promising samples only. Bayes networks are used as the formalism for incrementally combining data from multiple sources in a statistically rigorous manner. Furthermore, they can be used to infer the utility of further sensor readings given currently known data. This information, along with cost estimates, is necessary for the sensing system to rationally schedule further sensor readings and deployments. This paper addresses issues associated with sensor selection and the implementation of an architecture for automatic identification of rocks and meteorites from a mobile robot.
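The incremental Bayesian combination of sensor evidence can be sketched as repeated application of Bayes' rule over independent readings. The prior and the likelihood pairs below are invented for illustration, not values from the rover's trained networks:

```python
def posterior_meteorite(prior, likelihoods):
    """Sequentially fold independent sensor readings into P(meteorite).

    `likelihoods` is a list of (P(reading | meteorite),
    P(reading | terrestrial rock)) pairs, e.g. one from the colour
    camera and one from the spectrometer.  Each Bayes update uses the
    previous posterior as the new prior.
    """
    p = prior
    for p_met, p_rock in likelihoods:
        num = p_met * p
        p = num / (num + p_rock * (1.0 - p))
    return p

# A dark object (colour camera) with a metallic spectrum (spectrometer),
# starting from a 1% prior that any given rock is a meteorite:
p = posterior_meteorite(prior=0.01, likelihoods=[(0.7, 0.2), (0.6, 0.05)])
```

The cheap camera reading raises the posterior only modestly; whether the expected gain justifies deploying the costly spectrometer is exactly the value-of-information question the scheduling system must answer.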
Conceptual design for an AIUC multi-purpose spectrograph camera using DMD technology
NASA Astrophysics Data System (ADS)
Rukdee, S.; Bauer, F.; Drass, H.; Vanzi, L.; Jordan, A.; Barrientos, F.
2017-02-01
Current and upcoming massive astronomical surveys are expected to discover a torrent of objects which need ground-based follow-up observations to characterize their nature. For transient objects in particular, rapid and efficient early spectroscopic identification is needed, and a small-field Integral Field Unit (IFU) would mitigate traditional slit losses and acquisition time. To this end, we present the design of a Digital Micromirror Device (DMD) multi-purpose spectrograph camera capable of running in several modes: traditional longslit, small-field patrol IFU, multi-object, and full-field IFU mode via Hadamard spectra reconstruction. The AIUC Optical multi-purpose CAMera (AIUCOCAM) is a low-resolution spectrograph camera of R ≈ 1,600 covering the spectral range 0.45-0.85 μm. We employ a VPH grating as a disperser, which is removable to allow an imaging mode. This spectrograph is envisioned for use on a 1-2 m class telescope in Chile to take advantage of good site conditions. We present design decisions and challenges for a cost-effective robotized spectrograph. The resulting instrument is remarkably versatile, capable of addressing a wide range of scientific topics.
Herring, G.; Ackerman, Joshua T.; Takekawa, John Y.; Eagles-Smith, Collin A.; Eadie, J.M.
2011-01-01
We evaluated predation on nests and methods to detect predators using a combination of infrared cameras and plasticine eggs at nests of American avocets (Recurvirostra americana) and black-necked stilts (Himantopus mexicanus) in Don Edwards San Francisco Bay National Wildlife Refuge, San Mateo and Santa Clara counties, California. Each technique indicated that predation was prevalent; 59% of monitored nests were depredated. Most identifiable predation (n = 49) was caused by mammals (71%) and rates of predation were similar on avocets and stilts. Raccoons (Procyon lotor) and striped skunks (Mephitis mephitis) each accounted for 16% of predations, whereas gray foxes (Urocyon cinereoargenteus) and avian predators each accounted for 14%. Mammalian predation was mainly nocturnal (mean time, 0051 h ± 5 h 36 min), whereas most avian predation was in late afternoon (mean time, 1800 h ± 1 h 26 min). Nests with cameras and plasticine eggs were 1.6 times more likely to be predated than nests where only cameras were used in monitoring. Cameras were associated with lower abandonment of nests and provided definitive identification of predators.
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the width and radius proportions of steel coils, as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters; thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized to measure other solids that cannot be characterized by the identification of simple geometric primitives.
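As a simplified stand-in for the paper's adaptive least-squares fit of the full coil outline, a linear least-squares (Kåsa) circle fit on synthetic outline points shows the general approach of fitting a parametrized curve to detected edge points:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares (Kasa) circle fit to outline points.

    Rewrites x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2) as a linear
    system in the centre (a, b) and the constant k = r^2 - a^2 - b^2,
    so an ordinary linear solve recovers the circle parameters.
    """
    A = np.column_stack((2 * x, 2 * y, np.ones_like(x)))
    c, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    a, b, k = c
    r = np.sqrt(k + a**2 + b**2)
    return a, b, r

# Synthetic noise-free outline: centre (3, -2), radius 5.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 3.0 + 5.0 * np.cos(theta)
y = -2.0 + 5.0 * np.sin(theta)
a, b, r = fit_circle(x, y)
```

The paper's method fits a full perspective-projected cylinder outline rather than a plain circle, which is what lets it recover the camera pose as well; the linearized fit above only conveys the least-squares structure of the problem.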
Study on the measurement system of the target polarization characteristics and test
NASA Astrophysics Data System (ADS)
Fu, Qiang; Zhu, Yong; Zhang, Su; Duan, Jin; Yang, Di; Zhan, Juntong; Wang, Xiaoman; Jiang, Hui-Lin
2015-10-01
Polarization imaging adds polarization information to conventional intensity imaging and is widely applied in military, civil, and other fields; research on the polarization characteristics of targets is therefore particularly important. This paper introduces a polarization reflection model that describes the distribution of scattered light energy over the reflecting hemisphere, and proposes a measurement system for target polarization characteristics consisting of an illumination source, a measuring turntable, and a camera. The illumination is a directed light source, either a laser or a xenon lamp, and can be exchanged according to the test requirements. A hemispherical structure is used for the measurement: the material sample is placed near the center of its base, and azimuth and pitch rotation mechanisms allow manual adjustment of the azimuth and zenith angles of observation. The measuring camera performs the polarization tests through a motor-controlled rotating polarizer, ensuring measurement accuracy and imaging resolution. A test platform was set up from existing laboratory equipment, with a 532 nm laser, a camera with a linear polarizer, and the transmitting and receiving optical systems. For different materials such as wood, metal, and plastic, and under different azimuth angles, zenith angles, and illumination conditions, the polarization scattering properties of the targets were measured, implementing pBRDF measurement over the hemispherical space.
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and, in particular, technology that can synchronize the output of several sources into a seamless and smooth virtual-reality experience. Several traditional quality features are still valid in presence capture cameras: features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual-reality stream quality. However, the cooperation of several cameras adds a new dimension to these quality factors, and new quality features can be validated; for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes the quality factors that remain valid in presence capture cameras and assesses their importance. Moreover, new challenges of presence capture cameras are investigated from an image- and video-quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.
Using Arago's spot to monitor optical axis shift in a Petzval refractor.
Bruns, Donald G
2017-03-10
Measuring the change in the optical alignment of a camera attached to a telescope is necessary to perform astrometric measurements. Camera movement when the telescope is refocused changes the plate constants, invalidating the calibration. Monitoring the shift in the optical axis requires a stable internal reference source. This is easily implemented in a Petzval refractor by adding an illuminated pinhole and a small obscuration that creates a spot of Arago on the camera. Measurements of the optical axis shift for a commercial telescope are given as an example.
surface temperature profile of a sandbox containing buried objects using a long-wave infrared camera. Images were recorded for several days under ambient ... time of day. Best detection of buried objects corresponded to shallow depths for observed intervals where maxima/minima ambient temperatures coincided
NASA Technical Reports Server (NTRS)
Pelletier, R. E.; Hudnall, W. H.
1987-01-01
The use of Space Shuttle Large Format Camera (LFC) color, IR/color, and B&W images in large-scale soil mapping is discussed and illustrated with sample photographs from STS 41-G (October 1984). Consideration is given to the characteristics of the film types used; the photographic scales available; geometric and stereoscopic factors; and image interpretation and classification for soil-type mapping (detecting both sharp and gradual boundaries), soil parent material assessment, topographic and hydrologic assessment, natural-resources inventory, crop-type identification, and stress analysis. It is suggested that LFC photography can play an important role, filling the gap between aerial and satellite remote sensing.
NASA Technical Reports Server (NTRS)
Price, M. C.; Kearsley, A. T.; Wozniakiewicz, P. J.; Spratt, J.; Burchell, M. J.; Cole, M. J.; Anz-Meador, P.; Liou, J. C.; Ross, D. K.; Opiela, J.;
2014-01-01
Hypervelocity impact features have been recognized on painted surfaces returned from the Hubble Space Telescope (HST). Here we describe experiments that help us to understand their creation, and the preservation of micrometeoroid (MM) remnants. We simulated capture of silicate and sulfide minerals on the Zinc orthotitanate (ZOT) paint and Al alloy plate of the Wide Field and Planetary Camera 2 (WFPC2) radiator, which was returned from HST after 16 years in low Earth orbit (LEO). Our results also allow us to validate analytical methods for identification of MM (and orbital debris) impacts in LEO.
NASA Technical Reports Server (NTRS)
Dillman, R. D.; Eav, B. B.; Baldwin, R. R.
1984-01-01
The Office of Space and Terrestrial Applications-3 payload, scheduled for flight on STS Mission 17, consists of four earth-observation experiments. The Feature Identification and Location Experiment-1 will spectrally sense and numerically classify the earth's surface into water, vegetation, bare earth, and ice/snow/cloud-cover, by means of spectra ratio techniques. The Measurement of Atmospheric Pollution from Satellite experiment will measure CO distribution in the middle and upper troposphere. The Imaging Camera-B uses side-looking SAR to create two-dimensional images of the earth's surface. The Large Format Camera/Attitude Reference System will collect metric quality color, color-IR, and black-and-white photographs for topographic mapping.
Virtual viewpoint synthesis in multi-view video system
NASA Astrophysics Data System (ADS)
Li, Fang; Yang, Shiqiang
2005-07-01
In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous technologies, this method obtains an incomplete 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, so that the computation is greatly reduced. We therefore demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build the correspondence between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method can be used when the angle between neighboring cameras is 25-30 degrees, which is much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasting, video conferencing, etc.
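The abstract does not give its interpolation formulas; as a minimal sketch of the idea of synthesizing a virtual viewpoint from feature correspondences between two neighboring cameras, matched point positions can be blended linearly (all point values below are hypothetical):

```python
import numpy as np

def interpolate_view_points(pts_left, pts_right, t):
    """Linearly interpolate matched feature positions between two
    neighboring camera views; t=0 gives the left view, t=1 the right.
    pts_left, pts_right: (N, 2) arrays of corresponding image points."""
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    return (1.0 - t) * pts_left + t * pts_right

# Matched feature points between two neighboring cameras (hypothetical)
left = np.array([[100.0, 50.0], [200.0, 80.0]])
right = np.array([[120.0, 52.0], [230.0, 86.0]])
virtual = interpolate_view_points(left, right, 0.5)  # halfway viewpoint
```

A full system would warp image patches along these interpolated positions; this sketch only shows the geometric blending step.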
Flexible nuclear medicine camera and method of using
Dilmanian, F. Avraham; Packer, Samuel; Slatkin, Daniel N.
1996-12-10
A nuclear medicine camera 10 and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera 10 includes a flexible frame 20 containing a window 22, a photographic film 24, and a scintillation screen 26, with or without a gamma-ray collimator 34. The frame 20 flexes for following the contour of the examination site on the patient, with the window 22 being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film 24 and the radiation source inside the patient. The frame 20 is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame 20 for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms.
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable the capture of directional light ray information, allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves a color accuracy of ΔE < 0.01.
An autonomous sensor module based on a legacy CCTV camera
NASA Astrophysics Data System (ADS)
Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.
2016-10-01
A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. This paper reports on the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation, a "follow" mode is implemented in which the camera maintains the detected person within its field-of-view without requiring an end-user to control the camera directly with a joystick.
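The paper does not detail how real-world positions are estimated; a common flat-ground approach for a mounted camera infers range from the image row of a pedestrian's foot point. The sketch below assumes a simple pinhole model with a flat ground plane, and all mounting parameters are illustrative, not taken from the paper:

```python
import math

def ground_range_from_pixel(pixel_y, image_height, vfov_deg, cam_height_m, tilt_deg):
    """Estimate ground distance to a detected pedestrian's foot point from
    its image row, assuming a flat ground plane and a pinhole camera with
    known height and downward tilt (illustrative model, not the paper's)."""
    # Angle of this pixel row below the optical axis
    deg_per_pixel = vfov_deg / image_height
    angle_below_axis = (pixel_y - image_height / 2.0) * deg_per_pixel
    # Total depression angle of the viewing ray below horizontal
    depression = tilt_deg + angle_below_axis
    if depression <= 0:
        return float("inf")  # ray never intersects the ground plane
    return cam_height_m / math.tan(math.radians(depression))

# Hypothetical setup: camera 5 m high, tilted 30 deg down, 40 deg vertical
# FOV, 480-row image; a foot point at the image center row
r = ground_range_from_pixel(pixel_y=240, image_height=480, vfov_deg=40.0,
                            cam_height_m=5.0, tilt_deg=30.0)
```

Rows nearer the top of the image map to larger ranges, which is also the geometric basis for choosing a zoom level appropriate to detection range.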
Variation in detection among passive infrared triggered-cameras used in wildlife research
Damm, Philip E.; Grand, James B.; Barnett, Steven W.
2010-01-01
Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.
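The model weight w=0.99 quoted above is an Akaike weight from information-theoretic model selection; as a brief illustration with hypothetical AIC scores (not the study's values), such weights are computed as:

```python
import math

def akaike_weights(aic_values):
    """Convert a list of AIC scores into Akaike model weights.
    The best (lowest-AIC) model receives the largest weight."""
    best = min(aic_values)
    # Relative likelihood of each model given its AIC difference
    rel_likelihoods = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

# Hypothetical AIC scores for three candidate detection models
weights = akaike_weights([210.3, 220.1, 225.7])
```

A weight near 1 for the top model, as reported in the study, indicates essentially all support concentrates on that model.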
NASA Astrophysics Data System (ADS)
Hanel, A.; Stilla, U.
2017-05-01
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field of view between environment and interior cameras, and marked reference points are often unavailable in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A priori images of an urban calibration environment, taken with an external camera, are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other using images from both image sets that show the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras during a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
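Tying the environment and interior point clouds together amounts to estimating a rigid transformation from points seen in both; one standard way to compute such a transform is the Kabsch algorithm, sketched below with synthetic tie points (the paper does not state which solver it uses):

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t mapping src points onto dst
    (Kabsch/Procrustes), as used to tie two point clouds together via
    shared real-world points. src, dst: (N, 3) arrays."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c = src - src.mean(axis=0)          # center both clouds
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Correct for a possible reflection in the SVD solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic tie points: dst is src rotated 30 deg about z and shifted
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([2.0, -1.0, 0.5])
R, t = rigid_align(src, dst)
```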
AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data
NASA Astrophysics Data System (ADS)
Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin
2018-01-01
In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and applications assisting physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis, and yaw axis of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
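The abstract does not state the ranging equation; for the described geometry, with the laser beam parallel to the camera's optical axis, the textbook triangulation relation recovers range from the measured disparity. The baseline, focal length, and disparity values below are illustrative:

```python
def range_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulate target range from the pixel offset between the imaged
    laser spot and a fixed reference point, assuming the laser beam is
    parallel to the camera axis (illustrative, not the patent's formula)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    # Similar triangles: range / baseline = focal length / disparity
    return baseline_m * focal_px / disparity_px

# 10 cm laser/camera offset, 800 px focal length, 40 px measured disparity
r = range_from_disparity(0.10, 800.0, 40.0)  # -> 2.0 meters
```

The inverse relation between disparity and range is why the reference point may validly lie outside the image area: only the offset matters.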
The Spitzer-IRAC Point-source Catalog of the Vela-D Cloud
NASA Astrophysics Data System (ADS)
Strafella, F.; Elia, D.; Campeggio, L.; Giannini, T.; Lorenzetti, D.; Marengo, M.; Smith, H. A.; Fazio, G.; De Luca, M.; Massi, F.
2010-08-01
This paper presents observations of Cloud D in the Vela Molecular Ridge, obtained with the Infrared Array Camera (IRAC) on board the Spitzer Space Telescope at the wavelengths λ = 3.6, 4.5, 5.8, and 8.0 μm. A photometric catalog of point sources, covering a field of approximately 1.2 deg^2, has been extracted and complemented with additional available observational data in the millimeter region. Previous observations of the same region, obtained with the Spitzer MIPS camera in the photometric bands at 24 μm and 70 μm, have also been reconsidered to allow an estimate of the spectral slope of the sources over a wider spectral range. A total of 170,299 point sources, detected at the 5σ sensitivity level in at least one of the IRAC bands, are reported in the catalog. There were 8796 sources for which good-quality photometry was obtained in all four IRAC bands. For this sample, a preliminary characterization of the young stellar population based on the determination of the spectral slope is discussed; combining this with diagnostics in the color-magnitude and color-color diagrams, the relative population of young stellar objects (YSOs) in different evolutionary classes has been estimated, and a total of 637 candidate YSOs have been selected. The main differences in their relative abundances are highlighted, and a brief account of their spatial distribution is given. The star formation rate has also been estimated and compared with the values derived for other star-forming regions. Finally, an analysis of the spatial distribution of the sources by means of the two-point correlation function shows that the younger population, constituted by the Class I and flat-spectrum sources, is significantly more clustered than the Class II and III sources.
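The spectral slope used to classify YSOs is conventionally α = d log(λF_λ)/d log λ, fitted across the IRAC bands. A minimal least-squares version is sketched below; the flux values are hypothetical, and fluxes in Jy (F_ν) are converted via λF_λ ∝ F_ν/λ:

```python
import math

def spectral_slope(wavelengths_um, flux_jy):
    """Least-squares spectral slope alpha = d log(lambda*F_lambda)/d log(lambda).
    Since F_lambda ~ F_nu / lambda^2, lambda*F_lambda ~ F_nu / lambda up to
    a constant, which cancels in the slope. Inputs here are hypothetical."""
    xs = [math.log10(w) for w in wavelengths_um]
    ys = [math.log10(f / w) for w, f in zip(wavelengths_um, flux_jy)]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den  # ordinary least-squares slope

# IRAC wavelengths (microns) and hypothetical band fluxes (Jy)
alpha = spectral_slope([3.6, 4.5, 5.8, 8.0], [0.010, 0.012, 0.015, 0.020])
```

Conventionally, α above roughly 0.3 indicates Class I, values near zero a flat-spectrum source, and more negative values Class II/III.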
NASA Astrophysics Data System (ADS)
Scott, K. S.; Yun, M. S.; Wilson, G. W.; Austermann, J. E.; Aguilar, E.; Aretxaga, I.; Ezawa, H.; Ferrusca, D.; Hatsukade, B.; Hughes, D. H.; Iono, D.; Giavalisco, M.; Kawabe, R.; Kohno, K.; Mauskopf, P. D.; Oshima, T.; Perera, T. A.; Rand, J.; Tamura, Y.; Tosaki, T.; Velazquez, M.; Williams, C. C.; Zeballos, M.
2010-07-01
We present the first results from a confusion-limited map of the Great Observatories Origins Deep Survey-South (GOODS-S) taken with the AzTEC camera on the Atacama Submillimeter Telescope Experiment. We imaged the field to a 1σ depth of 0.48-0.73 mJy beam^-1, making this one of the deepest blank-field surveys at millimeter wavelengths ever achieved. Although by traditional standards our GOODS-S map is extremely confused due to a sea of faint underlying sources, we demonstrate through simulations that our source identification and number counts analyses are robust, and the techniques discussed in this paper are relevant for other deeply confused surveys. We find a total of 41 dusty starburst galaxies with signal-to-noise ratios S/N >= 3.5 within the uniformly covered region, where only two are expected to be false detections, and an additional seven robust source candidates located in the noisier (1σ ~ 1 mJy beam^-1) outer region of the map. We derive the 1.1 mm number counts from this field using two different methods, a fluctuation or "P(d)" analysis and a semi-Bayesian technique, and find that both methods give consistent results. Our data are well fit by a Schechter function model. Given the depth of this survey, we put the first tight constraints on the 1.1 mm number counts at a flux density of 0.5 mJy, and we find evidence that the faint-end number counts from various SCUBA surveys towards lensing clusters are biased high. In contrast to the 870 μm survey of this field with the LABOCA camera, we find no apparent underdensity of sources compared to previous surveys at 1.1 mm; the estimates of the number counts of SMGs at flux densities >1 mJy determined here are consistent with those measured from the AzTEC/SHADES survey. Additionally, we find a significant number of SMGs not identified in the LABOCA catalogue. We find that, in contrast to observations at λ <= 500 μm, MIPS 24 μm sources do not resolve the total energy density in the cosmic infrared background at 1.1 mm, demonstrating that a population of z >~ 3 dust-obscured galaxies that are unaccounted for at these shorter wavelengths potentially contributes a large fraction (~2/3) of the infrared background at 1.1 mm.
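The Schechter form used to model differential number counts can be written dN/dS = (N'/S')(S/S')^(-α) exp(-S/S'); the fitted parameters are not reproduced in the abstract, so the argument values below are placeholders for illustration only:

```python
import math

def schechter_counts(s_mjy, n_prime, s_prime, alpha):
    """Differential number counts dN/dS modeled as a Schechter function:
    a power law at faint fluxes with an exponential cutoff above S'.
    Parameter values passed in below are placeholders, not fitted values."""
    x = s_mjy / s_prime
    return (n_prime / s_prime) * x ** (-alpha) * math.exp(-x)

# Counts fall steeply with flux density for alpha > 0 (placeholder values)
low = schechter_counts(0.5, 200.0, 1.3, 2.0)
high = schechter_counts(5.0, 200.0, 1.3, 2.0)
```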
Absolute colorimetric characterization of a DSLR camera
NASA Astrophysics Data System (ADS)
Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo
2014-03-01
A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range while requiring just a single off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, devoted respectively to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimate of the XYZ data in cd/m^2. The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectroradiometer.
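The characterization-matrix module can be illustrated by a linear least-squares fit of a 3x3 matrix mapping camera RGB responses to XYZ tristimulus values; this is a generic sketch with synthetic training data, not the paper's exact procedure:

```python
import numpy as np

def fit_characterization_matrix(rgb, xyz):
    """Fit a 3x3 matrix M such that xyz ~ rgb @ M.T by linear least
    squares, the usual form of a colorimetric characterization matrix.
    rgb, xyz: (N, 3) arrays of camera responses and tristimulus values."""
    M_T, _, _, _ = np.linalg.lstsq(np.asarray(rgb, float),
                                   np.asarray(xyz, float), rcond=None)
    return M_T.T

# Synthetic training data generated from a known ground-truth matrix
M_true = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
rng = np.random.default_rng(0)
rgb = rng.random((20, 3))       # 20 synthetic patch responses
xyz = rgb @ M_true.T            # noiseless targets for the sketch
M = fit_characterization_matrix(rgb, xyz)
```

In the paper's absolute setup, the second module scales such relative XYZ estimates to cd/m^2 using the luminance estimate.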
Moving Object Detection on a Vehicle Mounted Back-Up Camera
Kim, Dong-Sun; Kwon, Jinsan
2015-01-01
In the detection of moving objects from vision sources, one usually assumes that the scene has been captured by stationary cameras. When backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle's movement, resulting in ego-motion in the background. This produces mixed motion in the scene and makes it difficult to distinguish between target objects and background motion. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods lead to many false-positive detections. In this paper, we suggest a procedure, to be used with traditional moving object detection methods, that relaxes the stationary-camera restriction by introducing additional steps before and after detection. We also describe an implementation of the algorithm on an FPGA platform. The target application of this work is a road vehicle's rear-view camera system. PMID:26712761
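The pre- and post-detection steps are not specified in the abstract; a crude stand-in for the pre-detection step is to model ego-motion as a single global translation, align the previous frame accordingly, and difference the frames so only independently moving objects remain. This is a sketch of the general idea, not the paper's actual method:

```python
import numpy as np

def compensate_and_diff(prev, curr, max_shift=4):
    """Estimate a global integer translation (a crude ego-motion model)
    by exhaustive search, align the previous frame to the current one,
    and return the compensated difference image. prev, curr: 2D arrays."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    dy, dx = best
    aligned = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
    return np.abs(curr - aligned), best

# Synthetic background shifted by (1, 2) pixels between frames
rng = np.random.default_rng(1)
prev = rng.random((32, 32))
curr = np.roll(np.roll(prev, 1, axis=0), 2, axis=1)
diff, shift = compensate_and_diff(prev, curr)
```

Real rear-view footage needs a richer motion model (the ground plane moves differently from distant structure), which is presumably why the paper adds steps both before and after detection.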
40 CFR 62.9110 - Identification of sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Sulfuric Acid Mist from Existing Sulfuric Acid Plants § 62.9110 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants. (1) National Zinc Co...] Fluoride Emissions From Phosphate Fertilizer Plants ...
40 CFR 62.9110 - Identification of sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Sulfuric Acid Mist from Existing Sulfuric Acid Plants § 62.9110 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants. (1) National Zinc Co...] Fluoride Emissions From Phosphate Fertilizer Plants ...
NASA Astrophysics Data System (ADS)
Edmonds, Peter D.; Gilliland, Ronald L.; Heinke, Craig O.; Grindlay, Jonathan E.
2003-10-01
We report in this study of 47 Tucanae the largest number of optical identifications of X-ray sources yet obtained in a single globular cluster. Using deep Chandra ACIS-I imaging and extensive Hubble Space Telescope studies with Wide Field Planetary Camera 2 (WFPC2; including a 120 orbit program giving superb V and I images), we have detected optical counterparts to at least 22 cataclysmic variables (CVs) and 29 chromospherically active binaries (BY Dra and RS CVn systems) in 47 Tuc. These identifications are all based on tight astrometric matches between X-ray sources and objects with unusual (non-main-sequence [non-MS]) optical colors and/or optical variability. Several other CVs and active binaries have likely been found, but these have marginal significance because of larger offsets between the X-ray and optical positions, or colors and variability that are not statistically convincing. These less secure optical identifications are not subsequently discussed in detail. In the U versus U-V color-magnitude diagram (CMD), where the U band corresponds to either F336W or F300W, the CVs all show evidence for blue colors compared with the MS, but most of them fall close to the main sequence in the V versus V-I CMD, showing that the secondary stars dominate the optical light. The X-ray-detected active binaries have magnitude offsets above the MS (in both the U versus U-V or V versus V-I CMDs) that are indistinguishable from those of the much larger sample of optical variables (eclipsing and contact binaries and BY Dra variables) detected in the recent WFPC2 studies of Albrow et al. We also present the results of a new, deeper search for optical companions to millisecond pulsars (MSPs). One possible optical companion to an MSP (47 Tuc T) was found, adding to the two optical companions already known. Finally, we study several blue stars with periodic variability from Albrow et al. that show little or no evidence for X-ray emission. 
The optical colors of these objects differ from those of 47 Tuc (and field) CVs. An accompanying paper will present time series results for these optical identifications and will discuss X-ray-to-optical flux ratios, spatial distributions, and an overall interpretation of the results. Based on observations with the NASA/ESA Hubble Space Telescope obtained at STScI, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
Studies on the formation, temporal evolution and forensic applications of camera "fingerprints".
Kuppuswamy, R
2006-06-02
A series of experiments was conducted by exposing negative film in brand-new cameras of different makes and models. The exposures were repeated at regular time intervals spread over a period of 2 years. The processed film negatives were studied under a stereomicroscope (10-40x) in transmitted illumination for the presence of characterizing features on their four frame edges. These features were then related to those present on the masking frame of the cameras by examining the latter under reflected-light stereomicroscopy (10-40x). The purpose of the study was to determine the origin and permanence of the frame-edge marks, and also the processes by which the marks may alter with time. The investigations arrived at the following conclusions: (i) the edge marks originate principally from imperfections imparted to the film mask during manufacturing, and occasionally from dirt, dust, and fiber accumulated on the film mask over an extended time period. (ii) The edge profiles of the cameras remained fixed over a considerable period of time, so as to be a valuable identification medium. (iii) The marks were found to vary in nature even among cameras manufactured at a similar time. (iv) The f-number and object distance have a great effect on the recording of the frame-edge marks during exposure of the film. The above findings serve as a useful addition to the technique of camera edge-mark comparisons.
A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roquemore, A; Maingi, R; Lasnier, C
2007-06-19
In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic when applied to imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of types of Edge Localized Modes (ELMs) [1], dust migration, impurity behavior, and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small Type V ELM regime [2] and the residual ELMs observed during Type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was recently installed on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilization of the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet UVTV system, which has a view similar to that developed for the divertor tangential TV camera [4]. A remote-controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and intelligent optimization methods can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as the optimization search tool, with the objective function constructed from an analytic solution of the one-dimensional unsteady water quality equation. Experimental tests show that the identification model is effective and efficient: it can accurately determine pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of pollution sources exist; however, with the help of prior experience to narrow the search scope, the relative errors of the identification results remain below 5%, which demonstrates that the established source identification model can be used to direct emergency responses.
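A minimal sketch of the BGA-based identification idea: generate observations from the one-dimensional analytic solution for an instantaneous point release, then evolve candidate (mass, position) pairs against a least-squares objective. The transport parameters and GA settings below are illustrative, not the paper's:

```python
import math
import random

def concentration(M, x0, x, t, u=0.5, D=10.0, A=50.0):
    """Analytic 1-D solution for an instantaneous release of mass M at
    position x0 (velocity u, dispersion D, cross-section A); values of
    u, D, A are illustrative, not the paper's."""
    return (M / (A * math.sqrt(4 * math.pi * D * t))
            * math.exp(-(x - x0 - u * t) ** 2 / (4 * D * t)))

def identify_source(observations, generations=200, pop_size=40, seed=0):
    """Minimal genetic-algorithm search for the (M, x0) pair that best
    reproduces observed concentrations [(x, t, c), ...]."""
    rng = random.Random(seed)
    def fitness(ind):
        M, x0 = ind
        return -sum((concentration(M, x0, x, t) - c) ** 2
                    for x, t, c in observations)
    pop = [(rng.uniform(0, 2000), rng.uniform(0, 1000)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()                 # arithmetic crossover
            child = (w * a[0] + (1 - w) * b[0], w * a[1] + (1 - w) * b[1])
            if rng.random() < 0.2:           # Gaussian mutation
                child = (child[0] + rng.gauss(0, 20), child[1] + rng.gauss(0, 10))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Synthetic observations from a true source: M = 1000 kg at x0 = 300 m
obs = [(x, t, concentration(1000.0, 300.0, x, t))
       for x in (500.0, 800.0, 1200.0) for t in (600.0, 1800.0, 3600.0)]
M_est, x0_est = identify_source(obs)
```

In the paper the same search is driven by the BGA over one or several sources; this sketch only demonstrates the objective-function construction from the analytic solution.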
Perez-Mendez, V.
1997-01-21
A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a scintillator crystal for converting incident gamma rays into a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons so as to form an electronic image of the radiation. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of cesium iodide (CsI), preferably doped with a predetermined amount of impurity, and the p-type upper layer, intermediate layer, and n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.
Perez-Mendez, Victor
1997-01-01
A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a scintillator crystal for converting incident gamma rays into a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons so as to form an electronic image of the radiation. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of cesium iodide (CsI), preferably doped with a predetermined amount of impurity, and the p-type upper layer, intermediate layer, and n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.
Mask-to-wafer alignment system
Sweatt, William C.; Tichenor, Daniel A.; Haney, Steven J.
2003-11-04
A modified beam splitter that has a hole pattern that is symmetric in one axis and anti-symmetric in the other can be employed in a mask-to-wafer alignment device. The device is particularly suited for rough alignment using visible light. The modified beam splitter transmits and reflects light from a source of electromagnetic radiation, and it includes a substrate that has a first surface facing the source of electromagnetic radiation and a second surface that is reflective of said electromagnetic radiation. The substrate defines a hole pattern about a central line of the substrate. In operation, an input beam from a camera is directed toward the modified beam splitter, and the light from the camera that passes through the holes illuminates the reticle on the wafer. The light beam from the camera also projects an image of a corresponding reticle pattern that is formed on the surface of the mask, which is positioned downstream from the camera. Alignment can be accomplished by detecting the radiation that is reflected from the second surface of the modified beam splitter, since the reflected radiation contains both the image of the pattern from the mask and the corresponding pattern on the wafer.
NASA Astrophysics Data System (ADS)
Dayton, M.; Datte, P.; Carpenter, A.; Eckart, M.; Manuel, A.; Khater, H.; Hargrove, D.; Bell, P.
2017-08-01
The National Ignition Facility's (NIF) harsh radiation environment can cause electronics to malfunction during high-yield DT shots. Until now there has been little experience fielding electronic-based cameras in the target chamber under these conditions; hence, the performance of electronic components in NIF's radiation environment was unknown. It is possible to purchase radiation tolerant devices; however, they are usually qualified for radiation environments different from NIF's, such as space flight or nuclear reactors. This paper presents the results from a series of online experiments that used two different prototype camera systems built from non-radiation-hardened components and one commercially available camera that permanently failed at a relatively low total integrated dose. The custom design built in Livermore endured a 5 × 10^15 neutron shot without upset, while the other custom design upset at 2 × 10^14 neutrons. These results agreed with offline testing done with a flash x-ray source and a 14 MeV neutron source, which suggested a methodology for developing and qualifying electronic systems for NIF. Further work will likely lead to the use of embedded electronic systems in the target chamber during high-yield shots.
Probing the use of spectroscopy to determine the meteoritic analogues of meteors
NASA Astrophysics Data System (ADS)
Drouard, A.; Vernazza, P.; Loehle, S.; Gattacceca, J.; Vaubaillon, J.; Zanda, B.; Birlan, M.; Bouley, S.; Colas, F.; Eberhart, M.; Hermann, T.; Jorda, L.; Marmo, C.; Meindl, A.; Oefele, R.; Zamkotsian, F.; Zander, F.
2018-05-01
Context. Determining the source regions of meteorites is one of the major goals of current research in planetary science. Whereas asteroid observations are currently unable to pinpoint the source regions of most meteorite classes, observations of meteors with camera networks and the subsequent recovery of the meteorite may help make progress on this question. The main caveat of such an approach, however, is that the recovery rate of meteorite falls is low (<20%), implying that the meteoritic analogues of at least 80% of the observed falls remain unknown. Aims: Spectroscopic observations of incoming bolides may have the potential to mitigate this problem by classifying the incoming meteoritic material. Methods: To probe the use of spectroscopy to determine the meteoritic analogues of incoming bolides, we collected emission spectra in the visible range (320-880 nm) of five meteorite types (H, L, LL, CM, and eucrite) acquired in atmospheric entry-like conditions in a plasma wind tunnel at the Institute of Space Systems (IRS) at the University of Stuttgart (Germany). A detailed spectral analysis including a systematic line identification and mass ratio determinations (Mg/Fe, Na/Fe) was subsequently performed on all spectra. Results: It appears that spectroscopy, via a simple line identification, allows us to distinguish the three main meteorite classes (chondrites, achondrites, and irons), but it does not have the potential to distinguish, for example, an H chondrite from a CM chondrite. Conclusions: The source location within the main belt of the different meteorite classes (H, L, LL, CM, CI, etc.) should continue to be investigated via fireball observation networks. Spectroscopy of incoming bolides only marginally helps to classify the incoming material precisely (iron meteorites only). 
To reach a statistically significant sample of recovered meteorites along with accurate orbits (>100) within a reasonable time frame (10-20 years), the optimal solution may be the spatial extension of existing fireball observation networks. The movie associated with this article is available at http://www.aanda.org
NASA Astrophysics Data System (ADS)
Colbert, Fred
2013-05-01
There has been a significant increase in the number of in-house Infrared Thermographic Predictive Maintenance programs for electrical/mechanical inspections as compared to out-sourced programs using hired consultants. In addition, the number of infrared consulting services companies offering out-sourced programs has also grown exponentially. These market segments include building envelope (commercial and residential), refractory, boiler evaluations, and others. These surges are driven by two main factors: 1. The low cost of investment in the equipment (the cost of cameras and peripherals continues to decline). 2. Novel marketing campaigns by the camera manufacturers, who are looking to sell more cameras into an otherwise saturated market. The key characteristic of these campaigns is to oversimplify the applications and understate the significance of the technical training, specific skills, and experience needed to obtain the risk-lowering information that a facility manager requires. These camera-selling campaigns focus on the simplicity of taking a thermogram but ignore the critical factors of what it takes to actually perform and manage a credible, valid IR program, which in turn exposes everyone to tremendous liability. As in-house programs and out-sourced consulting services compete head to head for share of a constricted market, the price for out-sourced consulting services drops. The consequence of this approach is that something must be compromised to stay competitive on price, and that compromise is the knowledge, technical skills, and experience of the thermographer. This ends up being reflected in the skill sets of the in-house thermographer as well. This oversimplification of skill and experience is producing the "Perfect Storm" for Infrared Thermography, for both in-house and out-sourced programs.
ERIC Educational Resources Information Center
Bracy, Nicole L.
2009-01-01
Public schools have transformed significantly over the past several decades in response to broad concerns about rising school violence. Today's public schools are high security environments employing tactics commonly found in jails and prisons such as police officers, security cameras, identification systems, and secure building strategies.…
NASA Technical Reports Server (NTRS)
2004-01-01
This image shows where Earth would set on the martian horizon from the perspective of the Mars Exploration Rover Spirit if it were facing northwest atop its lander at Gusev Crater. Earth cannot be seen in this image, but engineers have mapped its location. This image mosaic was taken by the hazard-identification camera onboard Spirit.
Phantom feet on digital radionuclide images and other scary computer tales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitas, J.E.; Dworkin, H.J.; Dees, S.M.
1989-09-01
Malfunction of a computer-assisted digital gamma camera is reported. Despite what appeared to be adequate acceptance testing, an error in the system gave rise to switching of images and identification text. A suggestion is made for using a hot marker, which would avoid the potential error of misinterpretation of patient images.
21 CFR 886.5820 - Closed-circuit television reading system.
Code of Federal Regulations, 2010 CFR
2010-04-01
... reading system. (a) Identification. A closed-circuit television reading system is a device that consists of a lens, video camera, and video monitor that is intended for use by a patient who has subnormal...
Design and build a compact Raman sensor for identification of chemical composition
NASA Astrophysics Data System (ADS)
Garcia, Christopher S.; Abedin, M. Nurul; Ismail, Syed; Sharma, Shiv K.; Misra, Anupam K.; Sandford, Stephen P.; Elsayed-Ali, Hani
2008-04-01
A compact remote Raman sensor system was developed at NASA Langley Research Center. This sensor is an improvement over the previously reported system, which consisted of a 532 nm pulsed laser, a 4-inch telescope, a spectrograph, and an intensified CCD camera. One of the attractive features of the previous system was its portability, thereby making it suitable for applications such as planetary surface explorations, homeland security and defense applications where a compact portable instrument is important. The new system was made more compact by replacing bulky components with smaller and lighter components. The new compact system uses a smaller spectrograph measuring 9 x 4 x 4 in. and a smaller intensified CCD camera measuring 5 in. long and 2 in. in diameter. The previous system was used to obtain the Raman spectra of several materials that are important to defense and security applications. Furthermore, the new compact Raman sensor system is used to obtain the Raman spectra of a diverse set of materials to demonstrate the sensor system's potential use in the identification of unknown materials.
Design and Build a Compact Raman Sensor for Identification of Chemical Composition
NASA Technical Reports Server (NTRS)
Garcia, Christopher S.; Abedin, M. Nurul; Ismail, Syed; Sharma, Shiv K.; Misra, Anupam K.; Sandford, Stephen P.; Elsayed-Ali, Hani
2008-01-01
A compact remote Raman sensor system was developed at NASA Langley Research Center. This sensor is an improvement over the previously reported system, which consisted of a 532 nm pulsed laser, a 4-inch telescope, a spectrograph, and an intensified charge-coupled device (CCD) camera. One of the attractive features of the previous system was its portability, thereby making it suitable for applications such as planetary surface explorations, homeland security and defense applications where a compact portable instrument is important. The new system was made more compact by replacing bulky components with smaller and lighter components. The new compact system uses a smaller spectrograph measuring 9 x 4 x 4 in. and a smaller intensified CCD camera measuring 5 in. long and 2 in. in diameter. The previous system was used to obtain the Raman spectra of several materials that are important to defense and security applications. Furthermore, the new compact Raman sensor system is used to obtain the Raman spectra of a diverse set of materials to demonstrate the sensor system's potential use in the identification of unknown materials.
Identification of metal elements by the time-resolved LIBS technique in sediments of Lake "Cisne"
NASA Astrophysics Data System (ADS)
Pacheco, P.; Arregui, E.; Álvarez, J.; Rangel, N.; Sarmiento, R.
2017-01-01
Laser induced breakdown spectroscopy (LIBS) is an atomic emission spectroscopy method that uses high-energy laser pulses as the excitation source. One of the advantages of the LIBS technique is the possibility of analyzing substances in any state of aggregation, whether solid, liquid, or gaseous, and even colloids such as aerosols and gels. Another advantage over other conventional techniques is the simultaneous analysis of the elements present in a multielement sample. This work applies the technique to the identification of metal pollutants in Swan Lake sediment samples collected by drilling cores. Plasmas were generated by focusing the radiation of an Nd:YAG laser with an energy of 13 mJ per pulse, 4 ns pulse duration, and a wavelength of 532 nm. The radiation spectra from the sediment plasmas were recorded with an Echelle-type spectrograph coupled to an ICCD camera. The delay times were between 0.5 μs and 7 μs, while the gate width was 2 μs. To ensure the homogeneity of the plasmas, the sediment sample was placed on a positioning system with smooth-step linear and rotary adjustment synchronized with the trigger of the laser pulse. Recording the sediment spectra at different delay times made it possible to identify the prominent lines of the different elements present in the sample. The analysis of the spectra allowed the identification of elements in the sample such as Si, Ca, Na, Mg, and Al through the measurement of the wavelengths of the prominent peaks.
Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform
NASA Astrophysics Data System (ADS)
Liu, H. S.; Liao, H. M.
2015-08-01
Direct geo-referencing systems use remote sensing technology to quickly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that users can make measurements directly on the images. In order to calculate position properly, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate imagery, coordinates, and camera position; however, POS hardware is very expensive, and users cannot use the results immediately because the position information is not embedded in the images. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we calculate position with the open source software OpenCV. Finally, we use the open source panorama browser Panini and integrate all of these into the open source GIS software Quantum GIS. In this way, a complete data collection and processing system can be constructed.
NASA Astrophysics Data System (ADS)
Thorpe, Andrew K.; Frankenberg, Christian; Thompson, David R.; Duren, Riley M.; Aubrey, Andrew D.; Bue, Brian D.; Green, Robert O.; Gerilowski, Konstantin; Krings, Thomas; Borchardt, Jakob; Kort, Eric A.; Sweeney, Colm; Conley, Stephen; Roberts, Dar A.; Dennison, Philip E.
2017-10-01
At local scales, emissions of methane and carbon dioxide are highly uncertain. Localized sources of both trace gases can create strong local gradients in their columnar abundance, which can be discerned using absorption spectroscopy at high spatial resolution. In a previous study, more than 250 methane plumes were observed in the San Juan Basin near Four Corners during April 2015 using the next-generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG) and a linearized matched filter. For the first time, we apply the iterative maximum a posteriori differential optical absorption spectroscopy (IMAP-DOAS) method to AVIRIS-NG data and generate gas concentration maps for methane, carbon dioxide, and water vapor plumes. This demonstrates a comprehensive greenhouse gas monitoring capability that targets methane and carbon dioxide, the two dominant anthropogenic climate-forcing agents. Water vapor results indicate the ability of these retrievals to distinguish between methane and water vapor despite spectral interference in the shortwave infrared. We focus on selected cases from anthropogenic and natural sources, including emissions from mine ventilation shafts, a gas processing plant, a tank, a pipeline leak, and a natural seep. In addition, carbon dioxide emissions were mapped from the flue-gas stacks of two coal-fired power plants, and a water vapor plume was observed from the combined sources of cooling towers and cooling ponds. Observed plumes were consistent with known and suspected emission sources verified by the true color AVIRIS-NG scenes and higher-resolution Google Earth imagery. Real-time detection and geolocation of methane plumes by AVIRIS-NG provided unambiguous identification of individual emission source locations and communication to a ground team for rapid follow-up. This permitted verification of a number of methane emission sources using a thermal camera, including a tank and a buried natural gas pipeline.
Appearance-based multimodal human tracking and identification for healthcare in the digital home.
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-08-05
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-01-01
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare. PMID:25098207
NASA Technical Reports Server (NTRS)
Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth
2016-01-01
Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass, and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) Project, and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To validate these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to use traditional motion capture systems because of the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed using open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative motion capture kinematic data for comparison. Additionally, data such as the exercise volume required for small spaces such as the Orion capsule can be determined. 
METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3-dimensional reconstruction. Utilizing OpenCV via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove the geometric distortion of the lens and sensor (specific to each individual camera). A set of high-contrast markers was placed on the exercising subject; small yarn balls were used, since safety also necessitated that the markers be soft in case they became detached during parabolic flight. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, in which a wand is swept through all camera scenes simultaneously, was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of 2D marker centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in a side-by-side comparison with a traditional motion capture system, and also on a parabolic flight.
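The reconstruction step described above (projecting calibrated camera geometry back into 3D from matched 2D marker centroids) is standard two-view triangulation; OpenCV exposes it as cv2.triangulatePoints. The NumPy sketch below shows the underlying direct linear transform for two views. The projection matrices and pixel coordinates here are illustrative assumptions, not the flight system's actual calibration.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two calibrated cameras.

    P1, P2 : 3x4 projection matrices, K @ [R | t], from the intrinsic and
             extrinsic calibration steps.
    x1, x2 : (u, v) pixel centroids of the same marker in each view.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```

With more than two cameras, the same matrix A simply gains two rows per additional view, which is how multi-camera redundancy improves accuracy.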
Planning Image-Based Measurements in Wind Tunnels by Virtual Imaging
NASA Technical Reports Server (NTRS)
Kushner, Laura Kathryn; Schairer, Edward T.
2011-01-01
Virtual imaging is routinely used at NASA Ames Research Center to plan the placement of cameras and light sources for image-based measurements in production wind tunnel tests. Virtual imaging allows users to quickly and comprehensively model a given test situation, well before the test occurs, in order to verify that all optical testing requirements will be met. It allows optimization of the placement of cameras and light sources and leads to faster set-up times, thereby decreasing tunnel occupancy costs. This paper describes how virtual imaging was used to plan optical measurements for three tests in production wind tunnels at NASA Ames.
Adaptive illumination source for multispectral vision system applied to material discrimination
NASA Astrophysics Data System (ADS)
Conde, Olga M.; Cobo, Adolfo; Cantero, Paulino; Conde, David; Mirapeix, Jesús; Cubillas, Ana M.; López-Higuera, José M.
2008-04-01
A multispectral system based on a monochrome camera and an adaptive illumination source is presented in this paper. Its preliminary application focuses on material discrimination for the food and beverage industries, where monochrome, color, and infrared imaging have been successfully applied to this task. This work proposes a different approach, in which the wavelengths relevant to the required discrimination task are selected in advance using a Sequential Forward Floating Selection (SFFS) algorithm. A light source based on light emitting diodes (LEDs) at these wavelengths is then used to sequentially illuminate the material under analysis, and the resulting images are captured by a CCD camera with spectral response across the entire range of the selected wavelengths. Finally, the multispectral planes obtained are processed using a Spectral Angle Mapping (SAM) algorithm, whose output is the desired material classification. Among other advantages, this approach of controlled, specific illumination produces multispectral imaging with a simple monochrome camera and cold illumination restricted to the relevant wavelengths, which is desirable for the food and beverage industry. The proposed system has been tested with success for the automatic detection of foreign objects in the tobacco processing industry.
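The SAM classification step named above is well defined: each pixel's spectrum is compared to each reference spectrum by the angle between them, and the smallest angle wins. A minimal NumPy sketch follows; the band count, reference library, and any rejection threshold are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sam_classify(pixels, references):
    """Spectral Angle Mapping: assign each pixel spectrum to the reference
    spectrum with the smallest spectral angle.

    pixels     : (N, B) array, one B-band spectrum per pixel.
    references : (C, B) array, one library spectrum per material class.
    Returns an (N,) array of winning class indices.
    """
    # Normalize so the angle depends on spectral shape, not brightness.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)   # (N, C) cosines of spectral angles
    angles = np.arccos(cos)             # angles in radians
    return np.argmin(angles, axis=1)
```

Because the angle is insensitive to overall intensity, SAM tolerates illumination level changes, which suits a sequentially switched LED source.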
An imaging system for PLIF/Mie measurements for a combusting flow
NASA Technical Reports Server (NTRS)
Wey, C. C.; Ghorashi, B.; Marek, C. J.; Wey, C.
1990-01-01
The equipment required to establish an imaging system can be divided into four parts: (1) the light source and beam shaping optics; (2) camera and recording; (3) image acquisition and processing; and (4) computer and output systems. A pulsed, Nd:YAG-pumped, frequency-doubled dye laser, which can freeze motion in the flowfield, is used as the illumination source. A set of lenses forms the laser beam into a sheet. The induced fluorescence is collected by a UV-enhanced lens and passes through a UV-enhanced microchannel plate intensifier which is optically coupled to a gated solid state CCD camera. The output of the camera is simultaneously displayed on a monitor and recorded on either a laser videodisc set or a Super VHS VCR. The videodisc set is controlled by a minicomputer via a connection to the RS-232C interface terminals. The imaging system is connected to the host computer by a bus repeater and can be multiplexed between four video input sources. Sample images from a planar shear layer experiment are presented to show the processing capability of the imaging system with the host computer.
NASA Astrophysics Data System (ADS)
Cosson, Benoit; Asséko, André Chateau Akué; Dauphin, Myriam
2018-05-01
The purpose of this paper is to develop a cost-effective, efficient, and quick-to-implement experimental optical method to predict the optical properties (extinction coefficient) of semi-transparent polymer composites. The extinction coefficient accounts for the effects of the absorption and scattering phenomena in a semi-transparent component during laser processes, i.e. TTLW (through-transmission laser welding). The present method uses a laser as the light source and a reflex camera equipped with a macro lens as the measurement device, and is based on measuring the light transmitted through samples of different thicknesses. The interaction between the incident laser beam and the semi-transparent composite is examined. Results are presented for the case of a semi-transparent composite reinforced with unidirectional glass fiber (UD). A numerical method, ray tracing, is used to validate the experimental results. The ray tracing method is appropriate for characterizing the light-scattering phenomenon in semi-transparent materials.
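A common way to recover an extinction coefficient from transmission measurements through samples of different thicknesses is a log-linear fit of the Beer-Lambert law, I(d) = I0 exp(-K d). The sketch below illustrates that fit under the simplifying assumption of pure exponential attenuation; the paper's method also treats scattering via ray tracing, so the data values and units here are purely illustrative.

```python
import numpy as np

def fit_extinction(thicknesses_mm, intensities):
    """Estimate the extinction coefficient K (and incident intensity I0)
    from I(d) = I0 * exp(-K * d) by least-squares fitting ln(I) vs. d.

    thicknesses_mm : sample thicknesses d.
    intensities    : transmitted intensities measured by the camera.
    Returns (K per mm, I0).
    """
    # ln(I) = ln(I0) - K*d is linear in d; fit a degree-1 polynomial.
    slope, intercept = np.polyfit(thicknesses_mm, np.log(intensities), 1)
    return -slope, np.exp(intercept)
```

Using several thicknesses rather than one makes the fit robust to surface reflection losses, which shift the intercept (I0) without biasing the slope (K).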
Beam uniformity analysis of infrared laser illuminators
NASA Astrophysics Data System (ADS)
Allik, Toomas H.; Dixon, Roberta E.; Proffitt, R. Patrick; Fung, Susan; Ramboyong, Len; Soyka, Thomas J.
2015-02-01
Uniform near-infrared (NIR) and short-wave infrared (SWIR) illuminators are desired for military detection, recognition, and identification applications in low ambient light. Factors that contribute to laser illumination image degradation are high-frequency coherent laser speckle and low-frequency nonuniformities created by the laser or external laser-cavity optics. Laser speckle analysis and beam uniformity improvements have been studied independently by numerous authors, but analysis to separate these two effects from a single measurement technique has not been published. In this study, profiles of compact diode-laser NIR and SWIR illuminators were measured and evaluated. Digital 12-bit images were recorded with a flat-field-calibrated InGaAs camera, with measurements at F/1.4 and F/16. Beam uniformity components were approximately separated from laser speckle by filtering the original image. The goal of this paper is to identify and quantify the beam quality variation of illumination prototypes, draw awareness to its impact on range performance modeling, and develop measurement techniques and methodologies for the military, industry, and vendors of active sources.
NASA Astrophysics Data System (ADS)
Meola, Joseph; Absi, Anthony; Islam, Mohammed N.; Peterson, Lauren M.; Ke, Kevin; Freeman, Michael J.; Ifaraguerri, Agustin I.
2014-06-01
Hyperspectral imaging systems are currently used for numerous activities related to spectral identification of materials. These passive imaging systems rely on naturally reflected/emitted radiation as the source of the signal. Thermal infrared systems measure radiation emitted from objects in the scene. As such, they can operate at both day and night. However, visible through shortwave infrared systems measure solar illumination reflected from objects. As a result, their use is limited to daytime applications. Omni Sciences has produced high powered broadband shortwave infrared super-continuum laser illuminators. A 64-watt breadboard system was recently packaged and tested at Wright-Patterson Air Force Base to gauge beam quality and to serve as a proof-of-concept for potential use as an illuminator for a hyperspectral receiver. The laser illuminator was placed in a tower and directed along a 1.4km slant path to various target materials with reflected radiation measured with both a broadband camera and a hyperspectral imaging system to gauge performance.
First images of thunder: Acoustic imaging of triggered lightning
NASA Astrophysics Data System (ADS)
Dayeh, M. A.; Evans, N. D.; Fuselier, S. A.; Trevino, J.; Ramaekers, J.; Dwyer, J. R.; Lucia, R.; Rassoul, H. K.; Kotovsky, D. A.; Jordan, D. M.; Uman, M. A.
2015-07-01
An acoustic camera comprising a linear microphone array is used to image the thunder signature of triggered lightning. Measurements were taken at the International Center for Lightning Research and Testing in Camp Blanding, FL, during the summer of 2014. The array was positioned in an end-fire orientation thus enabling the peak acoustic reception pattern to be steered vertically with a frequency-dependent spatial resolution. On 14 July 2014, a lightning event with nine return strokes was successfully triggered. We present the first acoustic images of individual return strokes at high frequencies (>1 kHz) and compare the acoustically inferred profile with optical images. We find (i) a strong correlation between the return stroke peak current and the radiated acoustic pressure and (ii) an acoustic signature from an M component current pulse with an unusual fast rise time. These results show that acoustic imaging enables clear identification and quantification of thunder sources as a function of lightning channel altitude.
Airborne thermal infrared imaging of the 2004-2005 eruption of Mount St. Helens
NASA Astrophysics Data System (ADS)
Schneider, D. J.; Vallance, J. W.; Logan, M.; Wessels, R.; Ramsey, M.
2005-12-01
A helicopter-mounted forward-looking infrared imaging radiometer (FLIR) documented the explosive and effusive activity at Mount St. Helens during the 2004-2005 eruption. A gyrostabilized gimbal controlled by a crew member houses the FLIR radiometer and an optical video camera attached at the lower front of the helicopter. Since October 1, 2004 the system has provided an unprecedented data set of thermal and video dome-growth observations. Flights were conducted as frequently as twice daily during the initial month of the eruption (when changes in the crater and dome occurred rapidly), and have been continued on a tri-weekly basis during the period of sustained dome growth. As with any new technology, the routine use of FLIR images to aid in volcano monitoring has been a learning experience in terms of observation strategy and data interpretation. Some of the unique information that has been derived from these data to date includes: 1) Rapid identification of the phreatic nature of the early explosive phase; 2) Observation of faulting and associated heat flow during times of large scale deformation; 3) Venting of hot gas through a short-lived crater lake, indicative of a shallow magma source; 4) Increased heat flow of the crater floor prior to the initial dome extrusion; 5) Confirmation of new magma reaching the surface; 6) Identification of the source of active lava extrusion, dome collapse, and block and ash flows. Temperatures vary from ambient, in areas insulated by fault gouge and talus produced during extrusion, to as high as 500-740 degrees C in regions of active extrusion, collapse, and fracturing. This temperature variation needs to be accounted for in the retrieval of eruption parameters using satellite-based techniques, as such features are of sub-pixel size in satellite images.
2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup
NASA Astrophysics Data System (ADS)
Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.
2017-10-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 1020 m-3 and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high speed video of plasmas in Proto-MPEX. The color camera is equipped with a long pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
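The dual-camera combination step can be sketched as follows (an illustrative reconstruction, not the authors' actual scripts; the frames are assumed to be already aligned and intensity-calibrated NumPy arrays, and the function name is hypothetical):

```python
import numpy as np

def line_ratio_maps(d_alpha, d_beta, d_gamma, eps=1e-12):
    """Combine aligned, intensity-calibrated camera frames into
    Balmer line ratio maps; eps guards against division by zero."""
    d_alpha, d_beta, d_gamma = (
        np.asarray(a, dtype=float) for a in (d_alpha, d_beta, d_gamma)
    )
    return {
        "Da/Db": d_alpha / np.maximum(d_beta, eps),
        "Da/Dg": d_alpha / np.maximum(d_gamma, eps),
        "Db/Dg": d_beta / np.maximum(d_gamma, eps),
    }
```

The red and blue channels of the color camera would supply the first two frames, and the filtered monochromatic camera the third.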
Improving Photometric Calibration of Meteor Video Camera Systems
NASA Technical Reports Server (NTRS)
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
2016-01-01
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
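The zero-point determination described above can be sketched as follows (a minimal illustration, not the MEO's pipeline; it assumes background-subtracted instrumental counts and reference magnitudes already transformed into the same bandpass):

```python
import numpy as np

def zero_point(ref_mags, instrumental_counts):
    """Estimate a photometric zero point from reference stars using
    m = zp - 2.5*log10(counts)  =>  zp = m + 2.5*log10(counts).
    Returns the median (robust to outliers) and the star-to-star scatter."""
    ref_mags = np.asarray(ref_mags, dtype=float)
    counts = np.asarray(instrumental_counts, dtype=float)
    zps = ref_mags + 2.5 * np.log10(counts)
    return float(np.median(zps)), float(np.std(zps))
```

A meteor's magnitude then follows from its measured counts via the same relation with the fitted zero point.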
40 CFR 62.10860 - Identification of sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Acid Mist from Existing Sulfuric Acid Plants § 62.10860 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants: (1) Diamond-Shamrock... Inc. in Deer Park, Texas. (6) Stauffer Chemical Company in Baytown, Texas. (7) Stauffer Chemical...
40 CFR 62.10860 - Identification of sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Acid Mist from Existing Sulfuric Acid Plants § 62.10860 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants: (1) Diamond-Shamrock... Inc. in Deer Park, Texas. (6) Stauffer Chemical Company in Baytown, Texas. (7) Stauffer Chemical...
40 CFR 62.10860 - Identification of sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Acid Mist from Existing Sulfuric Acid Plants § 62.10860 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants: (1) Diamond-Shamrock... Inc. in Deer Park, Texas. (6) Stauffer Chemical Company in Baytown, Texas. (7) Stauffer Chemical...
40 CFR 62.10860 - Identification of sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Acid Mist from Existing Sulfuric Acid Plants § 62.10860 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants: (1) Diamond-Shamrock... Inc. in Deer Park, Texas. (6) Stauffer Chemical Company in Baytown, Texas. (7) Stauffer Chemical...
40 CFR 62.10860 - Identification of sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Acid Mist from Existing Sulfuric Acid Plants § 62.10860 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants: (1) Diamond-Shamrock... Inc. in Deer Park, Texas. (6) Stauffer Chemical Company in Baytown, Texas. (7) Stauffer Chemical...
A full field, 3-D velocimeter for microgravity crystallization experiments
NASA Technical Reports Server (NTRS)
Brodkey, Robert S.; Russ, Keith M.
1991-01-01
The programming and algorithms needed for implementing a full-field, 3-D velocimeter for laminar flow systems and the appropriate hardware to fully implement this ultimate system are discussed. It appears that imaging using a synched pair of video cameras and digitizer boards with synched rails for camera motion will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100 the particle identification can take 15 to 30 seconds, with the tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.
NASA Astrophysics Data System (ADS)
Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.
2008-12-01
Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time series analysis of sensor data has provided important information on glacier flow variability by detecting speed and thickness changes, tracking features, and acquiring model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected one-hour-interval data continuously for more than one year at some but not all sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry, along with digital image processing techniques, can provide the theoretical and practical fundamentals for data processing. Time-lapse images over these periods in west Greenland indicate various phenomena. Problematic are rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and the pecking of the plastic window by ravens. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images needs to be processed in a sufficiently computationally efficient way. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability.
We experiment with mono and stereo photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85ns. The gamma-rays are transported through a model of a Siemens dual head Symbia “S” gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171keV gamma-ray followed by a single 245keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10uCi to 5mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences albeit at the expense of true coincidences. A timing window range of 200-500ns maximizes the NECR at clinically-used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically-relevant activities. Work is ongoing to assess useful clinical applications.
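The NECR computation can be sketched directly from its standard definition (a minimal illustration, not the study's analysis code; the in-air assumption in the abstract sets the scatter term to zero):

```python
def necr(trues, randoms, scatters=0.0):
    """Noise equivalent count rate, NEC = T^2 / (T + S + R).
    With sources simulated in air (no scatter), S = 0."""
    total = trues + scatters + randoms
    return trues ** 2 / total if total > 0 else 0.0
```

Evaluating this over a grid of timing windows and activities, as the abstract describes, would locate the window that maximizes the NECR.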
Chasing Down Gravitational Wave Sources with the Dark Energy Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annis, Jim; Soares-Santos, Marcelle
On August 17, 2017, scientists using the Dark Energy Camera tracked down the first visible counterpart to a gravitational wave signal ever spotted by astronomers. Using data provided by the LIGO and Virgo collaborations, scientists embarked on a quest for the unknown, and discovered a new wonder of the universe. Includes interviews with Fermilab’s Jim Annis and Brandeis University’s Marcelle Soares-Santos.
3-D Flow Visualization with a Light-field Camera
NASA Astrophysics Data System (ADS)
Thurow, B.
2012-12-01
Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer.
Figure captions: Schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera; this information can be used to computationally refocus an image after it has been acquired. Instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.
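The cross-correlation step described above can be sketched as follows (an illustrative reconstruction only; the paper's interrogation windowing and subvoxel peak fitting are omitted, and the function name is assumed):

```python
import numpy as np

def displacement(vol_a, vol_b):
    """Estimate the integer-voxel displacement between two particle
    volumes via FFT-based cross-correlation, the core of a 3D/3C
    PIV interrogation step."""
    fa = np.fft.rfftn(vol_a - vol_a.mean())
    fb = np.fft.rfftn(vol_b - vol_b.mean())
    corr = np.fft.irfftn(fa.conj() * fb, s=vol_a.shape)
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # wrap shifts larger than half the volume to negative displacements
    dims = np.array(vol_a.shape)
    shift[shift > dims // 2] -= dims[shift > dims // 2]
    return shift
```

In practice this would be applied per interrogation window across the pair of reconstructed volumes to build the full vector field.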
A goggle navigation system for cancer resection surgery
NASA Astrophysics Data System (ADS)
Xu, Junbin; Shao, Pengfei; Yue, Ting; Zhang, Shiwu; Ding, Houzhu; Wang, Jinkun; Xu, Ronald
2014-02-01
We describe a portable fluorescence goggle navigation system for cancer margin assessment during oncologic surgeries. The system consists of a computer, a head mount display (HMD) device, a near infrared (NIR) CCD camera, a miniature CMOS camera, and a 780 nm laser diode excitation light source. The fluorescence and the background images of the surgical scene are acquired by the CCD camera and the CMOS camera respectively, co-registered, and displayed on the HMD device in real-time. The spatial resolution and the co-registration deviation of the goggle navigation system are evaluated quantitatively. The technical feasibility of the proposed goggle system is tested in an ex vivo tumor model. Our experiments demonstrate the feasibility of using a goggle navigation system for intraoperative margin detection and surgical guidance.
Experiments with synchronized sCMOS cameras
NASA Astrophysics Data System (ADS)
Steele, Iain A.; Jermak, Helen; Copperwheat, Chris M.; Smith, Robert J.; Poshyachinda, Saran; Soonthorntham, Boonrucksar
2016-07-01
Scientific-CMOS (sCMOS) cameras can combine low noise with high readout speeds and do not suffer from the charge multiplication noise that effectively reduces the quantum efficiency of electron-multiplying CCDs by a factor of 2. As such they have strong potential in fast photometry and polarimetry instrumentation. In this paper we describe the results of laboratory experiments using a pair of commercial off-the-shelf sCMOS cameras based around a 4-transistor-per-pixel architecture. In particular, using both stable and pulsed light sources, we evaluate the timing precision that may be obtained when the camera readouts are synchronized either in software or electronically. We find that software synchronization can introduce an error of 200 msec. With electronic synchronization any error is below the limit (50 msec) of our simple measurement technique.
2015-06-01
Texas Tech Security Group, "Automated Open Source Intelligence (OSINT) Using APIs," RaiderSec, 30 December 2012, http://raidersec.blogspot.com/2012/12/automated-open-source
40 CFR 62.10632 - Identification of sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
....10632 Identification of sources. The Plan applies to all existing HMWI facilities at St. Jude Children's... 40 Protection of Environment 8 2010-07-01 2010-07-01 false Identification of sources. 62.10632 Section 62.10632 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...
NASA Astrophysics Data System (ADS)
Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin
2015-05-01
The linear time-invariant assumption for determining acoustic source characteristics, the source strength and the source impedance, in the frequency domain has been proved reasonable in the design of exhaust systems. Different methods have been proposed for their identification, and the multi-load method is widely used for its convenience of varying the load number and impedance. Theoretical error analysis has rarely been addressed, and previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for load selection. The relationships between the error in identifying the source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure, obtained by inverse calculation, served as an indicator of the accuracy of the results. It was found that for a certain load length, the load resistance peaks at frequency points of odd multiples of one-quarter wavelength, producing the maximum error in source impedance identification. Therefore, load impedances in the frequency range around odd multiples of the one-quarter wavelength should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., the same order of magnitude), the identification error of the source impedance can be effectively reduced.
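A minimal sketch of multi-load source identification at a single frequency, assuming a Thevenin-style source model p_i = p_s * Z_i / (Z_s + Z_i) for each load i (the variable names and least-squares formulation are illustrative, not the paper's implementation):

```python
import numpy as np

def source_from_loads(Z_loads, p_meas):
    """Least-squares identification of source strength p_s and source
    impedance Z_s from measured pressures under several known loads.
    The model p_i = p_s*Z_i/(Z_s + Z_i) is linear in (p_s, Z_s):
        Z_i * p_s - p_i * Z_s = p_i * Z_i."""
    Z = np.asarray(Z_loads, dtype=complex)
    p = np.asarray(p_meas, dtype=complex)
    A = np.column_stack([Z, -p])
    b = p * Z
    (p_s, Z_s), *_ = np.linalg.lstsq(A, b, rcond=None)
    return p_s, Z_s
```

With four or more loads the system is overdetermined, which is the setting in which the paper's error analysis applies.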
Handheld hyperspectral imager for standoff detection of chemical and biological aerosols
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele; Jensen, James O.; McAnally, Gerard
2004-02-01
Pacific Advanced Technology has developed a small handheld imaging spectrometer, Sherlock, for gas leak and aerosol detection and imaging. The system is based on a patented technique that uses diffractive optics and image processing algorithms to detect spectral information about objects in the scene of the camera (IMSS, Image Multi-spectral Sensing). This camera has been tested at Dugway Proving Ground and the Dstl Porton Down facility looking at chemical and biological agent simulants. The camera has been used to investigate surfaces contaminated with chemical agent simulants. In addition to chemical and biological detection, the camera has been used for environmental monitoring of greenhouse gases and is currently undergoing extensive laboratory and field testing by the Gas Technology Institute, British Petroleum and Shell Oil for gas leak detection and repair applications. The camera contains an embedded Power PC and a real time image processor for performing image processing algorithms to assist in the detection and identification of gas phase species in real time. In this paper we will present an overview of the technology and show how it has performed for different applications, such as gas leak detection, surface contamination, remote sensing and surveillance applications. In addition, a sampling of the results from the field testing at Dugway in July of 2002 and Dstl at Porton Down in September of 2002 will be given.
Plenoptic particle image velocimetry with multiple plenoptic cameras
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2018-07-01
Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single camera configuration, allowing the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution with the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, comprising the volumetric calibration and reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the ‘cigar’-like elongation associated with the single camera system. In addition, it is found that adding a third camera provides minimal improvement. Further metrics of the reconstruction quality are quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single and two camera configurations. It was determined that the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally on a ring vortex and comparisons are drawn from the four presented reconstruction algorithms, where it was found that MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability. Filtered refocusing is able to produce the desired structure, albeit with more noise and variability, while integral refocusing struggled to produce a coherent vortex ring.
NASA Astrophysics Data System (ADS)
Eltner, A.; Kaiser, A.; Castillo, C.; Rock, G.; Neugirg, F.; Abellan, A.
2015-12-01
Photogrammetry and geosciences are closely linked since the late 19th century. Today, a wide range of commercial and open-source software enable non-experts users to obtain high-quality 3-D datasets of the environment, which was formerly reserved to remote sensing experts, geodesists or owners of cost-intensive metric airborne imaging systems. Complex tridimensional geomorphological features can be easily reconstructed from images captured with consumer grade cameras. Furthermore, rapid developments in UAV technology allow for high quality aerial surveying and orthophotography generation at a relatively low-cost. The increasing computing capacities during the last decade, together with the development of high-performance digital sensors and the important software innovations developed by other fields of research (e.g. computer vision and visual perception) has extended the rigorous processing of stereoscopic image data to a 3-D point cloud generation from a series of non-calibrated images. Structure from motion methods offer algorithms, e.g. robust feature detectors like the scale-invariant feature transform for 2-D imagery, which allow for efficient and automatic orientation of large image sets without further data acquisition information. Nevertheless, the importance of carrying out correct fieldwork strategies, using proper camera settings, ground control points and ground truth for understanding the different sources of errors still need to be adapted in the common scientific practice. This review manuscript intends not only to summarize the present state of published research on structure-from-motion photogrammetry applications in geomorphometry, but also to give an overview of terms and fields of application, to quantify already achieved accuracies and used scales using different strategies, to evaluate possible stagnations of current developments and to identify key future challenges. 
It is our belief that the identification of common errors, "bad practices" and some other valuable information in already published articles, scientific reports and book chapters may help in guiding the future use of SfM photogrammetry in geosciences.
Advanced illumination control algorithm for medical endoscopy applications
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.
2015-05-01
The CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules to the world market for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and the evaluation board provided with it, the aim of this paper is to demonstrate an advanced fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small-size endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination with adjustable illumination power. The LED illumination power has to be dynamically adjusted while navigating the endoscope over illumination conditions changing by several orders of magnitude within fractions of a second to guarantee a smooth viewing experience. The algorithm is centered on the pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment over changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal form factor cameras, enabling their use in endoscopic surgical robotics or micro-invasive surgery.
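The control principle described above can be sketched as a single update step (purely illustrative: the parameter values, ROI handling, and function interface are assumptions, and the actual controller is implemented in VHDL, not Python):

```python
def adjust_led_power(roi_pixels, power, target_frac=0.03, sat_level=255,
                     gain=0.5, p_min=0.01, p_max=1.0):
    """One step of a hypothetical ROI-based illumination controller:
    reduce LED power when too many ROI pixels saturate, raise it when
    too few approach saturation. Returns the clipped new power."""
    frac_sat = sum(1 for v in roi_pixels if v >= sat_level) / len(roi_pixels)
    error = target_frac - frac_sat            # positive -> scene is under-lit
    power = power * (1.0 + gain * error / target_frac)
    return min(p_max, max(p_min, power))
```

Running this update at frame rate would drive the saturated-pixel fraction toward the target while the clipping bounds keep the drive within the LED's operating range.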
Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H
2009-09-15
We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
High energy X-ray pinhole imaging at the Z facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
McPherson, L. Armon; Ampleford, David J., E-mail: damplef@sandia.gov; Coverdale, Christine A.
A new high photon energy (hν > 15 keV) time-integrated pinhole camera (TIPC) has been developed as a diagnostic instrument at the Z facility. This camera employs five pinholes in a linear array for recording five images at once onto an image plate detector. Each pinhole may be independently filtered to yield five different spectral responses. The pinhole array is fabricated from a 1-cm thick tungsten block and is available with either straight pinholes or conical pinholes. Each pinhole within the array block is 250 μm in diameter. The five pinholes are splayed with respect to each other such that they point to the same location in space, and hence present the same view of the radiation source at the Z facility. The fielding distance from the radiation source is 66 cm and the nominal image magnification is 0.374. Initial experimental results from TIPC are shown to illustrate the performance of the camera.
Underwater Calibration of Dome Port Pressure Housings
NASA Astrophysics Data System (ADS)
Nocerino, E.; Menna, F.; Fassi, F.; Remondino, F.
2016-03-01
Underwater photogrammetry using consumer grade photographic equipment can be feasible for different applications, e.g. archaeology, biology, industrial inspections, etc. The use of a camera underwater can be very different from its terrestrial use due to the optical phenomena involved. The water and the camera pressure housing in front of the camera act as additional optical elements. Spherical dome ports are difficult to manufacture and consequently expensive, but at the same time they are the most useful for underwater photogrammetry as they keep the main geometric characteristics of the lens unchanged. Nevertheless, the manufacturing and alignment of dome port pressure housing components can produce unexpected changes in radial and decentring distortion, a source of systematic errors that can influence the final 3D measurements. The paper provides a brief introduction to the underwater optical phenomena involved in underwater photography, then presents the main differences between flat and dome ports, and finally discusses the effect of manufacturing on 3D measurements in two case studies.
NASA Technical Reports Server (NTRS)
Viton, M.; Courtes, G.; Sivan, J. P.; Decher, R.; Gary, A.
1985-01-01
Technical difficulties encountered using the Very Wide Field Camera (VWFC) during the Spacelab 1 Shuttle mission are reported. The VWFC is a wide-field, low resolution (5 arcmin half-width) photographic camera, capable of operating in both spectrometric and photometric modes. The bandpasses of the photometric mode of the VWFC are defined by three Al + MgF2 interference filters. A piggy-back spectrograph attached to the VWFC was used for observations in the spectrometric mode. A total of 48 astronomical frames were obtained using the VWFC, of which only 20 were considered to be of adequate quality for astronomical data processing. Preliminary analysis of the 28 poor-quality images revealed the following possible defects in the VWFC: darkness in the spacing frames, twilight/dawn UV straylight, and internal UV straylight. Improvements in the VWFC astronomical data processing scheme are expected to help identify and eliminate UV straylight sources in the future.
Human recognition in a video network
NASA Astrophysics Data System (ADS)
Bhanu, Bir
2009-10-01
Video networking is an emerging interdisciplinary field with significant and exciting scientific and technological challenges. It has great promise in solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, covering camera selection, hand-off and control, and anomaly detection. It addresses challenging questions for individual identification using gait and face at a distance and presents new techniques and their comparison for robust identification.
40 CFR 62.9110 - Identification of sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Sulfuric Acid Mist from Existing Sulfuric Acid Plants § 62.9110 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants. (1) National Zinc Co. in Bartlesville, Oklahoma. (2) Tulsa Chemical Co. in Tulsa, Oklahoma. [52 FR 3230, Feb. 3, 1987...
40 CFR 62.9110 - Identification of sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Sulfuric Acid Mist from Existing Sulfuric Acid Plants § 62.9110 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants. (1) National Zinc Co. in Bartlesville, Oklahoma. (2) Tulsa Chemical Co. in Tulsa, Oklahoma. [52 FR 3230, Feb. 3, 1987...
40 CFR 62.9110 - Identification of sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Sulfuric Acid Mist from Existing Sulfuric Acid Plants § 62.9110 Identification of sources. (a) Identification of sources. The plan includes the following sulfuric acid production plants. (1) National Zinc Co. in Bartlesville, Oklahoma. (2) Tulsa Chemical Co. in Tulsa, Oklahoma. [52 FR 3230, Feb. 3, 1987...
Visual object recognition for mobile tourist information systems
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander
2005-03-01
We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoors vision based object recognition that will extend state-of-the-art location and context aware services towards object based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested whether their field of view contains tourist sights that would point to more detailed information. Multimedia type data about related history, the architecture, or other related cultural context of historic or artistic relevance might be explored by a mobile user who is intending to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the users current field of view.
VizieR Online Data Catalog: Classification of 2XMM variable sources (Lo+, 2014)
NASA Astrophysics Data System (ADS)
Lo, K. K.; Farrell, S.; Murphy, T.; Gaensler, B. M.
2017-06-01
The 2XMMi-DR2 catalog (Cat. IX/40) consists of observations made with the XMM-Newton satellite between 2000 and 2008 and covers a sky area of about 420 deg². The observations were made using the European Photon Imaging Camera (EPIC) that consists of three CCD cameras - pn, MOS1, and MOS2 - and covers the energy range from 0.2 keV to 12 keV. There are 221012 unique sources in 2XMM-DR2, of which 2267 were flagged as variable by the XMM processing pipeline (Watson et al. 2009, J/A+A/493/339). The variability test used by the pipeline is a χ² test against the null hypothesis that the source flux is constant, with the probability threshold set at 10^-5. (1 data file).
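The pipeline's constant-flux test summarized above can be sketched as a simple χ² test: fit a constant (the inverse-variance weighted mean) and reject it when the χ² probability falls below the 10^-5 threshold. This is an illustrative reconstruction, not the actual XMM pipeline code; the helper name and synthetic light curves are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def is_variable(fluxes, errors, p_threshold=1e-5):
    """Chi-squared test against the null hypothesis of constant flux.

    fluxes, errors: per-observation flux measurements and 1-sigma errors.
    Flags the source as variable when the constant-flux null is rejected
    at the given probability threshold (10^-5 in the pipeline).
    """
    fluxes = np.asarray(fluxes, dtype=float)
    errors = np.asarray(errors, dtype=float)
    # The best-fit constant flux is the inverse-variance weighted mean.
    w = 1.0 / errors**2
    mean = np.sum(w * fluxes) / np.sum(w)
    stat = np.sum(((fluxes - mean) / errors) ** 2)
    p = chi2.sf(stat, df=len(fluxes) - 1)
    return p < p_threshold

# Synthetic light curves: one steady source, one flaring source.
steady = [10.1, 9.8, 10.0, 10.2, 9.9]
flaring = [10.0, 10.1, 9.9, 30.0, 10.0]
errs = [0.3] * 5
print(is_variable(steady, errs), is_variable(flaring, errs))  # False True
```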
Skylab 2: Photographic index and scene identification
NASA Technical Reports Server (NTRS)
Underwood, R. W.; Holland, J. W.
1973-01-01
A quick reference guide to the photographic imagery obtained on Skylab 2 is presented. Place names and descriptors used give sufficient information to identify frames for discussion purposes and are not intended to be used for ground nadir or geographic coverage purposes. The photographs are further identified with respect to the type of camera used in taking the pictures.
Biometrics Foundation Documents
2009-01-01
a digital form. The quality of the sensor used has a significant impact on the recognition results. Example “sensors” could be digital cameras...Difficult to control sensor and channel variances that significantly impact capabilities Not sufficiently distinctive for identification over large...expressions, hairstyle, glasses, hats, makeup, etc. have on face recognition systems? Minor variances, such as those mentioned, will have a moderate
Location-Based Augmented Reality for Mobile Learning: Algorithm, System, and Implementation
ERIC Educational Resources Information Center
Tan, Qing; Chang, William; Kinshuk
2015-01-01
AR technology can be considered as consisting of two main aspects: identification of a real-world object, and display of computer-generated digital content related to the identified object. The technical challenge of mobile AR is to identify the real-world object that the mobile device's camera is aimed at. In this paper, we will present a…
NASA Astrophysics Data System (ADS)
Giraudeau, A.; Pierron, F.
2010-06-01
The paper presents an experimental application of a method leading to the identification of the elastic and damping material properties of isotropic vibrating plates. The theory assumes that the sought parameters can be extracted from curvature and deflection fields measured over the whole surface of the plate at two particular instants of the vibrating motion. The experimental application consists of an original excitation fixture, a particular adaptation of an optical full-field measurement technique, a data preprocessing step giving the curvature and deflection fields, and finally the identification process using the Virtual Fields Method (VFM). The principle of the deflectometry technique used for the measurements is presented. First results of identification on an acrylic plate are presented and compared to reference values. Details about a new experimental arrangement, currently in progress, are presented. It uses a high-speed digital camera to oversample the full-field measurements.
OSIRIS-REx Asteroid Sample Return Mission Image Analysis
NASA Astrophysics Data System (ADS)
Chevres Fernandez, Lee Roger; Bos, Brent
2018-01-01
NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016, and the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners. NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch-And-Go Camera System (TAGCAMS) three-camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB. Assessment of the TAGCAMS in-flight performance using flight imagery was done to characterize camera performance. One specific area of investigation was bad pixel mapping. A recent phase of the mission, known as the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable”, possibly under-responsive pixels using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu.
These analyses will provide a broader understanding of the camera system's functionality, which will in turn aid the fly-down to the asteroid by supporting the selection of a suitable landing and sampling location.
NASA Astrophysics Data System (ADS)
Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.
2017-10-01
In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this approach is increasingly constrained, as most criminals nowadays are careful not to leave thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. CCTV footage can be used to identify suspects at a scene. However, because little software exists to automatically match faces in the footage against recorded photographs of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. The system detects and recognizes faces automatically, which will help law enforcement identify suspects when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
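As a rough illustration of the Principal Component Analysis approach mentioned above (eigenface-style matching), here is a minimal sketch on synthetic data. The function names and the toy "gallery" are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def train_pca(faces, k):
    """Compute the mean face and top-k principal components (eigenfaces).

    faces: (n_samples, n_pixels) array of flattened gallery images.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered gallery; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                 # (k, n_pixels)
    weights = centered @ components.T   # gallery projections in eigenspace
    return mean, components, weights

def match(probe, mean, components, weights):
    """Return the index of the gallery face closest to the probe in eigenspace."""
    w = (probe - mean) @ components.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))

# Synthetic demo: 5 "enrolled" faces, probe is a noisy copy of face 3.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 64))
probe = gallery[3] + 0.05 * rng.normal(size=64)
mean, comps, w = train_pca(gallery, k=4)
print(match(probe, mean, comps, w))  # 3
```

With real face images the gallery rows would be flattened, intensity-normalized photographs; the nearest-neighbour rule in eigenspace is the classic eigenface recognizer.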
NASA Astrophysics Data System (ADS)
Ma, Chen; Cheng, Dewen; Xu, Chen; Wang, Yongtian
2014-11-01
A fundus camera is a complex optical system for retinal photography, involving illumination and imaging of the retina. Stray light is one of the most significant problems of a fundus camera, because the retina is so minimally reflective that back reflections from the cornea and any other optical surface are likely to be significantly greater than the light reflected from the retina. To provide maximum illumination to the retina while eliminating back reflections, a novel design of the illumination system used in a portable fundus camera is proposed. Internal illumination, in which the eyepiece is shared by both the illumination system and the imaging system but the condenser and the objective are separated by a beam splitter, is adopted for its high efficiency. To eliminate the strong stray light caused by the corneal center and make full use of light energy, the annular stop in conventional illumination systems is replaced by a fiber-coupled, ring-shaped light source that forms an annular beam. Parameters including the size and divergence angle of the light source are specially designed. To weaken the stray light, a polarized light source is used, and an analyzer plate is placed after the beam splitter in the imaging system. Simulation results show that the illumination uniformity at the fundus exceeds 90%, and the stray light is within 1%. Finally, a proof-of-concept prototype is developed and retinal photos of an ophthalmophantom are captured. The experimental results show that ghost images and stray light have been greatly reduced, to a level at which professional diagnosis is not impaired.
VizieR Online Data Catalog: Selecting IRAC counterparts to SMGs (Alberts+, 2013)
NASA Astrophysics Data System (ADS)
Alberts, S.; Wilson, G. W.; Lu, Y.; Johnson, S.; Yun, M. S.; Scott, K. S.; Pope, A.; Aretxaga, I.; Ezawa, H.; Hughes, D. H.; Kawabe, R.; Kim, S.; Kohno, K.; Oshima, T.
2014-05-01
We present a new submm/mm galaxy counterpart identification technique which builds on the use of Spitzer Infrared Array Camera (IRAC) colours as discriminators between likely counterparts and the general IRAC galaxy population. Using 102 radio- and Submillimeter Array-confirmed counterparts to AzTEC sources across three fields [Great Observatories Origins Deep Survey-North, -South and Cosmic Evolution Survey (COSMOS)], we develop a non-parametric IRAC colour-colour characteristic density distribution, which, when combined with positional uncertainty information via likelihood ratios, allows us to rank all potential IRAC counterparts around submillimetre galaxies (SMGs) and calculate the significance of each ranking via the reliability factor. We report all robust and tentative radio counterparts to SMGs, the first such list available for AzTEC/COSMOS, as well as the highest ranked IRAC counterparts for all AzTEC SMGs in these fields as determined by our technique. We demonstrate that the technique is free of radio bias and thus applicable regardless of radio detections. For observations made with a moderate beam size (~18"), this technique identifies ~85% of SMG counterparts. For much larger beam sizes (>~30"), we report identification rates of 33-49%. Using simulations, we demonstrate that this technique is an improvement over using positional information alone for observations with facilities such as AzTEC on the Large Millimeter Telescope and Submillimeter Common User Bolometer Array 2 on the James Clerk Maxwell Telescope. (3 data files).
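A minimal sketch of likelihood-ratio counterpart ranking in the spirit of the technique above. The paper builds a non-parametric IRAC colour-colour characteristic density; here the colour-density values are supplied as plain numbers, and the positional term and reliability formula follow one common convention from the counterpart-identification literature, so treat every name and value below as an assumption rather than the paper's exact method.

```python
import numpy as np

def likelihood_ratio(offset_arcsec, sigma_pos, q_color, n_color):
    """Toy likelihood ratio for one candidate counterpart.

    offset_arcsec: angular separation between the SMG centroid and candidate.
    sigma_pos: positional uncertainty of the (sub)mm detection (arcsec).
    q_color / n_color: density of true counterparts vs. field galaxies at
    the candidate's IRAC colours (non-parametric in the paper; numbers here).
    """
    # Gaussian positional term weighted by the colour density ratio.
    f_pos = np.exp(-offset_arcsec**2 / (2 * sigma_pos**2))
    return f_pos * q_color / n_color

# Two hypothetical candidates around one SMG position:
L1 = likelihood_ratio(2.0, sigma_pos=3.0, q_color=0.8, n_color=0.1)
L2 = likelihood_ratio(6.0, sigma_pos=3.0, q_color=0.3, n_color=0.2)

# One common reliability convention: R_i = L_i / (sum_j L_j + (1 - Q)),
# where Q is the fraction of counterparts recoverable at all (assumed).
Q = 0.9
R1 = L1 / (L1 + L2 + (1 - Q))
print(R1 > 0.5)  # True: the closer candidate with SMG-like colours dominates
```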
Gutierrez-Heredia, Luis; Benzoni, Francesca; Murphy, Emma; Reynaud, Emmanuel G.
2016-01-01
Coral reefs host nearly 25% of all marine species and provide food sources for half a billion people worldwide, yet only a very small percentage have been surveyed. Advances in technology and processing, along with affordable underwater cameras and Internet availability, give us the possibility to provide tools and software to survey entire coral reefs. Holistic ecological analyses of corals require not only the community view (10s to 100s of meters), but also single-colony analysis as well as corallite identification. As corals are three-dimensional, classical approaches to determine percent cover and structural complexity across spatial scales are inefficient, time-consuming and limited to experts. Here we propose an end-to-end approach to estimate these parameters using low-cost equipment (GoPro, Canon) and freeware (123D Catch, Meshmixer and Netfabb), allowing every community to participate in surveys and monitoring of their coral ecosystem. We demonstrate our approach on 9 species of underwater colonies ranging in size and morphology. 3D models of underwater colonies, fresh samples and bleached skeletons with high-quality texture mapping and detailed topographic morphology were produced, and Surface Area and Volume measurements (parameters widely used for ecological and coral health studies) were calculated and analysed. Moreover, we integrated collected sample models with micro-photogrammetry models of individual corallites to aid identification and colony- and polyp-scale analysis. PMID:26901845
Spickermann, Gunnar; Friederich, Fabian; Roskos, Hartmut G; Bolívar, Peter Haring
2009-11-01
We present a 64x48 pixel 2D electro-optical terahertz (THz) imaging system using a photonic mixing device time-of-flight camera as an optical demodulating detector array. The combination of electro-optic detection with a time-of-flight camera increases sensitivity drastically, enabling the use of a nonamplified laser source for high-resolution real-time THz electro-optic imaging.
Radio astronomy Explorer B antenna aspect processor
NASA Technical Reports Server (NTRS)
Miller, W. H.; Novello, J.; Reeves, C. C.
1972-01-01
The antenna aspect system used on the Radio Astronomy Explorer B spacecraft is described. This system consists of two facsimile cameras, a data encoder, and a data processor. Emphasis is placed on the discussion of the data processor, which contains a data compressor and a source encoder. With this compression scheme a compression ratio of 8 is achieved on a typical line of camera data. These compressed data are then convolutionally encoded.
Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera
NASA Astrophysics Data System (ADS)
Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li
2014-09-01
With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce patient discomfort and avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation, without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye stared system was used for stimulating accommodation and fixating the imaging area. The illumination sources and imaging camera moved in linkage for focusing on and imaging different layers. Four subjects with diverse degrees of myopia were imaged. Based on the optical properties of the human eye, the eye stared system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light can be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye stared system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were clearly imaged.
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie
2010-10-10
The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.
NASA Astrophysics Data System (ADS)
Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo
2018-02-01
We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six 3.5 cm CsI(Tl) scintillator cubes, each of which was read out by super-bialkali photo-multiplier tubes (PMTs). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination, from the order of µSv/h down to natural background levels. Our proposed technique can be easily used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.
ePix100 camera: Use and applications at LCLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carini, G. A., E-mail: carini@slac.stanford.edu; Alonso-Mori, R.; Blaj, G.
2016-07-27
The ePix100 x-ray camera is a new system designed and built at SLAC for experiments at the Linac Coherent Light Source (LCLS). The camera is the first member of a family of detectors built around a single hardware and software platform, supporting a variety of front-end chips. With a readout speed of 120 Hz, matching the LCLS repetition rate, a noise lower than 80 e- rms and pixels of 50 µm × 50 µm, this camera offers a viable alternative to fast readout, direct conversion, scientific CCDs in imaging mode. The detector, designed for applications such as X-ray Photon Correlation Spectroscopy (XPCS) and wavelength dispersive X-ray Emission Spectroscopy (XES) in the energy range from 2 to 10 keV and above, comprises up to 0.5 Mpixels in a very compact form factor. In this paper, we report the performance of the camera during its first use at LCLS.
NASA Astrophysics Data System (ADS)
Jaanimagi, Paul A.
1992-01-01
This volume presents papers grouped under the topics on advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.
NASA Astrophysics Data System (ADS)
Jeon, Hosang; Kim, Hyunduk; Cha, Bo Kyung; Kim, Jong Yul; Cho, Gyuseong; Chung, Yong Hyun; Yun, Jong-Il
2009-06-01
Presently, the gamma camera system is widely used in various medical diagnostic, industrial and environmental fields. Hence, quantitative and effective evaluation of its imaging performance is essential for design and quality assurance. The National Electrical Manufacturers Association (NEMA) standards for gamma camera evaluation are insufficient for sensitive evaluation. In this study, the modulation transfer function (MTF) and normalized noise power spectrum (NNPS) are suggested for evaluating the performance of a small gamma camera with changeable pinhole collimators, using Monte Carlo simulation. We simulated the system with a cylinder and a disk source, and seven different lead pinhole collimators with pinhole diameters from 1 to 4 mm. The MTF and NNPS data were obtained from output images and were compared with full-width at half-maximum (FWHM), sensitivity and differential uniformity. The results show that MTF and NNPS are effective, novel standards for evaluating the imaging performance of gamma cameras beyond the conventional NEMA standards.
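One way to sketch the NNPS evaluation described above: estimate the noise power spectrum from a stack of uniformly exposed frames and normalise by the mean signal. This is a generic textbook-style estimator on synthetic data, not the authors' code; the function name and normalisation conventions are assumptions.

```python
import numpy as np

def nnps(flats, pixel_pitch=1.0):
    """Normalised noise power spectrum from a stack of flat-field images.

    flats: (n, Ny, Nx) stack acquired under identical, uniform exposure.
    Returns a (Ny, Nx) NNPS estimate (DC term included).
    """
    flats = np.asarray(flats, dtype=float)
    n, ny, nx = flats.shape
    mean_signal = flats.mean()
    # Subtract the per-pixel stack mean to remove fixed-pattern structure.
    noise = flats - flats.mean(axis=0)
    ps = np.abs(np.fft.fft2(noise, axes=(-2, -1))) ** 2
    nps = ps.mean(axis=0) * pixel_pitch**2 / (nx * ny)
    return nps / mean_signal**2  # normalise by the squared mean signal

# White (uncorrelated) noise should yield an approximately flat NNPS.
rng = np.random.default_rng(1)
stack = 100.0 + rng.normal(scale=2.0, size=(32, 64, 64))
spec = nnps(stack)
print(spec.shape)  # (64, 64)
```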
Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward
2014-01-01
Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications
NASA Astrophysics Data System (ADS)
Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.
2017-11-01
Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between the two camera types is the presence of the mirror mechanism, which changes the way the incoming beam reaches the sensor through the lens. In this study, two digital cameras, one with a mirror (Nikon D700) and one without (Sony a6000), were used for a close-range photogrammetric application on a rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from the photographs taken with both cameras was compared using the differences between field and model coordinates obtained after photograph alignment. In addition, cross sections were created on the 3D models for both data sources; the maximum area difference between them is quite small because the sections almost overlap. The mirrored camera was more self-consistent with respect to changes in model coordinates for models created from photographs taken at different times, with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced from their photographs, can be used for terrestrial photogrammetric studies.
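The accuracy comparison described above (differences between field and model coordinates after alignment) amounts to a checkpoint RMSE. A minimal sketch follows; the coordinates are invented for illustration and are not the study's data.

```python
import numpy as np

def rmse(field_xyz, model_xyz):
    """Root-mean-square error between checkpoint coordinates measured in
    the field and the same points read from the aligned photogrammetric model."""
    d = np.asarray(field_xyz, dtype=float) - np.asarray(model_xyz, dtype=float)
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

# Hypothetical checkpoints (metres) on a surveyed surface and in one model.
field = [[0, 0, 0], [1, 0, 0.5], [1, 1, 0.4], [0, 1, 0.2]]
model_a = [[0.004, -0.002, 0.003], [1.003, 0.001, 0.498],
           [0.997, 1.002, 0.404], [0.001, 0.996, 0.198]]
print(round(rmse(field, model_a), 4))  # 0.0048
```

Comparing this RMSE across the two cameras' models (and across repeat surveys) is one simple way to quantify the consistency discussed in the abstract.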
Condenser for photolithography system
Sweatt, William C.
2004-03-02
A condenser for a photolithography system, in which a mask image from a mask is projected onto a wafer through a camera having an entrance pupil, includes a source of propagating radiation, a first mirror illuminated by the radiation, a mirror array illuminated by the radiation reflected from said first mirror, and a second mirror illuminated by the radiation reflected from the array. The mirror array includes a plurality of micromirrors. Each of the micromirrors is selectively actuatable independently of each other. The first mirror and the second mirror are disposed such that the source is imaged onto a plane of the mask and the mirror array is imaged into the entrance pupil of the camera.
Evaluation of S190A radiometric exposure test data
NASA Technical Reports Server (NTRS)
Lockwood, H. E.; Goodding, R. A.
1974-01-01
The S190A preflight radiometric exposure test data generated as part of preflight and system test of KM-002 Sequence 29 on flight camera S/N 002 was analyzed. The analysis was to determine camera system transmission using available data which included: (1) films exposed to a calibrated light source subject; (2) filter transmission data; (3) calibrated light source data; (4) density vs. log10 exposure curves for the films; and (5) spectral sensitometric data for the films. The procedure used is outlined, and includes the data and a transmission matrix as a function of field position for nine measured points on each station-film-filter-aperture-shutter speed combination.
Confocal retinal imaging using a digital light projector with a near infrared VCSEL source
NASA Astrophysics Data System (ADS)
Muller, Matthew S.; Elsner, Ann E.
2018-02-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1" LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.
Cell phone camera ballistics: attacks and countermeasures
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Liu, Huajian; Fan, Peishuai; Katzenbeisser, Stefan
2010-01-01
Multimedia forensics deals with the analysis of multimedia data to gather information on its origin and authenticity. One therefore needs to distinguish between classical criminal forensics (which today also uses multimedia data as evidence) and multimedia forensics, where the actual case is based on a media file. One example of the latter is camera forensics, where pixel error patterns are used as fingerprints identifying a camera as the source of an image. Of course, multimedia forensics can become a tool for criminal forensics when evidence used in a criminal investigation is likely to be manipulated. At this point an important question arises: How reliable are these algorithms? Can a judge trust their results? How easy are they to manipulate? In this work we show how camera forensics can be attacked and introduce a potential countermeasure against these attacks.
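The pixel-error-pattern fingerprinting mentioned above can be sketched as follows: average high-pass noise residuals of many images into a per-camera fingerprint, then attribute a probe image to the camera whose fingerprint correlates best with its residual. Real systems use wavelet denoising for the residual (e.g. PRNU estimation); this box-filter version on synthetic sensor patterns is only a toy illustration, and all names are assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box filter (wrap-around edges; fine for a sketch)."""
    out = np.zeros_like(img, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / k**2

def residual(img):
    """High-pass noise residual of one image."""
    return img - box_blur(img)

def fingerprint(images):
    """Average the residuals of many images from one camera."""
    return np.mean([residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalised cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic demo: two cameras with distinct multiplicative sensor patterns.
rng = np.random.default_rng(42)
pat_a = 1 + 0.02 * rng.normal(size=(64, 64))  # camera A's hidden pattern
pat_b = 1 + 0.02 * rng.normal(size=(64, 64))  # camera B's hidden pattern

def shots(pat, n=20):
    # Flat-ish frames: scalar exposure times the pattern, plus sensor noise.
    return [rng.uniform(50, 200) * pat + rng.normal(0, 2, pat.shape)
            for _ in range(n)]

fp_a = fingerprint(shots(pat_a))
fp_b = fingerprint(shots(pat_b))
probe = residual(rng.uniform(50, 200) * pat_a + rng.normal(0, 2, (64, 64)))
print(ncc(probe, fp_a) > ncc(probe, fp_b))  # True: camera A identified
```

Attacks of the kind the paper studies would aim to suppress or transplant this pattern, which is why the correlation test alone is not tamper-proof.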
A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors
NASA Astrophysics Data System (ADS)
Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.
2018-04-01
The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.
A new compact, high-sensitivity neutron imaging system
NASA Astrophysics Data System (ADS)
Caillaud, T.; Landoas, O.; Briat, M.; Rossé, B.; Thfoin, I.; Philippe, F.; Casner, A.; Bourgade, J. L.; Disdier, L.; Glebov, V. Yu.; Marshall, F. J.; Sangster, T. C.; Park, H. S.; Robey, H. F.; Amendt, P.
2012-10-01
We have developed a new small neutron imaging system (SNIS) diagnostic for the OMEGA laser facility. The SNIS uses a penumbral coded aperture and has been designed to record images from low-yield (10^9-10^10 neutrons) implosions such as those using deuterium as the fuel. This camera was tested at OMEGA in 2009 on a rugby hohlraum energetics experiment, where it recorded an image at a yield of 1.4 × 10^10. The resolution of this image was 54 μm and the camera was located only 4 meters from target chamber centre. We recently improved the instrument by adding a cooled CCD camera. The sensitivity of the new camera has been fully characterized using a linear accelerator and a 60Co γ-ray source. The calibration showed that the signal-to-noise ratio could be improved by using raw binning detection.
Submillimeter Observations of CLASH 2882 and the Evolution of Dust in this Galaxy
NASA Technical Reports Server (NTRS)
Dwek, Eli; Staguhn, Johannes; Arendt, Richard G; Kovacs, Attila; Decarli, Roberto; Egami, Eiichi; Michalowski, Michal J.; Rawle, Timothy D.; Toft, Sune; Walter, Fabian
2015-01-01
Two millimeter observations of the MACS J1149.6+2223 cluster have detected a source that was consistent with the location of the lensed MACS 1149-JD galaxy at z = 9.6. A positive identification would have rendered this galaxy the youngest dust-forming galaxy in the universe. Follow-up observations with the AzTEC 1.1 mm camera and the IRAM NOrthern Extended Millimeter Array (NOEMA) at 1.3 mm have not confirmed this association. In this paper we show that the NOEMA observations associate the 2 mm source with [PCB2012] 2882, source number 2882 in the Cluster Lensing And Supernova survey with Hubble (CLASH) catalog of MACS J1149.6+2223. This source, hereafter referred to as CLASH 2882, is a gravitationally lensed spiral galaxy at z = 0.99. We combine the Goddard IRAM Superconducting 2-Millimeter Observer (GISMO) 2 mm and NOEMA 1.3 mm fluxes with other (rest frame) UV to far-IR observations to construct the full spectral energy distribution of this galaxy, and derive its star formation history and its stellar and interstellar dust content. The current star formation rate of the galaxy is 54/μ M⊙ yr^-1, and its dust mass is about 5 × 10^7/μ M⊙, where μ is the lensing magnification factor for this source, which has a mean value of 2.7. The inferred dust mass is higher than the maximum dust mass that can be produced by core-collapse supernovae and evolved AGB stars. As with many other star-forming galaxies, most of the dust mass in CLASH 2882 must have been accreted in the dense phases of the interstellar medium.
THE SPITZER-IRAC POINT-SOURCE CATALOG OF THE VELA-D CLOUD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strafella, F.; Elia, D.; Campeggio, L., E-mail: francesco.strafella@le.infn.i, E-mail: loretta.campeggio@le.infn.i, E-mail: eliad@oal.ul.p
2010-08-10
This paper presents the observations of Cloud D in the Vela Molecular Ridge, obtained with the Infrared Array Camera (IRAC) on board the Spitzer Space Telescope at the wavelengths λ = 3.6, 4.5, 5.8, and 8.0 μm. A photometric catalog of point sources, covering a field of approximately 1.2 deg^2, has been extracted and complemented with additional available observational data in the millimeter region. Previous observations of the same region, obtained with the Spitzer MIPS camera in the photometric bands at 24 μm and 70 μm, have also been reconsidered to allow an estimate of the spectral slope of the sources in a wider spectral range. A total of 170,299 point sources, detected at the 5σ sensitivity level in at least one of the IRAC bands, have been reported in the catalog. There were 8796 sources for which good-quality photometry was obtained in all four IRAC bands. For this sample, a preliminary characterization of the young stellar population based on the determination of the spectral slope is discussed; combining this with diagnostics in the color-magnitude and color-color diagrams, the relative population of young stellar objects (YSOs) in different evolutionary classes has been estimated and a total of 637 candidate YSOs have been selected. The main differences in their relative abundances have been highlighted and a brief account of their spatial distribution is given. The star formation rate has also been estimated and compared with the values derived for other star-forming regions. Finally, an analysis of the spatial distribution of the sources by means of the two-point correlation function shows that the younger population, constituted by the Class I and flat-spectrum sources, is significantly more clustered than the Class II and III sources.
Spacecraft camera image registration
NASA Technical Reports Server (NTRS)
Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)
1987-01-01
A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).
In-Situ Cameras for Radiometric Correction of Remotely Sensed Data
NASA Astrophysics Data System (ADS)
Kautz, Jess S.
The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration, and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting a web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental set-up, then explore how the system error changes with different cameras, environmental set-ups, and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and of the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated.
The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets, and levels of system error, to find the number of cameras needed for a full-scale implementation.
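The empirical line method (ELM) correction underlying this dissertation can be illustrated with a minimal sketch: per spectral band, a linear gain/offset is fit between at-sensor values over calibration targets of known reflectance, then applied to the rest of the scene. All reflectances and sensor values below are invented for illustration, not taken from the dissertation.

```python
import numpy as np

# Known ground reflectances of four calibration targets (hypothetical values)
known_refl = np.array([0.05, 0.20, 0.45, 0.60])

# At-sensor values measured over those targets, one row per spectral band
at_sensor = np.array([[12.0, 30.1, 60.5, 78.9],   # band 1
                      [ 9.5, 25.0, 52.3, 68.0]])  # band 2

# ELM: least-squares fit of reflectance = gain * sensor_value + offset, per band
coef = [np.polyfit(at_sensor[b], known_refl, 1) for b in range(2)]

# Apply the band-1 correction to arbitrary scene pixels
scene_b1 = np.array([20.0, 45.0, 70.0])
refl_b1 = np.polyval(coef[0], scene_b1)
print(np.round(refl_b1, 3))
```

More calibration targets tighten the fit, which is why the dissertation's closing simulation varies the number of targets against the resulting correction error.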
Optical Meteor Systems Used by the NASA Meteoroid Environment Office
NASA Technical Reports Server (NTRS)
Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.
2015-01-01
The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system, to study cm- and mm-size meteors respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed distribution determination, characterization of meteor showers and sporadic sources, and for informing the public on bright meteor events. The NASA Wide Field Meteor Network was established in December 2012 with two cameras and expanded to eight cameras in December 2014. The two-camera configuration saw 5470 meteors over two years of operation, and the network has detected 3423 meteors in the first five months of operation with eight cameras (Dec 12, 2014 - May 12, 2015). We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20 degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis. Every morning the server automatically generates an e-mail and a web page containing an analysis of the previous night's events. The current status of the networks will be described, along with preliminary results. In addition, future projects, including CCD photometry and a broadband meteor color camera system, will be discussed.
NASA Astrophysics Data System (ADS)
Gutschwager, Berndt; Hollandt, Jörg
2017-01-01
We present a novel method of nonuniformity correction (NUC) of infrared cameras and focal plane arrays (FPAs) in a wide optical spectral range by reading radiance temperatures and by applying a radiation source with an unknown and spatially nonhomogeneous radiance temperature distribution. The benefit of this novel method is that it works with the display and calculation of radiance temperatures, it can be applied to radiation sources of arbitrary spatial radiance temperature distribution, and it only requires sufficient temporal stability of this distribution during the measurement process. In contrast, a previously presented method calculated the NUC from readings of monitored radiance values. Both methods are based on the recording of several (at least three) images of a radiation source and a deliberate row and line shift of these subsequent images relative to the first, primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance temperature distribution and a thermal imager of a predefined nonuniform FPA responsivity is presented.
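The shift-based idea in this abstract can be sketched as a toy linear inverse problem: with at least three recordings of an unknown but temporally stable scene, each deliberately shifted by a known number of pixels, the per-pixel nonuniformity and the scene values become jointly solvable, up to one gauge constant. The 1-D, offset-only model below is our own simplification for illustration, not the authors' formulation (which works in radiance temperatures on a 2-D FPA).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32                                   # pixels along one FPA line (1-D toy)
shifts = [0, 1, 2]                       # deliberate pixel shifts between frames
nT = n + max(shifts)                     # scene samples seen across all frames

scene = 20.0 + 5.0 * np.sin(np.linspace(0.0, 3.0, nT))  # unknown, nonhomogeneous
offset = rng.normal(0.0, 0.8, n)                        # unknown per-pixel error

# Recording k observes frames[k][i] = scene[i + shifts[k]] + offset[i]
frames = [scene[s:s + n] + offset for s in shifts]

# Stack the linear equations; one extra row fixes the gauge sum(offset) = 0,
# since a constant can move freely between scene and offset.
A = np.zeros((len(shifts) * n + 1, nT + n))
b = np.zeros(len(shifts) * n + 1)
for k, s in enumerate(shifts):
    for i in range(n):
        r = k * n + i
        A[r, i + s] = 1.0                # scene unknown
        A[r, nT + i] = 1.0               # offset unknown
        b[r] = frames[k][i]
A[-1, nT:] = 1.0                         # gauge row

sol, *_ = np.linalg.lstsq(A, b, rcond=None)
o_hat = sol[nT:]                         # recovered nonuniformity (zero-mean)
err = float(np.max(np.abs(o_hat - (offset - offset.mean()))))
print(err)
```

In this noiseless toy the recovery is exact up to the gauge constant; the gauge row is why only the zero-mean part of the offset pattern is identifiable.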
NASA Astrophysics Data System (ADS)
Barbosa, F.; Bessuille, J.; Chudakov, E.; Dzhygadlo, R.; Fanelli, C.; Frye, J.; Hardin, J.; Kelsey, J.; Patsyuk, M.; Schwarz, C.; Schwiening, J.; Stevens, J.; Shepherd, M.; Whitlatch, T.; Williams, M.
2017-12-01
The GlueX DIRC (Detection of Internally Reflected Cherenkov light) detector is being developed to upgrade the particle identification capabilities in the forward region of the GlueX experiment at Jefferson Lab. The GlueX DIRC will utilize four existing decommissioned BaBar DIRC bar boxes, which will be oriented to form a plane roughly 4 m away from the fixed target of the experiment. A new photon camera has been designed that is based on the SuperB FDIRC prototype. The full GlueX DIRC system will consist of two such cameras, with the first planned to be built and installed in 2017. We present the current status of the design and R&D, along with the future plans of the GlueX DIRC detector.
Synchronization of video recording and laser pulses including background light suppression
NASA Technical Reports Server (NTRS)
Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)
2004-01-01
An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning of the laser pulse in the proper video field allows, after recording, for viewing of the laser light image with a video monitor using the pause mode on a standard cassette-type VCR. This invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides for background light suppression by increasing shutter speed during the frame in which the laser light image is captured. This results in the laser light appearing in one frame in which the background scene is suppressed while the laser light is unaffected; in all other frames, the shutter speed is slower, allowing for normal recording of the background scene. This invention also allows for arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.
Small format digital photogrammetry for applications in the earth sciences
NASA Astrophysics Data System (ADS)
Rieke-Zapp, Dirk
2010-05-01
Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for application by earth scientists working in the field. In this case a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying. At the same time, a lack of photogrammetric training requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges. A) Definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor. B) Optimization of image acquisition and the geometric stability of the image block. C) Identification of a small camera suitable for precise measurements in the field. D) Optimization of the workflow from image acquisition to preparation of images for stereo measurements. E) Introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping pong balls used in a previous setup and available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass.
The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a direction measurement from a compass. The wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test required calibration on the job, as the interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was less than that of the Ricoh, the pixel pitch of the Sigma cameras was much larger. Hence, the same mechanical movement would have less per-pixel effect for the Sigma cameras than for the Ricoh camera. A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg.
A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. With geometrically stable cameras, image acquisition to cover the area of interest with stereo pairs for analysis was fairly straightforward. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 x 18 x 20 m^3 volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students in this technique. The technique does not sacrifice precision for ease of use; therefore, successful users of the presented method easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated software package for camera calibration, allowed beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.
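The scale-transfer step described above, i.e. introducing laser-measured distances between the wooden spheres into an otherwise scale-free photogrammetric model, can be sketched as follows. The model coordinates and measured distances are invented for illustration; a real bundle adjustment would instead weight these distances as observations rather than apply a mean ratio afterwards.

```python
import numpy as np

# Model-space coordinates (arbitrary scale) of the three wooden spheres,
# e.g. from a free-network photogrammetric adjustment; values illustrative.
model = np.array([[0.0, 0.0, 0.0],
                  [1.8, 0.1, 0.0],
                  [0.9, 1.5, 0.2]])

# Laser-distance-meter measurements between sphere pairs, in metres
# (accurate to 1 mm, 1 sigma); hypothetical values for this sketch.
measured = {(0, 1): 3.606, (0, 2): 3.502, (1, 2): 3.320}

# Scale factor = mean ratio of measured to model-space distance over all pairs
ratios = [d / np.linalg.norm(model[i] - model[j]) for (i, j), d in measured.items()]
scale = float(np.mean(ratios))

scaled = model * scale   # model now expressed in metres
print(round(scale, 3))
```

Averaging over all three pair ratios spreads the 1 mm measurement uncertainty rather than trusting a single distance.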
Phua, Joe
2016-05-01
This study examined the effect of the audience's similarity to, and parasocial identification with, spokespersons in obesity public service announcements, on perceived source credibility, and diet and exercise self-efficacy. The results (N = 200) indicated that perceived similarity to the spokesperson was significantly associated with three dimensions of source credibility (competence, trustworthiness, and goodwill), each of which in turn influenced parasocial identification with the spokesperson. Parasocial identification also exerted a positive impact on the audiences' diet and exercise self-efficacy. Additionally, significant differences were found between overweight viewers and non-overweight viewers on perceived similarity, parasocial identification with the spokesperson, and source credibility. © The Author(s) 2014.
Computer aided photographic engineering
NASA Technical Reports Server (NTRS)
Hixson, Jeffrey A.; Rieckhoff, Tom
1988-01-01
High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.
NASA Astrophysics Data System (ADS)
Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.
2012-03-01
Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.
Study and comparison of different sensitivity models for a two-plane Compton camera.
Muñoz, Enrique; Barrio, John; Bernabéu, José; Etxebeste, Ane; Lacasta, Carlos; Llosá, Gabriela; Ros, Ana; Roser, Jorge; Oliver, Josep F
2018-06-25
Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with ^22Na sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.
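To illustrate where a sensitivity matrix enters iterative Compton-camera reconstruction, here is a generic sensitivity-weighted MLEM update on a toy system. The random system matrix merely stands in for the Compton-cone geometry and is in no way the authors' analytical sensitivity model; the point is only how the per-voxel sensitivity divides the multiplicative update.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_ev = 20, 60          # toy FoV voxels and recorded events

# System matrix t[i, j]: probability that activity in voxel j produces
# event i; random entries stand in for the physical model.
t = rng.random((n_ev, n_pix))
s = t.sum(axis=0)             # sensitivity: total detection probability per voxel

x_true = np.zeros(n_pix)
x_true[7] = 1.0               # point-like source in voxel 7
y = t @ x_true                # noiseless measured data

# Sensitivity-weighted MLEM iterations:
# x_j <- x_j / s_j * sum_i t_ij * y_i / (t x)_i
x = np.ones(n_pix)
for _ in range(500):
    x *= (t.T @ (y / (t @ x))) / s

print(int(np.argmax(x)))
```

Without the division by s, voxels that the camera sees poorly would be systematically under-reconstructed, which is exactly the intensity-recovery problem the abstract's sensitivity comparison addresses.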
Mask technology for EUV lithography
NASA Astrophysics Data System (ADS)
Bujak, M.; Burkhart, Scott C.; Cerjan, Charles J.; Kearney, Patrick A.; Moore, Craig E.; Prisbrey, Shon T.; Sweeney, Donald W.; Tong, William M.; Vernon, Stephen P.; Walton, Christopher C.; Warrick, Abbie L.; Weber, Frank J.; Wedowski, Marco; Wilhelmsen, Karl C.; Bokor, Jeffrey; Jeong, Sungho; Cardinale, Gregory F.; Ray-Chaudhuri, Avijit K.; Stivers, Alan R.; Tejnil, Edita; Yan, Pei-yang; Hector, Scott D.; Nguyen, Khanh B.
1999-04-01
Extreme UV Lithography (EUVL) is one of the leading candidates for next-generation lithography, which will decrease the critical feature size to below 100 nm within 5 years. EUVL uses 10-14 nm light as envisioned by the EUV Limited Liability Company, a consortium formed by Intel and supported by Motorola and AMD to perform R&D work at three national laboratories. Much work has already taken place, with the first prototypical cameras operational at 13.4 nm using low-energy laser plasma EUV light sources to investigate issues including the source, camera, electromechanical and system issues, photoresists, and of course the masks. EUV lithography masks are fundamentally different from conventional photolithographic masks, as they are reflective instead of transmissive. EUV light at 13.4 nm is rapidly absorbed by most materials; thus all light transmission within the EUVL system from source to silicon wafer, including EUV reflected from the mask, is performed by multilayer mirrors in vacuum.
NASA Astrophysics Data System (ADS)
Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.
2017-05-01
These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images have been executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
Condenser for extreme-UV lithography with discharge source
Sweatt, William C.; Kubiak, Glenn D.
2001-01-01
Condenser system, for use with a ringfield camera in projection lithography, employs quasi grazing-incidence collector mirrors that are coated with a suitable reflective metal such as ruthenium to collect radiation from a discharge source to minimize the effect of contaminant accumulation on the collecting mirrors.
Imaging Dot Patterns for Measuring Gossamer Space Structures
NASA Technical Reports Server (NTRS)
Dorrington, A. A.; Danehy, P. M.; Jones, T. W.; Pappa, R. S.; Connell, J. W.
2005-01-01
A paper describes a photogrammetric method for measuring the changing shape of a gossamer (membrane) structure deployed in outer space. Such a structure is typified by a solar sail comprising a transparent polymeric membrane aluminized on its Sun-facing side and coated black on the opposite side. Unlike some prior photogrammetric methods, this method does not require an artificial light source or the attachment of retroreflectors to the gossamer structure. In a basic version of the method, the membrane contains a fluorescent dye, and the front and back coats are removed in matching patterns of dots. The dye in the dots absorbs some sunlight and fluoresces at a longer wavelength in all directions, thereby enabling acquisition of high-contrast images from almost any viewing angle. The fluorescent dots are observed by one or more electronic camera(s) on the Sun side, the shade side, or both sides. Filters that pass the fluorescent light and suppress most of the solar spectrum are placed in front of the camera(s) to increase the contrast of the dots against the background. The dot image(s) in the camera(s) are digitized, then processed by use of commercially available photogrammetric software.
40 CFR 62.102 - Identification of sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Fluoride Emissions from Phosphate Fertilizer Plants § 62.102 Identification of sources. The plan currently does not identify any sources subject to its fluoride emission limits. Landfill Gas Emissions From...
Open Source Initiative Powers Real-Time Data Streams
NASA Technical Reports Server (NTRS)
2014-01-01
Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.
NASA Astrophysics Data System (ADS)
Yu, Y.; Kalashnikova, O. V.; Garay, M. J.; Notaro, M.
2017-12-01
Global arid and semi-arid regions supply 1100 to 5000 Tg of aeolian dust to the atmosphere each year, primarily from North Africa and secondarily from the Middle East. Previous dust source identification methods, based on either remotely sensed aerosol optical depth (AOD) or dust activity, yield distinct dust source maps, largely due to the limitations of each method and remote-sensing product. Here we apply a novel motion-based method for dust source identification. Dust plume thickness and motion vectors from the Multi-angle Imaging SpectroRadiometer (MISR) Cloud Motion Vector Product (CMVP) are examined to identify, by season, the regions with a high frequency of fast-moving dust plumes. According to MISR CMVP, the Bodélé Depression is the most important dust source across North Africa, consistent with previous studies. The seasonal variability of dust emission across North Africa is largely driven by the climatology of wind and precipitation, featuring the influence of Sharav cyclones and the West African monsoon. In the Middle East, Iraq, Kuwait, and eastern Saudi Arabia are identified as dust source regions, especially during summer months, when the Middle Eastern Shamal wind is active. Furthermore, dust emission trends at each dust source are diagnosed from the motion-based dust source dataset. Increases in dust emission from the Fertile Crescent, Sahel, and eastern African dust sources are identified from MISR CMVP, implying a potential contribution from these dust sources to the upward trend in AOD and dust AOD over the Middle East in the 21st century. By comparing with various dust source identification studies, we conclude that motion-based identification of dust sources is an encouraging alternative and complement to AOD-only source identification methods.
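The frequency-of-fast-plumes criterion can be sketched as a simple per-cell count. The data below are synthetic stand-ins for MISR CMVP retrievals (cell index, plume speed, dust flag), and the 10 m/s threshold and one-standard-deviation cutoff are arbitrary choices for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000                                  # synthetic plume retrievals

cell = rng.integers(0, 50, n)             # grid cell of each retrieval
speed = rng.gamma(2.0, 4.0, n)            # plume motion speed, m/s
is_dust = rng.random(n) < 0.4             # dust-classified plumes

# Per-cell frequency of fast-moving dust plumes
fast = is_dust & (speed > 10.0)
total = np.bincount(cell, minlength=50)
freq = np.bincount(cell[fast], minlength=50) / np.maximum(total, 1)

# Flag cells well above the mean frequency as candidate dust sources
sources = np.where(freq > freq.mean() + freq.std())[0]
print(len(sources))
```

Repeating the count per season, as the abstract describes, turns the same tally into a seasonal source climatology.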
NASA Astrophysics Data System (ADS)
Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.
2016-03-01
Today, subjects' medical data in controlled clinical trials are captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support the integration of subjects' image data, although medical imaging is looming large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system for clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images collected in the study so far have been correctly identified and successfully integrated into the corresponding subjects' eCRFs. Using this system, manual steps for the study personnel are reduced and, therefore, errors, latency, and costs decrease. Our approach also increases data security and privacy.
Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis
Shaukat, Affan; Blacker, Peter C.; Spiteri, Conrad; Gao, Yang
2016-01-01
In recent decades, terrain modelling and reconstruction techniques have increased research interest in precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is in relation to autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. PMID:27879625
Maritime microwave radar and electro-optical data fusion for homeland security
NASA Astrophysics Data System (ADS)
Seastrand, Mark J.
2004-09-01
US Customs is responsible for monitoring all incoming air and maritime traffic, including the island of Puerto Rico as a US territory. Puerto Rico offers potentially obscure points of entry to drug smugglers. This environment sets forth a formula for an illegal drug trade based relatively near the continental US. The US Customs Caribbean Air and Marine Operations Center (CAMOC), located in Puntas Salinas, has the charter to monitor maritime and Air Traffic Control (ATC) radars. The CAMOC monitors ATC radars and advises the Air and Marine Branch of US Customs of suspicious air activity. In turn, the US Coast Guard and/or US Customs will launch air and sea assets as necessary. The addition of a coastal radar and camera system provides US Customs a maritime monitoring capability for the northwestern end of Puerto Rico (Figure 1). Command and control of the radar and camera is executed at the CAMOC, located 75 miles away. The Maritime Microwave Surveillance Radar performs search, primary target acquisition and target tracking, while the Midwave Infrared (MWIR) camera performs target identification. This wide area surveillance, using a combination of radar and MWIR camera, offers the CAMOC a cost and manpower effective approach to monitor, track and identify maritime targets.
Noise Source Identification in a Reverberant Field Using Spherical Beamforming
NASA Astrophysics Data System (ADS)
Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang
Identification of noise sources, their locations and strengths, has attracted great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of that coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to an exterior sound field, reflections are added to the sound field. Therefore, the source location estimated by conventional methods may exhibit unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.
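As a point of contrast with the reverberant-field problem the abstract raises, the free-field assumption behind conventional localization can be illustrated with a plain delay-and-sum beamformer. The sketch below is a generic textbook scheme, not the paper's spherical method; the array geometry, frequency, and candidate grid are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, candidate_points, fs, c=343.0):
    """Free-field delay-and-sum beamformer at the dominant frequency.

    signals: (n_mics, n_samples) time series
    mic_positions, candidate_points: (n, 3) coordinates in metres
    Returns the beamformer power at each candidate point; the true
    source location should maximize it (in a free field).
    """
    n_mics, n_samples = signals.shape
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
    k = np.argmax(np.abs(spectra).sum(axis=0))  # dominant frequency bin
    f = freqs[k]
    powers = []
    for p in candidate_points:
        dists = np.linalg.norm(mic_positions - p, axis=1)
        # Steering vector: undo the propagation phase to each microphone,
        # so signals from this candidate point add coherently.
        steer = np.exp(2j * np.pi * f * dists / c)
        powers.append(np.abs(np.sum(spectra[:, k] * steer)) ** 2)
    return np.array(powers)
```

In a reverberant field, reflections add extra propagation paths that this model ignores, which is exactly why the free-field estimate can be badly biased.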
Toslak, Devrim; Liu, Changgeng; Alam, Minhaj Nur; Yao, Xincheng
2018-06-01
A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.
Road sign recognition using Viapix module and correlation
NASA Astrophysics Data System (ADS)
Ouerhani, Y.; Desthieux, M.; Alfalou, A.
2015-03-01
In this paper, we propose and validate a new system for surveying road assets; here we are interested in vertical road signs. The approach combines road sign detection, recognition, and identification using data provided by sensors. Specifically, it uses panoramic views provided by the innovative VIAPIX® device, developed by our company ACTRIS, together with an optimized correlation technique for road sign recognition and identification in pictures. The results obtained show the benefit of using panoramic views compared with images provided by a single camera.
Embedded mobile farm robot for identification of diseased plants
NASA Astrophysics Data System (ADS)
Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh
2013-07-01
This paper presents the development of a mobile robot used in farms for identification of diseased plants. It puts forth two of the major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, robot mechanical assembly, camera, and infrared sensors has been used. A Mini2440 board has been used, on which an embedded Linux OS (operating system) is implemented.
A survey of camera error sources in machine vision systems
NASA Astrophysics Data System (ADS)
Jatko, W. B.
In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
Characterization and optimization for detector systems of IGRINS
NASA Astrophysics Data System (ADS)
Jeong, Ueejeong; Chun, Moo-Young; Oh, Jae Sok; Park, Chan; Yuk, In-Soo; Oh, Heeyoung; Kim, Kang-Min; Ko, Kyeong Yeon; Pavel, Michael D.; Yu, Young Sam; Jaffe, Daniel T.
2014-07-01
IGRINS (Immersion GRating INfrared Spectrometer) is a high resolution wide-band infrared spectrograph developed by the Korea Astronomy and Space Science Institute (KASI) and the University of Texas at Austin (UT). This spectrograph has H-band and K-band science cameras and a slit viewing camera, all three of which use Teledyne's λc~2.5μm 2k×2k HgCdTe HAWAII-2RG CMOS detectors. The two spectrograph cameras employ science grade detectors, while the slit viewing camera includes an engineering grade detector. Teledyne's cryogenic SIDECAR ASIC boards and JADE2 USB interface cards were installed to control those detectors. We performed experiments to characterize and optimize the detector systems in the IGRINS cryostat. We present measurements and optimization of noise, dark current, and reference-level stability obtained under dark conditions. We also discuss well depth, linearity and conversion gain measurements obtained using an external light source.
[Regional atmospheric environment risk source identification and assessment].
Zhang, Xiao-Chun; Chen, Wei-Ping; Ma, Chun; Zhan, Shui-Fen; Jiao, Wen-Tao
2012-12-01
Identification and assessment of atmospheric environment risk sources play an important role in regional atmospheric risk assessment and regional atmospheric pollution prevention and control. The likelihood, exposure, and consequence assessment method (LEC method) and the Delphi method were employed to build a fast and effective method for identification and assessment of regional atmospheric environment risk sources. This method was applied to the case study of a large coal transportation port in North China. The assessment results showed that the risk characteristics and the harm degree of the regional atmospheric environment risk sources were in line with the actual situation. Fast and effective identification and assessment of risk sources lays an important foundation for regional atmospheric environmental risk assessment and regional atmospheric pollution prevention and control.
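The LEC scoring at the heart of the method multiplies three graded factors, D = L × E × C (likelihood, exposure frequency, consequence of an accident). A minimal sketch of that arithmetic follows; the grading bands are the conventional LEC cut-offs, not values taken from this study.

```python
def lec_score(likelihood, exposure, consequence):
    """LEC risk value: D = L * E * C."""
    return likelihood * exposure * consequence

def lec_grade(d):
    """Map a risk value D to a qualitative grade.

    These bands follow the conventional LEC tabulation; the study
    may use different cut-offs.
    """
    if d > 320:
        return "extremely high"
    if d > 160:
        return "high"
    if d > 70:
        return "significant"
    if d > 20:
        return "acceptable"
    return "negligible"
```

For example, a source with likelihood 6, daily exposure 6, and consequence 15 scores D = 540 and grades as extremely high.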
HST NICMOS Observations of the Polarization of NGC 1068
NASA Technical Reports Server (NTRS)
Simpson, Janet P.; Colgan, Sean W. J.; Erickson, Edwin F.; Hines, Dean C.; Schultz, A. S. B.; Trammell, Susan R.; DeVincenzi, D. (Technical Monitor)
2002-01-01
We have observed the polarized light at 2 microns in the center of NGC 1068 with HST (Hubble Space Telescope) NICMOS (Near Infrared Camera Multi Object Spectrometer) Camera 2. The nucleus is dominated by a bright, unresolved source, polarized at a level of 6.0 +/- 1.2% with a position angle of 122 degrees +/- 1.5 degrees. There are two polarized lobes extending up to 8" northeast and southwest of the nucleus. The polarized flux in both lobes is quite clumpy, with the maximum polarization occurring in the southwest lobe at a level of 17% when smoothed to 0.23" resolution. The perpendiculars to the polarization vectors in these two lobes point back to the intense unresolved nuclear source to within one 0.076" Camera 2 pixel, thereby confirming that this source is the origin of the scattered light and therefore the probable AGN (Active Galactic Nucleus) central engine. Whereas the polarization of the nucleus is probably caused by dichroic absorption, the polarization in the lobes is almost certainly caused by scattering, with very little contribution from dichroic absorption. Features in the polarized lobes include a gap at a distance of about 1" from the nucleus toward the southwest lobe and a "knot" of emission about 5" northwest of the nucleus. Both features had been discussed by ground-based observers, but they are much better defined with the high spatial resolution of NICMOS. The northeast knot may be the side of a molecular cloud that is facing the nucleus, a cloud that may be preventing the expansion of the northeast radio lobe at the head of the radio synchrotron-radiation-emitting jet. We also report the presence of two ghosts in the Camera 2 polarizers.
Human Fecal Source Identification: Real-Time Quantitative PCR Method Standardization
Method standardization or the formal development of a protocol that establishes uniform performance benchmarks and practices is necessary for widespread adoption of a fecal source identification approach. Standardization of a human-associated fecal identification method has been...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steeb, Jennifer L.; Mertz, Carol J.; Finck, Martha R.
X-ray fluorescence (XRF) is an attractive technique for nuclear forensics applications. We evaluated a handheld, portable XRF device by applying an external radiation field (10 mR/h to 17 R/h) using two types of radiography sources: a 60Co radiography camera to observe effects from high-energy gamma emissions and an 192Ir radiography camera to observe effects from several low-energy gamma (0.604, 0.468, and 0.317 MeV) and decay daughter x-ray emissions. External radiation tests proved that radiation, in general, has a significant effect on the dead time or background at dose rates over 1 R/hr for both the 192Ir and 60Co sources.
Advanced Wavefront Sensing and Control Testbed (AWCT)
NASA Technical Reports Server (NTRS)
Shi, Fang; Basinger, Scott A.; Diaz, Rosemary T.; Gappinger, Robert O.; Tang, Hong; Lam, Raymond K.; Sidick, Erkin; Hein, Randall C.; Rud, Mayer; Troy, Mitchell
2010-01-01
The Advanced Wavefront Sensing and Control Testbed (AWCT) is built as a versatile facility for developing and demonstrating, in hardware, the future technologies of wavefront sensing and control algorithms for active optical systems. The testbed includes a source projector for a broadband point-source and a suite of extended scene targets, a dispersed fringe sensor, a Shack-Hartmann camera, and an imaging camera capable of phase retrieval wavefront sensing. The testbed also provides two easily accessible conjugated pupil planes which can accommodate active optical devices such as a fast steering mirror, a deformable mirror, and segmented mirrors. In this paper, we describe the testbed optical design, testbed configurations and capabilities, as well as the initial results from the testbed hardware integrations and tests.
Confocal Retinal Imaging Using a Digital Light Projector with a Near Infrared VCSEL Source
Muller, Matthew S.; Elsner, Ann E.
2018-01-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1″ LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging. PMID:29899586
Approach to identifying pollutant source and matching flow field
NASA Astrophysics Data System (ADS)
Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang
2013-07-01
Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification process is, however, one of the difficulties of the inverse-problem area. This paper carries out some studies on this issue. An approach using single-sensor information with noise was developed to identify a sudden continuous emission of a trace pollutant source in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical measured concentration sequences at the sensor position, which are obtained from multiple hypotheses on the three source parameters. We then realize the source identification by globally searching for the optimal values with the objective function of the maximum location probability. Considering the large computation load resulting from this global search, a local fine-mesh source search method based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. Studies have shown that the flow field has a very important influence on the source identification. Therefore, we also discuss the impact of non-matching flow fields with estimation deviation on identification. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. In order to verify the practical application of the above method, an experimental system simulating a sudden pollution process in a steady flow field was set up and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters (position, emission strength, and initial emission time) of the pollutant source in the experiment can be estimated by using the method for matching the flow field and identifying the source.
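The global search over the three source parameters can be illustrated with a toy 1-D advection-diffusion forward model and a least-squares objective (the paper maximizes a location probability instead). The flow speed, diffusion coefficient, and parameter grids below are invented for the sketch.

```python
import numpy as np

def forward(x_sensor, times, x0, q, t0, u=1.0, D=0.5, dtau=0.05):
    """1-D advection-diffusion: a continuous release of strength q at
    position x0 starting at time t0, modelled as a superposition of
    instantaneous Gaussian puffs."""
    c = np.zeros_like(times, dtype=float)
    for i, t in enumerate(times):
        taus = np.arange(t0, t, dtau)   # release instants before t
        if taus.size == 0:
            continue
        age = t - taus
        c[i] = np.sum(q * dtau / np.sqrt(4 * np.pi * D * age)
                      * np.exp(-(x_sensor - x0 - u * age) ** 2 / (4 * D * age)))
    return c

def identify(measured, times, x_sensor, x0s, qs, t0s):
    """Globally search the (x0, q, t0) grid for the hypothesis whose
    predicted concentration sequence best matches the measurement."""
    best, best_err = None, np.inf
    for x0 in x0s:
        for q in qs:
            for t0 in t0s:
                err = np.sum((forward(x_sensor, times, x0, q, t0) - measured) ** 2)
                if err < best_err:
                    best, best_err = (x0, q, t0), err
    return best
```

A fine-mesh refinement, as in the paper, would repeat the search on a denser grid around the coarse-grid winner.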
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
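For intuition about what the two virtual cameras buy you, the simplest model is the rectified-stereo depth relation Z = f·B/d; the real system performs full calibrated triangulation with subpixel DIC matching, and the focal length and baseline below are made-up numbers.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified pinhole stereo: depth Z = f * B / d,
    with focal length f in pixels, baseline B in metres,
    and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px
```

For example, with a 1200 px focal length and a 0.1 m virtual baseline, a 60 px disparity corresponds to a point 2 m away.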
Assessing the Potential of Low-Cost 3D Cameras for the Rapid Measurement of Plant Woody Structure
Nock, Charles A; Taugourdeau, Olivier; Delagrange, Sylvain; Messier, Christian
2013-01-01
Detailed 3D plant architectural data have numerous applications in plant science, but many existing approaches for 3D data collection are time-consuming and/or require costly equipment. Recently, there has been rapid growth in the availability of low-cost 3D cameras and related open source software applications. 3D cameras may provide measurements of key components of plant architecture such as stem diameters and lengths; however, few tests of 3D cameras for the measurement of plant architecture have been conducted. Here, we measured Salix branch segments ranging from 2 to 13 mm in diameter with an Asus Xtion camera to quantify the limits and accuracy of branch diameter measurement with a 3D camera. By scanning at a variety of distances we also quantified the effect of scanning distance. In addition, we test the ability of the program KinFu for continuous 3D object scanning and modeling, as well as other similar software, to accurately record stem diameters and capture plant form (<3 m in height). Given its ability to accurately capture the diameter of branches >6 mm, the Asus Xtion may provide a novel method for the collection of 3D data on the branching architecture of woody plants. Improvements in camera measurement accuracy and available software are likely to further improve the utility of 3D cameras for plant sciences in the future. PMID:24287538
NASA Astrophysics Data System (ADS)
Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard
2017-06-01
In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods developed initially for photogrammetry to calibrate the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time consuming and involve the measurement of calibrated patterns on planes before the actual object can continue to be measured after a motion of a camera or projector has been introduced in the setup, and hence do not facilitate fast 3D measurement of objects when frequent experimental setup changes are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, deep learning and leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimising the rendering of a model of a scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
NASA Astrophysics Data System (ADS)
Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal
2013-10-01
Thermal imagers and the infrared array sensors used in them are subject to a calibration procedure and to evaluation of their voltage sensitivity to incident radiation during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to meet such elevated standards, it is still important that the image faithfully represent temperature variations across the scene. Detectors in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Often an optical system, when exposed to a uniform Lambertian source, forms a non-uniform irradiation distribution in its image plane. In order to carry out an accurate non-uniformity correction, it is essential to correctly predict the irradiation distribution from a uniform source. In this article a non-uniformity correction method is presented that takes into account the optical system's radiometry. Predictions of the irradiation distribution have been confronted with measured irradiance values. The presented radiometric model allows a fast and accurate non-uniformity correction to be carried out.
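The article's method is optics-aware, predicting the irradiance falloff from the system's radiometry; for contrast, the classical reference-based alternative it improves upon is the two-point gain/offset correction sketched below. This is a generic textbook scheme, not the authors' method.

```python
import numpy as np

def two_point_nuc(raw, low_ref, high_ref, t_low, t_high):
    """Two-point non-uniformity correction.

    low_ref / high_ref: mean frames recorded while viewing uniform
    blackbody sources at radiance (or temperature) levels t_low and
    t_high. A per-pixel gain and offset map raw counts to a uniform
    response across the array.
    """
    gain = (t_high - t_low) / (high_ref - low_ref)
    offset = t_low - gain * low_ref
    return gain * raw + offset
```

If each pixel responds as counts = g·L + o, the two reference frames determine 1/g and -o/g per pixel, so the corrected frame recovers the scene level L exactly for a linear detector.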
Free LittleDog!: Towards Completely Untethered Operation of the LittleDog Quadruped
2007-08-01
helpful Intel Open Source Computer Vision (OpenCV) library [4] wherever possible rather than reimplementing many of the standard algorithms, however... correspondences between image points and world points, and feeding these to a camera calibration function, such as that provided by OpenCV, allows one to solve... OpenCV calibration function to that used for intrinsic calibration solves for Tboard→camerai. The position of the camera 37 Figure 5.3: Snapshot of
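The calibration step described in the snippet fits the pinhole model x ~ K[R|t]X to correspondences between image points and world points. As a hedged illustration of that model (lens distortion ignored, and the K, R, t values below are synthetic), the forward projection that a routine such as OpenCV's calibrateCamera inverts can be written as:

```python
import numpy as np

def project(K, R, t, X_world):
    """Project 3-D world points into pixel coordinates with the pinhole
    model x ~ K [R | t] X (no lens distortion).

    K: (3, 3) intrinsic matrix; R: (3, 3) rotation; t: (3,) translation;
    X_world: (N, 3) points. Returns (N, 2) pixel coordinates.
    """
    Xc = (R @ X_world.T).T + t            # world -> camera frame
    x = Xc[:, :2] / Xc[:, 2:3]            # perspective divide
    uv = (K[:2, :2] @ x.T).T + K[:2, 2]   # focal lengths + principal point
    return uv
```

Calibration runs this model in reverse: given many observed (uv, X_world) pairs from a checkerboard, it solves for K (intrinsics) and for R, t per view (extrinsics, i.e. the Tboard→camera transforms mentioned above).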
NASA Astrophysics Data System (ADS)
Ćwiok, M.; Dominik, W.; Małek, K.; Mankiewicz, L.; Mrowca-Ciułacz, J.; Nawrocki, K.; Piotrowski, L. W.; Sitek, P.; Sokołowski, M.; Wrochna, G.; Żarnecki, A. F.
2007-06-01
Experiment “Pi of the Sky” is designed to search for prompt optical emission from GRB sources. 32 CCD cameras covering 2 steradians will monitor the sky continuously. The data will be analysed on-line in search for optical flashes. The prototype with 2 cameras, operated at Las Campanas (Chile) since 2004, has recognised several outbursts of flaring stars and has given limits for a few GRBs.
NASA Astrophysics Data System (ADS)
Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.
2017-12-01
In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems, …). Air quality models generally rely on a limited number of monitoring stations, which neither capture the whole pattern nor allow for full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aiming at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with a high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in monitoring volcanic and industrial sulfur emissions) as it relies on spectral images taken at wavelengths where the molecule's absorption cross section is different. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of the formation of NO2 in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
Performance Evaluation of 98 CZT Sensors for Their Use in Gamma-Ray Imaging
NASA Astrophysics Data System (ADS)
Dedek, Nicolas; Speller, Robert D.; Spendley, Paul; Horrocks, Julie A.
2008-10-01
98 SPEAR sensors from eV Products have been evaluated for their use in a portable Compton camera. The sensors have a 5 mm × 5 mm × 5 mm CdZnTe crystal and are provided together with a preamplifier. The energy resolution was studied in detail for all sensors and was found to be 6% on average at 59.5 keV and 3% on average at 662 keV. The standard deviations of the corresponding energy resolution distributions are remarkably small (0.6% at 59.5 keV, 0.7% at 662 keV) and reflect the uniformity of the sensor characteristics. For possible outdoor use, the temperature dependence of the sensor performance was investigated for temperatures between 15 and 45 °C. A linear shift in calibration with temperature was observed. The energy resolution at low energies (81 keV) was found to deteriorate exponentially with temperature, while it stayed constant at higher energies (356 keV). A Compton camera built from these sensors was simulated. To obtain realistic energy spectra, a suitable detector response function was implemented. To investigate the angular resolution of the camera, a 137Cs point source was simulated. Reconstructed images of the point source were compared for perfect and realistic energy and position resolutions. The angular resolution of the camera was found to be better than 10°.
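Energy resolution figures like the 6% at 59.5 keV quoted above are conventionally the full width at half maximum (FWHM) of the Gaussian photopeak divided by its energy. A one-line sketch of that convention follows; the sigma value in the usage example is back-computed for illustration, not a measured quantity.

```python
import math

def energy_resolution_percent(sigma_keV, energy_keV):
    """Resolution = FWHM / E, with FWHM = 2*sqrt(2*ln 2) * sigma
    (about 2.355 * sigma) for a Gaussian peak."""
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_keV
    return 100.0 * fwhm / energy_keV
```

A 59.5 keV peak with a fitted Gaussian sigma of about 1.52 keV thus has a resolution of roughly 6%.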
NASA Astrophysics Data System (ADS)
Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier
2016-12-01
The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited (flue gas stacks, volcanoes, ships) or more extended sources (cities) allows for applications such as monitoring emission fluxes or studying the plume's dynamic chemistry and transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. Benefiting from a high retrieval accuracy, they only achieve a relatively low spatiotemporal resolution that hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aiming at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities with the popular filter-based SO2 camera as it relies on spectral images taken at wavelengths where the molecule's absorption cross section is different. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m^2 area). The detection limit was close to 5 × 10^16 molecules cm^-2, with a maximum detected SCD of 4 × 10^17 molecules cm^-2. Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO-to-NO2 conversion in the early plume with an unprecedented resolution: from its release in the air, and for 100 m upwards, the observed NO2 plume concentration increased at a rate of 0.75-1.25 g s^-1.
In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the NO2 interference with the SO2 spectrum.
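The two-wavelength principle behind such a camera is differential Beer-Lambert absorption: with intensities measured in a strongly absorbing band ("on") and a weakly absorbing band ("off"), the SCD follows from the ratio of the two transmittances. The sketch below is the idealized monochromatic form; a real AOTF retrieval convolves the passband with the cross section, and all numbers in the example are illustrative.

```python
import math

def scd_two_wavelength(i_on, i_off, i0_on, i0_off, sigma_on, sigma_off):
    """Differential Beer-Lambert retrieval of a slant column density:

        SCD = ln((i_off / i0_off) / (i_on / i0_on)) / (sigma_on - sigma_off)

    i_*: measured plume intensities; i0_*: clear-sky (background)
    intensities; sigma_*: absorption cross sections (cm^2 molecule^-1).
    Returns the SCD in molecules cm^-2.
    """
    diff_tau = math.log((i_off / i0_off) / (i_on / i0_on))
    return diff_tau / (sigma_on - sigma_off)
```

Because both bands share the same light path, broadband effects (aerosol scattering, dilution) largely cancel in the ratio, which is what makes the camera concept workable without a full spectrometer per pixel.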
Electronic Flash In Data Acquisition
NASA Astrophysics Data System (ADS)
Miller, C. E.
1982-02-01
Photographic acquisition of data often may be simplified, or the data quality improved upon, by employing electronic flash sources with traditional equipment or techniques. The relatively short flash duration, compared to movie camera shutters or to the long integration time of a video camera, provides improved spatial resolution through blur reduction, which is particularly important as image movement becomes a significant fraction of the film format dimension. Greater accuracy typically is achieved in velocity and acceleration determinations by using a stroboscopic light source rather than a movie camera frame-rate control as a time standard. Electrical efficiency often is an important advantage of electronic flash sources, since almost any light level necessary for exposure may be produced, yet the source typically is "off" most of the time. Various synchronization techniques greatly expand the precise control of exposure. Biomechanical and sports equipment studies may involve velocities up to 200 feet per second, and often will have associated very rapid actions of interest. The need for brief exposures increases as one "zooms in on the action." In golf, for example, the swing may be examined using 100 microsecond (µs) flashes at rates of 60 or 120 flashes per second (FPS). Accurate determination of the linear and rotational velocity of the ball requires 10 µs flashes at 500-1,000 FPS, while sub-µs flashes at 20,000-50,000 FPS are required to resolve the interaction of the ball and the club head. Some seldom-used techniques involving streak photography are described, with enhanced results obtained by combining strobe with the usual continuous light source. The combination of strobe and a fast electro-mechanical shutter is considered for µs photography under daylight conditions.
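The flash-duration figures in the abstract follow from simple blur arithmetic: object-space blur is velocity multiplied by the effective exposure time. A minimal sketch (the unit conversion and example numbers are for illustration):

```python
def motion_blur_inches(velocity_fps, exposure_s):
    """Object-space blur accumulated during an exposure:
    distance = velocity * time, converted from feet to inches."""
    return velocity_fps * exposure_s * 12.0
```

A 200 ft/s golf ball moves 0.24 inch during a 100 µs flash but only 0.024 inch during a 10 µs flash, which is why the shorter flashes are needed to resolve ball velocity and spin.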
HUMAN FECAL SOURCE IDENTIFICATION: REAL-TIME QUANTITATIVE PCR METHOD STANDARDIZATION - abstract
Method standardization or the formal development of a protocol that establishes uniform performance benchmarks and practices is necessary for widespread adoption of a fecal source identification approach. Standardization of a human-associated fecal identification method has been...
Condenser for illuminating a ringfield camera with synchrotron emission light
Sweatt, W.C.
1996-04-30
The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, which images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors. 9 figs.
On the development of radiation tolerant surveillance camera from consumer-grade components
NASA Astrophysics Data System (ADS)
Klemen, Ambrožič; Luka, Snoj; Lars, Öhlin; Jan, Gunnarsson; Niklas, Barringer
2017-09-01
In this paper an overview is given of the process of designing a radiation tolerant surveillance camera from consumer-grade components and commercially available particle shielding materials. This involves the use of the Monte Carlo particle transport code MCNP6 and the ENDF/B-VII.0 nuclear data libraries, as well as testing the physical electrical systems against γ radiation, utilizing JSI TRIGA mk. II fuel elements as γ-ray sources. A new aluminum 20 cm × 20 cm × 30 cm irradiation facility, with an electrical power and signal wire guide-tube to the reactor platform, was designed, constructed, and used for the irradiation of large electronic and optical component assemblies with activated fuel elements. Electronic components to be used in the camera were tested against γ radiation independently, to determine their radiation tolerance. Several camera designs were proposed and simulated using MCNP to determine incident particle and dose attenuation factors. Data obtained from the measurements and MCNP simulations will be used to finalize the design of three surveillance camera models with different radiation tolerances.
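The shielding side of this design can be illustrated, very roughly, by narrow-beam exponential attenuation. MCNP performs full particle transport, so the sketch below, with an assumed attenuation coefficient for lead, is only an order-of-magnitude guide:

```python
import math

def attenuation_factor(mu_per_cm, thickness_cm):
    """Narrow-beam gamma attenuation: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Assumed linear attenuation coefficient ~1.2 /cm (lead, ~0.66 MeV gammas):
factor = attenuation_factor(1.2, 5.0)  # roughly 2.5e-3 through 5 cm of lead
```

Real shield sizing must also account for buildup, scattered photons, and geometry, which is exactly what the MCNP simulations in the paper provide.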
NASA Astrophysics Data System (ADS)
Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.
2016-06-01
Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used remote control through a ground control system connected over a radio frequency (RF) modem operating at about 430 MHz. However, this method of using an RF modem has limitations in long-distance communication. We therefore developed a UAV communication module system that uses a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi links, and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image capturing device for the drone, for areas that need image capture, and software for loading and managing the smart camera; it is composed of automatic shooting using the smart camera's sensors and a shooting catalog that manages the captured images and their information. Processing of the UAV imagery used Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long-working-range camera autofocus applications. The design consists of an active illumination module that projects a scene-illuminating, coherent, conditioned optical radiation pattern which maintains its sharpness over multiple axial distances, allowing an increased DFD working-distance range. The imager module of the system, responsible for the actual DFD operation, deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration conducted in the laboratory compares the effectiveness of the coherent conditioned radiation module against a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example in the tiny camera housings of smartphones and tablets. Applications for the proposed system include autofocus in modern-day digital cameras.
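Depth from defocus rests on the thin-lens relation between object distance and the diameter of the defocus blur circle. A hedged sketch of that geometry (symbols and the 50 mm f/2 example values are illustrative, not taken from the paper):

```python
def image_distance(f, u):
    """Thin-lens image distance v, from 1/v = 1/f - 1/u (all in metres)."""
    return 1.0 / (1.0 / f - 1.0 / u)

def blur_diameter(aperture, f, u, sensor_dist):
    """Geometric diameter of the defocus blur circle on the sensor for a
    point at object distance u, given the lens-to-sensor distance."""
    v = image_distance(f, u)
    return aperture * abs(v - sensor_dist) / v

# Hypothetical 50 mm f/2 lens (25 mm aperture) focused at 2 m:
v_focus = image_distance(0.05, 2.0)
near_blur = blur_diameter(0.025, 0.05, 1.0, v_focus)  # defocused point at 1 m
```

A DFD system inverts this relation: measuring the blur of a known projected pattern yields the object distance, with no lens motion required.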
A compact 16-module camera using 64-pixel CsI(Tl)/Si p-i-n photodiode imaging modules
NASA Astrophysics Data System (ADS)
Choong, W.-S.; Gruber, G. J.; Moses, W. W.; Derenzo, S. E.; Holland, S. E.; Pedrali-Noy, M.; Krieger, B.; Mandelli, E.; Meddeler, G.; Wang, N. W.; Witt, E. K.
2002-10-01
We present a compact, configurable scintillation camera employing a maximum of 16 individual 64-pixel imaging modules, resulting in a 1024-pixel camera covering an area of 9.6 cm × 9.6 cm. The 64-pixel imaging module consists of optically isolated 3 mm × 3 mm × 5 mm CsI(Tl) crystals coupled to a custom array of Si p-i-n photodiodes read out by a custom integrated circuit (IC). Each imaging module plugs into a readout motherboard that controls the modules and interfaces with a data acquisition card inside a computer. For a given event, the motherboard employs a custom winner-take-all IC to identify the module with the largest analog output and to enable the output address bits of the corresponding module's readout IC. These address bits identify the "winner" pixel within the "winner" module. The peak of the largest analog signal is found and held using a peak detect circuit, after which it is acquired by an analog-to-digital converter on the data acquisition card. The camera is currently operated with four imaging modules in order to characterize its performance. At room temperature, the camera demonstrates an average energy resolution of 13.4% full-width at half-maximum (FWHM) for the 140-keV emissions of 99mTc. The system spatial resolution is measured using a capillary tube with an inner diameter of 0.7 mm, located 10 cm from the face of the collimator. Images of the line source in air exhibit average system spatial resolutions of 8.7- and 11.2-mm FWHM when using all-purpose and high-sensitivity parallel hexagonal-hole collimators, respectively. These values do not change significantly when an acrylic scattering block is placed between the line source and the camera.
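The quoted 13.4% FWHM energy resolution can be converted to absolute units at the 140-keV photopeak; the Gaussian FWHM-to-sigma factor used below is the standard one (the function names are ours):

```python
import math

FWHM_TO_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.355 for a Gaussian peak

def fwhm_kev(resolution_fraction, energy_kev):
    """Convert a fractional FWHM energy resolution to absolute keV."""
    return resolution_fraction * energy_kev

fwhm = fwhm_kev(0.134, 140.0)   # 18.76 keV window at the 99mTc photopeak
sigma = fwhm / FWHM_TO_SIGMA    # ~8.0 keV Gaussian standard deviation
```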
Pre-hibernation performances of the OSIRIS cameras onboard the Rosetta spacecraft
NASA Astrophysics Data System (ADS)
Magrin, S.; La Forgia, F.; Da Deppo, V.; Lazzarin, M.; Bertini, I.; Ferri, F.; Pajola, M.; Barbieri, M.; Naletto, G.; Barbieri, C.; Tubiana, C.; Küppers, M.; Fornasier, S.; Jorda, L.; Sierks, H.
2015-02-01
Context. The ESA cometary mission Rosetta was launched in 2004. In the following years and until the spacecraft hibernation in June 2011, the two cameras of the OSIRIS imaging system (Narrow Angle and Wide Angle Camera, NAC and WAC) observed many different sources. On 20 January 2014 the spacecraft successfully exited hibernation to start observing the primary scientific target of the mission, comet 67P/Churyumov-Gerasimenko. Aims: A study of the past performance of the cameras is now mandatory to determine whether the system has remained stable over time and to derive, if necessary, additional analysis methods for the future precise calibration of the cometary data. Methods: The instrumental responses and filter passbands were used to estimate the efficiency of the system. A comparison with acquired images of specific calibration stars was made, and a refined photometric calibration was computed, both for the absolute flux and for the reflectivity of small bodies of the solar system. Results: We found the instrumental performance to be stable within ±1.5% from 2007 to 2010, with no evidence of an aging effect on the optics or detectors. The efficiency of the instrumentation is as expected in the visible range, but lower than expected in the UV and IR ranges. A photometric calibration implementation is discussed for the two cameras. Conclusions: The calibration derived from the pre-hibernation phases of the mission will be checked as soon as possible after the awakening of OSIRIS and will be continuously monitored until the end of the mission in December 2015. A list of additional calibration sources has been determined that are to be observed during the forthcoming phases of the mission to ensure a better coverage across the wavelength range of the cameras and to study possible dust contamination of the optics.
SU-E-T-68: A Quality Assurance System with a Web Camera for High Dose Rate Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueda, Y; Hirose, A; Oohira, S
Purpose: The purpose of this work was to develop a quality assurance (QA) system for high dose rate (HDR) brachytherapy to verify the absolute position of a 192Ir source in real time and to measure the dwell time and position of the source simultaneously from a movie recorded by a web camera. Methods: A web camera was fixed 15 cm above a source position check ruler to monitor and record 30 samples of the source position per second over a range of 8.0 cm, from 1425 mm to 1505 mm. Each frame of the movie had a matrix size of 480×640. The source position was automatically quantified from the movie using in-house software (built with LabVIEW) that applied a template-matching technique. The source edge detected by the software on each frame was corrected to reduce position errors induced by incident light from an oblique direction. The dwell time was calculated by differential processing of the displacement of the source. The performance of this QA system was illustrated by recording simple plans and comparing the measured dwell positions and times with the planned parameters. Results: This QA system allowed verification of the absolute position of the source in real time. The mean difference between automatic and manual detection of the source edge was 0.04 ± 0.04 mm. The absolute position error can be determined within an accuracy of 1.0 mm at dwell points of 1430, 1440, 1450, 1460, 1470, 1480, 1490, and 1500 mm, in three step sizes, and dwell time errors within an accuracy of 0.1% for planned times of more than 10.0 s. The mean step size error was 0.1 ± 0.1 mm for a step size of 10.0 mm. Conclusion: This QA system provides quick verification of the dwell position and time, with high accuracy, for HDR brachytherapy. This work was supported by the Japan Society for the Promotion of Science Core-to-Core program (No. 23003).
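The core of the described software is template matching per frame plus differencing of the detected displacement to obtain dwell times. A simplified 1-D sketch (the actual system uses LabVIEW on 2-D frames; the names and data here are illustrative):

```python
def match_template(frame, template):
    """Return the offset in `frame` (a 1-D intensity profile) that best
    matches `template` by sum of squared differences."""
    best_off, best_score = 0, float("inf")
    for off in range(len(frame) - len(template) + 1):
        score = sum((frame[off + i] - t) ** 2 for i, t in enumerate(template))
        if score < best_score:
            best_off, best_score = off, score
    return best_off

def dwell_times(positions, fps=30, tol=0):
    """Group consecutive frames with an unchanged detected position into
    (position, seconds) dwell intervals (the differential-processing idea)."""
    dwells, start = [], 0
    for i in range(1, len(positions) + 1):
        if i == len(positions) or abs(positions[i] - positions[start]) > tol:
            dwells.append((positions[start], (i - start) / fps))
            start = i
    return dwells

# Synthetic example: a bright source profile dwelling at two ruler positions.
template = [0, 5, 9, 5, 0]
offset = match_template([0] * 10 + template + [0] * 5, template)  # 10
dwells = dwell_times([10] * 60 + [20] * 30)  # [(10, 2.0), (20, 1.0)]
```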
A Gender Identification System for Customers in a Shop Using Infrared Area Scanners
NASA Astrophysics Data System (ADS)
Tajima, Takuya; Kimura, Haruhiko; Abe, Takehiko; Abe, Koji; Nakamoto, Yoshinori
Information about customers in shops plays an important role in marketing analysis. Currently, in convenience stores and supermarkets, customers' gender is identified by clerks. Gender identification systems using camera images have also been investigated; however, these systems risk invading customers' privacy when identifying their attributes. The proposed system identifies gender using infrared area scanners and a Bayesian network. Since the infrared area scanners do not capture customers' images directly, privacy is not invaded. The proposed method uses three parameters: height, walking speed, and pace. In general, these parameters are known to carry cues for sexual distinction in humans, and the Bayesian network is designed around these three parameters. The proposed method resolves the existing problems of restricted installation locations and privacy invasion. Experimental results using data obtained from 450 people show that the identification rate of the proposed method was 91.3% on average over both male and female identifications.
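As an illustration of how three such cues can drive a probabilistic gender classifier, here is a naive-Bayes simplification of the paper's Bayesian network; all class-conditional parameters below are invented for the example, not taken from the paper:

```python
import math

def gaussian(x, mean, std):
    """Gaussian probability density."""
    return math.exp(-((x - mean) ** 2) / (2.0 * std ** 2)) / (std * math.sqrt(2.0 * math.pi))

# Hypothetical class-conditional (mean, std) values for the three cues:
# height (cm), walking speed (m/s), pace length (cm).
PARAMS = {
    "male":   {"height": (172, 6), "speed": (1.4, 0.2), "pace": (78, 6)},
    "female": {"height": (159, 6), "speed": (1.3, 0.2), "pace": (70, 6)},
}
PRIOR = {"male": 0.5, "female": 0.5}

def classify(obs):
    """Return the class with the larger naive-Bayes posterior score."""
    scores = {}
    for cls, feats in PARAMS.items():
        score = PRIOR[cls]
        for name, (mean, std) in feats.items():
            score *= gaussian(obs[name], mean, std)
        scores[cls] = score
    return max(scores, key=scores.get)

label = classify({"height": 175, "speed": 1.5, "pace": 80})  # "male"
```

A full Bayesian network, as used in the paper, can additionally model dependencies between the cues rather than assuming them independent.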
A probabilistic model of overt visual attention for cognitive robots.
Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G
2010-10-01
Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which accompanies the head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and image coordinate systems, change of content of the visual field, and partial appearance of stimuli. All of these events reduce the probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution to the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
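The particle filter mentioned above can be sketched generically as a bootstrap filter over a scalar state; this is not the paper's implementation, just the standard predict-update-resample cycle it builds on:

```python
import math
import random

def particle_filter_step(particles, weights, motion_noise, obs, obs_noise):
    """One bootstrap-filter cycle: predict with a random-walk motion model,
    reweight by a Gaussian observation likelihood, then resample."""
    # Predict: diffuse each particle with the motion model.
    particles = [p + random.gauss(0.0, motion_noise) for p in particles]
    # Update: weight by the likelihood of the scalar observation.
    weights = [w * math.exp(-((obs - p) ** 2) / (2.0 * obs_noise ** 2))
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # Resample proportionally to weight.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

# Track a stationary stimulus at position 5.0 on a 1-D "gaze" axis:
random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
weights = [1.0 / 500.0] * 500
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, 0.2, 5.0, 0.5)
estimate = sum(particles) / len(particles)  # converges near 5.0
```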
Barbosa, F.; Bessuille, J.; Chudakov, E.; ...
2017-02-03
We present the GlueX DIRC (Detection of Internally Reflected Cherenkov light) detector that is being developed to upgrade the particle identification capabilities in the forward region of the GlueX experiment at Jefferson Lab. The GlueX DIRC will utilize four existing decommissioned BaBar DIRC bar boxes, which will be oriented to form a plane roughly 4 m away from the fixed target of the experiment. A new photon camera has been designed that is based on the SuperB FDIRC prototype. The full GlueX DIRC system will consist of two such cameras, with the first planned to be built and installed in 2017. In addition, we present the current status of the design and R&D, along with the future plans of the GlueX DIRC detector.
Niskanen, Ilpo; Sutinen, Veijo; Thungström, Göran; Räty, Jukka
2018-06-01
The refractive index is a fundamental physical property of a medium, which can be used for identification and purity assessment of media. Here we describe a refractive index measurement technique to determine simultaneously the refractive indices of different solid particles by monitoring the transmittance of light through a suspension using a charge-coupled device (CCD) camera. An important feature of the measurement is the liquid evaporation process for refractive index matching of the solid particle and the immersion liquid; this was realized by using a pair of volatile and non-volatile immersion liquids. In this study, the refractive indices of calcium fluoride (CaF2) and barium fluoride (BaF2) were determined using the proposed method.
Spirit Switches on Its X-ray Vision
NASA Technical Reports Server (NTRS)
2004-01-01
This image shows the Mars Exploration Rover Spirit probing its first target rock, Adirondack. At the time this picture was snapped, the rover had begun analyzing the rock with the alpha particle X-ray spectrometer located on its robotic arm. This instrument uses alpha particles and X-rays to determine the elemental composition of martian rocks and soil. The image was taken by the rover's hazard-identification camera.
Electronic Fingerprinting for Industry
NASA Technical Reports Server (NTRS)
1995-01-01
Veritec's VeriSystem is a complete identification and tracking system for component traceability, improved manufacturing and processing, and automated shop floor applications. The system includes the Vericode Symbol, a more accurate and versatile alternative to the traditional bar code, that is scanned by charge coupled device (CCD) cameras. The system was developed by Veritec, Rockwell International and Marshall Space Flight Center to identify and track Space Shuttle parts.
Sensing and Efficient Inference for Identity Management
2015-12-20
further studies in science, mathematics, engineering or technology fields: Student Metrics This section only applies to graduating undergraduates...of identification errors. Because of this, we believe that further study is warranted to make the Lagrangian formulation computationally more...conducted on the ISSIA data set [40], which is a 3-minute soccer scene comprising 25 targets (11 from each team and 3 referees), recorded by 6 cameras
The development and practice of forensic podiatry.
Vernon, Wesley
2006-01-01
Forensic podiatry is a small, but potentially useful specialty using clinical podiatric knowledge for the purpose of person identification. The practice of forensic podiatry began in the early 1970s in Canada and the UK, although supportive research commenced later in the 1990s. Techniques of forensic podiatry include identification from podiatry records, the human footprint, footwear, and the analysis of gait forms captured on Closed Circuit Television Cameras. The most valuable techniques relate to the comparison of the foot impressions inside shoes. Tools to describe, measure and compare foot impressions with footwear wear marks have been developed through research with potential for further development. The role of forensic podiatrists is of particular value when dealing with variable factors relating to the functioning and the shod foot. Case studies demonstrate the approach of podiatrists, in footwear identification, when comparing exemplar with questioned foot impressions. Forensic podiatry practice should be approached cautiously and it is essential for podiatrists undertaking this type of work to understand the context within which the process of person identification takes place.
Applications of optical fibers and miniature photonic elements in medical diagnostics
NASA Astrophysics Data System (ADS)
Blaszczak, Urszula; Gilewski, Marian; Gryko, Lukasz; Zajac, Andrzej; Kukwa, Andrzej; Kukwa, Wojciech
2014-05-01
The construction of endoscopes, known for decades, in particular small devices with diameters of a few millimetres, is based on the application of fibre-optic imaging bundles or bundles of fibers in the illumination systems (usually with a halogen source). Commercially emerging CCD and CMOS cameras with sensor sizes of less than 5 mm, together with high-power LED solutions, allow the design and construction of modern endoscopes characterized by many innovative properties. These constructions offer higher resolution. They are also relatively cheaper, especially in the context of the integration of the majority of the functions on a single chip. These features of CMOS sensors shorten the cycle of introducing newly developed instruments to the market. The paper includes a description of the concept of an endoscope with a miniature camera built on the basis of a CMOS detector manufactured by OmniVision. A set of LEDs located at the operator side works as the illuminating system. A fibre-optic system and the lens of the camera are used in shaping the beam illuminating the observed tissue. Furthermore, to broaden the range of applications of the endoscope, the illuminator allows control of the spectral characteristics of the emitted light. The paper presents an analysis of the basic parameters of the optical system of the endoscope. The possibility of adjusting the magnification of the lens, the field of view of the camera, and its spatial resolution is discussed. Special attention is drawn to issues related to the selection of the light sources used for illumination, in terms of energy efficiency and the possibility of adjusting the colour of the emitted light in order to improve the quality of the image obtained by the camera.
System for photometric calibration of optoelectronic imaging devices especially streak cameras
Boni, Robert; Jaanimagi, Paul
2003-11-04
A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), and is nominally one second in duration. The system also provides a mapping of the geometric distortions, by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
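The flat-field correction step described above amounts to dividing the (distortion-corrected) signal image by the normalized flat-field response. A minimal sketch, assuming plain nested lists for images (the function name and toy data are ours):

```python
def flat_field_correct(signal, flat, eps=1e-6):
    """Divide a signal image by the normalized flat-field response.
    Both images are assumed already corrected for geometric distortion,
    as the abstract specifies."""
    mean_flat = sum(sum(row) for row in flat) / (len(flat) * len(flat[0]))
    return [[s / max(f / mean_flat, eps) for s, f in zip(srow, frow)]
            for srow, frow in zip(signal, flat)]

# A 2x2 toy image whose right-hand column is twice as sensitive:
corrected = flat_field_correct([[10, 20], [10, 20]], [[1, 2], [1, 2]])
# After correction every pixel reads the same value (15.0).
```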
Optical readout of a two phase liquid argon TPC using CCD camera and THGEMs
NASA Astrophysics Data System (ADS)
Mavrokoridis, K.; Ball, F.; Carroll, J.; Lazos, M.; McCormick, K. J.; Smith, N. A.; Touramanis, C.; Walker, J.
2014-02-01
This paper presents a preliminary study into the use of CCDs to image secondary scintillation light generated by THick Gas Electron Multipliers (THGEMs) in a two-phase LAr TPC. A Sony ICX285AL CCD chip was mounted above a double THGEM in the gas phase of a 40 litre two-phase LAr TPC, with the majority of the camera electronics positioned externally via a feedthrough. An Am-241 source was mounted on a rotatable motion feedthrough, allowing the positioning of the alpha source either inside or outside of the field cage. A novel high-voltage feedthrough featuring LAr insulation was developed for and incorporated into the TPC design. Furthermore, a range of webcams was tested for operation in cryogenics as an internal detector monitoring tool. Of the webcams tested, the Microsoft HD-3000 (model no. 1456) was found to be superior in terms of noise and lowest operating temperature. In argon gas of 1 ppm purity at ambient temperature and atmospheric pressure, the THGEM gain was ≈ 1000, and using a 1 ms exposure the CCD captured single alpha tracks. Successful operation of the CCD camera in two-phase cryogenic mode was also achieved. Using a 10 s exposure, a photograph of secondary scintillation light induced by the Am-241 source in LAr has been captured for the first time.
Retrieval of Garstang's emission function from all-sky camera images
NASA Astrophysics Data System (ADS)
Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František
2015-10-01
The emission function of ground-based light sources predetermines the skyglow features to a large extent, while most mathematical models used to predict night sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published only recently in MNRAS, 439, 3405-3413. We have made the experiments in two localities in Slovakia and Mexico by means of two digital single-lens reflex professional cameras operating with different lenses that limit the system's field of view to either 180° or 167°. The purpose of using two cameras was to identify variances between the two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance relative to horizontal irradiance. Subsequently, the information on the fraction of light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear sky radiance distribution is dominated by the largest ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to a potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.
Midplane neutral density profiles in the National Spherical Torus Experiment
Stotler, D. P.; Scotti, F.; Bell, R. E.; ...
2015-08-13
Atomic and molecular density data in the outer midplane of NSTX [Ono et al., Nucl. Fusion 40, 557 (2000)] are inferred from tangential camera data via a forward modeling procedure using the DEGAS 2 Monte Carlo neutral transport code. The observed Balmer-β light emission data from 17 shots during the 2010 NSTX campaign display no obvious trends with discharge parameters such as the divertor Balmer-α emission level or edge deuterium ion density. Simulations of 12 time slices in 7 of these discharges produce molecular densities near the vacuum vessel wall of 2-8 × 10^17 m^-3 and atomic densities ranging from 1 to 7 × 10^16 m^-3; neither has a clear correlation with other parameters. Validation of the technique, begun in an earlier publication, is continued with an assessment of the sensitivity of the simulated camera image and neutral densities to uncertainties in the data input to the model. The simulated camera image is sensitive to the plasma profiles and virtually nothing else. The neutral densities at the vessel wall depend most strongly on the spatial distribution of the source; simulations with a localized neutral source yield densities within a factor of two of the baseline, uniform-source case. Furthermore, the uncertainties in the neutral densities associated with other model inputs and assumptions are ≤ 50%.
Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao
2016-01-01
Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning. PMID:27007379
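The KWNN coarse-estimation stage can be sketched as follows; the database entries and the inverse-distance weighting scheme are illustrative assumptions, not the paper's exact formulation:

```python
def kwnn(fingerprint, database, k=3):
    """K Weighted Nearest Neighbor position estimate from RSSI vectors.
    database: list of ((x, y), rssi_vector); weights are inverse distances
    in signal space, so closer fingerprints dominate the average."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    neighbors = sorted(database, key=lambda entry: dist(fingerprint, entry[1]))[:k]
    weights = [1.0 / (dist(fingerprint, rssi) + 1e-6) for _, rssi in neighbors]
    total = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, neighbors)) / total
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, neighbors)) / total
    return x, y

# Hypothetical survey points: (position, RSSI from two access points).
database = [((0, 0), [-40, -70]), ((0, 10), [-70, -40]),
            ((10, 0), [-50, -60]), ((10, 10), [-60, -50])]
x, y = kwnn([-40, -70], database)  # lands essentially at (0, 0)
```

In the paper this coarse fix is then refined by fusing camera features and orientation-sensor data with a mean-weighted exponent scheme.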
Memoris, A Wide Angle Camera For Bepicolombo
NASA Astrophysics Data System (ADS)
Cremonese, G.; Memoris Team
In order to answer the Announcement of Opportunity of ESA for the BepiColombo payload, we are working on a wide-angle camera concept named MEMORIS (MErcury MOderate Resolution Imaging System). MEMORIS will perform stereoscopic imaging of the whole Mercury surface using two different channels at +/- 20 degrees from the nadir point. It will achieve a spatial resolution of 50 m per pixel at 400 km from the surface (peri-Herm), corresponding to a vertical resolution of about 75 m with the stereo performance. The scientific objectives to be addressed by MEMORIS may be identified as follows: estimate of surface age based on crater counting; crater morphology and degradation; stratigraphic sequence of geological units; identification of volcanic features and related deposits; origin of plain units from morphological observations; distribution and type of the tectonic structures; determination of relative age among the structures based on cross-cutting relationships; 3D tectonics; global mineralogical mapping of main geological units; identification of weathering products. The last two items will come from the multispectral capabilities of the camera, utilizing 8 to 12 (TBD) broad-band filters. MEMORIS will be equipped with a further channel devoted to observations of the tenuous exosphere. It will look at the limb on a given arc of the BepiColombo orbit; in so doing it will observe the exosphere above a surface latitude range of 25-75 degrees in the northern hemisphere. The exosphere images will be obtained above the surface just observed by the other two channels, in an attempt to find possible relationships, as ground-based observations suggest. The exospheric channel will have four narrow-band filters centered on the sodium and potassium emissions and the adjacent continua.
Of Detection Limits and Effective Mitigation: The Use of Infrared Cameras for Methane Leak Detection
NASA Astrophysics Data System (ADS)
Ravikumar, A. P.; Wang, J.; McGuire, M.; Bell, C.; Brandt, A. R.
2017-12-01
Mitigating emissions of methane, a short-lived and potent greenhouse gas, is critical to limiting global temperature rise to two degrees Celsius as outlined in the Paris Agreement. A major source of anthropogenic methane emissions in the United States is the oil and gas sector. To this end, state and federal governments have recommended the use of optical gas imaging (OGI) systems in periodic leak detection and repair (LDAR) surveys to detect fugitive emissions or leaks. The most commonly used OGI systems are infrared cameras. In this work, we systematically evaluate the limits of infrared (IR) camera based OGI systems for use in methane leak detection programs. We analyze the effect of various parameters that influence the minimum detectable leak rates of infrared cameras. Blind leak detection tests were carried out at the Department of Energy's MONITOR natural gas test facility in Fort Collins, CO. Leak sources included natural gas wellheads, separators, and tanks. With an EPA-mandated 60 g/hr leak detection threshold for IR cameras, we test leak rates ranging from 4 g/hr to over 350 g/hr at imaging distances between 5 ft and 70 ft from the leak source. We performed these experiments over the course of a week, encompassing a wide range of wind and weather conditions. Using repeated measurements at a given leak rate and imaging distance, we generate detection probability curves as a function of leak size for various imaging distances and measurement conditions. In addition, we estimate the median detection threshold, the leak size at which the probability of detection is 50%, under various scenarios to reduce uncertainty in mitigation effectiveness. Preliminary analysis shows that the median detection threshold varies from 3 g/hr at an imaging distance of 5 ft to over 150 g/hr at 50 ft (ambient temperature: 80 F, winds < 4 m/s).
Results from this study can be directly used to improve OGI based LDAR protocols and reduce uncertainty in estimated mitigation effectiveness. Furthermore, detection limits determined in this study can be used as standards to compare new detection technologies.
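The median detection threshold described above can be estimated from an empirical probability-of-detection curve by finding where it crosses 50%; the leak sizes and detection fractions below are illustrative, not the study's measurements:

```python
import numpy as np

# Hypothetical detection-probability data: fraction of repeated trials in
# which a leak of a given size (g/hr) was seen by the IR camera at one
# fixed imaging distance.
leak = np.array([2., 5., 10., 20., 50., 100.])
p_det = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.98])

def median_threshold(q, p):
    """Leak size at which detection probability crosses 50%,
    by linear interpolation on a monotone empirical curve."""
    i = np.searchsorted(p, 0.5)
    f = (0.5 - p[i-1]) / (p[i] - p[i-1])
    return q[i-1] + f * (q[i] - q[i-1])

q50 = median_threshold(leak, p_det)
```

Here the curve crosses 50% between the 10 g/hr and 20 g/hr trials, giving a median threshold of 12 g/hr; a logistic fit would be the smoother alternative when trial counts are small.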
Photogrammetric Method and Software for Stream Planform Identification
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.
2013-12-01
Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. 
We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square-corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. Figure: three sample photographs of the square with the created planform and control points.
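The radial distortion correction step can be sketched with the common two-term Brown model; the coefficients below are illustrative, since real values come from calibrating the specific camera:

```python
import numpy as np

def undistort_brown(xy, k1, k2, center=(0.0, 0.0)):
    """Correct radial distortion with the two-term Brown model:
    x_u = x_c + (x_d - x_c) * (1 + k1*r^2 + k2*r^4),
    where r is the distance from the distortion center."""
    c = np.asarray(center, float)
    d = np.asarray(xy, float) - c
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return c + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

# Normalized image coordinates; k1, k2 are hypothetical calibration values.
pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, -0.5]])
out = undistort_brown(pts, k1=0.1, k2=0.01)
```

Points at the distortion center are unchanged, while off-center points are pushed outward by the positive radial terms.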
The Catalina Sky Survey for Near-Earth Objects
NASA Astrophysics Data System (ADS)
Christensen, E.
The Catalina Sky Survey (CSS) specializes in the detection of the closest transients in our transient universe: near-Earth objects (NEOs). CSS has been the leading NEO survey program since 2005, with a discovery rate of 500-600 NEOs per year. This rate is set to increase substantially starting in 2014 with the deployment of wider-FOV cameras at both survey telescopes, while a proposed three-telescope system in Chile would provide a new and significant capability in the Southern Hemisphere beginning as early as 2015. Elements contributing to the success of CSS may be applied to other surveys, and include 1) real-time processing, identification, and reporting of interesting transients; 2) human-assisted validation to ensure a clean transient stream that is efficient to the limits of the system (~1σ); 3) an integrated follow-up capability to ensure threshold or high-priority transients are properly confirmed and followed up. Additionally, the open-source nature of the CSS data enables considerable secondary science (e.g. CRTS), and CSS continues to pursue collaborations to maximize the utility of the data.
A Low-Cost and Portable Dual-Channel Fiber Optic Surface Plasmon Resonance System.
Liu, Qiang; Liu, Yun; Chen, Shimeng; Wang, Fang; Peng, Wei
2017-12-04
A miniaturized and integrated dual-channel fiber optic surface plasmon resonance (SPR) system is proposed and demonstrated in this paper. We used a yellow light-emitting diode (LED, peak wavelength 595 nm) and a built-in web camera as the light source and detector, respectively. In addition to the detection channel, one of the sensors was used as a reference channel to compensate for nonspecific binding and physical absorption. We packaged the LED and SPR sensors together, making the system compact and portable enough to be applied to mobile devices. Experimental results show that the normalized intensity shift and the refractive index (RI) of the sample have a good linear relationship in the RI range from 1.328 to 1.348. We used this sensor to monitor the reversible, specific interaction between the lectin concanavalin A (Con A) and the glycoprotein ribonuclease B (RNase B), which demonstrates its capabilities for specific identification and for detecting the concentration of biochemical samples. This sensor system has potential applications in various fields, such as medical diagnosis, public health, food safety, and environmental monitoring.
The CFHT MegaCam control system: new solutions based on PLCs, WorldFIP fieldbus and Java softwares
NASA Astrophysics Data System (ADS)
Rousse, Jean Y.; Boulade, Olivier; Charlot, Xavier; Abbon, P.; Aune, Stephan; Borgeaud, Pierre; Carton, Pierre-Henri; Carty, M.; Da Costa, J.; Deschamps, H.; Desforge, D.; Eppele, Dominique; Gallais, Pascal; Gosset, L.; Granelli, Remy; Gros, Michel; de Kat, Jean; Loiseau, Denis; Ritou, J. L.; Starzynski, Pierre; Vignal, Nicolas; Vigroux, Laurent G.
2003-03-01
MegaCam is a wide-field imaging camera built for the prime focus of the 3.6 m Canada-France-Hawaii Telescope. This large detector has required new approaches from the hardware up to the instrument control system software. Safe control of the three sub-systems of the instrument (cryogenics, filters, and shutter), measurement of the exposure time with an accuracy of 0.1%, identification of the filters, and management of the internal calibration source are the major challenges taken up by the control system. Another challenge is to ensure all these functionalities with the minimum space available on the telescope structure for the electrical hardware and a minimum number of cables, to keep the highest reliability. All these requirements have been met with a control system whose different elements are linked by a WorldFIP fieldbus on optical fiber. Diagnosis and remote user support will be ensured by an Engineering Control System station based on software developed with Java Internet technologies (applets, servlets) and connected to the fieldbus.
The Comparison Between Nmf and Ica in Pigment Mixture Identification of Ancient Chinese Paintings
NASA Astrophysics Data System (ADS)
Liu, Y.; Lyu, S.; Hou, M.; Yin, Q.
2018-04-01
Since the colour of painted cultural relics observed by the naked eye or by hyperspectral cameras is usually a mixture of several pigments, mixed-pigment analysis is an important subject in the field of ancient painting conservation and restoration. This paper aims to find a more effective method to identify each pure pigment present in the mixtures on the surface of paintings. Firstly, we adopted two blind source separation algorithms, independent component analysis (ICA) and non-negative matrix factorization (NMF), to extract the pure pigment components from the mixed spectra. Secondly, we matched each separated pure spectrum against the pigment spectral library built by our team to determine the pigment type. Furthermore, three kinds of data, including simulated data, mixed-pigment spectra measured in the laboratory, and spectral data from an ancient painting, were chosen to evaluate the performance of the different algorithms, and the accuracy of the two algorithms was compared. Finally, the experimental results show that the non-negative matrix factorization method is more suitable for endmember extraction in the field of ancient painting conservation and restoration.
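As a minimal sketch of NMF-based endmember extraction, the classic Lee-Seung multiplicative updates below factor a toy mixed-spectra matrix into non-negative abundances and pure spectra; the paper's actual implementation and data are not reproduced, and the spectra here are synthetic:

```python
import numpy as np

def nmf(V, r, n_iter=1000, seed=0):
    """Non-negative matrix factorization V ≈ W H by Lee-Seung
    multiplicative updates (Euclidean loss)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy data: rows of H_true are two "pure pigment" spectra (4 bands),
# rows of W_true are non-negative mixing abundances for 3 pixels.
H_true = np.array([[1.0, 0.0, 0.5, 0.2],
                   [0.0, 1.0, 0.3, 0.7]])
W_true = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]])
V = W_true @ H_true
W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H)
```

The recovered rows of H would then be matched against a spectral library to name the pigments; note NMF factors are only unique up to scaling and permutation.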
NASA Astrophysics Data System (ADS)
Sugizaki, Mutsumi; Mihara, Tatehiro; Nakajima, Motoki; Makishima, Kazuo
2017-12-01
To observationally study the spin-period changes of accreting pulsars caused by the accretion torque, the present work analyzes X-ray light curves of 12 Be binary pulsars obtained by the MAXI Gas Slit Camera all-sky survey and their pulse periods measured by the Fermi Gamma-ray Burst Monitor pulsar project, both covering more than six years, from 2009 August to 2016 March. The 12 objects were selected because they have clear optical identifications and accurate measurements of their surface magnetic fields. The luminosity L and the spin-frequency derivative ν̇, measured during large outbursts with L ≳ 1 × 10^37 erg s^-1, were found to follow approximately the theoretical relations of the accretion torque models, represented by ν̇ ∝ L^α (α ≃ 1), and the coefficient of proportionality between ν̇ and L^α agrees, within a factor of ~3, with that proposed by Ghosh and Lamb (1979b, ApJ, 234, 296). In the course of the present study, the orbital elements of several sources were refined.
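The ν̇ ∝ L^α scaling can be checked numerically by fitting the power-law index as a straight line in log-log space. The data below are synthetic, generated with the classic Ghosh & Lamb index α = 6/7 purely for illustration, not the MAXI/GBM measurements:

```python
import numpy as np

# Synthetic outburst points: luminosity L (erg/s) and spin-up rate
# nu_dot (Hz/s) obeying nu_dot ∝ L^(6/7) with an arbitrary prefactor.
L = np.array([1e37, 2e37, 5e37, 1e38])
nu_dot = 1e-12 * (L / 1e37) ** (6.0 / 7.0)

# Power-law index = slope of the log-log straight-line fit.
alpha, log_c = np.polyfit(np.log10(L), np.log10(nu_dot), 1)
```

On real data the scatter in the fitted α (and in the prefactor, which encodes the magnetic field dependence) is what the comparison with the torque model constrains.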
Multilevel microvibration test for performance predictions of a space optical load platform
NASA Astrophysics Data System (ADS)
Li, Shiqi; Zhang, Heng; Liu, Shiping; Wang, Yue
2018-05-01
This paper presents a framework for multilevel microvibration analysis and testing of a space optical load platform. The test framework is conducted on three levels: instrument, subsystem, and system. Disturbance source experimental investigations are performed to evaluate the vibration amplitudes and study the vibration mechanisms. Transfer characteristics of the space camera are validated by a subsystem test, which allows the calculation of transfer functions from the various disturbance sources to the optical performance outputs. In order to identify the influence of the sources on spacecraft performance, a system level microvibration measurement test was performed on the ground. From the time domain and spectrum analyses of the multilevel microvibration tests, we conclude that a disturbance source has a significant effect at its installation position; after transmission through mechanical links, the residual vibration is reduced to the background noise level. In addition, the angular microvibration of the platform jitter is mainly concentrated in rotation about the y-axis. This work is applied to a practical high-resolution satellite camera system.
Huber, V; Huber, A; Kinna, D; Balboa, I; Collins, S; Conway, N; Drewelow, P; Maggi, C F; Matthews, G F; Meigs, A G; Mertens, Ph; Price, M; Sergienko, G; Silburn, S; Wynn, A; Zastrow, K-D
2016-11-01
The in situ absolute calibration of the JET real-time protection imaging system has been performed for the first time by means of a radiometric light source placed inside the JET vessel and operated by remote handling. The high accuracy of the calibration is confirmed by cross-validation of the near infrared (NIR) cameras against each other, against the thermal IR cameras, and against the beryllium evaporator, which led to successful protection of the JET first wall during the last campaign. The operating temperature ranges of the NIR protection cameras for the materials used on JET are Be 650-1600 °C, W coating 600-1320 °C, and W 650-1500 °C.
NASA Astrophysics Data System (ADS)
Gutierrez, A.; Baker, C.; Boston, H.; Chung, S.; Judson, D. S.; Kacperek, A.; Le Crom, B.; Moss, R.; Royle, G.; Speller, R.; Boston, A. J.
2018-01-01
The main objective of this work is to test a new semiconductor Compton camera for prompt gamma imaging. Our device is composed of three active layers: a Si(Li) detector as a scatterer and two high purity Germanium detectors as absorbers of high-energy gamma rays. We performed Monte Carlo simulations using the Geant4 toolkit to characterise the expected gamma field during proton beam therapy and have made experimental measurements of the gamma spectrum with a 60 MeV passive scattering beam irradiating a phantom. In this proceeding, we describe the status of the Compton camera and present the first preliminary measurements with radioactive sources and their corresponding reconstructed images.
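Image reconstruction in a Compton camera rests on the Compton scattering formula: the energies deposited in the scatterer and absorber constrain each event's source direction to a cone. A minimal sketch with hypothetical deposit energies:

```python
import math

ME_C2 = 510.999  # electron rest energy, keV

def compton_cone_angle(e_scatter, e_absorb):
    """Opening angle (degrees) of the Compton cone from the energies
    deposited in the scatterer and absorber layers:
    cos(theta) = 1 - me*c^2 * (1/E_absorb - 1/(E_scatter + E_absorb))."""
    e0 = e_scatter + e_absorb
    cos_t = 1.0 - ME_C2 * (1.0 / e_absorb - 1.0 / e0)
    return math.degrees(math.acos(cos_t))

# Example: a 511 keV photon depositing 170 keV in the Si(Li) scatterer
# and the remaining 341 keV in a Ge absorber (illustrative event).
theta = compton_cone_angle(170.0, 341.0)
```

Intersecting many such cones (one per fully absorbed event) yields the reconstructed source image; events with incomplete absorption violate the formula and add background.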
Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R
2003-09-10
We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
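The effect of camera bit depth on the recovered phase can be demonstrated with a small numerical experiment: quantize a carrier fringe pattern to a given number of levels, recover the phase by the Fourier method with and without quantization, and compare. All parameters here are illustrative:

```python
import numpy as np

def fourier_phase(signal, f0, n):
    """Recover the phase of a carrier fringe pattern by the Fourier
    method: isolate the +f0 lobe, shift it to DC, inverse transform."""
    S = np.fft.fft(signal)
    H = np.zeros(n)
    H[f0 // 2 : 3 * f0 // 2] = 1.0       # window around the carrier
    c = np.fft.ifft(S * H) * np.exp(-2j * np.pi * f0 * np.arange(n) / n)
    return np.angle(c)

n, f0 = 1024, 64
x = np.arange(n)
phi = 0.5 * np.sin(2 * np.pi * x / n)        # slowly varying test phase
fringes = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * x / n + phi)

bits = 4                                      # simulated camera bit depth
levels = 2 ** bits - 1
quantized = np.round(fringes * levels) / levels

err = fourier_phase(quantized, f0, n) - fourier_phase(fringes, f0, n)
rms = np.sqrt(np.mean(err[n // 8 : -n // 8] ** 2))  # ignore edge effects
```

Repeating this sweep over `bits` traces out how the phase error distribution shrinks as the quantization step becomes finer, which is the dependence the paper characterizes.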
IDENTIFICATION OF SEDIMENT SOURCE AREAS WITHIN A WATERSHED
Identification of sediment source areas is crucial for designing proper abatement strategies that reduce sediment and associated contaminant loading to receiving water downstream. In this study, two methodologies were developed to identify the source areas and their relative stre...
NASA Technical Reports Server (NTRS)
Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)
2008-01-01
The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.
Multispectral imaging of the ocular fundus using light emitting diode illumination
NASA Astrophysics Data System (ADS)
Everdell, N. L.; Styles, I. B.; Calcagni, A.; Gibson, J.; Hebden, J.; Claridge, E.
2010-09-01
We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.
Motorcycle detection and counting using stereo camera, IR camera, and microphone array
NASA Astrophysics Data System (ADS)
Ling, Bo; Gibson, David R. P.; Middleton, Dan
2013-03-01
Detection, classification, and characterization are the keys to enhancing motorcycle safety, motorcycle operations, and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of autos. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with the FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera, and a unidirectional microphone array. The thermal IR camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often appear as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map; if the motorcyclist is detected through his or her 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.
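The disparity-map step can be illustrated with a minimal sum-of-absolute-differences block matcher; this is a generic sketch on a synthetic image pair, not the system's actual stereo algorithm:

```python
import numpy as np

def disparity_sad(left, right, max_disp=8, win=3):
    """Dense disparity by SAD block matching: for each left-image pixel,
    find the horizontal shift of the best-matching right-image window."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y-pad:y+pad+1, x-pad:x+pad+1]
            costs = [np.abs(patch - right[y-pad:y+pad+1,
                                          x-d-pad:x-d+pad+1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the scene appears shifted 2 px between the two views.
rng = np.random.default_rng(1)
tex = rng.random((20, 32))
left, right = tex[:, :30], tex[:, 2:]
disp = disparity_sad(left, right)
```

Objects nearer the camera produce larger disparities, which is what lets a foreground motorcyclist be "windowed out" from the background in the disparity map.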
NASA Astrophysics Data System (ADS)
Hausmann, Anita; Duschek, Frank; Fischbach, Thomas; Pargmann, Carsten; Aleksejev, Valeri; Poryvkina, Larisa; Sobolev, Innokenti; Babichenko, Sergey; Handke, Jürgen
2014-05-01
The challenges of detecting hazardous biological materials are manifold: such material has to be discriminated from other substances in various natural surroundings, and the detection sensitivity should be extremely high, since living material may reproduce itself and a single bacterium may already represent a high risk. Identification should also be fast, with a low false alarm rate. Up to now, there is no single technique that solves this problem. Point sensors may collect material and identify it, but fast identification and, especially, the appropriate positioning of local collectors are difficult problems. On the other hand, laser based standoff detection may instantaneously provide information on an accidental spillage of material by detecting the generated thin cloud; the LIF technique may classify, but hardly identify, the substance. A solution can be to use the LIF technique in a first step to collect primary data and, if necessary, to utilize these data for an optimized positioning of point sensors. We perform studies on an open air laser test range at distances between 20 and 135 m, applying the LIF technique to detect and classify aerosols. To exploit the LIF capability, we use a laser source emitting two wavelengths alternately, 280 nm and 355 nm. Moreover, the time dependence of the fluorescence spectra is recorded by a gated intensified CCD camera. Signal processing is performed by dedicated software for spectral pattern recognition. The direct comparison of all results leads to a basic classification of the various compounds.
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular for surveillance and robotics applications. The advantages of multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational optimization algorithm to map the optical flow fields computed from the different wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig, and determine our method's accuracy by comparing against ground truth.
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.
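The classifier-selection step can be illustrated with the standard 0-1 knapsack dynamic program: maximize total value (e.g. individual accuracy) under a total cost budget (e.g. a redundancy penalty). The paper's tailored variant is not specified here, and the values and costs below are illustrative:

```python
def knapsack_select(values, costs, budget):
    """Standard 0-1 knapsack DP: pick the subset of base classifiers
    maximizing total value within a total cost budget, then trace back
    which items were chosen."""
    n = len(values)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v, c = values[i - 1], costs[i - 1]
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if c <= b and best[i - 1][b - c] + v > best[i][b]:
                best[i][b] = best[i - 1][b - c] + v
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

# Five hypothetical base classifiers: value = accuracy score,
# cost = integer redundancy penalty, budget = 5.
value, picked = knapsack_select([60, 100, 120, 70, 30], [1, 2, 3, 2, 1], 5)
```

Trading accuracy against a redundancy cost in this way is one concrete reading of the diversity/accuracy dilemma the framework targets.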
NASA Technical Reports Server (NTRS)
Nelson, David L.; Diner, David J.; Thompson, Charles K.; Hall, Jeffrey R.; Rheingans, Brian E.; Garay, Michael J.; Mazzoni, Dominic
2010-01-01
MISR (Multi-angle Imaging SpectroRadiometer) INteractive eXplorer (MINX) is an interactive visualization program that allows a user to digitize smoke, dust, or volcanic plumes in MISR multiangle images, and automatically retrieve height and wind profiles associated with those plumes. This innovation can perform 9-camera animations of MISR level-1 radiance images to study the 3D relationships of clouds and plumes. MINX also enables archiving MISR aerosol properties and Moderate Resolution Imaging Spectroradiometer (MODIS) fire radiative power along with the heights and winds. It can correct geometric misregistration between cameras by correlating off-nadir camera scenes with corresponding nadir scenes and then warping the images to minimize the misregistration offsets. Plots of BRF (bidirectional reflectance factor) vs. camera angle can be displayed for points clicked in an image. Users get rapid access to map views of MISR path and orbit locations and overflight dates, and past or future orbits can be identified that pass over a specified location at a specified time. Single-camera, level-1 radiance data at 1,100- or 275-meter resolution can be quickly displayed in color using a browse option. This software determines the heights and motion vectors of features above the terrain with greater precision and coverage than previous methods, based on an algorithm that takes wind direction into consideration. Human interpreters can precisely identify plumes, their extent, and wind direction. Overposting of MODIS thermal anomaly data aids in the identification of smoke plumes. The software has been used to preserve graphical and textual versions of the digitized data in a Web-based database.
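The misregistration correction described above (correlating off-nadir scenes with nadir scenes) can be illustrated with phase correlation, one standard way to estimate a translation offset between two images; MINX's actual matcher is not specified here, and the images below are synthetic:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (row, col) translation between two same-size images:
    the normalized cross-power spectrum peaks at the shift."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = a.shape
    # Map wrapped peak indices to signed shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
nadir = rng.random((64, 64))
off_nadir = np.roll(nadir, (3, -5), axis=(0, 1))  # known misregistration
shift = phase_correlation(off_nadir, nadir)
```

Once the offset is known, the off-nadir image can be warped by that amount to re-register it against the nadir scene.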
Wet atmospheric generation apparatus
NASA Technical Reports Server (NTRS)
Hamner, Richard M. (Inventor); Allen, Janice K. (Inventor)
1990-01-01
The invention described relates to an apparatus for providing a selectively humidified gas to a camera canister containing cameras and film used in space. A source of pressurized gas (leak test gas or motive gas) is selected by a valve, regulated to a desired pressure by a regulator, and routed through an ejector (venturi device). A regulated source of water vapor in the form of steam from a heated reservoir is coupled to a low pressure region of the ejector which mixes with high velocity gas flow through the ejector. This mixture is sampled by a dew point sensor to obtain dew point thereof (ratio of water vapor to gas) and the apparatus adjusted by varying gas pressure or water vapor to provide a mixture at a connector having selected humidity content.
Soft x-ray reduction camera for submicron lithography
Hawryluk, Andrew M.; Seppala, Lynn G.
1991-01-01
Soft x-ray projection lithography can be performed using x-ray optical components and spherical imaging lenses (mirrors), which form an x-ray reduction camera. The x-ray reduction camera is capable of projecting a 5x demagnified image of a mask onto a resist-coated wafer using 4.5 nm radiation. The diffraction-limited resolution of this design is about 135 nm, with a depth of field of about 2.8 microns and a field of view of 0.2 cm². X-ray reflecting masks (patterned x-ray multilayer mirrors), which are fabricated on thick substrates and can be made relatively distortion free, are used, with a laser-produced plasma for the source. Higher resolution and/or larger areas are possible by varying the optic figures of the components and the source characteristics.
Research of mine water source identification based on LIF technology
NASA Astrophysics Data System (ADS)
Zhou, Mengran; Yan, Pengcheng
2016-09-01
Traditional chemical methods for mine water source identification take a long time. To address this, a rapid identification system for mine water inrush sources based on laser-induced fluorescence (LIF) technology is proposed. The basic principle of LIF is analyzed, the hardware composition of the LIF system is described, and the related modules are selected. Fluorescence spectra of coal mine water samples were obtained through fluorescence experiments with the LIF system. Traditional water source identification relies mainly on the concentrations of ions representative of each water type, but these concentrations are hard to recover from fluorescence spectra. This paper proposes a simple and practical method for rapid identification directly from the fluorescence spectrum: the spectral-space distance between an unknown water sample and each standard sample is measured, and cluster analysis then assigns the unknown sample to a category. Source identification of unknown samples verified the reliability of the LIF system and addresses the lack of real-time, online monitoring of water inrush at current coal mines, which is of great significance for safety in coal production.
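The identification step described in this abstract reduces to a nearest-standard rule in spectral space. A minimal sketch follows; the source names and spectra are hypothetical placeholders, and Euclidean distance stands in for whatever "space distance" the authors used:

```python
import numpy as np

def identify_source(sample, standards):
    """Assign an unknown fluorescence spectrum to the nearest standard
    sample by Euclidean distance in spectral space.

    sample    : 1-D array of fluorescence intensities
    standards : dict mapping source name -> reference spectrum (same length)
    """
    return min(standards, key=lambda name: np.linalg.norm(sample - standards[name]))
```

In practice each class would be represented by a cluster of standard spectra rather than a single vector, with the cluster analysis step deciding membership; the nearest-standard rule above is the simplest special case.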
NASA Astrophysics Data System (ADS)
Priore, Ryan J.; Jacksen, Niels
2016-05-01
Infrared hyperspectral imagers (HSI) have been fielded for the detection of hazardous chemical and biological compounds, tag detection (friend-versus-foe detection), and other defense-critical sensing missions over the last two decades. Low size/weight/power/cost (SWaP-C) spectroscopic identification of chemical compounds has been a long-term goal for handheld applications. We describe a new HSI concept for low-cost, high-performance InGaAs SWIR camera chemical identification for military, security, industrial, and commercial end-user applications. Multivariate Optical Elements (MOEs) are thin-film devices that encode a broadband spectroscopic pattern, allowing a simple broadband detector to generate a highly sensitive and specific detection for a target analyte. MOEs can be matched 1:1 to a discrete analyte or class prediction. Additionally, MOE filter sets are capable of sensing an orthogonal projection of the original sparse spectroscopic space, enabling a small set of MOEs to discriminate a multitude of target analytes. This paper identifies algorithms and broadband optical filter designs that have been demonstrated to identify chemical compounds using high-performance InGaAs VGA detectors. It shows how some of the initial models have been reduced to simple spectral designs and tested to produce positive identification of such chemicals. We are also developing pixelated MOE compressed-detection sensors for the detection of a multitude of chemical targets in challenging backgrounds/environments for both commercial and defense/security applications. This MOE-based, real-time HSI sensor will exhibit superior sensitivity and specificity compared with currently fielded HSI systems.
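The MOE idea above amounts to performing a regression optically: the detector integrates the product of the scene spectrum and the filter's transmission curve, so each exposure yields a single scalar score. A minimal sketch, with the gain/offset calibration and all spectra being illustrative assumptions rather than details from this paper:

```python
import numpy as np

def moe_score(spectrum, transmission, gain=1.0, offset=0.0):
    """Simulate the scalar output of an MOE-filtered broadband detector:
    the inner product of the scene spectrum with the filter's encoded
    transmission curve, mapped through a linear calibration."""
    return gain * float(np.dot(spectrum, transmission)) + offset
```

A target analyte whose spectrum aligns with the encoded pattern produces a higher score than background, so detection reduces to thresholding one number per filter instead of processing a full hyperspectral cube.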
40 CFR 62.6120 - Identification of sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Fluoride Emissions from Phosphate Fertilizer Plants § 62.6120 Identification of sources. The plan applies... Corporation in Pascagoula. Fluoride Emissions From Primary Aluminum Reduction Plants ...
40 CFR 62.6120 - Identification of sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Fluoride Emissions from Phosphate Fertilizer Plants § 62.6120 Identification of sources. The plan applies... Corporation in Pascagoula. Fluoride Emissions From Primary Aluminum Reduction Plants ...
Air pollution source identification
NASA Technical Reports Server (NTRS)
Fordyce, J. S.
1975-01-01
Techniques for air pollution source identification are reviewed, and some results obtained with them are evaluated. Described techniques include remote sensing from satellites and aircraft, on-site monitoring, and the use of injected tracers and pollutants themselves as tracers. The use of a large number of trace elements in ambient airborne particulate matter as a practical means of identifying sources is discussed in detail. Sampling and analysis techniques are described, and it is shown that elemental constituents can be related to specific source types, such as those found in the earth's crust and those associated with specific industries. Source identification systems are noted which utilize charged-particle X-ray fluorescence analysis of original field data.
24/7 security system: 60-FPS color EMCCD camera with integral human recognition
NASA Astrophysics Data System (ADS)
Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.
2007-04-01
An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars, and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets, the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
NASA Astrophysics Data System (ADS)
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele; Pernechele, Claudio; Dionisio, Cesare
2017-11-01
This paper presents an innovative algorithm developed for attitude determination of a space platform. The algorithm exploits images taken from a multi-purpose panoramic camera equipped with a hyper-hemispheric lens and used as a star tracker. The sensor architecture is also original, since state-of-the-art star trackers accurately image as many stars as possible within a narrow- or medium-size field of view, while the considered sensor observes an extremely large portion of the celestial sphere but with observation capabilities limited by the features of the optical system. The proposed approach combines algorithmic concepts, like template matching and point cloud registration, inherited from the computer vision and robotics research fields, to carry out star identification. The final aim is to provide a robust and reliable initial attitude solution (lost-in-space mode), with a satisfactory accuracy level in view of the multi-purpose functionality of the sensor and considering its limitations in terms of resolution and sensitivity. Performance evaluation is carried out within a simulation environment in which the panoramic camera operation is realistically reproduced, including perturbations in the imaged star pattern. Results show that the presented algorithm is able to estimate attitude with an accuracy better than 1° and a success rate of around 98%, evaluated by densely covering the entire space of the parameters representing the camera pointing in inertial space.
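Once star identification has matched observed star directions to catalog directions, the final attitude step is conventionally posed as Wahba's problem. The abstract does not say which solver the authors use; the sketch below uses the standard SVD solution as an illustration, with all vector data synthetic:

```python
import numpy as np

def attitude_from_matches(body_vecs, inertial_vecs):
    """Solve Wahba's problem by the SVD method: find the rotation R
    minimizing sum_i ||b_i - R r_i||^2 for matched unit vectors.

    body_vecs, inertial_vecs : (N, 3) arrays of matched unit vectors
    returns                  : (3, 3) proper rotation matrix
    """
    B = body_vecs.T @ inertial_vecs            # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt     # enforce det(R) = +1
```

With noise-free matches the recovered rotation is exact; with perturbed star positions (as in the paper's simulations) it is the least-squares optimum, and the residual sets the attainable sub-degree accuracy.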
CULTURE-INDEPENDENT MOLECULAR METHODS FOR FECAL SOURCE IDENTIFICATION
Fecal contamination is widespread in the waterways of the United States. Both to correct the problem, and to estimate public health risk, it is necessary to identify the source of the contamination. Several culture-independent molecular methods for fecal source identification hav...