NASA Astrophysics Data System (ADS)
Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.
2012-08-01
10 years after the first introduction of a digital airborne mapping camera at the ISPRS conference 2000 in Amsterdam, several digital cameras are now available. They are well established in the market and have replaced analogue cameras. A general improvement in image quality accompanied the digital camera development. The signal-to-noise ratio and the dynamic range are significantly better than with the analogue cameras. In addition, digital cameras can be spectrally and radiometrically calibrated. The use of these cameras has required rethinking in many places, though. New data products were introduced. In recent years, some activities took place that should lead to a better understanding of the cameras and the data produced by these cameras. Several projects, such as those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) or EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards will be presented. These include the standard for digital cameras, the standard for ortho rectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of Radar / SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development. The OGC has already published specifications in the field of photogrammetry and remote sensing. One goal of future joint work could be to merge these formerly independent developments and to jointly develop a suite of implementation specifications for photogrammetry and remote sensing.
Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products
NASA Astrophysics Data System (ADS)
Williams, Don; Burns, Peter D.
2007-01-01
There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO standards for resolution, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.
Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C
2015-08-01
Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. 
The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.
Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai
2014-01-01
Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We obtained comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350
Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications
2013-06-01
Fragmentary indexing excerpts only: glossary entries (ISO Standards, http://www.iso.org; JIS, Japanese Industrial Standard; JPEG, Joint Photographic Experts Group digital image format, http://www.jpeg.org; LED; Reed-Solomon error correction); QR codes were developed by the Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry (see Appendix A for full details); a global shutter eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates, making digital cinema cameras more suitable.
Evaluation of modified portable digital camera for screening of diabetic retinopathy.
Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi
2009-01-01
To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White-light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera settings are set to candlelight mode, the optical zoom is standardized to ×2.4 and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60°) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique to acquire fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.
NASA Astrophysics Data System (ADS)
Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.
2003-07-01
We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard briefcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project on screening for ROP. Telemedicine is a perfect application for this camera system, exploiting both of its advantages: portability and digital imaging.
Next-generation digital camera integration and software development issues
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Peters, Ken; Hecht, Richard
1998-04-01
This paper investigates the complexities associated with the development of next-generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market: the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between captures of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating system. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data-flow software architecture, testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.
A stereoscopic lens for digital cinema cameras
NASA Astrophysics Data System (ADS)
Lipton, Lenny; Rupkalvis, John
2015-03-01
Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images, and are only occasionally shot stereoscopically using bulky dual cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might well encourage more live-action stereoscopic image capture because it uses standard digital cinema cameras and workflow to save time and money.
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure the quantitative surface color information of agricultural products together with the ambient information during cultivation, a color calibration method for digital camera images and a remote monitoring system for color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for the image acquisitions. Images of tomatoes through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of the sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the obtained and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis for simple and rapid evaluation of crop vigor in the field, and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
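The chart-based calibration described above is commonly implemented by fitting a linear correction matrix that maps camera RGB values onto the chart's known reference values by least squares. A minimal sketch under that assumption; the patch values below are illustrative placeholders, not data from the study:

```python
import numpy as np

# Camera-measured RGB of chart patches (rows) and their known reference
# values. All numbers are invented for illustration only.
measured = np.array([[52., 20., 18.],   # red patch as captured
                     [30., 60., 25.],   # green patch
                     [22., 28., 70.],   # blue patch
                     [90., 88., 85.]])  # grey patch
reference = np.array([[60., 15., 12.],
                      [25., 70., 20.],
                      [18., 25., 80.],
                      [92., 92., 92.]])

# Least-squares fit of a 3x3 correction matrix M so that measured @ M ≈ reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

# Apply the correction to any pixel captured under the same illumination.
pixel = np.array([40., 45., 50.])
corrected = pixel @ M
print(corrected)
```

In practice a chart has many more patches, making the fit well overdetermined and robust against noise in individual patch measurements.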
A direct-view customer-oriented digital holographic camera
NASA Astrophysics Data System (ADS)
Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.
2018-01-01
In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
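The multiple-capture idea can be illustrated with a toy reconstruction: take several short exposures within one standard frame time, discard saturated samples, and scale the rest back to a common exposure before combining. This is a hedged sketch of the general technique, not the authors' on-chip algorithm:

```python
import numpy as np

FULL_WELL = 255  # saturation level of a toy 8-bit sensor

def combine_captures(captures, exposure_ratios):
    """Merge multiple short exposures into one high-dynamic-range frame.

    captures: arrays captured at increasing exposure times;
    exposure_ratios: exposure of each capture relative to the reference.
    Saturated samples are excluded; the rest are scaled to the
    reference exposure and averaged.
    """
    est = np.zeros_like(captures[0], dtype=float)
    weight = np.zeros_like(est)
    for img, ratio in zip(captures, exposure_ratios):
        valid = img < FULL_WELL            # ignore clipped samples
        est += np.where(valid, img / ratio, 0.0)
        weight += valid
    return est / np.maximum(weight, 1)

# Toy scene: one dim and one very bright pixel (true radiance units).
scene = np.array([10.0, 2000.0])
ratios = [1, 4, 16]
caps = [np.minimum(scene * r / 16, FULL_WELL) for r in ratios]
hdr = combine_captures(caps, [r / 16 for r in ratios])
print(hdr)  # the bright pixel is recovered from the shortest exposure
```

The dim pixel benefits from averaging across all captures, while the bright pixel survives clipping because at least one short exposure stays below full well.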
Estimation of spectral distribution of sky radiance using a commercial digital camera.
Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao
2016-01-10
Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
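The representation described above, spectral radiance as a polynomial in wavelength with coefficients obtained from digital RGB counts by a linear transformation, can be sketched as follows. The transform matrix here is a made-up placeholder; in the study it would be derived from the camera's estimated spectral responses:

```python
import numpy as np

# Hypothetical 3x3 matrix mapping RGB counts to polynomial coefficients
# (c0, c1, c2). A real matrix would come from the camera's measured
# spectral responses; these values are for illustration only.
T = np.array([[ 1.2e-2, -3.0e-3,  5.0e-4],
              [-4.0e-3,  9.0e-3,  1.0e-3],
              [ 2.0e-4, -1.0e-3,  6.0e-3]])

def sky_spectrum(rgb, wavelengths_nm):
    """Estimate spectral radiance over 430-680 nm from RGB counts."""
    c = T @ np.asarray(rgb, dtype=float)       # polynomial coefficients
    # Normalise wavelength to [0, 1] over the valid range for stability.
    x = (np.asarray(wavelengths_nm) - 430.0) / (680.0 - 430.0)
    return c[0] + c[1] * x + c[2] * x**2       # radiance per wavelength

wl = np.linspace(430, 680, 6)
print(sky_spectrum([120, 135, 160], wl))
```

A low-order polynomial works here because sky radiance spectra are smooth over the visible range; three RGB counts can then constrain three coefficients.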
Computerized digital dermoscopy.
Gewirtzman, A J; Braun, R P
2003-01-01
Within the past 15 years, dermoscopy has become a widely used non-invasive technique for physicians to better visualize pigmented lesions. Dermoscopy has helped trained physicians to better diagnose pigmented lesions. Now, the digital revolution is beginning to enhance standard dermoscopic procedures. Using digital dermoscopy, physicians are better able to document pigmented lesions for patient follow-up and to obtain second opinions, either through teledermoscopy with an expert colleague or by using computer-assisted diagnosis. As the market for digital dermoscopy products grows, so does the number of decisions physicians need to make when choosing a system to fit their needs. The current market for digital dermoscopy includes two varieties of relatively simple and cheap attachments which can convert a consumer digital camera into a digital dermoscope. A coupling adapter acts as a fastener between the camera and an ordinary dermoscope, whereas a dermoscopy attachment includes the dermoscope optics and light source and can be attached directly to the camera. Other options for digital dermoscopy include complete dermoscopy systems that use a hand-held video camera linked directly to a computer. These systems differ from each other in whether or not they are calibrated, as well as in the quality of the camera and software interface. Another option in digital skin imaging involves spectral analysis rather than dermoscopy. This article serves as a guide to the current systems available and their capabilities.
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only for wider diffusion and online transmission, but also for the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or flat, highly texturized materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so as to preserve the original item's properties as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as the best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass-plate black-and-white photographic negatives.
Digital dental photography. Part 6: camera settings.
Ahmad, I
2009-07-25
Once the appropriate camera and equipment have been purchased, the next considerations involve setting up and calibrating the equipment. This article provides details regarding depth of field, exposure, colour spaces and white balance calibration, concluding with a synopsis of camera settings for a standard dental set-up.
Development of digital shade guides for color assessment using a digital camera with ring flashes.
Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan
2011-02-01
Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and camera white balance setups would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either a light-emitting diode (LED) or an electronic ring flash. The Commission Internationale de l'Éclairage L*a*b* parameters of the captured images were derived from Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and the shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using digital shade guides, verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups proved less reliable and showed relatively low matching ability. In conclusion, the reliability of color matching with digital images is much influenced by the illuminants and the camera's white balance setups, while digital shade guides derived under LED illuminants with CWB demonstrate applicable potential in the field of color assessment.
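Matching a new ceramic disk (or tooth) against such digital shade guides amounts to finding the guide with the smallest CIELAB colour difference. A minimal sketch using the classic CIE76 ΔE*ab formula; the shade values are invented for illustration:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

# Invented digital shade guides (L*, a*, b*) and one measured sample.
guides = {"A1": (78.0, 1.5, 16.0),
          "A2": (74.5, 2.8, 19.5),
          "B1": (76.0, 0.5, 14.0)}
sample = (75.0, 2.5, 19.0)

best = min(guides, key=lambda name: delta_e_ab(guides[name], sample))
print(best)  # the guide with the smallest ΔE*ab
```

Later ΔE formulas (CIE94, CIEDE2000) weight lightness and chroma differences non-uniformly, but the nearest-guide logic is the same.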
Observation of Planetary Motion Using a Digital Camera
ERIC Educational Resources Information Center
Meyn, Jan-Peter
2008-01-01
A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8[superscript m] apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…
Cryptography Would Reveal Alterations In Photographs
NASA Technical Reports Server (NTRS)
Friedman, Gary L.
1995-01-01
A public-key cryptographic method is proposed to guarantee the authenticity of photographic images represented in the form of digital files. In this method, a digital camera generates original data from an image in a standard public format and also produces a coded signature to verify the standard-format image data. The scheme also helps protect against other forms of lying, such as attaching false captions.
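The sign-and-verify flow can be illustrated with textbook RSA on toy numbers: the camera signs a digest of the image with its private key, and anyone can verify the signature with the public key; altering the image breaks verification. This is a pedagogical sketch only, not Friedman's actual scheme; real systems use vetted libraries, padding, and full-size keys:

```python
import hashlib

# Toy RSA key pair (classic textbook numbers; utterly insecure).
n, e, d = 3233, 17, 2753   # n = 61 * 53; e public, d private

def digest(data: bytes) -> int:
    """Reduce an image's SHA-256 digest into the toy key's range."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    return pow(digest(data), d, n)               # camera: private key d

def verify(data: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(data)  # anyone: public key e

image = b"standard-format image data"
sig = sign(image)
print(verify(image, sig))          # True: the untouched image verifies
print(verify(image + b"!", sig))   # fails after alteration (with overwhelming
                                   # probability at real key sizes)
```

Because only the camera holds the private exponent, a valid signature is evidence that the image data left the camera unmodified.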
Quantifying plant colour and colour difference as perceived by humans using digital images.
Kendal, Dave; Hauser, Cindy E; Garrard, Georgia E; Jellinek, Sacha; Giljohann, Katherine M; Moore, Joslin L
2013-01-01
Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management.
Integration of USB and firewire cameras in machine vision applications
NASA Astrophysics Data System (ADS)
Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard
1999-08-01
Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital standards and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions through multiple low light levels between 20 and 1,000 photons/pixel per frame to higher light conditions. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
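The per-pixel variability described above is why sCMOS data are usually corrected pixel-by-pixel before quantitative analysis. A toy sketch of the standard offset/gain (flat-field) correction, with simulated calibration maps standing in for maps measured from dark and flat frames:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)                           # toy sensor

# Per-pixel calibration maps, as would be measured from dark frames
# (offset) and uniform flat-field exposures (gain). Values are simulated.
offset = rng.normal(100.0, 2.0, shape)   # ADU offset per pixel
gain = rng.normal(2.0, 0.05, shape)      # ADU per photoelectron, per pixel

def correct(raw):
    """Convert raw ADU to photoelectrons using the per-pixel maps."""
    return (raw - offset) / gain

# Simulate a noiseless uniform exposure of 50 photoelectrons per pixel.
raw = offset + gain * 50.0
print(correct(raw))                      # ≈ 50 everywhere after correction
```

With real data the correction removes fixed-pattern structure but leaves shot noise and per-pixel read noise, which a full statistical model must still account for.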
Generation of high-dynamic range image from digital photo
NASA Astrophysics Data System (ADS)
Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han
2016-10-01
A number of modern applications, such as medical imaging, remote sensing satellite imaging and virtual prototyping, use High Dynamic Range Images (HDRI). Generally, to obtain an HDRI from an ordinary digital image, the camera is calibrated. The article proposes a camera calibration method based on the clear sky as the standard light source, taking sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article considers basic algorithms for obtaining real luminance values from an ordinary digital image and the corresponding programmed implementation of the algorithms. Moreover, examples of HDRI reconstructed from ordinary images illustrate the article.
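In its simplest form, calibrating against a sky patch of known luminance reduces to solving for a scale factor between linearised pixel values and absolute luminance. A hedged sketch of that idea; the gamma value and the reference luminance are illustrative assumptions, not values from the article:

```python
import numpy as np

GAMMA = 2.2  # assumed encoding gamma of the ordinary camera image

def linearise(pixels_8bit):
    """Undo gamma to get values proportional to scene luminance."""
    return (np.asarray(pixels_8bit, dtype=float) / 255.0) ** GAMMA

# Calibration: a sky patch whose absolute luminance is known from the
# CIE sky model for the given place and time (number is illustrative).
sky_pixel_value = 180
sky_luminance_cd_m2 = 8000.0
k = sky_luminance_cd_m2 / linearise(sky_pixel_value)

def to_luminance(pixels_8bit):
    """Map ordinary 8-bit image values to absolute luminance (cd/m^2)."""
    return k * linearise(pixels_8bit)

print(to_luminance([90, 180, 255]))  # the sky patch maps back to 8000 cd/m^2
```

A real pipeline would use the camera's measured response curve rather than a single assumed gamma, and would calibrate per colour channel.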
DIGITAL CARTOGRAPHY OF THE PLANETS: NEW METHODS, ITS STATUS, AND ITS FUTURE.
Batson, R.M.
1987-01-01
A system has been developed that establishes a standardized cartographic database for each of the 19 planets and major satellites that have been explored to date. Compilation of the databases involves both traditional and newly developed digital image processing and mosaicking techniques, including radiometric and geometric corrections of the images. Each database, or digital image model (DIM), is a digital mosaic of spacecraft images that have been radiometrically and geometrically corrected and photometrically modeled. During compilation, ancillary data files such as radiometric calibrations and refined photometric values for all camera lens and filter combinations and refined camera-orientation matrices for all images used in the mapping are produced.
Thermal imagers: from ancient analog video output to state-of-the-art video streaming
NASA Astrophysics Data System (ADS)
Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry
2013-06-01
The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming nowadays being state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged for such a long time are: the very conservative view of the military community, long planning and turn-around times of programs, and a slower growth in the pixel count of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.
Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.
2014-07-01
The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high-performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front-end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back-end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front-end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high-voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front-end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost-efficient, but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations of the mechanics and the cooling system for a full-scale 1764-pixel camera are ongoing. The paper describes the status of the project.
NASA Astrophysics Data System (ADS)
Gamadia, Mark Noel
In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add new features and improve existing ones in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events as undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter based passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately delaying product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieving superior AF performance in both good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun).
Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto focusing approach can be successfully used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
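The search-based passive AF idea described above can be sketched as a coarse-to-fine hill climb over lens positions. This is a toy illustration only: the focus measure, step size, and function names are assumptions, not the dissertation's Filter-Switching AF parameters.

```python
def sharpness(image_row):
    """Band-pass-style focus measure: energy of first differences."""
    return sum((b - a) ** 2 for a, b in zip(image_row, image_row[1:]))

def autofocus(capture, positions, coarse_step=4):
    """Coarse-to-fine hill climb over lens positions.

    capture(pos) returns an image (here a 1-D pixel row) taken at
    that lens position.  A coarse scan brackets the sharpness peak,
    then a fine scan around the best coarse position refines it --
    loosely the idea behind search-based passive AF.
    """
    coarse = positions[::coarse_step]
    best = max(coarse, key=lambda p: sharpness(capture(p)))
    i = positions.index(best)
    lo, hi = max(0, i - coarse_step), min(len(positions), i + coarse_step + 1)
    return max(positions[lo:hi], key=lambda p: sharpness(capture(p)))
```

The dissertation's contribution is precisely the part this sketch hard-codes: deriving the pass-band and step sizes automatically rather than by manual tuning.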
NASA Astrophysics Data System (ADS)
Holland, S. Douglas
1992-09-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
NASA Astrophysics Data System (ADS)
Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for the evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space).
Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.
Advanced digital image archival system using MPEG technologies
NASA Astrophysics Data System (ADS)
Chang, Wo
2009-08-01
Digital information and records are vital to the human race regardless of the nationalities and eras in which they were produced. Digital image content is produced at a rapid pace: from cultural heritage via digitization, scientific and experimental data via high-speed imaging sensors, national defense satellite images from governments, medical and healthcare imaging records from hospitals, and personal photo collections from digital cameras. With these massive amounts of precious and irreplaceable data and knowledge, what standards-based technologies can be applied to preserve them and yet provide an interoperable framework for accessing the data across a variety of systems and devices? This paper presents an advanced digital image archival system that applies the international standard MPEG technologies to preserve digital image content.
Suitability of digital camcorders for virtual reality image data capture
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola; Maas, Hans-Gerd
1998-12-01
Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a way to synchronize multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems would probably be rather expensive (and would make potential users turn back to machine-vision equipment), this functionality could probably be included by the manufacturers at almost zero cost.
Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.
2014-01-01
Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
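One small piece of such a calibration pipeline, neutral-patch white balancing after raw linearization, might look like the sketch below. This is a simplified stand-in under assumptions (the function names are hypothetical, and the paper's scene-specific method fits a fuller color transform against a chart, not just per-channel gains):

```python
def white_balance_gains(gray_patch_rgb):
    """Per-channel gains that map a captured neutral patch to gray.

    A stand-in for one step of color-chart calibration: scale each
    channel so a known-neutral patch comes out equal in R, G, B
    (green is kept as the reference channel, a common convention).
    """
    g = gray_patch_rgb[1]
    return tuple(g / c for c in gray_patch_rgb)

def apply_gains(rgb, gains):
    """Apply the per-channel gains to one linear RGB triple."""
    return tuple(c * k for c, k in zip(rgb, gains))
```

Because the input values are assumed linear in scene radiance, the same gains can be applied to every pixel of the raw image.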
Best practices to optimize intraoperative photography.
Gaujoux, Sébastien; Ceribelli, Cecilia; Goudard, Geoffrey; Khayat, Antoine; Leconte, Mahaut; Massault, Pierre-Philippe; Balagué, Julie; Dousset, Bertrand
2016-04-01
Intraoperative photography is used extensively for communication, research, and teaching. The objective of the present work was to define, using a standardized methodology and a literature review, the best technical conditions for intraoperative photography. Using either a smartphone camera, a bridge camera, or a single-lens reflex (SLR) camera, photographs were taken under various standard conditions by a professional photographer. All images were independently assessed, blinded to the technical conditions, to define the best shooting conditions and methods. For better photographs, an SLR camera with manual settings should be used. Photographs should be centered and taken vertically, orthogonal to the surgical field, with a linear scale to avoid errors in perspective. The shooting distance should be about 75 cm using an 80-100-mm focal lens. Flash should be avoided, and scialytic low-powered light should be used without focus. The operative field should be clean, wet surfaces should be avoided, and metal instruments should be hidden to avoid reflections. For an SLR camera, the ISO speed should be as low as possible, the autofocus area selection mode should be set to single-point AF, the shutter speed should be above 1/100 second, and the aperture should be as narrow as possible, above f/8. For a smartphone, the high dynamic range setting should be used if available; flash, digital filters, effect apps, and digital zoom are not recommended. If a few basic technical rules are known and applied, high-quality photographs can be taken by amateur photographers and fit the standards accepted in clinical practice, academic communication, and publications. Copyright © 2016 Elsevier Inc. All rights reserved.
Phiri, R; Keeffe, J E; Harper, C A; Taylor, H R
2006-08-01
To show that the non-mydriatic retinal camera (NMRC) using Polaroid film is as effective as the NMRC using digital imaging in detecting referrable retinopathy. A series of patients with diabetes attending the eye out-patients department at the Royal Victorian Eye and Ear Hospital had single-field non-mydriatic fundus photographs taken using first a digital and then a Polaroid camera. Dilated 30-degree seven-field stereo fundus photographs were then taken of each eye as the gold standard. The photographs were graded in a masked fashion. Retinopathy levels were defined using the simplified Wisconsin grading system. We used kappa statistics for inter-reader and intra-reader agreement and a generalized linear model to derive the odds ratio. There were 196 participants giving 325 undilated retinal photographs. Of these participants, 111 (57%) were male. The mean age of the patients was 68.8 years. There were 298 eyes with all three sets of photographs from 154 patients. The digital NMRC had a sensitivity of 86.2% [95% confidence interval (CI) 65.8, 95.3], while the Polaroid NMRC had a sensitivity of 84.1% (95% CI 65.5, 93.7). The specificities of the two cameras were identical at 71.2% (95% CI 58.8, 81.1). There was no difference in the ability of the Polaroid and digital cameras to detect referrable retinopathy (odds ratio 1.06, 95% CI 0.80, 1.40, P = 0.68). This study suggests that non-mydriatic retinal photography using Polaroid film is as effective as digital imaging in the detection of referrable retinopathy in countries, such as the USA and Australia, that use the same criterion for referral.
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: taking images that allow three-axis attitude determination and taking multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto-lens camera. Camera video is digitized, compressed, and stored in solid-state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black-and-white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating-point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
ForestCrowns: a transparency estimation tool for digital photographs of forest canopies
Matthew Winn; Jeff Palmer; S.-M. Lee; Philip Araman
2016-01-01
ForestCrowns is a Windows®-based computer program that calculates forest canopy transparency (light transmittance) using ground-based digital photographs taken with standard or hemispherical camera lenses. The software can be used by forest managers and researchers to monitor growth/decline of forest canopies; provide input for leaf area index estimation; measure light...
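The transparency estimate can be illustrated with a simple sky-pixel fraction. This is a rough sketch only: ForestCrowns' actual pixel classification and hemispherical-lens handling are more involved, and the threshold value and function name here are assumptions.

```python
def canopy_transparency(gray_image, sky_threshold=200):
    """Fraction of pixels classified as open sky in an
    upward-looking canopy photograph -- a simple proxy for
    light transmittance through the canopy.

    gray_image is a 2-D list of 0-255 gray levels; pixels at or
    above sky_threshold count as open sky.
    """
    sky = total = 0
    for row in gray_image:
        for px in row:
            total += 1
            if px >= sky_threshold:
                sky += 1
    return sky / total
```

Tracking this fraction across repeated photographs of the same stand is what allows growth or decline of the canopy to be monitored over time.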
Off-axis digital holographic camera for quantitative phase microscopy.
Monemhaghdoust, Zahra; Montfort, Frédéric; Emery, Yves; Depeursinge, Christian; Moser, Christophe
2014-06-01
We propose and experimentally demonstrate a digital holographic camera which can be attached to the camera port of a conventional microscope for obtaining digital holograms in a self-reference configuration, under short coherence illumination and in a single shot. A thick holographic grating filters the beam containing the sample information in two dimensions through diffraction. The filtered beam creates the reference arm of the interferometer. The spatial filtering method, based on the high angular selectivity of the thick grating, reduces the alignment sensitivity to angular displacements compared with pinhole based Fourier filtering. The addition of a thin holographic grating alters the coherence plane tilt introduced by the thick grating so as to create high-visibility interference over the entire field of view. The acquired full-field off-axis holograms are processed to retrieve the amplitude and phase information of the sample. The system produces phase images of cheek cells qualitatively similar to phase images extracted with a standard commercial DHM.
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-03-01
The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame) with the means to select left and right image information. Even with the added stereoscopic capability the appearance of existing camera bodies will be unaltered.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...
Low Noise Camera for Suborbital Science Applications
NASA Technical Reports Server (NTRS)
Hyde, David; Robertson, Bryan; Holloway, Todd
2015-01-01
Low-cost, commercial-off-the-shelf (COTS) based science cameras are intended for lab use only and are not suitable for flight deployment, as they are difficult to ruggedize and repackage into instruments. Also, a COTS implementation may not be suitable, since mission science objectives are tied to specific measurement requirements and often require performance beyond that required by the commercial market. Custom camera development for each application is cost prohibitive for International Space Station (ISS) or midrange science payloads due to nonrecurring expenses ($2,000 K) for ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operating temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to midrange payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.
Software for Managing an Archive of Images
NASA Technical Reports Server (NTRS)
Hallai, Charles; Jones, Helene; Callac, Chris
2003-01-01
The SSC Multimedia Archive is an automated electronic system to manage images, acquired both by film and digital cameras, for the Public Affairs Office (PAO) at Stennis Space Center (SSC). Previously, the image archive was based on film photography and utilized a manual system that, by today's standards, had become inefficient and expensive. Now the SSC Multimedia Archive, based on a server at SSC, contains both catalogs and images for pictures taken digitally and with a traditional film-based camera, along with metadata about each image.
Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish
2018-01-01
Purpose: To describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm
NASA Astrophysics Data System (ADS)
Gao, X.; Li, M.; Xing, L.; Liu, Y.
2018-04-01
We designed a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photos of the same target. A method for the joint calibration of a 3D laser scanner and a digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a distortion model of the digital camera to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera and achieves the joint calibration of the 3D laser scanner and digital camera. The method is shown to be reliable.
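The core of the standard 11-parameter DLT model that such a method builds on can be sketched by the two linear equations each 3D-to-2D correspondence contributes. This is illustrative only (the function name is hypothetical, and the paper additionally iterates with a lens-distortion model); stacking the rows for six or more target points and solving by least squares recovers the 11 DLT coefficients.

```python
def dlt_rows(X, Y, Z, u, v):
    """The two linear equations one 3D->2D correspondence
    (X, Y, Z) -> (u, v) contributes to the standard 11-parameter
    DLT system A*L = b, using the usual convention that the
    denominator is L9*X + L10*Y + L11*Z + 1.
    """
    row_u = [X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]
    row_v = [0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]
    return row_u, row_v, [u, v]
```

As a sanity check, the trivial projection (X, Y, Z) -> (X, Y) corresponds to the parameter vector [1,0,0,0, 0,1,0,0, 0,0,0], and any correspondence consistent with it satisfies both rows exactly.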
Sedgewick, Gerald J.; Ericson, Marna
2015-01-01
Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568
Selecting a digital camera for telemedicine.
Patricoski, Chris; Ferguson, A Stewart
2009-06-01
The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.
Digitized Photography: What You Can Do with It.
ERIC Educational Resources Information Center
Kriss, Jack
1997-01-01
Discusses benefits of digital cameras which allow users to take a picture, store it on a digital disk, and manipulate/export these photos to a print document, Web page, or multimedia presentation. Details features of digital cameras and discusses educational uses. A sidebar presents prices and other information for 12 digital cameras. (AEF)
NASA Astrophysics Data System (ADS)
Moriya, Gentaro; Chikatsu, Hirofumi
2011-07-01
Recently, the pixel counts and functions of consumer-grade digital cameras have increased remarkably thanks to modern semiconductor and digital technology, and many low-priced consumer-grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is in great demand in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy was originally investigated as a star-tracker problem, and many target location algorithms have been proposed. It is widely accepted that least-squares ellipse fitting is the most accurate algorithm. However, several problems remain for efficient digital close-range photogrammetry: reconfirmation of subpixel target location algorithms for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motive, this paper presents an empirical test of several subpixel target location algorithms and an indicator for estimating accuracy, using real data acquired indoors with seven consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
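The least-squares ellipse-fitting algorithm cited as most accurate is too involved to show here, but the underlying idea of subpixel target location can be illustrated with the simpler intensity-weighted centroid. The sketch below is illustrative only (a synthetic Gaussian target with hypothetical geometry), not one of the algorithms actually tested in the paper:

```python
import numpy as np

def subpixel_centroid(img):
    """Intensity-weighted centroid of a bright target on a dark background."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = img.astype(float)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# Synthetic circular target centered at (10.3, 12.7) on a 25 x 25 pixel grid
ys, xs = np.mgrid[0:25, 0:25]
target = np.exp(-((xs - 10.3) ** 2 + (ys - 12.7) ** 2) / 8.0)

cx, cy = subpixel_centroid(target)   # recovers the center to well below a pixel
```

Because the centroid pools intensity over many pixels, its precision is a small fraction of a pixel even though each sample is quantized to the pixel grid.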
ERIC Educational Resources Information Center
Lancor, Rachael; Lancor, Brian
2014-01-01
In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
Digital camera with apparatus for authentication of images produced from an image file
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1993-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.
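The sign-then-verify flow described in the patent can be sketched with a textbook RSA keypair and a SHA-256 image hash. The primes, exponents, and hash reduction below are toy choices for illustration only (the patent does not specify these parameters, and a real camera would embed a cryptographically strong key):

```python
import hashlib

# Toy RSA keypair (textbook sizes, illustration only -- not secure)
p, q = 10007, 10009
n = p * q                              # public modulus
e = 65537                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (embedded in the camera)

def sign(image_bytes):
    """Hash the image file, then encrypt the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(h, d, n)                # the digital signature

def verify(image_bytes, signature):
    """Decrypt the signature with the public key; compare to a fresh hash."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(signature, e, n) == h

image = b"raw image file contents"
sig = sign(image)
ok_original = verify(image, sig)           # True: untouched file authenticates
ok_tampered = verify(image + b"!", sig)    # False: any alteration is detected
```

As the patent notes, even a one-bit change in the file yields a completely different hash, so the decrypted "secure hash" no longer matches.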
Digital Camera with Apparatus for Authentication of Images Produced from an Image File
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1996-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
SLR digital camera for forensic photography
NASA Astrophysics Data System (ADS)
Har, Donghwan; Son, Youngho; Lee, Sungwon
2004-06-01
Forensic photography, which was systematically established in the late 19th century by Alphonse Bertillon of France, has developed a lot for about 100 years. The development will be more accelerated with the development of high technologies, in particular the digital technology. This paper reviews three studies to answer the question: Can the SLR digital camera replace the traditional silver halide type ultraviolet photography and infrared photography? 1. Comparison of relative ultraviolet and infrared sensitivity of SLR digital camera to silver halide photography. 2. How much ultraviolet or infrared sensitivity is improved when removing the UV/IR cutoff filter built in the SLR digital camera? 3. Comparison of relative sensitivity of CCD and CMOS for ultraviolet and infrared. The test result showed that the SLR digital camera has a very low sensitivity for ultraviolet and infrared. The cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing the UV/IR cutoff filter significantly improved the sensitivity for ultraviolet and infrared. Particularly for infrared, the sensitivity of the SLR digital camera was better than that of the silver halide film. This shows the possibility of replacing the silver halide type ultraviolet photography and infrared photography with the SLR digital camera. Thus, the SLR digital camera seems to be useful for forensic photography, which deals with a lot of ultraviolet and infrared photographs.
Clausner, Tommy; Dalal, Sarang S.; Crespo-García, Maité
2017-01-01
The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position. PMID:28559791
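The surface-matching step that brings the photo-based model into MRI space can be illustrated, under the simplifying assumption of known point correspondences, with the classic Kabsch algorithm for rigid alignment; janus3D's actual surface matching is more involved. A minimal sketch on synthetic points:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid rotation R and translation t that best map points P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic "electrode cloud" and a rotated/translated copy standing in for MRI space
rng = np.random.default_rng(0)
P = rng.normal(size=(64, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = kabsch(P, Q)
err = np.abs(P @ R.T + t - Q).max()    # residual after alignment
```

With exact correspondences the recovered transform matches the true one to machine precision; real photo-to-MRI matching must additionally establish correspondences from the rigid facial surface.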
Diagnostic accuracy of chest X-rays acquired using a digital camera for low-cost teleradiology.
Szot, Agnieszka; Jacobson, Francine L; Munn, Samson; Jazayeri, Darius; Nardell, Edward; Harrison, David; Drosten, Ralph; Ohno-Machado, Lucila; Smeaton, Laura M; Fraser, Hamish S F
2004-02-01
Store-and-forward telemedicine, using e-mail to send clinical data and digital images, offers a low-cost alternative for physicians in developing countries to obtain second opinions from specialists. To explore the potential usefulness of this technique, 91 chest X-ray images were photographed using a digital camera and a view box. Four independent readers (three radiologists and one pulmonologist) read two types of digital (JPEG and JPEG2000) and original film images and indicated their confidence in the presence of eight features known to be radiological indicators of tuberculosis (TB). The results were compared to a "gold standard" established by two different radiologists, and assessed using receiver operating characteristic (ROC) curve analysis. There was no statistical difference in the overall performance between the readings from the original films and both types of digital images. The size of JPEG2000 images was approximately 120KB, making this technique feasible for slow internet connections. Our preliminary results show the potential usefulness of this technique particularly for tuberculosis and lung disease, but further studies are required to refine its potential.
ERIC Educational Resources Information Center
Liu, Rong; Unger, John A.; Scullion, Vicki A.
2014-01-01
Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…
Quantification of Soil Redoximorphic Features by Standardized Color Identification
USDA-ARS?s Scientific Manuscript database
Photography has been a welcome tool in assisting to document and convey qualitative soil information. Greater availability of digital cameras with increased information storage capabilities has promoted novel uses of this technology in investigations of water movement patterns, organic matter conte...
Smartphone-based low light detection for bioluminescence application
USDA-ARS?s Scientific Manuscript database
We report a smartphone-based device and associated imaging-processing algorithm to maximize the sensitivity of standard smartphone cameras, that can detect the presence of single-digit pW of radiant flux intensity. The proposed hardware and software, called bioluminescent-based analyte quantitation ...
Lock-in imaging with synchronous digital mirror demodulation
NASA Astrophysics Data System (ADS)
Bush, Michael G.
2010-04-01
Lock-in imaging enables high-contrast imaging in adverse conditions by exploiting a modulated light source and homodyne detection. We report results on a patent-pending lock-in imaging system fabricated from commercial-off-the-shelf parts utilizing standard cameras and a spatial light modulator. By leveraging the capabilities of standard parts we are able to present a low-cost, high-resolution, high-sensitivity camera with applications in search and rescue, identification friend or foe (IFF), and covert surveillance. Different operating modes allow the same instrument to be utilized for dual-band multispectral imaging or high dynamic range imaging, increasing its flexibility in different operational settings.
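Homodyne (lock-in) detection recovers a weak modulated return buried in strong unmodulated background by correlating the measurement with a synchronous reference. A minimal single-pixel sketch with hypothetical sampling parameters (the actual system demodulates a full image):

```python
import numpy as np

fs, f_mod, n = 10_000.0, 100.0, 1_000_000   # sample rate, modulation freq, samples
t = np.arange(n) / fs

signal = 0.01 * np.sin(2 * np.pi * f_mod * t)    # weak modulated return
rng = np.random.default_rng(2)
noisy = signal + rng.normal(scale=0.5, size=n)   # buried in 50x stronger noise

ref = np.sin(2 * np.pi * f_mod * t)              # synchronous reference
amplitude = 2 * np.mean(noisy * ref)             # homodyne demodulation
```

Averaging the product over many modulation periods rejects everything not at the reference frequency and phase, so the 0.01 amplitude is recovered despite the much larger noise floor.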
Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward
2014-01-01
Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.
ERIC Educational Resources Information Center
Mills, David A.; Kelley, Kevin; Jones, Michael
2001-01-01
Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)
Digital Cameras for Student Use.
ERIC Educational Resources Information Center
Simpson, Carol
1997-01-01
Describes the features, equipment and operations of digital cameras and compares three different digital cameras for use in education. Price, technology requirements, features, transfer software, and accessories for the Kodak DC25, Olympus D-200L and Casio QV-100 are presented in a comparison table. (AEF)
High Speed Digital Camera Technology Review
NASA Technical Reports Server (NTRS)
Clements, Sandra D.
2009-01-01
A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.
A model for a PC-based, universal-format, multimedia digitization system: moving beyond the scanner.
McEachen, James C; Cusack, Thomas J; McEachen, John C
2003-08-01
Digitizing images for use in case presentations based on hardcopy films, slides, photographs, negatives, books, and videos can present a challenging task. Scanners and digital cameras have become standard tools of the trade. Unfortunately, use of these devices to digitize multiple images in many different media formats can be a time-consuming and in some cases unachievable process. The authors' goal was to create a PC-based solution for digitizing multiple media formats in a timely fashion while maintaining adequate image presentation quality. The authors' PC-based solution makes use of off-the-shelf hardware, including a digital document camera (DDC), a VHS video player, and a video-editing kit. With the assistance of five staff radiologists, the authors examined the quality of multiple image types digitized with this equipment. The authors also quantified the speed of digitization of various types of media using the DDC and video-editing kit. With regard to image quality, the five staff radiologists rated the digitized angiography, CT, and MR images as adequate to excellent for use in teaching files and case presentations. With regard to digitized plain films, the average rating was adequate. As for performance, the authors recognized a 68% improvement in the time required to digitize hardcopy films using the DDC instead of a professional-quality scanner. The PC-based solution provides a means for digitizing multiple images from many different types of media in a timely fashion while maintaining adequate image presentation quality.
NASA Technical Reports Server (NTRS)
Gradl, Paul
2016-01-01
Paired images were collected using a projected pattern instead of the standard painted speckle pattern on the subject's abdomen. High-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh rate of the projector. To ensure that the kick and movement data were real, a background test with no fetal movement was conducted (to correct for breathing and body motion).
Camera Ready: Capturing a Digital History of Chester
ERIC Educational Resources Information Center
Lehman, Kathy
2008-01-01
Armed with digital cameras, voice recorders, and movie cameras, students from Thomas Dale High School in Chester, Virginia, have been exploring neighborhoods, interviewing residents, and collecting memories of their hometown. In this article, the author describes "Digital History of Chester", a project for creating a commemorative DVD.…
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.
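The two-module pipeline described here, illuminant correction followed by a color matrix transformation, can be sketched as a diagonal white-balance gain followed by a 3x3 matrix. The gains and matrix below are hypothetical placeholders for values a real calibration would supply:

```python
import numpy as np

def color_correct(raw, illuminant_gains, color_matrix):
    """Two-stage pipeline: white balance (diagonal gains), then 3x3 matrix."""
    balanced = raw * illuminant_gains        # per-channel illuminant correction
    return balanced @ color_matrix.T         # sensor space -> standard color space

# Hypothetical calibration values (illustration only)
gains = np.array([1.8, 1.0, 1.4])            # R, G, B white-balance gains
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.5,  1.5]])           # rows sum to 1: white stays white

raw_white = np.array([[1 / 1.8, 1.0, 1 / 1.4]])  # a neutral patch as the sensor sees it
out = color_correct(raw_white, gains, M)         # maps to neutral (1, 1, 1)
```

The coupling the paper exploits is visible here: if the estimated gains are wrong, the matrix multiply amplifies the residual cast, which is why the authors optimize the matrix with the illuminant-estimation errors in mind.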
High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations
NASA Astrophysics Data System (ADS)
Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas
2007-10-01
A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral systems, two multiband cameras (IIS 4-band and Itek 9-band), and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. Aerial films (24 cm x 24 cm format) tested were: Agfa color negative and extended-red (visible and near-infrared) panchromatic; and Kodak color infrared and B&W (visible and near-infrared) infrared. All films were negative-processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8-bit histogram exposure. This resulted in multiple 8-bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density. This provided the best film-density conversion to a digital product while retaining the original film density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques, as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and remotely sensed signal persisted. These errors must be considered endemic and may not be removed through even the most elaborate sampling setup. Results indicate that multispectral photogrammetric systems offer improved feature mapping capability.
Applications of Action Cam Sensors in the Archaeological Yard
NASA Astrophysics Data System (ADS)
Pepe, M.; Ackermann, S.; Fregonese, L.; Fassi, F.; Adami, A.
2018-05-01
In recent years, special digital cameras called "action cameras" or "action cams" have become popular due to their low price, small size, light weight, ruggedness, and ability to record videos and photos even in extreme environmental conditions. These cameras were designed mainly to capture sport action and keep working amid dirt, bumps, underwater, and across a range of external temperatures. High-resolution digital single-lens reflex (DSLR) cameras are usually preferred in the photogrammetric field: beyond sensor resolution, their combination with fixed lenses of low distortion favors accurate 3D measurements. By contrast, action cameras have small wide-angle lenses and lower performance in terms of sensor resolution, lens quality, and distortion. However, given their ability to acquire images under conditions that may be difficult for standard DSLR cameras, and given their lower price, action cameras are worth considering as an interesting approach for documenting the state of an archaeological excavation. In this paper, the influence of lens radial distortion and chromatic aberration on this type of camera in self-calibration mode is investigated, and their application in the field of Cultural Heritage is evaluated and discussed. Using a suitable technique, it was possible to improve the accuracy of the 3D model obtained from action-cam images. Case studies show the quality and utility of this type of sensor in the survey of archaeological artefacts.
Video camera system for locating bullet holes in targets at a ballistics tunnel
NASA Technical Reports Server (NTRS)
Burner, A. W.; Rummler, D. R.; Goad, W. K.
1990-01-01
A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
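Repeatability (one standard deviation of repeated measurements) and bias (mean offset from a known reference) of the kind quoted above can be computed directly from repeated location measurements; the values below are hypothetical, not the system's data:

```python
import numpy as np

def repeatability_and_bias(measurements, reference):
    """1-sigma repeatability and mean bias of repeated location measurements."""
    m = np.asarray(measurements, dtype=float)
    bias = m.mean() - reference        # systematic offset from the true location
    repeatability = m.std(ddof=1)      # one standard deviation, sample estimate
    return repeatability, bias

# Hypothetical repeated measurements (microns) of a hole known to be at 1000.0 um
shots = [1003.0, 1005.0, 1001.0, 1004.0, 1002.0]
r, b = repeatability_and_bias(shots, 1000.0)   # r ~ 1.58 um, b = 3.0 um
```

Separating the two figures matters because bias can be removed by calibration while repeatability sets the floor on shot-to-shot precision.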
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three dimensional digital image correlation (3D-DIC) has been widely used by industry to measure the 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, similarly to the conventional 3D-DIC system, the center of the two views stands in the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated by the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
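The color-domain separation of the two overlapped views can be sketched as reading the stereo views out of separate color channels of a single frame, each at the sensor's full spatial resolution. The channel assignment below is a simplified assumption (one view coded in red, one in blue; a real capture would also need cross-talk correction):

```python
import numpy as np

# Hypothetical overlapped capture on one color sensor
h, w = 4, 6
view_red = np.full((h, w), 0.7)       # stereo view 1, carried by the R channel
view_blue = np.full((h, w), 0.3)      # stereo view 2, carried by the B channel

overlapped = np.zeros((h, w, 3))
overlapped[..., 0] = view_red
overlapped[..., 2] = view_blue

# Separation in the color domain: each view recovered at full resolution
recovered_1 = overlapped[..., 0]
recovered_2 = overlapped[..., 2]
```

Unlike the conventional split-sensor pseudo-stereo layout, both recovered views span the whole chip, which is the resolution advantage the paper claims.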
NASA Astrophysics Data System (ADS)
Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark
2013-05-01
The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3 kg MSIC is a self-contained, compact, variable-configuration, low-cost, real-time precision metadata annotator with embedded INS/GPS, designed specifically for use in small aircraft. The MSIC was designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance their capability by providing a direct transformation of these COTS sensor tools into low-cost real-time aerial mapping systems.
Single chip camera active pixel sensor
NASA Technical Reports Server (NTRS)
Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)
2003-01-01
A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.
Selecting the right digital camera for telemedicine-choice for 2009.
Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret
2010-03-01
Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.
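The feature-prioritization step of a review process like this can be illustrated with a simple weighted scoring matrix; the criteria, weights, scores, and camera names below are hypothetical, not the study's actual data:

```python
# Hypothetical weighted scoring of candidate cameras against prioritized criteria
weights = {"color accuracy": 0.35, "detail": 0.30, "ease of use": 0.20, "macro": 0.15}
scores = {
    "Camera A": {"color accuracy": 9, "detail": 8, "ease of use": 7, "macro": 8},
    "Camera B": {"color accuracy": 7, "detail": 9, "ease of use": 9, "macro": 6},
}

# Weighted total per camera; the highest total is the recommendation
totals = {cam: sum(weights[c] * s[c] for c in weights) for cam, s in scores.items()}
best = max(totals, key=totals.get)
```

Weighting encodes the program's priorities (here, image quality over convenience), which is how a camera that loses on ease of use can still win overall.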
Using Digital Imaging in Classroom and Outdoor Activities.
ERIC Educational Resources Information Center
Thomasson, Joseph R.
2002-01-01
Explains how to use digital cameras and related basic equipment during indoor and outdoor activities. Uses digital imaging in general botany class to identify unknown fungus samples. Explains how to select a digital camera and other necessary equipment. (YDS)
Issues in implementing services for a wireless web-enabled digital camera
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas
2001-05-01
The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-appliance-based communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services as well as the need to download software upgrades to the camera is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications in the deployment of recurring revenue services and enterprise connectivity of a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to the enterprise systems.
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
Spectral colors capture and reproduction based on digital camera
NASA Astrophysics Data System (ADS)
Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang
2018-01-01
The purpose of this work is to develop a method for the accurate reproduction of the spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera device and the CIEXYZ color space. This study also provides a basis for further studies related to spectral color reproduction on digital devices. In this paper, methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained at each location: one calculated using the grating equation and one measured by a spectrophotometer. The polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. According to the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of the spectral colors in digital devices such as displays and transmission.
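The camera-characterization step described in this abstract can be sketched in code. The following is an illustrative reconstruction, not the authors' implementation: the second-order polynomial term set and all training values are assumptions, and the data are synthetic.

```python
import numpy as np

def poly_terms(rgb):
    """Second-order polynomial expansion of (N, 3) RGB samples."""
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r*r, g*g, b*b, r*g, r*b, g*b])

def characterize(rgb, xyz):
    """Least-squares fit of a polynomial map from camera RGB to CIEXYZ."""
    M, *_ = np.linalg.lstsq(poly_terms(rgb), xyz, rcond=None)
    return M

# Synthetic example: the ground truth is itself a polynomial map,
# so the fit should recover it almost exactly.
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, (200, 3))
true_M = rng.normal(size=(10, 3))          # assumed "true" mapping
xyz = poly_terms(rgb) @ true_M
M = characterize(rgb, xyz)
err = np.abs(poly_terms(rgb) @ M - xyz).max()
print(f"max fit error: {err:.2e}")
```

With real data, `rgb` would come from patches photographed by the camera and `xyz` from a spectrophotometer, and the residual would reflect the reported average color difference rather than vanish.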
Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori
2011-01-01
In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle that was fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation about each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of only the 6D couch and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
Synchronous high speed multi-point velocity profile measurement by heterodyne interferometry
NASA Astrophysics Data System (ADS)
Hou, Xueqin; Xiao, Wen; Chen, Zonghui; Qin, Xiaodong; Pan, Feng
2017-02-01
This paper presents a synchronous multipoint velocity profile measurement system, which acquires the vibration velocities as well as images of vibrating objects by combining optical heterodyne interferometry and a high-speed CMOS-DVR camera. The high-speed CMOS-DVR camera records a sequence of images of the vibrating object. Then, by extracting and processing multiple pixels at the same time, a digital demodulation technique is implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. This method is validated with an experiment. A piezoelectric ceramic plate with standard vibration characteristics is used as the vibrating target, which is driven by a standard sinusoidal signal.
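The per-pixel demodulation described above can be illustrated with a small simulation. This is a hedged sketch, not the authors' implementation: the frame rate, carrier frequency, wavelength, and vibration parameters are all assumed values chosen for the example.

```python
import numpy as np

fs = 200_000.0            # high-speed camera frame rate, Hz (assumed)
fc = 40_000.0             # heterodyne carrier frequency, Hz (assumed)
lam = 632.8e-9            # laser wavelength, m (assumed HeNe)
n = 4000                  # 20 ms record: integer cycles of carrier and vibration
t = np.arange(n) / fs

# Target vibrates at 1 kHz with 50 nm amplitude; displacement x(t)
# phase-modulates the carrier by 4*pi*x/lambda (reflection geometry).
fv, amp = 1_000.0, 50e-9
x = amp * np.sin(2 * np.pi * fv * t)
intensity = np.cos(2 * np.pi * fc * t + 4 * np.pi * x / lam)

# Quadrature demodulation: mix to baseband, low-pass in the Fourier
# domain, then take the unwrapped phase and differentiate.
mixed = intensity * np.exp(-2j * np.pi * fc * t)
spec = np.fft.fft(mixed)
spec[np.abs(np.fft.fftfreq(n, 1 / fs)) > 5_000] = 0.0   # keep baseband only
demod_phase = np.unwrap(np.angle(np.fft.ifft(spec)))
velocity = np.gradient(demod_phase, t) * lam / (4 * np.pi)

peak_v = np.max(np.abs(velocity))
expected = 2 * np.pi * fv * amp          # peak velocity of x(t)
print(f"recovered peak velocity {peak_v:.3e} m/s (expected {expected:.3e})")
```

In the multi-point case the same demodulation would simply be applied to the time series of each selected pixel in the recorded image sequence.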
Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment
NASA Astrophysics Data System (ADS)
Helmholz, P.; Long, J.; Munsie, T.; Belton, D.
2016-06-01
Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium to underwater can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The tests included controlled handling, where the camera was simply dunked into the water tank, using 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as being removed from the waterproof case, using 12 MP resolution. The tests showed that camera stability was achieved, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, a largest rms value of only 0.450 mm and a largest maximum residual of only 2.5 mm. For the 12 MP test series the maximum rms value is 0.653 mm.
Overview of Digital Forensics Algorithms in Dslr Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread usage of mobile technologies and the improvement of digital photo devices have led to more frequent cases of falsification of images, including in judicial practice. Consequently, an actual task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and improving image formation algorithms. Most research in this area is based on the observation that the extraction of a unique sensor trace of a DSLR camera is possible at a certain stage of the imaging process in the camera. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
ERIC Educational Resources Information Center
Kuntz, Jeffrey J.; Snyder, John
2004-01-01
This article describes how one substitute teacher traveling the United States as a meet intern with USA Track and Field, a classroom teacher with an eager group of fifth graders, one stuffed Punxsy Phil groundhog, the Pennsylvania Academic Standards and a digital camera combined to form a collaborative classroom travel project entitled,…
Center for Coastline Security Technology, Year 3
2008-05-01
Excerpt of section and figure titles: Polarization control for 3D imaging with the Sony SRX-R105 digital cinema projectors; 3.4 HDMAX camera and Sony SRX-R105 projector configuration for 3D; ... HDMAX camera pair; Figure 3.2 Sony SRX-R105 digital cinema projector; Figure 3.3 Effect of camera rotation on projected overlay image; Figure 3.4 ... system that combines a pair of FAU's HDMAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection.
Validation of Smartphone Based Retinal Photography for Diabetic Retinopathy Screening.
Rajalakshmi, Ramachandran; Arulmalar, Subramanian; Usha, Manoharan; Prathiba, Vijayaraghavan; Kareemuddin, Khaji Syed; Anjana, Ranjit Mohan; Mohan, Viswanathan
2015-01-01
To evaluate the sensitivity and specificity of the 'fundus on phone' (FOP) camera, a smartphone-based retinal imaging system, as a screening tool for diabetic retinopathy (DR) detection and DR severity in comparison with 7-standard field digital retinal photography. Single-site, prospective, comparative, instrument validation study. 301 patients (602 eyes) with type 2 diabetes underwent standard seven-field digital fundus photography with both a Carl Zeiss fundus camera and the indigenous FOP at a tertiary care diabetes centre in South India. Grading of DR was performed by two independent retina specialists using the modified Early Treatment of Diabetic Retinopathy Study grading system. Sight-threatening DR (STDR) was defined by the presence of proliferative DR (PDR) or diabetic macular edema. The sensitivity, specificity and image quality were assessed. The mean age of the participants was 53.5±9.6 years and mean duration of diabetes 12.5±7.3 years. The Zeiss camera showed that 43.9% had non-proliferative DR (NPDR) and 15.3% had PDR, while the FOP camera showed that 40.2% had NPDR and 15.3% had PDR. The sensitivity and specificity for detecting any DR by FOP were 92.7% (95% CI 87.8-96.1) and 98.4% (95% CI 94.3-99.8) respectively, and the kappa (κ) agreement was 0.90 (95% CI 0.85-0.95, p<0.001), while for STDR the sensitivity was 87.9% (95% CI 83.2-92.9), specificity 94.9% (95% CI 89.7-98.2), and κ agreement 0.80 (95% CI 0.71-0.89, p<0.001), compared to conventional photography. Retinal photography using the FOP camera is effective for screening and diagnosis of DR and STDR with high sensitivity and specificity, and has substantial agreement with conventional retinal photography.
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
Imagers for digital still photography
NASA Astrophysics Data System (ADS)
Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge
2006-04-01
This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.
NASA Astrophysics Data System (ADS)
Sampat, Nitin; Grim, John F.; O'Hara, James E.
1998-04-01
The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device and not the printer. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and an ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The corrected images were also visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better looking image. We also discuss the problems we encountered in implementing this technique.
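Applying per-channel transfer curves of the kind described above amounts to running each channel through a lookup table. The sketch below is illustrative only: the gamma-shaped curves are assumptions, not the curves derived in the paper.

```python
import numpy as np

def make_curve(gamma):
    """256-entry LUT mapping 8-bit code values through a gamma curve."""
    x = np.arange(256) / 255.0
    return np.clip(np.rint(255 * x ** gamma), 0, 255).astype(np.uint8)

def apply_curves(img, curves):
    """img: (H, W, 3) uint8 image; curves: three 256-entry LUTs."""
    out = np.empty_like(img)
    for c in range(3):
        out[..., c] = curves[c][img[..., c]]
    return out

# Example: lift shadows slightly in red and green, leave blue unchanged.
img = np.full((2, 2, 3), 64, dtype=np.uint8)
curves = [make_curve(0.9), make_curve(0.9), make_curve(1.0)]
corrected = apply_curves(img, curves)
print(corrected[0, 0])
```

In a workflow like the paper's, the three LUTs would be derived (e.g. in Photoshop) per camera-printer pair rather than chosen analytically.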
2016-01-01
Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing scheme for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and the distribution of quantitative diagnostic and environmental tests. PMID:26900709
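The core of a ratiometric readout is that each compartment is called positive or negative from the ratio of two color channels rather than from absolute intensity, so global changes in illumination or camera gain cancel out. The sketch below is a toy illustration of that idea; the channel values and threshold are invented, not taken from the paper.

```python
import numpy as np

def ratiometric_call(red, green, threshold=1.2):
    """Return True (positive) where the red/green ratio exceeds threshold."""
    return red / green > threshold

# Hypothetical mean channel intensities for four compartments.
wells_red   = np.array([80.0, 150.0, 90.0, 200.0])
wells_green = np.array([100.0, 100.0, 100.0, 100.0])
calls = ratiometric_call(wells_red, wells_green)

# Halving overall brightness (dimmer lighting, different camera gain)
# scales both channels equally, so the calls are unchanged.
calls_dim = ratiometric_call(wells_red * 0.5, wells_green * 0.5)
print(calls, np.array_equal(calls, calls_dim))
```

The final digital count is then simply the number of positive compartments, from which target concentration can be estimated.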
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
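The distortion-parameter extraction underlying this fingerprint can be sketched with the standard polynomial radial model. This is an illustrative reconstruction under assumed values, not the paper's pipeline, and the downstream SVM classification step is not shown: the recovered coefficients (k1, k2) are the kind of per-camera features such a classifier would consume.

```python
import numpy as np

def estimate_distortion(pts_ideal, pts_observed):
    """Fit r_d = r(1 + k1*r^2 + k2*r^4) by least squares.

    pts: (N, 2) arrays of image coordinates with origin at the image center.
    """
    r = np.linalg.norm(pts_ideal, axis=1)
    r_d = np.linalg.norm(pts_observed, axis=1)
    # r_d - r = k1*r^3 + k2*r^5  ->  linear in (k1, k2)
    A = np.column_stack([r**3, r**5])
    k, *_ = np.linalg.lstsq(A, r_d - r, rcond=None)
    return k

# Synthetic example: distort points with an assumed camera fingerprint
# and check that the coefficients are recovered.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (100, 2))          # normalized coordinates
k_true = np.array([-0.12, 0.03])            # assumed per-camera coefficients
r = np.linalg.norm(pts, axis=1)
scale = 1 + k_true[0] * r**2 + k_true[1] * r**4
pts_dist = pts * scale[:, None]
k_est = estimate_distortion(pts, pts_dist)
print(k_est)
```

With real images the "ideal" points would come from known-straight scene structure rather than being given, which is where most of the practical difficulty lies.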
Making Connections with Digital Data
ERIC Educational Resources Information Center
Leonard, William; Bassett, Rick; Clinger, Alicia; Edmondson, Elizabeth; Horton, Robert
2004-01-01
State-of-the-art digital cameras open up enormous possibilities in the science classroom, especially when used as data collectors. Because most high school students are not fully formal thinkers, the digital camera can provide a much richer learning experience than traditional observation. Data taken through digital images can make the…
Cost-effective poster and print production with digital camera and computer technology.
Chen, M Y; Ott, D J; Rohde, R P; Henson, E; Gelfand, D W; Boehme, J M
1997-10-01
The purpose of this report is to describe a cost-effective method for producing black-and-white prints and color posters within a radiology department. Using a high-resolution digital camera, personal computer, and color printer, the average cost of a 5 x 7 inch (12.5 x 17.5 cm) black-and-white print may be reduced from $8.50 to $1 each in our institution. The average cost for a color print (8.5 x 14 inch [21.3 x 35 cm]) varies from $2 to $3 per sheet depending on the selection of ribbons for a color-capable laser printer and the paper used. For a 30-panel, 4 x 8 foot (1.2 x 2.4 m) standard-sized poster, the cost for materials and construction is approximately $100.
Evaluating video digitizer errors
NASA Astrophysics Data System (ADS)
Peterson, C.
2016-01-01
Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.
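The position errors described above can be illustrated with a toy model of sync-timing jitter. This simulation uses an assumed jitter magnitude and a made-up hot-pixel profile; it is not derived from the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels = 640
x = np.arange(n_pixels, dtype=float)

def scanline(t):
    """Analog scanline with one 'hot pixel' at column 300 (narrow Gaussian)."""
    return np.exp(-0.5 * ((t - 300.0) / 1.5) ** 2)

# Digitize many frames; each line-start sync carries sub-pixel timing
# error, shifting the reconstructed sampling grid (assumed std 0.3 px).
centroids = []
for _ in range(500):
    jitter = rng.normal(0.0, 0.3)
    samples = scanline(x + jitter)          # mistimed sampling grid
    centroids.append(np.sum(x * samples) / np.sum(samples))
centroids = np.asarray(centroids)
print(f"mean {centroids.mean():.3f}, std {centroids.std():.3f} px")
```

The measured centroid scatter directly mirrors the injected sync jitter, which is exactly the kind of frame-to-frame position error a hot-pixel analysis of a real digitizer would reveal.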
NASA Technical Reports Server (NTRS)
Stefanov, William L.; Lee, Yeon Jin; Dille, Michael
2016-01-01
Handheld astronaut photography of the Earth has been collected from the International Space Station (ISS) since 2000, making it the most temporally extensive remotely sensed dataset from this unique Low Earth orbital platform. Exclusive use of digital handheld cameras to perform Earth observations from the ISS began in 2004. Nadir viewing imagery is constrained by the inclined equatorial orbit of the ISS to between 51.6 degrees North and South latitude; however, numerous oblique images of land surfaces above these latitudes are included in the dataset. While unmodified commercial off-the-shelf digital cameras provide only visible-wavelength, three-band spectral information of limited quality, current cameras used with long (400+ mm) lenses can obtain high-quality spatial information approaching 2 meters/pixel ground resolution. The dataset is freely available online at the Gateway to Astronaut Photography of Earth site (http://eol.jsc.nasa.gov), and now comprises over 2 million images. Despite this extensive image catalog, use of the data for scientific research, disaster response, commercial applications and visualizations is minimal in comparison to other data collected from free-flying satellite platforms such as Landsat, Worldview, etc. This is due primarily to the lack of fully-georeferenced data products - while current digital cameras typically have integrated GPS, this does not function in the Low Earth Orbit environment. The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides training in Earth Science topics to ISS crews, performs daily operations and Earth observation target delivery to crews through the Crew Earth Observations (CEO) Facility on board ISS, and also catalogs digital handheld imagery acquired from orbit by manually adding descriptive metadata and determining an image geographic centerpoint using visual feature matching with other georeferenced data, e.g. Landsat, Google Earth, etc.
The lack of full geolocation information native to the data makes it difficult to integrate astronaut photographs with other georeferenced data to facilitate quantitative analysis such as urban land cover/land use classification, change detection, or geologic mapping. The manual determination of image centerpoints is both time and labor-intensive, leading to delays in releasing geolocated and cataloged data to the public, such as the timely use of data for disaster response. The GeoCam Space project was funded by the ISS Program in 2015 to develop an on-orbit hardware and ground-based software system for increasing the efficiency of geolocating astronaut photographs from the ISS (Fig. 1). The Intelligent Robotics Group at NASA Ames Research Center leads the development of both the ground and on-orbit systems in collaboration with the ESRS Unit. The hardware component consists of modified smartphone elements including cameras, central processing unit, wireless Ethernet, and an inertial measurement unit (gyroscopes/accelerometers/magnetometers) reconfigured into a compact unit that attaches to the base of the current Nikon D4 camera - and its replacement, the Nikon D5 - and connects using the standard Nikon peripheral connector or USB port. This provides secondary, side and downward facing cameras perpendicular to the primary camera pointing direction. The secondary cameras observe calibration targets with known internal X, Y, and Z position affixed to the interior of the ISS to determine the camera pose corresponding to each image frame. This information is recorded by the GeoCam Space unit and indexed for correlation to the camera time recorded for each image frame. Data - image, EXIF header, and camera pose information - is transmitted to the ground software system (GeoRef) using the established Ku-band USOS downlink system. Following integration on the ground, the camera pose information provides an initial geolocation estimate for the individual film frame. 
This new capability represents a significant advance in geolocation from the manual feature-matching approach for both nadir and off-nadir viewing imagery. With the initial geolocation estimate, full georeferencing of an image is completed using the rapid tie-pointing interface in GeoRef, and the resulting data is added to the Gateway to Astronaut Photography of Earth online database in both Geotiff and Keyhole Markup Language (kml) formats. The integration of the GeoRef software component of Geocam Space into the CEO image cataloging workflow is complete, and disaster response imagery acquired by the ISS crew is now fully georeferenced as a standard data product. The on-orbit hardware component (GeoSens) is in final prototyping phase, and is on-schedule for launch to the ISS in late 2016. Installation and routine use of the Geocam Space system for handheld digital camera photography from the ISS is expected to significantly improve the usefulness of this unique dataset for a variety of public- and private-sector applications.
Use of Standardized, Quantitative Digital Photography in a Multicenter Web-based Study
Molnar, Joseph A.; Lew, Wesley K.; Rapp, Derek A.; Gordon, E. Stanley; Voignier, Denise; Rushing, Scott; Willner, William
2009-01-01
Objective: We developed a Web-based, blinded, prospective, randomized, multicenter trial, using standardized digital photography to clinically evaluate hand burn depth and accurately determine wound area with digital planimetry. Methods: Photos in each center were taken with identical digital cameras with standardized settings on a custom backdrop developed at Wake Forest University containing a gray, white, black, and centimeter scale. The images were downloaded, transferred via the Web, and stored on servers at the principal investigator's home institution. Color adjustments to each photo were made using Adobe Photoshop 6.0 (Adobe, San Jose, Calif). In an initial pilot study, model hands marked with circles of known areas were used to determine the accuracy of the planimetry technique. Two-dimensional digital planimetry using SigmaScan Pro 5.0 (SPSS Science, Chicago, Ill) was used to calculate wound area from the digital images. Results: Digital photography is a simple and cost-effective method for quantifying wound size when used in conjunction with digital planimetry (SigmaScan) and photo enhancement (Adobe Photoshop) programs. The accuracy of the SigmaScan program in calculating predetermined areas was within 4.7% (95% CI, 3.4%–5.9%). Dorsal hand burns of the initial 20 patients in a national study involving several centers were evaluated with this technique. Images obtained by individuals denying experience in photography proved reliable and useful for clinical evaluation and quantification of wound area. Conclusion: Standardized digital photography may be used quantitatively in a Web-based, multicenter trial of burn care. This technique could be modified for other medical studies with visual endpoints. PMID:19212431
Use of standardized, quantitative digital photography in a multicenter Web-based study.
Molnar, Joseph A; Lew, Wesley K; Rapp, Derek A; Gordon, E Stanley; Voignier, Denise; Rushing, Scott; Willner, William
2009-01-01
We developed a Web-based, blinded, prospective, randomized, multicenter trial, using standardized digital photography to clinically evaluate hand burn depth and accurately determine wound area with digital planimetry. Photos in each center were taken with identical digital cameras with standardized settings on a custom backdrop developed at Wake Forest University containing a gray, white, black, and centimeter scale. The images were downloaded, transferred via the Web, and stored on servers at the principal investigator's home institution. Color adjustments to each photo were made using Adobe Photoshop 6.0 (Adobe, San Jose, Calif). In an initial pilot study, model hands marked with circles of known areas were used to determine the accuracy of the planimetry technique. Two-dimensional digital planimetry using SigmaScan Pro 5.0 (SPSS Science, Chicago, Ill) was used to calculate wound area from the digital images. Digital photography is a simple and cost-effective method for quantifying wound size when used in conjunction with digital planimetry (SigmaScan) and photo enhancement (Adobe Photoshop) programs. The accuracy of the SigmaScan program in calculating predetermined areas was within 4.7% (95% CI, 3.4%-5.9%). Dorsal hand burns of the initial 20 patients in a national study involving several centers were evaluated with this technique. Images obtained by individuals denying experience in photography proved reliable and useful for clinical evaluation and quantification of wound area. Standardized digital photography may be used quantitatively in a Web-based, multicenter trial of burn care. This technique could be modified for other medical studies with visual endpoints.
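The planimetry arithmetic behind both versions of this study is simple once the centimeter scale in the photo has been used to calibrate pixel size: area is the pixel count of the traced region times the area of one pixel. The sketch below is an illustration with made-up numbers, not the SigmaScan workflow itself.

```python
import numpy as np

def planimetry_area_cm2(mask, pixels_per_cm):
    """Area of a traced region.

    mask: boolean array marking the traced wound region.
    pixels_per_cm: calibration from the centimeter scale in the photo.
    """
    return mask.sum() / pixels_per_cm**2

# Example: a 50 x 40 px rectangle at 20 px/cm is 2.5 cm x 2.0 cm = 5 cm^2.
mask = np.zeros((480, 640), dtype=bool)
mask[100:140, 200:250] = True
area = planimetry_area_cm2(mask, pixels_per_cm=20.0)
print(f"{area:.2f} cm^2")
```

The study's reported ~4.7% accuracy bound would then come mostly from tracing and perspective errors, not from this per-pixel arithmetic.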
Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter
2017-01-01
Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding, and therefore hampering progress. An assessment is made on whether a low cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images of a professional setup were compared with the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease-of-use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years extensive studies have been carried out to apply coherent optics methods in real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for the signals of three-dimensional images. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods for processing "digital holograms" for Internet transmission, along with results.
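The phase-shift interferometry mentioned above has a standard four-step form: record four interferograms with the reference phase stepped by π/2 and recover the object phase per pixel with an arctangent. The sketch below demonstrates that formula on a synthetic phase map; the background and modulation values are assumptions for the simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
phi = rng.uniform(-np.pi, np.pi, (64, 64))    # unknown object phase map
a, b = 0.6, 0.4                               # assumed background / modulation

# Four interferograms with reference phase steps of 0, pi/2, pi, 3pi/2.
I0, I1, I2, I3 = (a + b * np.cos(phi + k * np.pi / 2) for k in range(4))

# Standard four-step recovery: phi = atan2(I3 - I1, I0 - I2),
# since I3 - I1 = 2b*sin(phi) and I0 - I2 = 2b*cos(phi).
phi_rec = np.arctan2(I3 - I1, I0 - I2)
print(f"max phase error: {np.abs(phi_rec - phi).max():.2e} rad")
```

The recovered per-pixel phase (together with amplitude) is what gets stored as the "digital hologram" and transmitted over conventional networks.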
Resolution for color photography
NASA Astrophysics Data System (ADS)
Hubel, Paul M.; Bautsch, Markus
2006-02-01
Although it is well known that luminance resolution is most important, the ability to accurately render colored details, color textures, and colored fabrics cannot be overlooked. This includes the ability to accurately render single-pixel color details as well as to avoid color aliasing. All consumer digital cameras on the market today record in color, and the scenes people photograph are usually in color. Yet almost all resolution measurements made on color cameras use a black-and-white target. In this paper we present several methods for measuring and quantifying color resolution. The first method, detailed in a previous publication, uses a slanted-edge target of two colored surfaces in place of the standard black-and-white edge pattern. The second method employs the standard black-and-white targets recommended in the ISO standard, but records them onto the camera through colored filters, thus giving modulation between black and one particular color component: red, green, and blue color separation filters are used in this study. The third method, conducted at Stiftung Warentest, an independent consumer organization in Germany, uses a white-light interferometer to generate fringe-pattern targets of varying color and spatial frequency.
A digital gigapixel large-format tile-scan camera.
Ben-Ezra, M
2011-01-01
Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and can thus provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
Low-cost digital dynamic visualization system
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
1995-05-01
High-speed photographic systems such as the image-rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film, requiring time-consuming and tedious wet processing. Digital cameras are currently replacing conventional cameras to a certain extent for static experiments. Recently, there has been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Here we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications to solid as well as fluid impact problems are presented.
Sensitivity and specificity of digital retinal imaging for screening diabetic retinopathy.
Lopez-Bastida, J; Cabrera-Lopez, F; Serrano-Aguilar, P
2007-04-01
To assess the effectiveness of a non-mydriatic digital camera (45°-30° photographs) compared with the reference method for screening diabetic retinopathy. Type 1 and 2 diabetic patients (n = 773; 1546 eyes) underwent screening for diabetic retinopathy in a prospective observational study. Hospital-based non-mydriatic digital retinal imaging by a consultant specialist in retinal diseases was compared with slit-lamp biomicroscopy and indirect ophthalmoscopy through dilated pupils, as a gold standard, previously performed in a community health centre by another consultant specialist in retinal diseases. The main outcome measures were sensitivity and specificity of screening methods and prevalence of diabetic retinopathy. The prevalence of any form of diabetic retinopathy was 42.4% (n = 328); the prevalence of sight-threatening retinopathy, including macular oedema and proliferative retinopathy, was 9.6% (n = 74). Sensitivity of detection of any diabetic retinopathy by digital imaging was 92% (95% confidence interval 90, 94). Specificity of detection of any diabetic retinopathy was 96% (95, 98). The predictive value of a negative test was 94% and of a positive test 95%. For sight-threatening retinopathy, digital imaging had a sensitivity of 100%. A high sensitivity and specificity are essential for an effective screening programme. These results confirm digital retinal imaging with a non-mydriatic camera as an effective option in community-based screening programmes for diabetic retinopathy.
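The headline numbers in this abstract (sensitivity, specificity, and the predictive values) all derive from a 2x2 confusion table. A minimal sketch, with illustrative counts rather than the study's raw data:

```python
# Screening-test metrics from a 2x2 confusion table. The counts passed
# in below are made up for illustration, not the study's data.

def screening_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV as fractions."""
    sensitivity = tp / (tp + fn)   # true positives among diseased
    specificity = tn / (tn + fp)   # true negatives among healthy
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = screening_metrics(tp=90, fp=4, fn=8, tn=96)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```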
Methods for identification of images acquired with digital cameras
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki
2001-02-01
From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
Assessment of skin wound healing with a multi-aperture camera
NASA Astrophysics Data System (ADS)
Nabili, Marjan; Libin, Alex; Kim, Loan; Groah, Susan; Ramella-Roman, Jessica C.
2009-02-01
A clinical trial was conducted at the National Rehabilitation Hospital on 15 individuals to assess whether Rheparan Skin, a bio-engineered component of the extracellular matrix of the skin, is effective at promoting healing of a variety of wounds. Along with standard clinical outcome measures, a spectroscopic camera was used to assess the efficacy of Rheparan Skin. Gauzes soaked with Rheparan Skin were placed on the volunteers' wounds for 5 minutes twice weekly for four weeks. Images of the wounds were taken with a multispectral camera and a digital camera at baseline and weekly thereafter. Spectral images collected at different wavelengths were combined with optical skin models to quantify parameters of interest such as oxygen saturation (SO2), water content, and melanin concentration. A digital wound measurement system (VERG) was also used to measure the size of the wound. 9 of the 15 measured subjects showed a definitive improvement post treatment in the form of a decrease in wound area. 7 of these 9 individuals also showed an increase in oxygen saturation in the ulcerated area during the trial. A similar trend was seen in other metrics. Spectral imaging of skin wounds can be a valuable tool to establish wound-healing trends and to clarify healing mechanisms.
Edge directed image interpolation with Bamberger pyramids
NASA Astrophysics Data System (ADS)
Rosiles, Jose Gerardo
2005-08-01
Image interpolation is a standard feature in digital image editing software, digital camera systems, and printers. Classical methods for resizing produce blurred images with unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm that takes advantage of simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both visual and numerical points of view.
A Review Of Oculoplastic Photography: A Guide For Clinician Photographers
Yap, Jun Fai; Wai, Yong Zheng; Ng, Qi Xiong
2016-01-01
Clinical photography in the field of oculoplastic surgery has many applications. It is possible for clinicians to obtain standardized clinical photographs without a studio. A clinician photographer has the advantage of knowing exactly what to photograph as well as having immediate access to the images. In order to maintain standardization in the photographs, the photographic settings should remain constant. This article covers essential photographic equipment, camera settings, patient pose, and digital asset management. PMID:27630805
Duangsang, Suampa; Tengtrisorn, Supaporn
2012-05-01
To determine the normal range of the Central Corneal Light Reflex Ratio (CCLRR) from photographs of young adults. A digital camera equipped with a telephoto lens, with a flash attachment placed directly above the lens, was used to obtain corneal light reflex photographs of 104 subjects, first with the subject fixating on the lens of the camera at a distance of 43 centimeters, and then while looking past the camera to a wall at a distance of 5.4 meters. Digital images were displayed using Adobe Photoshop at a magnification of 1200%. The CCLRR was the ratio of the sum of the distances between the inner margin of the cornea and the central corneal light reflex of each eye to the sum of the horizontal corneal diameters of each eye. Measurements were made by three technicians on all subjects, and repeated on a 16% (n=17) subsample. Mean ratios (standard deviation, SD) from near/distance measurements were 0.468 (0.012)/0.452 (0.019). Limits of the normal range, with 95% certainty, were 0.448 and 0.488 for near measurements and 0.419 and 0.484 for distance measurements. Lower and upper indeterminate zones were 0.440-0.447 and 0.489-0.497 for near measurements and 0.406-0.418 and 0.485-0.497 for distance measurements. More extreme values can be considered abnormal. The reproducibility and repeatability of the test were good. This method is easy to perform and has potential for use in strabismus screening by paramedical personnel.
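The ratio definition and its normal/indeterminate cut-offs lend themselves to a small helper. A hedged sketch: the distance inputs below are hypothetical pixel measurements, while the near-fixation limits are those reported in the abstract.

```python
# CCLRR as defined above: summed inner-margin-to-reflex distances over
# summed horizontal corneal diameters. Inputs are hypothetical.

def cclrr(reflex_dist_right, reflex_dist_left, diam_right, diam_left):
    """Ratio of summed inner-margin-to-reflex distances to summed
    horizontal corneal diameters (any consistent unit)."""
    return (reflex_dist_right + reflex_dist_left) / (diam_right + diam_left)

def classify_near(ratio, lo=0.448, hi=0.488,
                  ind_lo=(0.440, 0.447), ind_hi=(0.489, 0.497)):
    """Classify a near-fixation ratio against the reported limits."""
    if lo <= ratio <= hi:
        return "normal"
    if ind_lo[0] <= ratio <= ind_lo[1] or ind_hi[0] <= ratio <= ind_hi[1]:
        return "indeterminate"
    return "abnormal"

r = cclrr(5.2, 5.1, 11.0, 11.2)  # hypothetical pixel measurements
print(round(r, 4), classify_near(r))
```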
Imaging Emission Spectra with Handheld and Cellphone Cameras
NASA Astrophysics Data System (ADS)
Sitar, David
2012-12-01
As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon point-and-shoot auto-focusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
Habib, A.; Jarvis, A.; Al-Durgham, M. M.; Lay, J.; Quackenbush, P.; Stensaas, G.; Moe, D.
2007-01-01
The mapping community is witnessing significant advances in available sensors, such as medium format digital cameras (MFDC) and Light Detection and Ranging (LiDAR) systems. In this regard, the Digital Photogrammetry Research Group (DPRG) of the Department of Geomatics Engineering at the University of Calgary has been actively involved in the development of standards and specifications for regulating the use of these sensors in mapping activities. More specifically, the DPRG has been working on developing new techniques for the calibration and stability analysis of medium format digital cameras. This research is essential since these sensors have not been developed with mapping applications in mind. Therefore, prior to their use in Geomatics activities, new standards should be developed to ensure the quality of the developed products. On another front, the persistent improvement in direct geo-referencing technology has led to an expansion in the use of LiDAR systems for the acquisition of dense and accurate surface information. However, the processing of the raw LiDAR data (e.g., ranges, mirror angles, and navigation data) remains a non-transparent process that is proprietary to the manufacturers of LiDAR systems. Therefore, the DPRG has been focusing on the development of quality control procedures to quantify the accuracy of LiDAR output in the absence of initial system measurements. This paper presents a summary of the research conducted by the DPRG together with the British Columbia Base Mapping and Geomatic Services (BMGS) and the United States Geological Survey (USGS) for the development of quality assurance and quality control procedures for emerging mapping technologies. The outcome of this research will allow for the possibility of introducing North American Standards and Specifications to regulate the use of MFDC and LiDAR systems in the mapping industry.
Integration of Geodata in Documenting Castle Ruins
NASA Astrophysics Data System (ADS)
Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.
2016-06-01
Textured three-dimensional models are currently one of the standard methods of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are the digital camera and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows modelling and visualization of 3D models of historical structures. An additional aspect of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery and presents an analysis of the accuracy of the presented methods. The paper describes results obtained from raw data consisting of a point cloud measured with a Leica ScanStation2 terrestrial laser scanner and digital imagery taken with a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image with an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. With information on the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to repeatedly click between the hairs at different points to select the scalp for focusing. A major drawback of the system was the resolution of the resulting pictures, which was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information over the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 x 10^6 or 5 x 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remote-controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 x 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.
Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz
2014-01-01
The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright-field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-Diaminobenzidine and Haematoxylin is immense and comes from various sources. One of them is an inadequate setting of the camera's white balance for the microscope's light colour temperature. Although this type of error can easily be handled during image acquisition, it can also be eliminated with colour adjustment algorithms. The examination of the dependence of colour variation on the microscope's light temperature and the camera's settings is done as introductory research for the process of automatic colour standardization. Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image by the following methods: Mean Square Error, Structural SIMilarity, and visual assessment by a viewer. For two types of backgrounds and two types of objects, the statistical image descriptors (range, median, mean and its standard deviation of chromaticity on the a and b channels of the CIELab colour space, luminance L, and local colour variability for the objects' specific area) were calculated. The results were averaged over the 6 images acquired under the same light conditions and camera settings for each sample.
The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of chromatic space; (2) the process of white balance correction for images collected with white balance camera settings not matched to the light temperature moves the image descriptors into the proper chromatic space but simultaneously changes the value of luminance. The process of image unification in the sense of colour fidelity can therefore be solved in a separate introductory stage before automatic image analysis.
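Mean Square Error is one of the comparison measures named in this abstract. A minimal sketch on flattened pixel lists; real code would use numpy and typically operate per colour channel:

```python
# Mean Square Error between a corrected image and a reference image,
# both given as flat lists of pixel values of equal length.

def mse(img_a, img_b):
    """Mean squared difference over all pixel values of two
    equally-sized images given as flat lists of numbers."""
    assert len(img_a) == len(img_b), "images must have the same size"
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

ref = [120, 130, 125, 118]
corrected = [121, 128, 126, 118]
print(mse(ref, corrected))  # 1.5 -> close to the reference
```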
Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide-semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
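The row-wise digital binning described above can be sketched in a few lines. The frame here is a plain list-of-lists stand-in for sensor data, and the function is a generic illustration rather than the authors' processing code:

```python
# Row-wise digital binning: sum each group of n adjacent rows in
# post-processing to raise signal-to-noise before further analysis.

def bin_rows(frame, n):
    """Sum each group of n adjacent rows; any remainder rows at the
    bottom of the frame are dropped."""
    binned = []
    for i in range(0, len(frame) - len(frame) % n, n):
        rows = frame[i:i + n]
        binned.append([sum(col) for col in zip(*rows)])
    return binned

frame = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(bin_rows(frame, 2))  # [[4, 6], [12, 14]]
```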
Measuring Distances Using Digital Cameras
ERIC Educational Resources Information Center
Kendal, Dave
2007-01-01
This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
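The paper's full method is not reproduced in the abstract, but for the parallel-plane case it rests on pinhole geometry: distance = focal length x real size / image size. A hedged sketch with made-up values; the parameter names are hypothetical:

```python
# Pinhole-model distance estimate for an object plane parallel to the
# image plane. All numeric values below are illustrative only.

def object_distance(focal_len_mm, real_height_mm, height_px, pixel_pitch_mm):
    """Distance to an object of known real height from its height in
    pixels, for a camera of known focal length and pixel pitch."""
    image_height_mm = height_px * pixel_pitch_mm
    return focal_len_mm * real_height_mm / image_height_mm

# 50 mm lens, 1.8 m tall object imaged 600 px tall at 0.005 mm/px:
d_mm = object_distance(50, 1800, 600, 0.005)
print(d_mm / 1000, "m")  # 30.0 m
```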
Camera! Action! Collaborate with Digital Moviemaking
ERIC Educational Resources Information Center
Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.
2007-01-01
Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…
Accurate estimation of camera shot noise in real time
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.
2017-10-01
Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology, and various other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measuring camera noise characteristics, the most widely used methods are standardized (for example, EMVA Standard 1288); these allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and video cameras based on the automatic segmentation of nonuniform targets (ASNT); only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC), and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in from a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
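As a rough illustration of two-frame temporal-noise estimation in the spirit described above (this is a generic difference-image estimator, not the authors' ASNT algorithm): subtracting two frames of the same scene cancels the fixed pattern, and the standard deviation of the difference divided by sqrt(2) estimates the per-frame temporal noise.

```python
# Two-frame temporal noise estimate: std of the frame difference
# divided by sqrt(2). Frames are flat lists of pixel values here;
# real code would operate on full sensor images.
import math

def temporal_noise(frame1, frame2):
    """Per-frame temporal noise std estimated from two frames of the
    same static scene."""
    diff = [a - b for a, b in zip(frame1, frame2)]
    mean = sum(diff) / len(diff)
    var = sum((d - mean) ** 2 for d in diff) / len(diff)
    return math.sqrt(var / 2)

f1 = [100, 102, 99, 101]
f2 = [101, 100, 100, 99]
print(temporal_noise(f1, f2))
```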
High Scalability Video ISR Exploitation
2012-10-01
Surveillance, ARGUS) on the National Image Interpretability Rating Scale (NIIRS) at level 6. Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which recognizes objects smaller than people, will be available... purchase ultra-high quality cameras like the DC-4K for use in the field. However, even if such a UAV sensor with a DC-4K was flown
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
Organize Your Digital Photos: Display Your Images Without Hogging Hard-Disk Space
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2005-01-01
According to InfoTrends/CAP Ventures, by the end of this year more than 55 percent of all U.S. households will own at least one digital camera. With so many digital cameras in use, it is important for people to understand how to organize and store digital images in ways that make them easy to find. Additionally, today's affordable, large megapixel…
NASA Technical Reports Server (NTRS)
Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.; Chen, P. C.
1988-01-01
A solid-state digital camera was developed for obtaining H alpha images of solar flares with 0.1 s time resolution. Beginning in the summer of 1988, this system will be operated in conjunction with SMM's hard X-ray burst spectrometer (HXRBS). Important electron time-of-flight effects that are crucial for determining the flare energy release processes should be detectable with these combined H alpha and hard X-ray observations. Charge-injection device (CID) cameras provide 128 x 128 pixel images simultaneously in the H alpha blue wing, line center, and red wing, or other wavelengths of interest. The data recording system employs a microprocessor-controlled electronic interface between each camera and a digital processor board that encodes the data into a serial bitstream for continuous recording by a standard video cassette recorder. Only a small fraction of the data will be permanently archived, through utilization of a direct memory access interface onto a VAX-750 computer. In addition to correlations with hard X-ray data, observations from the high-speed H alpha camera will also be correlated with optical and microwave data and with data from future MAX 1991 campaigns. Whether the recorded optical flashes are simultaneous with X-ray peaks to within 0.1 s, are delayed by tenths of seconds, or are even undetectable, the results will have implications for the validity of both thermal and nonthermal models of hard X-ray production.
Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moire, holographic, and heterodyne interferometry techniques were also looked at. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanner and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.
Color film spectral properties test experiment for target simulation
NASA Astrophysics Data System (ADS)
Liu, Xinyue; Ming, Xing; Fan, Da; Guo, Wenji
2017-04-01
In hardware-in-the-loop testing of an aviation spectral camera, the liquid-crystal light valve and digital micro-mirror device cannot simulate the spectral characteristics of a landmark. This paper provides a test-system framework based on color film for testing the spectral camera, and the spectral characteristics of the color film were tested. The results of the experiment show that differences exist between the landmark spectrum and the film spectrum curve. However, the peak of the spectrum curve changes according to the color, and the curve is similar to that of the standard color traps. So, if the error between the landmark and the film is calibrated and compensated, the film can be used in hardware-in-the-loop tests of the aviation spectral camera.
A Simple Spectrophotometer Using Common Materials and a Digital Camera
ERIC Educational Resources Information Center
Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal
2011-01-01
A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…
Imaging Emission Spectra with Handheld and Cellphone Cameras
ERIC Educational Resources Information Center
Sitar, David
2012-01-01
As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…
Validation of Smartphone Based Retinal Photography for Diabetic Retinopathy Screening
Rajalakshmi, Ramachandran; Arulmalar, Subramanian; Usha, Manoharan; Prathiba, Vijayaraghavan; Kareemuddin, Khaji Syed; Anjana, Ranjit Mohan; Mohan, Viswanathan
2015-01-01
Aim: To evaluate the sensitivity and specificity of the "fundus on phone" (FOP) camera, a smartphone-based retinal imaging system, as a screening tool for diabetic retinopathy (DR) detection and DR severity in comparison with 7-standard-field digital retinal photography. Design: Single-site, prospective, comparative, instrument validation study. Methods: 301 patients (602 eyes) with type 2 diabetes underwent standard seven-field digital fundus photography with both a Carl Zeiss fundus camera and the indigenous FOP at a tertiary care diabetes centre in South India. Grading of DR was performed by two independent retina specialists using the modified Early Treatment of Diabetic Retinopathy Study grading system. Sight-threatening DR (STDR) was defined by the presence of proliferative DR (PDR) or diabetic macular edema. Sensitivity, specificity, and image quality were assessed. Results: The mean age of the participants was 53.5 ± 9.6 years and the mean duration of diabetes 12.5 ± 7.3 years. The Zeiss camera showed that 43.9% had non-proliferative DR (NPDR) and 15.3% had PDR, while the FOP camera showed that 40.2% had NPDR and 15.3% had PDR. The sensitivity and specificity for detecting any DR by FOP were 92.7% (95% CI 87.8–96.1) and 98.4% (95% CI 94.3–99.8), respectively, with a kappa (κ) agreement of 0.90 (95% CI 0.85–0.95, p < 0.001); for STDR, the sensitivity was 87.9% (95% CI 83.2–92.9), the specificity 94.9% (95% CI 89.7–98.2), and the κ agreement 0.80 (95% CI 0.71–0.89, p < 0.001), compared with conventional photography. Conclusion: Retinal photography using the FOP camera is effective for screening and diagnosis of DR and STDR, with high sensitivity and specificity and substantial agreement with conventional retinal photography. PMID:26401839
Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras
1990-04-01
Fragment (search-result excerpt): "…poor resolution and a very limited working volume [Wan90]. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors…" Reference [Nor88]: Northern Digital, trade literature on OPTOTRAK, Northern Digital's three-dimensional optical motion tracking and analysis system.
A Picture is Worth a Thousand Words
ERIC Educational Resources Information Center
Davison, Sarah
2009-01-01
Lions, tigers, and bears, oh my! Digital cameras, young inquisitive scientists, give it a try! In this project, students create an open-ended question for investigation, capture and record their observations--data--with digital cameras, and create a digital story to share their findings. The project follows a 5E learning cycle--Engage, Explore,…
Software Graphical User Interface For Analysis Of Images
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn
1992-01-01
CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: workstation display screen, magnetic-tape drive, or hard disk. Written in C.
Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean
Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave
2009-01-01
Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green, and blue bands, quantified by the RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at the red, green, and cyan/blue wavelengths of water-leaving light. Different systems were deployed to capture upwelling light from below the surface while eliminating direct surface reflection. Relationships between the RGB ratios of water-surface images and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This paper focuses on the method used to acquire digital images, derive RGB values, and relate the measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
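At its core, the method treats the camera as a three-band radiometer: RGB values are averaged over a patch of the water-surface image and combined into band ratios, which are then related to water quality parameters. A minimal sketch with hypothetical pixel values (the actual regressions against yellow substance and chlorophyll are not reproduced here):

```python
# Band ratios from a digital image of the water surface, treating the
# camera as a three-band radiometer. Pixels are (R, G, B) tuples with
# hypothetical values; real data would come from a camera image file.

def band_ratios(pixels):
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    # Ratios of water-leaving light in the camera's bands; these are
    # the quantities regressed against water quality parameters.
    return {"G/R": g / r, "G/B": g / b, "R/B": r / b}

patch = [(40, 90, 120), (42, 88, 118), (38, 92, 122)]
print(band_ratios(patch))
```

In practice the patch would be chosen from the part of the image that sees upwelling light from below the surface, with direct surface reflection excluded as described in the abstract.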
Low Power Camera-on-a-Chip Using CMOS Active Pixel Sensor Technology
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
A second generation image sensor technology has been developed at the NASA Jet Propulsion Laboratory as a result of the continuing need to miniaturize space science imaging instruments. Implemented using standard CMOS, the active pixel sensor (APS) technology permits the integration of the detector array with on-chip timing, control and signal chain electronics, including analog-to-digital conversion.
Using DSLR cameras in digital holography
NASA Astrophysics Data System (ADS)
Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge
2017-08-01
In digital holography (DH), the size of the two-dimensional image sensor that records the digital hologram plays a key role in the performance of this imaging technique: the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easily accessible alternative that is worth exploring. DSLR cameras are a widespread, commercially available option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, the RGB pixel distribution of DSLR cameras samples information differently from the monochrome cameras usually employed in DH, and this has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object replication problem reported by different authors. Simulations of DH using monochrome and DSLR cameras are presented, and a theoretical derivation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.
Multi-band infrared camera systems
NASA Astrophysics Data System (ADS)
Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John
1994-12-01
The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.
A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares
NASA Technical Reports Server (NTRS)
Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.
1989-01-01
Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X Ray Burst Spectrometer, with x ray instruments which will be available on the Gamma Ray Observatory and eventually with the Gamma Ray Imaging Device (GRID), and with the High Resolution Gamma-Ray and Hard X Ray Spectrometer (HIREGS) which are being developed for the Max '91 program. The digital camera has recently proven to be successful as a one camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway as are analyses of the campaign data.
It's not the pixel count, you fool
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2012-01-01
The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have?", for we need as many megapixels as possible since the other guys are killing us with their "umpteen"-megapixel pocket-sized digital cameras. And so it goes, until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years has been automatic motion control to stabilize the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.
2002-06-01
Sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, and specifically of optoelectronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs in the study and development of emerging technologies.
3D photography is as accurate as digital planimetry tracing in determining burn wound area.
Stockton, K A; McMillan, C M; Storey, K J; David, M C; Kimble, R M
2015-02-01
In the paediatric population, careful attention needs to be paid to the techniques used for wound assessment, to minimise discomfort and stress to the child. The aim was to investigate whether 3D photography is a valid measure of burn wound area in children compared to the current clinical gold-standard method of digital planimetry using Visitrak™. Twenty-five children presenting to the Stuart Pegg Paediatric Burn Centre for burn dressing change following acute burn injury were included in the study. Burn wound area was measured using both digital planimetry (Visitrak™ system) and 3D camera analysis. Inter-rater reliability of the 3D camera software was determined by three investigators independently assessing the burn wound area. Agreement in wound area was assessed using intraclass correlation coefficients (ICC), which demonstrated excellent agreement: 0.994 (CI 0.986, 0.997). Inter-rater reliability measured using ICC was 0.989 (95% CI 0.979, 0.995), demonstrating excellent inter-rater reliability. Time taken to map the wound was significantly quicker using the camera at the bedside compared to Visitrak™: 14.68 (7.00) s versus 36.84 (23.51) s (p < 0.001). In contrast, analysing wound area was significantly quicker using the Visitrak™ tablet compared to Dermapix® software for the 3D images: 31.36 (19.67) s versus 179.48 (56.86) s (p < 0.001). This study demonstrates that images taken with the 3D LifeViz™ camera and assessed with Dermapix® software provide a reliable method for wound area assessment in the acute paediatric burn setting. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
NASA Astrophysics Data System (ADS)
Sensui, Takayuki
2012-10-01
Although digitalization has tripled the scale of the consumer camera market, extreme price reductions for fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market for system DSCs, i.e. digital still cameras with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization functions have been released. Quiet actuators are another indispensable component. A design with little degradation in performance due to all types of errors is preferred, for a good balance of size, lens performance, and the yield of in-specification units. The decentering sensitivity of the moving groups, such as that caused by tilt, is especially important. In addition, image stabilization mechanisms actively shift lens groups. The development of high-ratio zoom lenses with a vibration reduction mechanism is confronted by the challenge of reduced performance due to decentering, making control of the decentering sensitivity between lens groups essential. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to stand up to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve low decentering sensitivity at the same time. A 4-group zoom construction is beneficial for making lenses smaller, but its decentering sensitivity is greater; a 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.
Accurate and cost-effective MTF measurement system for lens modules of digital cameras
NASA Astrophysics Data System (ADS)
Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu
2007-01-01
For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras relative to that of conventional cameras (with photographic film). For example, the diffraction arising from the miniaturization of the optical modules tends to decrease image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is known, that of the optical module can be obtained. In this approach, a spatial light modulator (SLM) modulates the spatial frequency of light emitted from the source. The modulated light passing through the camera under test is consecutively detected by the sensors, and the corresponding images formed by the camera are acquired by a computer and processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy against various methods, such as bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
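The abstract mentions spread-function methods as one accuracy benchmark. In that family of methods, the MTF is obtained as the normalized magnitude of the Fourier transform of a measured line spread function (LSF). A toy sketch with an invented LSF (a real measurement would derive the LSF from sensor images of the target):

```python
import math

# MTF as the normalized magnitude of the discrete Fourier transform of
# a line spread function (LSF). The LSF below is an invented blur
# profile, not measured data.

def mtf(lsf):
    n = len(lsf)
    mags = []
    for k in range(n // 2):
        re = sum(lsf[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(lsf[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]  # normalize so MTF(0) = 1

lsf = [0, 1, 4, 8, 4, 1, 0, 0]  # toy blur profile
curve = mtf(lsf)
print(curve[0])  # 1.0 by construction
```

The wider the LSF (the blurrier the system), the faster this curve falls off with spatial frequency; metrics such as MTF50 read off the frequency where it drops to 0.5.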
Evaluation of Digital Camera Technology For Bridge Inspection
DOT National Transportation Integrated Search
1997-07-18
As part of a cooperative agreement between the Tennessee Department of Transportation and the Federal Highway Administration, a study was conducted to evaluate current levels of digital camera and color printing technology with regard to their applic...
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 megapixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a "pixel war" in which the driving feature of a camera was its pixel count; even moderate-cost (~120) DSCs would have 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.
Digital dental photography. Part 4: choosing a camera.
Ahmad, I
2009-06-13
With so many cameras and systems on the market, choosing the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single lens reflex (DSLR) camera is an ideal choice for dental use, enabling the taking of portraits and close-up or macro images of the dentition and study casts. However, for the sake of completeness, some other camera systems used in dentistry are also discussed.
2016-06-25
Fragment (search-result excerpt): the equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera with a 12…; images of the distortion in an optical sample were digitally captured with the IQeye 720 video camera and imported into MATLAB (Figure 8 shows the computer interface for capturing images seen by the IQeye 720 camera).
Digital Earth Watch: Investigating the World with Digital Cameras
NASA Astrophysics Data System (ADS)
Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.
2015-12-01
Every digital camera, including the smart phone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website that lets students explore light, color, and pixels, manipulate color in images, and make measurements will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org ; Picture Post: http://picturepost.unh.edu
A digital ISO expansion technique for digital cameras
NASA Astrophysics Data System (ADS)
Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong
2010-01-01
Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably, and the digital camera market has become a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that are usually introduced after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images, which are noisy but less blurred, and is designed to avoid the ghost artifacts caused by hand shake and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach has been performed on an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of the proposed approach is evaluated by comparing SNR (signal-to-noise ratio), MTF50 (modulation transfer function), color error ΔE*ab, and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
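The core idea of fusing multiple short-exposure frames can be illustrated by plain frame averaging, which reduces zero-mean noise by roughly the square root of the number of frames. This sketch assumes the frames are already registered and ignores the paper's ghost suppression, chromatic-noise removal, and two-scale decomposition steps:

```python
import random

# Fuse N short-exposure frames by averaging. Frames are flat lists of
# pixel values and are assumed already registered; the noise model
# (Gaussian, sigma = 10) is invented for illustration.

def fuse(frames):
    n_px = len(frames[0])
    return [sum(f[i] for f in frames) / len(frames) for i in range(n_px)]

random.seed(0)
truth = [100.0] * 1000                                  # noise-free scene
frames = [[v + random.gauss(0, 10) for v in truth] for _ in range(16)]

def rms_error(img):
    return (sum((p - 100.0) ** 2 for p in img) / len(img)) ** 0.5

fused = fuse(frames)
# Averaging 16 frames should cut the RMS noise by about a factor of 4.
print(rms_error(frames[0]), rms_error(fused))
```

The paper's contribution lies precisely in what this sketch leaves out: working on the raw Bayer data, rejecting ghosted pixels from motion, and shaping the two noise components separately.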
NASA Astrophysics Data System (ADS)
Hatala, J.; Sonnentag, O.; Detto, M.; Runkle, B.; Vargas, R.; Kelly, M.; Baldocchi, D. D.
2009-12-01
Ground-based, visible light imagery has been used for different purposes in agricultural and ecological research. A series of recent studies explored the utilization of networked digital cameras to continuously monitor vegetation by taking oblique canopy images at fixed view angles and time intervals. In our contribution we combine high temporal resolution digital camera imagery, eddy-covariance, and meteorological measurements with weekly field-based hyperspectral and LAI measurements to gain new insights on temporal changes in canopy structure and functioning of two managed ecosystems in California’s Sacramento-San Joaquin River Delta: a pasture infested by the invasive perennial pepperweed (Lepidium latifolium) and a rice plantation (Oryza sativa). Specific questions we address are: a) how does year-round grazing affect pepperweed canopy development, b) is it possible to identify phenological key events of managed ecosystems (pepperweed: flowering; rice: heading) from the limited spectral information of digital camera imagery, c) is a simple greenness index derived from digital camera imagery sufficient to track leaf area index and canopy development of managed ecosystems, and d) what are the scales of temporal correlation between digital camera signals and carbon and water fluxes of managed ecosystems? Preliminary results for the pasture-pepperweed ecosystem show that year-round grazing inhibits the accumulation of dead stalks causing earlier green-up and that digital camera imagery is well suited to capture the onset of flowering and the associated decrease in photosynthetic CO2 uptake. Results from our analyses are of great relevance from both a global environmental change and land management perspective.
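A "simple greenness index" of the kind evaluated above is commonly computed as the green chromatic coordinate, GCC = G / (R + G + B), averaged over a fixed region of interest in each image. A sketch with invented pixel values (whether GCC is the exact index used in this study is not stated in the abstract):

```python
# Green chromatic coordinate (GCC) over a region of interest, a common
# greenness index for networked-camera canopy monitoring. Pixel values
# below are invented for illustration.

def gcc(pixels):
    r = sum(p[0] for p in pixels)
    g = sum(p[1] for p in pixels)
    b = sum(p[2] for p in pixels)
    return g / (r + g + b)

spring = [(60, 120, 40), (55, 130, 45)]  # green canopy
autumn = [(120, 90, 40), (130, 85, 45)]  # senescent canopy
print(gcc(spring) > gcc(autumn))
```

Tracking this single number per image through a season is what allows green-up, flowering-related dips, and senescence to be compared against leaf area index and eddy-covariance flux measurements.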
Digital Morphometrics: A New Upper Airway Phenotyping Paradigm in OSA.
Schwab, Richard J; Leinwand, Sarah E; Bearn, Cary B; Maislin, Greg; Rao, Ramya Bhat; Nagaraja, Adithya; Wang, Stephen; Keenan, Brendan T
2017-08-01
OSA is associated with changes in pharyngeal anatomy. The goal of this study was to objectively and reproducibly quantify pharyngeal anatomy by using digital morphometrics based on a laser ruler and to assess differences between subjects with OSA and control subjects and associations with the apnea-hypopnea index (AHI). To the best of our knowledge, this study is the first to use digital morphometrics to quantify intraoral risk factors for OSA. Digital photographs were obtained by using an intraoral laser ruler and digital camera in 318 control subjects (mean AHI, 4.2 events/hour) and 542 subjects with OSA (mean AHI, 39.2 events/hour). The digital morphometric paradigm was validated and reproducible over time and camera distances. A larger modified Mallampati score and having a nonvisible airway were associated with a higher AHI, both unadjusted (P < .001) and controlling for age, sex, race, and BMI (P = .015 and P = .018, respectively). Measures of tongue size were larger in subjects with OSA vs control subjects in unadjusted models and controlling for age, sex, and race but nonsignificant controlling for BMI; similar results were observed with AHI severity. Multivariate regression suggests photography-based variables capture independent associations with OSA. Measures of tongue size, airway visibility, and Mallampati scores were associated with increased OSA risk and severity. This study shows that digital morphometrics is an accurate, high-throughput, and noninvasive technique to identify anatomic OSA risk factors. Morphometrics may also provide a more reproducible and standardized measurement of the Mallampati score. Digital morphometrics represent an efficient and cost-effective method of examining intraoral crowding and tongue size when examining large populations, genetics, or screening for OSA. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Bartram, Scott M.
2001-01-01
A novel multiple-camera system for the recording of digital particle image velocimetry (DPIV) images acquired in a two-dimensional separating/reattaching flow is described. The measurements were performed in the NASA Langley Subsonic Basic Research Tunnel as part of an overall series of experiments involving the simultaneous acquisition of dynamic surface pressures and off-body velocities. The DPIV system utilized two frequency-doubled Nd:YAG lasers to generate two coplanar, orthogonally polarized light sheets directed upstream along the horizontal centerline of the test model. A recording system containing two pairs of matched high resolution, 8-bit cameras was used to separate and capture images of illuminated tracer particles embedded in the flow field. Background image subtraction was used to reduce undesirable flare light emanating from the surface of the model, and custom pixel alignment algorithms were employed to provide accurate registration among the various cameras. Spatial cross correlation analysis with median filter validation was used to determine the instantaneous velocity structure in the separating/reattaching flow region illuminated by the laser light sheets. In operation the DPIV system exhibited a good ability to resolve large-scale separated flow structures with acceptable accuracy over the extended field of view of the cameras. The recording system design provided enhanced performance versus traditional DPIV systems by allowing a variety of standard and non-standard cameras to be easily incorporated into the system.
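The spatial cross-correlation step at the heart of DPIV can be illustrated in one dimension: the shift that maximizes the correlation between interrogation windows from the two exposures estimates the particle displacement, which divided by the laser pulse separation gives velocity. A toy sketch (real implementations use 2D FFT-based correlation with sub-pixel peak fitting and, as in the paper, median-filter validation):

```python
# 1D cross-correlation displacement search: the shift maximizing the
# correlation between two interrogation windows estimates how far the
# particle pattern moved between laser pulses. Data are invented.

def best_shift(win1, win2, max_shift):
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(
            win1[i] * win2[i + s]
            for i in range(len(win1))
            if 0 <= i + s < len(win2)
        )
        if score > best_score:
            best, best_score = s, score
    return best

frame_a = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0]
frame_b = [0, 0, 0, 0, 0, 5, 9, 5, 0, 0]  # same particle, shifted +3
print(best_shift(frame_a, frame_b, 4))
```

Dividing the recovered shift (in pixels, converted to physical units via the magnification) by the pulse separation yields one velocity vector; repeating over a grid of windows yields the instantaneous velocity field.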
Coincidence ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen
2014-12-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum from the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
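The real-time centroiding mentioned above reduces each ion spot to a sub-pixel position by taking the intensity-weighted mean of the pixels above a threshold. A minimal sketch on an invented 4x4 frame with a single spot (multi-spot frames would first be segmented into connected regions):

```python
# Intensity-weighted centroid of a bright spot on a camera frame after
# thresholding, giving a sub-pixel ion position. The frame is invented.

def centroid(frame, threshold):
    sx = sy = total = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > threshold:
                sx += x * v
                sy += y * v
                total += v
    return (sx / total, sy / total)

frame = [
    [0, 0, 0, 0],
    [0, 3, 6, 0],
    [0, 6, 12, 0],
    [0, 0, 3, 0],
]
print(centroid(frame, threshold=1))
```

Because only the centroids (not full frames) need to be kept per laser shot, this is the step that makes kHz-rate operation tractable.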
Sackstein, M
2006-10-01
Over the last five years digital photography has become ubiquitous. For the family photo album, a 4- or 5-megapixel camera costing about 2,000 NIS will produce satisfactory results for most people. However, for intra-oral photography the common wisdom holds that only professional photographic equipment is up to the task. Such equipment typically costs around 12,000 NIS and includes the camera body, an attachable macro lens, and a ring flash. The following article challenges this conception. Although professional equipment does produce the most exemplary results, a highly effective database of clinical pictures can be compiled even with a "non-professional" digital camera. Since 2002, my clinical work has been routinely documented with digital cameras of the Nikon Coolpix series. The advantage is that these digicams are economical in both price and size, and are easy to transport and operate compared to their expensive and bulky professional counterparts. The details of how to use a non-professional digicam to produce and maintain an effective clinical picture database, for documentation, monitoring, demonstration, and professional fulfillment, are described below.
Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A
2014-01-01
In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single-root teeth, employing a digital camera and imaging software to measure the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method yielded stable values largely unrelated to observer experience, whereas the requisite formulas proved to be camera-specific and should therefore be generated for each individual recording setting from samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance in the regression formula. For the use of this primary multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to a linear regression analysis employing the best contributor, the "arithmetic mean" of luminance, yielded a regression equation with a standard error of 6.72 years (p < 0.001). The results of this study not only support the premise of root translucency as an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age and, furthermore, facilitate measurement and age classification due to its low dependence on observer experience.
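The reduced model described above is an ordinary least-squares regression of age on the arithmetic mean of root-zone luminance. A sketch with invented data (the study's actual coefficients are camera-specific and are not reproduced here):

```python
# Ordinary least-squares fit of age against mean root-zone luminance,
# mirroring the study's reduced single-predictor model. All luminance
# and age values below are invented for illustration.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

luminance = [80, 100, 120, 140, 160]  # hypothetical mean luminance values
age = [25, 34, 46, 55, 65]            # hypothetical chronological ages

slope, intercept = fit_line(luminance, age)
estimated_age = intercept + slope * 130  # estimate for a new specimen
print(slope, intercept, estimated_age)
```

As the abstract stresses, such a fit must be recalibrated per camera and recording setup against samples of known chronological age before it can be used in casework.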
Colomb, Tristan; Dürr, Florian; Cuche, Etienne; Marquet, Pierre; Limberger, Hans G; Salathé, René-Paul; Depeursinge, Christian
2005-07-20
We present a digital holographic microscope that permits one to image polarization state. This technique results from the coupling of digital holographic microscopy and polarization digital holography. The interference between two orthogonally polarized reference waves and the wave transmitted by a microscopic sample, magnified by a microscope objective, is recorded on a CCD camera. The off-axis geometry permits one to reconstruct separately from this single hologram two wavefronts that are used to image the object-wave Jones vector. We applied this technique to image the birefringence of a bent fiber. To evaluate the precision of the phase-difference measurement, the birefringence induced by internal stress in an optical fiber is measured and compared to the birefringence profile captured by a standard method, which had been developed to obtain high-resolution birefringence profiles of optical fibers.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-02-01
A common teleradiology practice is digitizing films. Because the costs of specialized digitizers are very high, there is a trend toward using conventional scanners and digital cameras. Statistical clinical studies, which are very difficult to carry out, are required to determine the accuracy of these devices. The purpose of this study was to compare three capture devices in terms of their capacity to detect several image characteristics. Spatial resolution, contrast, gray levels, and geometric deformation were compared for a specialized ICR digitizer (US$ 15,000), a conventional UMAX scanner (US$ 1,800), and a LUMIX digital camera (US$ 450, plus about US$ 400 for the required support system and light box). Test patterns printed on films were used. All three devices detected fewer gray levels than the real values, with acceptable contrast and low geometric deformation. All three devices are appropriate solutions, but the digital camera requires more operator training and more adjustment of settings.
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
NDVI derived from IR-enabled digital cameras: applicability across different plant functional types
NASA Astrophysics Data System (ADS)
Filippa, Gianluca; Cremonese, Edoardo; Galvagno, Marta; Migliavacca, Mirco; Sonnentag, Oliver; Hufkens, Koen; Ryu, Youngryel; Humphreys, Elyn; Morra di Cella, Umberto; Richardson, Andrew D.
2017-04-01
Phenological time series based on radiometric measurements are now being constructed at spatial and temporal scales ranging from weekly satellite observations to sub-hourly in situ measurements by means of, e.g., radiometers or digital cameras. In situ measurements are needed to provide high-frequency validation data for satellite-derived vegetation indices. In this study we used a recently developed method to calculate NDVI from NIR-enabled digital cameras (NDVIC) at 17 sites encompassing 6 plant functional types and totaling 74 site-years of data from the PHENOCAM network. The seasonality of NDVIC was comparable to NDVI measured both by ground light emitting diode (LED) sensors and by MODIS, whereas site-specific scaling factors are required to compare absolute NDVIC values to standard NDVI measurements. We also compared the green chromatic coordinate (GCC) extracted from RGB-only images to NDVIC and found that the two exhibit slightly different dynamics depending on the plant functional type. During senescence, NDVIC lags behind GCC in deciduous broadleaf forests and grasslands, suggesting that GCC is more sensitive to leaf discoloration, whereas NDVIC responds to the biomass reduction resulting from leaf abscission and to the green-to-dry biomass ratio of the canopy. In evergreen forests, NDVIC peaks later than GCC in spring, likely tracking shoot elongation and new needle formation. Our findings therefore suggest that NDVIC and GCC can complement each other in describing ecosystem phenology.
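The indices compared above reduce to simple ratios of digital numbers. A minimal sketch with toy pixel values (illustrative only; the published camera-NDVI method additionally normalizes each channel by exposure time, which is omitted here):

```python
def gcc(r, g, b):
    """Green chromatic coordinate from RGB digital numbers."""
    return g / (r + g + b)

def camera_ndvi(nir, r):
    """Simplified camera NDVI from NIR and red digital numbers.
    The published method also normalizes channels by exposure
    before forming the ratio; that step is omitted here."""
    return (nir - r) / (nir + r)

# Toy digital numbers for a vegetated pixel (illustrative only).
r, g, b, nir = 60.0, 110.0, 50.0, 180.0
print(round(gcc(r, g, b), 3))         # 0.5
print(round(camera_ndvi(nir, r), 3))  # 0.5
```

Because the two indices weight the channels differently, they respond differently to discoloration versus biomass loss, which is the contrast the study exploits.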
Uav Borne Low Altitude Photogrammetry System
NASA Astrophysics Data System (ADS)
Lin, Z.; Su, G.; Xie, F.
2012-07-01
In this paper, the three major aspects of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry, i.e., the flying platform, the imaging sensor system, and the data processing software, are discussed. First, according to the technical requirements for minimum cruising speed, shortest taxiing distance, level of flight control, and performance in turbulent flight, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters, and unmanned airships) are compared and analyzed. Second, considering the restrictions on platform payload weight and sensor resolution, together with the exposure equation and the theory of optical information, particular emphasis is placed on the principles of designing self-calibrating and self-stabilizing combined wide-angle digital cameras (e.g., double-combined and four-combined cameras). Finally, a software package named MAP-AT, designed around the specific characteristics of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts particular effort into automatic extraction, automatic checking, and operator-assisted addition of tie points for images with large tilt angles. Based on the process for low-altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished, and the accuracies of their aerial triangulation, digital orthophotos (DOM), and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000, and 1:500 mapping.
Mars Cameras Make Panoramic Photography a Snap
NASA Technical Reports Server (NTRS)
2008-01-01
If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4 megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.
ERIC Educational Resources Information Center
Rowe, Deborah Wells; Miller, Mary E.
2016-01-01
This paper reports the findings of a two-year design study exploring instructional conditions supporting emerging, bilingual/biliterate, four-year-olds' digital composing. With adult support, children used child-friendly, digital cameras and iPads equipped with writing, drawing and bookmaking apps to compose multimodal, multilingual eBooks…
ERIC Educational Resources Information Center
Hoge, Robert Joaquin
2010-01-01
Within the sphere of education, navigating throughout a digital world has become a matter of necessity for the developing professional, as with the advent of Document Camera Technology (DCT). This study explores the pedagogical implications of implementing DCT; to see if there is a relationship between teachers' comfort with DCT and to the…
Digital Diversity: A Basic Tool with Lots of Uses
ERIC Educational Resources Information Center
Coy, Mary
2006-01-01
In this article the author relates how the digital camera has altered the way she teaches and the way her students learn. She also emphasizes the importance for teachers to have software that can edit, print, and incorporate photos. She cites several instances in which a digital camera can be used: (1) PowerPoint presentations; (2) Open house; (3)…
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify the statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding step, are used directly as features for classification. Multi-class support vector machines (SVMs) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
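The feature construction described (difference arrays, thresholding, empirical Markov transition probabilities) can be illustrated for one of the four directions. This is a sketch with numpy on a random array; the threshold value and the JPEG-array details of the actual paper may differ:

```python
import numpy as np

def transition_matrix(arr, T=3):
    """Empirical horizontal Markov transition probabilities of a
    thresholded difference array (one of the four directions used
    in the paper); difference values are clipped to [-T, T]."""
    d = np.clip(arr[:, :-1] - arr[:, 1:], -T, T)
    size = 2 * T + 1
    counts = np.zeros((size, size))
    src, dst = d[:, :-1] + T, d[:, 1:] + T   # shift indices to 0..2T
    np.add.at(counts, (src, dst), 1)
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)

rng = np.random.default_rng(1)
arr = rng.integers(0, 256, (64, 64))        # stand-in for a JPEG 2-D array
P = transition_matrix(arr)                  # (2T+1)^2 = 49 features
```

Flattening `P` for each direction and component yields the feature vector fed to the multi-class SVM.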
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we address the limited dynamic range of the conventional digital camera and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system, a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of the incident light can be flexibly modulated in our DMD camera, enabling camera pixels to maintain reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light-intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement an optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light-intensity control algorithm that effectively modulates the light intensity to recover high dynamic range images. Experiments on different objects demonstrate the effectiveness of our method.
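The core idea of per-pixel coded exposure can be shown schematically: if each pixel's DMD modulation weight is known, scene radiance is recovered by dividing the unsaturated measurement by its weight. This is a simplified model of the principle, not the paper's exact pipeline:

```python
import numpy as np

def recover_radiance(measured, weights, full_scale=255.0):
    """Invert per-pixel coded exposure: estimate scene radiance by
    dividing each measurement by its DMD exposure weight; saturated
    pixels are flagged invalid. Schematic model only."""
    valid = measured < full_scale
    radiance = np.where(valid, measured / weights, np.nan)
    return radiance, valid

scene = np.array([10.0, 200.0, 2000.0])   # true radiance (arbitrary units)
weights = np.array([1.0, 1.0, 0.1])       # bright pixel attenuated by the DMD
measured = np.minimum(scene * weights, 255.0)
radiance, valid = recover_radiance(measured, weights)
```

Without the attenuation weight of 0.1, the brightest pixel would clip at 255 and its radiance would be unrecoverable; the adaptive control algorithm's job is to choose such weights on the fly.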
Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.
2000-01-01
This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.
Acquisition of gamma camera and physiological data by computer.
Hack, S N; Chang, M; Line, B R; Cooper, J A; Robeson, G H
1986-11-01
We have designed, implemented, and tested a new Research Data Acquisition System (RDAS) that permits a general purpose digital computer to acquire signals from both gamma camera sources and physiological signal sources concurrently. This system overcomes the limited multi-source, high speed data acquisition capabilities found in most clinically oriented nuclear medicine computers. The RDAS can simultaneously input signals from up to four gamma camera sources with a throughput of 200 kHz per source and from up to eight physiological signal sources with an aggregate throughput of 50 kHz. Rigorous testing has found the RDAS to exhibit acceptable linearity and timing characteristics. In addition, flood images obtained by this system were compared with flood images acquired by a commercial nuclear medicine computer system. National Electrical Manufacturers Association performance standards of the flood images were found to be comparable.
A digital system for surface reconstruction
Zhou, Weiyang; Brock, Robert H.; Hopkins, Paul F.
1996-01-01
A digital photogrammetric system, STEREO, was developed to determine three dimensional coordinates of points of interest (POIs) defined with a grid on a textureless and smooth-surfaced specimen. Two CCD cameras were set up with unknown orientation and recorded digital images of a reference model and a specimen. Points on the model were selected as control or check points for calibrating or assessing the system. A new algorithm for edge-detection called local maximum convolution (LMC) helped extract the POIs from the stereo image pairs. The system then matched the extracted POIs and used a least squares “bundle” adjustment procedure to solve for the camera orientation parameters and the coordinates of the POIs. An experiment with STEREO found that the standard deviation of the residuals at the check points was approximately 24%, 49% and 56% of the pixel size in the X, Y and Z directions, respectively. The average of the absolute values of the residuals at the check points was approximately 19%, 36% and 49% of the pixel size in the X, Y and Z directions, respectively. With the graphical user interface, STEREO demonstrated a high degree of automation and its operation does not require special knowledge of photogrammetry, computers or image processing.
Riccardi, M; Mele, G; Pulvento, C; Lavini, A; d'Andria, R; Jacobsen, S-E
2014-06-01
Leaf chlorophyll content provides valuable information about the physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination, but this measurement is expensive, laborious, and time consuming. Over the years, rapid and non-destructive alternatives have been explored. The aim of this work was to evaluate the applicability of a fast, non-invasive field method for estimating chlorophyll content in quinoa and amaranth leaves based on RGB component analysis of digital images acquired with a standard SLR camera. Digital images of leaves from different genotypes of quinoa and amaranth were acquired directly in the field. Mean values of each RGB component were extracted via image analysis software and correlated with leaf chlorophyll determined by the standard laboratory procedure. Single and multiple regression models using the RGB color components as independent variables were tested and validated. The performance of the proposed method was compared to that of the widely used non-destructive SPAD method. Sensitivity of the best regression models for different genotypes of quinoa and amaranth was also checked. Color data acquisition of the leaves in the field with a digital camera was quicker, more effective, and lower cost than SPAD. The proposed RGB models provided better correlation (highest R²) and prediction (lowest RMSEP) of the true foliar chlorophyll content, with less noise across the whole range of chlorophyll studied, compared with SPAD and other leaf-image-based models when applied to quinoa and amaranth.
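The two evaluation metrics named above, R² and RMSEP, are straightforward to compute; a sketch on hypothetical validation data (not values from the study):

```python
import numpy as np

def r_squared(y, yhat):
    """Coefficient of determination of predictions yhat against y."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

def rmsep(y, yhat):
    """Root mean square error of prediction."""
    return np.sqrt(np.mean((y - yhat) ** 2))

# Hypothetical validation set: lab chlorophyll vs. RGB-model predictions.
y = np.array([1.2, 1.8, 2.5, 3.1, 3.9])
yhat = np.array([1.3, 1.7, 2.4, 3.3, 3.8])
```

A model is preferred over SPAD here when its R² is higher and its RMSEP lower on the same validation samples.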
Integrating TV/digital data spectrograph system
NASA Technical Reports Server (NTRS)
Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.
1975-01-01
A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.
Printed products for digital cameras and mobile devices
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Schmidt-Sacht, Wulf
2005-01-01
Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.
Modeling of digital information optical encryption system with spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.
2015-10-01
State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should make it possible to build a high-speed optical encryption system. Results of modeling a digital-information optical encryption system with spatially incoherent illumination are presented. The input information is displayed on the first SLM and the encryption element on the second SLM. Factors taken into account are the resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Numerical simulation demonstrates high speed (several gigabytes per second), a low bit error rate, and high cryptographic strength.
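With spatially incoherent illumination, encryption acts on intensities as a convolution of the input image with the key element's point-spread function, and decryption is an inverse-filtering step. A minimal numerical sketch (circular convolution via FFT and a regularized inverse; the actual system performs this optically with the SLMs):

```python
import numpy as np

def encrypt(img, key_psf):
    """Incoherent encryption modeled as circular convolution of the
    input intensity image with the key's point-spread function."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(key_psf)))

def decrypt(enc, key_psf, eps=1e-9):
    """Regularized inverse filtering with the known key."""
    K = np.fft.fft2(key_psf)
    return np.real(np.fft.ifft2(np.fft.fft2(enc) * np.conj(K)
                                / (np.abs(K) ** 2 + eps)))

rng = np.random.default_rng(2)
img = rng.random((32, 32))    # stand-in for the displayed input data
key = rng.random((32, 32))    # stand-in for the encryption PSF
restored = decrypt(encrypt(img, key), key)
```

In the physical system, the reconstruction noise and camera noise listed in the abstract limit how closely `restored` matches the input.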
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems provide digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g., traffic webcams for phenological studies, or camera networks monitoring ski tracks and avalanches over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy-to-use software that can obtain and handle images from different networks with different protocols and standards is necessary. Numerous software packages for analyzing images from webcam networks are freely available. These packages have different strengths, not only for analyzing but also for post-processing digital images, but for ease of use, applicability, and scalability a different set of features could be added. Thus, a more customized approach would be of high value, not only for analyzing images from comprehensive camera networks, but also for creating operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, the Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), which follows such a customized approach. FMIPROT currently has the following features: • straightforward installation, • no software dependencies that require extra installations, • communication with multiple camera networks, • automatic downloading and handling of images, • a user-friendly and simple user interface, • data filtering, • visualization of results on customizable plots, • plugins that allow users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". The color fraction extraction analysis calculates the fractions of red, green, and blue in a region of interest, along with brightness and luminance parameters.
The vegetation indices analysis is a collection of indices used in vegetation phenology and includes the "Green Fraction" (green chromatic coordinate), the "Green-Red Vegetation Index", and the "Green Excess Index". A "Snow Cover Fraction" analysis, which detects snow-covered pixels in the images and georeferences them on a geospatial plane to calculate the snow cover fraction, is currently being implemented. FMIPROT is being developed during the EU Life+ MONIMET project, for which we mounted 28 cameras at 14 different sites in Finland as the MONIMET camera network. In this paper, we present details of FMIPROT and analysis results from the MONIMET camera network, and discuss planned future developments of FMIPROT.
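The "Color Fraction Extraction" analysis described above amounts to chromatic-coordinate statistics over a region of interest. A sketch of one plausible formulation (the Rec. 601 luminance weights are an assumption here, not necessarily FMIPROT's exact definition):

```python
import numpy as np

def color_fractions(roi):
    """Per-ROI color statistics in the spirit of color fraction
    extraction: mean chromatic fractions of the red, green, and blue
    channels, plus brightness and a Rec. 601 luminance estimate.
    `roi` is an (H, W, 3) float array of digital numbers."""
    total = roi.sum(axis=2)
    total[total == 0] = 1.0            # avoid division by zero
    frac = roi / total[..., None]      # per-pixel chromatic coordinates
    r, g, b = (roi[..., i].mean() for i in range(3))
    return {
        "red": frac[..., 0].mean(),
        "green": frac[..., 1].mean(),
        "blue": frac[..., 2].mean(),
        "brightness": (r + g + b) / 3.0,
        "luminance": 0.299 * r + 0.587 * g + 0.114 * b,
    }

roi = np.zeros((4, 4, 3))
roi[..., 1] = 100.0                    # a purely green patch
stats = color_fractions(roi)
```

The "Green Fraction" vegetation index is exactly the `green` entry of this dictionary, so the two analyses share most of their computation.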
Toward a digital camera to rival the human eye
NASA Astrophysics Data System (ADS)
Skorka, Orit; Joseph, Dileepan
2011-07-01
All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
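The figure of merit defined above, the performance gap of the weakest parameter measured in orders of magnitude, can be expressed in a few lines. The parameter values below are illustrative placeholders, not the paper's measurements, and each value is oriented so that larger means better for the camera:

```python
import math

def performance_gap(camera, eye):
    """Orders-of-magnitude gap per parameter between the human eye
    and a digital camera; the figure of merit is the gap of the
    weakest (largest-gap) parameter. Values are illustrative."""
    gaps = {k: math.log10(eye[k] / camera[k]) for k in eye}
    worst = max(gaps, key=gaps.get)
    return gaps, worst

eye = {"dynamic_range": 1e8, "dark_limit": 1e6, "spatial_res": 1e7}
camera = {"dynamic_range": 1e4, "dark_limit": 1e3, "spatial_res": 1e7}
gaps, worst = performance_gap(camera, eye)
```

With these placeholder numbers the figure of merit is a 4-order-of-magnitude gap in dynamic range, consistent with the paper's finding that dynamic range and dark limit are the most limiting factors.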
Standardized access, display, and retrieval of medical video
NASA Astrophysics Data System (ADS)
Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.
1999-05-01
The system presented here enhances documentation and data-secured second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming to DICOM are integrated to enable secure Internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for a second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open-surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery; therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.
History and use of remote sensing for conservation and management of federal lands in Alaska, USA
Markon, Carl
1995-01-01
Remote sensing has been used to aid land use planning efforts for federal public lands in Alaska since the 1940s. Four federal land management agencies - the U.S. Fish and Wildlife Service, U.S. Bureau of Land Management, U.S. National Park Service, and U.S. Forest Service - have used aerial photography and satellite imagery to document the extent, type, and condition of Alaska's natural resources. Aerial photographs have been used to collect detailed information over small to medium-sized areas. This standard management tool is obtainable using equipment ranging from hand-held 35-mm cameras to precision metric mapping cameras. Satellite data, equally important, provide synoptic views of landscapes, are digitally manipulable, and are easily merged with other digital databases. To date, over 109.2 million ha (72%) of Alaska's land cover have been mapped via remote sensing. This information has provided a base for conservation, management, and planning on federal public lands in Alaska.
Low-cost mobile phone microscopy with a reversed mobile phone camera lens.
Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A
2014-01-01
The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.
ERIC Educational Resources Information Center
Northcote, Maria
2011-01-01
Digital cameras are now commonplace in many classrooms and in the lives of many children in early childhood centres and primary schools. They are regularly used by adults and teachers for "saving special moments and documenting experiences." The use of previously expensive photographic and recording equipment has often remained in the domain of…
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
High-speed line-scan camera with digital time delay integration
NASA Astrophysics Data System (ADS)
Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip - according to the objects' movement - results in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects for practical application are discussed, and key features of the camera are listed.
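The digital TDI scheme, reading a few rows per frame and shifting and accumulating them with the object's motion, can be sketched for a single column with a one-row-per-frame shift; the FPGA implementation in the paper is of course more elaborate:

```python
def digital_tdi(frames, n_stages):
    """Shift-and-accumulate: with a one-line-per-frame motion, scene
    line s is seen by stage row r in frame s - r, so its signal is
    summed across all stages. Schematic single-column version."""
    out = []
    for s in range(n_stages - 1, len(frames)):
        out.append(sum(frames[s - r][r] for r in range(n_stages)))
    return out

# Object moves one line per frame: frame i images scene lines i..i+2.
scene = [0.0, 1.0, 4.0, 2.0, 3.0, 0.0]
n_stages = 3
frames = [scene[i:i + n_stages] for i in range(len(scene) - n_stages + 1)]
out = digital_tdi(frames, n_stages)
```

Each output line accumulates the same scene line `n_stages` times, tripling the effective exposure here without motion blur, which is exactly the benefit the abstract describes.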
Commercially available high-speed system for recording and monitoring vocal fold vibrations.
Sekimoto, Sotaro; Tsunoda, Koichi; Kaga, Kimitaka; Makiyama, Kiyoshi; Tsunoda, Atsunobu; Kondo, Kenji; Yamasoba, Tatsuya
2009-12-01
We have developed a special purpose adaptor making it possible to use a commercially available high-speed camera to observe vocal fold vibrations during phonation. The camera can capture dynamic digital images at speeds of 600 or 1200 frames per second. The adaptor is equipped with a universal-type attachment and can be used with most endoscopes sold by various manufacturers. Satisfactory images can be obtained with a rigid laryngoscope even with the standard light source. The total weight of the adaptor and camera (including battery) is only 1010 g. The new system comprising the high-speed camera and the new adaptor can be purchased for about $3000 (US), while the least expensive stroboscope costs about 10 times that price, and a high-performance high-speed imaging system may cost 100 times as much. Therefore the system is both cost-effective and useful in the outpatient clinic or casualty setting, on house calls, and for the purpose of student or patient education.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility in various fields, such as telerobotics and applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source, such as a laser, mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll, pitch, and yaw axes of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image, which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine the range to the target.
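The "eliminate common pixels" step amounts to frame differencing. A minimal sketch (the threshold and the centroid step are illustrative assumptions, not the patent's exact processing):

```python
import numpy as np

def laser_spot_disparity(background, illuminated, ref_point, thresh=30):
    """Difference the pre-illumination image against the illuminated one,
    keep only the pixels the laser added, centroid them, and return the
    pixel offset (disparity) from the reference point used for ranging."""
    diff = illuminated.astype(np.int16) - background.astype(np.int16)
    ys, xs = np.nonzero(diff > thresh)   # pixels brightened by the laser
    if xs.size == 0:
        return None                       # no laser spot detected
    return (ys.mean() - ref_point[0], xs.mean() - ref_point[1])
```

The returned disparity, together with the known vertical and horizontal offset between laser and camera, feeds the triangulation that yields range.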
Camera calibration: active versus passive targets
NASA Astrophysics Data System (ADS)
Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli
2011-11-01
Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
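A standard N-step phase-shifting decode (one common scheme; the paper's exact coding may differ) recovers a per-pixel phase that localizes display points with sub-pixel accuracy and is largely insensitive to defocus:

```python
import numpy as np

def decode_phase(images):
    """Per-pixel phase from N patterns I_k = A + B*cos(2*pi*k/N - phi):
    phi = atan2(sum_k I_k*sin(2*pi*k/N), sum_k I_k*cos(2*pi*k/N)).
    The DC term A and defocus-induced blur cancel out of the estimate."""
    n = len(images)
    s = sum(img * np.sin(2 * np.pi * k / n) for k, img in enumerate(images))
    c = sum(img * np.cos(2 * np.pi * k / n) for k, img in enumerate(images))
    return np.arctan2(s, c)
```

Decoding horizontal and vertical pattern sequences gives a dense display-coordinate pair per camera pixel, replacing the sparse corner detections of a checkerboard.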
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub (trademark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
Meteor Film Recording with Digital Film Cameras with large CMOS Sensors
NASA Astrophysics Data System (ADS)
Slansky, P. C.
2016-12-01
In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their cost of up to 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras that are very interesting for meteor observation have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.
Forensics for flatbed scanners
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Franz, Elke; Winkler, Antje
2007-02-01
Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, utilizing image sensor noise to identify the origin of scanned images seems possible. To characterize flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners that we expect to influence the identification. This was confirmed by extensive tests: identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and downscaling, as examples of such particularities of flatbed scanners, on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
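The sensor-noise identification the authors build on can be sketched as follows. A crude wrap-around box filter stands in here for the proper denoising filter used in the literature; the normalized correlation is the usual similarity measure:

```python
import numpy as np

def denoise(img):
    """Crude 3x3 box filter (wrap-around at borders), a stand-in for the
    denoising filter used in sensor-noise fingerprinting."""
    return sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def reference_pattern(images):
    """Average the noise residuals of many images from one device; scene
    content averages out while the fixed spatial noise pattern remains."""
    return np.mean([img - denoise(img) for img in images], axis=0)

def correlation(a, b):
    """Normalized correlation used to test whether a query image's
    residual matches a device's reference pattern."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A query image is attributed to the device whose reference pattern correlates most strongly with the query's own noise residual.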
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using enough cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. In the newly developed system, cameras are located along a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly generated by the cameras. By using many small, low-cost cameras with overlapping fields of view, this approach offers greater flexibility than conventional systems without compromising performance. Coverage is significantly increased without ignoring surveillance areas, as can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable carries both power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K
2017-02-01
We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.
Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber
NASA Technical Reports Server (NTRS)
Bales, John W.
1996-01-01
The F64 frame grabber is a high-performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and has a simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.
Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement for the un-intensified camera systems, which had a low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
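Row-wise digital binning as described can be sketched as a post-processing sum over groups of adjacent rows; the bin size of 8 matches the tagged-line thickness quoted above:

```python
import numpy as np

def bin_rows(image, factor=8):
    """Sum each group of `factor` adjacent rows. Signal adds linearly while
    uncorrelated noise adds in quadrature, so the signal-to-noise ratio
    improves at the cost of resolution along the binned axis."""
    h, w = image.shape
    h_trim = (h // factor) * factor      # drop rows that don't fill a bin
    return image[:h_trim].reshape(-1, factor, w).sum(axis=1)
```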
Linear array of photodiodes to track a human speaker for video recording
NASA Astrophysics Data System (ADS)
DeTone, D.; Neal, H.; Lougheed, R.
2012-12-01
Communication and collaboration using stored digital media has garnered more interest from many areas of business, government and education in recent years, due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This motivates a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz with a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array over a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
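The 70 Hz on/off flashing enables simple synchronous demodulation. A sketch, assuming the readout frames have already been aligned to the flash period (the real system would establish that alignment from the signal itself):

```python
import numpy as np

def led_pixel(samples, period):
    """Find the LED's pixel on a linear photodiode array.

    samples: (n_frames, n_pixels) readings at the array's high frame rate,
    aligned so each block of `period` frames starts with the LED on (50%
    duty cycle). Subtracting the mean 'off' half-cycle from the mean 'on'
    half-cycle cancels constant infrared sources such as sunlight and
    room lighting, leaving only the flashing LED.
    """
    half = period // 2
    n = (samples.shape[0] // period) * period
    blocks = samples[:n].reshape(-1, period, samples.shape[1])
    on = blocks[:, :half].mean(axis=(0, 1))
    off = blocks[:, half:].mean(axis=(0, 1))
    return int(np.argmax(on - off))
```

This is exactly what a slower 60 Hz video camera cannot do: its frame rate is below the flash rate, so it cannot separate the on and off half-cycles.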
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high-frame-rate visible CCD camera capable of operation at up to 200 frames per second is described. The camera produces a 256 × 256 pixel image by using one quadrant of a 512 × 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct 256 × 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
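The digital reformatting step can be pictured as reassembling the four port streams into one image. The stripe-per-port layout here is an assumption for illustration; the real port-to-pixel mapping depends on the imager's readout architecture:

```python
import numpy as np

def reformat(ports):
    """Reassemble four per-port readouts, each assumed to be a 256 x 64
    column stripe, into one contiguous 256 x 256 image."""
    assert len(ports) == 4 and all(p.shape == (256, 64) for p in ports)
    return np.hstack(ports)
```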
An assessment of the utility of a non-metric digital camera for measuring standing trees
Neil Clark; Randolph H. Wynne; Daniel L. Schmoldt; Matthew F. Winn
2000-01-01
Images acquired with a commercially available digital camera were used to make measurements on 20 red oak (Quercus spp.) stems. The ranges of diameter at breast height (DBH) and height to a 10 cm upper-stem diameter were 16-66 cm and 12-20 m, respectively. Camera stations located 3, 6, 9, 12, and 15 m from the stem were studied to determine the best distance to be...
Thin film transistors on plastic substrates with reflective coatings for radiation protection
Wolfe, Jesse D.; Theiss, Steven D.; Carey, Paul G.; Smith, Patrick M.; Wickboldt, Paul
2003-11-04
Fabrication of silicon thin film transistors (TFT) on low-temperature plastic substrates using a reflective coating so that inexpensive plastic substrates may be used in place of standard glass, quartz, and silicon wafer-based substrates. The TFT can be used in large area low cost electronics, such as flat panel displays and portable electronics such as video cameras, personal digital assistants, and cell phones.
Thin film transistors on plastic substrates with reflective coatings for radiation protection
Wolfe, Jesse D [Fairfield, CA]; Theiss, Steven D [Woodbury, MN]; Carey, Paul G [Mountain View, CA]; Smith, Patrick M [San Ramon, CA]; Wickboldt, Paul [Walnut Creek, CA]
2006-09-26
Fabrication of silicon thin film transistors (TFT) on low-temperature plastic substrates using a reflective coating so that inexpensive plastic substrates may be used in place of standard glass, quartz, and silicon wafer-based substrates. The TFT can be used in large area low cost electronics, such as flat panel displays and portable electronics such as video cameras, personal digital assistants, and cell phones.
ERIC Educational Resources Information Center
Ochsner, Karl
2010-01-01
Students are moving away from content consumption to content production. Short movies are uploaded onto video social networking sites and shared around the world. Unfortunately they usually contain little to no educational value, lack a narrative and are rarely created in the science classroom. According to new Arizona Technology standards and…
Color reproduction software for a digital still camera
NASA Astrophysics Data System (ADS)
Lee, Bong S.; Park, Du-Sik; Nam, Byung D.
1998-04-01
We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased after the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3 by 3 or 3 by 4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of the test color samples. The various reproduced images, generated according to four illuminations for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
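The 3 by 3 color-transformation fit described can be sketched as a least-squares regression from gamma-corrected camera values to measured tristimulus values. This is a generic formulation, not the paper's exact procedure; the 3 by 4 variant would add a constant offset column:

```python
import numpy as np

def fit_color_matrix(camera_rgb, measured_xyz):
    """Solve for M (3x3) minimizing ||camera_rgb @ M.T - measured_xyz||
    over the test color samples; rows are per-sample triplets."""
    X, *_ = np.linalg.lstsq(camera_rgb, measured_xyz, rcond=None)
    return X.T

def apply_color_matrix(M, rgb):
    """Transform gamma-corrected camera RGB rows to estimated XYZ."""
    return rgb @ M.T
```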
Coincidence ion imaging with a fast frame camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei
2014-12-15
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
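The real-time centroiding step can be sketched as follows. This is a plain connected-components pass; the actual pipeline running at a 1 kHz repetition rate is necessarily more optimized:

```python
import numpy as np

def centroid_spots(frame, thresh=50):
    """Label 8-connected components of above-threshold pixels and return
    (y, x, summed intensity) per spot; the summed intensity is what gets
    matched against PMT peak heights to assign arrival times."""
    mask = frame > thresh
    seen = np.zeros_like(mask)
    h, w = frame.shape
    spots = []
    for y0 in range(h):
        for x0 in range(w):
            if not mask[y0, x0] or seen[y0, x0]:
                continue
            stack, pixels = [(y0, x0)], []
            seen[y0, x0] = True
            while stack:                      # flood-fill one ion spot
                y, x = stack.pop()
                pixels.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
            ys = np.array([p[0] for p in pixels], dtype=float)
            xs = np.array([p[1] for p in pixels], dtype=float)
            wts = frame[ys.astype(int), xs.astype(int)].astype(float)
            spots.append(((ys * wts).sum() / wts.sum(),
                          (xs * wts).sum() / wts.sum(),
                          wts.sum()))
    return spots
```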
Digital Camera Project Fosters Communication Skills
ERIC Educational Resources Information Center
Fisher, Ashley; Lazaros, Edward J.
2009-01-01
This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…
Marcus, Inna; Tung, Irene T; Dosunmu, Eniolami O; Thiamthat, Warakorn; Freedman, Sharon F
2013-12-01
To compare anterior segment findings identified in young children using digital photographic images from the Lytro light field camera to those observed clinically. This was a prospective study of children <9 years of age with an anterior segment abnormality. Clinically observed anterior segment examination findings for each child were recorded, and several digital images of the anterior segment of each eye were captured with the Lytro camera. The images were later reviewed by a masked examiner. Sensitivity of Lytro imaging for abnormal examination findings was calculated using the clinical examination as the gold standard. A total of 157 eyes of 80 children (mean age, 4.4 years; range, 0.1-8.9) were included. Clinical examination revealed 206 anterior segment abnormalities altogether: lids/lashes (n = 21 eyes), conjunctiva/sclera (n = 28 eyes), cornea (n = 71 eyes), anterior chamber (n = 14 eyes), iris (n = 43 eyes), and lens (n = 29 eyes). Review of Lytro photographs of eyes with clinically diagnosed anterior segment abnormality correctly identified 133 of 206 (65%) of all abnormalities. Additionally, 185 abnormalities in 50 children were documented at examination under anesthesia. The Lytro camera was able to document most abnormal anterior segment findings in unsedated young children. Its unique ability to allow focus change after image capture is a significant improvement on prior technology.
Pancam: A Multispectral Imaging Investigation on the NASA 2003 Mars Exploration Rover Mission
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Dingizian, A.; Brown, D.; Morris, R. V.; Arneson, H. M.; Johnson, M. J.
2003-01-01
One of the six science payload elements carried on each of the NASA Mars Exploration Rovers (MER; Figure 1) is the Panoramic Camera System, or Pancam. Pancam consists of three major components: a pair of digital CCD cameras, the Pancam Mast Assembly (PMA), and a radiometric calibration target. The PMA provides the azimuth and elevation actuation for the cameras as well as a 1.5-meter-high vantage point from which to image. The calibration target provides a set of reference color and grayscale standards for calibration validation, and a shadow post for quantification of the direct vs. diffuse illumination of the scene. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover in up to 12 unique wavelengths. The major characteristics of Pancam are summarized.
Kottner, Sören; Ebert, Lars C; Ampanozi, Garyfalia; Braun, Marcel; Thali, Michael J; Gascho, Dominic
2017-03-01
Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables and examination couches. The VirtoScan is based on a moveable frame that carries seven digital single-lens reflex cameras. A remote control attached to each camera allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedures took 3:34 min for the 3D surface documentation of one side of the body, compared to 20:20 min of acquisition time when using our in-house standard. A surface model comparison between the high-resolution output of our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentation in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...
Digital Video Cameras for Brainstorming and Outlining: The Process and Potential
ERIC Educational Resources Information Center
Unger, John A.; Scullion, Vicki A.
2013-01-01
This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…
PhenoCam Dataset v1.0: Vegetation Phenology from Digital Camera Imagery, 2000-2015
USDA-ARS?s Scientific Manuscript database
This data set provides a time series of vegetation phenological observations for 133 sites across diverse ecosystems of North America and Europe from 2000-2015. The phenology data were derived from conventional visible-wavelength automated digital camera imagery collected through the PhenoCam Networ...
Cloud Height Estimation with a Single Digital Camera and Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Carretas, Filipe; Janeiro, Fernando M.
2014-05-01
Clouds influence the local weather, the global climate and are an important parameter in weather prediction models. Clouds are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as at most small aerodromes where it is not economically viable to install instruments for assisted flying. Therefore it is important to develop low-cost and robust systems that can be easily deployed in the field, enabling large-scale acquisition of cloud parameters. Recently, the authors developed a low-cost system for the measurement of cloud base height using stereo vision and digital photography. However, due to the stereo nature of the system, some challenges were presented. In particular, the relative camera orientation requires calibration, and the two cameras need to be synchronized so that the photos from both cameras are acquired simultaneously. In this work we present a new system that estimates the cloud height between 1000 and 5000 meters. This prototype is composed of one digital camera controlled by a Raspberry Pi and is installed at Centro de Geofísica de Évora (CGE) in Évora, Portugal. The camera is periodically triggered to acquire images of the overhead sky, and the photos are downloaded to the Raspberry Pi, which forwards them to a central computer that processes the images and estimates the cloud height in real time. Estimating the cloud height from just one image requires a computer model that is able to learn from previous experience and perform pattern recognition. The model proposed in this work is an Artificial Neural Network (ANN) that was previously trained with cloud features at different heights. The chosen Artificial Neural Network is a three-layer network, with six parameters in the input layer, 12 neurons in the hidden intermediate layer, and an output layer with a single output. The six input parameters are the average intensity values and the intensity standard deviation of each RGB channel.
The single output is the cloud height estimated by the ANN. Training was performed, using the back-propagation method, on a set of 260 different clouds with heights in the range [1000, 5000] m, and resulted in a correlation ratio of 0.74. The trained ANN can therefore be used to estimate cloud height. The system can also measure wind speed and direction at cloud height by measuring the displacement, in pixels, of a cloud feature between consecutively acquired photos. The geographical north direction can also be estimated with this setup from sequential night images with long exposure times. A further advantage of this single-camera system is that no camera calibration or synchronization is needed, which significantly reduces the cost and complexity of field deployment of cloud height measurement systems based on digital photography.
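The 6-12-1 network described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the learning rate, normalisation, and training data are invented, and a synthetic target stands in for the 260-cloud data set.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(img):
    """The six inputs described in the abstract: per-channel mean and
    standard deviation of an RGB sky image (H x W x 3, values 0-255)."""
    px = img.reshape(-1, 3)
    return np.concatenate([px.mean(axis=0), px.std(axis=0)]) / 255.0

# 6-12-1 feed-forward network: sigmoid hidden layer, linear output,
# trained by plain back-propagation on squared error.
W1 = rng.normal(0.0, 0.5, (12, 6)); b1 = np.zeros(12)
W2 = rng.normal(0.0, 0.5, (1, 12)); b2 = np.zeros(1)

def forward(x):
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return (W2 @ h + b2)[0], h

def train(X, y, lr=0.05, epochs=500):
    global W1, b1, W2, b2
    for _ in range(epochs):
        for x, t in zip(X, y):
            out, h = forward(x)
            err = out - t                      # d(loss)/d(output)
            dh = err * W2[0] * h * (1.0 - h)   # back-prop through sigmoid
            W2 -= lr * err * h[None, :]; b2 -= lr * err
            W1 -= lr * np.outer(dh, x);  b1 -= lr * dh

# Synthetic stand-in for the 260-cloud training set: heights rescaled to
# [0, 1] (1000-5000 m mapped linearly); the target here is invented.
X = rng.random((40, 6))
y = X.mean(axis=1)
mse = lambda: float(np.mean([(forward(x)[0] - t) ** 2 for x, t in zip(X, y)]))
mse_before = mse()
train(X, y)
mse_after = mse()
print(mse_before, "->", mse_after)
```

The same `train`/`forward` pair would be driven by `extract_features` applied to real sky images, with the predicted value mapped back to metres.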
NASA Astrophysics Data System (ADS)
Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.
2012-03-01
Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display coupled with HD technology may improve the diagnosis of subtle vocal fold lesions and vibratory anomalies.
Mapping Land and Water Surface Topography with instantaneous Structure from Motion
NASA Astrophysics Data System (ADS)
Dietrich, J.; Fonstad, M. A.
2012-12-01
Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects; however, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel, and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene, it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include any case where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam-removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers.
The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
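The surface-comparison step mentioned above — differencing time-lapse digital surface models — reduces to a "DEM of difference" computation. A minimal sketch follows, with a hypothetical level-of-detection threshold standing in for a real uncertainty analysis:

```python
import numpy as np

def dem_of_difference(dsm_t0, dsm_t1, lod=0.05):
    """Subtract two co-registered digital surface models (metres) and mask
    changes below a level-of-detection threshold `lod` as noise (NaN)."""
    diff = dsm_t1 - dsm_t0
    diff[np.abs(diff) < lod] = np.nan
    return diff

# Two synthetic 4x4 water-surface grids acquired one time step apart.
t0 = np.full((4, 4), 12.00)                 # flat surface at 12 m elevation
t1 = t0.copy()
t1[1:3, 1:3] += 0.20                        # a 0.2 m rise in the centre
dod = dem_of_difference(t0, t1)
print(np.nansum(dod))                       # total detected change, about 0.8 m
```

In practice the threshold would be derived from the propagated error of the two SfM surfaces rather than fixed by hand.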
NASA Astrophysics Data System (ADS)
Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.
2012-07-01
Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for the detection and 3D localization of various objects from a moving platform. At the same time, automatic traffic sign recognition from an equipped mobile platform has recently become a challenging issue for both intelligent transportation and municipal database collection. However, several problems are inherent to all recognition methods that rely entirely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system uses algorithms that detect, recognize and localize traffic signs by fusing shape, color and object information from both range and intensity images. For the calibration stage, a self-calibration method based on integrated bundle adjustment in a joint setup with the digital camera is applied for the PMD camera. As a result, independent accuracy assessments show an improvement of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals for the PMD camera over that achieved with basic calibration. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, along with the proposed techniques, achieves 90% true-positive recognition and an average 3D positioning accuracy of 12 centimetres.
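The GPS/INS integration via Kalman filtering can be illustrated with a toy one-dimensional position/velocity model. The paper's actual state vector and noise settings are not given in the abstract, so everything below is an invented example; with this linear model the EKF reduces to a standard Kalman filter.

```python
import numpy as np

# Toy 1-D GPS/INS fusion. State x = [position, velocity]; the accelerometer
# drives the prediction at 10 Hz, GPS position fixes arrive at 1 Hz.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
B = np.array([0.5 * dt**2, dt])            # acceleration input
H = np.array([[1.0, 0.0]])                 # GPS observes position only
Q = np.diag([1e-4, 1e-3])                  # process noise (INS drift)
R = np.array([[4.0]])                      # GPS noise, sigma = 2 m

x = np.zeros(2)
P = np.eye(2)

def predict(accel):
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def update(gps_pos):
    global x, P
    y = gps_pos - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(1)
true_pos, true_vel, accel = 0.0, 0.0, 0.5  # constant 0.5 m/s^2 acceleration
for step in range(100):
    true_pos += true_vel * dt + 0.5 * accel * dt**2
    true_vel += accel * dt
    predict(accel + rng.normal(0.0, 0.05)) # noisy accelerometer reading
    if step % 10 == 9:                     # GPS fix once per second
        update(true_pos + rng.normal(0.0, 2.0))
print(abs(x[0] - true_pos))                # final position error in metres
```

A real MMS filter would carry a larger state (attitude, biases) and use the nonlinear EKF prediction, but the predict/update structure is the same.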
Stegmayr, Armin; Fessl, Benjamin; Hörtnagl, Richard; Marcadella, Michael; Perkhofer, Susanne
2013-08-01
The aim of the study was to assess the potential negative impact of cellular phones and digitally enhanced cordless telecommunication (DECT) devices on the quality of static and dynamic scintigraphy to avoid repeated testing in infant and teenage patients to protect them from unnecessary radiation exposure. The assessment was conducted by performing phantom measurements under real conditions. A functional renal-phantom acting as a pair of kidneys in dynamic scans was created. Data were collected using the setup of cellular phones and DECT phones placed in different positions in relation to a camera head to test the potential interference of cellular phones and DECT phones with the cameras. Cellular phones reproducibly interfered with the oldest type of gamma camera, which, because of its single-head specification, is the device most often used for renal examinations. Curves indicating the renal function were considerably disrupted; cellular phones as well as DECT phones showed a disturbance concerning static acquisition. Variable electromagnetic tolerance in different types of γ-cameras could be identified. Moreover, a straightforward, low-cost method of testing the susceptibility of equipment to interference caused by cellular phones and DECT phones was generated. Even though some departments use newer models of γ-cameras, which are less susceptible to electromagnetic interference, we recommend testing examination rooms to avoid any interference caused by cellular phones. The potential electromagnetic interference should be taken into account when the purchase of new sensitive medical equipment is being considered, not least because the technology of mobile communication is developing fast, which also means that different standards of wave bands will be issued in the future.
Compression of CCD raw images for digital still cameras
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania
2005-03-01
Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
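The abstract does not spell out the block-adaptive scheme, so the following is only a hedged sketch of the general idea: for each block of a CFA colour plane, keep whichever simple predictor yields the cheaper residuals, and estimate the coded rate from the residual entropy (a stand-in for a real entropy coder such as Golomb-Rice).

```python
import numpy as np

def entropy_bits(residuals):
    """Shannon entropy (bits/sample) of a residual block, a proxy for the
    rate a real entropy coder (e.g. Golomb-Rice) would achieve."""
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def block_adaptive_rate(plane, block=16):
    """For each block of one CFA colour plane, keep the cheaper of two
    simple predictors (left neighbour vs. upper neighbour)."""
    bits = 0.0
    h, w = plane.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = plane[i:i + block, j:j + block].astype(np.int32)
            res_left = np.diff(b, axis=1).ravel()   # horizontal prediction
            res_up = np.diff(b, axis=0).ravel()     # vertical prediction
            bits += min(entropy_bits(res_left), entropy_bits(res_up)) * b.size
    return bits

# Synthetic 12-bit plane: smooth horizontal gradient plus sensor noise.
rng = np.random.default_rng(2)
plane = (np.linspace(500, 1500, 64)[None, :].repeat(64, axis=0)
         + rng.normal(0, 2, (64, 64))).astype(np.uint16)
raw_bits = plane.size * 12
coded_bits = block_adaptive_rate(plane)
print(raw_bits / coded_bits)               # estimated compression factor
```

On this smooth synthetic plane the per-block predictor choice gives a compression factor comfortably above two, consistent with the figures reported in the abstract.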
Cost-effective handling of digital medical images in the telemedicine environment.
Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel
2007-09-01
This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization on less costly monitors are discussed. For digitization of film-based media, a subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth and transmission time, the acceptable degree of compression with diagnostically no loss of important data was studied through randomized double-blind tests of subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that digital camera images of X-ray films were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with the imaging noise to compression noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high-quality monitors and conventional computer monitors. The results presented show good potential for implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments.
Enhancement of the Shared Graphics Workspace.
1987-12-31
… participants to share videodisc images and computer graphics displayed in color, and text and facsimile information displayed in black on amber. They could annotate the information in up to five colors and print the annotated version at both sites, using a standard fax machine. The SGWS also used a fax … system to display a document: whether text or photo, the camera scans the document, digitizes the data, and sends it via direct memory access (DMA) to …
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved dramatically in response to the demand for high-quality digital images; a digital still camera now has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3-CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
NASA Astrophysics Data System (ADS)
Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute
1998-04-01
Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 × 2048 and 6144 × 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approximately 2 seconds for an image of 2048 × 2048 pixels (12 MByte), 8 seconds for 4096 × 4096 pixels (48 MByte) and 40 seconds for 6144 × 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. Alternatively, the eyelike can be used as a back on most commercial 4 × 5 inch view cameras. This paper describes the eyelike camera concept with its essential system components and finishes with a description of the software needed to bring the high quality of the camera to the user.
Using a Digital Video Camera to Study Motion
ERIC Educational Resources Information Center
Abisdris, Gil; Phaneuf, Alain
2007-01-01
To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…
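The analysis this activity describes — digitising the ball's position frame by frame and extracting g — amounts to a parabola fit. The frame rate, tracked positions, and noise level below are hypothetical:

```python
import numpy as np

# Hypothetical tracked positions of a basketball in free fall, digitised
# from 15 video frames at 30 fps (y measured downward, in metres).
fps = 30.0
t = np.arange(15) / fps
g_true = 9.81
y = 0.5 * g_true * t**2                    # ideal free-fall drop
y += np.random.default_rng(3).normal(0.0, 0.002, t.size)  # ~2 mm noise

# Fit y = a t^2 + b t + c; the acceleration is 2a.
a, b, c = np.polyfit(t, y, 2)
g_est = 2.0 * a
print(g_est)                               # should come out near 9.81 m/s^2
```

In the classroom version, the `y` values would come from clicking on the ball in successive frames and converting pixels to metres with a reference length in the scene.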
Bringing the Digital Camera to the Physics Lab
ERIC Educational Resources Information Center
Rossi, M.; Gratton, L. M.; Oss, S.
2013-01-01
We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…
Development of a digital camera tree evaluation system
Neil Clark; Daniel L. Schmoldt; Philip A. Araman
2000-01-01
Within the Strategic Plan for Forest Inventory and Monitoring (USDA Forest Service 1998), there is a call to "conduct applied research in the use of [advanced technology] towards the end of increasing the operational efficiency and effectiveness of our program". The digital camera tree evaluation system is part of that research, aimed at decreasing field...
Data filtering with support vector machines in geometric camera calibration.
Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C
2010-02-01
The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate interior orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for an Olympus E-10 camera system with an aspherical zoom lens; these models are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the camera at three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for correcting image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.
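The distortion-modelling idea can be sketched as follows. To keep the example dependency-free it uses RBF kernel ridge regression rather than an actual ε-SVR (same radial basis function kernel, squared loss instead of the ε-insensitive loss), and the distortion curve, coefficients, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, gamma=2.0):
    """Radial basis function kernel matrix between two 1-D point sets."""
    return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)

# Synthetic radial distortion curve delta_r = k1*r^3 + k2*r^5 (mm) plus
# measurement noise; r is the radial image coordinate.
k1, k2 = 4e-3, -1e-5
r_train = rng.uniform(0.0, 10.0, 120)
d_train = k1 * r_train**3 + k2 * r_train**5 + rng.normal(0.0, 0.01, 120)

lam = 1e-6                                  # ridge regularisation
K = rbf(r_train, r_train)
alpha = np.linalg.solve(K + lam * np.eye(120), d_train)  # dual weights

r_test = np.linspace(0.5, 9.5, 50)
d_pred = rbf(r_test, r_train) @ alpha       # learned distortion correction
d_true = k1 * r_test**3 + k2 * r_test**5
print(np.max(np.abs(d_pred - d_true)))      # worst-case error in mm
```

The learned correction `d_pred` plays the role the fitted SVM plays in the paper: it maps a measured image coordinate to a distortion correction without assuming a specific polynomial lens model.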
Distributing digital video to multiple computers
Murray, James A.
2004-01-01
Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464
Testing and Validation of Timing Properties for High Speed Digital Cameras - A Best Practices Guide
2016-07-27
… a five-year plan to begin replacing its inventory of antiquated film and video systems with more modern and capable digital systems. As evidenced in … installation, testing, and documentation of DITCS. If shop support can be accelerated due to shifting mission priorities, this schedule can likely … assistance from the machine shop, welding shop, paint shop, and carpenter shop. Testing the DITCS system will require a KTM with digital cameras and …
Dynamic photoelasticity by TDI imaging
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
2001-06-01
High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film, requiring time-consuming and tedious wet processing. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments. Recently, there has been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic photoelastic stress patterns. Applications to strobe and streak photoelastic pattern recording, and system limitations, are explained in the paper.
Comparison of 10 digital SLR cameras for orthodontic photography.
Bister, D; Mordarai, F; Aveling, R M
2006-09-01
Digital photography is now widely used to document orthodontic patients. High quality intra-oral photography depends on a satisfactory 'depth of field' focus and good illumination. Automatic 'through the lens' (TTL) metering is ideal to achieve both the above aims. Ten current digital single lens reflex (SLR) cameras were tested for use in intra- and extra-oral photography as used in orthodontics. The manufacturers' recommended macro-lens and macro-flash were used with each camera. Handling characteristics, colour-reproducibility, quality of the viewfinder and flash recharge time were investigated. No camera took acceptable images in factory default setting or 'automatic' mode: this mode was not present for some cameras (Nikon, Fujifilm); led to overexposure (Olympus) or poor depth of field (Canon, Konica-Minolta, Pentax), particularly for intra-oral views. Once adjusted, only Olympus cameras were able to take intra- and extra-oral photographs without the need to change settings, and were therefore the easiest to use. All other cameras needed adjustments of aperture (Canon, Konica-Minolta, Pentax), or aperture and flash (Fujifilm, Nikon), making the latter the most complex to use. However, all cameras produced high quality intra- and extra-oral images, once appropriately adjusted. The resolution of the images is more than satisfactory for all cameras. There were significant differences relating to the quality of colour reproduction, size and brightness of the viewfinders. The Nikon D100 and Fujifilm S 3 Pro consistently scored best for colour fidelity. Pentax and Konica-Minolta had the largest and brightest viewfinders.
Applications of digital image acquisition in anthropometry
NASA Technical Reports Server (NTRS)
Woolford, B.; Lewis, J. L.
1981-01-01
A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
Coincidence electron/ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin
2015-05-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS camera has been developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight (TOF) spectrum. Efficient computer algorithms have been developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
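The real-time centroiding step can be pictured as a threshold-and-flood-fill routine that returns, for each spot, an intensity-weighted centroid and a summed intensity (the quantity correlated with TOF peak heights). This is an illustration, not the authors' algorithm:

```python
import numpy as np

def centroid_spots(frame, thresh=50):
    """Flood-fill connected pixels above `thresh`, then return for each
    spot its intensity-weighted centre and its summed intensity."""
    seen = np.zeros(frame.shape, dtype=bool)
    spots = []
    for i, j in zip(*np.nonzero(frame > thresh)):
        if seen[i, j]:
            continue
        stack, pix = [(i, j)], []
        seen[i, j] = True
        while stack:                          # 4-connected flood fill
            a, b = stack.pop()
            pix.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = a + da, b + db
                if (0 <= u < frame.shape[0] and 0 <= v < frame.shape[1]
                        and not seen[u, v] and frame[u, v] > thresh):
                    seen[u, v] = True
                    stack.append((u, v))
        w = np.array([frame[p] for p in pix], dtype=float)
        ij = np.array(pix, dtype=float)
        spots.append((tuple((ij * w[:, None]).sum(0) / w.sum()), w.sum()))
    return spots

# Synthetic frame: two well-separated ion spots of different brightness.
frame = np.zeros((64, 64))
frame[10:13, 20:23] = 200      # bright spot centred on (11, 21)
frame[40:42, 50:52] = 80       # dimmer spot centred on (40.5, 50.5)
for (cy, cx), total in centroid_spots(frame):
    print(round(cy, 1), round(cx, 1), total)
```

Each spot's summed intensity would then be matched against MCP pulse heights on the digitizer trace to pair image positions with arrival times.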
Multi-ion detection by one-shot optical sensors using a colour digital photographic camera.
Lapresta-Fernández, Alejandro; Capitán-Vallvey, Luis Fermín
2011-10-07
The feasibility and performance of a procedure to evaluate previously developed one-shot optical sensors as single and selective analyte sensors for potassium, magnesium and hardness are presented. The procedure uses a conventional colour digital photographic camera as the detection system for simultaneous multianalyte detection. A 6.0 megapixel camera was used, and the procedure describes how it is possible to quantify potassium, magnesium and hardness simultaneously from the images captured, using multianalyte one-shot sensors based on ionophore-chromoionophore chemistry, employing the colour information computed from a defined region of interest on the sensing membrane. One of the colour channels in the red, green, blue (RGB) colour space is used to build the analytical parameter, the effective degree of protonation (1 − α_eff), in good agreement with the theoretical model. The linearization of the sigmoidal response function improves the limit of detection (LOD) and analytical range in all cases studied. The changes were from 5.4 × 10^-6 to 2.7 × 10^-7 M for potassium, from 1.4 × 10^-4 to 2.0 × 10^-6 M for magnesium and from 1.7 to 2.0 × 10^-2 mg L^-1 of CaCO3 for hardness. The method's precision was determined in terms of the relative standard deviation (RSD%), which was from 2.4 to 7.6 for potassium, from 6.8 to 7.8 for magnesium and from 4.3 to 7.8 for hardness. The procedure was applied to the simultaneous determination of potassium, magnesium and hardness using multianalyte one-shot sensors in different types of waters and beverages in order to cover the entire application range, statistically validating the results against atomic absorption spectrometry as the reference procedure. Accordingly, this paper is an attempt to demonstrate the possibility of using a conventional digital camera as an analytical device to measure this type of one-shot sensor based on ionophore-chromoionophore chemistry instead of conventional lab instrumentation.
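One way to picture how a colour channel becomes the analytical parameter: average the chosen channel over the region of interest on the membrane and map it linearly between the fully protonated and fully deprotonated endpoint values. The endpoint constants below are hypothetical, not the paper's calibration:

```python
import numpy as np

def degree_of_protonation(roi, ch=0, v_prot=40.0, v_deprot=210.0):
    """Map the mean of one RGB channel over the sensing-membrane ROI onto
    the effective degree of protonation (1 - alpha_eff), linearly between
    the fully protonated (`v_prot`) and fully deprotonated (`v_deprot`)
    channel values. Both endpoint values are hypothetical calibration
    constants invented for this sketch."""
    mean_ch = roi[..., ch].mean()
    one_minus_alpha = (mean_ch - v_deprot) / (v_prot - v_deprot)
    return float(np.clip(one_minus_alpha, 0.0, 1.0))

# A synthetic 20x20 ROI whose red channel sits halfway between endpoints.
roi = np.zeros((20, 20, 3))
roi[..., 0] = 125.0
print(degree_of_protonation(roi))   # 0.5
```

The resulting (1 − α_eff) value would then be fed into the linearized sigmoidal response function to read off the analyte activity.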
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 × 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications
NASA Astrophysics Data System (ADS)
Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.
2017-11-01
Digital single-lens reflex cameras (DSLR), commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras known as mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the beam entering the lens reaches the sensor differently. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without (Sony a6000), were used in a close-range photogrammetric application on a rock surface at Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with the two cameras was compared using the differences between field and model coordinates obtained after alignment of the photographs. In addition, cross sections were created on the 3D models from both data sources, and the maximum area difference between them is quite small because they almost overlap. The mirrored camera was more internally consistent with respect to changes in model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced from photographs obtained with these cameras, can be used for terrestrial photogrammetric studies.
Generating High Resolution Surfaces from Images: When Photogrammetry and Applied Geophysics Meet
NASA Astrophysics Data System (ADS)
Bretar, F.; Pierrot-Deseilligny, M.; Schelstraete, D.; Martin, O.; Quernet, P.
2012-04-01
Airborne digital photogrammetry has been used for some years to create digital models of the Earth's topography from calibrated cameras. In recent years, however, the use of non-professional digital cameras has become a valuable means of reconstructing topographic surfaces. Today, the multi-megapixel resolution of non-professional digital cameras, used either in a close-range configuration or from low-altitude flights, provides a ground pixel size ranging from a fraction of a millimeter to a couple of centimeters, respectively. Such advances became reality because the data-processing chain made a tremendous breakthrough during the last five years. This study investigates the potential of the open-source software MICMAC, developed by the French National Survey IGN (http://www.micmac.ign.fr), to calibrate unoriented digital images and calculate surface models of extremely high resolution for Earth-science purposes. We would like to report two experiments performed in 2011. The first was performed in the context of risk assessment of rock falls and landslides along the cliffs of the Normandy seashore. The acquisition protocol for the first site, "Criel-sur-Mer", was very simple: a walk along the vertical chalk cliffs, taking photos with a focal length of 18 mm approximately every 50 m with an overlap of 80%, allowed us to generate 2.5 km of digital surface at centimeter resolution. The site of "Les Vaches Noires" was more complicated to acquire because of both the geology (dark clays) and the geometry (the landslide direction is parallel to the seashore and has a large depth of field from the shore). We therefore developed an innovative device mounted on board an autogyro (in between an ultralight power-driven aircraft and a helicopter). The entire area was surveyed with a focal length of 70 mm at 400 m asl with a ground pixel of 3 cm. MICMAC offers the possibility to directly georeference the digital model. Here, this was performed by a network of wireless GPS receivers called Geocubes, also developed at IGN.
The second experiment is part of field measurements performed over the flanks of the volcano Piton de la Fournaise, La Réunion island. In order to characterize the roughness of different types of lava flows, extremely high resolution Digital Terrain Models (0.6 mm) were generated with MICMAC. The use of such high-definition topography made the characterization possible through the calculation of the correlation length, the standard deviation and the fractal dimension. To conclude, we will sketch a synthesis of the needs of geoscientists vs. the optimal resolution of digital topographic data.
NASA Astrophysics Data System (ADS)
Morozova, K.; Jaeger, R.; Balodis, J.; Kaminskis, J.
2017-10-01
Over several years the Institute of Geodesy and Geoinformatics (GGI) was engaged in the design and development of a digital zenith camera. At the moment the camera developments are finished and tests by field measurements have been done. In order to check these data and to use them for geoid model determination, the DFHRS (Digital Finite-element Height Reference Surface (HRS)) v4.3 software is used. It is based on parametric modelling of the HRS as a continuous polynomial surface. The HRS, providing the local geoid height N, is a necessary geodetic infrastructure for a GNSS-based determination of physical heights H from ellipsoidal GNSS heights h, by H = h - N. This research and publication deal with the inclusion of the data of observed vertical deflections from the digital zenith camera into the mathematical model of the DFHRS approach and software v4.3. A first target was to test and validate the mathematical model and software, using additionally real data from the above-mentioned zenith camera observations of deflections of the vertical. A second concern of the research was to analyze the results and the improvement of the Latvian quasi-geoid computation compared to the previous version of the HRS computed without zenith-camera-based deflections of the vertical. The further development of the mathematical model and software concerns the use of spherical cap harmonics as the designed carrier function for DFHRS v5. It enables - in the sense of the strict integrated geodesy approach, holding also for geodetic network adjustment - both a full gravity field and a geoid and quasi-geoid determination. In addition, it allows the inclusion of gravimetric measurements, together with deflections of the vertical from digital zenith cameras, and all other types of observations. The theoretical description of the updated version of the DFHRS software and methods is discussed in this publication.
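The central relation H = h - N can be illustrated in a few lines of Python. The polynomial surface and the coefficients below are purely hypothetical placeholders, not the actual DFHRS v4.3 parameterization and not real Latvian geoid values:

```python
# Sketch of the DFHRS idea: the height reference surface N(x, y) is a
# continuous polynomial patch, and physical heights follow from H = h - N.

def geoid_height(x, y, coeffs):
    """Evaluate a bivariate polynomial N(x, y) = sum a_ij * x^i * y^j."""
    return sum(a * x**i * y**j for (i, j), a in coeffs.items())

def physical_height(h_ellipsoidal, x, y, coeffs):
    """H = h - N: physical height from a GNSS ellipsoidal height."""
    return h_ellipsoidal - geoid_height(x, y, coeffs)

# Illustrative patch coefficients (hypothetical, for demonstration only)
coeffs = {(0, 0): 23.50, (1, 0): 0.002, (0, 1): -0.001, (1, 1): 1e-6}

H = physical_height(120.75, x=10.0, y=20.0, coeffs=coeffs)
```

In the real software the polynomial patches are estimated in a finite-element adjustment from fitting points and, as described above, from observed deflections of the vertical; the sketch only shows how a ready surface converts GNSS heights.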
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at an 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid-state charge-coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system of the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
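The 432-million-pixel figure can be sanity-checked with simple arithmetic, assuming the standard 9 x 9 inch (about 229 x 229 mm) aerial film format:

```python
# Back-of-envelope check of the pixel equivalent of one NAPP frame,
# assuming a 229 x 229 mm film format scanned at the stated 11 µm spot size.
frame_mm = 229.0                                 # frame side length
spot_um = 11.0                                   # scanning spot (pixel) size
pixels_per_side = frame_mm * 1000.0 / spot_um    # ~20,800 pixels per side
total_pixels = pixels_per_side ** 2              # ~4.3e8, i.e. ~432 million
```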
Design study report. Volume 2: Electronic unit
NASA Technical Reports Server (NTRS)
1973-01-01
The recording system discussed is required to record and reproduce wideband data from either of the two primary Earth Resources Technology Satellite sensors: the Return Beam Vidicon (RBV) camera or the Multi-Spectral Scanner (MSS). The camera input is an analog signal with a bandwidth from dc to 3.5 MHz; this signal is accommodated through FM recording techniques which provide a recorder signal-to-noise ratio in excess of 39 dB, black-to-white signal/rms noise, over the specified bandwidth. The MSS provides, as initial output, 26 narrowband channels. These channels are multiplexed prior to transmission, or recording, into a single 15 Megabit/second digital data stream. Within the recorder, the 15 Megabit/second NRZL signal is processed through the same FM electronics as the RBV signal, but the basic FM standards are modified to provide an internal 10.5 MHz baseband response with a signal-to-noise ratio of about 25 dB. Following FM demodulation, however, the MSS signal is digitally re-shaped and re-clocked so that good bit stability and signal-to-noise exist at the recorder output.
Suresh, R
2017-08-01
Pertinent marks on fired cartridge cases, such as those from the firing pin, breech face, extractor, ejector, etc., are used for firearm identification. A non-standard semiautomatic pistol and four .22 rimfire cartridges (head stamp KF) were used for a known-source comparison study. Two test-fired cartridge cases were examined under a stereomicroscope. The characteristic marks were captured by a digital camera, and comparative analysis of the striation marks was done using different tools available in Microsoft Word (Windows 8) on a computer system. The similarities of the striation marks thus obtained are highly convincing for identifying the firearm. In this paper, an effort has been made to study and compare the striation marks of two fired cartridge cases using a stereomicroscope, a digital camera and a computer system. A comparison microscope was not used in this study. The method described in this study is simple, cost-effective and portable for field work, and can be carried in a crime scene vehicle to facilitate immediate on-the-spot examination. The findings may be highly helpful to the forensic community, law enforcement agencies and students.
NASA Technical Reports Server (NTRS)
Andres, Vince; Walter, David; Hallal, Charles; Jones, Helene; Callac, Chris
2004-01-01
The SSC Multimedia Archive is an automated electronic system to manage images, acquired both by film and digital cameras, for the Public Affairs Office (PAO) at Stennis Space Center (SSC). Previously, the image archive was based on film photography and utilized a manual system that, by today's standards, had become inefficient and expensive. Now, the SSC Multimedia Archive, based on a server at SSC, contains both catalogs and images for pictures taken digitally and with a traditional film-based camera, along with metadata about each image. After a "shoot," a photographer downloads the images into the database. Members of the PAO can use a Web-based application to search, view and retrieve images, approve images for publication, and view and edit metadata associated with the images. Approved images are archived and cross-referenced with appropriate descriptions and information. Security is provided by allowing administrators to explicitly grant personnel access only to the components of the system they need (e.g., only photographers may upload images, and only designated PAO employees may approve images).
A new high-speed IR camera system
NASA Technical Reports Server (NTRS)
Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.
1994-01-01
A multi-organizational team at the Goddard Space Flight Center is developing a new far-infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec. The system consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.
Principles and practice of external digital photography in ophthalmology
Mukherjee, Bipasha; Nair, Akshay Gopinathan
2012-01-01
It is mandatory to incorporate clinical photography in an ophthalmic practice. Patient photographs are routinely used in teaching, presentations, documenting surgical outcomes and marketing. Standardized clinical photographs are part of an armamentarium for any ophthalmologist interested in enhancing his or her practice. Unfortunately, many clinicians still avoid taking patient photographs for want of basic knowledge or inclination. The ubiquitous presence of the digital camera and digital technology has made it extremely easy and affordable to take high-quality images. It is not compulsory to employ a professional photographer or invest in expensive equipment any longer for this purpose. Any ophthalmologist should be able to take clinical photographs in his/her office settings with minimal technical skill. The purpose of this article is to provide an ophthalmic surgeon with guidelines to achieve standardized photographic views for specific procedures, to achieve consistency, to help in pre-operative planning and to produce accurate pre-operative and post-operative comparisons, which will aid in self-improvement, patient education, medicolegal documentation and publications. This review also discusses editing, storage, patient consent, medicolegal issues and importance of maintenance of patient confidentiality. PMID:22446907
Strauss, Rupert W; Krieglstein, Tina R; Priglinger, Siegfried G; Reis, Werner; Ulbig, Michael W; Kampik, Anselm; Neubauer, Aljoscha S
2007-11-01
To establish a set of quality parameters for grading image quality and apply those to evaluate the fundus image quality obtained by a new scanning digital ophthalmoscope (SDO) compared with standard slide photography. On visual analogue scales a total of eight image characteristics were defined: overall quality, contrast, colour brilliance, focus (sharpness), resolution and details, noise, artefacts and validity of clinical assessment. Grading was repeated after 4 months to assess repeatability. Fundus images of 23 patients imaged digitally by SDO and by Zeiss 450FF fundus camera using Kodak film were graded side-by-side by three graders. Lens opacity was quantified with the Interzeag Lens Opacity Meter 701. For all of the eight scales of image quality, good repeatability within the graders (mean Kendall's W 0.69) was obtained after 4 months. Inter-grader agreement ranged between 0.31 and 0.66. Despite the SDO's limited nominal image resolution of 720 x 576 pixels, the Zeiss FF 450 camera performed better in only two of the subscales - noise (p = 0.001) and artefacts (p = 0.01). Lens opacities significantly influenced only the two subscales 'resolution' and 'details', which deteriorated with increasing media opacities for both imaging systems. Distinct scales to grade image characteristics of different origin were developed and validated. Overall SDO digital imaging was found to provide fundus pictures of a similarly high level of quality as expert photography on slides.
Bringing the Digital Camera to the Physics Lab
NASA Astrophysics Data System (ADS)
Rossi, M.; Gratton, L. M.; Oss, S.
2013-03-01
We discuss how compressed images created by modern digital cameras can lead to severe problems in the quantitative analysis of experiments based on such images. The difficulties result from the nonlinear treatment of light-intensity values stored in compressed files. To overcome these problems, one has to adopt non-compressed, native formats, as we examine in this work.
A Digital Approach to Learning Petrology
NASA Astrophysics Data System (ADS)
Reid, M. R.
2011-12-01
In the undergraduate igneous and metamorphic petrology course at Northern Arizona University, we are employing petrographic microscopes equipped with relatively inexpensive (~$200) digital cameras that are linked to pen-tablet computers. The camera-tablet systems can assist student learning in a variety of ways. Images provided by the tablet computers can be used to help students filter the visually complex specimens they examine. Instructors and students can simultaneously view the same petrographic features captured by the cameras and exchange information about them by pointing to salient features with the tablet pen. These images can become part of a virtual mineral/rock/texture portfolio tailored to an individual student's needs. Captured digital illustrations can be annotated with digital ink or computer graphics tools; this activity emulates essential features of more traditional line drawings (visualizing an appropriate feature and selecting a representative image of it, internalizing the feature through studying and annotating it) while minimizing the frustration that many students feel about drawing. In these ways, we aim to help a student progress more efficiently from novice to expert. A number of our petrology laboratory exercises involve use of the camera-tablet systems for collaborative learning. Observational responsibilities are distributed among individual members of teams in order to increase interdependence and accountability, and to encourage efficiency. Annotated digital images are used to share students' findings and arrive at an understanding of an entire rock suite. This interdependence increases the individual's sense of responsibility for their work, and reporting out encourages students to practice use of technical vocabulary and to defend their observations. Pre- and post-course student interest in the camera-tablet systems has been assessed.
In a post-course survey, the majority of students reported that, if available, they would use camera-tablet systems to capture microscope images (77%) and to make notes on images (71%). An informal focus group recommended introducing the cameras as soon as possible and having them available for making personal mineralogy/petrology portfolios. Because the stakes are perceived as high, use of the camera-tablet systems for peer-peer learning has been progressively modified to bolster student confidence in their collaborative efforts.
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured-light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
Estimating the spatial position of marine mammals based on digital camera recordings
Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert
2015-01-01
Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ~3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
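A minimal flat-sea sketch of the underlying geometry (not the authors' actual R code, which additionally corrects for frame rotation and estimates pixel size from reference points): for a camera at height h above sea level, an object appearing at an angle theta below the horizon lies at a distance of roughly d = h / tan(theta).

```python
import math

def distance_to_object(cam_height_m, pixels_below_horizon, rad_per_pixel):
    """Flat-sea approximation: map the pixel offset below the horizon
    to a dip angle theta, then to a ground distance d = h / tan(theta)."""
    theta = pixels_below_horizon * rad_per_pixel  # dip angle in radians
    return cam_height_m / math.tan(theta)

# Hypothetical example: camera 9.59 m above sea level, angular pixel
# size 1e-4 rad, sighting 160 pixels below the horizon.
d = distance_to_object(9.59, 160, 1e-4)
```

At larger ranges a full implementation would also account for Earth curvature and atmospheric refraction, which the flat-sea formula ignores.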
The Art and Science of Photography in Hand Surgery
Wang, Keming; Kowalski, Evan J.; Chung, Kevin C.
2013-01-01
High-quality medical photography plays an important role in teaching and demonstrating the functional capacity of the hands, as well as in medicolegal documentation. Obtaining standardized, high-quality photographs is now an essential component of many surgery practices. The importance of standardized photography in facial and cosmetic surgery has been well documented in previous studies, but no studies have thoroughly addressed the details of photography for hand surgery. In this paper, we will provide a set of guidelines and basic camera concepts for different scenarios to help hand surgeons obtain appropriate and informative high quality photographs. A camera used for medical photography should come equipped with a large sensor size and an optical zoom lens with a focal length ranging anywhere from 14-75mm. In a clinic or office setting, we recommend six standardized views of the hand and four views for the wrist, and additional views should be taken for tendon ruptures, nerve injuries, or other deformities of the hand. For intra-operative pictures, the camera operator should understand the procedure and pertinent anatomy in order to properly obtain high-quality photographs. When digital radiographs are not available, and radiographic film must be photographed, it is recommended to reduce the exposure and change the color mode to black and white to obtain the best possible pictures. The goal of medical photography is to present the subject in an accurate and precise fashion. PMID:23755927
3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras
NASA Astrophysics Data System (ADS)
Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.
2015-04-01
The 3D reconstruction with metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest to archaeologists constitutes a starting point for the classification and description of objects in digital format, and for subsequent fruition by users through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, digital low-cost (off-the-shelf) cameras. Results of tests made on submerged objects with three cameras are presented: Canon PowerShot G12, Intova Sport HD and GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and the models derived from the different cameras were compared. The different potentialities of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea-bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel-intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal-processing circuit, and a digital circuit that provides a numerical digital output of the shock, or negative edge, location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface, including serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
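The core processing step, locating the negative peak (the shadowgraph dip) in a linear intensity profile, can be sketched as follows. This is only a software illustration of the idea; the actual smart camera performs it in dedicated analog and digital hardware at over 1,000 frames per second.

```python
def shock_location(profile):
    """Return the pixel index of the deepest dip (negative peak) in a
    linear pixel-intensity profile, i.e. the shock shadowgraph position."""
    return min(range(len(profile)), key=profile.__getitem__)

# Hypothetical 8-pixel profile of the laser sheet with a dark dip at index 5
profile = [200, 198, 201, 199, 150, 40, 160, 197]
loc = shock_location(profile)
```

A hardware implementation would typically add thresholding or filtering to reject noise before picking the minimum; the sketch keeps only the peak search.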
Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission
NASA Astrophysics Data System (ADS)
Bieszczad, Grzegorz
2015-05-01
In this article, an external digital interface specially designed for a thermographic camera built at the Military University of Technology is described. The aim of the article is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface to transfer infrared or video digital data, and describes the solution we elaborated based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated link for image transmission is built using an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol. The transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024@60Hz 24-bit video data over one signal pair, and was tested by transmitting the thermal vision camera's picture to a remote monitor. The construction of a dedicated video link reduces power consumption compared to solutions with ASIC-based encoders and decoders realizing video links such as DVI or the packet-based DisplayPort, while simultaneously reducing the wiring needed to establish the link to one pair. The article describes the functions of modules integrated in the FPGA design realizing several tasks, such as synchronization to the video source, video stream packetization, interfacing the transceiver module, and dynamic clock generation for video standard conversion.
Automated Meteor Detection by All-Sky Digital Camera Systems
NASA Astrophysics Data System (ADS)
Suk, Tomáš; Šimberová, Stanislava
2017-12-01
We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
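A highly simplified stand-in for the trace-detection step (the paper's actual pipeline of preparation, pre-processing, analysis and recognition in time sequences is far more elaborate): flag pixels that rise above a background frame by more than k times the expected sensor noise.

```python
import numpy as np

def detect_trace(frame, background, noise_sigma, k=5.0):
    """Return a boolean mask of pixels brighter than the background
    by more than k * noise_sigma (candidate meteor-trace pixels)."""
    return (frame.astype(float) - background.astype(float)) > k * noise_sigma

# Hypothetical 4x4 frames: one bright pixel against a flat background
bg = np.zeros((4, 4))
frame = bg.copy()
frame[1, 2] = 50.0
mask = detect_trace(frame, bg, noise_sigma=2.0)
```

A real detector would then test whether the flagged pixels form an elongated streak across consecutive frames, rather than accepting isolated bright pixels.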
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-11-17
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera; and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
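The two corrections at the heart of such a protocol, flat-field division for vignetting and a normalized vegetation index that cancels scene illumination, can be sketched as follows (illustrative only; the study's actual processing chain also normalizes between acquisition dates):

```python
import numpy as np

def correct_vignetting(img, flat_field):
    """Divide out a normalized flat-field frame (e.g. an image of a
    uniformly lit white target) to remove radial lens falloff."""
    gain = flat_field / flat_field.max()   # 1.0 at the brightest point
    return img / gain

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from corrected bands.
    The ratio cancels multiplicative illumination changes."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical 1x2 image whose right pixel is unattenuated (gain 1.0)
# and whose left pixel is attenuated to half brightness by vignetting.
flat = np.array([[2.0, 4.0]])
corrected = correct_vignetting(np.array([[1.0, 2.0]]), flat)
```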
Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications
NASA Astrophysics Data System (ADS)
Olson, Gaylord G.; Walker, Jo N.
1997-09-01
Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with the proliferation of 'digital cameras' aimed at the home market; that category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. Such cameras can often satisfy some of the more demanding machine vision tasks, in some cases at a higher measurement rate than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs that offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.
3D Point Cloud Model Colorization by Dense Registration of Digital Images
NASA Astrophysics Data System (ADS)
Crombez, N.; Caron, G.; Mouaddib, E.
2015-02-01
Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric virtual visual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera which took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
Development of a camera casing suited for cryogenic and vacuum applications
NASA Astrophysics Data System (ADS)
Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.
2013-12-01
We report on the design, construction, and operation of a PID temperature controlled and vacuum tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components are discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video recording.
ERIC Educational Resources Information Center
Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol
2011-01-01
The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa
2016-03-01
A non-contact imaging method with a digital RGB camera is proposed to evaluate plethysmograms and spontaneous low-frequency oscillations. In vivo experiments with human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activity of the autonomic nervous system.
Silva, Paolo S; Walia, Saloni; Cavallerano, Jerry D; Sun, Jennifer K; Dunn, Cheri; Bursell, Sven-Erik; Aiello, Lloyd M; Aiello, Lloyd Paul
2012-09-01
To compare agreement between diagnosis of clinical level of diabetic retinopathy (DR) and diabetic macular edema (DME) derived from nonmydriatic fundus images using a digital camera back optimized for low-flash image capture (MegaVision) compared with standard seven-field Early Treatment Diabetic Retinopathy Study (ETDRS) photographs and dilated clinical examination. Subject comfort and image acquisition time were also evaluated. In total, 126 eyes from 67 subjects with diabetes underwent Joslin Vision Network nonmydriatic retinal imaging. ETDRS photographs were obtained after pupillary dilation, and fundus examination was performed by a retina specialist. There was near-perfect agreement between MegaVision and ETDRS photographs (κ=0.81, 95% confidence interval [CI] 0.73-0.89) for clinical DR severity levels. Substantial agreement was observed with clinical examination (κ=0.71, 95% CI 0.62-0.80). For DME severity level there was near-perfect agreement with ETDRS photographs (κ=0.92, 95% CI 0.87-0.98) and moderate agreement with clinical examination (κ=0.58, 95% CI 0.46-0.71). The wider MegaVision 45° field led to identification of nonproliferative changes in areas not imaged by the 30° field of ETDRS photos. Field area unique to ETDRS photographs identified proliferative changes not visualized with MegaVision. Mean MegaVision acquisition time was 9:52 min. After imaging, 60% of subjects preferred the MegaVision lower flash settings. When evaluated using a rigorous protocol, images captured using a low-light digital camera compared favorably with ETDRS photography and clinical examination for grading level of DR and DME. Furthermore, these data suggest the importance of more extensive peripheral images and suggest that utilization of wide-field retinal imaging may further improve accuracy of DR assessment.
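Agreement values such as κ=0.81 are computed from a two-grader confusion matrix with Cohen's kappa, which discounts agreement expected by chance. A minimal sketch with illustrative counts, not the study's data:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of two graders.

    kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    """
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n
    # Chance agreement from the marginal totals of each grader
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (observed - expected) / (1 - expected)

# Two graders rating 100 eyes into two severity levels (illustrative)
m = [[45, 5],
     [5, 45]]
print(round(cohens_kappa(m), 2))  # 0.8
```

On the conventional scale used in the abstract, values above 0.8 are read as "near-perfect" and 0.61-0.80 as "substantial" agreement.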
Fisheye image rectification using spherical and digital distortion models
NASA Astrophysics Data System (ADS)
Li, Xin; Pi, Yingdong; Jia, Yanling; Yang, Yuhui; Chen, Zhiyong; Hou, Wenguang
2018-02-01
Fisheye cameras have been widely used in many applications, including close-range visual navigation and observation and cyber city reconstruction, because their field of view is much larger than that of a common pinhole camera. This means that a fisheye camera can capture more information than a pinhole camera in the same scenario. However, fisheye images contain serious distortion, which may make it difficult for human observers to recognize the objects within. Therefore, in most practical applications, the fisheye image should be rectified to a pinhole perspective projection image to conform to human cognitive habits. Traditional mathematical model-based methods cannot effectively remove the distortion, while the digital distortion model reduces the image resolution to some extent. Considering these defects, this paper proposes a new method that combines the physical spherical model and the digital distortion model. The distortion of fisheye images can be effectively removed with the proposed approach. Many experiments validate its feasibility and effectiveness.
An automated digital imaging system for environmental monitoring applications
Bogle, Rian; Velasco, Miguel; Vogel, John
2013-01-01
Recent improvements in the affordability and availability of high-resolution digital cameras, data loggers, embedded computers, and radio/cellular modems have advanced the development of sophisticated automated systems for remote imaging. Researchers have successfully placed and operated automated digital cameras in remote locations and in extremes of temperature and humidity, ranging from the islands of the South Pacific to the Mojave Desert and the Grand Canyon. With the integration of environmental sensors, these automated systems are able to respond to local conditions and modify their imaging regimes as needed. In this report we describe in detail the design of one type of automated imaging system developed by our group. It is easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.
Niamtu, Joseph
2004-01-01
Cosmetic surgery and photography are inseparable. Clinical photographs serve as diagnostic aids, medical records, legal protection, and marketing tools. In the past, taking high-quality, standardized images and maintaining and using them for presentations were tasks of significant proportion when done correctly. Although the cosmetic literature is replete with articles on standardized photography, it has eluded many practitioners, in part due to its complexity. A paradigm shift has occurred in the past decade, and digital technology has revolutionized clinical photography and presentations. Digital technology has made it easier than ever to take high-quality, standardized images and to use them in a multitude of ways to enhance the practice of cosmetic surgery. PowerPoint presentations have become the standard for academic presentations, but many pitfalls exist, especially when taking a backup disc to play on an alternate computer at a lecture venue. Embracing digital technology has a mild to moderate learning curve but is complicated by old habits and holdovers from the days of slide photography, macro lenses, and specialized flashes. Discussion is presented to circumvent common problems involving computer glitches with PowerPoint presentations. In the past, high-quality clinical photography was complex and sometimes beyond the confines of a busy clinical practice. The digital revolution of the past decade has removed many of these associated barriers, and it has never been easier or more affordable to take images and use them in a multitude of ways for learning, judging surgical outcomes, teaching and lecturing, and marketing. Even though this technology has existed for years, many practitioners have failed to embrace it because of various reasons or fears. By following a few simple techniques, even the most novice practitioner can be at the forefront of digital imaging technology.
By observing a number of modified techniques with digital cameras, any practitioner can take high-quality, standardized clinical photographs and can make and use these images to enhance his or her practice. This article deals with common pitfalls of digital photography and PowerPoint presentations and presents multiple pearls to achieve proficiency quickly with digital photography and imaging as well as avoid malfunction of PowerPoint presentations in an academic lecture venue.
Establishing a gold standard for manual cough counting: video versus digital audio recordings
Smith, Jaclyn A; Earis, John E; Woodcock, Ashley A
2006-01-01
Background Manual cough counting is time-consuming and laborious; however, it is the standard to which automated cough monitoring devices must be compared. We have compared manual cough counting from video recordings with manual cough counting from digital audio recordings. Methods We studied 8 patients with chronic cough, overnight in laboratory conditions (diagnoses were 5 asthma, 1 rhinitis, 1 gastro-oesophageal reflux disease and 1 idiopathic cough). Coughs were recorded simultaneously using a video camera with infrared lighting and digital sound recording. The numbers of coughs in each 8 hour recording were counted manually, by a trained observer, in real time from the video recordings and using audio-editing software from the digital sound recordings. Results The median cough frequency was 17.8 (IQR 5.9–28.7) cough sounds per hour in the video recordings and 17.7 (6.0–29.4) coughs per hour in the digital sound recordings. There was excellent agreement between the video and digital audio cough rates; mean difference of -0.3 coughs per hour (SD ± 0.6), 95% limits of agreement -1.5 to +0.9 coughs per hour. Video recordings had poorer sound quality even in controlled conditions and can only be analysed in real time (8 hours per recording). Digital sound recordings required 2–4 hours of analysis per recording. Conclusion Manual counting of cough sounds from digital audio recordings has excellent agreement with simultaneous video recordings in laboratory conditions. We suggest that ambulatory digital audio recording is therefore ideal for validating future cough monitoring devices, as this can be performed in the patients' own environment. PMID:16887019
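The agreement figures quoted (mean difference with 95% limits of agreement) follow the standard Bland-Altman calculation: bias ± 1.96 standard deviations of the paired differences. A minimal sketch with illustrative hourly rates, not the study's data:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits for two paired measurement methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative hourly cough rates counted from video vs digital audio
video = [17.8, 5.9, 28.7, 12.0, 22.1]
audio = [17.7, 6.0, 29.4, 12.3, 22.0]
bias, lo, hi = limits_of_agreement(video, audio)
print(round(bias, 2))  # -0.18
```

A bias near zero with narrow limits, as reported in the abstract, indicates the two counting methods can be used interchangeably.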
Digital methods of recording color television images on film tape
NASA Astrophysics Data System (ADS)
Krivitskaya, R. Y.; Semenov, V. M.
1985-04-01
Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic crystal face plates is still most effective for high fidelity. This method was improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlacing-to-linewise conversion and the mechanical equipment, and lengthens exposure time while shortening recording time. The latest image transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The 9-step procedure includes preprocessing the total color television signal with reduction of noise level and time errors, followed by frame frequency conversion and setting the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of the R, G, B signals and colorimetric matching of the TV camera and film tape, the simultaneous R, G, B signals are converted from interlacing to sequential triads of color-quotient frames with linewise scanning at triple frequency. Color-quotient signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve signal quality and simplify process control, without requiring circuit stabilization, image processing is still analog.
Restoration of hot pixels in digital imagers using lossless approximation techniques
NASA Astrophysics Data System (ADS)
Hadar, O.; Shleifer, A.; Cohen, E.; Dotan, Y.
2015-09-01
During the last twenty years, digital imagers have spread into industrial and everyday devices, such as satellites, security cameras, cell phones, laptops and more. "Hot pixels" are the main defects in remote digital cameras. In this paper we demonstrate an improvement over existing restoration methods that use (solely or as an auxiliary tool) some average of the surrounding pixels, such as the method of the Chapman-Koren study 1,2. The proposed method uses the CALIC algorithm and adapts it to make full use of the surrounding pixels.
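A common baseline for this kind of neighbour-based restoration replaces each flagged hot pixel with a statistic of its valid neighbours. The sketch below illustrates that general idea only; it is not the CALIC-based method proposed in the paper, and `restore_hot_pixels` is a hypothetical helper:

```python
import numpy as np

def restore_hot_pixels(image, hot_mask):
    """Replace each flagged hot pixel with the median of its valid 3x3 neighbours."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y, x in zip(*np.nonzero(hot_mask)):
        # Clip the 3x3 window at the image borders
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        patch = image[y0:y1, x0:x1]
        valid = ~hot_mask[y0:y1, x0:x1]  # exclude other hot pixels
        out[y, x] = np.median(patch[valid])
    return out

img = np.array([[10, 10, 10],
                [10, 255, 10],
                [10, 10, 10]], float)
mask = img > 100
restored = restore_hot_pixels(img, mask)
print(restored[1, 1])  # 10.0
```

Context-modelling coders such as CALIC improve on this by predicting the missing value from local gradients rather than a plain average, which is the direction the paper takes.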
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
Digital In, Digital Out: Digital Editing with Firewire.
ERIC Educational Resources Information Center
Doyle, Bob; Sauer, Jeff
1997-01-01
Reviews linear and nonlinear digital video (DV) editing equipment and software, using the IEEE 1394 (FireWire) connector. Includes a chart listing specifications and rating eight DV editing systems, reviews two DV still-photo cameras, and previews beta DV products. (PEN)
NASA Astrophysics Data System (ADS)
Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.
2016-03-01
The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malaria retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. As a consequence, vessel discoloration was not addressed in this study.
None of the cameras offered real-time image quality assessment to ensure high-quality images, which would afford the best possible opportunity for reading by a remotely located specialist.
Windsor, J S; Rodway, G W; Middleton, P M; McCarthy, S
2006-01-01
Objective The emergence of a new generation of “point‐and‐shoot” digital cameras offers doctors a compact, portable and user‐friendly solution to the recording of highly detailed digital photographs and video images. This work highlights the use of such technology, and provides information for those who wish to record, store and display their own medical images. Methods Over a 3‐month period, a digital camera was carried by a doctor in a busy, adult emergency department and used to record a range of clinical images that were subsequently transferred to a computer database. Results In total, 493 digital images were recorded, of which 428 were photographs and 65 were video clips. These were successfully used for teaching purposes, publications and patient records. Conclusions This study highlights the importance of informed consent, the selection of a suitable package of digital technology and the role of basic photographic technique in developing a successful digital database in a busy clinical environment. PMID:17068281
Operative record using intraoperative digital data in neurosurgery.
Houkin, K; Kuroda, S; Abe, H
2000-01-01
The purpose of this study was to develop a new method for more efficient and accurate operative records using intra-operative digital data in neurosurgery, including macroscopic procedures and microscopic procedures under an operating microscope. Macroscopic procedures were recorded using a digital camera and microscopic procedures were recorded using a micro digital camera attached to an operating microscope. Operative records were then recorded digitally and filed in a computer using image retouch software and database software. The time necessary for editing the digital data and completing the record was less than 30 minutes. Once these operative records are digitally filed, they are easily transferred and used as a database. Using digital operative records along with digital photography, neurosurgeons can document their procedures more accurately and efficiently than by the conventional method (handwriting). A complete digital operative record is not only accurate but also time saving. Construction of a database, data transfer and desktop publishing can be achieved using the intra-operative data, including intra-operative photographs.
[Symbolical violence in the access of disabled persons to basic health units].
de França, Inacia Sátiro Xavier; Pagliuca, Lorita Marlena Freitag; Baptista, Rosilene Santos; de França, Eurípedes Gil; Coura, Alexsandro Silva; de Souza, Jeová Alves
2010-01-01
A descriptive study which aimed to characterize the conditions of access of people with disabilities (PD) to Basic Health Units (UBS). Data were collected in January 2009 in 20 UBSF, using a digital camera and a checklist based on ABNT standard NBR 9050. The results showed: access in town: no traffic lights (100%), no pedestrian lanes (100%), bumpy sidewalks (90%); access in the UBS: non-standard doors (30%), staircases without banisters (20%), non-standard floors (75%), non-standard furniture (20%), drinking fountains not complying with the standard (55%), making it difficult for people with disabilities to use a filter (30%), or no drinking fountains or filters at all (15%); inadequately installed telephones (55%); inaccessible restrooms (96%). Access of PD to the UBS is permeated by symbolic violence.
Novel Principle of Contactless Gauge Block Calibration
Buchta, Zdeněk; Řeřucha, Šimon; Mikel, Břetislav; Čížek, Martin; Lazar, Josef; Číp, Ondřej
2012-01-01
In this paper, a novel principle of contactless gauge block calibration is presented. The principle of contactless gauge block calibration combines low-coherence interferometry and laser interferometry. An experimental setup combines Dowell interferometer and Michelson interferometer to ensure a gauge block length determination with direct traceability to the primary length standard. By monitoring both gauge block sides with a digital camera gauge block 3D surface measurements are possible too. The principle presented is protected by the Czech national patent No. 302948. PMID:22737012
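The interferometric length determination behind such a calibration can be illustrated with the method of exact fractions: the block length is an integer-plus-fractional number of half wavelengths, where the fraction is read from the interferogram and the integer order is supplied by the nominal length. A minimal numeric sketch with illustrative values, not the authors' algorithm; `length_from_fraction` is a hypothetical helper:

```python
def length_from_fraction(nominal_m, wavelength_m, measured_fraction):
    """Refine a nominal gauge block length from an observed fringe fraction.

    The true length is (m + f) * lambda / 2, where f is the fringe fraction
    measured in the interferogram and the integer order m is chosen so the
    result lies closest to the nominal length.
    """
    half_wl = wavelength_m / 2.0
    m = round(nominal_m / half_wl - measured_fraction)
    return (m + measured_fraction) * half_wl

# 25 mm gauge block measured with a 632.8 nm He-Ne laser (illustrative values)
wl = 632.8e-9
refined = length_from_fraction(0.025, wl, 0.30)
print(refined)  # within half a wavelength of 25 mm
```

In practice several wavelengths (or, as here, a low-coherence channel) are combined to resolve the integer order unambiguously rather than trusting the nominal length alone.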
2001-03-19
STS102-E-5315 (18 March 2001) --- The International Space Station (ISS) backdropped against a mass of clouds over Earth was photographed with a digital still camera from the Space Shuttle Discovery on March 18, 2001. It is a standard operation for the shuttle to make a final fly-around of the outpost following undocking from it. A new crew comprised of cosmonaut Yury V. Usachev and astronauts James S. Voss and Susan J. Helms will spend several months aboard the station.
Helms in FGB/Zarya with cameras
2001-06-08
ISS002-E-6526 (8 June 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Zarya or Functional Cargo Block (FGB) of the International Space Station (ISS). The image was recorded with a digital still camera.
Pirie, Chris G; Pizzirani, Stefano
2011-12-01
To describe a digital single lens reflex (dSLR) camera adaptor for posterior segment photography. A total of 30 normal canine and feline animals were imaged using a dSLR adaptor which mounts between a dSLR camera body and lens. Posterior segment viewing and imaging was performed with the aid of an indirect lens ranging from 28-90D. Coaxial illumination for viewing was provided by a single white light emitting diode (LED) within the adaptor, while illumination during exposure was provided by the pop-up flash or an accessory flash. Corneal and/or lens reflections were reduced using a pair of linear polarizers, having their azimuths perpendicular to one another. Quality high-resolution, reflection-free, digital images of the retina were obtained. Subjective image evaluation demonstrated the same amount of detail, as compared to a conventional fundus camera. A wide range of magnification(s) [1.2-4X] and/or field(s) of view [31-95 degrees, horizontal] were obtained by altering the indirect lens utilized. The described adaptor may provide an alternative to existing fundus camera systems. Quality images were obtained and the adapter proved to be versatile, portable and of low cost.
NASA Astrophysics Data System (ADS)
Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang
The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and an integrated method for calibrating the boresight angles of a digital camera or laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, based on piling up three consecutive stereo images and on an OTF calibration method using ground control points. For the laser scanner, boresight angle calibration is performed with both manual and automatic methods using ground control points. Integrated calibration between the digital camera and laser scanner is introduced to improve the systemic precision of the two sensors. Analysis of the measurements between ground control points and their corresponding image points in sequence images shows that object positions derived from the images agree to within about 15 cm in relative error and 20 cm in absolute error. Comparison of ground control points with their corresponding laser point clouds shows errors of less than 20 cm. From the results of these experiments, the mobile mapping system is an efficient and reliable system for generating high-accuracy and high-density road spatial data rapidly.
NASA Astrophysics Data System (ADS)
Chi, Yuxi; Yu, Liping; Pan, Bing
2018-05-01
A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
NASA Astrophysics Data System (ADS)
Russell, E.; Chi, J.; Waldo, S.; Pressley, S. N.; Lamb, B. K.; Pan, W.
2017-12-01
Diurnal and seasonal gas fluxes vary by crop growth stage. Digital cameras are increasingly being used to monitor inter-annual changes in vegetation phenology in a variety of ecosystems. These cameras are not designed as scientific instruments but the information they gather can add value to established measurement techniques (i.e. eddy covariance). This work combined deconstructed digital images with eddy covariance data from five agricultural sites (1 fallow, 4 cropped) in the inland Pacific Northwest, USA. The data were broken down with respect to crop stage and management activities. The fallow field highlighted the camera response to changing net radiation, illumination, and rainfall. At the cropped sites, the net ecosystem exchange, gross primary production, and evapotranspiration were correlated with the greenness and redness values derived from the images over the growing season. However, the color values do not change quickly enough to respond to day-to-day variability in the flux exchange as the two measurement types are based on different processes. The management practices and changes in phenology through the growing season were not visible within the camera data though the camera did capture the general evolution of the ecosystem fluxes.
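Greenness and redness values of the kind derived from such phenocam images are commonly computed as chromatic coordinates, i.e. each channel's share of the total digital number, which suppresses overall brightness changes. A minimal sketch with illustrative pixel values; `green_chromatic_coordinate` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Greenness index G/(R+G+B) per pixel, typically averaged over a region of interest."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return g / (r + g + b + 1e-9)

# Illustrative values: a green canopy pixel vs a senesced (reddish) pixel
pixels = np.array([[[60, 120, 40]],
                   [[120, 100, 60]]])
gcc = green_chromatic_coordinate(pixels)
print(gcc.round(2))  # [[0.55] [0.36]]
```

A redness coordinate R/(R+G+B) is defined analogously; tracking the daily means of both through a season yields the phenology signal that the abstract correlates with the eddy covariance fluxes.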
Low-complexity camera digital signal imaging for video document projection system
NASA Astrophysics Data System (ADS)
Hsia, Shih-Chang; Tsai, Po-Shien
2011-04-01
We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements
ERIC Educational Resources Information Center
Bangou, Francis
2014-01-01
The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…
Measuring the Orbital Period of the Moon Using a Digital Camera
ERIC Educational Resources Information Center
Hughes, Stephen W.
2006-01-01
A method of measuring the orbital velocity of the Moon around the Earth using a digital camera is described. Separate images of the Moon and stars taken 24 hours apart were loaded into Microsoft PowerPoint and the centre of the Moon marked on each image. Four stars common to both images were connected together to form a "home-made" constellation.…
NASA Astrophysics Data System (ADS)
Cortes-Medellin, German; Parshley, Stephen; Campbell, Donald B.; Warnick, Karl F.; Jeffs, Brian D.; Ganesh, Rajagopalan
2016-08-01
This paper presents the current concept design for ALPACA (Advanced L-Band Phased Array Camera for Arecibo), an L-band cryo-phased-array instrument proposed for the 305 m radio telescope at Arecibo. It includes a cryogenically cooled front end with 160 low-noise amplifiers, RF-over-fiber signal transport, and a digital beamformer with an instantaneous bandwidth of 312.5 MHz per channel. The camera will digitally form 40 simultaneous beams inside the available field of view of the Arecibo telescope optics, with an expected system temperature goal of 30 K.
[Medical and dental digital photography. Choosing a cheap and user-friendly camera].
Chossegros, C; Guyot, L; Mantout, B; Cheynet, F; Olivi, P; Blanc, J-L
2010-04-01
Digital photography is more and more important in our everyday medical practice. Patient data, medico-legal proof, remote diagnosis, forums, and medical publications are some of the applications of digital photography in the medical and dental fields. Many small, light, and cheap cameras are on the market. The main issue is to obtain good, reproducible, cheap, and easy-to-shoot pictures. Every medical situation, for example portraits in esthetic surgery, skin photography in dermatology, X-ray pictures, or intra-oral pictures, has its own requirements. For these reasons, we have tried to find an "ideal" compact digital camera. The Sony DSC-T90 (and its T900 counterpart with a wider screen) seems a good choice. Its small size makes it usable in every situation and its price is low. An external light source and free photo software (XnView®) can be useful complementary tools. The main adjustments and expected results are discussed.
NASA Astrophysics Data System (ADS)
Goiffon, Vincent; Rolando, Sébastien; Corbière, Franck; Rizzolo, Serena; Chabane, Aziouz; Girard, Sylvain; Baer, Jérémy; Estribeau, Magali; Magnan, Pierre; Paillet, Philippe; Van Uffelen, Marco; Mont Casellas, Laura; Scott, Robin; Gaillardin, Marc; Marcandella, Claude; Marcelot, Olivier; Allanche, Timothé
2017-01-01
The Total Ionizing Dose (TID) hardness of digital color Camera-on-a-Chip (CoC) building blocks is explored in the Multi-MGy range using 60Co gamma-ray irradiations. The performances of the following CoC subcomponents are studied: radiation hardened (RH) pixel and photodiode designs, RH readout chain, Color Filter Arrays (CFA) and column RH Analog-to-Digital Converters (ADC). Several radiation hardness improvements are reported (on the readout chain and on dark current). CFAs and ADCs degradations appear to be very weak at the maximum TID of 6 MGy(SiO2), 600 Mrad. In the end, this study demonstrates the feasibility of a MGy rad-hard CMOS color digital camera-on-a-chip, illustrated by a color image captured after 6 MGy(SiO2) with no obvious degradation. An original dark current reduction mechanism in irradiated CMOS Image Sensors is also reported and discussed.
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding have come into wide use in image-taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease of obtaining the advantages of aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there be no image-point variation with temperature change despite employing several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, there is now a requirement to assemble lenses with high accuracy. In order to satisfy these requirements, we have developed a 16x compact zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially. Therefore, for mobile phone cameras, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with a macro function, utilizing the advantage of a plastic lens: a mechanically functional shape can be given to its outer flange. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by highly precise optical elements. Therefore, this camera module is manufactured without optical adjustment on an automatic assembly line, and achieves both high productivity and high performance. Reported here are the constructions and the technical topics of the image-taking objectives described above.
The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results
NASA Astrophysics Data System (ADS)
Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.
2017-11-01
At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained from three image sources: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II), and a medium-format camera (Hasselblad-Hd4). In order to check how the results obtained from these sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the distortion estimated in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are an outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology, and the National Museum in Warsaw.
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-01-01
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that (1) the use of unprocessed image data did not improve the results of the image analyses; (2) vignetting had a significant effect, especially for the modified camera; and (3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
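The two corrections described above, flat-field division to remove vignetting and ratio-based vegetation indices to normalize scene illumination, can be sketched as follows. This is an illustrative outline with synthetic numbers, not the study's actual processing chain:

```python
import numpy as np

def correct_vignetting(image, flat_field):
    """Divide out a flat-field frame (e.g., an image of a uniformly lit
    white target) to remove radial lens falloff (vignetting)."""
    gain = flat_field.mean() / flat_field
    return image * gain

def ndvi(nir, red):
    """Normalized Difference Vegetation Index; the ratio form cancels
    multiplicative changes in scene illumination."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

# Synthetic example: vegetation is much brighter in NIR than in red,
# so NDVI comes out strongly positive
nir_band = np.array([[200.0, 180.0], [190.0, 210.0]])
red_band = np.array([[50.0, 60.0], [55.0, 40.0]])
print(ndvi(nir_band, red_band))  # values around 0.5-0.7, typical of dense canopy
```

Because NDVI is a ratio, a uniform gain change in illumination cancels out, which is why the per-pixel vignetting correction matters more here than absolute radiometric calibration.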
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-01-01
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds of, times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023
Fogazzi, G B; Garigali, G
2017-03-01
We describe three ways to take digital images of urine sediment findings. Way 1 uses a digital camera permanently mounted on the microscope and connected to a computer equipped with proprietary software to acquire, process, and store the images. Way 2 is based on the use of inexpensive compact digital cameras, held by hand or mounted on a tripod, close to one eyepiece of the microscope. Way 3 is based on the use of smartphones, held by hand close to one eyepiece of the microscope or connected to the microscope by an adapter. The procedures, advantages, and limitations of each way are reported. Copyright © 2017. Published by Elsevier B.V.
Digital video system for on-line portal verification
NASA Astrophysics Data System (ADS)
Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott
1990-07-01
A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate the illumination chromaticity correctly will result in an unwanted overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of the camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used in order to demonstrate the good performance of the algorithm in comparison to state-of-the-art color constancy algorithms.
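As an illustration of the general idea, not of the paper's characterization-based algorithm itself, the following sketch estimates an illuminant chromaticity with a simple gray-world average, clamps it to a plausible characterized range (the ranges below are made-up placeholders, not measured sensor data), and applies diagonal-model (von Kries) gains:

```python
import numpy as np

def estimate_illumination_chromaticity(image, r_range=(0.2, 0.5), g_range=(0.25, 0.45)):
    """Gray-world estimate of the illuminant's (r, g) chromaticity,
    clamped to a characterized range of plausible illuminants.
    The ranges are illustrative placeholders."""
    mean_rgb = image.reshape(-1, 3).mean(axis=0)
    r = mean_rgb[0] / mean_rgb.sum()
    g = mean_rgb[1] / mean_rgb.sum()
    r = min(max(r, r_range[0]), r_range[1])
    g = min(max(g, g_range[0]), g_range[1])
    return r, g

def white_balance(image, r, g):
    """Diagonal-model correction: per-channel gains that map the
    estimated illuminant chromaticity to neutral (1/3, 1/3, 1/3)."""
    b = 1.0 - r - g
    gains = (1 / 3) / np.array([r, g, b])
    return image * gains

# A reddish cast: the red channel dominates the scene average
img = np.full((4, 4, 3), [120.0, 90.0, 60.0])
r, g = estimate_illumination_chromaticity(img)
balanced = white_balance(img, r, g)
print(balanced[0, 0])  # every pixel becomes neutral: [90. 90. 90.]
```

The clamping step stands in for the paper's key point: constraining the estimate to the range of physically plausible illuminants prevents gross failures when the scene content itself is strongly colored.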
Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Nepomuk Otte, Adam
2009-05-01
The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very-high-energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost effective. We are investigating several directions that include innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched-capacitor arrays for the digitization.
Frequently Asked Questions about Digital Mammography
... in digital cameras, which convert x-rays into electrical signals. The electrical signals are used to produce images of the ... DBT? Digital breast tomosynthesis is a relatively new technology. In DBT, the X-ray tube moves in ...
Precise color images a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have been improved to high performance, most of their applications only obtain high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasma, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256 (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between the incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
Demosaicing images from colour cameras for digital image correlation
NASA Astrophysics Data System (ADS)
Forsey, A.; Gungor, S.
2016-11-01
Digital image correlation is not the intended use for consumer colour cameras, but with care they can be successfully employed in such a role. The main obstacle is the sparsely sampled colour data caused by the use of a colour filter array (CFA) to separate the colour channels. It is shown that the method used to convert consumer camera raw files into a monochrome image suitable for digital image correlation (DIC) can have a significant effect on the DIC output. A number of widely available software packages and two in-house methods are evaluated in terms of their performance when used with DIC. Using an in-plane rotating disc to produce a highly constrained displacement field, it was found that the bicubic spline based in-house demosaicing method outperformed the other methods in terms of accuracy and aliasing suppression.
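To make the demosaicing step concrete, here is a minimal bilinear baseline for an RGGB Bayer mosaic, followed by the monochrome reduction DIC needs. The in-house method evaluated in the paper uses bicubic splines; this simpler variant only illustrates the CFA interpolation problem:

```python
import numpy as np

def _conv3x3_same(a, k):
    """3x3 'same' convolution with zero padding (kernel is symmetric)."""
    pad = np.pad(a, 1)
    out = np.zeros_like(a)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * pad[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic: known samples are
    kept, missing ones become averages of the nearest same-color
    neighbors (normalized so borders are handled correctly)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                          # red sites
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # green sites
    masks[1::2, 1::2, 2] = True                          # blue sites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    for c in range(3):
        num = _conv3x3_same(np.where(masks[:, :, c], raw, 0.0), kernel)
        den = _conv3x3_same(masks[:, :, c].astype(float), kernel)
        rgb[:, :, c] = np.where(masks[:, :, c], raw, num / np.maximum(den, 1e-9))
    return rgb

# A flat gray mosaic should demosaic to a flat gray image, which is then
# reduced to a luminance channel for DIC
raw = np.full((6, 8), 7.0)
rgb = demosaic_bilinear(raw)
gray = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
```

The paper's central observation maps directly onto the interpolation choice here: how the missing two-thirds of each color plane is filled in determines the aliasing and bias that the DIC correlation later sees.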
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a Single Lens Reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments, we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
Observation sequences and onboard data processing of Planet-C
NASA Astrophysics Data System (ADS)
Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.
Planet-C, or VCO (Venus Climate Orbiter), will carry 5 cameras: IR1 (IR 1-micrometer camera), IR2 (IR 2-micrometer camera), UVI (UV Imager), LIR (Long-IR camera), and LAC (Lightning and Airglow Camera), covering the UV-IR region to investigate the atmospheric dynamics of Venus. During the 30-hr orbit, designed to quasi-synchronize with the super rotation of the Venus atmosphere, 3 groups of scientific observations will be carried out: (i) image acquisition by 4 cameras (IR1, IR2, UVI, LIR; 20 min in 2 hrs), (ii) LAC operation only when VCO is within the Venus shadow, and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but they must be reconciled with command and telemetry downlink constraints and with thermal and power conditions. To maximize the science data downlink, the data must be well compressed, and the compression efficiency and image quality are of significant scientific importance in the VCO program. Images of the 4 cameras (IR1, IR2, and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K was selected because it is (a) free of block noise, (b) efficient, (c) both reversible and irreversible, (d) patent-royalty free, and (e) already implemented as academic and commercial software, ICs, and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 ~ 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the 4 cameras and handles onboard data processing and compression, is at the concept design stage. It is concluded that the J2K data compression logic circuits using space
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by this CMOS sensor are used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode using Walsh-design CAOS pixel codes of up to 4096 bits in length, with a maximum 10 kHz code bit rate giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm per side. The CDMA mode of the CAOS smart camera is suited for applications where robust high-dynamic-range (DR) imaging is needed for un-attenuated, un-spoiled, bright-light, spectrally diverse targets.
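The correlation decoding that underlies the CDMA mode can be illustrated with Walsh codes built from a Sylvester-type Hadamard construction. This is a toy 8-bit example, far shorter than the 4096-bit codes in the demonstration, and not the paper's actual processing chain:

```python
import numpy as np

def walsh_codes(n_bits):
    """Sylvester construction of a Hadamard matrix; its rows form
    mutually orthogonal +/-1 Walsh codes of length n_bits (a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n_bits:
        H = np.block([[H, H], [H, -H]])
    return H

# Orthogonality is what lets a single point detector separate
# simultaneously modulated pixels by time-domain correlation:
H = walsh_codes(8)
signal = 3.0 * H[1] + 5.0 * H[4]   # two "pixel" irradiances summed on one detector
recovered = signal @ H.T / 8       # correlate against each code
print(recovered)                   # recovers 3.0 at index 1 and 5.0 at index 4
```

Each DMD pixel time-modulates its light with one code; correlating the summed detector signal against each code then reads out that pixel's irradiance independently of the others.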
NASA Astrophysics Data System (ADS)
Pospisil, J.; Jakubik, P.; Machala, L.
2005-11-01
This article reports the suggestion, realization, and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, especially of its combined imaging, detection, sampling, and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing, and quantization noises. The method applies to the camera's automatic still-capture regime and a static, two-dimensional, spatially continuous light-reflection random target with white-noise properties. The theoretical rationale for this random-target method is also presented, exploiting the proposed simulation model of the linear optical intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed by means of a PC with computation programs developed on the basis of MATLAB 6.5. The presented examples and other results of the performed measurements demonstrate the sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
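The abstract's central relation, the MTF obtained as a normalized ratio of output to input power spectral densities, can be sketched in one dimension as follows, with a synthetic white-noise target and a boxcar blur standing in for the camera (the paper's 2-D, smoothed, noise-compensated estimator is more elaborate):

```python
import numpy as np

def mtf_from_random_target(input_line, output_line):
    """Estimate a 1-D MTF as the square root of the ratio of output to
    input power spectral densities of a white-noise target, normalized
    to unity at zero frequency (a sketch of the random-target method)."""
    psd_in = np.abs(np.fft.rfft(input_line)) ** 2
    psd_out = np.abs(np.fft.rfft(output_line)) ** 2
    mtf = np.sqrt(psd_out / np.maximum(psd_in, 1e-12))
    return mtf / mtf[0]

rng = np.random.default_rng(0)
target = rng.standard_normal(4096)                          # white-noise target
blurred = np.convolve(target, np.ones(5) / 5, mode="same")  # stand-in camera blur
mtf = mtf_from_random_target(target, blurred)
# mtf[0] is 1 by construction and falls off toward higher spatial frequencies
```

Because the target's power spectrum is flat, any frequency-dependent attenuation in the output spectrum is attributable to the imaging chain, which is the premise of the random-target method.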
Improved Feature Matching for Mobile Devices with IMU.
Masiero, Andrea; Vettore, Antonio
2016-08-05
Thanks to the recent diffusion of low-cost high-resolution digital cameras and the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step in successfully completing the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT while still ensuring an increase in correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing
NASA Astrophysics Data System (ADS)
McCaffrey, Nathaniel J.; Pantuso, Francis P.
1998-03-01
A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast-enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to qualify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking, so no frames are lost. The enhancer measures 13 cm × 6.4 cm × 3.2 cm, operates off 9 VAC, and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics, and real-time medical imaging.
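A histogram-based LUT enhancer of the kind described can be sketched as follows. This is a floating-point software illustration of the idea only; the subsampling factor and frame size are arbitrary, and the real system builds the LUT in fixed point during blanking:

```python
import numpy as np

def build_contrast_lut(frame, subsample=4):
    """Build a 256-entry look-up table from the cumulative histogram of a
    spatially subsampled frame. Subsampling mirrors the paper's trick of
    trading a little precision for far fewer per-frame calculations."""
    samples = frame[::subsample, ::subsample]
    hist = np.bincount(samples.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    return np.round(255 * cdf).astype(np.uint8)

# Enhance a low-contrast frame: gray levels confined to [100, 130] get
# stretched across the full 8-bit range by the LUT
rng = np.random.default_rng(1)
frame = rng.integers(100, 131, size=(480, 640), dtype=np.uint8)
lut = build_contrast_lut(frame)
enhanced = lut[frame]   # per-pixel LUT pass, as in the streaming hardware
```

The per-pixel work reduces to one table look-up, which is why the hardware can stream video through the LUT while the next table is computed from the subsampled histogram.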
Development and calibration of a new gamma camera detector using large square Photomultiplier Tubes
NASA Astrophysics Data System (ADS)
Zeraatkar, N.; Sajedi, S.; Teimourian Fard, B.; Kaviani, S.; Akbarzadeh, A.; Farahani, M. H.; Sarkar, S.; Ay, M. R.
2017-09-01
Large-area scintillation detectors applied in gamma cameras, as well as Single Photon Emission Computed Tomography (SPECT) systems, have a major role in in-vivo functional imaging. Most gamma detectors utilize a hexagonal arrangement of Photomultiplier Tubes (PMTs). In this work we applied large square-shaped PMTs with a row/column arrangement and positioning. The use of large square PMTs reduces dead zones on the detector surface. However, the conventional center-of-gravity method for positioning may not yield acceptable results. Hence, the digital correlated signal enhancement (CSE) algorithm was optimized to obtain better linearity and spatial resolution in the developed detector. The performance of the developed detector was evaluated based on the NEMA NU 1-2007 standard. The images acquired using this method showed acceptable uniformity and linearity compared to three commercial gamma cameras. The intrinsic and extrinsic spatial resolutions with a low-energy high-resolution (LEHR) collimator at 10 cm from the surface of the detector were 3.7 mm and 7.5 mm, respectively. The energy resolution of the camera was measured to be 9.5%. The performance evaluation demonstrated that the developed detector maintains image quality with a reduced number of PMTs relative to the detection area.
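For reference, the conventional center-of-gravity (Anger) positioning that the CSE algorithm improves upon can be sketched as follows; the PMT pitch and signal values are illustrative, not taken from the developed detector:

```python
import numpy as np

def center_of_gravity(pmt_signals, pmt_x, pmt_y):
    """Classic Anger (center-of-gravity) position estimate: the event
    position is the signal-weighted mean of the PMT coordinates. The
    CSE algorithm in the paper conditions the signals before weighting
    to improve linearity near the tube edges."""
    s = np.asarray(pmt_signals, float)
    x = float(np.dot(s, pmt_x) / s.sum())
    y = float(np.dot(s, pmt_y) / s.sum())
    return x, y

# A 2 x 2 patch of square PMTs on a 76 mm pitch (illustrative numbers);
# a scintillation event deposits more light in the right-hand tubes
px = np.array([0.0, 76.0, 0.0, 76.0])
py = np.array([0.0, 0.0, 76.0, 76.0])
signals = [10.0, 30.0, 10.0, 30.0]
print(center_of_gravity(signals, px, py))  # → (57.0, 38.0)
```

With large square tubes the light-to-signal response is strongly nonlinear across each tube face, which is exactly where this simple weighted mean breaks down and signal conditioning becomes necessary.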
[Results of testing of MINISKAN mobile gamma-ray camera and specific features of its design].
Utkin, V M; Kumakhov, M A; Blinov, N N; Korsunskiĭ, V N; Fomin, D K; Kolesnikova, N V; Tultaev, A V; Nazarov, A A; Tararukhina, O B
2007-01-01
The main results of the engineering, biomedical, and clinical testing of the MINISKAN mobile gamma-ray camera are presented. Specific features of the camera hardware and software, as well as the main technical specifications, are described. The gamma-ray camera implements a new technology based on reconstructive tomography, aperture encoding, and digital processing of signals.
The art and science of photography in hand surgery.
Wang, Keming; Kowalski, Evan J; Chung, Kevin C
2014-03-01
High-quality medical photography plays an important role in teaching and demonstrating the functional capacity of the hands as well as in medicolegal documentation. Obtaining standardized, high-quality photographs is now an essential component of many surgery practices. The importance of standardized photography in facial and cosmetic surgery has been well documented in previous studies, but no studies have thoroughly addressed the details of photography for hand surgery. In this paper, we provide a set of guidelines and basic camera concepts for different scenarios to help hand surgeons obtain appropriate and informative high-quality photographs. A camera used for medical photography should have a large sensor and an optical zoom lens with a focal length ranging anywhere from 14 to 75 mm. In a clinic or office setting, we recommend 6 standardized views of the hand and 4 views of the wrist; additional views should be taken for tendon ruptures, nerve injuries, or other deformities of the hand. For intraoperative pictures, the camera operator should understand the procedure and pertinent anatomy in order to properly obtain high-quality photographs. When digital radiographs are not available and radiographic film must be photographed, it is recommended to reduce the exposure and change the color mode to black and white to obtain the best possible pictures. The goal of medical photography is to present the subject in an accurate and precise fashion. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Yunpeng; EI-Madany, Tarek; Filippa, Gianluca; Carrara, Arnaud; Cremonese, Edoardo; Galvagno, Marta; Hammer, Tiana; Pérez-Priego, Oscar; Reichstein, Markus; Martín Isabel, Pilar; González Cascón, Rosario; Migliavacca, Mirco
2017-04-01
Tree-Grass ecosystems are globally widespread (16-35% of the land surface). However, their phenology (especially in water-limited areas) has not yet been well characterized and modeled. Commercial digital cameras provide continuous and relatively extensive phenology data, offering a good opportunity to monitor phenology and to develop robust methods for extracting important phenological events (phenophases). Here we aimed to assess the usability of digital repeat photography for three Tree-Grass Mediterranean ecosystems over two growing seasons (Majadas del Tietar, Spain): to extract critical phenophases for grass and evergreen broadleaved trees (autumn regreening of grass, i.e. start of the growing season; resprouting of tree leaves; senescence of grass, i.e. end of the growing season), to assess their uncertainty, and to correlate them with physiological phenology (i.e., the phenology of ecosystem-scale fluxes such as Gross Primary Productivity, GPP). We extracted green chromatic coordinates (GCC) and a camera-based normalized difference vegetation index (Camera-NDVI) from an infrared-enabled digital camera using the "Phenopix" R package. We then developed a novel method to retrieve important phenophases from GCC and Camera-NDVI for various regions of interest (ROIs) in the imagery (tree areas, grass, and both, i.e. ecosystem), as well as from GPP derived from the Eddy Covariance tower at the same experimental site. The results show that, at the ecosystem level, phenophases derived from GCC and Camera-NDVI are strongly correlated (R2 = 0.979). Remarkably, at the end of the growing season, phenophases derived from GCC were systematically advanced (ca. 8 days) relative to those from Camera-NDVI.
By using the Soil Canopy Observation Photochemistry and Energy (SCOPE) radiative transfer model, we demonstrated that this offset is related to the different sensitivities of GCC and NDVI to the fraction of green versus dry grass in the canopy, resulting in systematically higher NDVI during the dry-down of the canopy. Phenophases derived from GCC and Camera-NDVI are correlated with phenophases extracted from GPP across sites and years (R2 = 0.966 and 0.976, respectively). For the start of the growing season the coefficient of determination was higher (R2 = 0.89 and 0.98 for GCC vs GPP and Camera-NDVI vs GPP, respectively) than for the end of the growing season (R2 = 0.75 and 0.70, for GCC and Camera-NDVI, respectively). The statistics obtained using phenophases derived from the grass ROI and the ecosystem ROI are similar. In contrast, GCC and Camera-NDVI derived from the tree ROI are relatively constant and unrelated to the seasonality of GPP. However, the tree GCC shows a characteristic peak synchronous with leaf flushing in spring, as assessed with regular chlorophyll content measurements and automatic dendrometers. In conclusion, we first developed a method to derive phenological events of Tree-Grass ecosystems using digital repeat photography; second, we demonstrated that the phenology of GPP is strongly dominated by the phenology of the grassland layer; third, we discussed the uncertainty in GCC and Camera-NDVI during senescence; and finally, we demonstrated the capability of GCC to track crucial phenological events in evergreen broadleaved forest. Our findings confirm that digital repeat photography is a valuable data source for characterizing phenology in Mediterranean Tree-Grass ecosystems.
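The two camera indices named above have standard definitions; as an illustrative sketch (not the Phenopix implementation itself), they can be computed from ROI-averaged digital numbers like this:

```python
def gcc(r, g, b):
    """Green chromatic coordinate: the green fraction of total brightness."""
    return g / (r + g + b)

def camera_ndvi(nir, red):
    """Camera NDVI analogue from an infrared-enabled camera's NIR and red
    digital numbers (band extraction details are camera-specific)."""
    return (nir - red) / (nir + red)
```

Both indices are dimensionless ratios, which is what makes them robust to overall illumination changes across a season of imagery.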
Softcopy quality ruler method: implementation and validation
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying
2009-01-01
A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30-inch Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 at an increment of one just noticeable difference (JND), by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side by side, with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology, and Aptina Imaging, in which all three companies set up a similar viewing lab to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.
Image quality assessment for selfies with and without super resolution
NASA Astrophysics Data System (ADS)
Kubota, Aya; Gohshi, Seiichi
2018-04-01
With the advent of cellphone cameras, particularly on smartphones, many people now take photos of themselves, alone or with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the camera on the back of the smartphone, referred to as the "out-camera," and the one on the front, called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras; however, the full image quality cannot be realized because smartphone cameras often have low-performance lenses. Super resolution (SR) is a recent technological advancement for increasing image resolution. We developed a new SR technology that can be processed on smartphones, and smartphones incorporating it are already on the market. However, the effective use of this new SR technology has not yet been verified. Comparing image quality with and without SR on the smartphone display is necessary to confirm the usefulness of the technology. Both objective and subjective assessment methods are required to quantitatively measure image quality. Typical objective metrics, such as Peak Signal to Noise Ratio (PSNR), are known to correlate poorly with how we perceive image and video quality. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment is usually costly because of personnel expenses for observers, the results are highly reproducible when conducted under proper conditions with statistical analysis. In this study, the subjective assessment results for selfie images are reported.
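The abstract contrasts subjective assessment with objective metrics such as PSNR. For reference, PSNR itself is straightforward to compute; this is a minimal sketch for 8-bit pixel sequences (the function name and interface are illustrative):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists.
    Higher is better; identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)
```

The metric's weakness, as the abstract notes, is that it weights all pixel errors equally, whereas human observers are far more sensitive to some artifacts than others.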
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
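The correction described above (subtract the dark image, normalize by the base correction image, rescale to a reference digital level) can be sketched as follows; the function name and the choice of the image mean as the default reference level follow the abstract's description, not the authors' actual code:

```python
import numpy as np

def flat_field_correct(raw, dark, base, reference=None):
    """Linear spatial nonuniformity correction: subtract the dark image,
    divide by the normalized per-pixel response of the base (flat-field)
    image, and rescale to a reference digital level."""
    signal = raw.astype(float) - dark
    gain = base.astype(float) - dark
    gain = gain / gain.mean()          # normalized per-pixel response
    if reference is None:
        reference = signal.mean()      # mean digital level as reference
    return signal / gain * (reference / signal.mean())
```

Under this model, a uniform radiance field imaged through a nonuniform sensor maps back to a spatially flat response, which is exactly the property the paper uses to evaluate correction quality.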
Using a digital video camera to examine coupled oscillations
NASA Astrophysics Data System (ADS)
Greczylo, T.; Debowska, E.
2002-07-01
In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.
Quantifying biodiversity using digital cameras and automated image analysis.
NASA Astrophysics Data System (ADS)
Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.
2009-04-01
Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process.
Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step to a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.
Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S
2007-05-01
To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, three-dimensional (3D) Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institutional-based prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with the planned commercial system. With continued development of this technology, the three-dimensional system may become a novel viewing system for ophthalmic surgery, with improved ergonomics relative to traditional microscopic viewing.
A digital underwater video camera system for aquatic research in regulated rivers
Martin, Benjamin M.; Irwin, Elise R.
2010-01-01
We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.
Target-Tracking Camera for a Metrology System
NASA Technical Reports Server (NTRS)
Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David
2009-01-01
An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
McCall, Brian; Olsen, Randall J; Nelles, Nicole J; Williams, Dawn L; Jackson, Kevin; Richards-Kortum, Rebecca; Graviss, Edward A; Tkaczyk, Tomasz S
2014-03-01
A prototype miniature objective that was designed for a point-of-care diagnostic array microscope for detection of Mycobacterium tuberculosis and previously fabricated and presented in a proof of concept is evaluated for its effectiveness in detecting acid-fast bacteria. To evaluate the ability of the microscope to resolve submicron features and details in the image of acid-fast microorganisms stained with a fluorescent dye, and to evaluate the accuracy of clinical diagnoses made with digital images acquired with the objective. The lens prescription data for the microscope design are presented. A test platform is built by combining parts of a standard microscope, a prototype objective, and a digital single-lens reflex camera. Counts of acid-fast bacteria made with the prototype objective are compared to counts obtained with a standard microscope over matched fields of view. Two sets of 20 smears, positive and negative, are diagnosed by 2 pathologists as sputum smear positive or sputum smear negative, using both a standard clinical microscope and the prototype objective under evaluation. The results are compared to a reference diagnosis of the same sample. More bacteria are counted in matched fields of view in digital images taken with the prototype objective than with the standard clinical microscope. All diagnostic results are found to be highly concordant. An array microscope built with this miniature lens design will be able to detect M tuberculosis with high sensitivity and specificity.
An Inexpensive Digital Infrared Camera
ERIC Educational Resources Information Center
Mills, Allan
2012-01-01
Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)
NASA Astrophysics Data System (ADS)
Zhao, Ziyue; Gan, Xiaochuan; Zou, Zhi; Ma, Liqun
2018-01-01
The dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains, yet to date no digital measurement system has addressed this problem. This paper develops an optoelectronic measurement system using monocular digital cameras, and presents research on the measurement theory, visual target design, calibration algorithm design, software implementation, and so on. The system consists of several CMOS digital cameras, several luminous measurement targets, a scale bar, data processing software, and a terminal computer. It offers advantages such as a large measurement scale, a high degree of automation, strong anti-interference ability, noise rejection, and real-time measurement. In this paper, we resolve key technical issues such as the transfer, storage, and processing of the multiple cameras' high-resolution digital images. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error is within 0.12 mm over the whole workspace. The experiment verified the rationality of the system design and the correctness, precision, and effectiveness of the relevant methods.
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out on detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, no forensic method for detecting this type of manipulation has been reported. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel; as a result, images acquired from such cameras contain a trace of the CFA pattern. This pattern is composed of the basic red, green, and blue (RGB) colors, and it changes when color modification is applied to the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method was verified experimentally using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
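The paper's "intermediate value counting" method is not specified in the abstract; the following hypothetical sketch only illustrates the underlying idea such CFA forensics relies on: in a demosaiced image, interpolated CFA sites tend to equal the mean of their neighbors, so the checkerboard phase of the raw green samples can be estimated by counting such pixels.

```python
import numpy as np

def green_phase_score(green, phase):
    """Count interior pixels at the assumed interpolated (non-green) sites
    whose value matches the mean of their 4 neighbors -- the bilinear
    demosaicing fingerprint. `phase` (0 or 1) selects which checkerboard
    is assumed to hold the raw green samples."""
    h, w = green.shape
    score = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (x + y) % 2 != phase:  # assumed interpolated site
                neigh = (green[y - 1, x] + green[y + 1, x] +
                         green[y, x - 1] + green[y, x + 1]) / 4.0
                score += abs(green[y, x] - neigh) < 0.5
    return score

def estimate_green_phase(green):
    """Return the checkerboard phase (0 or 1) that best explains the image."""
    return max((0, 1), key=lambda p: green_phase_score(green, p))
```

A hue-shifting color modification disturbs which phase fits best, which is the kind of CFA-pattern change the paper's estimator measures.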
MS Kavandi with camera in Service Module
2001-07-16
STS104-E-5125 (16 July 2001) --- Astronaut Janet L. Kavandi, STS-104 mission specialist, uses a camera as she floats through the Zvezda service module aboard the International Space Station (ISS). The five STS-104 crew members were visiting the orbital outpost to perform various tasks. The image was recorded with a digital still camera.
Camera Calibration with Radial Variance Component Estimation
NASA Astrophysics Data System (ADS)
Mélykuti, B.; Kruck, E. J.
2014-11-01
Camera calibration plays an increasingly important role nowadays. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other lightweight flying platforms. In-flight calibration of those systems plays a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. Using statistical methods, we analyzed the accuracy of photo measurements as a function of the distance of points from the image center. This test provides a curve of measurement precision as a function of photo radius. A large number of camera types were tested with well-distributed point measurements in image space. The tests demonstrate a functional connection between accuracy and radial distance and yield a method for checking and enhancing the geometrical capability of the cameras with respect to these results.
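A precision-versus-radius curve of the kind described can be obtained by binning point-measurement residuals by radial distance from the image center; this is an illustrative reconstruction, not the authors' algorithm:

```python
import numpy as np

def radial_precision(xy, residuals, center, nbins=5):
    """Bin image-point residuals by distance from the principal point and
    return (bin mid-radius, RMS residual) lists -- a precision-vs-radius
    curve. `xy` is an (N, 2) array of image coordinates."""
    r = np.hypot(xy[:, 0] - center[0], xy[:, 1] - center[1])
    edges = np.linspace(0.0, r.max(), nbins + 1)
    mids, rms = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r <= hi)
        if mask.any():
            mids.append((lo + hi) / 2.0)
            rms.append(float(np.sqrt(np.mean(residuals[mask] ** 2))))
    return mids, rms
```

For a typical lens the curve rises toward the corners, which is exactly the radial dependence the variance component estimation exploits.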
Accuracy evaluation of optical distortion calibration by digital image correlation
NASA Astrophysics Data System (ADS)
Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan
2017-11-01
Due to its convenience of operation, the camera calibration algorithm based on a planar template is widely used in image measurement, computer vision, and other fields. Selecting a suitable distortion model remains an open problem, so there is an urgent need for experimental evaluation of camera distortion calibration accuracy. This paper presents an experimental method for evaluating camera distortion calibration accuracy that is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we calibrate the camera with a calibration board and use the resulting parameters to correct the calculation points of the images before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses under four commonly used distortion models.
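One of the "commonly used distortion models" evaluated in such work is the polynomial radial model; a minimal sketch (parameter names illustrative) that maps ideal to distorted image coordinates:

```python
def radial_distort(x, y, k1, k2, cx=0.0, cy=0.0):
    """Polynomial radial distortion about the principal point (cx, cy):
    x_d = cx + (x - cx) * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale
```

Because the correction grows with r^2, any error in the calibrated k1, k2 shows up most strongly far from the image center, which is what a DIC-measured displacement field can reveal.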
TOPDAQ Acquisition Utility Beta version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno, Mario; Barret, Keith
2010-01-07
This TOPDAQ Acquisition Utility uses 5 digital cameras mounted on a vertical pole, maintained in a vertical position using sensors and actuators, to take photographs of an RP-2 or RP-3 module, one camera for each row (4) and one in the center for driving, when the module is at 0 degrees, i.e., facing the eastern horizon. These photographs, and other data collected at the same time the pictures are taken, are analyzed by the TOPAAP Analysis Utility. The TOPCAT implemented by the TOPDAQ Acquisition Utility and TOPAAP Analysis Utility programs optimizes the alignment of each RP in a module on a parabolic trough solar collector array (SCA) to maximize the amount of solar energy intercepted by the solar receiver. The camera fixture and related hardware are mounted on a pickup truck and driven between rows of a parabolic trough solar power plant. An ultrasonic distance meter is used to maintain the correct distance between the cameras and the RP module. Along with the two leveling actuators, a third actuator maintains the proper relative vertical position between the cameras and the RP module. The TOPDAQ Acquisition Utility facilitates file management by keeping track of which RP module's data is being taken, and also controls the exposure level of each camera to maintain a high contrast ratio in the photographs even as the available daylight changes throughout the day. The TOPCAT hardware and software support the current industry-standard RP-2 and RP-3 module geometries.
Current status of Polish Fireball Network
NASA Astrophysics Data System (ADS)
Wiśniewski, M.; Żołądek, P.; Olech, A.; Tyminski, Z.; Maciejewski, M.; Fietkiewicz, K.; Rudawska, R.; Gozdalski, M.; Gawroński, M. P.; Suchodolski, T.; Myszkiewicz, M.; Stolarz, M.; Polakowski, K.
2017-09-01
The Polish Fireball Network (PFN) is a project to regularly monitor the sky over Poland in order to detect bright fireballs. In 2016 the PFN consisted of 36 continuously active stations with 57 sensitive analogue video cameras and 7 high-resolution digital cameras. In our observations we also use spectroscopic and radio techniques. A PyFN software package for trajectory and orbit determination was developed. The PFN project is an example of successful participation of amateur astronomers who can provide valuable scientific data. The network is coordinated by astronomers from the Copernicus Astronomical Centre in Warsaw, Poland. In 2011-2015 the PFN cameras recorded 214,936 meteor events. Using the PFN data and the UFOOrbit software, 34,609 trajectories and orbits were calculated. In the coming years we plan an intensive modernization of the PFN network, including the installation of dozens of new digital cameras.
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan
2018-03-01
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.
Image Sensors Enhance Camera Technologies
NASA Technical Reports Server (NTRS)
2010-01-01
In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.
An automated system for whole microscopic image acquisition and analysis.
Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús
2014-09-01
The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VM systems are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples, and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system was built with an additional monochrome digital camera alongside the default color camera, together with LED transmitted illumination (RGB). Monochrome cameras are the preferred acquisition method for fluorescence microscopy. The system is able to digitize correctly and compose large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast, and focus. The system has been tested on 150 tissue samples of brain autopsies, prostate biopsies, and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article focuses on the hardware set-up and the acquisition software, although results of the implemented image processing techniques, applied to the different tissue samples, are also presented. © 2014 Wiley Periodicals, Inc.
First light on a new fully digital camera based on SiPM for CTA SST-1M telescope
NASA Astrophysics Data System (ADS)
della Volpe, Domenico; Al Samarai, Imen; Alispach, Cyril; Bulik, Tomasz; Borkowski, Jerzy; Cadoux, Franck; Coco, Victor; Favre, Yannick; Grudzińska, Mira; Heller, Matthieu; Jamrozy, Marek; Kasperek, Jerzy; Lyard, Etienne; Mach, Emil; Mandat, Dusan; Michałowski, Jerzy; Moderski, Rafal; Montaruli, Teresa; Neronov, Andrii; Niemiec, Jacek; Njoh Ekoume, T. R. S.; Ostrowski, Michal; Paśko, Paweł; Pech, Miroslav; Rajda, Pawel; Rafalski, Jakub; Schovanek, Petr; Seweryn, Karol; Skowron, Krzysztof; Sliusar, Vitalii; Stawarz, Łukasz; Stodulska, Magdalena; Stodulski, Marek; Travnicek, Petr; Troyano Pujadas, Isaac; Walter, Roland; Zagdański, Adam; Zietara, Krzysztof
2017-08-01
The Cherenkov Telescope Array (CTA) will explore the Universe in the gamma-ray domain with unprecedented precision, covering an energy range from 50 GeV to more than 300 TeV. To cover such a broad range with a sensitivity ten times better than that of current instruments, different types of telescopes are needed: the Large Size Telescopes (LSTs), with a ~24 m diameter mirror; the Medium Size Telescopes (MSTs), with a ~12 m mirror; and the Small Size Telescopes (SSTs), with a ~4 m diameter mirror. The single-mirror small size telescope (SST-1M), one of the proposed solutions for the small-size telescopes of CTA, will be equipped with an innovative camera. The SST-1M has a Davies-Cotton optical design with a mirror dish of 4 m diameter and a focal ratio of 1.4, focusing the Cherenkov light produced in atmospheric showers onto a 90 cm wide hexagonal camera providing a FoV of 9 degrees. The camera is an innovative design based on silicon photomultipliers (SiPMs), adopting a fully digital trigger and readout architecture. The camera features 1296 custom-designed large-area hexagonal SiPMs coupled to hollow optical concentrators to achieve a pixel size of almost 2.4 cm. The SiPM is a custom design developed with Hamamatsu; with an active area of almost 1 cm2, it is one of the largest monolithic SiPMs in existence. The optical concentrators are also innovative: light funnels made of a polycarbonate substrate coated with a custom-designed UV-enhancing coating. The analog signals coming from the SiPMs are fed into the fully digital readout electronics, where the digital data are processed by high-speed FPGAs for both trigger and readout. The trigger logic, implemented in a Virtex 7 FPGA, uses the digital data to reach a trigger decision by matching data against predefined patterns. This approach is extremely flexible and allows improvements and continued evolution of the system.
The prototype camera is being tested in laboratory prior to its installation expected in fall 2017 on the telescope prototype in Krakow (Poland). In this contribution, we will describe the design of the camera and show the performance measured in laboratory.
First Results of Digital Topography Applied to Macromolecular Crystals
NASA Technical Reports Server (NTRS)
Lovelace, J.; Soares, A. S.; Bellamy, H.; Sweet, R. M.; Snell, E. H.; Borgstahl, G.
2004-01-01
An inexpensive digital CCD camera was used to record X-ray topographs directly from large imperfect crystals of cubic insulin. The topographs recorded were not as detailed as those that can be measured with film or emulsion plates but do show great promise. Six reflections were recorded using a set of finely spaced stills encompassing the rocking curve of each reflection. A complete topographic reflection profile could be digitally imaged in minutes. Interesting and complex internal structure was observed by this technique. The CCD chip used in the camera has anti-blooming circuitry and produced good-quality data even when pixels became overloaded.
OPALS: A COTS-based Tech Demo of Optical Communications
NASA Technical Reports Server (NTRS)
Oaida, Bogdan
2012-01-01
I. Objective: Deliver video from the ISS to an optical ground terminal via an optical communications link. a) JPL Phaeton/Early Career Hire (ECH) training project. b) Implemented as a Class-D payload. c) Downlink at approx. 30 Mb/s. II. Flight System: a) Optical Head: beacon acquisition camera, downlink transmitter, 2-axis gimbal. b) Sealed Container: laser, avionics, power distribution, digital I/O board. III. Implementation: a) Ground Station: Optical Communications Telescope Laboratory at Table Mountain Facility. b) Flight System mounted to an ISS FRAM as a standard interface, attached externally on the Express Logistics Carrier.
Digital Photography and Its Impact on Instruction.
ERIC Educational Resources Information Center
Lantz, Chris
Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…
Quantifying seasonal variation of leaf area index using near-infrared digital camera in a rice paddy
NASA Astrophysics Data System (ADS)
Hwang, Y.; Ryu, Y.; Kim, J.
2017-12-01
Digital cameras have been widely used to quantify leaf area index (LAI). Numerous simple and automatic methods have been proposed to improve digital-camera-based LAI estimates. However, most studies in rice paddies relied on arbitrary thresholds or complex radiative transfer models to make binary images. Moreover, only a few studies have reported continuous, automatic observation of LAI over the season in a rice paddy. The objective of this study is to quantify seasonal variations of LAI using raw near-infrared (NIR) images coupled with a histogram shape-based algorithm in a rice paddy. As vegetation strongly reflects NIR light, we installed a NIR digital camera 1.8 m above the ground surface and acquired unsaturated raw-format images at one-hour intervals at solar zenith angles between 15° and 80° over the entire growing season in 2016 (from May to September). We applied a sub-pixel classification combined with a light-scattering correction method. Finally, to confirm the accuracy of the quantified LAI, we also conducted direct (destructive sampling) and indirect (LAI-2200) manual observations of LAI once every ten days on average. Preliminary results show that the NIR-derived LAI agreed well with in-situ observations, but divergence tended to appear once the rice canopy was fully developed. The continuous monitoring of LAI in rice paddies will help to better understand carbon and water fluxes and to evaluate satellite-based LAI products.
Multi-sensor fusion over the World Trade Center disaster site
NASA Astrophysics Data System (ADS)
Rodarmel, Craig; Scott, Lawrence; Simerlink, Deborah A.; Walker, Jeffrey
2002-09-01
The immense size and scope of the rescue and clean-up of the World Trade Center site created a need for data that would provide a total overview of the disaster area. To fulfill this need, the New York State Office for Technology (NYSOFT) contracted with EarthData International to collect airborne remote sensing data over Ground Zero with an airborne light detection and ranging (LIDAR) sensor, a high-resolution digital camera, and a thermal camera. The LIDAR data provided a three-dimensional elevation model of the ground surface that was used for volumetric calculations and also in the orthorectification of the digital images. The digital camera provided high-resolution imagery over the site to aid the rescuers in placement of equipment and other assets. In addition, the digital imagery was used to georeference the thermal imagery and also provided the visual background for the thermal data. The thermal camera aided in the location and tracking of underground fires. The combination of data from these three sensors provided the emergency crews with a timely, accurate overview containing a wealth of information about the rapidly changing disaster site. Because of the dynamic nature of the site, the data was acquired on a daily basis, processed, and turned over to NYSOFT within twelve hours of the collection. During processing, the three datasets were combined and georeferenced to allow them to be inserted into the client's geographic information systems.
Mitigation of Atmospheric Effects on Imaging Systems
2004-03-31
focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera...sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, a pixel pitch of 38 µm, a focal length of 1.8 m, and a FOV of 5.4 x 5.4 mrad...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted
Perfect Lighting for Facial Photography in Aesthetic Surgery: Ring Light.
Dölen, Utku Can; Çınar, Selçuk
2016-04-01
Photography is indispensable for plastic surgery. On-camera flashes can result in bleached-out detail and colour. This is why most plastic surgery clinics prefer studio lighting similar to professional photographers'. In this article, we want to share a simple alternative to studio lighting that does not need extra space: the ring light. We took five different photographs of the same person with five different camera and lighting settings: smartphone and ring light; point-and-shoot camera and on-camera flash; point-and-shoot camera and studio lighting; digital single-lens reflex (DSLR) camera and studio lighting; and DSLR and ring light. Those photographs were then assessed objectively with an online survey of five questions answered by three distinct populations: plastic surgeons (n: 28), professional portrait photographers (n: 24) and patients (n: 22) who had undergone facial aesthetic procedures. Compared to the on-camera flash, studio lighting better showed the wrinkles of the subject. The ring light facilitated the perception of the wrinkles by providing homogeneous soft light in a circular shape rather than bursting flashes. The combination of a DSLR camera and ring light gave the oldest-looking subject according to 64% of responders. The DSLR camera and studio lighting demonstrated the youngest-looking subject according to 70% of responders. The majority of responders (78%) chose the combination of DSLR camera and ring light as exhibiting the wrinkles the most. We suggest using a ring light to obtain well-lit photographs without loss of detail, with any type of camera. However, smartphones must be avoided if standard pictures are desired.
Digital Elevation Model from Non-Metric Camera in Uas Compared with LIDAR Technology
NASA Astrophysics Data System (ADS)
Dayamit, O. M.; Pedro, M. F.; Ernesto, R. R.; Fernando, B. L.
2015-08-01
Digital Elevation Model (DEM) data, as a representation of surface topography, are in high demand for use in spatial analysis and modelling. To that end, many methods of acquiring and processing data have been developed, from traditional surveying to modern technology such as LIDAR. On the other hand, in the past four years the development of Unmanned Aerial Systems (UAS) aimed at geomatics has brought the possibility of acquiring surface data with an on-board non-metric digital camera in a short time and with good quality for some analyses. Data collection with UAS has attracted tremendous attention due to the possibility of determining volume changes over time, monitoring breakwaters, and hydrological modelling including flood simulation and drainage networks, among other applications that rely on DEMs for proper analysis. DEM quality is considered a combination of DEM accuracy and DEM suitability, so this paper analyses the quality of a DEM from a non-metric digital camera on a UAS compared with a DEM from LIDAR for the same geographic space, covering 4 km2 in Artemisa province, Cuba. This area is in a frame of urban planning that requires knowledge of the topographic characteristics in order to analyse hydrological behaviour and decide the best places for roads, buildings, and so on. Although LIDAR remains the more accurate method, it offers a reference against which to test the DEM from the non-metric digital camera on the UAS, which is much more flexible and brings a solution for many applications that need a detailed DEM.
Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System
NASA Astrophysics Data System (ADS)
Madani, M.
2012-07-01
Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuation, and for making better and more timely decisions when buying and selling residential and commercial property. Oblique imagery is also used for infrastructure monitoring, helping ensure the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The five digital cameras are based on the Canon EOS 1DS Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 pixels (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m and for the 50 mm nadir camera at 750 m and 1,500 m. The cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing workflow, system calibration and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an expected accuracy of about 1 to 1.5 GSD (Ground Sample Distance) in planimetry and about 2 to 2.5 GSD in the vertical can be achieved.
Remaining systematic errors were modeled by analyzing residuals using a correction grid. The results of the final bundle adjustments are sufficient to enable Sanborn to produce DEMs/DTMs and orthophotos from the nadir imagery and to create 3D models using the georeferenced oblique imagery.
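The GSD underlying those accuracy figures can be sanity-checked with the standard pinhole relation GSD = H · p / f (flying height times pixel size over focal length), using the pixel size and flight heights quoted above:

```python
# Back-of-envelope ground sample distance for the MIDAS configurations
# above, assuming a simple pinhole model and flat terrain.
def gsd_cm(height_m, pixel_um=6.4, focal_mm=28.0):
    """GSD in centimetres for a given flying height, pixel size, focal length."""
    return height_m * (pixel_um * 1e-6) / (focal_mm * 1e-3) * 100

for f, h in [(28, 600), (28, 1200), (50, 750), (50, 1500)]:
    print(f"{f} mm at {h} m -> GSD {gsd_cm(h, focal_mm=f):.1f} cm")
# 28 mm at 600 m -> 13.7 cm; 28 mm at 1200 m -> 27.4 cm
# 50 mm at 750 m -> 9.6 cm;  50 mm at 1500 m -> 19.2 cm
```

So the quoted 1 to 1.5 GSD planimetric accuracy corresponds to roughly a decimetre to a few decimetres on the ground, depending on the configuration flown.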
NASA Astrophysics Data System (ADS)
Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee
2016-05-01
Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high-performance thermal imager with integrated geolocation functions is a powerful long-range targeting device. Firefly is a software-defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry-standard serial interfaces, which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has a built-in Global Positioning System (GPS) receiver, which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process, which incorporated user feedback at various stages.
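The basic geolocation step, combining the three measurements named above (GPS position of the observer, compass bearing, rangefinder distance), can be sketched as follows. This is a hedged illustration using a local flat-earth approximation, adequate at ranges of a few kilometres; an operational system would use a geodesic solver:

```python
import math

# Project a range/bearing measurement from the observer's GPS fix
# to a target latitude/longitude (flat-earth approximation).
def geolocate(lat_deg, lon_deg, bearing_deg, range_m, R=6371000.0):
    b = math.radians(bearing_deg)
    dn = range_m * math.cos(b)               # metres north of observer
    de = range_m * math.sin(b)               # metres east of observer
    lat = lat_deg + math.degrees(dn / R)
    lon = lon_deg + math.degrees(de / (R * math.cos(math.radians(lat_deg))))
    return lat, lon

# Observer at 51N 1W, target sighted due east at 1 km.
lat, lon = geolocate(51.0, -1.0, bearing_deg=90.0, range_m=1000.0)
print(f"{lat:.6f}, {lon:.6f}")  # 51.000000, -0.985710
```

The laser rangefinder supplies `range_m`, the digital magnetic compass supplies `bearing_deg`, and the built-in GPS supplies the observer's coordinates, which is why all three peripherals are needed for the function.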
Using oblique digital photography for alluvial sandbar monitoring and low-cost change detection
Tusso, Robert B.; Buscombe, Daniel D.; Grams, Paul E.
2015-01-01
The maintenance of alluvial sandbars is a longstanding management interest along the Colorado River in Grand Canyon. Resource managers are interested in both the long-term trend in sandbar condition and the short-term response to management actions, such as intentional controlled floods released from Glen Canyon Dam. Long-term monitoring is accomplished at a range of scales, by a combination of annual topographic survey at selected sites, daily collection of images from those sites using novel, autonomously operating, digital camera systems (hereafter referred to as 'remote cameras'), and quadrennial remote sensing of sandbars canyonwide. In this paper, we present results from the remote camera images for daily changes in sandbar topography.
NASA Astrophysics Data System (ADS)
Chavis, Christopher
Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
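The change-detection step described above, differencing repeat DSMs and discarding changes below the vertical noise floor, can be sketched with the accuracy figures just quoted. This is an illustrative simplification assuming two already co-registered rasters:

```python
import numpy as np

# Difference two co-registered DSMs and keep only cells whose elevation
# change exceeds the vertical noise floor (~25 cm per the study above).
def dsm_change(dsm_t0, dsm_t1, noise_floor=0.25):
    diff = dsm_t1 - dsm_t0
    mask = np.abs(diff) > noise_floor        # significant change only
    return np.where(mask, diff, 0.0)

dsm_t0 = np.zeros((4, 4))
dsm_t1 = dsm_t0.copy()
dsm_t1[1, 1] = 1.5    # e.g. a new structure: well above the noise floor
dsm_t1[2, 2] = 0.1    # within noise: should be suppressed
change = dsm_change(dsm_t0, dsm_t1)
print(change[1, 1], change[2, 2])  # 1.5 0.0
```

With a ~25 cm noise floor, metre-scale infrastructure changes are detectable while sub-decimetre surface noise is filtered out, consistent with the study's conclusion.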
NASA Astrophysics Data System (ADS)
Hynek, Bernhard; Binder, Daniel; Boffi, Geo; Schöner, Wolfgang; Verhoeven, Geert
2014-05-01
Terrestrial photogrammetry was the standard method for mapping high mountain terrain in the early days of mountain cartography, until it was replaced by aerial photogrammetry and airborne laser scanning. Modern low-priced digital single-lens reflex (DSLR) cameras and cheap, highly automated digital computer vision software with automatic image matching and multiview-stereo routines suggest a rebirth of terrestrial photogrammetry, especially in remote regions where airborne surveying methods are expensive due to high flight costs. Terrestrial photogrammetry and modern automated image matching are widely used in geodesy; however, their application in glaciology is still rare, especially for surveying ice bodies at the scale of some km², which is typical for valley glaciers. In August 2013 a terrestrial photogrammetric survey was carried out on Freya Glacier, a 6 km² valley glacier next to Zackenberg Research Station in NE Greenland, where detailed glacier mass balance monitoring was initiated during the last IPY. Photos were taken with a consumer-grade digital camera (Nikon D7100) from the ridges surrounding the glacier. To create a digital elevation model, the photos were processed with the software PhotoScan. A set of ~100 dGPS-surveyed ground control points on the glacier surface was used to georeference and validate the final DEM. The aim of this study was to produce a high-resolution, high-accuracy DEM of the actual surface topography of the Freya Glacier catchment with a novel approach, to explore the potential of modern low-cost terrestrial photogrammetry combined with state-of-the-art automated image matching and multiview-stereo routines for glacier monitoring, and to communicate this powerful and cheap method within the environmental research and glacier monitoring communities.
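The validation step against the ~100 dGPS ground control points reduces to comparing DEM-sampled elevations with the surveyed heights and reporting bias and RMSE. A minimal sketch (the elevation values below are made-up illustration data, not the study's results):

```python
import numpy as np

# Compare DEM elevations at ground control points with dGPS heights.
def dem_errors(dem_z, gps_z):
    resid = np.asarray(dem_z) - np.asarray(gps_z)
    bias = resid.mean()                      # systematic offset
    rmse = np.sqrt(np.mean(resid ** 2))      # overall vertical accuracy
    return bias, rmse

gps_z = np.array([1210.0, 1232.5, 1251.2, 1275.8])        # surveyed heights, m
dem_z = gps_z + np.array([0.12, -0.08, 0.05, -0.02])      # DEM-sampled heights
bias, rmse = dem_errors(dem_z, gps_z)
print(f"bias {bias:+.3f} m, RMSE {rmse:.3f} m")
```

Splitting the GCP set into georeferencing and validation subsets, as the study does, avoids reporting an accuracy on the same points used to fit the model.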
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, alongside a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods to extract such artifacts.
The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely optical-aberrations-based, sensor-camera-fingerprints-based, processing-statistics-based and processing-regularities-based. Furthermore, this paper investigates the challenging problems, and the proposed strategies of such schemes, based on the suggested taxonomy to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution.
More than meets the eye: digital fraud in dentistry.
Rao, S A; Singh, N; Kumar, R; Thomas, A M
2010-01-01
Digital photographs play a substantial role in the presentation and validation of clinical cases for documentation and research purposes in esthetically oriented professions such as dentistry. The introduction of sophisticated cameras and "easy to use" computer software readily available on today's market has enabled digital fraud to emerge as a common and widely used practice. Hence, it is essential that both dentists and editorial circles are aware and cautious with regard to the possibility of digital fraud. A set of 10 routine "pre-" and "post-"treatment dental procedure photographs were taken and randomly manipulated using standard desktop software. A team of 10 dental professionals was selected, and each was individually requested to review and evaluate the authenticity of the photographs. An assessment of expert opinion revealed an overall sensitivity of 60% and a sensitivity of 15% in correctly identifying a manipulated photograph, which is considered low. Furthermore, there was poor interobserver agreement. Easily available advanced technology has resulted in adept digital fraud that is difficult to detect. There is a need for awareness among both dental practitioners and the editorial circle regarding misrepresentation due to image manipulation. It is therefore necessary to follow a skeptical approach in the assessment of digitized photos used in research and as part of clinical dentistry.
A novel method for detecting light source for digital images forensic
NASA Astrophysics Data System (ADS)
Roy, A. K.; Mitra, S. K.; Agrawal, R.
2011-06-01
Image manipulation has been practised for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with its limitations, and very rarely has any method tried to capitalize on the way the image was taken by the camera. We propose a new method based on light and its shade, as light and shade are the fundamental input resources that may carry all the information in the image. The proposed method measures the direction of the light source and uses this light-based technique to identify any intentional partial manipulation in the digital image. The method was tested on known manipulated images and correctly identified the light sources. The light source of an image is measured in terms of angle. The experimental results show the robustness of the methodology.
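The core of such light-direction methods is that, for a roughly Lambertian surface, observed intensity is approximately n · L plus an ambient term, so the light direction L can be recovered by least squares from surface normals and intensities. A simplified 2-D toy version (idealized: it assumes the normals are already known, which real methods estimate from occluding contours):

```python
import numpy as np

# Estimate a 2-D light direction from surface normals and intensities
# via least squares on the Lambertian model I = n . L + ambient.
def estimate_light(normals, intensities):
    A = np.column_stack([normals, np.ones(len(normals))])   # [nx, ny, 1]
    sol, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    L, ambient = sol[:2], sol[2]
    return L / np.linalg.norm(L), ambient

# Synthetic object boundary: normals sweep half a circle; light at 60 degrees.
angles = np.linspace(0, np.pi, 20)
normals = np.column_stack([np.cos(angles), np.sin(angles)])
true_L = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])
intensities = normals @ true_L + 0.1                        # ambient term 0.1
L, ambient = estimate_light(normals, intensities)
print(np.degrees(np.arctan2(L[1], L[0])))  # ~60.0
```

Estimating this angle independently for different objects in one photograph and comparing the results is what exposes a spliced-in region lit from an inconsistent direction.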
Improved stereo matching applied to digitization of greenhouse plants
NASA Astrophysics Data System (ADS)
Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng
2015-03-01
The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties in the digitization of greenhouse plants are how to acquire the three-dimensional shape data of greenhouse plants and how to carry out realistic stereo reconstruction. Concerning these issues, an effective method for the digitization of greenhouse plants using a binocular stereo vision system is proposed in this paper. Stereo vision is a technique for inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, search for stereo correspondence, and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be obtained. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
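For a calibrated and rectified binocular rig, the triangulation step reduces to the textbook relation Z = f · B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of a matched pixel pair. A standard-model sketch (not the paper's exact pipeline, and the numbers are illustrative):

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """f_px: focal length in pixels; baseline_m: camera separation;
    disparity_px: horizontal offset between matched pixels."""
    return f_px * baseline_m / disparity_px

# e.g. f = 800 px, 10 cm baseline, 40 px disparity -> 2.0 m depth
print(depth_from_disparity(800.0, 0.10, 40.0))  # 2.0
```

Repeating this for every matched pixel after calibration, rectification, and correspondence search yields the 3D point cloud mentioned in the abstract; note that depth resolution degrades with distance, since depth varies as 1/d.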
Agreement and reading time for differently-priced devices for the digital capture of X-ray films.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-03-01
We assessed the reliability of three digital capture devices: a film digitizer (which cost US $15,000), a flat-bed scanner (US $1800) and a digital camera (US $450). Reliability was measured as the agreement between six observers when reading images acquired from a single device and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading-time between devices: the mean reading-time for the film digitizer was 93 s, it was 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.
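Inter-observer and pair-device agreement of this kind is usually quantified with a chance-corrected statistic. A minimal sketch of Cohen's kappa for two readers' categorical ratings (illustrative data; the study's actual categories and agreement statistic may differ):

```python
import numpy as np

# Cohen's kappa: observed agreement corrected for chance agreement.
def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                       # observed
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # by chance
    return (po - pe) / (1 - pe)

reader1 = [0, 1, 1, 2, 2, 2, 0, 1]   # e.g. graded interstitial pattern
reader2 = [0, 1, 2, 2, 2, 1, 0, 1]
print(round(cohens_kappa(reader1, reader2), 3))  # 0.619
```

A kappa around 0.4 to 0.6 is conventionally read as "moderate" agreement, which matches the wording used for the pair-device comparisons above.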
NASA Astrophysics Data System (ADS)
Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.
2008-12-01
Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science communities for decades, and time series analysis of sensor data has provided important information on the variability of glacier flow by detecting speed and thickness changes, tracking features, and acquiring model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide the theoretical and practical fundamentals for data processing, along with digital image processing techniques. Time-lapse images over these periods in west Greenland show various phenomena. Problems include rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and the pecking of the plastic window by ravens. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that a non-metric digital camera exhibits large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images needs to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability.
We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.
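The automatic correlation matching used for displacement computation can be illustrated with a toy normalized cross-correlation tracker: a surface patch from image t0 is slid over a search window in image t1, and the correlation peak gives the pixel displacement. This is a brute-force sketch (real pipelines use FFT-based correlation and sub-pixel refinement):

```python
import numpy as np

# Track a template patch between two images via normalized cross-correlation.
def ncc_track(template, search):
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    best, best_score = (0, 0), -np.inf
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            w = search[i:i + th, j:j + tw]
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = np.mean(t * w)           # NCC score in [-1, 1]
            if score > best_score:
                best_score, best = score, (i, j)
    return best  # (row, col) of the best match in the second image

rng = np.random.default_rng(1)
img0 = rng.random((20, 20))
template = img0[5:11, 5:11]                       # patch at (5, 5) in image t0
img1 = np.roll(img0, shift=(2, 3), axis=(0, 1))   # simulate 2 px / 3 px motion
print(ncc_track(template, img1))  # (7, 8)
```

The recovered offset minus the original patch location gives the displacement vector, which, scaled by the ground resolution and the time interval between frames, yields a surface velocity.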
Focusing and depth of field in photography: application in dermatology practice.
Taheri, Arash; Yentzer, Brad A; Feldman, Steven R
2013-11-01
Conventional photography obtains a sharp image of objects within a given 'depth of field'; objects not within the depth of field are out of focus. In recent years, digital photography revolutionized the way pictures are taken, edited, and stored. However, digital photography does not result in a deeper depth of field or better focusing. In this article, we briefly review the concept of depth of field and focus in photography as well as new technologies in this area. A deep depth of field is used to have more objects in focus; a shallow depth of field can emphasize a subject by blurring the foreground and background objects. The depth of field can be manipulated by adjusting the aperture size of the camera, with smaller apertures increasing the depth of field at the cost of lower levels of light capture. Light-field cameras are a new generation of digital cameras that offer several new features, including the ability to change the focus on any object in the image after taking the photograph. Understanding depth of field and camera technology helps dermatologists to capture their subjects in focus more efficiently.
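The aperture-versus-depth-of-field trade-off described above follows the textbook hyperfocal relations (standard optics, not taken from the article itself): H = f²/(N·c) + f, with near and far limits of acceptable sharpness around the subject distance s. A quick numerical check, assuming a full-frame circle of confusion c = 0.03 mm:

```python
# Depth-of-field limits from the hyperfocal-distance approximation.
def depth_of_field(f_mm, N, s_mm, c_mm=0.03):
    """f_mm: focal length; N: f-number; s_mm: subject distance;
    c_mm: circle of confusion. Returns (near, far) limits in mm."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                  # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# A 50 mm lens focused at 2 m: stopping down from f/2.8 to f/11 deepens DOF.
for N in (2.8, 11):
    near, far = depth_of_field(50, N, 2000)
    print(f"f/{N}: {near/1000:.2f} m to {far/1000:.2f} m")
# f/2.8: 1.88 m to 2.14 m
# f/11: 1.59 m to 2.69 m
```

The zone of acceptable sharpness roughly quadruples between f/2.8 and f/11 here, at the cost of about four stops of light, which is the trade-off the article describes.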
Calculation for simulation of archery goal value using a web camera and ultrasonic sensor
NASA Astrophysics Data System (ADS)
Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti
2017-08-01
The development of a digital indoor archery simulator based on embedded systems is a solution to the limited availability of adequate fields or open space, especially in big cities. Developing the device requires simulations to calculate the score achieved on the target, based on a parabolic-motion model parameterized by the initial velocity and the direction of motion of the arrow as it reaches the target. The simulator device should be complemented with a device measuring initial velocity using ultrasonic sensors and a device measuring the direction to the target using a digital camera. The methodology uses research and development of application software with a modelling and simulation approach. The research objective was to create a simulation application that calculates the scores of the arrows, as a preliminary stage for the development of the archery simulator device. Implementing the score calculation in an application program produces an archery simulation game that can be used as a reference for developing the indoor digital archery simulator with embedded systems using ultrasonic sensors and web cameras. The application was developed with the simulation calculation comparing against the outer radius of the target circle imaged by a camera at a distance of three meters.
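The parabolic-motion calculation at the heart of such a simulator can be sketched as follows: given the ultrasonic-measured launch speed and an assumed launch angle, compute where the arrow crosses the target plane at a given horizontal distance (the speed, angle, and 18 m indoor distance below are illustrative values, not from the paper):

```python
import math

# Height of a projectile (relative to the launch point) when it reaches
# a target plane at the given horizontal distance; drag is neglected.
def height_at_target(v0, angle_deg, distance_m, g=9.81):
    a = math.radians(angle_deg)
    t = distance_m / (v0 * math.cos(a))      # time to reach the target plane
    return v0 * math.sin(a) * t - 0.5 * g * t ** 2

# Arrow at 50 m/s with a 1-degree launch angle, target 18 m away.
h = height_at_target(50.0, 1.0, 18.0)
print(round(h, 3))  # ~ -0.322, i.e. the arrow strikes ~32 cm below the aim line
```

Mapping this vertical offset (and a horizontal offset measured by the camera) onto the target's ring radii then gives the simulated score.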
MS Walheim poses with a Hasselblad camera on the flight deck of Atlantis during STS-110
2002-04-08
STS110-E-5017 (8 April 2002) --- Astronaut Rex J. Walheim, STS-110 mission specialist, holds a camera on the aft flight deck of the Space Shuttle Atlantis. A blue and white Earth is visible through the overhead windows of the orbiter. The image was taken with a digital still camera.
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
NASA Technical Reports Server (NTRS)
Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)
2008-01-01
The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.
Explosive Transient Camera (ETC) Program
1991-10-01
[Report text partially garbled in extraction; recoverable content follows.] The CCD clocking unit and "upstairs" electronics, including the analog-to-digital processor, digitize the video signal and transmit digital video and status information to the "downstairs" system. The clocking unit and regulator/driver board are the only CCD-dependent components.
Evaluation of Digital Technology and Software Use among Business Education Teachers
ERIC Educational Resources Information Center
Ellis, Richard S.; Okpala, Comfort O.
2004-01-01
Digital video cameras are part of the evolution of multimedia digital products that have positive applications for educators, students, and industry. Multimedia digital video can be utilized by any personal computer and it allows the user to control, combine, and manipulate different types of media, such as text, sound, video, computer graphics,…
ERIC Educational Resources Information Center
Ching, Cynthia Carter; Wang, X. Christine; Shih, Mei-Li; Kedem, Yore
2006-01-01
To explore meaningful and effective technology integration in early childhood education, we investigated how kindergarten-first-grade students created and employed digital photography journals to support social and cognitive reflection. These students used a digital camera to document their daily school activities and created digital photo…
The multifocus plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Lumsdaine, Andrew
2012-01-01
The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (which is one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a much wider range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
MTF measurements on real time for performance analysis of electro-optical systems
NASA Astrophysics Data System (ADS)
Stuchi, Jose Augusto; Signoreto Barbarini, Elisa; Vieira, Flavio Pascoal; dos Santos, Daniel, Jr.; Stefani, Mário Antonio; Yasuoka, Fatima Maria Mitsue; Castro Neto, Jarbas C.; Linhari Rodrigues, Evandro Luis
2012-06-01
The need for methods and tools to determine the performance of optical systems is currently increasing. One of the most widely used methods for analyzing optical systems is measuring the Modulation Transfer Function (MTF). The MTF provides a direct and quantitative verification of image quality. This paper presents the implementation of software to calculate the MTF of electro-optical systems. The software was used to calculate the MTF of a digital fundus camera, a thermal imager, and an ophthalmologic surgery microscope. The MTF information aids the analysis of alignment and the measurement of optical quality, and also defines the limiting resolution of optical systems. The results obtained with the fundus camera and thermal imager were compared with theoretical values. For the microscope, the results were compared with the MTF measured on a Zeiss microscope, the quality standard for ophthalmological microscopes.
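A minimal sketch of the MTF computation this abstract describes: the MTF is the normalized magnitude of the Fourier transform of the line spread function (LSF). The Gaussian LSF and 10 µm pixel pitch below are synthetic placeholders, not the paper's data; in practice the LSF is often obtained by differentiating a slanted-edge ESF:

```python
import numpy as np

def mtf_from_lsf(lsf: np.ndarray, pixel_pitch_mm: float):
    """MTF as the normalized magnitude of the FFT of the line spread
    function. Returns (spatial frequencies in lp/mm, MTF values)."""
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]  # normalize so that MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
    return freqs, mtf

# Synthetic example: Gaussian LSF sampled at 10 um (0.01 mm) pixel pitch
x = np.arange(-64, 64)
lsf = np.exp(-0.5 * (x / 4.0) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pixel_pitch_mm=0.01)
```

The limiting resolution can then be read off as the frequency where the MTF drops below a chosen contrast threshold.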
SPRUCE Vegetation Phenology in Experimental Plots from Phenocam Imagery, 2015-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, Andrew D.; Hufkens, Koen; Milliman, Thomas
This data set consists of PhenoCam data from the SPRUCE experiment from the beginning of whole-ecosystem warming in August 2015 through the end of 2017. Digital cameras, or phenocams, installed in each SPRUCE enclosure track seasonal variation in vegetation “greenness”, a proxy for vegetation phenology and associated physiological activity. Regions of interest (ROIs) were defined for vegetation types: (1) Picea trees (EN, evergreen needleleaf); (2) Larix trees (DN, deciduous needleleaf); and (3) the mixed shrub layer (SH, shrubs). This data set consists of two sets of data files: (1) standard “3-day summary product files” for each camera and each ROI (i.e., vegetation type), characterizing vegetation color at a 3-day time step, and (2) a “transition date file” containing the estimated “greenness rising” (spring) and “greenness falling” (autumn) transition dates.
NASA Astrophysics Data System (ADS)
Kuruliuk, K. A.; Kulesh, V. P.
2016-10-01
An optical videogrammetry method using one digital camera was developed for non-contact measurement of the geometric shape parameters, position, and motion of models and structural elements of aircraft in experimental aerodynamics. Tests of this method for measuring six components (three linear and three angular) of the real position of a helicopter device in a wind tunnel flow were conducted. The distance between the camera and the test object was 15 meters. It was shown in practice that, under the conditions of an aerodynamic experiment, the instrumental measurement error (standard deviation) for angular and linear displacements of the helicopter device does not exceed 0.02° and 0.3 mm, respectively. Analysis of the results shows that at minimum rotor thrust the deviations are systematic and generally lie within ±0.2 degrees. Deviations of the angle values grow with increasing rotor thrust.
NASA Astrophysics Data System (ADS)
Le, Nam-Tuan
2017-05-01
Copyright protection and information security are two of the most important issues for digital data, following the development of the internet and computer networks. As an important protection solution, watermarking technology has become a challenging topic in industry and academic research. Watermarking technology can be classified into two categories: visible and invisible watermarking. The invisible technique has an advantage for user interaction because of its invisibility. Applying watermarking to communication poses a challenge and opens a new direction for communication technology. In this paper we propose new research on communication technology using optical camera communications (OCC) based on invisible watermarking. Besides analyzing the performance of the proposed system, we also suggest the PHY- and MAC-layer frame structure for the IEEE 802.15.7r1 specification, a revision of the visible light communication (VLC) standardization.
Fluorescent Microscopy Enhancement Using Imaging
NASA Astrophysics Data System (ADS)
Conrad, Morgan P.; Recktenwald, Diether J.; Woodhouse, Bryan S.
1986-06-01
To enhance our capabilities for observing fluorescent stains in biological systems, we are developing a low cost imaging system based around an IBM AT microcomputer and a commercial image capture board compatible with a standard RS-170 format video camera. The image is digitized in real time with 256 grey levels, while being displayed and also stored in memory. The software allows for interactive processing of the data, such as histogram equalization or pseudocolor enhancement of the display. The entire image, or a quadrant thereof, can be averaged over time to improve the signal to noise ratio. Images may be stored to disk for later use or comparison. The camera may be selected for better response in the UV or near IR. Combined with signal averaging, this increases the sensitivity relative to that of the human eye, while still allowing for the fluorescence distribution on either the surface or internal cytoskeletal structure to be observed.
Robust tissue classification for reproducible wound assessment in telemedicine environments
NASA Astrophysics Data System (ADS)
Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves
2010-04-01
In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple free-handled digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting-condition, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% versus 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.
NASA Astrophysics Data System (ADS)
Narayanan, V. L.
2017-12-01
For the first time, high-speed imaging of lightning from a few isolated tropical thunderstorms is reported from India. The recordings were made from Tirupati (13.6°N, 79.4°E, 180 m above mean sea level) during summer months with a digital camera capable of recording high-speed videos at up to 480 fps. At 480 fps, each individual video file is recorded for 30 s, resulting in 14400 deinterlaced images per video file. An automatic processing algorithm was developed for quick identification and analysis of the lightning events, which will be discussed in detail. Preliminary results indicating different phenomena associated with lightning, such as stepped leaders, dart leaders, luminous channels corresponding to continuing current, and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud, and cloud-air discharges will also be displayed. This indicates that although high-speed cameras with a few thousand fps are preferred for detailed studies of lightning, moderate-range CMOS-sensor-based digital cameras can provide important information as well. The lightning imaging activity presented herein was initiated as an amateur effort, and plans are currently underway to propose a suite of supporting instruments to conduct coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are in such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings made with affordable digital cameras will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).
Miniaturized camera system for an endoscopic capsule for examination of the colonic mucosa
NASA Astrophysics Data System (ADS)
Wippermann, Frank; Müller, Martin; Wäny, Martin; Voltz, Stephan
2014-09-01
Today's standard procedure for the examination of the colon uses a digital endoscope located at the tip of a tube encasing wires for camera readout, fibers for illumination, and mechanical structures for steering and navigation. On the other hand, there are swallowable capsules incorporating a miniaturized camera which are more cost effective, disposable, and less unpleasant for the patient during examination, but cannot be navigated along the path through the colon. We report on the development of a miniaturized endoscopic camera as part of a completely wireless capsule which can be safely and accurately navigated and controlled from the outside using an electromagnet. The endoscope is based on a global-shutter CMOS imager with 640x640 pixels and a pixel size of 3.6 μm, featuring through-silicon vias. Hence, the required electronic connectivity is provided at its back side using a ball grid array, enabling the smallest lateral dimensions. The layout of the f/5 objective with 100° diagonal field of view aims for low production cost and employs polymeric lenses produced by injection molding. Because of the need for at least one-time autoclaving, high-temperature-resistant polymers were selected. Optical and mechanical design considerations are given along with experimental data obtained from realized demonstrators.
Improving wavelet denoising based on an in-depth analysis of the camera color processing
NASA Astrophysics Data System (ADS)
Seybold, Tamara; Plichta, Mathias; Stechele, Walter
2015-02-01
While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set. The noise model is usually additive white Gaussian noise (AWGN). This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize, and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation for the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods. We improve wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it can be implemented with very low computational complexity and can process HD video sequences in real time on an FPGA.
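The idea of a signal-dependent hard threshold driven by a precomputed look-up table can be sketched as below. This is a minimal one-level Haar illustration with an assumed Poisson-like noise model (sigma proportional to the square root of the signal level), not the paper's actual camera noise model or wavelet:

```python
import numpy as np

def haar_analysis(x):
    """One-level orthonormal Haar transform of an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_synthesis(a, d):
    """Inverse of haar_analysis (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise_hard(x, sigma_lut):
    """Hard-threshold the detail coefficients with a signal-dependent
    3-sigma threshold looked up from sigma_lut (noise std as a function
    of local signal level), mimicking a precomputed look-up table."""
    a, d = haar_analysis(x)
    level = np.clip(a.astype(int), 0, len(sigma_lut) - 1)
    thresh = 3.0 * sigma_lut[level]
    d_hat = np.where(np.abs(d) > thresh, d, 0.0)
    return haar_synthesis(a, d_hat)

# Synthetic test: piecewise-constant signal with signal-dependent noise
rng = np.random.default_rng(0)
clean = np.repeat([10.0, 40.0], 32)
noisy = clean + rng.normal(0, np.sqrt(clean))
lut = np.sqrt(np.arange(100))  # assumed sigma(level) = sqrt(level)
denoised = denoise_hard(noisy, lut)
```

Because the LUT is evaluated per coefficient with a simple index lookup, the per-sample cost stays low, which is the property the paper exploits for real-time FPGA processing.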
Qualification Tests of Micro-camera Modules for Space Applications
NASA Astrophysics Data System (ADS)
Kimura, Shinichi; Miyasaka, Akira
Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.
Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System
NASA Astrophysics Data System (ADS)
Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.
2018-02-01
We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.
Confocal retinal imaging using a digital light projector with a near infrared VCSEL source
NASA Astrophysics Data System (ADS)
Muller, Matthew S.; Elsner, Ann E.
2018-02-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1" LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.
Optomechanical System Development of the AWARE Gigapixel Scale Camera
NASA Astrophysics Data System (ADS)
Son, Hui S.
Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.
X-ray imaging using digital cameras
NASA Astrophysics Data System (ADS)
Winch, Nicola M.; Edgar, Andrew
2012-03-01
The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
Bater, Christopher W; Coops, Nicholas C; Wulder, Michael A; Hilker, Thomas; Nielsen, Scott E; McDermid, Greg; Stenhouse, Gordon B
2011-09-01
Critical to habitat management is the understanding of not only the location of animal food resources, but also the timing of their availability. Grizzly bear (Ursus arctos) diets, for example, shift seasonally as different vegetation species enter key phenological phases. In this paper, we describe the use of a network of seven ground-based digital camera systems to monitor understorey and overstorey vegetation within species-specific regions of interest. Established across an elevation gradient in western Alberta, Canada, the cameras collected true-colour (RGB) images daily from 13 April 2009 to 27 October 2009. Fourth-order polynomials were fit to an RGB-derived index, which was then compared to field-based observations of phenological phases. Using linear regression to statistically relate the camera and field data, results indicated that 61% (r² = 0.61, df = 1, F = 14.3, p = 0.0043) of the variance observed in the field phenological phase data is captured by the cameras for the start of the growing season and 72% (r² = 0.72, df = 1, F = 23.09, p = 0.0009) of the variance in length of growing season. Based on the linear regression models, the mean absolute differences in residuals between predicted and observed start of growing season and length of growing season were 4 and 6 days, respectively. This work extends upon previous research by demonstrating that specific understorey and overstorey species can be targeted for phenological monitoring in a forested environment, using readily available digital camera technology and RGB-based vegetation indices.
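The RGB-index-plus-polynomial workflow in this abstract can be sketched as follows. The green chromatic coordinate is used as a stand-in for the paper's unspecified RGB index, and the channel values, noise, and 50%-amplitude start-of-season rule are synthetic assumptions:

```python
import numpy as np

def gcc(r, g, b):
    """Green chromatic coordinate, a common RGB greenness index
    (the paper's exact index may differ)."""
    total = r + g + b
    return g / np.maximum(total, 1e-9)

# Synthetic daily mean channel values for one region of interest
doy = np.arange(100, 300)  # day of year
rng = np.random.default_rng(1)
r = np.full(doy.size, 90.0)
b = np.full(doy.size, 80.0)
g = 90.0 + 40.0 * np.exp(-((doy - 200) / 40.0) ** 2) + rng.normal(0, 2.0, doy.size)
index = gcc(r, g, b)

# Fourth-order polynomial fit, as described in the abstract
coeffs = np.polyfit(doy, index, deg=4)
fitted = np.polyval(coeffs, doy)

# Example transition-date rule: first day the fitted curve crosses
# 50% of its seasonal amplitude (an assumed convention)
amp50 = fitted.min() + 0.5 * (fitted.max() - fitted.min())
sos = doy[np.argmax(fitted >= amp50)]
```

The fitted curve could then be compared against field-observed phase dates by linear regression, as the authors did.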
2013-01-15
S48-E-007 (12 Sept 1991) --- Astronaut James F. Buchli, mission specialist, catches snack crackers as they float in the weightless environment of the earth-orbiting Discovery. This image was transmitted by the Electronic Still Camera, Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE
A smartphone photogrammetry method for digitizing prosthetic socket interiors.
Hernandez, Amaia; Lemaire, Edward
2017-04-01
Prosthetic CAD/CAM systems require accurate 3D limb models; however, difficulties arise when working from the person's socket, since current 3D scanners have difficulties scanning socket interiors. While dedicated scanners exist, they are expensive and the cost may be prohibitive for a limited number of scans per year. A low-cost and accessible photogrammetry method for socket interior digitization is proposed, using a smartphone camera and cloud-based photogrammetry services. Fifteen two-dimensional images of the socket's interior are captured using a smartphone camera, and a 3D model is generated using cloud-based software. Linear measurements were compared between sockets and the related 3D models. 3D reconstruction accuracy averaged 2.6 ± 2.0 mm and 0.086 ± 0.078 L, which was less accurate than models obtained by high-quality 3D scanners. However, this method would provide a viable 3D digital socket reproduction that is accessible and low-cost, after processing in prosthetic CAD software. Clinical relevance: The described method provides a low-cost and accessible means to digitize a socket interior for use in prosthetic CAD/CAM systems, employing a smartphone camera and cloud-based photogrammetry software.
Rapid orthophoto development system.
DOT National Transportation Integrated Search
2013-06-01
The DMC system procured in the project represented a state-of-the-art, large-format digital aerial camera system at the start of the project. DMC is based on the frame camera model, and to achieve large ground coverage with high spatial resolution, the ...
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue- and red-channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red- and blue-channel sub-images using a simple but effective color crosstalk correction method. These separated blue- and red-channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
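One common way to implement the color crosstalk correction this abstract mentions is linear unmixing with an inverse crosstalk matrix. The sketch below assumes a known 2x2 crosstalk matrix (in practice calibrated by imaging each optical path alone); the matrix values are illustrative, not the paper's:

```python
import numpy as np

# Assumed 2x2 crosstalk matrix: rows = measured (red, blue) channels,
# columns = true (red-path, blue-path) intensities.
M = np.array([[0.92, 0.07],
              [0.05, 0.95]])
M_inv = np.linalg.inv(M)

def separate(raw_rb):
    """Remove channel crosstalk from an H x W x 2 (red, blue) image
    by per-pixel linear unmixing with the inverse crosstalk matrix."""
    return np.einsum('ij,hwj->hwi', M_inv, raw_rb)

# Synthetic check: mix two known path images, then unmix them
true = np.stack([np.full((4, 4), 200.0), np.full((4, 4), 50.0)], axis=-1)
mixed = np.einsum('ij,hwj->hwi', M, true)
recovered = separate(mixed)
```

After separation, the two recovered channel images play the roles of the left and right views in a standard stereo-DIC pipeline.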
Assessment of gingival symmetry with digital measuring tools and its reproducibility.
Wilson, David; Soileau, Kristi; Esquivel, Jonathan; Cordero, Adriana; Buchman, Wes; Maney, Pooja; Archontia Palaiologou, A
The aim of this study was to investigate the accuracy of digital measuring tools to measure the position of gingival zeniths and to assess its reproducibility between different examiners. A total of 108 subjects were photographed at the Louisiana State University School of Dentistry. The settings, positioning of the digital camera, and subjects' Frankfurt levels were standardized. A photograph was taken of the six anterior maxillary teeth of each subject, and their corresponding free gingival margins. Digital caliper measurements were taken intraorally from the zenith to the incisal edge of the right maxillary central incisor. A reference line was drawn across the screen on each image at the level of the zenith of tooth 8. Three calibrated examiners then measured the distance from the reference line to the zeniths of the other five anterior maxillary teeth. There was no statistically significant difference between the examiners regarding any of the measurements. Central incisors were at the same level in 84.24% of the subjects, and lateral incisors were within 0.5 mm of central incisors in only 58% of the subjects. Canine zeniths were within 0.5 mm of each other in 43% of the subjects. Only 28% of the subjects presented with zeniths of tooth 6 to tooth 11 within 0.5 mm of each other. Lateral incisors were at or beneath the line drawn from central incisors to cuspids in 90.8% of the subjects. Standardized digital photography taken with the aid of a stadiometer and used to evaluate esthetic parameters allowed for reproducible measurements.
Real-Time Visualization of Tissue Ischemia
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)
2000-01-01
A real-time display of tissue ischemia comprising three CCD video cameras, each with a narrow bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
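The three-wavelength RGB composition can be sketched as follows; the per-channel gains stand in for the signal-intensity adjustment the abstract mentions and are illustrative only:

```python
import numpy as np

# Sketch of the three-wavelength RGB composition: one narrow-band frame is
# mapped to each display channel, scaled by a per-channel gain standing in
# for the signal-intensity adjustment described above (gains illustrative).

def compose_rgb(frames, gains):
    """Stack three single-band frames into one clipped RGB display image."""
    rgb = np.stack([g * f for g, f in zip(gains, frames)], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

frames = [np.full((2, 2), 0.2), np.full((2, 2), 0.5), np.full((2, 2), 0.9)]
img = compose_rgb(frames, gains=(1.5, 1.0, 1.0))
```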
Pham, Quang Duc; Hayasaki, Yoshio
2015-01-01
We demonstrate an optical frequency comb profilometer with a single-pixel camera to measure the position and profile of an object's surface over depths far beyond the light wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform the profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. Therefore, the axial dynamic range was increased to 0.87×10^5. It was found from the experiments and computer simulations that the improvement was derived from the higher modulation contrast of digital micromirror devices. The frame rate was also increased to 20 Hz.
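The quoted dynamic range can be checked with a one-line calculation, taking the unambiguous range as the 30 cm wavelength of the 1 GHz modulation divided by the 3.4 μm axial resolution:

```python
# Checking the quoted axial dynamic range: the unambiguous range is the
# 30 cm wavelength of the 1 GHz modulation, divided by the 3.4 um
# axial resolution.
range_m = 0.30
resolution_m = 3.4e-6
dynamic_range = range_m / resolution_m   # ~0.88e5, close to the 0.87e5 quoted
```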
A neutron camera system for MAST.
Cecconello, M; Turnyanskiy, M; Conroy, S; Ericsson, G; Ronchi, E; Sangaroon, S; Akers, R; Fitzgerald, I; Cullen, A; Weiszflog, M
2010-10-01
A prototype neutron camera has been developed and installed at MAST as part of a feasibility study for a multichord neutron camera system, with the aim of measuring the spatially and temporally resolved 2.45 MeV neutron emissivity profile. Liquid scintillators coupled to a fast digitizer are used for neutron/gamma ray digital pulse shape discrimination. The preliminary results obtained clearly show the capability of this diagnostic to measure neutron emissivity profiles with sufficient time resolution to study the effect of fast ion loss and redistribution due to magnetohydrodynamic activity. A minimum time resolution of 2 ms has been achieved with a modest 1.5 MW of neutral beam injection heating, with a measured neutron count rate of a few hundred kHz.
Maximizing the Performance of Automated Low Cost All-sky Cameras
NASA Technical Reports Server (NTRS)
Bettonvil, F.
2011-01-01
Thanks to the wide spread of digital camera technology in the consumer market, a steady increase in the number of active All-sky cameras has been noticed Europe-wide. In this paper I look into the details of such All-sky systems and try to optimize their performance in terms of astrometric accuracy, velocity determination, and photometry. With autonomous operation in mind, suggestions are made for an optimal low-cost All-sky camera.
A novel camera localization system for extending three-dimensional digital image correlation
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher
2018-03-01
The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. This means that the position of the cameras relative to each other (i.e., separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
High-frame-rate infrared and visible cameras for test range instrumentation
NASA Astrophysics Data System (ADS)
Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.
1995-09-01
Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
Cameras Monitor Spacecraft Integrity to Prevent Failures
NASA Technical Reports Server (NTRS)
2014-01-01
The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.
Laptop Circulation at Eastern Washington University
ERIC Educational Resources Information Center
Munson, Doris; Malia, Elizabeth
2008-01-01
In 2001, Eastern Washington University's Libraries began a laptop circulation program with seventeen laptops. Today, there are 150 laptops in the circulation pool, as well as seventeen digital cameras, eleven digital handycams, and thirteen digital projectors. This article explains how the program has grown to its present size, the growing pains…
Spyrou, Elena M; Kalogianni, Despina P; Tragoulias, Sotirios S; Ioannou, Penelope C; Christopoulos, Theodore K
2016-10-01
Chemi(bio)luminometric assays have contributed greatly to various areas of nucleic acid analysis due to their simplicity and detectability. In this work, we present the development of chemiluminometric genotyping methods in which (a) detection is performed by using either a conventional digital camera (at ambient temperature) or a smartphone and (b) a lateral flow assay configuration is employed for even higher simplicity and suitability for point-of-care or field testing. The genotyping of the C677T single nucleotide polymorphism (SNP) of the methylenetetrahydrofolate reductase (MTHFR) gene is chosen as a model. The interrogated DNA sequence is amplified by polymerase chain reaction (PCR) followed by a primer extension reaction. The reaction products are captured through hybridization on the sensing areas (spots) of the strip. A streptavidin-horseradish peroxidase conjugate is used as a reporter along with a chemiluminogenic substrate. Detection of the emerging chemiluminescence from the sensing areas of the strip is achieved by digital camera or smartphone. For this purpose, we constructed a 3D-printed smartphone attachment that houses inexpensive lenses and converts the smartphone into a portable chemiluminescence imager. The device enables spatial discrimination of the two alleles of a SNP in a single shot by imaging of the strip, thus avoiding the need for dual labeling. The method was applied successfully to genotyping of real clinical samples. Graphical abstract: Paper-based genotyping assays using a digital camera and a smartphone as detectors.
Network-linked long-time recording high-speed video camera system
NASA Astrophysics Data System (ADS)
Kimura, Seiji; Tsuji, Masataka
2001-04-01
This paper describes a network-oriented, long-recording-time high-speed digital video camera system that utilizes an HDD (Hard Disk Drive) as a recording medium. Semiconductor memories (DRAM, etc.) are the most common image data recording media in existing high-speed digital video cameras. They are extensively used because of their advantage of high-speed writing and reading of picture data. The drawback is that their recording time is limited to only several seconds because the data amount is very large. A recording time of several seconds is sufficient for many applications. However, a much longer recording time is required in some applications where an exact prediction of trigger timing is hard to make. In recent years, the recording density of the HDD has been dramatically improved, which has attracted more attention to its value as a long-recording-time medium. We conceived the idea that a compact system capable of long-time recording could be built if the HDD were used as the memory unit for high-speed digital image recording. However, the data rate of such a system, capable of recording 640 x 480 pixel resolution pictures at 500 frames per second (fps) with 8-bit grayscale, is 153.6 Mbyte/s, far beyond the writing speed of the commonly used HDD. So, we developed a dedicated image compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
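The quoted data rate follows directly from the frame format:

```python
# The quoted 153.6 Mbyte/s follows directly from the frame format:
width, height = 640, 480          # pixels per frame
fps = 500                         # frames per second
bytes_per_pixel = 1               # 8-bit grayscale
rate_mbyte = width * height * fps * bytes_per_pixel / 1e6
```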
Ambient-Light-Canceling Camera Using Subtraction of Frames
NASA Technical Reports Server (NTRS)
Morookian, John Michael
2004-01-01
The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period.
Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and would be illuminated with only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
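The subtraction scheme reduces to a few lines. A minimal sketch, with invented pixel values:

```python
import numpy as np

# Minimal sketch of the frame-subtraction scheme, with invented pixel
# values: the background-only frame (LED off) is subtracted from the
# signal-plus-background frame (LED on), clipping at zero.

def cancel_ambient(signal_plus_bg, bg):
    return np.clip(signal_plus_bg.astype(int) - bg.astype(int), 0, None)

ambient = np.array([[10, 20], [30, 40]], dtype=np.uint8)  # background light
led = np.array([[100, 0], [0, 50]], dtype=np.uint8)       # LED reflections
frame_on = ambient + led     # LED pulsed on during this frame
frame_off = ambient          # LED off during the next frame
recovered = cancel_ambient(frame_on, frame_off)
```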
The study of integration about measurable image and 4D production
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun
2008-12-01
In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, we establish a three-dimensional landscape model combining DEM and DOM, based on digital photogrammetry, which uses aerial image data to make the "4D" products (DOM: Digital Orthophoto Map; DEM: Digital Elevation Model; DLG: Digital Line Graphic; DRG: Digital Raster Graphic). For the buildings and other artificial features that users are interested in, we achieve three-dimensional reconstruction of the real features by the method of digital close-range photogrammetry, on the basis of the following steps: data collection with non-metric cameras, camera calibration, feature extraction, and image matching. Finally, we combine the three-dimensional background with local measured real images of these large geographic data and realize the integration of measurable real images and the 4D products. The article discusses the whole workflow and technology, achieving the three-dimensional reconstruction and the integration of the large-scale three-dimensional landscape and the metric building.
Bai, Jin-Shun; Cao, Wei-Dong; Xiong, Jing; Zeng, Nao-Hua; Shimizu, Katshyoshi; Rui, Yu-Kui
2013-12-01
In order to explore the feasibility of using image processing technology to diagnose nitrogen status and to predict maize yield, a field experiment with different nitrogen rates and green manure incorporation was conducted. Maize canopy digital images over a range of growth stages were captured by a digital camera. Maize nitrogen status, and the relationships between image color indices derived from the digital camera at different growth stages and maize nitrogen status indicators, were analyzed. These image color indices at different growth stages were also regressed against maize grain yield at maturity. The results showed that plant nitrogen status for maize was improved by green manure application. The leaf chlorophyll content (SPAD value), aboveground biomass, and nitrogen uptake for green manure treatments at different maize growth stages were all higher than those for chemical fertilization treatments. The correlations between spectral indices and plant nitrogen indicators for maize affected by green manure application were weaker than those affected by chemical fertilization, and the correlation coefficients for green manure application varied with maize growth stage. The best spectral indices for diagnosis of plant nitrogen status after green manure incorporation were the normalized blue value (B/(R+G+B)) at the 12-leaf (V12) stage and the normalized red value (R/(R+G+B)) at the grain-filling (R4) stage. The coefficients of determination based on linear regression were 0.45 and 0.46 for B/(R+G+B) at the V12 stage and R/(R+G+B) at the R4 stage, respectively, acting as predictors of maize yield response to nitrogen affected by green manure incorporation.
Our findings suggested that the digital image technique could be a potential tool for in-season prediction of nitrogen status and grain yield for maize after green manure incorporation, provided that suitable growth stages and spectral indices for diagnosis are selected.
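The normalized color indices used in the study are simple ratios of channel values; a minimal sketch, computing them from per-channel means of a synthetic image (values invented for the demo):

```python
import numpy as np

# The normalized color indices used in the study are simple ratios of the
# RGB channel values; here they are computed from per-channel means of a
# synthetic image (values invented for the demo).

def normalized_rgb_indices(img):
    """Return (R, G, B)/(R+G+B) computed from channel means."""
    means = [img[..., k].mean() for k in range(3)]
    total = sum(means)
    return tuple(m / total for m in means)

img = np.zeros((2, 2, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.2, 0.5, 0.3   # R, G, B
r_idx, g_idx, b_idx = normalized_rgb_indices(img)
```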
Digital holographic interferometry for characterizing deformable mirrors in aero-optics
NASA Astrophysics Data System (ADS)
Trolinger, James D.; Hess, Cecil F.; Razavi, Payam; Furlong, Cosme
2016-08-01
Measuring and understanding the transient behavior of a surface with high spatial and temporal resolution are required in many areas of science. This paper describes the development and application of a high-speed, high-dynamic range, digital holographic interferometer for high-speed surface contouring with fractional wavelength precision and high-spatial resolution. The specific application under investigation here is to characterize deformable mirrors (DM) employed in aero-optics. The developed instrument was shown capable of contouring a deformable mirror with extremely high-resolution at frequencies exceeding 40 kHz. We demonstrated two different procedures for characterizing the mechanical response of a surface to a wide variety of input forces, one that employs a high-speed digital camera and a second that employs a low-speed, low-cost digital camera. The latter is achieved by cycling the DM actuators with a step input, producing a transient that typically lasts up to a millisecond before reaching equilibrium. Recordings are made at increasing times after the DM initiation from zero to equilibrium to analyze the transient. Because the wave functions are stored and reconstructable, they can be compared with each other to produce contours including absolute, difference, and velocity. High-speed digital cameras recorded the wave functions during a single transient at rates exceeding 40 kHz. We concluded that either method is fully capable of characterizing a typical DM to the extent required by aero-optical engineers.
Use of a digital camera to monitor the growth and nitrogen status of cotton.
Jia, Biao; He, Haibing; Ma, Fuyu; Diao, Ming; Jiang, Guiying; Zheng, Zhong; Cui, Jin; Fan, Hua
2014-01-01
The main objective of this study was to develop a nondestructive method for monitoring cotton growth and N status using a digital camera. Digital images were taken of the cotton canopies between emergence and full bloom. The green and red values were extracted from the digital images and then used to calculate canopy cover. The values of canopy cover were closely correlated with the normalized difference vegetation index and the ratio vegetation index, which were measured using a GreenSeeker handheld sensor. Models were calibrated to describe the relationship between canopy cover and three growth properties of the cotton crop (i.e., aboveground total N content, LAI, and aboveground biomass). There were close, exponential relationships between canopy cover and the three growth properties. The relationship for estimating cotton aboveground total N content was the most precise, with a coefficient of determination (R^2) of 0.978 and a root mean square error (RMSE) of 1.479 g m^-2. Moreover, the models were validated in three fields of high-yield cotton. The results indicated that the best relationship, between canopy cover and aboveground total N content, had an R^2 value of 0.926 and an RMSE value of 1.631 g m^-2. In conclusion, as a near-ground remote assessment tool, digital cameras have good potential for monitoring cotton growth and N status.
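The exponential calibration models have the general form sketched below; the coefficients are placeholders, since the abstract reports only R^2 and RMSE, not the fitted parameters:

```python
import math

# Illustrative form of the calibration models described above: an
# exponential relation between canopy cover (0-1) and aboveground total
# N content. The coefficients a and b are placeholders; the study reports
# only R^2 and RMSE, not the fitted parameters.

def n_content_from_cover(cover, a=0.5, b=3.0):
    """Aboveground total N (g m^-2) as an exponential function of cover."""
    return a * math.exp(b * cover)

low = n_content_from_cover(0.2)    # sparse canopy
high = n_content_from_cover(0.8)   # dense canopy
```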
Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field
NASA Astrophysics Data System (ADS)
Rubin, D. M.; Chezar, H.
2007-12-01
Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system, consisting of a weatherproof tablet computer, open source image-processing software (autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
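The principle behind the autocorrelation algorithm (Rubin, 2004) is that correlation between nearby pixels decays more slowly for coarse grains than for fine grains. A toy 1-D version, not the published algorithm:

```python
import numpy as np

# Toy 1-D version of the idea behind the autocorrelation algorithm
# (Rubin, 2004): pixel-to-pixel correlation decays more slowly for coarse
# grains than for fine grains, so correlation vs. lag indexes grain size.
# This sketch is not the published algorithm.

def row_autocorrelation(img, lag):
    """Mean correlation between pixels `lag` columns apart."""
    a = img[:, :-lag].ravel().astype(float)
    b = img[:, lag:].ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
fine = rng.random((64, 64))                                      # ~1 px grains
coarse = np.repeat(np.repeat(rng.random((16, 16)), 4, 0), 4, 1)  # ~4 px grains
```

At a 2-pixel lag, the coarse texture stays strongly correlated while the fine texture does not, which is the signal the grain-size estimate is built on.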
Measuring Positions of Objects using Two or More Cameras
NASA Technical Reports Server (NTRS)
Klinko, Steve; Lane, John; Nelson, Christopher
2008-01-01
An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras.
This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
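The contrast between the two camera models can be made concrete. A minimal sketch, with an invented focal length and standoff distance:

```python
import numpy as np

# Concrete contrast of the two camera models named above: the perspective
# (point) model divides by each point's own depth, while the affine model
# divides by a single nominal depth, valid when the object is much smaller
# than its distance. Focal length and standoff are invented for the demo.

def project_perspective(pt, f=1.0):
    x, y, z = pt
    return np.array([f * x / z, f * y / z])

def project_affine(pt, f=1.0, z0=10.0):
    x, y, _ = pt
    return np.array([f * x / z0, f * y / z0])

near = np.array([1.0, 1.0, 10.0])       # point at the nominal depth
off_depth = np.array([1.0, 1.0, 10.5])  # slightly different depth
```

At the nominal depth the two projections agree; the small depth change shifts only the perspective projection, which is exactly the effect the affine approximation discards.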
Automatic calibration method for plenoptic camera
NASA Astrophysics Data System (ADS)
Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao
2016-04-01
An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images in the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated, without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method on the raw data of Lytro. The experiments show that our method is more automated than previously published methods.
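The center-finding step can be illustrated with an intensity-weighted centroid of a single thresholded microlens spot; this is a toy stand-in for the morphology-based recognition the paper describes:

```python
import numpy as np

# Toy stand-in for the center-finding step: after the white image is
# thresholded, each microlens spot's center can be taken as the
# intensity-weighted centroid of its pixels. A single synthetic spot:

def spot_centroid(img):
    """Intensity-weighted centroid (row, col) of one spot image."""
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

spot = np.zeros((9, 9))
spot[3:6, 3:6] = 1.0        # 3x3 bright patch centered at (4, 4)
row_c, col_c = spot_centroid(spot)
```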
Artifacts in Digital Coincidence Timing
Moses, W. W.; Peng, Q.
2014-01-01
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e., the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the “optimal” method. The purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization. PMID:25321885
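The artifact can be reproduced in a few lines. The compressive transfer function below is invented for illustration; the point is only that a nonlinear per-channel digitization can make the subtracted time differences cluster more tightly than the true jitter:

```python
import numpy as np

# Toy illustration of the artifact described above: if each channel's
# digitized arrival time is a nonlinear function of the true time, the
# subtracted differences can cluster more tightly than the true jitter,
# faking a better coincidence resolution. The compressive transfer
# function below is invented purely for illustration.

def tdc(t):
    """Hypothetical nonlinear (compressive) time-digitization transfer."""
    return 0.5 * np.sign(t) * t ** 2

rng = np.random.default_rng(2)
true_t1 = rng.normal(0.0, 0.1, 10_000)   # true arrival times with jitter
true_t2 = rng.normal(0.0, 0.1, 10_000)

true_res = (true_t1 - true_t2).std()            # genuine resolution
meas_res = (tdc(true_t1) - tdc(true_t2)).std()  # artificially narrowed
```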
Russo, Paolo; Mettivier, Giovanni
2011-04-01
The goal of this study is to evaluate a new method based on a coded aperture mask combined with a digital x-ray imaging detector for measurements of the focal spot sizes of diagnostic x-ray tubes. Common techniques for focal spot size measurements employ a pinhole camera, a slit camera, or a star resolution pattern. The coded aperture mask is a radiation collimator consisting of a large number of apertures disposed on a predetermined grid in an array, through which the radiation source is imaged onto a digital x-ray detector. The method of the coded mask camera allows one to obtain a one-shot accurate and direct measurement of the two dimensions of the focal spot (like that for a pinhole camera) but at a low tube loading (like that for a slit camera). A large number of small apertures in the coded mask operate as a "multipinhole" with greater efficiency than a single pinhole, but keeping the resolution of a single pinhole. X-ray images result from the multiplexed output on the detector image plane of such a multiple aperture array, and the image of the source is digitally reconstructed with a deconvolution algorithm. Images of the focal spot of a laboratory x-ray tube (W anode: 35-80 kVp; focal spot size of 0.04 mm) were acquired at different geometrical magnifications with two different types of digital detector (a photon counting hybrid silicon pixel detector with 0.055 mm pitch and a flat panel CMOS digital detector with 0.05 mm pitch) using a high resolution coded mask (type no-two-holes-touching modified uniformly redundant array) with 480 0.07 mm apertures, designed for imaging at energies below 35 keV. Measurements with a slit camera were performed for comparison. A test with a pinhole camera and with the coded mask on a computed radiography mammography unit with 0.3 mm focal spot was also carried out. 
The full width at half maximum focal spot sizes were obtained from the line profiles of the decoded images, showing a focal spot of 0.120 mm x 0.105 mm at 35 kVp and M = 6.1, with a detector entrance exposure as low as 1.82 mR (0.125 mA s tube load). The slit camera indicated a focal spot of 0.112 mm x 0.104 mm at 35 kVp and M = 3.15, with an exposure at the detector of 72 mR. Focal spot measurements with the coded mask could be performed up to 80 kVp. Tolerance to angular misalignment with the reference beam of up to 7 degrees in in-plane rotations and 1 degree in out-of-plane rotations was observed. The axial distance of the focal spot from the coded mask could also be determined. It is possible to determine the beam intensity via measurement of the intensity of the decoded image of the focal spot and via a calibration procedure. Coded aperture masks coupled to a digital area detector produce precise determinations of the focal spot of an x-ray tube with reduced tube loading and measurement time, coupled to a large tolerance in the alignment of the mask.
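The "digitally reconstructed with a deconvolution algorithm" step above can be sketched in one dimension: the detector records the circular convolution of the source with the mask pattern, and a regularized (Wiener-style) deconvolution recovers the source. The mask here is a random binary stand-in, not the paper's no-two-holes-touching modified uniformly redundant array.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
mask = (rng.random(n) > 0.5).astype(float)  # stand-in for one coded-aperture row

# A point-like focal spot at position 17, imaged through the mask:
# the detector records the circular convolution of source and mask.
source = np.zeros(n)
source[17] = 1.0
recorded = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(mask)))

# Wiener-style deconvolution recovers the source from the multiplexed record.
H = np.fft.fft(mask)
eps = 1e-3  # small regularizer against near-zero spectral bins
decoded = np.real(np.fft.ifft(np.fft.fft(recorded) * np.conj(H)
                              / (np.abs(H) ** 2 + eps)))
print(int(np.argmax(decoded)))  # position of the reconstructed focal spot
```

A true MURA mask allows decoding by simple correlation with a two-valued decoding array instead of a spectral division; the FFT route shown here works for any mask with a well-conditioned spectrum.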
NASA Astrophysics Data System (ADS)
Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi
2006-02-01
Digital still cameras overtook film cameras in Japanese market in 2000 in terms of sales volume owing to their versatile functions. However, the image-capturing capabilities such as sensitivity and latitude of color films are still superior to those of digital image sensors. In this paper, we attribute the cause for the high performance of color films to their multi-layered structure, and propose the solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.
Pulsed spatial phase-shifting digital shearography based on a micropolarizer camera
NASA Astrophysics Data System (ADS)
Aranchuk, Vyacheslav; Lal, Amit K.; Hess, Cecil F.; Trolinger, James Davis; Scott, Eddie
2018-02-01
We developed a pulsed digital shearography system that utilizes the spatial phase-shifting technique. The system employs a commercial micropolarizer camera and a double pulse laser, which allows for instantaneous phase measurements. The system can measure dynamic deformation of objects as large as 1 m at a 2-m distance during the time between two laser pulses that range from 30 μs to 30 ms. The ability of the system to measure dynamic deformation was demonstrated by obtaining phase wrapped and unwrapped shearograms of a vibrating object.
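In polarization-based spatial phase shifting, the four micropolarizer orientations of such a camera sample four interferograms with mutual phase steps of pi/2, from which the phase is recovered with the standard four-bucket formula. A single-pixel sketch (the 0/45/90/135-degree-to-phase-step mapping is an assumption; sign conventions vary between systems):

```python
import numpy as np

phi = 1.234      # unknown interferometric phase at one pixel (radians)
B, C = 2.0, 0.7  # background intensity and fringe modulation

# Assumed mapping of micropolarizer orientations to phase steps:
# 0 deg -> 0, 45 deg -> pi/2, 90 deg -> pi, 135 deg -> 3*pi/2.
I0, I45, I90, I135 = [B + C * np.cos(phi + k * np.pi / 2) for k in range(4)]

# Standard four-bucket phase recovery from the four simultaneous samples.
phi_hat = np.arctan2(I135 - I45, I0 - I90)
```

Because all four samples come from a single camera frame, the phase can be computed from each pulse pair instantaneously, which is what enables the dynamic deformation measurements described in the abstract.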
STS-116 MS Fuglesang uses digital camera on the STBD side of the S0 Truss during EVA 4
2006-12-19
S116-E-06882 (18 Dec. 2006) --- European Space Agency (ESA) astronaut Christer Fuglesang, STS-116 mission specialist, uses a digital still camera during the mission's fourth session of extravehicular activity (EVA) while Space Shuttle Discovery was docked with the International Space Station. Astronaut Robert L. Curbeam Jr. (out of frame), mission specialist, worked in tandem with Fuglesang, using specially-prepared, tape-insulated tools, to guide the array wing neatly inside its blanket box during the 6-hour, 38-minute spacewalk.
Solar-Powered Airplane with Cameras and WLAN
NASA Technical Reports Server (NTRS)
Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.;
2004-01-01
An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.
Removal of instrument signature from Mariner 9 television images of Mars
NASA Technical Reports Server (NTRS)
Green, W. B.; Jepsen, P. L.; Kreznar, J. E.; Ruiz, R. M.; Schwartz, A. A.; Seidman, J. B.
1975-01-01
The Mariner 9 spacecraft was inserted into orbit around Mars in November 1971. The two vidicon camera systems returned over 7300 digital images during orbital operations. The high volume of returned data and the scientific objectives of the Television Experiment made development of automated digital techniques for the removal of camera system-induced distortions from each returned image necessary. This paper describes the algorithms used to remove geometric and photometric distortions from the returned imagery. Enhancement processing of the final photographic products is also described.
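The Mariner 9 pipeline's specific algorithms are not reproduced here, but the generic photometric (flat-field) correction that such pipelines apply to remove camera-induced shading can be sketched as follows (all frames are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
scene = rng.uniform(0.2, 1.0, (8, 8))  # "true" scene radiance
dark = 0.05 * np.ones((8, 8))          # dark/offset frame (no illumination)
flat = rng.uniform(0.7, 1.3, (8, 8))   # pixel-to-pixel gain (vidicon shading)

raw = scene * flat + dark              # what the camera actually records

# Classic photometric correction: subtract the dark frame,
# then divide by the per-pixel gain measured from a flat-field exposure.
corrected = (raw - dark) / flat
```

Geometric distortion removal is the complementary step: reseau marks or other fiducials on the vidicon face are located in each image and used to warp the pixels back to an undistorted grid.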
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, such as its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also maintain that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of downstream photogrammetric products.
Characterization of Vegetation using the UC Davis Remote Sensing Testbed
NASA Astrophysics Data System (ADS)
Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.
2006-12-01
Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAVs) and other mobile platforms. Our approach provides rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines, and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, an inertial measurement unit, a 3-axis electronic compass, and digital cameras. The onboard camera is either an RGB digital camera or a modified digital camera with red and NIR channels. The cameras were calibrated using selective light sources, an integrating sphere, and a spectrometer, allowing for the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces, as well as imaging applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies of the impact of bird colonies on tree canopies, and precision agriculture.
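The NDVI mentioned above is computed per pixel from the calibrated red and near-infrared channels; a minimal sketch with made-up reflectance values (vegetated pixels in the top row, bare soil below):

```python
import numpy as np

red = np.array([[0.08, 0.10],
                [0.30, 0.25]])  # calibrated red reflectance
nir = np.array([[0.45, 0.50],
                [0.32, 0.28]])  # calibrated NIR reflectance

# Normalized Difference Vegetation Index: high for healthy vegetation
# (strong NIR reflectance, strong red absorption), near zero for soil.
ndvi = (nir - red) / (nir + red)
```

The camera calibration against the integrating sphere and spectrometer is what makes the per-band reflectance values, and hence the index, comparable across flights.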
Optics pioneers scoop Nobel prize
NASA Astrophysics Data System (ADS)
Banks, Michael
2009-11-01
Three physicists who carried out pioneering work in former industrial research labs have picked up this year's Nobel Prize for Physics. One half of the SEK 10m prize has been awarded to Charles Kao, 75, for his work at the UK-based Standard Telephones and Cables (STC) on the transmission of light in optical fibres, which underpinned the telecommunications revolution. The other half of the prize is shared between Willard Boyle, 85, and George Smith, 79, of Bell Laboratories in New Jersey, US, for inventing the charge-coupled device (CCD) - an imaging semiconductor circuit that forms the basis of most digital cameras.
NASA Astrophysics Data System (ADS)
Ruggeri, Marco; Hernandez, Victor; De Freitas, Carolina; Relhan, Nidhi; Silgado, Juan; Manns, Fabrice; Parel, Jean-Marie
2016-03-01
Hand-held wide-field contact color fundus photography is currently the standard method of acquiring diagnostic images of children during examination under anesthesia and in the neonatal intensive care unit. The recent development of portable non-contact hand-held OCT retinal imaging systems has proved that OCT is of tremendous help in complementing fundus photography in the management of pediatric patients. Currently, there is no commercial or research system that combines color wide-field digital fundus and OCT imaging in a contact fashion. Contact between the probe and the cornea has the advantages of reducing the motion experienced by the photographer during imaging and of providing fundus and OCT images with a wider field of view that includes the periphery of the retina. In this study we provide a proof of concept of a contact-type hand-held unit for simultaneous color fundus and OCT live viewing of the retina of pediatric patients. The front piece of the hand-held unit consists of a contact ophthalmoscopy lens integrating a circular light guide that was recovered from a digital fundus camera for pediatric imaging. The custom-made rear piece consists of the optics to: 1) fold the visible aerial image of the fundus generated by the ophthalmoscopy lens onto a miniaturized board-level digital color camera; 2) conjugate the eye pupil to the galvanometric scanning mirrors of an OCT delivery system. Wide-field color fundus and OCT images were simultaneously obtained in an eye model and sequentially obtained on the eye of a conscious 25-year-old human subject with a healthy retina.
NASA Astrophysics Data System (ADS)
Turley, Anthony Allen
Many research projects require the use of aerial images. Wetlands evaluation, crop monitoring, wildfire management, environmental change detection, and forest inventory are but a few of the applications of aerial imagery. Low altitude Small Format Aerial Photography (SFAP) is a bridge between satellite and man-carrying aircraft image acquisition and ground-based photography. The author's project evaluates digital images acquired using low cost commercial digital cameras and standard model airplanes to determine their suitability for remote sensing applications. Images from two different sites were obtained. Several photo missions were flown over each site, acquiring images in the visible and near infrared electromagnetic bands. Images were sorted and analyzed to select those with the least distortion, and blended together with Microsoft Image Composite Editor. By selecting images taken within minutes apart, radiometric qualities of the images were virtually identical, yielding no blend lines in the composites. A commercial image stitching program, Autopano Pro, was purchased during the later stages of this study. Autopano Pro was often able to mosaic photos that the free Image Composite Editor was unable to combine. Using telemetry data from an onboard data logger, images were evaluated to calculate scale and spatial resolution. ERDAS ER Mapper and ESRI ArcGIS were used to rectify composite images. Despite the limitations inherent in consumer grade equipment, images of high spatial resolution were obtained. Mosaics of as many as 38 images were created, and the author was able to record detailed aerial images of forest and wetland areas where foot travel was impractical or impossible.
Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance.
Shenai, Mahesh B; Dillavou, Marcus; Shum, Corey; Ross, Douglas; Tubbs, Richard S; Shih, Alan; Guthrie, Barton L
2011-03-01
Surgery is a highly technical field that combines continuous decision-making with the coordination of spatiovisual tasks. We designed a virtual interactive presence and augmented reality (VIPAR) platform that allows a remote surgeon to deliver real-time virtual assistance to a local surgeon, over a standard Internet connection. The VIPAR system consisted of a "local" and a "remote" station, each situated over a surgical field and a blue screen, respectively. Each station was equipped with a digital viewpiece, composed of 2 cameras for stereoscopic capture, and a high-definition viewer displaying a virtual field. The virtual field was created by digitally compositing selected elements within the remote field into the local field. The viewpieces were controlled by workstations mutually connected by the Internet, allowing virtual remote interaction in real time. Digital renderings derived from volumetric MRI were added to the virtual field to augment the surgeon's reality. For demonstration, a fixed-formalin cadaver head and neck were obtained, and a carotid endarterectomy (CEA) and pterional craniotomy were performed under the VIPAR system. The VIPAR system allowed for real-time, virtual interaction between a local (resident) and remote (attending) surgeon. In both carotid and pterional dissections, major anatomic structures were visualized and identified. Virtual interaction permitted remote instruction for the local surgeon, and MRI augmentation provided spatial guidance to both surgeons. Camera resolution, color contrast, time lag, and depth perception were identified as technical issues requiring further optimization. Virtual interactive presence and augmented reality provide a novel platform for remote surgical assistance, with multiple applications in surgical training and remote expert assistance.
Method for acquiring, storing and analyzing crystal images
NASA Technical Reports Server (NTRS)
Gester, Thomas E. (Inventor); Rosenblum, William M. (Inventor); Christopher, Gayle K. (Inventor); Hamrick, David T. (Inventor); Delucas, Lawrence J. (Inventor); Tillotson, Brian (Inventor)
2003-01-01
A system utilizing a digital computer for acquiring, storing and evaluating crystal images. The system includes a video camera (12) which produces a digital output signal representative of a crystal specimen positioned within its focal window (16). The digitized output from the camera (12) is then stored on data storage media (32) together with other parameters inputted by a technician and relevant to the crystal specimen. Preferably, the digitized images are stored on removable media (32) while the parameters for different crystal specimens are maintained in a database (40) with indices to the digitized optical images on the other data storage media (32). Computer software is then utilized to identify not only the presence and number of crystals and the edges of the crystal specimens from the optical image, but to also rate the crystal specimens by various parameters, such as edge straightness, polygon formation, aspect ratio, surface clarity, crystal cracks and other defects or lack thereof, and other parameters relevant to the quality of the crystals.
Exploring of PST-TBPM in Monitoring Bridge Dynamic Deflection in Vibration
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Liu, Shengzhen; Zhao, Tonglong; Yu, Chengxin
2018-01-01
This study adopts digital photography to monitor bridge dynamic deflection in vibration. The digital photography used here is based on PST-TBPM (photographing scale transformation-time baseline parallax method). First, a digital camera captures the bridge at rest as the zero image. Then, the camera captures the vibrating bridge every three seconds as the successive images. Based on the reference system, PST-TBPM is used to process the images and obtain the bridge's dynamic deflection in vibration. Results show average measurement accuracies of 0.615 pixels and 0.79 pixels in the X and Z directions, respectively. The maximal deflection of the bridge is 7.14 pixels. PST-TBPM is effective in addressing the problem of the photographing direction not being perpendicular to the bridge. Digital photography as used in this study can assess bridge health by monitoring the bridge's dynamic deflection in vibration, and the deformation trend curves depicted over time can also warn of possible dangers.
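The core image-processing step in such deflection monitoring is tracking a target's pixel displacement between the zero image and each successive image; the pixel offsets are then converted to metric deflection via the photographing scale. A brute-force template-match sketch (a synthetic rigid shift stands in for real bridge motion; PST-TBPM's scale transformation and parallax correction are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
zero_img = rng.uniform(0, 1, (40, 40))  # image of the bridge at rest
# Simulate a rigid shift of (dz, dx) = (2, 3) pixels in the vibrating frame.
succ_img = np.roll(np.roll(zero_img, 2, axis=0), 3, axis=1)

template = zero_img[15:25, 15:25]       # patch around a bridge target point

# Exhaustive SSD search for the template in the successive image.
best, best_off = np.inf, None
for dz in range(-5, 6):
    for dx in range(-5, 6):
        patch = succ_img[15 + dz:25 + dz, 15 + dx:25 + dx]
        ssd = np.sum((patch - template) ** 2)
        if ssd < best:
            best, best_off = ssd, (dz, dx)
print(best_off)  # pixel displacement of the target between frames
```

Multiplying the recovered pixel offset by the photographing scale (mm per pixel at the target's distance) yields the deflection in physical units.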
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
Improved TDEM formation using fused ladar/digital imagery from a low-cost small UAV
NASA Astrophysics Data System (ADS)
Khatiwada, Bikalpa; Budge, Scott E.
2017-05-01
Formation of a Textured Digital Elevation Model (TDEM) has been useful in many applications in the fields of agriculture, disaster response, terrain analysis, and more. Use of a low-cost small UAV system with a texel camera (fused lidar/digital imagery) can significantly reduce the cost compared to conventional aircraft-based methods. This paper reports continued work on the problem reported in a previous paper by Bybee and Budge, and describes improvements in performance. A UAV fitted with a texel camera is flown at a fixed height above the terrain, and swaths of texel image data of the terrain below are taken continuously. Each texel swath has one or more lines of lidar data surrounded by a narrow strip of EO data. Texel swaths are taken such that there is some overlap from one swath to the next. The GPS/IMU fitted on the camera also gives coarse knowledge of attitude and position. Using this coarse knowledge and the information from the texel image, the error in the camera position and attitude is reduced, which helps in producing an accurate TDEM. This paper reports improvements on the original work by using multiple lines of lidar data per swath. The final results are shown and analyzed for numerical accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazareth, D; Malhotra, H; French, S
Purpose: Breast radiotherapy, particularly electronic compensation, may involve large dose gradients and difficult patient positioning problems. We have developed a simple self-calibrating augmented-reality system, which assists in accurately and reproducibly positioning the patient by displaying her live image from a single camera superimposed on the correct perspective projection of her 3D CT data. Our method requires only a standard digital camera capable of live-view mode, installed in the treatment suite at an approximately known orientation and position (rotation R; translation T). Methods: A 10-sphere calibration jig was constructed and CT imaged to provide a 3D model. The (R,T) relating the camera to the CT coordinate system were determined by acquiring a photograph of the jig and optimizing an objective function, which compares the true image points to points calculated with a given candidate R and T geometry. Using this geometric information, the 3D CT patient data, viewed from the camera's perspective, is plotted using a Matlab routine. This image data is superimposed onto the real-time patient image acquired by the camera and displayed using standard live-view software. This enables the therapists to view both the patient's current and desired positions and to guide the patient into assuming the correct position. The method was evaluated using an in-house developed bolus-like breast phantom, mounted on a supporting platform that could be tilted at various angles to simulate treatment-like geometries. Results: Our system allowed breast phantom alignment with an accuracy of about 0.5 cm and 1 ± 0.5 degree. Better resolution could be possible using a camera with higher-zoom capabilities. Conclusion: We have developed an augmented-reality system which combines a perspective projection of a CT image with a patient's real-time optical image. This system has the potential to improve patient setup accuracy during breast radiotherapy, and could possibly be used for other disease sites as well.
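The self-calibration step described above, fitting (R, T) by minimizing the mismatch between photographed jig points and their predicted projections, can be sketched with a pinhole camera model. For brevity this sketch uses a single rotation angle about one axis (the real system fits a full 3-DOF rotation), and the focal length and jig geometry are assumed values:

```python
import numpy as np

f = 800.0  # focal length in pixels (assumed)

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def project(points, theta, T):
    # Pinhole projection of 3D CT-frame points into the camera image.
    cam = (rot_z(theta) @ points.T).T + T
    return f * cam[:, :2] / cam[:, 2:3]

rng = np.random.default_rng(4)
# Ten calibration spheres roughly 100 units in front of the camera.
jig = rng.uniform(-10, 10, (10, 3)) + np.array([0.0, 0.0, 100.0])

theta_true, T_true = 0.1, np.array([2.0, -1.0, 5.0])
observed = project(jig, theta_true, T_true)  # "photographed" jig points

def objective(theta, T):
    # Sum of squared reprojection errors, the quantity the paper optimizes
    # over candidate (R, T) geometries.
    return np.sum((project(jig, theta, T) - observed) ** 2)
```

Minimizing `objective` (e.g. with a generic nonlinear optimizer) recovers the camera pose; the same `project` function then renders the CT data from the camera's perspective for the overlay.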
Laser-Induced-Fluorescence Photogrammetry and Videogrammetry
NASA Technical Reports Server (NTRS)
Danehy, Paul; Jones, Tom; Connell, John; Belvin, Keith; Watson, Kent
2004-01-01
An improved method of dot-projection photogrammetry and an extension of the method to encompass dot-projection videogrammetry overcome some deficiencies of dot-projection photogrammetry as previously practiced. The improved method makes it possible to perform dot-projection photogrammetry or videogrammetry on targets that have previously not been amenable to dot-projection photogrammetry because they do not scatter enough light. Such targets include ones that are transparent, specularly reflective, or dark. In standard dot-projection photogrammetry, multiple beams of white light are projected onto the surface of an object of interest (denoted the target) to form a known pattern of bright dots. The illuminated surface is imaged in one or more cameras oriented at a nonzero angle or angles with respect to a central axis of the illuminating beams. The locations of the dots in the image(s) contain stereoscopic information on the locations of the dots, and, hence, on the location, shape, and orientation of the illuminated surface of the target. The images are digitized and processed to extract this information. Hardware and software to implement standard dot-projection photogrammetry are commercially available. Success in dot-projection photogrammetry depends on achieving sufficient signal-to-noise ratios: that is, it depends on scattering of enough light by the target so that the dots as imaged in the camera(s) stand out clearly against the ambient-illumination component of the image of the target. In one technique used previously to increase the signal-to-noise ratio, the target is illuminated by intense, pulsed laser light and the light entering the camera(s) is band-pass filtered at the laser wavelength. 
Unfortunately, speckle caused by the coherence of the laser light engenders apparent movement in the projected dots, thereby giving rise to errors in the measurement of the centroids of the dots and corresponding errors in the computed shape and location of the surface of the target. The improved method is denoted laser-induced-fluorescence photogrammetry.
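Dot-projection photogrammetry ultimately reduces each imaged dot to a centroid; the speckle problem above matters precisely because it perturbs these centroids. A minimal intensity-weighted centroid sketch on a synthetic dot (the Gaussian spot, background level, and threshold are all assumed values):

```python
import numpy as np

rng = np.random.default_rng(5)
img = 0.02 * rng.random((30, 30))  # weak ambient background
yy, xx = np.mgrid[0:30, 0:30]
# One projected dot: a Gaussian spot centered at (row, col) = (12.3, 17.8).
img += np.exp(-((yy - 12.3) ** 2 + (xx - 17.8) ** 2) / (2 * 2.0 ** 2))

# Intensity-weighted centroid after suppressing the background.
m = np.where(img > 0.1, img, 0.0)
cy = np.sum(yy * m) / np.sum(m)
cx = np.sum(xx * m) / np.sum(m)
```

Triangulating such sub-pixel centroids from two or more camera views yields the 3D dot positions, and hence the target surface; laser speckle corrupts the measurement by shifting these centroids from frame to frame.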
Confocal Retinal Imaging Using a Digital Light Projector with a Near Infrared VCSEL Source
Muller, Matthew S.; Elsner, Ann E.
2018-01-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1″ LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.
Informed Consent, Use, and Storage of Digital Photography Among Mohs Surgeons in the United States.
Rimoin, Lauren; Haberle, Sasha; DeLong Aspey, Laura; Grant-Kels, Jane M; Stoff, Benjamin
2016-03-01
Digital photography is pervasive in dermatology. Potential uses include monitoring untreated disease, disease progression and treatment response, evaluating medical and cosmetic treatment, determining surgical sites, educating trainees and colleagues, and publishing reports in scientific journals. However, the nature of use, storage, and informed consent practices for digital photography among dermatologic surgeons has not been investigated. This study used a comprehensive survey to elucidate these elements to better define standard practice. A survey was created on SurveyMonkey. An email with the survey link was sent to all members of the American College of Mohs Surgery listserv with 2 follow-up emails. One hundred fifty-eight Mohs surgeons responded to the survey. Respondents indicated a wide variety in the type of camera and storage modality used for patient photographs. There was a variety of opinions on how to conceal a patient's identity when using photographs for educational purposes, and what features of a photo make it identifiable. Dermatologic surgeons vary widely on practices of photo storage and opinions of identifiability. Dermatology as a specialty may consider generating a consensus statement on appropriate use and storage of digital photography in dermatology practice.
Method for the visualization of landform by mapping using low altitude UAV application
NASA Astrophysics Data System (ADS)
Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William
2018-05-01
Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are evolving rapidly in mapping technology, and the significance of and need for digital landform mapping have grown over the years. In this study, a mapping workflow is applied to obtain two different data sets: the orthophoto and the DSM. Low Altitude Aerial Photography (LAAP) was captured by a low-altitude UAV (drone) with a fixed advanced camera, while digital photogrammetric processing using PhotoScan was applied for cartographic data collection. Data processing through photogrammetry and orthomosaicking are the main applications. High image quality is essential for the effectiveness and quality of common mapping outputs such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM), and orthoimages. The accuracy of the Ground Control Points (GCPs), the flight altitude, and the resolution of the camera are essential for a good-quality DEM and orthophoto.
Measurement of solar extinction in tower plants with digital cameras
NASA Astrophysics Data System (ADS)
Ballestrín, J.; Monterreal, R.; Carra, M. E.; Fernandez-Reche, J.; Barbero, J.; Marzo, A.
2016-05-01
Atmospheric extinction of solar radiation between the heliostat field and the receiver is accepted as a non-negligible source of energy loss in the increasingly large central receiver plants. However, there is currently no reliable measurement method for this quantity, and at present these plants are designed, built, and operated without knowledge of this local parameter. Nowadays digital cameras are used in many scientific applications for their ability to convert available light into digital images. Their broad spectral range, high resolution, and high signal-to-noise ratio make them an interesting device in solar technology. In this work a method for atmospheric extinction measurement based on digital images is presented. The possibility of defining a measurement setup in circumstances similar to those of a tower plant increases the credibility of the method. This procedure is currently being implemented at the Plataforma Solar de Almería.
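Camera-based extinction measurements of this kind typically rest on Beer-Lambert attenuation: imaging two identical targets at different slant ranges lets the extinction coefficient be recovered from the intensity ratio. A minimal sketch with assumed values (the paper's actual setup and calibration are not reproduced):

```python
import numpy as np

# Beer-Lambert attenuation: I(d) = I0 * exp(-beta * d)
beta_true = 0.12e-3  # extinction coefficient in 1/m (assumed value)
I0 = 1000.0          # target radiance at zero range, in camera counts

d1, d2 = 500.0, 1500.0  # slant ranges of two identical targets (m)
I1 = I0 * np.exp(-beta_true * d1)
I2 = I0 * np.exp(-beta_true * d2)

# Recover extinction from the ratio of the two digital-image intensities;
# the unknown I0 and camera gain cancel in the ratio.
beta_hat = np.log(I1 / I2) / (d2 - d1)
```

Because only the ratio enters, the method needs a radiometrically linear camera rather than an absolutely calibrated one, which is part of what makes digital cameras attractive for this measurement.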
Non-Invasive Detection of Anaemia Using Digital Photographs of the Conjunctiva.
Collings, Shaun; Thompson, Oliver; Hirst, Evan; Goossens, Louise; George, Anup; Weinkove, Robert
2016-01-01
Anaemia is a major health burden worldwide. Although the finding of conjunctival pallor on clinical examination is associated with anaemia, inter-observer variability is high, and definitive diagnosis of anaemia requires a blood sample. We aimed to detect anaemia by quantifying conjunctival pallor using digital photographs taken with a consumer camera and a popular smartphone. Our goal was to develop a non-invasive screening test for anaemia. The conjunctivae of haemato-oncology in- and outpatients were photographed in ambient lighting using a digital camera (Panasonic DMC-LX5), and the internal rear-facing camera of a smartphone (Apple iPhone 5S) alongside an in-frame calibration card. Following image calibration, conjunctival erythema index (EI) was calculated and correlated with laboratory-measured haemoglobin concentration. Three clinicians independently evaluated each image for conjunctival pallor. Conjunctival EI was reproducible between images (average coefficient of variation 2.96%). EI of the palpebral conjunctiva correlated more strongly with haemoglobin concentration than that of the forniceal conjunctiva. Using the compact camera, palpebral conjunctival EI had a sensitivity of 93% and 57% and specificity of 78% and 83% for detection of anaemia (haemoglobin < 110 g/L) in training and internal validation sets, respectively. Similar results were found using the iPhone camera, though the EI cut-off value differed. Conjunctival EI analysis compared favourably with clinician assessment, with a higher positive likelihood ratio for prediction of anaemia. Erythema index of the palpebral conjunctiva calculated from images taken with a compact camera or mobile phone correlates with haemoglobin and compares favourably to clinician assessment for prediction of anaemia. If confirmed in further series, this technique may be useful for the non-invasive screening for anaemia.
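The paper does not reproduce its erythema index formula in this abstract; a commonly used definition in skin and mucosa imaging is EI = 100 * log10(red/green) computed on calibrated reflectance values, which is shown here as an assumed stand-in:

```python
import numpy as np

def erythema_index(red, green):
    # One common EI definition (an assumption, not necessarily the paper's):
    # EI = 100 * log10(red / green) on calibrated channel reflectances.
    # Haemoglobin absorbs green strongly, so perfused tissue has high EI.
    return 100.0 * np.log10(red / green)

pale = erythema_index(0.60, 0.45)    # pale conjunctiva: red close to green
normal = erythema_index(0.60, 0.30)  # well-perfused conjunctiva: red >> green
```

The in-frame calibration card mentioned in the abstract is what makes the channel values comparable across lighting conditions and cameras before the index is computed.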
NASA Technical Reports Server (NTRS)
Bendura, R. J.; Renfroe, P. G.
1974-01-01
A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in the FORTRAN IV language for the Control Data 6000 series digital computer.
Harrefors, Christina; Sävenstedt, Stefan; Lundquist, Anders; Lundquist, Bengt; Axelsson, Karin
2012-01-01
Cognitive impairments affect the ability of persons with dementia to remember daily events and maintain a sense of self. In order to address these problems a digital photo diary was developed to capture information about events in daily life. The device consisted of a wearable digital camera, a smart phone with Global Positioning System (GPS) and a home memory station with a computer for uploading the photographs and a touch screen. The aim of this study was to describe professional caregivers' perceptions of how persons with mild dementia might experience the usage of this digital photo diary, both when wearing the camera and when viewing the uploaded photos, through a questionnaire with 408 respondents. In order to capture the professional caregivers' perceptions, a questionnaire with the semantic differential technique was used, and the main question was "How do you think Hilda (the fictive person in the questionnaire) feels when she is using the digital photo diary?". The factor analysis revealed three factors: Sense of autonomy, Sense of self-esteem and Sense of trust. An interesting conclusion that can be drawn is that professional caregivers had an overall positive view of the usage of the digital photo diary as supporting autonomy for persons with mild dementia. The meaningfulness of each of the two situations, wearing the camera and viewing the uploaded pictures, has to be considered separately, as each is a distinct part of one integrated assistive device. The individual needs and desires of the person living with dementia, and the context of each individual, have to be reflected on and taken into account before implementing assistive digital devices as a tool in care. PMID:22509232
STS-93 Commander Collins uses a digital camera on the middeck of Columbia
2013-11-18
STS093-347-015 (23-27 July 1999) --- Astronaut Eileen M. Collins, mission commander, loads a roll of film into a still camera on Columbia's middeck. Collins is the first woman mission commander in the history of human space flight.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by 2.41 times. We built one prototype DMD camera with a working F/# of 1.23, and the field experiments proved the validity and reliability of our work.
BAE Systems' 17μm LWIR camera core for civil, commercial, and military applications
NASA Astrophysics Data System (ADS)
Lee, Jeffrey; Rodriguez, Christian; Blackwell, Richard
2013-06-01
Seventeen (17) µm pixel Long Wave Infrared (LWIR) sensors based on vanadium oxide (VOx) micro-bolometers have been in full rate production at BAE Systems' Night Vision Sensors facility in Lexington, MA for the past five years.[1] We introduce here a commercial camera core product, the Airia-MTM imaging module, in a VGA format that reads out in 30 and 60Hz progressive modes. The camera core is architected to conserve power, with all-digital interfaces from the readout integrated circuit through to the video output. The architecture enables a variety of input/output interfaces including Camera Link, USB 2.0, micro-display drivers and optional RS-170 analog output supporting legacy systems. The modular board architecture of the electronics facilitates hardware upgrades, allowing us to capitalize on the latest high-performance, low-power electronics developed for mobile phones. Software and firmware are field upgradeable through a USB 2.0 port. The USB port also gives users access to up to 100 digitally stored (lossless) images.
Esthetic smile preferences and the orientation of the maxillary occlusal plane.
Kattadiyil, Mathew T; Goodacre, Charles J; Naylor, W Patrick; Maveli, Thomas C
2012-12-01
The anteroposterior orientation of the maxillary occlusal plane has an important role in the creation, assessment, and perception of an esthetic smile. However, the effect of the angle at which this plane is visualized (the viewing angle) in a broad smile has not been quantified. The purpose of this study was to assess the esthetic preferences of dental professionals and nondentists by using 3 viewing angles of the anteroposterior orientation of the maxillary occlusal plane. After Institutional Review Board approval, standardized digital photographic images of the smiles of 100 participants were recorded by simultaneously triggering 3 cameras set at different viewing angles. The top camera was positioned 10 degrees above the occlusal plane (camera #1, Top view); the center camera was positioned at the level of the occlusal plane (camera #2, Center view); and the bottom camera was located 10 degrees below the occlusal plane (camera #3, Bottom view). Forty-two dental professionals and 31 nondentists (persons from the general population) independently evaluated digital images of each participant's smile captured from the Top view, Center view, and Bottom view. The 73 evaluators were asked individually through a questionnaire to rank the 3 photographic images of each patient as 'most pleasing,' 'somewhat pleasing,' or 'least pleasing,' with most pleasing being the most esthetic view and the preferred orientation of the occlusal plane. The resulting esthetic preferences were statistically analyzed by using the Friedman test. In addition, the participants were asked to rank their own images from the 3 viewing angles as 'most pleasing,' 'somewhat pleasing,' and 'least pleasing.' The 73 evaluators found statistically significant differences in the esthetic preferences between the Top and Bottom views and between the Center and Bottom views (P<.001). No significant differences were found between the Top and Center views. 
The Top position was marginally preferred over the Center, and both were significantly preferred over the Bottom position. When the participants evaluated their own smiles, a significantly greater number (P<.001) preferred the Top view over the Center or the Bottom views. No significant differences were found in preferences based on the demographics of the evaluators when comparing age, education, gender, profession, and race. The esthetic preference for the maxillary occlusal plane was influenced by the viewing angle, with the higher (Top) and center views preferred by both dental and nondental evaluators. The participants themselves preferred the higher view of their smile significantly more often than the center or lower angle views (P<.001). Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Validation of geometric models for fisheye lenses
NASA Astrophysics Data System (ADS)
Schneider, D.; Schwalbe, E.; Maas, H.-G.
The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, stereographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all four of the basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object space precision obtained from the bundle adjustment was ca. 1:10 000 of the object dimensions. This value can be considered a very satisfying result, as fisheye images generally have a lower geometric resolution as a consequence of their large field of view and also an inferior imaging quality in comparison to most central perspective lenses.
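For reference, the four basic fisheye models investigated here map the incidence angle θ (measured from the optical axis) to a radial distance r in the image plane. A minimal sketch, with f (focal length) in the same units as r:

```python
import math

# Radial image distance r(theta) for the four basic fisheye projections
# (theta in radians, measured from the optical axis; f = focal length).
def equidistant(f, theta):
    return f * theta

def equisolid(f, theta):
    return 2.0 * f * math.sin(theta / 2.0)

def stereographic(f, theta):
    return 2.0 * f * math.tan(theta / 2.0)

def orthographic(f, theta):
    return f * math.sin(theta)
```

All four agree with the small-angle limit r ≈ f·θ near the axis and diverge toward the edge of the hemispherical field; for the 8 mm lens at θ = 90° they give roughly 12.6, 11.3, 16.0 and 8.0 mm respectively, which is why the choice of base model matters before distortion parameters absorb the residual differences.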
Error rate of automated calculation for wound surface area using digital photography.
Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H
2018-02-01
Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its compatibility has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of variously sized wounds (diameter: 0.5-3.5 cm) were taken of the facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound sizes and types of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wounds with a diameter below 3 cm, the REs of the average values of the four photographs were below 5%. In addition, there was no difference in the average value of the wound area taken by smartphone and DSLR camera in those cases. For the follow-up of small skin defects (diameter: <3 cm), our newly developed automated wound area calculation method can be applied to multiple photographs, and their average values are a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Burst mode composite photography for dynamic physics demonstrations
NASA Astrophysics Data System (ADS)
Lincoln, James
2018-05-01
I am writing this article to raise awareness of burst mode photography as a fun and engaging way for teachers and students to experience physics demonstration activities. In the context of digital photography, "burst mode" means taking multiple photographs per second, and this is a feature that now comes standard on most digital cameras—including the iPhone. Sometimes the images are composited to imply motion from a series of still pictures. By analyzing the time between the photos, students can measure rates of velocity and acceleration of moving objects. Some of these composite photographs have already shown up in the AAPT High School Physics Photo Contest. In this article I discuss some ideas for using burst mode photography in the iPhone and provide a discussion of how to edit these photographs to create a composite image. I also compare the capabilities of the iPhone and GoPro cameras in creating these photographic composites.
Photogrammetric Trajectory Estimation of Foam Debris Ejected From an F-15 Aircraft
NASA Technical Reports Server (NTRS)
Smith, Mark S.
2006-01-01
Photogrammetric analysis of high-speed digital video data was performed to estimate trajectories of foam debris ejected from an F-15B aircraft. This work was part of a flight test effort to study the transport properties of insulating foam shed by the Space Shuttle external tank during ascent. The conical frustum-shaped pieces of debris, called "divots," were ejected from a flight test fixture mounted underneath the F-15B aircraft. Two onboard cameras gathered digital video data at two thousand frames per second. Time histories of divot positions were determined from the videos post flight using standard photogrammetry techniques. Divot velocities were estimated by differentiating these positions with respect to time. Time histories of divot rotations were estimated using four points on the divot face. Estimated divot position, rotation, and Mach number for selected cases are presented. Uncertainty in the results is discussed.
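The velocity estimation step, differentiating the photogrammetric position time histories with respect to time, can be sketched with a central-difference scheme; the function and the constant-velocity example below are illustrative, not the report's actual code:

```python
import numpy as np

def central_diff_velocity(positions, dt):
    """Velocity time history from sampled positions.

    np.gradient uses central differences in the interior and one-sided
    differences at the endpoints; dt is the frame interval
    (1/2000 s for the onboard cameras described in the report).
    """
    return np.gradient(np.asarray(positions, dtype=float), dt)

# Sanity check: positions advancing at a constant 2 m/s, sampled at 2000 fps.
dt = 1.0 / 2000.0
velocities = central_diff_velocity([2.0 * k * dt for k in range(5)], dt)
```

Differentiating noisy position data amplifies measurement noise, so in practice the positions would typically be smoothed or filtered before (or while) differentiating.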
Development of an imaging method for quantifying a large digital PCR droplet
NASA Astrophysics Data System (ADS)
Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang
2017-02-01
Portable devices have been recognized as the future linkage between end-users and lab-on-a-chip devices. They have a user-friendly interface and provide apps that interface with headphones, cameras, communication channels, etc. In particular, the cameras installed in smartphones or pads already offer high imaging resolution with a high number of pixels. This unique feature has triggered research into integrating optical fixtures with smartphones to provide microscopic imaging capabilities. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images taken of a digital PCR biochip with a smartphone in a black box. Each reaction droplet is recorded in pixels and is analyzed in the sRGB (red, green, and blue) color space. A multistep filtering algorithm and an auto-threshold algorithm are adopted to minimize the background noise contributed by CCD cameras and to rule out false positive droplets, respectively. Finally, a size-filtering method is applied to identify the number of positive droplets and quantify the target's concentration. Statistical analysis is then performed for diagnostic purposes. This process can be integrated in an app and can provide a user-friendly interface without professional training.
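A minimal sketch of the droplet-counting stage described above: threshold one color channel, group bright pixels into connected components, then apply a size filter to reject noise specks and merged droplets. The threshold, connectivity and size bounds are illustrative assumptions, not values from the paper:

```python
import numpy as np
from collections import deque

def count_positive_droplets(green, thresh, min_size, max_size):
    """Count fluorescence-positive droplets in a single-channel image.

    Pipeline sketch: global threshold -> 4-connected component grouping
    (breadth-first search) -> size filter rejecting specks (< min_size)
    and merged blobs (> max_size).
    """
    mask = np.asarray(green) >= thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # Flood-fill one connected component and measure its size.
                size = 0
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if min_size <= size <= max_size:
                    count += 1
    return count
```

With the positive-droplet count in hand, target concentration follows from Poisson statistics over the total droplet count, which is the usual digital PCR quantification step.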
Automatic rice crop height measurement using a field server and digital image processing.
Sritarapipat, Tanakorn; Rakwatin, Preesan; Kasetkasem, Teerasit
2014-01-07
Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the height of the marker bar compared to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection is used to remove redundant features. Filtering extracts significant features of the marker bar. The thresholding method is applied to separate objects and boundaries of the marker bar versus other areas. The marker bar is detected and compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.
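One way the marker-bar comparison could be realized, as a hedged sketch: the growing crop occludes the bar from below, so the covered fraction of the initially visible bar, scaled by the bar's physical length, gives the crop height. The column range, brightness threshold and bar geometry below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def crop_height_cm(image_gray, bar_cols, bar_thresh, initial_bar_px, bar_length_cm):
    """Estimate crop height from the still-visible part of a marker bar.

    image_gray:     2-D grayscale image after band selection/filtering
    bar_cols:       (start, stop) column range containing the bar
    bar_thresh:     brightness threshold separating bar from background
    initial_bar_px: visible bar height (pixels) in the initial image
    bar_length_cm:  physical length of the marker bar
    """
    strip = np.asarray(image_gray)[:, bar_cols[0]:bar_cols[1]]
    bar_rows = np.any(strip >= bar_thresh, axis=1)   # rows where bar shows
    visible_px = int(bar_rows.sum())
    covered_px = max(initial_bar_px - visible_px, 0)  # occluded by the crop
    return covered_px / initial_bar_px * bar_length_cm
```

For example, if 30 of an initially visible 80 bar pixels are now occluded on a 200 cm bar, the estimated crop height is 75 cm. Taking the images at a consistent time of day, as the field server does, keeps illumination (and hence the threshold) stable.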
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
Hobbs, Michael T.; Brehme, Cheryl S.
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.
NASA Astrophysics Data System (ADS)
Haubeck, K.; Prinz, T.
2013-08-01
The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs but, when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions, entail the problem that two single aerial images do not always meet the overlap required to use them for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM, however, directly depends on the UAV flight altitude.
Oliphant, Huw; Kennedy, Alasdair; Comyn, Oliver; Spalton, David J; Nanavaty, Mayank A
2018-06-16
To compare slit lamp mounted cameras (SLC) versus a digital compact camera (DCC) with a slit-lamp adaptor when used by an inexperienced technician. In this cross-sectional study, in which posterior capsule opacification (PCO) was used as a comparator, patients consented to one photograph with the SLC and two with the DCC (DCC1 and DCC2), with a slit-lamp adaptor. One eye of each patient was recruited; an inexperienced clinic technician took all the photographs and masked the images. Images were graded for PCO using EPCO2000 software by two independent masked graders. Repeatability between DCC1 and DCC2, and limits-of-agreement between the SLC and DCC1 mounted on the slit-lamp with an adaptor, were assessed. The coefficient-of-repeatability and Bland-Altman plots were analyzed. Seventy-two patients (eyes) were recruited in the study. The first 9 patients (eyes) were excluded due to unsatisfactory image quality from both systems. The mean EPCO score for the SLC was 2.28 (95% CI: 2.09-2.45), for DCC1 was 2.28 (95% CI: 2.11-2.45), and for DCC2 was 2.11 (95% CI: 2.11-2.45). There was no significant difference in EPCO scores between the SLC and DCC1 (p = 0.98) or between DCC1 and DCC2 (p = 0.97). The coefficient of repeatability between DCC images was 0.42, and the coefficient of repeatability between DCC and SLC was 0.58. A DCC on a slit-lamp with an adaptor is comparable to an SLC. There is an initial learning curve, which is similar for both systems for an inexperienced person. This opens up the possibility of low cost anterior segment imaging in clinical, research and teaching settings.
2012-11-08
S48-E-013 (15 Sept 1991) --- The Upper Atmosphere Research Satellite (UARS) in the payload bay of the earth-orbiting Discovery. UARS is scheduled for deploy on flight day three of the STS-48 mission. Data from UARS will enable scientists to study ozone depletion in the stratosphere, or upper atmosphere. This image was transmitted by the Electronic Still Camera (ESC), Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE.
ERIC Educational Resources Information Center
Bueno de Mesquita, Paul; Dean, Ross F.; Young, Betty J.
2010-01-01
Advances in digital video technology create opportunities for more detailed qualitative analyses of actual teaching practice in science and other subject areas. User-friendly digital cameras and highly developed, flexible video-analysis software programs have made the tasks of video capture, editing, transcription, and subsequent data analysis…
Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST
1993-12-06
S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Instrumental Response Model and Detrending for the Dark Energy Camera
Bernstein, G. M.; Abbott, T. M. C.; Desai, S.; ...
2017-09-14
We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈2 mmag and ≈3 mas, respectively, of fundamental atmospheric and statistical limits. In conclusion, the DES techniques should be broadly applicable to wide-field imagers.
Vector-Based Ground Surface and Object Representation Using Cameras
2009-12-01
… representations, and it is a digital data structure used for the representation of a ground surface in geographical information systems (GIS). … the Vision API library, and the OpenCV library. Also, the POSIX thread library was utilized to quickly capture the source images from the cameras. …
Signori, Cácia; Collares, Kauê; Cumerlato, Catarina B F; Correa, Marcos B; Opdam, Niek J M; Cenci, Maximiliano S
2018-04-01
The aim of this study was to investigate the validity of intraoral digital photography in the evaluation of dental restorations. Intraoral photographs of anterior and posterior restorations were classified based on FDI criteria according to the need for intervention: no intervention, repair or replacement. Evaluations were performed by an experienced expert in restorative dentistry (gold standard evaluator) and 3 trained dentists (consensus). Clinical inspection was the reference standard method. The prevalence of failures was explored. Cohen's kappa statistic was used. Validity was assessed by sensitivity, specificity, likelihood ratios and predictive values. A higher prevalence of failed restorations requiring intervention was identified by intraoral photography (17.7%) than by clinical evaluation (14.1%). Moderate agreement in the diagnosis of total failures was shown between the methods for the gold standard evaluator (kappa = 0.51) and the consensus of evaluators (kappa = 0.53). The gold standard evaluator and the consensus showed substantial and moderate agreement for posterior restorations (kappa = 0.61; 0.59), and fair and moderate agreement for anterior restorations (kappa = 0.36; 0.43), respectively. The accuracy was 84.8% in the assessment by intraoral photographs. Sensitivity and specificity values of 87.5% and 89.3% were found. Within the limits of this study, the assessment of digital photographs taken with an intraoral camera is a valid indirect diagnostic method for the evaluation of dental restorations, mainly in posterior teeth. This method should be employed taking into account the higher detection of defects provided by the images, which are not always clinically relevant. Copyright © 2018 Elsevier Ltd. All rights reserved.
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital Still Cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor or one of the newer Foveon buried-photodiode sensors. There is a strong tendency by consumers to consider only the number of megapixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including the size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, prosumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
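As a worked illustration of one quantity such an analysis produces, per-pixel dynamic range follows directly from the full-well capacity and the noise floor. The sketch below takes read noise as the floor (ignoring shot noise and dark current) and uses example electron counts that are not from the paper:

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Per-pixel dynamic range from full-well capacity and read noise
    (both in electrons), returned in decibels and photographic stops.

    Simplification: the noise floor is read noise alone; shot noise and
    dark current would lower the usable range in practice.
    """
    ratio = full_well_e / read_noise_e
    return 20.0 * math.log10(ratio), math.log2(ratio)

# Example: a 20,000 e- full well with 5 e- read noise.
db, stops = dynamic_range(20000, 5)   # ratio 4000 -> ~72 dB, ~12 stops
```

This is why full-well capacity per unit area matters in the paper's framework: shrinking pixels lowers full-well electrons and, for fixed read noise, directly erodes dynamic range and exposure latitude.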
Digital Camera Control for Faster Inspection
NASA Technical Reports Server (NTRS)
Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel
2009-01-01
Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running on a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.
Lunar UV-visible-IR mapping interferometric spectrometer
NASA Technical Reports Server (NTRS)
Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.
1992-01-01
An ultraviolet-visible-infrared mapping digital array scanned interferometer for lunar compositional surveys was developed. The research has defined a no-moving-parts, low-weight, low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing and improving upon all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides a new and important ultraviolet spectral mapping capability, a high-spatial-resolution line scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, together with spectral results in support of the instrument design.
A fast one-chip event-preprocessor and sequencer for the Simbol-X Low Energy Detector
NASA Astrophysics Data System (ADS)
Schanz, T.; Tenzer, C.; Maier, D.; Kendziorra, E.; Santangelo, A.
2010-12-01
We present FPGA-based digital camera electronics consisting of an Event-Preprocessor (EPP) for on-board data preprocessing and a related Sequencer (SEQ) that generates the signals needed to control the readout of the detector. The device was originally designed for the Simbol-X low energy detector (LED). The EPP operates on 64×64-pixel images and has a real-time processing capability of more than 8000 frames per second. The working releases of the EPP and the SEQ have now been combined into one Digital-Camera-Controller-Chip (D3C).
Digital readout for image converter cameras
NASA Astrophysics Data System (ADS)
Honour, Joseph
1991-04-01
There is an increasing need for fast and reliable analysis of recorded sequences from image converter cameras so that experimental information can be readily evaluated without recourse to more time-consuming photographic procedures. A digital readout system has been developed using a randomly triggerable high-resolution CCD camera, the output of which is suitable for use with an IBM AT-compatible PC. Within half a second of receipt of the trigger pulse, the frame reformatter displays the image, and transfer to storage media can be readily achieved via the PC and dedicated software. Two software programmes offer different levels of image manipulation, including enhancement routines and parameter calculations with accuracy down to the pixel level. Hard-copy prints can be acquired using a specially adapted Polaroid printer; outputs for laser and video printers extend the overall versatility of the system.
Depth estimation using a lightfield camera
NASA Astrophysics Data System (ADS)
Roper, Carissa
The latest innovation in camera design has come in the form of the lightfield, or plenoptic, camera, which captures 4-D radiance data rather than just the 2-D scene image via microlens arrays. With the spatial and angular light-ray data now recorded on the camera sensor, it is feasible to construct algorithms that estimate depth of field in different portions of a given scene. There are limits to the precision due to the hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on the commercially available plenoptic camera.
USDA-ARS?s Scientific Manuscript database
The proliferation of tower-mounted cameras co-located with eddy covariance instrumentation provides a novel opportunity to better understand the relationship between canopy phenology and the seasonality of canopy photosynthesis. In this paper, we describe the abilities and limitations of webcams to ...
Low-cost camera modifications and methodologies for very-high-resolution digital images
USDA-ARS?s Scientific Manuscript database
Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...
Krikalev in front of flight deck windows
2001-03-12
STS102-E-5139 (12 March 2001) --- Cosmonaut Sergei K. Krikalev, now a member of the STS-102 crew, prepares to use a camera on Discovery's flight deck. Krikalev, representing Rosaviakosmos, had been onboard the International Space Station (ISS) since early November 2000. The photograph was taken with a digital still camera.
The future of consumer cameras
NASA Astrophysics Data System (ADS)
Battiato, Sebastiano; Moltisanti, Marco
2015-03-01
In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, their increasing computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends in the consumer camera market and technology is given, with some details about the recent past (from the digital still camera up to today) and forthcoming key issues.
Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views
2014-11-10
collected these datasets using different aircraft. The Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of carrying high-resolution cinema-grade cameras. The Blackmagic Production Camera is another high-resolution, cinema-grade camera, capable of capturing video at 4K resolution at 30 frames per second, used for crowd counting with high-resolution cinema-grade digital video.
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
Neil A. Clark; Sang-Mook Lee
2004-01-01
This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...
NASA Astrophysics Data System (ADS)
Wang, Sheng; Bandini, Filippo; Jakobsen, Jakob; Zarco-Tejada, Pablo J.; Köppl, Christian Josef; Haugård Olesen, Daniel; Ibrom, Andreas; Bauer-Gottwein, Peter; Garcia, Monica
2017-04-01
Unmanned Aerial Systems (UAS) can collect optical and thermal hyperspatial (<1 m) imagery at low cost and with flexible revisit times regardless of cloudy conditions. The reflectance and radiometric temperature signatures of the land surface, closely linked with vegetation structure and functioning, are already part of models that predict evapotranspiration (ET) and Gross Primary Productivity (GPP) from satellites. However, challenges remain for operational monitoring with UAS compared to satellites: the payload capacity of most commercial UAS is less than 2 kg; miniaturized sensors have low signal-to-noise ratios; and their small field of view requires mosaicking hundreds of images and accurate orthorectification. In addition, wind gusts and lower platform stability require appropriate geometric and radiometric corrections. Finally, modeling fluxes on days without images is still an issue for both satellite and UAS applications. This study focuses on designing an operational UAS-based monitoring system, including payload design and sensor calibration, based on routine collection of optical and thermal images in a Danish willow field, to perform joint monitoring of ET and GPP dynamics continuously at daily time steps. The payload (<2 kg) consists of a multispectral camera (Tetra Mini-MCA6), a thermal infrared camera (FLIR Tau 2), a digital camera (Sony RX-100) used to retrieve accurate digital elevation models (DEMs) for multispectral and thermal image orthorectification, and a standard GNSS single-frequency receiver (UBlox) or a real-time kinematic double-frequency system (Novatel Inc. flexpack6+OEM628). Geometric calibration of the digital and multispectral cameras was conducted to recover intrinsic camera parameters. After geometric calibration, accurate DEMs with vertical errors of about 10 cm could be retrieved.
Radiometric calibration for the multispectral camera was conducted with an integrating sphere (Labsphere CSTM-USS-2000C) and the laboratory calibration showed that the camera measured radiance had a bias within ±4.8%. The thermal camera was calibrated using a black body at varying target and ambient temperatures and resulted in laboratory accuracy with RMSE of 0.95 K. A joint model of ET and GPP was applied using two parsimonious, physiologically based models, a modified version of the Priestley-Taylor Jet Propulsion Laboratory model (Fisher et al., 2008; Garcia et al., 2013) and a Light Use Efficiency approach (Potter et al., 1993). Both models estimate ET and GPP under optimum potential conditions down-regulated by the same biophysical constraints dependent on remote sensing and atmospheric data to reflect multiple stresses. Vegetation indices were calculated from the multispectral data to assess vegetation conditions, while thermal infrared imagery was used to compute a thermal inertia index to infer soil moisture constraints. To interpolate radiometric temperature between flights, a prognostic Surface Energy Balance model (Margulis et al., 2001) based on the force-restore method was applied in a data assimilation scheme to obtain continuous ET and GPP fluxes. With this operational system, regular flight campaigns with a hexacopter (DJI S900) have been conducted in a Danish willow flux site (Risø) over the 2016 growing season. The observed energy, water and carbon fluxes from the Risø eddy covariance flux tower were used to validate the model simulation. This UAS monitoring system is suitable for agricultural management and land-atmosphere interaction studies.
Tests of commercial colour CMOS cameras for astronomical applications
NASA Astrophysics Data System (ADS)
Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.
2013-12-01
We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics: read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec), and electronic gain (e^{-}/ADU) are presented for the commercial digital camera Canon 5D Mark III. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. The test results for the Canon 5D Mark III and the CCD ALTA E47 show that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
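Gain and read-noise figures of the kind quoted here are conventionally measured with the photon-transfer (mean-variance) method: for a shot-noise-limited sensor, the temporal variance in ADU grows linearly with the mean signal, with slope 1/gain and intercept equal to the read-noise variance. A sketch on synthetic flat frames; all parameter values are assumptions for illustration, not measurements of either camera.

```python
import numpy as np

rng = np.random.default_rng(0)
gain_true = 2.5        # electrons per ADU (assumed)
read_noise_adu = 5.0   # read noise in ADU (assumed)
n = 200_000            # pixels per synthetic flat frame

means, variances = [], []
for level_e in [100, 400, 1600, 6400]:      # exposure levels in electrons
    # two flats per level; differencing cancels fixed-pattern noise, and
    # the variance of the difference is twice the temporal variance
    f1 = rng.poisson(level_e, n) / gain_true + rng.normal(0, read_noise_adu, n)
    f2 = rng.poisson(level_e, n) / gain_true + rng.normal(0, read_noise_adu, n)
    means.append((f1.mean() + f2.mean()) / 2)
    variances.append((f1 - f2).var() / 2)

# variance = mean / gain + read_variance  ->  straight-line fit
slope, intercept = np.polyfit(means, variances, 1)
gain_est = 1.0 / slope                # e-/ADU
read_noise_est = intercept ** 0.5     # ADU
print(gain_est, read_noise_est)
```

The same procedure applies per Bayer channel for a colour sensor, since each channel has its own effective gain.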
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing. PMID:28981533
ColorChecker at the beach: dangers of sunburn and glare
NASA Astrophysics Data System (ADS)
McCann, John
2014-01-01
In High-Dynamic-Range (HDR) imaging, optical veiling glare sets the limits of accurate scene information recorded by a camera. But, what happens at the beach? Here we have a Low-Dynamic-Range (LDR) scene with maximal glare. Can we calibrate a camera at the beach and not be burnt? We know that we need sunscreen and sunglasses, but what about our cameras? The effect of veiling glare is scene-dependent. When we compare RAW camera digits with spotmeter measurements we find significant differences. As well, these differences vary, depending on where we aim the camera. When we calibrate our camera at the beach we get data that is valid for only that part of that scene. Camera veiling glare is an issue in LDR scenes in uniform illumination with a shaded lens.
Baker, Stokes S.; Vidican, Cleo B.; Cameron, David S.; Greib, Haittam G.; Jarocki, Christine C.; Setaputri, Andres W.; Spicuzza, Christopher H.; Burr, Aaron A.; Waqas, Meriam A.; Tolbert, Danzell A.
2012-01-01
Background and aims Studies have shown that levels of green fluorescent protein (GFP) leaf surface fluorescence are directly proportional to GFP soluble protein concentration in transgenic plants. However, instruments that measure GFP surface fluorescence are expensive. The goal of this investigation was to develop techniques with consumer digital cameras to analyse GFP surface fluorescence in transgenic plants. Methodology Inexpensive filter cubes containing machine vision dichroic filters and illuminated with blue light-emitting diodes (LED) were designed to attach to digital single-lens reflex (SLR) camera macro lenses. The apparatus was tested on purified enhanced GFP, and on wild-type and GFP-expressing arabidopsis grown autotrophically and heterotrophically. Principal findings Spectrum analysis showed that the apparatus illuminates specimens with wavelengths between ∼450 and ∼500 nm, and detects fluorescence between ∼510 and ∼595 nm. Epifluorescent photographs taken with SLR digital cameras were able to detect red-shifted GFP fluorescence in Arabidopsis thaliana leaves and cotyledons of pot-grown plants, as well as roots, hypocotyls and cotyledons of etiolated and light-grown plants grown heterotrophically. Green fluorescent protein fluorescence was detected primarily in the green channel of the raw image files. Studies with purified GFP produced linear responses to both protein surface density and exposure time (H0: β (slope) = 0 mean counts per pixel (ng s mm−2)−1, r2 > 0.994, n = 31, P < 1.75 × 10−29). Conclusions Epifluorescent digital photographs taken with complementary metal-oxide-semiconductor and charge-coupled device SLR cameras can be used to analyse red-shifted GFP surface fluorescence using visible blue light. This detection device can be constructed with inexpensive commercially available materials, thus increasing the accessibility of whole-organism GFP expression analysis to research laboratories and teaching institutions with small budgets. 
PMID:22479674
A simplified close range photogrammetry method for soil erosion assessment
USDA-ARS?s Scientific Manuscript database
With the increased affordability of consumer grade cameras and the development of powerful image processing software, digital photogrammetry offers a competitive advantage as a tool for soil erosion estimation compared to other technologies. One bottleneck of digital photogrammetry is its dependency...
Manifold-Based Image Understanding
2010-06-30
[3] employs a Texas Instruments digital micromirror device (DMD), which consists of an array of N electrostatically actuated micromirrors. The camera image x is reflected off the DMD array, whose mirror orientations are modulated in the pseudorandom pattern φm supplied by a...
Chalazonitis, A N; Koumarianos, D; Tzovara, J; Chronopoulos, P
2003-06-01
Over the past decade, the technology that permits images to be digitized, together with the reduction in the cost of digital equipment, has allowed quick digital transfer of any conventional radiological film. Images can then be transferred to a personal computer, and several software programs are available to manipulate their digital appearance. In this article, the fundamentals of digital imaging are discussed, as well as the wide variety of optional adjustments that the Adobe Photoshop 6.0 (Adobe Systems, San Jose, CA) program offers to present radiological images with satisfactory digital image quality.
ERIC Educational Resources Information Center
Boardman, Margot
2007-01-01
This study set out to investigate the use of digital cameras and voice recorders to accurately capture essential components of early learners' achievements. The project was undertaken by 29 early childhood educators within kindergarten settings in Tasmania and the Australian Capital Territory. Data collected indicated that digital technologies,…
Advanced Digital Forensic and Steganalysis Methods
2009-02-01
investigation is simultaneously cropped, scaled, and processed... (or other common processing operations). TECHNOLOGY APPLICATIONS: 1. Determining the origin of digital images. 2. Matching an image to a camera...
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.
2015-01-01
Digital holography is a technique that includes recording of an interference pattern with a digital photosensor, processing of the obtained holographic data, and reconstruction of the object wavefront. Increasing the signal-to-noise ratio (SNR) of reconstructed digital holograms is especially important in fields such as image encryption, pattern recognition, and static and dynamic display of 3D scenes. In this paper, compensation of the photosensor light spatial noise portrait (LSNP) to increase the SNR of reconstructed digital holograms is proposed. To verify the proposed method, numerical experiments with computer-generated Fresnel holograms with resolution equal to 512×512 elements were performed. Registration of shots with the digital camera Canon EOS 400D was simulated. It is shown that use of the averaging-over-frames method alone allows the SNR to be increased only up to 4 times, with further increase limited by spatial noise. Application of the LSNP compensation method in conjunction with the averaging-over-frames method allows a 10-fold SNR increase. This value was obtained for an LSNP measured with 20% error. With a more accurately measured LSNP, the SNR can be increased up to 20 times.
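The interplay of the two methods can be seen in a toy simulation: temporal noise averages down as 1/√N, while the fixed spatial pattern (the LSNP) does not, so subtracting a pre-measured LSNP removes the floor that averaging leaves behind. The noise levels below are arbitrary, chosen only to show the effect.

```python
import numpy as np

rng = np.random.default_rng(1)
ideal = np.full((64, 64), 100.0)          # ideal recorded intensity
lsnp = rng.normal(0.0, 5.0, ideal.shape)  # fixed per-pixel spatial noise

def capture_averaged(n_frames):
    # each frame carries the same spatial noise plus fresh temporal noise
    frames = [ideal + lsnp + rng.normal(0.0, 10.0, ideal.shape)
              for _ in range(n_frames)]
    return np.mean(frames, axis=0)

def snr(image):
    return ideal.mean() / (image - ideal).std()

avg = capture_averaged(100)
print("averaging only:  SNR =", round(snr(avg), 1))
print("with LSNP comp.: SNR =", round(snr(avg - lsnp), 1))
```

After 100-frame averaging the temporal noise is negligible, so the residual error is almost entirely the spatial pattern; subtracting the (here perfectly known) LSNP removes it, mirroring the abstract's observation that averaging alone saturates while LSNP compensation keeps improving the SNR.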
NASA Astrophysics Data System (ADS)
Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy
2015-03-01
Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This holds for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground-truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications, and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open-access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy.
We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
Miniaturized multiwavelength digital holography sensor for extensive in-machine tool measurement
NASA Astrophysics Data System (ADS)
Seyler, Tobias; Fratz, Markus; Beckmann, Tobias; Bertz, Alexander; Carl, Daniel
2017-06-01
In this paper we present a miniaturized digital holographic sensor (HoloCut) for operation inside a machine tool. With state-of-the-art 3D measurement systems, short-range structures such as tool marks cannot be resolved inside a machine tool chamber. Up to now, measurements had to be conducted outside the machine tool, and thus processing data were generated offline. The sensor presented here uses digital multiwavelength holography to obtain 3D shape information of the machined sample. By using three wavelengths, we obtain a large artificial wavelength with an unambiguous measurement range of 0.5 mm and achieve micron repeatability even in the presence of laser speckle on rough surfaces. In addition, a digital refocusing algorithm based on phase noise is implemented to extend the measurement range beyond the limits of the artificial wavelength and the geometrical depth of focus. With complex wave-field propagation, the focus plane can be shifted after the camera images have been taken, and a sharp image with extended depth of focus is constructed. With a 20 mm × 20 mm field of view, the sensor enables measurement of both macro- and micro-structure (such as tool marks) with an axial resolution of 1 µm and a lateral resolution of 7 µm, and consequently allows processing data to be generated online, which in turn qualifies it for machine tool control. To make HoloCut compact enough for operation inside a machining center, the beams are arranged in two planes: they are split into reference and object beams in the bottom plane and combined onto the camera in the top plane. Using a mechanical standard interface according to DIN 69893, with a very compact size of 235 mm × 140 mm × 215 mm (W×H×D) and a weight of 7.5 kg, HoloCut can be easily integrated into different machine tools and extends no further in height than a typical processing tool.
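The quoted 0.5 mm unambiguous range follows from the synthetic-wavelength relation: two wavelengths λ1, λ2 act like one artificial wavelength Λ = λ1·λ2/|λ1 − λ2|, with an unambiguous height range of Λ/2. The laser lines below are hypothetical, chosen only to reproduce a range of that order; HoloCut's actual wavelengths are not given in the abstract.

```python
def synthetic_wavelength(l1, l2):
    """Artificial (beat) wavelength of a two-wavelength interferogram."""
    return l1 * l2 / abs(l1 - l2)

# hypothetical laser pair ~0.4 nm apart (not HoloCut's actual lines)
l1, l2 = 632.8e-9, 632.4e-9            # metres
L = synthetic_wavelength(l1, l2)
print("synthetic wavelength: %.4f mm" % (L * 1e3))
print("unambiguous range:    %.4f mm" % (L / 2 * 1e3))
```

The closer the two lines, the larger Λ grows, which is why adding a third wavelength lets such sensors cascade from coarse, long-range phase maps down to single-wavelength precision.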
Analysis of the lateral push-off in the freestyle flip turn.
Araujo, Luciana; Pereira, Suzana; Gatti, Roberta; Freitas, Elinai; Jacomel, Gabriel; Roesler, Helio; Villas-Boas, Joao
2010-09-01
The aim of this study was to examine the contact phase during the lateral push-off in the turn of front crawl swimming to determine which biomechanical variables (maximum normalized peak force, contact time, impulse, angle of knee flexion, and total turn time within 15 m) contribute to the performance of this turn technique. Thirty-four swimmers of state, national, and international competitive standard participated in the study. For data collection, the following equipment was used: an underwater force platform, a 30-Hz VHS video camera, and a MiniDv digital camera within an underwater box. Data are expressed as descriptive statistics. Inferential analyses were performed using Pearson's correlation and multiple linear regressions. All variables studied had a significant relationship with turn performance. We conclude that a turn executed with a knee flexion angle of between 100° and 120° provides optimum peak forces to generate impulses that allow the swimmer to lose less time in the turn without the need for an excessive force application and with less energy lost.
Characterization of a digital camera as an absolute tristimulus colorimeter
NASA Astrophysics Data System (ADS)
Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual
2003-01-01
An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSCs) which allows them to be used as tele-colorimeters with CIE-XYZ color output, in cd/m². The spectral characterization consists of the calculation of the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute tristimulus values CIE-XYZ (in cd/m²) under variable and unknown spectroradiometric conditions. Thus, at the first stage, a gray balance is applied to the RGB digital data to convert them into RGB relative colorimetric values. At the second stage, an algorithm of luminance adaptation vs. lens aperture is inserted into the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the DSC color-analysis accuracy indexes, both in the raw state and with the corrections from a linear model of color correction, were evaluated using the Pointer '86 color reproduction index with the unrelated Hunt '91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
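At its core, such a colorimetric characterization includes a linear transform from gray-balanced RGB to XYZ that can be fitted by least squares over a set of training patches (a ColorChecker-style chart). A minimal sketch on synthetic data; the matrix and patch values are stand-ins, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# assumed "true" RGB -> XYZ transform (illustrative only)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])

rgb = rng.uniform(0.0, 1.0, (24, 3))                  # 24 chart patches
xyz = rgb @ M_true.T + rng.normal(0, 1e-3, (24, 3))   # "measured" XYZ

# least-squares fit of the 3x3 characterization matrix: xyz ~ rgb @ M.T
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = M_fit.T
print(np.round(M_fit, 3))
```

The paper goes further by minimizing perceptual (not least-squares) errors and by scaling the output to absolute cd/m² via the luminance-adaptation step, but the fitted 3×3 matrix is the backbone of the colorimetric profile.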
Programmable Remapper with Single Flow Architecture
NASA Technical Reports Server (NTRS)
Fisher, Timothy E. (Inventor)
1993-01-01
An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.
Bar-Gera, H; Musicant, O; Schechtman, E; Ze'evi, T
2016-11-01
The yellow-signal driver behavior, reflecting dilemma-zone behavior, is analyzed using naturalistic data from digital enforcement cameras. The key variable in the analysis is the entrance time after the yellow onset, and its distribution. This distribution can assist in determining two critical outcomes: the safety outcome related to red-light-running angle accidents, and the efficiency outcome. The connection to other approaches for evaluating yellow-signal driver behavior is also discussed. The dataset was obtained from 37 digital enforcement cameras at non-urban signalized intersections in Israel, over a period of nearly two years. The data contain more than 200 million vehicle entrances, of which 2.3% (∼5 million vehicles) entered the intersection during the yellow phase. In all non-urban signalized intersections in Israel the green phase ends with 3 s of flashing green, followed by 3 s of yellow. On most non-urban signalized roads in Israel the posted speed limit is 90 km/h. Our analysis focuses on crossings during the yellow phase and the first 1.5 s of the red phase. The analysis method consists of two stages. In the first stage we tested whether the frequency of crossings is constant at the beginning of the yellow phase. We found that the pattern was stable (i.e., the frequencies were constant) at 18 intersections, nearly stable at 13 intersections, and unstable at 6 intersections. In addition to the 6 intersections with unstable patterns, two other outlying intersections were excluded from subsequent analysis. Logistic regression models were fitted for each of the remaining 29 intersections. We examined both standard (exponential) logistic regression and four-parameter logistic regression. The results show a clear advantage for the former. The estimated parameters show that the time when the frequency of crossing reduces to half ranges from 1.7 to 2.3 s after yellow onset.
The duration of the reduction of the relative frequency from 0.9 to 0.1 ranged from 1.9 to 2.9 s. Copyright © 2015 Elsevier Ltd. All rights reserved.
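The two reported ranges describe the same fitted curve: for a standard (exponential) logistic decay of relative crossing frequency f(t) = 1/(1 + exp((t − t50)/s)), t50 is the time at which the frequency halves, and the drop from f = 0.9 to f = 0.1 takes 2·s·ln 9. A sketch with parameters chosen inside the reported ranges (illustrative, not the paper's estimates):

```python
import math

def rel_frequency(t, t50, s):
    """Relative crossing frequency t seconds after yellow onset."""
    return 1.0 / (1.0 + math.exp((t - t50) / s))

t50, s = 2.0, 0.55                        # assumed parameters (seconds)
drop_duration = 2 * s * math.log(9.0)     # time from f = 0.9 down to f = 0.1

print("frequency halves at t = %.1f s" % t50)
print("0.9 -> 0.1 drop lasts %.2f s" % drop_duration)
```

With these assumed values the halving time is 2.0 s and the 0.9-to-0.1 drop lasts about 2.4 s, both inside the intervals reported across the 29 intersections.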
Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.
2001-01-01
The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam is building on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro-electromechanical-system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, a rechargeable xenon gas propulsion system, a rechargeable lithium-ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio-frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximately flight-like configuration for demonstration on an air-bearing table. A pilot-in-the-loop and hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.
Optimum color filters for CCD digital cameras
NASA Astrophysics Data System (ADS)
Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl
1993-12-01
As part of the ESPRIT II project No. 2103 (MASCOT), a high-performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization, which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow, in principle, perfect colorimetric reproduction with imperceptible color noise, with special attention paid to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in a redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) appears feasible, implying that such an optimized color camera could achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
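The optimization criterion described above, minimizing perceivable color error in the 1976 CIELUV space, can be made concrete with a small sketch. The conversion follows the standard CIE 1976 L*u*v* definitions; the D65 white point is an assumption here, since the camera was actually optimized for several illuminants.

```python
import numpy as np

# D65 white point (an assumption; the camera supported several illuminants)
XYZ_N = np.array([95.047, 100.0, 108.883])

def xyz_to_luv(xyz, white=XYZ_N):
    """Convert a CIE XYZ stimulus to CIELUV (1976) coordinates."""
    X, Y, Z = xyz
    Xn, Yn, Zn = white

    def uv(x, y, z):
        # CIE 1976 u', v' chromaticity coordinates
        d = x + 15.0 * y + 3.0 * z
        return 4.0 * x / d, 9.0 * y / d

    u, v = uv(X, Y, Z)
    un, vn = uv(Xn, Yn, Zn)
    yr = Y / Yn
    if yr > (6.0 / 29.0) ** 3:
        L = 116.0 * yr ** (1.0 / 3.0) - 16.0
    else:
        L = (29.0 / 3.0) ** 3 * yr
    return np.array([L, 13.0 * L * (u - un), 13.0 * L * (v - vn)])

def delta_e_uv(xyz1, xyz2):
    """Perceptual color difference Delta E*uv between two XYZ stimuli."""
    return float(np.linalg.norm(xyz_to_luv(xyz1) - xyz_to_luv(xyz2)))
```

A filter/matrix optimizer in this spirit would sum such ΔE*uv values over the roughly 200 test colors and minimize that total.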
Adaptation of the Camera Link Interface for Flight-Instrument Applications
NASA Technical Reports Server (NTRS)
Randall, David P.; Mahoney, John C.
2010-01-01
COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost but sustain the required data speed and volume. The focal-plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight applications do not support the industry-standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.
Improving the color fidelity of cameras for advanced television systems
NASA Astrophysics Data System (ADS)
Kollarits, Richard V.; Gibbon, David C.
1992-08-01
In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors, as in conventional cameras. The principal drawback of the five-band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
A method for direct measurement of the first-order mass moments of human body segments.
Fujii, Yusaku; Shimada, Kazuhito; Maru, Koichi; Ozawa, Junichi; Lu, Rong-Sheng
2010-01-01
We propose a simple and direct method for measuring the first-order mass moment of a human body segment. With the proposed method, the first-order mass moment of the body segment can be directly measured by using only one precision scale and one digital camera. In the dummy mass experiment, the relative standard uncertainty of a single set of measurements of the first-order mass moment is estimated to be 1.7%. The measured value will be useful as a reference for evaluating the uncertainty of the body segment inertial parameters (BSPs) estimated using an indirect method.
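As an illustration of the quantity being measured: the first-order mass moment of a segment is its mass times the distance of its center of mass from a reference axis. The sketch below assumes a hypothetical setup in which the segment rests on a rigid board supported at a pivot edge on one side and the precision scale on the other; the setup and names are illustrative, not the authors' exact protocol.

```python
def first_order_mass_moment(reading_with_segment_kg, reading_board_only_kg,
                            support_span_m):
    """First-order mass moment (kg*m) of a segment about the pivot edge.

    Moment balance about the pivot: a segment of mass m with its center
    of mass at distance d from the pivot raises the scale reading by
    m*d/span, so m*d = (reading change) * span.
    """
    delta_kg = reading_with_segment_kg - reading_board_only_kg
    return delta_kg * support_span_m
```

For example, a 2 kg segment centered 0.5 m from the pivot on a 1 m span raises the reading by 1.0 kg, giving a moment of 1.0 kg·m.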
Error-proofing test system of industrial components based on image processing
NASA Astrophysics Data System (ADS)
Huang, Ying; Huang, Tao
2018-05-01
Due to the improvement of modern industrial standards and accuracy requirements, conventional manual inspection fails to satisfy the test standards of enterprises, so digital image processing techniques are used to gather and analyze information on the surfaces of industrial components in order to test them. To test the installation of automotive engine parts, this paper employs a camera to capture images of the components. After the images are preprocessed, including denoising, an image processing algorithm relying on flood fill is used to test the installation of the components. The results show that this system has very high test accuracy.
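A minimal sketch of the flood fill step the abstract relies on (the paper's actual variant and connectivity are not specified; 4-connectivity and an iterative queue are assumed here):

```python
from collections import deque

def flood_fill(image, seed, new_value):
    """Iterative 4-connected flood fill on a 2D grid (list of lists).

    Replaces the connected region containing `seed` with `new_value`;
    using a queue avoids the recursion-depth limit of the naive version.
    """
    rows, cols = len(image), len(image[0])
    r0, c0 = seed
    old = image[r0][c0]
    if old == new_value:
        return image
    queue = deque([(r0, c0)])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < rows and 0 <= c < cols and image[r][c] == old:
            image[r][c] = new_value
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return image
```

In an inspection system of this kind, the area of the filled region gives the size of a detected component, which can then be checked against the expected installation.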
NASA Astrophysics Data System (ADS)
Turko, Nir A.; Isbach, Michael; Ketelhut, Steffi; Greve, Burkhard; Schnekenburger, Jürgen; Shaked, Natan T.; Kemper, Björn
2017-02-01
We explored photothermal quantitative phase imaging (PTQPI) of living cells with functionalized nanoparticles (NPs) utilizing a cost-efficient setup based on a cell culture microscope. The excitation light was modulated by a mechanical chopper wheel with low frequencies. Quantitative phase imaging (QPI) was performed with Michelson interferometer-based off-axis digital holographic microscopy and a standard industrial camera. We present results from PTQPI observations on breast cancer cells that were incubated with functionalized gold NPs binding to the epidermal growth factor receptor. Moreover, QPI was used to quantify the impact of the NPs and the low frequency light excitation on cell morphology and viability.
Computer output microfilm (FR80) systems software documentation, volume 2
NASA Technical Reports Server (NTRS)
1975-01-01
The system consists of a series of programs which convert digital data from magnetic tapes into alpha-numeric characters, graphic plots, and imagery that is recorded on the face of a cathode ray tube. A special camera photographs the face of the tube on microfilm for subsequent display on a film reader. The applicable documents which apply to this system are delineated. The functional relationship between the system software, the standard insert routines, and the applications programs is described; all the applications programs are described in detail. Instructions for locating those documents are presented along with test preparations sheets for all baseline and/or program modification acceptance tests.
NASA Astrophysics Data System (ADS)
Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.
2014-12-01
Digital Elevation Models (DEMs) are a key tool for analyzing spatially dependent processes including snow accumulation on slopes or glacier mass balance. Acquiring DEMs within short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure taken from various points of view are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding constraints from Ground Control Points (GCPs) or by including the GPS positions of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called MicMac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. Although the harsh climatic environment of the Arctic region considered (Ny Ålesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows appear as a limitation because of the lack of color dynamics in the digital cameras we used.
A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty of the point cloud's absolute position in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.
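Converting the SfM point cloud from its arbitrary frame to an absolute coordinate system via GCP constraints amounts to estimating a similarity transform (scale, rotation, translation) between matched points. A minimal sketch using the Umeyama least-squares solution follows; this illustrates the principle only and is not MicMac's actual implementation.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama, 1991): finds scale c,
    rotation R and translation t so that c * R @ src_i + t best fits dst_i,
    e.g. mapping model-frame GCP coordinates onto surveyed world coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d                  # centered point sets
    cov = B.T @ A / n                              # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.array([1.0, 1.0, d])                    # reflection guard
    R = U @ np.diag(D) @ Vt
    c = (S * D).sum() / ((A ** 2).sum() / n)       # optimal scale
    t = mu_d - c * R @ mu_s
    return c, R, t
```

With three or more well-distributed, non-collinear GCPs the transform is over-determined, and residuals at the GCPs give a direct check on georeferencing quality.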
NASA Astrophysics Data System (ADS)
Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan
2008-03-01
A method of digital image measurement of specimen deformation based on CCD cameras and ImageJ software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, without any structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with black dots on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in increments from 0 N to 500 N, which simulated the double-feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. Digital 8-bit images were processed with ImageJ, a digital image processing program that can be freely downloaded from the National Institutes of Health. The procedure includes the recognition of digital markers, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and micro-angular rotation of the sacroiliac joint in the lateral view were calculated according to the marker movement. The results of digital image measurement showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°.
The load-displacement curves obtained from our optical measurement system matched the clinical results. Digital image measurement of specimen deformation based on CCD cameras and ImageJ software has good prospects for application in biomechanical research, with the advantages of a simple optical setup, non-contact operation, high precision, and no special requirements on the test environment.
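The center-of-mass step listed in the procedure, a weighted average of pixel gray values, can be sketched as follows (a minimal version; the preprocessing around it, such as inversion and segmentation, is not reproduced):

```python
import numpy as np

def gray_centroid(patch):
    """Sub-pixel marker center as the gray-value-weighted mean of pixel
    coordinates (the center-of-mass algorithm); returns (row, col)."""
    patch = np.asarray(patch, float)
    total = patch.sum()
    rows, cols = np.indices(patch.shape)
    return (rows * patch).sum() / total, (cols * patch).sum() / total
```

Because the result is a weighted mean rather than a pixel index, it yields sub-pixel coordinates, which is what makes precision on the order of hundredths of a pixel possible in principle.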
An algorithm for approximate rectification of digital aerial images
USDA-ARS?s Scientific Manuscript database
High-resolution aerial photography is one of the most valuable tools available for managing extensive landscapes. With recent advances in digital camera technology, computer hardware, and software, aerial photography is easier to collect, store, and transfer than ever before. Images can be automa...
USDA-ARS?s Scientific Manuscript database
Photography has been a welcome tool in assisting to document and convey qualitative soil information. Greater availability of digital cameras with increased information storage capabilities has promoted novel uses of this technology in investigations of water movement patterns, organic matter conte...
ERIC Educational Resources Information Center
Bull, Glen; Bell, Lynn
2009-01-01
The shift from analog to digital video transformed the system from a unidirectional analog broadcast to a two-way conversation, resulting in the birth of participatory media. Digital video offers new opportunities for teaching science, social studies, mathematics, and English language arts. The professional education associations for each content…
Digital Storytelling in the Language Arts Classroom
ERIC Educational Resources Information Center
Bull, Glen; Kajder, Sara
2005-01-01
Technology offers a number of opportunities for connecting classrooms with the world. The advent of the Internet has offered unprecedented prospects for classroom connections, but the recent diffusion of digital cameras throughout society offers instructional possibilities as well. This document provides a detailed examination of digital…
Exploring of PST-TBPM in Monitoring Dynamic Deformation of Steel Structure in Vibration
NASA Astrophysics Data System (ADS)
Chen, Mingzhi; Zhao, Yongqian; Hai, Hua; Yu, Chengxin; Zhang, Guojian
2018-01-01
In order to monitor the dynamic deformation of steel structures in real time, digital photography is used in this paper. Firstly, the grid method is used to correct the distortion of the digital camera. Then the digital cameras are used to capture the initial and experimental images of the steel structure to obtain its relative deformation. PST-TBPM (photographing scale transformation-time baseline parallax method) is used to eliminate the parallax error and convert the pixel change values of the deformation points into actual displacement values. To visualize the deformation trend of the steel structure, deformation curves are drawn based on the deformation values of the deformation points. Results show that the average absolute accuracy and relative accuracy of PST-TBPM are 0.28 mm and 1.1‰, respectively. Digital photography as used in this study can meet the accuracy requirements of steel structure deformation monitoring. Based on the on-site deformation curves, it can also provide safety warnings for the steel structure and data to support managers' safety decisions.
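The scale-transformation part of PST-TBPM, converting pixel changes into millimetres, reduces to a ratio against a reference length of known physical size visible in the image. A minimal sketch follows (the time-baseline parallax correction itself is omitted, and the function and parameter names are illustrative):

```python
def pixel_to_displacement(pixel_shift, ref_length_mm, ref_length_px):
    """Convert an image-plane pixel shift to object-space millimetres
    using a reference length of known size in the image (the
    photographing-scale-transformation step; the parallax correction
    of the time-baseline method is not modeled here)."""
    scale = ref_length_mm / ref_length_px   # millimetres per pixel
    return pixel_shift * scale
```

For example, if a 500 mm reference bar spans 1000 pixels, a 10-pixel shift of a deformation point corresponds to 5 mm of displacement.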
Warped document image correction method based on heterogeneous registration strategies
NASA Astrophysics Data System (ADS)
Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan
2013-03-01
With the popularity of digital cameras and the application requirements of digitized document images, using digital cameras to digitize documents has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve the visual quality and the OCR rate of warped document images, this paper proposes a warped document image correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to the OCR results. As a result, for the best mosaicked image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the issue of warped document image correction effectively.
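One plausible way to register a feature point selected in one image against the second image is normalized cross-correlation template matching. The sketch below is a generic illustration of that idea, not the paper's heterogeneous registration strategies.

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` by normalized cross-correlation;
    returns the (row, col) of the best-matching top-left corner."""
    img, tpl = np.asarray(image, float), np.asarray(template, float)
    th, tw = tpl.shape
    tz = tpl - tpl.mean()
    tn = np.sqrt((tz ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            win = img[r:r + th, c:c + tw]
            wz = win - win.mean()
            wn = np.sqrt((wz ** 2).sum())
            if wn == 0 or tn == 0:
                continue  # flat window: correlation undefined, skip
            score = (wz * tz).sum() / (wn * tn)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Normalized cross-correlation is robust to uniform brightness and contrast changes between the two viewpoints, which matters when the two shots are taken under slightly different lighting.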
Use of a color CMOS camera as a colorimeter
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Redford, Gary R.
2006-08-01
In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.
Instrumentation for Aim Point Determination in the Close-in Battle
2007-12-01
One way of making a measurement is to mount a small "lipstick" camera to the rifle with a mount similar to the laser-tag transmitter mount (http://www.army-technology.com/contractors/surveillance/viotac-inc/viotac-inc1.html). Figure 4: Rugged camcorder with remote "lipstick" camera (http://www.samsung.com/Products/Camcorder/DigitalMemory/files/scx210wl.pdf).
Data Mining and Information Technology: Its Impact on Intelligence Collection and Privacy Rights
2007-11-26
Sources include cameras: digital cameras (still and video) have been improving in capability while simultaneously dropping in cost at a rate... citizen is caught on camera 300 times each day. The power of extensive video coverage is magnified greatly by the nascent capability for voice and... software on security videos and tracking cell phone usage in the local area. However, it would only return the names and data of those who
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design, with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time, a large 391-megapixel CMOS sensor was used as the panchromatic sensor, which is an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
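The dynamic range comparison above can be made concrete: sensor dynamic range is commonly quoted as the ratio of full-well capacity to read noise, expressed in dB or in stops. A minimal sketch follows; the electron counts one would plug in are sensor-specific and are not given in the abstract.

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Sensor dynamic range in dB from full-well capacity and read
    noise, both in electrons: 20 * log10(full_well / read_noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

def dynamic_range_stops(full_well_e, read_noise_e):
    """The same ratio expressed in photographic stops (factors of two)."""
    return math.log2(full_well_e / read_noise_e)
```

On this measure, "twice the dynamic range" in the linear ratio corresponds to a gain of about 6 dB, or one stop.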
Reliable enumeration of malaria parasites in thick blood films using digital image analysis.
Frean, John A
2009-09-23
Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve, because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image) signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. 
The requirements are reasonably easy to meet: a digital microscope camera, a personal computer, and good-quality staining of slides.
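The size-adaptive particle counting described above can be sketched as connected-component labeling of a thresholded image with a size gate; the sketch below is a generic illustration in the spirit of the method, not the authors' actual program.

```python
import numpy as np

def count_particles(mask, min_area, max_area):
    """Count 4-connected foreground components of a boolean mask whose
    pixel area lies within [min_area, max_area]; the size gate mimics
    adapting the count to the parasite size-frequency distribution."""
    mask = np.asarray(mask, bool)
    seen = np.zeros_like(mask)
    count = 0
    for r0 in range(mask.shape[0]):
        for c0 in range(mask.shape[1]):
            if mask[r0, c0] and not seen[r0, c0]:
                # grow this component with an explicit stack
                stack, area = [(r0, c0)], 0
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    area += 1
                    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                                and mask[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            stack.append((nr, nc))
                if min_area <= area <= max_area:
                    count += 1
    return count
```

Rejecting components outside the expected size band is one way to suppress stain debris and merged clumps, the kind of noise that degrades the signal/noise ratio at low parasite densities.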