Objective analysis of image quality of video image capture systems
NASA Astrophysics Data System (ADS)
Rowberg, Alan H.
1990-07-01
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications, and some anatomy or pathology may not be visualized if an image capture system is used improperly.
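The slew-rate test pattern described above is easy to reproduce for bench testing. The sketch below is a hypothetical illustration, not the original test image: it generates a grayscale image with ten-pixel-wide black and white equilibration strips followed by alternating one-pixel black and white vertical lines; all dimensions are assumptions.

```python
import numpy as np

def slew_rate_test_pattern(height=512, width=512, strip=10):
    """Equilibration strips (strip px each) followed by alternating
    one-pixel black/white vertical lines, as an 8-bit grayscale image."""
    img = np.zeros((height, width), dtype=np.uint8)
    img[:, 0:strip] = 0             # black equilibration strip
    img[:, strip:2 * strip] = 255   # white equilibration strip
    img[:, 2 * strip::2] = 0        # alternating single-pixel lines
    img[:, 2 * strip + 1::2] = 255
    return img
```

A captured copy of such a pattern can then be compared column by column with the original to see whether the single-pixel lines are resolved or blurred to a mid gray.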
Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan
2017-06-01
Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases. Novel digital image analysis algorithms can be utilized to automate sample analysis. Evaluation of the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and training of a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole slide-scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm in the stool samples. Results were compared between the digital and visual analysis of the images showing helminth eggs. Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide-scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3-100%) in the test set (n = 217) of manually labeled helminth eggs. In this proof-of-concept study, the imaging performance of a mobile, digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images.
Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan
2017-01-01
ABSTRACT Background: Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases. Novel digital image analysis algorithms can be utilized to automate sample analysis. Objective: Evaluation of the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and training of a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. Methods: A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole slide-scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm in the stool samples. Results were compared between the digital and visual analysis of the images showing helminth eggs. Results: Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide-scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3–100%) in the test set (n = 217) of manually labeled helminth eggs. Conclusions: In this proof-of-concept study, the imaging performance of a mobile, digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images. PMID:28838305
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
The Cooking and Pneumonia Study (CAPS) in Malawi: Implementation of Remote Source Data Verification
Weston, William; Smedley, James; Bennett, Andrew; Mortimer, Kevin
2016-01-01
Background Source data verification (SDV) is a data monitoring procedure which compares the original records with the Case Report Form (CRF). Traditionally, on-site SDV relies on monitors making multiple visits to study sites, requiring extensive resources. The Cooking And Pneumonia Study (CAPS) is a 24-month village-level cluster randomized controlled trial assessing the effectiveness of an advanced cook-stove intervention in preventing pneumonia in children under five in rural Malawi (www.capstudy.org). CAPS used smartphones to capture digital images of the original records on an electronic CRF (eCRF). In the present study, descriptive statistics are used to report the experience of electronic data capture with remote SDV in a challenging research setting in rural Malawi. Methods At three-monthly intervals, fieldworkers, who were employed by CAPS, captured pneumonia data from the original records onto the eCRF. Fieldworkers also captured digital images of the original records. Once Internet connectivity was available, the data captured on the eCRF and the digital images of the original records were uploaded to a web-based SDV application. This enabled SDV to be conducted remotely from the UK. We conducted SDV of the pneumonia data (occurrence, severity, and clinical indicators) recorded in the eCRF with the data in the digital images of the original records. Results 664 episodes of pneumonia were recorded after 6 months of follow-up. Of these 664 episodes, 611 (92%) had a finding of pneumonia in the original records. All digital images of the original records were clear and legible. Conclusion Electronic data capture using eCRFs on mobile technology is feasible in rural Malawi. Capturing digital images of the original records in the field allows remote SDV to be conducted efficiently and securely without requiring additional field visits. We recommend these approaches in similar settings, especially those with health endpoints. PMID:27355447
The Cooking and Pneumonia Study (CAPS) in Malawi: Implementation of Remote Source Data Verification.
Weston, William; Smedley, James; Bennett, Andrew; Mortimer, Kevin
2016-01-01
Source data verification (SDV) is a data monitoring procedure which compares the original records with the Case Report Form (CRF). Traditionally, on-site SDV relies on monitors making multiple visits to study sites, requiring extensive resources. The Cooking And Pneumonia Study (CAPS) is a 24-month village-level cluster randomized controlled trial assessing the effectiveness of an advanced cook-stove intervention in preventing pneumonia in children under five in rural Malawi (www.capstudy.org). CAPS used smartphones to capture digital images of the original records on an electronic CRF (eCRF). In the present study, descriptive statistics are used to report the experience of electronic data capture with remote SDV in a challenging research setting in rural Malawi. At three-monthly intervals, fieldworkers, who were employed by CAPS, captured pneumonia data from the original records onto the eCRF. Fieldworkers also captured digital images of the original records. Once Internet connectivity was available, the data captured on the eCRF and the digital images of the original records were uploaded to a web-based SDV application. This enabled SDV to be conducted remotely from the UK. We conducted SDV of the pneumonia data (occurrence, severity, and clinical indicators) recorded in the eCRF with the data in the digital images of the original records. 664 episodes of pneumonia were recorded after 6 months of follow-up. Of these 664 episodes, 611 (92%) had a finding of pneumonia in the original records. All digital images of the original records were clear and legible. Electronic data capture using eCRFs on mobile technology is feasible in rural Malawi. Capturing digital images of the original records in the field allows remote SDV to be conducted efficiently and securely without requiring additional field visits. We recommend these approaches in similar settings, especially those with health endpoints.
Systems and Methods for Imaging of Falling Objects
NASA Technical Reports Server (NTRS)
Fallgatter, Cale (Inventor); Garrett, Tim (Inventor)
2014-01-01
Imaging of falling objects is described. Multiple images of a falling object can be captured substantially simultaneously using multiple cameras located at multiple angles around the falling object. An epipolar geometry of the captured images can be determined. The images can be rectified to parallelize epipolar lines of the epipolar geometry. Correspondence points between the images can be identified. At least a portion of the falling object can be digitally reconstructed using the identified correspondence points to create a digital reconstruction.
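As a rough sketch of this kind of multi-view pipeline (feature matching, epipolar geometry estimation, triangulation), the following Python/OpenCV code reconstructs sparse 3D points from two synchronized grayscale views. The projection matrices P1 and P2 are assumed to come from a prior camera calibration, and the SIFT-based matching is an illustrative assumption rather than the patented method.

```python
import cv2
import numpy as np

def reconstruct_points(img1, img2, P1, P2):
    """Match features between two grayscale views of a falling object,
    estimate the epipolar geometry, and triangulate 3D points.
    P1, P2: assumed 3x4 projection matrices from a prior calibration."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Epipolar geometry: fundamental matrix with RANSAC to reject outliers.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]
    # Triangulate correspondence points into homogeneous 3D coordinates.
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X[:3] / X[3]).T  # N x 3 Euclidean points
```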
Patient-generated Digital Images after Pediatric Ambulatory Surgery.
Miller, Matthew W; Ross, Rachael K; Voight, Christina; Brouwer, Heather; Karavite, Dean J; Gerber, Jeffrey S; Grundmeier, Robert W; Coffin, Susan E
2016-07-06
To describe the use of digital images captured by parents or guardians and sent to clinicians for assessment of wounds after pediatric ambulatory surgery. Subjects with digital images of post-operative wounds were identified as part of an ongoing cohort study of infections after ambulatory surgery within a large pediatric healthcare system. We performed a structured review of the electronic health record (EHR) to determine how digital images were documented in the EHR and used in clinical care. We identified 166 patients whose parent or guardian reported sending a digital image of the wound to the clinician after surgery. A corresponding digital image was located in the EHR in only 121 of these encounters. A change in clinical management was documented in 20% of these encounters, including referral for in-person evaluation of the wound and antibiotic prescription. Clinical teams have developed ad hoc workflows to use digital images to evaluate post-operative pediatric surgical patients. Because the use of digital images to support follow-up care after ambulatory surgery is likely to increase, it is important that high-quality images are captured and documented appropriately in the EHR to ensure privacy, security, and a high level of care.
Patient-Generated Digital Images after Pediatric Ambulatory Surgery
Ross, Rachael K.; Voight, Christina; Brouwer, Heather; Karavite, Dean J.; Gerber, Jeffrey S.; Grundmeier, Robert W.; Coffin, Susan E.
2016-01-01
Summary Objective To describe the use of digital images captured by parents or guardians and sent to clinicians for assessment of wounds after pediatric ambulatory surgery. Methods Subjects with digital images of post-operative wounds were identified as part of an ongoing cohort study of infections after ambulatory surgery within a large pediatric healthcare system. We performed a structured review of the electronic health record (EHR) to determine how digital images were documented in the EHR and used in clinical care. Results We identified 166 patients whose parent or guardian reported sending a digital image of the wound to the clinician after surgery. A corresponding digital image was located in the EHR in only 121 of these encounters. A change in clinical management was documented in 20% of these encounters, including referral for in-person evaluation of the wound and antibiotic prescription. Conclusion Clinical teams have developed ad hoc workflows to use digital images to evaluate post-operative pediatric surgical patients. Because the use of digital images to support follow-up care after ambulatory surgery is likely to increase, it is important that high-quality images are captured and documented appropriately in the EHR to ensure privacy, security, and a high level of care. PMID:27452477
Digital Radiographic Image Processing and Analysis.
Yoon, Douglas C; Mol, André; Benn, Douglas K; Benavides, Erika
2018-07-01
This article describes digital radiographic imaging and analysis from the basics of image capture to examples of some of the most advanced digital technologies currently available. The principles underlying the imaging technologies are described to provide a better understanding of their strengths and limitations. Copyright © 2018 Elsevier Inc. All rights reserved.
Porter, Glenn; Ebeyan, Robert; Crumlish, Charles; Renshaw, Adrian
2015-03-01
The photographic preservation of fingermark impression evidence found on ammunition cases remains problematic due to the cylindrical shape of the deposition substrate preventing complete capture of the impression in a single image. A novel method was developed for the photographic recovery of fingermarks from curved surfaces using digital imaging. The process involves the digital construction of a complete impression image made from several different images captured from multiple camera perspectives. Fingermark impressions deposited onto 9-mm and 0.22-caliber brass cartridge cases and a plastic 12-gauge shotgun shell were tested using various image parameters, including digital stitching method, number of images per 360° rotation of shell, image cropping, and overlap. The results suggest that this method may be successfully used to recover fingermark impression evidence from the surfaces of ammunition cases or other similar cylindrical surfaces. © 2014 American Academy of Forensic Sciences.
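The paper's composite is built from overlapping views captured while rotating around the cartridge case. As an illustrative stand-in, not the authors' procedure, OpenCV's generic stitcher can merge such overlapping views; the file names below are hypothetical.

```python
import cv2

# Hypothetical file names for overlapping views captured around the case.
views = [cv2.imread(f"case_view_{i:02d}.png") for i in range(8)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # scan mode for near-planar unwrapping
status, composite = stitcher.stitch(views)
if status == 0:  # cv2.Stitcher_OK
    cv2.imwrite("fingermark_composite.png", composite)
```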
Chalazonitis, A N; Koumarianos, D; Tzovara, J; Chronopoulos, P
2003-06-01
Over the past decade, the technology that permits images to be digitized, together with the reduction in the cost of digital equipment, has allowed quick digital transfer of any conventional radiological film. Images can then be transferred to a personal computer, and several software programs are available that can manipulate their digital appearance. In this article, the fundamentals of digital imaging are discussed, as well as the wide variety of optional adjustments that the Adobe Photoshop 6.0 (Adobe Systems, San Jose, CA) program offers for presenting radiological images with satisfactory digital image quality.
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
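For readers unfamiliar with phase-shifting digital holography, the sketch below shows the standard four-step recovery of the complex object field from four interferograms. This is the textbook formulation, not necessarily the exact capture procedure used in the work above.

```python
import numpy as np

def four_step_object_field(I0, I1, I2, I3):
    """Standard four-step phase-shifting digital holography (sketch).
    I0..I3 are interferograms recorded with reference phase shifts of
    0, pi/2, pi and 3*pi/2; returns the complex object field (up to a
    constant factor and the reference wave), plus amplitude and phase."""
    real = I0.astype(float) - I2.astype(float)   # ~ 2|O||R| cos(phase)
    imag = I1.astype(float) - I3.astype(float)   # ~ 2|O||R| sin(phase)
    field = (real + 1j * imag) / 2.0
    return field, np.abs(field), np.angle(field)
```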
2016-06-25
The equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera, which is used to digitally capture images of the distortion in an optical sample and import them into MATLAB. (Figure 8: Computer interface for capturing images seen by the IQeye 720 camera.)
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. With information about the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or on different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to choose the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information on the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Clegg, G; Roebuck, S; Steedman, D
2001-01-01
Objectives—To develop a computer-based storage system for clinical images—radiographs, photographs, ECGs, text—for use in teaching, training, reference and research within an accident and emergency (A&E) department, and to explore methods of accessing and utilising the data stored in the archive. Methods—Implementation of a digital image archive using a flatbed scanner and a digital camera as capture devices, with a sophisticated coding system based on ICD 10 and storage via an "intelligent" custom interface. Results—A practical solution to the problems of clinical image storage for teaching purposes. Conclusions—We have successfully developed a digital image capture and storage system, which provides an excellent teaching facility for a busy A&E department. We have revolutionised the practice of the "hand-over meeting". PMID:11435357
Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.
ERIC Educational Resources Information Center
Mills, David A.; Kelley, Kevin; Jones, Michael
2001-01-01
Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)
Aldaz, Gabriel; Shluzas, Lauren Aquino; Pickham, David; Eris, Ozgur; Sadler, Joel; Joshi, Shantanu; Leifer, Larry
2015-01-01
Chronic wounds, including pressure ulcers, compromise the health of 6.5 million Americans and pose an annual estimated burden of $25 billion to the U.S. health care system. When treating chronic wounds, clinicians must use meticulous documentation to determine wound severity and to monitor healing progress over time. Yet, current wound documentation practices using digital photography are often cumbersome and labor intensive. The process of transferring photos into Electronic Medical Records (EMRs) requires many steps and can take several days. Newer smartphone and tablet-based solutions, such as Epic Haiku, have reduced EMR upload time. However, issues still exist involving patient positioning, image-capture technique, and patient identification. In this paper, we present the development and assessment of the SnapCap System for chronic wound photography. Through leveraging the sensor capabilities of Google Glass, SnapCap enables hands-free digital image capture, and the tagging and transfer of images to a patient’s EMR. In a pilot study with wound care nurses at Stanford Hospital (n=16), we (i) examined feature preferences for hands-free digital image capture and documentation, and (ii) compared SnapCap to the state of the art in digital wound care photography, the Epic Haiku application. We used the Wilcoxon Signed-ranks test to evaluate differences in mean ranks between preference options. Preferred hands-free navigation features include barcode scanning for patient identification, Z(15) = -3.873, p < 0.001, r = 0.71, and double-blinking to take photographs, Z(13) = -3.606, p < 0.001, r = 0.71. In the comparison between SnapCap and Epic Haiku, the SnapCap System was preferred for sterile image-capture technique, Z(16) = -3.873, p < 0.001, r = 0.68. Responses were divided with respect to image quality and overall ease of use. The study’s results have contributed to the future implementation of new features aimed at enhancing mobile hands-free digital photography for chronic wound care. PMID:25902061
System for objective assessment of image differences in digital cinema
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Krasula, Lukáš; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2014-09-01
There is high demand for quick digitization and subsequent image restoration of archived film records. Digitization is very urgent in many cases because various invaluable pieces of cultural heritage are stored on aging media. Only selected records can be reconstructed perfectly using painstaking manual or semi-automatic procedures. This paper aims to determine the quality requirements on the restoration process needed to obtain visual perception of the digitally restored film that is acceptably close to the original analog film copy. This knowledge is very important to preserve the original artistic intention of the movie producers. A subjective experiment with artificially distorted images was conducted to determine the visual impact of common image distortions in digital cinema. Typical color and contrast distortions were introduced, and test images were presented to viewers using a digital projector. Based on the outcome of this subjective evaluation, a system for objective assessment of image distortions has been developed and its performance tested. The system utilizes a calibrated digital single-lens reflex camera and subsequent analysis of suitable features of images captured from the projection screen. The evaluation of captured image data has been optimized to obtain predicted differences between the reference and distorted images while achieving high correlation with the results of subjective assessment. The system can be used to objectively determine the difference between analog film and digital cinema images on the projection screen.
Affordable, Accessible, Immediate: Capture Stunning Images with Digital Infrared Photography
ERIC Educational Resources Information Center
Snyder, Mark
2011-01-01
Technology educators who teach digital photography should consider incorporating an infrared (IR) photography component into their program. This is an area where digital photography offers significant benefits. Either type of IR imaging is very interesting to explore, but traditional film-based IR photography is difficult and expensive. In…
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained darkroom, which includes an imaging tank, a motorized rotating bearing and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capture and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following this pre-processing binarization of the digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
Measuring food intake with digital photography
USDA-ARS?s Scientific Manuscript database
The Digital Photography of Foods Method accurately estimates the food intake of adults and children in cafeterias. With this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared with images of 'standard' portions of food using computer...
Inselect: Automating the Digitization of Natural History Collections
Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.
2015-01-01
The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208
Inselect: Automating the Digitization of Natural History Collections.
Hudson, Lawrence N; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W; van der Walt, Stéfan; Smith, Vincent S
2015-01-01
The world's natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect, a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization.
Agreement and reading time for differently-priced devices for the digital capture of X-ray films.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-03-01
We assessed the reliability of three digital capture devices: a film digitizer (which cost US $15,000), a flat-bed scanner (US $1800) and a digital camera (US $450). Reliability was measured as the agreement between six observers when reading images acquired from a single device and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading time between devices: the mean reading time was 93 s for the film digitizer, 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
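A minimal sketch of the "multiple capture" idea for dynamic range enhancement: several short captures are normalized by their exposure times and combined, excluding saturated pixels. The weighting scheme and thresholds here are assumptions for illustration, not the authors' on-chip algorithm.

```python
import numpy as np

def combine_multiple_captures(frames, exposure_times, sat=250):
    """Combine several captures of increasing exposure time into a single
    linear, extended-dynamic-range image (illustrative sketch)."""
    frames = [f.astype(float) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for f, t in zip(frames, exposure_times):
        w = (f < sat).astype(float)   # ignore saturated pixels in this frame
        num += w * f / t              # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)
```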
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data and therefore greatly reduces the cost of the CIS. The method supports not only single image capture but also bracketing to capture three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time at low cost and can correct the color of under-exposed images well.
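A software sketch of the histogram-matching step behind such a look-up table: the cumulative histograms of the short- and long-exposure previews define a per-level mapping that is then applied to the captured under-exposed frame. This illustrates the principle only, assuming 8-bit single-channel data (apply per channel for color); the paper's hardware ILUT construction may differ in detail.

```python
import numpy as np

def build_ilut(short_preview, long_preview, levels=256):
    """Histogram-matching look-up table: map gray levels of the short-exposure
    preview onto the distribution of the long-exposure preview."""
    h_s, _ = np.histogram(short_preview, bins=levels, range=(0, levels))
    h_l, _ = np.histogram(long_preview, bins=levels, range=(0, levels))
    cdf_s = np.cumsum(h_s) / h_s.sum()
    cdf_l = np.cumsum(h_l) / h_l.sum()
    # For each input level, find the output level with the closest CDF value.
    lut = np.searchsorted(cdf_l, cdf_s).clip(0, levels - 1).astype(np.uint8)
    return lut

def correct(captured, lut):
    return lut[captured]  # per-pixel table look-up, no frame memory needed
```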
Development of digital shade guides for color assessment using a digital camera with ring flashes.
Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan
2011-02-01
Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and camera white balance setups would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either a light-emitting diode (LED) or an electronic ring flash. The Commission Internationale de l'Éclairage L*a*b* parameters of the captured images were derived from Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and the shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using digital shade guides, which was verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups showed less reliable and relatively low matching ability. In conclusion, the reliability of color matching with digital images is much influenced by the illuminants and the camera's white balance setup, while digital shade guides derived under LED illuminants with CWB demonstrate applicable potential in the field of color assessment.
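The digital shade guides are essentially CIELAB values derived from camera images. A minimal sketch, assuming approximately sRGB input and using scikit-image for the color conversion (the study used Photoshop), averages L*a*b* values over a masked disk region and compares two guides with the CIE76 difference.

```python
import numpy as np
from skimage import color

def shade_guide_lab(image_rgb, mask):
    """Average CIELAB values over a ceramic-disk region.
    image_rgb: uint8 H x W x 3, assumed ~sRGB; mask: boolean H x W."""
    lab = color.rgb2lab(image_rgb / 255.0)   # L*, a*, b* per pixel
    return lab[mask].mean(axis=0)

def delta_e(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
```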
Omniview motionless camera orientation system
NASA Technical Reports Server (NTRS)
Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)
2010-01-01
An apparatus and method is provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view such as an image captured by a fish-eye lens, a control input for receiving a signal for selecting a portion of the image, and a converter responsive to the control input for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.
Scanning computed confocal imager
George, John S.
2000-03-14
There is provided a confocal imager comprising a light source emitting light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs it onto a target, and passes light reflected from the target to a video capturing device for receiving the reflected light and transferring a digital image of the reflected light to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter means and captures light passed through the target.
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).
Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.
Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi
2014-10-20
We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected to a four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
Five task clusters that enable efficient and effective digitization of biological collections
Nelson, Gil; Paul, Deborah; Riccardi, Gregory; Mast, Austin R.
2012-01-01
This paper describes and illustrates five major clusters of related tasks (herein referred to as task clusters) that are common to efficient and effective practices in the digitization of biological specimen data and media. Examples of these clusters come from the observation of diverse digitization processes. The staff of iDigBio (The U.S. National Science Foundation's National Resource for Advancing Digitization of Biological Collections) visited active biological and paleontological collections digitization programs for the purpose of documenting and assessing current digitization practices and tools. These observations identified five task clusters that comprise the digitization process leading up to data publication: (1) pre-digitization curation and staging, (2) specimen image capture, (3) specimen image processing, (4) electronic data capture, and (5) georeferencing locality descriptions. While not all institutions are completing each of these task clusters for each specimen, these clusters describe a composite picture of digitization of biological and paleontological specimens across the programs that were observed. We describe these clusters and the three workflow patterns that dominate their implementation, and offer a set of workflow recommendations for digitization programs. PMID:22859876
Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai
2014-01-01
Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. However, a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (a box-type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We obtained comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-02-01
A common teleradiology practice is digitizing films. The costs of specialized digitizers are very high, which is why there is a trend to use conventional scanners and digital cameras. Statistical clinical studies, which are very difficult to carry out, are required to determine the accuracy of these devices. The purpose of this study was to compare three capture devices in terms of their capacity to detect several image characteristics. Spatial resolution, contrast, gray levels, and geometric deformation were compared for a specialized digitizer, the ICR (US$ 15,000), a conventional scanner, the UMAX (US$ 1,800), and a digital camera, the LUMIX (US$ 450, though it requires an additional support system and a light box costing about US$ 400). Test patterns printed on films were used. The results showed gray levels lower than the real values for all three devices, with acceptable contrast and low geometric deformation for all three. All three devices are appropriate solutions, but the digital camera requires more operator training and more settings.
Improved wheal detection from skin prick test images
NASA Astrophysics Data System (ADS)
Bulan, Orhan
2014-03-01
The skin prick test is a commonly used method for the diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are erythema and a wheal provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. We finally perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments performed on images acquired from 36 different patients show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
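A compact sketch of the described processing chain (luminance removal, principal component analysis of the chrominance channels, thresholding and morphology). The Otsu threshold and kernel size are assumptions added for illustration, and the test-region localization via calibration marks is omitted.

```python
import cv2
import numpy as np

def detect_wheal(image_bgr):
    """Drop luminance, project chrominance onto its first principal
    component, then threshold and clean up with morphology (sketch)."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1].astype(float)
    cb = ycrcb[:, :, 2].astype(float)
    X = np.stack([cb.ravel() - cb.mean(), cr.ravel() - cr.mean()], axis=1)
    # First principal component of the chrominance data.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    pc1 = (X @ eigvecs[:, -1]).reshape(cb.shape)
    pc1 = cv2.normalize(pc1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu threshold followed by opening/closing to remove small artifacts.
    _, bw = cv2.threshold(pc1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
    bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
    return bw  # binary mask; the wheal is the largest connected component
```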
Image enhancement software for underwater recovery operations: User's manual
NASA Astrophysics Data System (ADS)
Partridge, William J.; Therrien, Charles W.
1989-06-01
This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides contrast enhancement and other similar functions in real time through hardware lookup tables, automatic histogram equalization, and the ability to capture one or more frames and average them or apply one of several different processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
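A minimal software analogue of the functions listed in the manual (frame averaging, a contrast-stretch look-up table, histogram equalization), assuming 8-bit grayscale frames; the original system implemented the LUT in hardware.

```python
import cv2
import numpy as np

def contrast_lut(low, high):
    """Linear contrast-stretch LUT: values below `low` map to 0, above `high` to 255."""
    x = np.arange(256, dtype=float)
    lut = np.clip((x - low) * 255.0 / max(high - low, 1), 0, 255)
    return lut.astype(np.uint8)

def enhance(frames, low=30, high=220):
    """Average several captured 8-bit grayscale frames to reduce noise,
    then apply the contrast LUT and histogram equalization (software sketch)."""
    avg = np.mean([f.astype(float) for f in frames], axis=0).astype(np.uint8)
    stretched = cv2.LUT(avg, contrast_lut(low, high))
    return cv2.equalizeHist(stretched)
```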
On the mode I fracture analysis of cracked Brazilian disc using a digital image correlation method
NASA Astrophysics Data System (ADS)
Abshirini, Mohammad; Soltani, Nasser; Marashizadeh, Parisa
2016-03-01
Mode I fracture of a centrally cracked Brazilian disc was investigated experimentally using a digital image correlation (DIC) method. Experiments were performed on PMMA polymer specimens subjected to diametric compression loading. The displacement fields were determined by a correlation between the reference and the deformed images captured before and during loading. The stress intensity factors were calculated from the displacement fields using Williams' equation and a least squares algorithm. The parameters involved in the accuracy of the SIF calculation, such as the number of terms in Williams' equation and the region of analysis around the crack, are discussed. The DIC results were compared with numerical results available in the literature, and very good agreement between them was observed. By extending the tests up to the critical state, the mode I fracture toughness was determined by analyzing the image of the specimen captured at the moment before fracture. The results showed that digital image correlation is a reliable technique for the calculation of the fracture toughness of brittle materials.
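As a simplified illustration of the least-squares step, the sketch below fits the mode I stress intensity factor from DIC crack-opening displacements using only the first (singular) term of the Williams series; the study fits additional terms, so this is a didactic reduction, not the authors' exact procedure.

```python
import numpy as np

def fit_KI(r, theta, uy, E, nu, plane_stress=True):
    """Least-squares estimate of K_I from crack-flank displacements u_y
    measured at polar coordinates (r, theta) around the crack tip, using
    only the leading Williams term."""
    mu = E / (2.0 * (1.0 + nu))                        # shear modulus
    kappa = (3.0 - nu) / (1.0 + nu) if plane_stress else 3.0 - 4.0 * nu
    # u_y = K_I * g(r, theta) for the first (singular) term
    g = (np.sqrt(r / (2.0 * np.pi)) / (2.0 * mu)
         * np.sin(theta / 2.0) * (kappa + 1.0 - 2.0 * np.cos(theta / 2.0) ** 2))
    return float(np.dot(g, uy) / np.dot(g, g))         # closed-form 1-D least squares
```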
Computerized image analysis for acetic acid induced intraepithelial lesions
NASA Astrophysics Data System (ADS)
Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.
2008-03-01
Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing the acetic acid induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope. One image is captured before the acetic acid application, and the other is captured after the acetic acid application. The spatial change of the acetowhitening is extracted using color and texture information in the post acetic acid image; the temporal change is extracted from the intensity and color changes between the post acetic acid and pre acetic acid images after automatic alignment. The imaging and data analysis system has been evaluated with a total of 99 human subjects and demonstrates its potential for screening underserved women where access to skilled colposcopists is limited.
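A rough sketch of the temporal-change measurement: the pre- and post-acetic-acid images are registered and the per-pixel brightening is taken as the acetowhitening signal. The ECC-based affine alignment used here is an assumption standing in for the paper's automatic alignment.

```python
import cv2
import numpy as np

def acetowhitening_change(pre_bgr, post_bgr, iterations=200):
    """Align the pre-acetic-acid image to the post-acetic-acid image and
    return the per-pixel intensity increase (whitening), as a sketch."""
    pre = cv2.cvtColor(pre_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    post = cv2.cvtColor(post_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iterations, 1e-6)
    _, warp = cv2.findTransformECC(post, pre, warp, cv2.MOTION_AFFINE, criteria)
    pre_aligned = cv2.warpAffine(pre, warp, (post.shape[1], post.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return np.clip(post - pre_aligned, 0, None)   # whitening = brightening
```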
Subjective matters: from image quality to image psychology
NASA Astrophysics Data System (ADS)
Fedorovskaya, Elena A.; De Ridder, Huib
2013-03-01
From the advent of digital imaging through several decades of studies, the human vision research community systematically focused on perceived image quality and digital artifacts due to resolution, compression, gamma, dynamic range, capture and reproduction noise, blur, etc., to help overcome existing technological challenges and shortcomings. Technological advances have made digital images and digital multimedia nearly flawless in quality and ubiquitous and pervasive in usage, and they provide us with the exciting but at the same time demanding possibility of turning to the domain of human experience, including higher psychological functions such as cognition, emotion, awareness, social interaction, consciousness and Self. In this paper we outline the evolution of human-centered multidisciplinary studies related to imaging and propose steps and potential foci of future research.
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images and, with gated on-chip integration, also has the capability to record low-light-level fluorescence images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present research methods for processing "digital holograms" for Internet transmission, together with results.
NASA Astrophysics Data System (ADS)
Renken, Hartmut; Oelze, Holger W.; Rath, Hans J.
1998-04-01
The design and application of a digital high-speed image data capturing system with a downstream image processing system, applied to the Bremer Hochschul-Hyperschallkanal (BHHK), is the subject of this presentation. It is also the result of the cooperation between the aerodynamics and image processing departments at the ZARM institute at the Drop Tower of Bremen. Similar systems are used by the combustion working group at ZARM and other external project partners. The BHHK, the camera and image storage system, as well as the personal-computer-based image processing software are described next. Some examples of images taken at the BHHK are shown to illustrate the application. The new and very user-friendly 32-bit Windows system is capable of capturing all camera data at a maximum pixel clock of 43 MHz and of processing complete sequences of images in one step using a single convenient program.
NASA Astrophysics Data System (ADS)
Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin
2016-05-01
With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction of the fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method that takes the capture settings into account was proposed. The method for calculating the colorimetric values of the measured image contains five main steps, including conversion of the RGB values to equivalent values at the training settings through factors based on an imaging-system model, which builds the bridge between different settings, and scaling factors applied in the preparation steps for the transformation mapping to avoid errors resulting from the nonlinearity of the polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as that of the conventional method for a particular lighting condition.
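A minimal sketch of the underlying mapping: camera RGB values are converted to equivalents at the training settings by an exposure scaling factor and then mapped to CIE XYZ by a least-squares polynomial. The second-order polynomial and the single scalar exposure_factor are assumptions for illustration; the paper derives its factors from an imaging-system model.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of camera RGB values in [0, 1] (N x 3)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b, r * g, r * b, g * b,
                     r * r, g * g, b * b], axis=1)

def train_mapping(rgb_train, xyz_train):
    """Least-squares polynomial mapping from RGB to CIE XYZ at the training settings."""
    M, *_ = np.linalg.lstsq(poly_features(rgb_train), xyz_train, rcond=None)
    return M

def predict_xyz(rgb_test, M, exposure_factor=1.0):
    """Convert test RGB to training-setting equivalents via a scalar exposure
    factor (a stand-in for the model-derived factors), then map to XYZ."""
    rgb_equiv = np.clip(rgb_test * exposure_factor, 0, 1)
    return poly_features(rgb_equiv) @ M
```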
A novel automatic full-scale inspecting system for banknote printing plates
NASA Astrophysics Data System (ADS)
Zhang, Jian; Feng, Li; Lu, Jibing; Qin, Qingwang; Liu, Liquan; Liu, Huina
2018-01-01
Quality assurance of banknote printing plates is an important issue for the corporation that produces them. Every plate must be checked carefully and in its entirety before it is sent to the banknote printing factory. Previously this work was done by dedicated inspectors, usually with the help of powder and magnifiers, and often took 3 to 4 hours for a 5 × 7 plate measuring about 650 × 500 mm. We have now developed an automatic inspection system to replace the manual work. The system mainly comprises a stable platform, an electrical subsystem, and an inspection subsystem. A microscope held by the crossbeam can move in x-y-z space over the platform. A digital camera combined with the microscope captures gray-scale digital images of the plate. Each digital image is 2672 × 4008 pixels, and each pixel corresponds to an area of about 2.9 × 2.9 μm on the plate. The plate is inspected unit by unit, and the corresponding images are captured at the same relative positions. Thousands of images are captured for one plate (for example, 4200 = 120 × 5 × 7 for a 5 × 7 plate). The inspection model images are generated from images of qualified plates and then used to inspect plates of unknown quality. The system takes about 64 minutes to inspect a plate and identifies obvious defects.
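The paper does not state how each captured unit image is compared with its inspection model; the sketch below shows one plausible comparison step under assumptions (a per-pixel model mean and standard deviation built from qualified plates, a k-sigma threshold, and a minimum defect area). Function and parameter names are hypothetical.

```python
import numpy as np
from scipy.ndimage import label

def find_defects(unit_img, model_img, model_std, k=5.0, min_area=20):
    """Flag pixels deviating from the inspection model by more than k standard deviations."""
    diff = np.abs(unit_img.astype(float) - model_img.astype(float))
    mask = diff > k * np.maximum(model_std, 1.0)   # floor the std to avoid spurious hits in flat areas
    labels, n = label(mask)                        # connected components of candidate defects
    sizes = np.bincount(labels.ravel())
    keep = [i for i in range(1, n + 1) if sizes[i] >= min_area]
    return np.isin(labels, keep)                   # boolean map of credible defect regions
```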
Video Imaging System Particularly Suited for Dynamic Gear Inspection
NASA Technical Reports Server (NTRS)
Broughton, Howard (Inventor)
1999-01-01
A digital video imaging system that captures the image of a single tooth of interest on a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals, so that the instant each tooth of interest reaches a desired location is precisely determined; the tooth is then illuminated in unison with a digital video camera so as to record a single digital image of each tooth. The digital images are available for instantaneous analysis of the tooth of interest, or can be stored to provide a history that may later be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program, so that it may run for several days acquiring images without supervision from the user.
Intelligent image capture of cartridge cases for firearms examiners
NASA Astrophysics Data System (ADS)
Jones, Brett C.; Guerci, Joseph R.
1997-02-01
The FBI's DRUGFIRETM system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of the human examiner's visual perception and the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' that assesses the quality of the image under test using a custom-designed neural network.
Smart Camera Technology Increases Quality
NASA Technical Reports Server (NTRS)
2004-01-01
When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. To keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels' worth of information from incident light. However, at frame rates of more than a few frames per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full; subsequent information is then lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.
NASA Astrophysics Data System (ADS)
Jantzen, Connie; Slagle, Rick
1997-05-01
The distinction between exposure time and sample rate is often the first point raised in any discussion of high-speed imaging. Many high-speed events require exposure times considerably shorter than those that can be achieved solely by the sample rate of the camera, where exposure time equals 1/sample rate. Gating, a method of achieving short exposure times in digital cameras, is often difficult to achieve for exposure time requirements shorter than 100 microseconds. This paper discusses the advantages and limitations of using the short-duration light pulse of a near-infrared laser with high-speed digital imaging systems. By closely matching the output wavelength of the pulsed laser to the peak near-infrared response of current sensors, high-speed image capture can be accomplished at very low (visible) light levels of illumination. By virtue of the short-duration light pulse, adjustable to as short as two microseconds, image capture of very high-speed events can be achieved at relatively low sample rates of less than 100 pictures per second, without image blur. For our initial investigations, we chose a ballistic subject. The results of early experimentation revealed the limitations of applying traditional ballistic imaging methods when using a pulsed infrared light source with a digital imaging system. These early disappointing results clarified the need to further identify the unique system characteristics of the digital imager and pulsed infrared combination. It was also necessary to investigate how the infrared reflectance and transmittance of common materials affects the imaging process. This experimental work yielded a surprising, successful methodology which will prove useful in imaging ballistic and weapons tests, as well as forensics, flow visualizations, spray pattern analyses, and nocturnal animal behavioral studies.
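As a back-of-the-envelope illustration of why pulse width rather than sample rate governs blur (all numbers below are hypothetical, not taken from the paper): the blur distance is simply the subject velocity multiplied by the illumination time.

```python
velocity = 900.0   # hypothetical projectile speed, m/s
pulse = 2e-6       # shortest laser pulse quoted in the paper, s
gate = 100e-6      # a typical electronic-gating limit, for comparison, s

print(f"blur with a 2 microsecond pulse: {velocity * pulse * 1e3:.2f} mm")   # 1.80 mm
print(f"blur with a 100 microsecond gate: {velocity * gate * 1e3:.1f} mm")   # 90.0 mm
```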
Resolution analysis of archive films for the purpose of their optimal digitization and distribution
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2017-09-01
With the recent high demand for ultra-high-definition (UHD) content to be screened in high-end digital movie theaters and also in the home environment, film archives full of movies in high definition and above are in the scope of UHD content providers. Movies captured with traditional film technology represent a virtually unlimited source of UHD content. The goal of maintaining complete image information is tied to the choice of scanning resolution and of the spatial resolution used for further distribution. It might seem that scanning the film material at the highest possible resolution with state-of-the-art film scanners, and distributing it at that resolution, is the right choice. The information content of the digitized images is, however, limited, and various degradations reduce it further. Digital distribution of the content at the highest image resolution might therefore be unnecessary or uneconomical. In other cases, the highest possible resolution is inevitable if we want to preserve fine scene details or film grain structure for archiving purposes. This paper deals with the image detail content analysis of archive film records. The resolution limit in the captured scene image and the factors that lower the final resolution are discussed. Methods are proposed to determine the spatial detail of the film picture based on the analysis of its digitized image data. These procedures make it possible to derive recommendations for the optimal distribution of digitized video content intended for various display devices with lower resolutions. The results obtained are illustrated with a spatial downsampling use case, and a performance evaluation of the proposed techniques is presented.
Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.
2014-01-01
Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
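The paper's own processing chain is not reproduced here; as one common way to obtain the linear data it calls for, the rawpy library can decode a raw file with gamma and auto-brightening disabled. The parameter choices below are assumptions, not the authors' recipe.

```python
import rawpy
import numpy as np

def load_linear_rgb(path):
    """Demosaic a raw file into linear 16-bit RGB (no gamma, no auto-brightening)."""
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(gamma=(1, 1),         # keep the response linear
                              no_auto_bright=True,  # do not rescale exposure
                              output_bps=16,
                              use_camera_wb=True)
    return rgb.astype(np.float64) / 65535.0          # scale to [0, 1], still linear in radiance
```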
Ernst, E J; Speck, P M; Fitzpatrick, J J
2012-01-01
Digital photography is a valuable adjunct for documenting physical injuries after sexual assault. In order for a digital photograph to have high image quality, there must exist a high level of naturalness. Digital photo documentation has varying degrees of naturalness; for a photograph to be natural, specific technical elements must be satisfied for the viewer. No tool was available to rate the naturalness of digital photo documentation of female genital injuries after sexual assault. The Photo Documentation Image Quality Scoring System (PDIQSS) tool was developed to rate technical elements for naturalness. Using this tool, experts evaluated randomly selected digital photographs of female genital injuries captured following sexual assault. Naturalness of the photo documentation was demonstrated in all dimensions measured.
NASA Astrophysics Data System (ADS)
Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi
2006-02-01
Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.
Movement measurement of isolated skeletal muscle using imaging microscopy
NASA Astrophysics Data System (ADS)
Elias, David; Zepeda, Hugo; Leija, Lorenzo S.; Sossa, Humberto; de la Rosa, Jose I.
1997-05-01
An imaging-microscopy methodology to measure contraction movement in chemically stimulated crustacean skeletal muscle, whose movement speed is about 0.02 mm/s, is presented. For this, a CCD camera coupled to a microscope and a high-speed digital image acquisition system, which together allow us to capture 960 images per second, are used. The images are digitally processed in a PC and displayed on a video monitor. A maximal field of 0.198 × 0.198 mm² and a spatial resolution of 3.5 micrometers are obtained.
Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products
NASA Astrophysics Data System (ADS)
Williams, Don; Burns, Peter D.
2007-01-01
There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and by image-capture conditions. Several ISO standards for resolution, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.
Real-time optical fiber digital speckle pattern interferometry for industrial applications
NASA Astrophysics Data System (ADS)
Chan, Robert K.; Cheung, Y. M.; Lo, C. H.; Tam, T. K.
1997-03-01
There is current interest, especially in the industrial sector, in using the digital speckle pattern interferometry (DSPI) technique to measure surface stress. Indeed, the many publications on the subject are evidence of the growing interest in the field. However, bringing the technology to industrial use requires the integration of several emerging technologies, viz. optics, feedback control, electronics, image processing and digital signal processing. Due to the highly interdisciplinary nature of the technique, successful implementation and development require expertise in all of these fields. At Baptist University, under the funding of a major industrial grant, we are developing the technology for the industrial sector. Our system fully exploits optical fibers and diode lasers in the design to enable practical and rugged systems suited for industrial applications. Besides the development in optics, we have broken away from reliance on a microcomputer PC platform for both image capture and processing, and have developed a digital signal processing array system that can handle simultaneous and independent image capture/processing with feedback control. The system, named CASPA for 'cascadable architecture signal processing array,' is a third-generation development system that utilizes up to 7 digital signal processors and has proved to be very powerful. With CASPA we are now in a better position to develop novel optical measurement systems for industrial applications that may require different measurement systems to operate concurrently and to exchange information. Applications in mind, such as simultaneous in-plane and out-of-plane DSPI image capture/processing, vibration analysis with interactive DSPI, and phase-shifting control of optical systems, are a few examples of its potential.
Three-dimensional photography for the evaluation of facial profiles in obstructive sleep apnoea.
Lin, Shih-Wei; Sutherland, Kate; Liao, Yu-Fang; Cistulli, Peter A; Chuang, Li-Pang; Chou, Yu-Ting; Chang, Chih-Hao; Lee, Chung-Shu; Li, Li-Fu; Chen, Ning-Hung
2018-06-01
Craniofacial structure is an important determinant of obstructive sleep apnoea (OSA) syndrome risk. Three-dimensional stereo-photogrammetry (3dMD) is a novel technique which allows quantification of the craniofacial profile. This study compares the facial images of OSA patients captured by 3dMD to three-dimensional computed tomography (3-D CT) and two-dimensional (2-D) digital photogrammetry. Measurements were correlated with indices of OSA severity. Thirty-eight patients diagnosed with OSA were included, and digital photogrammetry, 3dMD and 3-D CT were performed. Distances, areas, angles and volumes from the images captured by three methods were analysed. Almost all measurements captured by 3dMD showed strong agreement with 3-D CT measurements. Results from 2-D digital photogrammetry showed poor agreement with 3-D CT. Mandibular width, neck perimeter size and maxillary volume measurements correlated well with the severity of OSA using all three imaging methods. Mandibular length, facial width, binocular width, neck width, cranial base triangle area, cranial base area 1 and middle cranial fossa volume correlated well with OSA severity using 3dMD and 3-D CT, but not with 2-D digital photogrammetry. 3dMD provided accurate craniofacial measurements of OSA patients, which were highly concordant with those obtained by CT, while avoiding the radiation associated with CT. © 2018 Asian Pacific Society of Respirology.
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were transformed using the fast Fourier transform (FFT). Tools were developed and applied for the identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively high correlation was found.
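The Matlab implementation described in the abstract is not available here; the sketch below illustrates the general idea under simplifying assumptions (a square field of view and roughly hexagonal cell packing): the dominant ring in the image's power spectrum gives a mean cell spacing, from which a density estimate follows.

```python
import numpy as np

def estimate_cell_density(img, field_width_mm):
    """Rough endothelial cell density (cells/mm^2) from a square specular-microscope image."""
    img = img - img.mean()                          # remove the DC component
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)          # radius in frequency bins
    radial = np.bincount(r.ravel(), spec.ravel()) / np.bincount(r.ravel())
    peak_bin = np.argmax(radial[2:]) + 2                        # skip the lowest frequencies
    cycles_per_mm = peak_bin / field_width_mm                   # dominant spatial frequency
    spacing_mm = 1.0 / cycles_per_mm                            # mean cell spacing
    return 2.0 / (np.sqrt(3.0) * spacing_mm**2)                 # density for hexagonal packing
```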
NASA Technical Reports Server (NTRS)
1998-01-01
Positive Systems has worked in conjunction with Stennis Space Center to design the ADAR System 5500. This is a four-band airborne digital imaging system used to capture multispectral imagery similar to that available from satellite platforms such as Landsat, SPOT and the new generation of high resolution satellites. Positive Systems has provided remote sensing services for the development of digital aerial camera systems and software for commercial aerial imaging applications.
ERIC Educational Resources Information Center
Liou, Wei-Kai; Bhagat, Kaushal Kumar; Chang, Chun-Yen
2018-01-01
The aim of this study is to design and implement a digital interactive globe system (DIGS), by integrating low-cost equipment to make DIGS cost-effective. DIGS includes a data processing unit, a wireless control unit, an image-capturing unit, a laser emission unit, and a three-dimensional hemispheric body-imaging screen. A quasi-experimental study…
A position and attitude vision measurement system for wind tunnel slender model
NASA Astrophysics Data System (ADS)
Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi
2014-11-01
A position and attitude vision measurement system for a drop-test slender model in a wind tunnel is designed and developed. The system uses two high-speed cameras: one placed to the side of the model, and another placed where it can look up at the model. Simple symbols are marked on the model. The main idea of the system is image matching between projection images of the 3D digital model and the images captured by the cameras. First, the pitch angles, roll angles, and centroid position of the model are evaluated by recognizing the symbols in the images captured by the side camera. Then, based on the evaluated attitude and a series of candidate yaw angles, a series of projection images of the 3D digital model is generated. Finally, these projection images are matched against the image captured by the upward-looking camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments were conducted, and the results show that the maximum attitude measurement error is less than 0.05°, which meets the requirements of wind tunnel tests.
Three-dimensional capture, representation, and manipulation of Cuneiform tablets
NASA Astrophysics Data System (ADS)
Woolley, Sandra I.; Flowers, Nicholas J.; Arvanitis, Theodoros N.; Livingstone, Alasdair; Davis, Tom R.; Ellison, John
2001-04-01
This paper presents the digital imaging results of a collaborative research project working toward the generation of an on-line interactive digital image database of signs from ancient cuneiform tablets. An important aim of this project is the application of forensic analysis to the cuneiform symbols to identify scribal hands. Cuneiform tablets are amongst the earliest records of written communication, and could be considered as one of the original information technologies; an accessible, portable and robust medium for communication across distance and time. The earliest examples are up to 5,000 years old, and the writing technique remained in use for some 3,000 years. Unfortunately, only a small fraction of these tablets can be made available for display in museums and much important academic work has yet to be performed on the very large numbers of tablets to which there is necessarily restricted access. Our paper will describe the challenges encountered in the 2D image capture of a sample set of tablets held in the British Museum, explaining the motivation for attempting 3D imaging and the results of initial experiments scanning the smaller, more densely inscribed cuneiform tablets. We will also discuss the tractability of 3D digital capture, representation and manipulation, and investigate the requirements for scaleable data compression and transmission methods. Additional information can be found on the project website: www.cuneiform.net
Rhoads, Daniel D.; Mathison, Blaine A.; Bishop, Henry S.; da Silva, Alexandre J.; Pantanowitz, Liron
2016-01-01
Context Microbiology laboratories are continually pursuing means to improve quality, rapidity, and efficiency of specimen analysis in the face of limited resources. One means by which to achieve these improvements is through the remote analysis of digital images. Telemicrobiology enables the remote interpretation of images of microbiology specimens. To date, the practice of clinical telemicrobiology has not been thoroughly reviewed. Objective Identify the various methods that can be employed for telemicrobiology, including emerging technologies that may provide value to the clinical laboratory. Data Sources Peer-reviewed literature, conference proceedings, meeting presentations, and expert opinions pertaining to telemicrobiology have been evaluated. Results A number of modalities have been employed for telemicroscopy, including static capture techniques, whole slide imaging, video telemicroscopy, mobile devices, and hybrid systems. Telemicrobiology has been successfully implemented for applications including routine primary diagnosis, expert teleconsultation, and proficiency testing. Emerging areas include digital culture plate reading, mobile health applications and computer-augmented analysis of digital images. Conclusions Static image capture techniques to date have been the most widely used modality for telemicrobiology, despite the fact that other newer technologies are available and may produce better quality interpretations. Increased adoption of telemicrobiology offers added value, quality, and efficiency to the clinical microbiology laboratory. PMID:26317376
The Ansel Adams zone system: HDR capture and range compression by chemical processing
NASA Astrophysics Data System (ADS)
McCann, John J.
2010-02-01
We tend to think of digital imaging and the tools of PhotoshopTM as a new phenomenon in imaging. We are also familiar with multiple-exposure HDR techniques intended to capture a wider range of scene information than conventional film photography. We know about tone-scale adjustments to make better pictures. We tend to think of everyday, consumer, silver-halide photography as a fixed window of scene capture with a limited, standard range of response. This description of photography is certainly true, between 1950 and 2000, for instant films and negatives processed at the drugstore. These systems had fixed dynamic range and fixed tone-scale response to light. All pixels in the film have the same response to light, so the same light exposure from different pixels was rendered as the same film density. Ansel Adams, along with Fred Archer, formulated the Zone System starting in 1940. It came earlier than the trillions of consumer photos of the second half of the 20th century, yet it was much more sophisticated than today's digital techniques. This talk will describe the chemical mechanisms of the zone system in the parlance of digital image processing. It will describe the Zone System's chemical techniques for image synthesis. It also discusses dodging and burning techniques to fit the HDR scene into the LDR print. Although current HDR imaging shares some of the Zone System's achievements, it usually does not achieve all of them.
A stereoscopic lens for digital cinema cameras
NASA Astrophysics Data System (ADS)
Lipton, Lenny; Rupkalvis, John
2015-03-01
Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.
Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images
Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong
2015-01-01
In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%. This demonstrates the validity and excellent performance of our proposed method compared to other methods. PMID:26703596
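Step (2) of the method is a wavelet-domain fusion; the sketch below shows a generic single-level version with PyWavelets, averaging approximation coefficients and taking the maximum-magnitude detail coefficients. The wavelet choice and fusion rules are assumptions and not necessarily the paper's hybrid rule.

```python
import numpy as np
import pywt

def fuse_images(img_a, img_b, wavelet="db2"):
    """Fuse two registered, equally sized grayscale images with a single-level 2-D DWT."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a.astype(float), wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b.astype(float), wavelet)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)   # max-abs rule for detail bands
    cA = (cA_a + cA_b) / 2.0                                     # average the approximation band
    return pywt.idwt2((cA, (pick(cH_a, cH_b),
                            pick(cV_a, cV_b),
                            pick(cD_a, cD_b))), wavelet)
```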
Textured digital elevation model formation from low-cost UAV LADAR/digital image data
NASA Astrophysics Data System (ADS)
Bybee, Taylor C.; Budge, Scott E.
2015-05-01
Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Because the UAV is low-cost, only a coarse knowledge of position and attitude is available, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.
Image interpolation and denoising for division of focal plane sensors using Gaussian processes.
Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor
2014-06-16
Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging sensors employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
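The authors' fast O(N^(3/2)) structured solver is not reproduced here; the sketch below shows only plain (naive O(N³)) GP regression over observed pixel coordinates with scikit-learn, practical for small patches of a division-of-focal-plane image. The kernel and noise level are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_interpolate(patch, observed_mask, noise=1e-2):
    """Fill pixels where observed_mask is False by GP regression on the observed pixels.

    Naive O(N^3) solver; intended for small patches, unlike the paper's structured method.
    """
    yy, xx = np.indices(patch.shape)
    X_obs = np.column_stack([yy[observed_mask], xx[observed_mask]]).astype(float)
    y_obs = patch[observed_mask].astype(float)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(noise),
                                  normalize_y=True)
    gp.fit(X_obs, y_obs)
    X_all = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    return gp.predict(X_all).reshape(patch.shape)
```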
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. A one-chip image system, in which the image sensor has a full digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, can make images more realistic and colorful. One could say that the color filter makes life more colorful. A color filter transmits only light within a specific wavelength band, with a transmittance determined by the filter itself, and blocks the rest of the image light. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the prospects for color filters bright. Although challenging, the development of the color filter process is well worthwhile. We provide the best service with short cycle time, excellent color quality, and high and stable yield. The key issues of an advanced color process that have to be solved and implemented are planarization and micro-lens technology. Other key points of color filter process technology that must be considered are also described in this paper.
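The abstract notes that the scene image is reconstructed from the per-pixel color-filter signals; as a textbook illustration only (not the process described in the paper), the sketch below performs a bilinear demosaic of an RGGB mosaic by normalized convolution.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (2-D array) into an RGB image."""
    h, w = raw.shape
    masks = {c: np.zeros((h, w), bool) for c in "rgb"}
    masks["r"][0::2, 0::2] = True          # red sites
    masks["g"][0::2, 1::2] = True          # green sites (two per 2x2 cell)
    masks["g"][1::2, 0::2] = True
    masks["b"][1::2, 1::2] = True
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for i, c in enumerate("rgb"):
        vals = np.where(masks[c], raw.astype(float), 0.0)
        weight = convolve(masks[c].astype(float), kernel, mode="mirror")
        rgb[..., i] = convolve(vals, kernel, mode="mirror") / weight   # normalized interpolation
    return rgb
```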
A Digital Approach to Learning Petrology
NASA Astrophysics Data System (ADS)
Reid, M. R.
2011-12-01
In the undergraduate igneous and metamorphic petrology course at Northern Arizona University, we are employing petrographic microscopes equipped with relatively inexpensive (approximately $200) digital cameras that are linked to pen-tablet computers. The camera-tablet systems can assist student learning in a variety of ways. Images provided by the tablet computers can be used for helping students filter the visually complex specimens they examine. Instructors and students can simultaneously view the same petrographic features captured by the cameras and exchange information about them by pointing to salient features using the tablet pen. These images can become part of a virtual mineral/rock/texture portfolio tailored to individual students' needs. Captured digital illustrations can be annotated with digital ink or computer graphics tools; this activity emulates essential features of more traditional line drawings (visualizing an appropriate feature and selecting a representative image of it, internalizing the feature through studying and annotating it) while minimizing the frustration that many students feel about drawing. In these ways, we aim to help a student progress more efficiently from novice to expert. A number of our petrology laboratory exercises involve use of the camera-tablet systems for collaborative learning. Observational responsibilities are distributed among individual members of teams in order to increase interdependence and accountability, and to encourage efficiency. Annotated digital images are used to share students' findings and arrive at an understanding of an entire rock suite. This interdependence increases the individual's sense of responsibility for their work, and reporting out encourages students to practice use of technical vocabulary and to defend their observations. Pre- and post-course student interest in the camera-tablet systems has been assessed. In a post-course survey, the majority of students reported that, if available, they would use camera-tablet systems to capture microscope images (77%) and to make notes on images (71%). An informal focus group recommended introducing the cameras as soon as possible and having them available for making personal mineralogy/petrology portfolios. Because the stakes are perceived as high, use of the camera-tablet systems for peer-peer learning has been progressively modified to bolster student confidence in their collaborative efforts.
High Dynamic Range Digital Imaging of Spacecraft
NASA Technical Reports Server (NTRS)
Karr, Brian A.; Chalmers, Alan; Debattista, Kurt
2014-01-01
The ability to capture engineering imagery with a wide dynamic range during rocket launches is critical for post-launch processing and analysis [USC03, NNC86]. Rocket launches often present an extreme range of lightness, particularly during night launches. Night launches present a two-fold problem: capturing detail of the vehicle and scene that is masked by darkness, while also capturing detail in the engine plume.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum camera. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure-from-motion (SfM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be achieved with only two light field camera captures, rather than the dozen or more captures required with traditional cameras. This effectively addresses the time-consuming and laborious aspects of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient, and accurate reconstruction.
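The paper's pipeline is sub-aperture extraction, SIFT registration, and SfM; the sketch below illustrates a generic two-view version of the registration and sparse-reconstruction steps with OpenCV. The camera matrix K, the Lowe ratio threshold, and the function names are assumptions.

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """Sparse 3-D points (up to scale) from two grayscale views and a camera matrix K."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)      # relative geometry
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)              # homogeneous points
    return (pts4d[:3] / pts4d[3]).T                                    # sparse point cloud
```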
NASA Astrophysics Data System (ADS)
Mun, Seong K.; Freedman, Matthew T.; Gelish, Anthony; de Treville, Robert E.; Sheehy, Monet R.; Hansen, Mark; Hill, Mac; Zacharia, Elisabeth; Sullivan, Michael J.; Sebera, C. Wayne
1993-01-01
An image management and communications (IMAC) network, also known as a picture archiving and communication system (PACS), consists of (1) digital image acquisition, (2) image review stations, (3) image storage devices and image reading workstations, and (4) communication capability. When these subsystems are integrated over a high-speed communication technology, the possibilities for improving the timeliness and quality of diagnostic services within a hospital or at remote clinical sites are numerous. A teleradiology system uses basically the same hardware configuration together with a long-distance communication capability. Functional characteristics of the components are highlighted. Many medical imaging systems are already digital. These digital images constitute approximately 30% of the total volume of images produced in a radiology department. The remaining 70% of images include conventional x-ray films of the chest, skeleton, abdomen, and GI tract. Unless a method of handling these conventional film images is developed, a global improvement in productivity in image management and radiology service throughout a hospital cannot be achieved. Currently, there are two methods of producing digital information representing these conventional analog images for IMAC: film digitizers that scan the conventional films, and computed radiography (CR), which captures x-ray images on a storage phosphor plate that is subsequently scanned by a laser beam.
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices for the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the use of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial rather than an airborne collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of the aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO. The single most important capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
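As a hedged illustration of the AIM estimate mentioned above (all numbers are hypothetical, not from the study), the blur in pixels is the ground distance travelled during the exposure divided by the ground sample distance, which also yields the slowest shutter speed that keeps blur below a chosen limit.

```python
# Hypothetical values for illustration; not from the study.
ground_speed = 30.0      # platform speed over ground, m/s
gsd = 0.02               # ground sample distance, m/pixel
shutter = 1.0 / 1000.0   # candidate exposure time, s
max_blur_px = 0.5        # acceptable apparent image motion, pixels

blur_px = ground_speed * shutter / gsd              # AIM blur at this shutter speed
slowest_shutter = max_blur_px * gsd / ground_speed  # longest exposure meeting the blur limit
print(f"blur at 1/1000 s: {blur_px:.2f} px")                                   # 1.50 px
print(f"slowest shutter for {max_blur_px} px blur: 1/{1/slowest_shutter:.0f} s")  # 1/3000 s
```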
Compression of CCD raw images for digital still cameras
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania
2005-03-01
Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
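The specific scheme proposed in the paper is not reproduced here; the sketch below only illustrates the block-adaptive idea under assumptions: for each block of the Bayer-pattern raw data, choose whichever simple same-color predictor gives the smallest absolute residuals, then hand the residuals to an entropy coder (e.g., Golomb-Rice, not shown).

```python
import numpy as np

# Same-colour neighbours in a Bayer mosaic are two samples away; names are hypothetical.
PREDICTORS = {
    "left2": lambda b: np.pad(b, ((0, 0), (2, 0)), mode="edge")[:, :-2],
    "up2":   lambda b: np.pad(b, ((2, 0), (0, 0)), mode="edge")[:-2, :],
}

def block_adaptive_residuals(raw, block=16):
    """Per block, pick the predictor with the smallest absolute residual sum."""
    raw = raw.astype(np.int32)
    residuals = np.empty_like(raw)
    choices = {}
    for y in range(0, raw.shape[0], block):
        for x in range(0, raw.shape[1], block):
            b = raw[y:y + block, x:x + block]
            best = min(PREDICTORS, key=lambda k: np.abs(b - PREDICTORS[k](b)).sum())
            residuals[y:y + block, x:x + block] = b - PREDICTORS[best](b)
            choices[(y, x)] = best
    return residuals, choices   # residuals would then go to an entropy coder
```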
Reducing flicker due to ambient illumination in camera captured images
NASA Astrophysics Data System (ADS)
Kim, Minwoong; Bengtson, Kurt; Li, Lisa; Allebach, Jan P.
2013-02-01
The flicker artifact dealt with in this paper is the scanning distortion that arises when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line by line within a frame, so time differences exist between the lines. This mechanism causes the captured image to be corrupted by changes in illumination; this phenomenon is called the flicker artifact. The non-content area of the captured image is used to estimate the flicker signal, which is the key to compensating the flicker artifact. The average signal of the non-content area, taken along the scan direction, has local extrema where the peaks of the flicker occur. The locations of the extrema are very useful for estimating the desired distribution of pixel intensities under the assumption that no flicker artifact exists. The flicker-reduced images compensated by our approach clearly show the reduced flicker artifact, based on visual observation.
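The authors' estimator is more elaborate than this; as a minimal hedged sketch, the row-wise mean of an assumed blank margin can serve as the flicker profile by which each image row is normalized (the margin location and width are assumptions).

```python
import numpy as np

def deflicker(img, margin=40):
    """Remove row-wise intensity modulation estimated from a non-content margin of the image."""
    strip = img[:, :margin].astype(float)            # assumed blank area beside the content
    profile = strip.mean(axis=1)                     # per-row brightness along the scan direction
    profile = profile / profile.mean()               # normalized flicker signal
    return img.astype(float) / profile[:, None]      # divide each row by its flicker factor
```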
López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín
2008-01-01
This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF format images were compared with those of the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997
Design of an aid to visual inspection workstation
NASA Astrophysics Data System (ADS)
Tait, Robert; Harding, Kevin
2016-05-01
Visual inspection is the most common means of inspecting manufactured parts for random defects such as pits, scratches, breaks, corrosion, or general wear. The reason visual inspection is needed is the very random nature of what might constitute a defect. Some defects may be very rare, seen once or twice a year, but may still be critical to part performance. Because of this random and rare nature, even the most sophisticated image analysis programs have not been able to recognize all possible defects. Key to any future automation of inspection is obtaining good sample images of what might be a defect. However, most visual checks take no images and consequently generate no digital data or historical record beyond a simple count. Any additional tool that captures such images must be able to do so without taking additional time. This paper outlines the design of a potential visual inspection station that would be compatible with current visual inspection methods but afford the means for reliable digital imaging and, in many cases, augmented capabilities to assist the inspection. Considerations in this study included resolution, depth of field, feature highlighting, ease of digital capture, annotation, inspection augmentation for repeatable registration, and operator assistance and training.
Design of video processing and testing system based on DSP and FPGA
NASA Astrophysics Data System (ADS)
Xu, Hong; Lv, Jun; Chen, Xi'ai; Gong, Xuexia; Yang, Chen'na
2007-12-01
Based on a high-speed digital signal processor (DSP) and a field programmable gate array (FPGA), a miniaturized, low-power video capture, processing, and display system is presented. In this system, a triple-buffering scheme is used for capture and display, so that the application can always obtain a new buffer without waiting. The DSP provides image processing capability and can be used to detect the boundary of the workpiece's image. A video graduation technique is used to aim at the position to be tested, which also enhances the system's flexibility. A character superposition technique realized by the DSP displays the test result on the screen in character format. This system can process image information in real time, ensure test precision, and help to enhance product quality and quality management.
Three-dimensional imaging of cultural heritage artifacts with holographic printers
NASA Astrophysics Data System (ADS)
Kang, Hoonjong; Stoykova, Elena; Berberova, Nataliya; Park, Jiyong; Nazarova, Dimana; Park, Joo Sup; Kim, Youngmin; Hong, Sunghee; Ivanov, Branimir; Malinowski, Nikola
2016-01-01
Holography is defined as a two-step process of capture and reconstruction of the light wavefront scattered from three-dimensional (3D) objects. Capture of the wavefront is possible because both amplitude and phase are encoded in the hologram as a result of the interference between the light beam coming from the object and a mutually coherent reference beam. The three-dimensional imaging provided by holography motivates the development of digital holographic imaging methods based on computer generation of holograms, in the form of a holographic display or a holographic printer. The holographic printing technique relies on combining digital 3D object representation and encoding of the holographic data with the recording of analog, white-light-viewable reflection holograms. The paper considers 3D content generation for a holographic stereogram printer and a wavefront printer as a means of analog recording of specific artifacts, which are complicated objects with regard to the restrictions of conventional analog holography.
32 CFR 161.7 - ID card life-cycle procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... provide two fingerprint biometric scans and a facial image, to assist with authenticating the applicant's... manner: (i) A digitized, full-face passport-type photograph will be captured for the facial image and stored in DEERS and shall have a plain white or off-white background. No flags, posters, or other images...
Development of a Digital Microarray with Interferometric Reflectance Imaging
NASA Astrophysics Data System (ADS)
Sevenler, Derin
This dissertation describes a new type of molecular assay for nucleic acids and proteins. We call this technique a digital microarray since it is conceptually similar to conventional fluorescence microarrays, yet it performs enumerative ('digital') counting of the number of captured molecules. Digital microarrays are approximately 10,000-fold more sensitive than fluorescence microarrays, yet maintain all of the strengths of the platform, including low cost and high multiplexing (i.e., many different tests on the same sample simultaneously). Digital microarrays use gold nanorods to label the captured target molecules. Each gold nanorod on the array is individually detected based on its light scattering, with an interferometric microscopy technique called SP-IRIS. Our optimized high-throughput version of SP-IRIS is able to scan a typical array of 500 spots in less than 10 minutes. Digital DNA microarrays may have utility in applications where sequencing is prohibitively expensive or slow. As an example, we describe a digital microarray assay for gene expression markers of bacterial drug resistance.
Matsunaga, Tomoko M; Ogawa, Daisuke; Taguchi-Shiobara, Fumio; Ishimoto, Masao; Matsunaga, Sachihiro; Habu, Yoshiki
2017-06-01
Leaf color is an important indicator when evaluating plant growth and responses to biotic/abiotic stress. Acquisition of images by digital cameras allows analysis and long-term storage of the acquired images. However, under field conditions, where light intensity can fluctuate and other factors (shade, reflection, and background, etc.) vary, stable and reproducible measurement and quantification of leaf color are hard to achieve. Digital scanners provide fixed conditions for obtaining image data, allowing stable and reliable comparison among samples, but require detached plant materials to capture images, and the destructive processes involved often induce deformation of plant materials (curled leaves and faded colors, etc.). In this study, by using a lightweight digital scanner connected to a mobile computer, we obtained digital image data from intact plant leaves grown in natural-light greenhouses without detaching the targets. We took images of soybean leaves infected by Xanthomonas campestris pv. glycines, and distinctly quantified two disease symptoms (brown lesions and yellow halos) using freely available image processing software. The image data were amenable to quantitative and statistical analyses, allowing precise and objective evaluation of disease resistance.
A simple tool for stereological assessment of digital images: the STEPanizer.
Tschanz, S A; Burri, P H; Weibel, E R
2011-07-01
STEPanizer is an easy-to-use computer-based software tool for the stereological assessment of digitally captured images from all kinds of microscopical (LM, TEM, LSM) and macroscopical (radiology, tomography) imaging modalities. The program design focuses on providing the user a defined workflow adapted to most basic stereological tasks. The software is compact, that is, user friendly without being bulky. STEPanizer comprises the creation of test systems, the appropriate display of digital images with superimposed test systems, a scaling facility, a counting module and an export function for the transfer of results to spreadsheet programs. Here we describe the major workflow of the tool, illustrating the application on two examples from transmission electron microscopy and light microscopy, respectively. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2010-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical property of the cornea, and thus clear eyesight, is threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing brightness and contrast. The digitally enhanced images of the corneal endothelium were Fourier transformed using the fast Fourier transform (FFT) and stored as new images. Tools were developed and applied for the identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained using the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively high correlation was found.
Colomb, Tristan; Dürr, Florian; Cuche, Etienne; Marquet, Pierre; Limberger, Hans G; Salathé, René-Paul; Depeursinge, Christian
2005-07-20
We present a digital holographic microscope that permits one to image the polarization state. This technique results from the coupling of digital holographic microscopy and polarization digital holography. The interference between two orthogonally polarized reference waves and the wave transmitted by a microscopic sample, magnified by a microscope objective, is recorded on a CCD camera. The off-axis geometry permits one to reconstruct, from this single hologram, two separate wavefronts that are used to image the object-wave Jones vector. We applied this technique to image the birefringence of a bent fiber. To evaluate the precision of the phase-difference measurement, the birefringence induced by internal stress in an optical fiber was measured and compared to the birefringence profile captured by a standard method developed to obtain high-resolution birefringence profiles of optical fibers.
Development of a microportable imaging system for otoscopy and nasoendoscopy evaluations.
VanLue, Michael; Cox, Kenneth M; Wade, James M; Tapp, Kevin; Linville, Raymond; Cosmato, Charlie; Smith, Tom
2007-03-01
Imaging systems for patients with cleft palate typically are not portable, but are essential for obtaining an audiovisual record of nasoendoscopy and otoscopy procedures. Practitioners who evaluate patients in rural, remote, or otherwise medically underserved areas are expected to obtain audiovisual recordings of these procedures as part of standard clinical practice. Therefore, patients must travel substantial distances to medical facilities that have standard recording equipment. This project describes the specific components, strengths, and weaknesses of an MPEG-4 digital recording system for otoscopy/nasoendoscopy evaluation of patients with cleft palate that is both portable and compatible with store-and-forward telemedicine applications. Three digital recording configurations (TabletPC, handheld digital video recorder, and an 8-mm digital camcorder) were used to record the audio/video signal from an analog video scope system. The handheld digital video recorder was most effective at capturing audio/video and displaying procedures in real time. The system described was particularly easy to use because it required no postrecording file capture or compression for later review, transfer, and/or archiving. The handheld digital recording system was assembled from commercially available components. The portability and telemedicine compatibility of the handheld digital video recorder offer a viable solution for the documentation of nasoendoscopy and otoscopy procedures in remote, rural, or other locations where reduced medical access precludes the use of larger component audio/video systems.
Automatic forensic face recognition from digital images.
Peacock, C; Goode, A; Brett, A
2004-01-01
Digital image evidence is now widely available from criminal investigations and surveillance operations, often captured by security and surveillance CCTV. This has resulted in a growing demand from law enforcement agencies for automatic person-recognition based on image data. In forensic science, a fundamental requirement for such automatic face recognition is to evaluate the weight that can justifiably be attached to this recognition evidence in a scientific framework. This paper describes a pilot study carried out by the Forensic Science Service (UK) which explores the use of digital facial images in forensic investigation. For the purpose of the experiment a specific software package was chosen (Image Metrics Optasia). The paper does not describe the techniques used by the software to reach its decision of probabilistic matches to facial images, but accepts the output of the software as though it were a 'black box'. In this way, the paper lays a foundation for how face recognition systems can be compared in a forensic framework. The aim of the paper is to explore how reliably and under what conditions digital facial images can be presented in evidence.
[Design and development of the DSA digital subtraction workstation].
Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo
2008-05-01
The DSA digital subtraction workstation was designed and developed according to the patient examination criteria and the demands of all related departments; it is introduced in this paper through an analysis of the characteristics of the video source of a DSA unit manufactured by GE that has no DICOM standard interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are converted into DICOM format and can then be shared among different machines.
Feng, Sheng; Lotz, Thomas; Chase, J Geoffrey; Hann, Christopher E
2010-01-01
Digital Image Elasto Tomography (DIET) is a non-invasive elastographic breast cancer screening technology based on image-based measurement of surface vibrations induced on a breast by mechanical actuation. Knowledge of the frequency response characteristics of a breast prior to imaging is critical to maximize the imaging signal and diagnostic capability of the system. A feasibility analysis of a non-invasive, image-based modal analysis system is presented that is able to robustly and rapidly identify resonant frequencies in soft tissue. Three images per oscillation cycle are enough to capture the behavior at a given frequency. Thus, a sweep over critical frequency ranges can be performed prior to imaging to determine critical imaging settings of the DIET system and optimize its tumor detection performance.
Recognition of degraded handwritten digits using dynamic Bayesian networks
NASA Astrophysics Data System (ADS)
Likforman-Sulem, Laurence; Sigelle, Marc
2007-01-01
We investigate in this paper the application of dynamic Bayesian networks (DBNs) to the recognition of handwritten digits. The main idea is to couple two separate HMMs into various architectures. First, a vertical HMM and a horizontal HMM are built, observing the evolving streams of image columns and image rows, respectively. Then, two coupled architectures are proposed to model interactions between these two streams and to capture the 2D nature of character images. Experiments performed on the MNIST handwritten digit database show that coupled architectures yield better recognition performance than non-coupled ones. Additional experiments conducted on artificially degraded (broken) characters demonstrate that coupled architectures cope better with such degradation than non-coupled ones and than discriminative methods such as SVMs.
Orthoscopic real-image display of digital holograms.
Makowski, P L; Kozacki, T; Zaperty, W
2017-10-01
We present a practical solution for the long-standing problem of depth inversion in real-image holographic display of digital holograms. It relies on a field lens inserted in front of the spatial light modulator device addressed by a properly processed hologram. The processing algorithm accounts for pixel size and wavelength mismatch between capture and display devices in a way that prevents image deformation. Complete images of large dimensions are observable from one position with a naked eye. We demonstrate the method experimentally on a 10-cm-long 3D object using a single full-HD spatial light modulator, but it can supplement most holographic displays designed to form a real image, including circular wide angle configurations.
Frequency domain zero padding for accurate autofocusing based on digital holography
NASA Astrophysics Data System (ADS)
Shin, Jun Geun; Kim, Ju Wan; Eom, Tae Joong; Lee, Byeong Ha
2018-01-01
The numerical refocusing feature of digital holography enables the reconstruction of a well-focused image from a digital hologram captured at an arbitrary out-of-focus plane without the supervision of end users. In general, however, the autofocusing process for obtaining a highly focused image requires considerable computational cost. In this study, to reconstruct a better-focused image, we propose a zero padding technique implemented in the frequency domain. Zero padding in the frequency domain enhances the visibility, or numerical resolution, of the image, which allows one to measure the degree of focus more accurately. A coarse-to-fine search algorithm is used to reduce the computing load, and a graphics processing unit (GPU) is employed to accelerate the process. The performance of the proposed scheme is evaluated with simulation and experiment, and the possibility of obtaining a well-refocused image with enhanced accuracy and speed is demonstrated.
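The zero-padding step itself is standard and can be sketched briefly: padding the centred spectrum of the reconstructed field enlarges the numerical resolution of the refocused image, after which a sharpness metric can be evaluated. This NumPy sketch assumes a complex reconstructed field as input; the gradient-energy focus metric shown is a generic choice and not necessarily the one used by the authors.

```python
import numpy as np

def fft_zero_pad(field, factor=2):
    """Upsample a complex reconstructed field by zero padding its spectrum."""
    h, w = field.shape
    spec = np.fft.fftshift(np.fft.fft2(field))
    padded = np.zeros((h * factor, w * factor), dtype=complex)
    y0, x0 = (h * factor - h) // 2, (w * factor - w) // 2
    padded[y0:y0 + h, x0:x0 + w] = spec
    # Scale so intensities stay comparable after the inverse transform
    return np.fft.ifft2(np.fft.ifftshift(padded)) * factor ** 2

def focus_metric(field):
    """Simple sharpness measure (gradient energy) of the intensity image."""
    intensity = np.abs(field) ** 2
    gy, gx = np.gradient(intensity)
    return np.sum(gx ** 2 + gy ** 2)
```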
NASA Astrophysics Data System (ADS)
Sampat, Nitin; Grim, John F.; O'Hara, James E.
1998-04-01
The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device, not the printer. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an Agfa digital camera and an ink-jet printer combination. Using Adobe Photoshop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors; the result was visually more pleasing than images captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
Comparison of digital intraoral scanners by single-image capture system and full-color movie system.
Yamamoto, Meguru; Kataoka, Yu; Manabe, Atsufumi
2017-01-01
The use of dental computer-aided design/computer-aided manufacturing (CAD/CAM) restoration is rapidly increasing. This study was performed to evaluate the marginal and internal cement thickness and the adhesive gap of internal cavities comprising CAD/CAM materials using two digital impression acquisition methods and micro-computed tomography. Images obtained by a single-image acquisition system (Bluecam Ver. 4.0) and a full-color video acquisition system (Omnicam Ver. 4.2) were assigned to the BL and OM groups, respectively. Silicone impressions were prepared from an ISO-standard metal mold, and CEREC Stone BC and New Fuji Rock IMP were used to create working models (n=20) in the BL and OM groups (n=10 per group), respectively. Individual inlays were designed in a conventional manner using designated software, and all restorations were prepared using CEREC inLab MC XL. These were assembled with the corresponding working models used for measurement, and the level of fit was examined by three-dimensional analysis based on micro-computed tomography. Significant differences in the marginal and internal cement thickness and adhesive gap spacing were found between the OM and BL groups. The full-color movie capture system appears to be better suited for restorations than the single-image capture system.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable image capture with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. The application of this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that carries grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in real-time identification of small particles (beads), serving as surrogate biological cells, flowing through a microfluidic channel.
Image microarrays (IMA): Digital pathology's missing tool
Hipp, Jason; Cheng, Jerome; Pantanowitz, Liron; Hewitt, Stephen; Yagi, Yukako; Monaco, James; Madabhushi, Anant; Rodriguez-canales, Jaime; Hanson, Jeffrey; Roy-Chowdhuri, Sinchita; Filie, Armando C.; Feldman, Michael D.; Tomaszewski, John E.; Shih, Natalie NC.; Brodsky, Victor; Giaccone, Giuseppe; Emmert-Buck, Michael R.; Balis, Ulysses J.
2011-01-01
Introduction: The increasing availability of whole slide imaging (WSI) data sets (digital slides) from glass slides offers new opportunities for the development of computer-aided diagnostic (CAD) algorithms. With the all-digital pathology workflow that these data sets will enable in the near future, literally millions of digital slides will be generated and stored. Consequently, the field in general, and pathologists specifically, will need tools to help extract actionable information from this new and vast collective repository. Methods: To address this limitation, we designed and implemented a tool (dCORE) to enable the systematic capture of image tiles with constrained size and resolution that contain desired histopathologic features. Results: In this communication, we describe a user-friendly tool that will enable pathologists to mine digital slide archives to create image microarrays (IMAs). IMAs are to digital slides as tissue microarrays (TMAs) are to cell blocks. Thus, a single digital slide could be transformed into an array of hundreds to thousands of high-quality digital images, each containing key diagnostic morphologies and appropriate controls. Current manual digital image cut-and-paste methods that allow for the creation of a grid of images (such as an IMA) of matching resolutions are tedious. Conclusion: The ability to create IMAs representing hundreds to thousands of vetted morphologic features has numerous applications in education, proficiency testing, consensus case review, and research. Lastly, in a manner analogous to the way conventional TMA technology has significantly accelerated in situ studies of tissue specimens, the use of IMAs has similar potential to significantly accelerate CAD algorithm development. PMID:22200030
Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling
NASA Astrophysics Data System (ADS)
Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.
2016-04-01
Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured with the device's digital camera, and an interface is available for annotating (interpreting) the image using lines and polygons. Image-to-geometry registration is then performed using a developed algorithm, initialised using the coarse pose from the on-board orientation and positioning sensors. The annotations made on the captured images are then available in the 3D model coordinate system for overlay and export. This workflow allows geologists to make interpretations and conceptual models in the field, which can then be linked to and refined in office workflows for later MPS property modelling.
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and personal computer would perform the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to provide serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
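The core of the shock-location step, finding the negative peak in a one-dimensional intensity profile, can be sketched in a few lines. The sketch below is only a software illustration of the principle; the actual innovation performs this in dedicated analog and digital circuitry, and the smoothing window and sub-pixel refinement shown here are assumptions.

```python
import numpy as np

def shock_pixel(profile, window=5):
    """Locate the shadowgraph dip (negative peak) in a 1-D intensity profile."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(profile.astype(float), kernel, mode="same")
    return int(np.argmin(smoothed))          # pixel index of the darkest point

def subpixel_refine(profile, i):
    """Parabolic interpolation around the minimum for sub-pixel location."""
    if 0 < i < len(profile) - 1:
        a, b, c = float(profile[i - 1]), float(profile[i]), float(profile[i + 1])
        denom = a - 2 * b + c
        if denom != 0:
            return i + 0.5 * (a - c) / denom
    return float(i)
```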
Automatic Mexican sign language and digits recognition using normalized central moments
NASA Astrophysics Data System (ADS)
Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina
2016-09-01
This work presents a framework for automatic Mexican sign language and digit recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured by a digital IP camera; four LED reflectors and a green background are used in order to reduce computational cost and avoid the need for special gloves. 42 normalized central moments are computed per frame and fed to a multi-layer perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
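Normalized central moments are a standard, scale-invariant shape descriptor, and a minimal version of the feature computation can be sketched as follows. This is an illustrative NumPy implementation of the textbook definition η_pq = μ_pq / μ00^(1+(p+q)/2), not the authors' code; the maximum order and function name are assumptions.

```python
import numpy as np

def normalized_central_moments(img, max_order=3):
    """Compute normalized central moments eta_pq of a grayscale or binary image."""
    img = img.astype(float)
    y, x = np.indices(img.shape)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
    etas = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if 2 <= p + q <= max_order:
                mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
                etas[(p, q)] = mu / m00 ** (1 + (p + q) / 2)  # scale-invariant
    return etas
```

The resulting dictionary of η values (flattened to a vector) would then serve as the per-frame input features to the perceptron classifier.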
Russian Character Recognition using Self-Organizing Map
NASA Astrophysics Data System (ADS)
Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.
2017-01-01
The World Tourism Organization (UNWTO) reported in 2014 that 28 million visitors travel to Russia. Many visitors may have problems typing Russian words when using a digital dictionary. This is because the Cyrillic letters used in Russia and the surrounding countries have different shapes than Latin letters, and visitors may not be familiar with Cyrillic. This research proposes an alternative way to input Cyrillic words: instead of typing the Cyrillic words directly, a camera can be used to capture an image of the words as input. The captured image is cropped, then several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation and thinning. Next, the feature extraction process is applied to the image. Cyrillic letters in the image are recognized by utilizing the Self-Organizing Map (SOM) algorithm. SOM successfully recognized 89.09% of Cyrillic letters from computer-generated images and 88.89% of Cyrillic letters from images captured by a smartphone's camera. For word recognition, SOM fully recognized 292 words and partially recognized 58 words from the images captured by the smartphone's camera. Therefore, the accuracy of word recognition using SOM is 83.42%.
ERIC Educational Resources Information Center
Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol
2011-01-01
The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
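The wavelength estimate follows from the first-order grating equation, d sin θ = λ, with θ obtained from the measured displacement of the first-order fringe relative to the central spot in the captured image. The numbers below are purely hypothetical (a 600 lines/mm grating and made-up pixel distances) and serve only to show the arithmetic.

```python
import numpy as np

# Hypothetical values for illustration only
grating_period_m = 1.0 / 600e3       # 600 lines/mm grating -> period in metres
fringe_offset_px = 2180              # distance of first-order spot from the centre
screen_distance_px = 3200            # grating-to-screen distance in the same pixel scale

theta = np.arctan(fringe_offset_px / screen_distance_px)
wavelength_m = grating_period_m * np.sin(theta)   # first-order grating equation
print(f"estimated wavelength: {wavelength_m * 1e9:.0f} nm")   # ~940 nm, typical IR LED
```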
Fundamentals of image acquisition and processing in the digital era.
Farman, A G
2003-01-01
To review the historic context for digital imaging in dentistry and to outline the fundamental issues related to digital imaging modalities. Digital dental X-ray images can be achieved by scanning analog film radiographs (secondary capture), with photostimulable phosphors, or using solid-state detectors (e.g. charge-coupled device and complementary metal oxide semiconductor). There are four characteristics basic to all digital image detectors: size of the active area, signal-to-noise ratio, contrast resolution and spatial resolution. To perceive structure in a radiographic image, there needs to be sufficient difference between contrasting densities. This primarily depends on the differences in the attenuation of the X-ray beam by adjacent tissues. It also depends on the signal received; therefore, contrast tends to increase with increased exposure. Given adequate signal and sufficient differences in radiodensity, contrast will be sufficient to differentiate between adjacent structures, irrespective of the recording modality and processing used. Where contrast is not sufficient, digital images can sometimes be post-processed to disclose details that would otherwise go undetected. For example, cephalogram isodensity mapping can improve soft tissue detail. It is concluded that it could be a further decade or two before three-dimensional digital imaging systems entirely replace two-dimensional analog films. Such systems need not only to produce prettier images, but also to provide a demonstrable, evidence-based higher standard of care at a cost that is not economically prohibitive for the practitioner or society, and which allows efficient and effective workflow within the business of dental practice.
Measuring food intake with digital photography
Martin, Corby K.; Nicklas, Theresa; Gunturk, Bahadir; Correa, John B.; Allen, H. Raymond; Champagne, Catherine
2014-01-01
The Digital Photography of Foods Method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared to images of "standard" portions of food using a computer application. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. Herein, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a smartphone, and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analyzed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behavior and to receive dietary recommendations to achieve weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children are also reviewed. The body of research reviewed herein demonstrates that digital imaging accurately estimates food intake in many environments and has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and incorporation of computer automation to improve the accuracy, efficiency, and cost-effectiveness of the method. PMID:23848588
NASA Technical Reports Server (NTRS)
Klassen, S. P.; Ritchie, G.; Frantz, J. M.; Pinnock, D.; Bugbee, B.
2003-01-01
Cumulative absorbed radiation is highly correlated with crop biomass and yield. In this chapter we describe the use of a digital camera and commercial imaging software for estimating daily radiation capture, canopy photosynthesis, and relative growth rate. Digital images were used to determine percentage of ground cover of lettuce (Lactuca sativa L.) communities grown at five temperatures. Plants were grown in a steady-state, 10-chamber CO2 gas exchange system, which was used to measure canopy photosynthesis and daily carbon gain. Daily measurements of percentage of ground cover were highly correlated with daily measurements of both absorbed radiation (r² = 0.99) and daily carbon gain (r² = 0.99). Differences among temperature treatments indicated that these relationships were influenced by leaf angle, leaf area index, and chlorophyll content. An analysis of the daily images also provided good estimates of relative growth rates, which were verified by gas exchange measurements of daily carbon gain. In a separate study we found that images taken at hourly intervals were effective for monitoring real-time growth. Our data suggest that hourly images can be used for early detection of plant stress. Applications, limitations, and potential errors are discussed. We have long known that crop yield is determined by the efficiency of four component processes: (i) radiation capture, (ii) quantum yield, (iii) carbon use efficiency, and (iv) carbon partitioning efficiency (Charles-Edwards, 1982; Penning de Vries & van Laar, 1982; Thornley, 1976). More than one-half century ago, Watson (1947, 1952) showed that variation in radiation capture accounted for almost all of the variation in yield between sites in temperate regions, because the three other components are relatively constant when the crop is not severely stressed. More recently, Monteith (1977) reviewed the literature on the close correlation between radiation capture and yield. Bugbee and Monje (1992) demonstrated the close relationship between absorbed radiation and yield in an optimal environment.
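Percentage ground cover is essentially the fraction of image pixels classified as canopy. A minimal sketch of such a classification, using the common excess-green vegetation index rather than the commercial software used in the chapter, is shown below; the threshold value and function name are assumptions.

```python
import numpy as np
from PIL import Image

def percent_ground_cover(path, excess_green_threshold=20):
    """Fraction of pixels classified as canopy using the excess-green index."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                       # excess-green vegetation index
    return float((exg > excess_green_threshold).mean() * 100.0)
```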
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, namely the dark image, the base correction image, and the reference level, as well as the range of application of the correction, using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by using a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image, and by taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
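The described correction can be illustrated with a generic flat-field-style formulation consistent with the abstract's three ingredients: a dark image, a base correction image, and a reference level taken as the mean digital level. The sketch below is an assumption about the algebraic form, not the authors' exact optimized algorithm.

```python
import numpy as np

def correct_nonuniformity(raw, dark, base):
    """Pixel-wise flat-field-style correction: scale each pixel so the base
    correction image becomes uniform at the reference level (here taken as
    the mean digital level of the dark-subtracted base image)."""
    raw_sub = raw.astype(float) - dark
    flat = base.astype(float) - dark
    reference = flat.mean()                    # reference digital level
    return np.where(flat > 0, raw_sub * reference / flat, 0.0)
```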
NASA Astrophysics Data System (ADS)
Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.
2017-11-01
This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the point spread function (PSF) during the camera exposure window. The deconvolution process, involving iterative matrix calculations over pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, the proposed method gives the best image restoration. The method has been evaluated using a Hopkinson bar loading system; compared with the blurry input, the image was successfully restored. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurements.
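For reference, the Lucy-Richardson deconvolution that the paper compares against can be written as a short iterative update once a PSF estimate is available. The sketch below is a plain CPU implementation using SciPy's FFT convolution; it is not the authors' GPU code and does not include their dynamics-based PSF estimation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30, eps=1e-7):
    """Minimal Richardson-Lucy deconvolution (CPU reference implementation)."""
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)          # data / current model
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```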
A review on brightness preserving contrast enhancement methods for digital image
NASA Astrophysics Data System (ADS)
Rahman, Md Arifur; Liu, Shilong; Li, Ruowei; Wu, Hongkun; Liu, San Chi; Jahan, Mahmuda Rawnak; Kwok, Ngaiming
2018-04-01
Image enhancement is an imperative step for many vision-based applications. For image contrast enhancement, popular methods adopt the principle of spreading the captured intensities throughout the allowed dynamic range according to predefined distributions. However, these algorithms take little or no account of maintaining the mean brightness of the original scene, which is of paramount importance for conveying the true scene illumination characteristics to the viewer. Though a significant number of reviews of contrast enhancement methods have been published, an updated review of brightness-preserving image enhancement methods is still scarce. In this paper, a detailed survey is performed of those particular methods that specifically aim to maintain the overall scene illumination characteristics while enhancing the digital image.
Black, J A; Waggamon, K A
1992-01-01
An isoelectric focusing method using thin-layer agarose gel has been developed for wheat gliadin. Using flat-bed units with a third electrode, up to 72 samples per gel may be analyzed. Advantages over traditional acid polyacrylamide gel electrophoresis methodology include: faster run times, nontoxic media, and greater sample capacity. The method is suitable for fingerprinting or purity testing of wheat varieties. Using digital images captured by a flat-bed scanner, a 4-band reference system using isoelectric points was devised. Software enables separated bands to be assigned pI values based upon reference tracks. Precision of assigned isoelectric points is shown to be on the order of 0.02 pH units. Captured images may be stored in a computer database and compared to unknown patterns to enable an identification. Parameters for a match with a stored pattern may be adjusted for pI interval required for a match, and number of best matches.
Study on a High Compression Processing for Video-on-Demand e-learning System
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
The authors proposed a high-quality and small-capacity lecture-video-file creation system for a distance e-learning system. Examining the features of the lecturing scene, the authors ingeniously employ two kinds of image-capturing equipment having complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment and integrating them with image processing, we can produce course materials with greatly reduced file size: the course materials satisfy the requirements both for the temporal resolution needed to see the lecturer's pointing actions and for the high spatial resolution needed to read small written letters. As a result of a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.
Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts
NASA Technical Reports Server (NTRS)
Grau, David
2012-01-01
This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. The same principle can also be used to determine thickening of a material relative to a thinner region, or to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve, or a digital x-ray device producing like characteristics, is necessary. A film with linear characteristics would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format; however, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as ImageJ freeware, is needed. The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined-component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
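Once densities have been measured over the penetrameter steps and over the region of interest, the thickness estimate reduces to interpolating the calibration curve. The numbers in the sketch below are hypothetical and only illustrate the interpolation step.

```python
import numpy as np

# Hypothetical calibration: mean digital density measured over each machined
# step of known thickness (penetrameter), plus the region of interest.
step_thickness_in = np.array([0.000, 0.020, 0.040, 0.060])   # added thickness
step_density      = np.array([3.10, 2.65, 2.24, 1.88])       # measured density

roi_density = 2.41
# Density falls as thickness rises, so sort ascending before interpolating.
order = np.argsort(step_density)
roi_thickness = np.interp(roi_density, step_density[order], step_thickness_in[order])
print(f"estimated thickness change: {roi_thickness:.3f} in")
```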
Locating and decoding barcodes in fuzzy images captured by smart phones
NASA Astrophysics Data System (ADS)
Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
With the development of barcodes for commercial use, the demand for detecting barcodes with smartphones has become increasingly pressing. The low quality of barcode images captured by mobile phones often degrades decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length, together with a highly fault-tolerant algorithm for decoding barcodes. Unlike existing approaches, the location algorithm is based on the edge segment length of EAN-13 barcodes, while our decoding algorithm allows the appearance of fuzzy regions in the barcode image. Experiments are performed on damaged, contaminated and scratched digital images, and provide quite promising results for EAN-13 barcode location and decoding.
Creep Measurement Video Extensometer
NASA Technical Reports Server (NTRS)
Jaster, Mark; Vickerman, Mary; Padula, Santo, II; Juhas, John
2011-01-01
Understanding material behavior under load is critical to the efficient and accurate design of advanced aircraft and spacecraft. Technologies such as the one disclosed here allow accurate creep measurements to be taken automatically, reducing error. The goal was to develop a non-contact, automated system capable of capturing images that could subsequently be processed to obtain the strain characteristics of these materials during deformation, while maintaining adequate resolution to capture the true deformation response of the material. The measurement system comprises a high-resolution digital camera, computer, and software that work collectively to interpret the image.
Silva, Paolo S; Walia, Saloni; Cavallerano, Jerry D; Sun, Jennifer K; Dunn, Cheri; Bursell, Sven-Erik; Aiello, Lloyd M; Aiello, Lloyd Paul
2012-09-01
To compare agreement between diagnosis of clinical level of diabetic retinopathy (DR) and diabetic macular edema (DME) derived from nonmydriatic fundus images using a digital camera back optimized for low-flash image capture (MegaVision) compared with standard seven-field Early Treatment Diabetic Retinopathy Study (ETDRS) photographs and dilated clinical examination. Subject comfort and image acquisition time were also evaluated. In total, 126 eyes from 67 subjects with diabetes underwent Joslin Vision Network nonmydriatic retinal imaging. ETDRS photographs were obtained after pupillary dilation, and fundus examination was performed by a retina specialist. There was near-perfect agreement between MegaVision and ETDRS photographs (κ=0.81, 95% confidence interval [CI] 0.73-0.89) for clinical DR severity levels. Substantial agreement was observed with clinical examination (κ=0.71, 95% CI 0.62-0.80). For DME severity level there was near-perfect agreement with ETDRS photographs (κ=0.92, 95% CI 0.87-0.98) and moderate agreement with clinical examination (κ=0.58, 95% CI 0.46-0.71). The wider MegaVision 45° field led to identification of nonproliferative changes in areas not imaged by the 30° field of ETDRS photos. Field area unique to ETDRS photographs identified proliferative changes not visualized with MegaVision. Mean MegaVision acquisition time was 9:52 min. After imaging, 60% of subjects preferred the MegaVision lower flash settings. When evaluated using a rigorous protocol, images captured using a low-light digital camera compared favorably with ETDRS photography and clinical examination for grading level of DR and DME. Furthermore, these data suggest the importance of more extensive peripheral images and suggest that utilization of wide-field retinal imaging may further improve accuracy of DR assessment.
Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images
NASA Astrophysics Data System (ADS)
Kruschwitz, Jennifer D. T.
Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.
Fisheye image rectification using spherical and digital distortion models
NASA Astrophysics Data System (ADS)
Li, Xin; Pi, Yingdong; Jia, Yanling; Yang, Yuhui; Chen, Zhiyong; Hou, Wenguang
2018-02-01
Fisheye cameras have been widely used in many applications, including close-range visual navigation and observation and cyber city reconstruction, because their field of view is much larger than that of a common pinhole camera. This means that a fisheye camera can capture more information than a pinhole camera in the same scenario. However, the fisheye image contains serious distortion, which may cause trouble for human observers in recognizing the objects within it. Therefore, in most practical applications, the fisheye image should be rectified to a pinhole perspective projection image to conform to human cognitive habits. Traditional mathematical model-based methods cannot effectively remove the distortion, while the digital distortion model reduces the image resolution to some extent. Considering these defects, this paper proposes a new method that combines the physical spherical model and the digital distortion model. The distortion of fisheye images can be effectively removed with the proposed approach. Many experiments validate its feasibility and effectiveness.
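A simple way to see the spherical part of such a rectification is to remap a perspective output grid back into fisheye coordinates with the equidistant model r = f·θ. The sketch below assumes an ideal equidistant fisheye with the principal point at the image centre and omits the digital distortion model, so it is only a simplified illustration rather than the paper's combined approach.

```python
import numpy as np
import cv2

def rectify_equidistant(fisheye, f_fish, f_persp, out_size):
    """Remap an equidistant fisheye image (r = f * theta) onto a pinhole
    perspective grid (r = f * tan(theta)). Principal point assumed centred."""
    h_out, w_out = out_size
    h_in, w_in = fisheye.shape[:2]
    cx_in, cy_in = w_in / 2.0, h_in / 2.0
    u, v = np.meshgrid(np.arange(w_out) - w_out / 2.0,
                       np.arange(h_out) - h_out / 2.0)
    r_persp = np.hypot(u, v)
    theta = np.arctan(r_persp / f_persp)        # viewing angle of each output pixel
    r_fish = f_fish * theta                     # equidistant fisheye radius
    scale = np.where(r_persp > 0, r_fish / r_persp, 0.0)
    map_x = (u * scale + cx_in).astype(np.float32)
    map_y = (v * scale + cy_in).astype(np.float32)
    return cv2.remap(fisheye, map_x, map_y, cv2.INTER_LINEAR)
```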
Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean
Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave
2009-01-01
Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water leaving light. Different systems were deployed to capture upwelling light from below the surface, while eliminating direct surface reflection. Relationships between RGB ratios of water surface images, and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This current paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
Common-path biodynamic imaging for dynamic fluctuation spectroscopy of 3D living tissue
NASA Astrophysics Data System (ADS)
Li, Zhe; Turek, John; Nolte, David D.
2017-03-01
Biodynamic imaging is a novel 3D optical imaging technology based on short-coherence digital holography that measures intracellular motions of cells inside their natural microenvironments. Here both common-path and Mach-Zehnder designs are presented. Biological tissues such as tumor spheroids and ex vivo biopsies are used as targets, and backscattered light is collected as signal. Drugs are applied to samples, and their effects are evaluated by identifying biomarkers that capture intracellular dynamics from the reconstructed holograms. Through digital holography and coherence gating, information from different depths of the samples can be extracted, enabling the deep-tissue measurement of the responses to drugs.
MSTB 2 x 6-Inch Low Speed Tunnel Turbulence Generator Grid/Honeycomb PIV Measurements and Analysis
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
An assessment of the turbulence levels present in the Measurement Science and Technology (MSTB) branch's 2 x 6-inch low speed wind tunnel was made using Particle Image Velocimetry (PIV), and a turbulence generator consisting of a grid/honeycomb structure. Approximately 3000 digital PIV images were captured and analyzed covering an approximate 2 x 6-inch area along the centerline of the tunnel just beyond the turbulence generator system. Custom software for analysis and acquisition was developed for semi-automated digital PIV image acquisition and analysis. Comparisons between previously obtained LTA and LV turbulence measurements taken in the tunnel are presented.
Parallax Player: a stereoscopic format converter
NASA Astrophysics Data System (ADS)
Feldman, Mark H.; Lipton, Lenny
2003-05-01
The Parallax Player is a software application that is, in essence, a stereoscopic format converter. Various formats may be inputted and outputted: in addition to taking any one of a wide variety of formats and playing them back on many different kinds of PCs and display screens, the Parallax Player has built into it the capability to produce ersatz stereo from a planar still or movie image. The player handles two basic forms of digital content: still images and movies. It is assumed that all data is digital, either created by means of a photographic film process and later digitized, or directly captured or authored in digital form. In its current implementation, running on a number of Windows operating systems, the Parallax Player reads in a broad selection of contemporary file formats.
Nair, Madhu K; Pettigrew, James C; Loomis, Jeffrey S; Bates, Robert E; Kostewicz, Stephen; Robinson, Boyd; Sweitzer, Jean; Dolan, Teresa A
2009-06-01
The implementation of digital radiography in dentistry in a large healthcare enterprise setting is discussed. A distinct need for a dedicated dental picture archiving and communication system (PACS) exists for seamless integration of different vendor products across the system. Complex issues were contended with as each clinical department, with its unique needs and workflow patterns, migrated to a digital environment. The University of Florida installed a dental PACS over two years ago. This paper describes the process of conversion from film-based imaging from the planning stages through clinical implementation. Dentistry poses many unique challenges as it strives to achieve better integration with systems primarily designed for medical imaging; however, the technical requirements for high-resolution image capture in dentistry far exceed those in medicine, as most routine dental diagnostic tasks are challenging. The significance of specification, evaluation, vendor selection, installation, trial runs, training, and phased clinical implementation is emphasized.
Multispectral imaging approach for simplified non-invasive in-vivo evaluation of gingival erythema
NASA Astrophysics Data System (ADS)
Eckhard, Timo; Valero, Eva M.; Nieves, Juan L.; Gallegos-Rueda, José M.; Mesa, Francisco
2012-03-01
Erythema is a common visual sign of gingivitis. In this work, a new and simple low-cost image capture and analysis method for erythema assessment is proposed. The method is based on digital still images of gingivae and applied on a pixel-by-pixel basis. Multispectral images are acquired with a conventional digital camera and multiplexed LED illumination panels at 460nm and 630nm peak wavelength. An automatic work-flow segments teeth from gingiva regions in the images and creates a map of local blood oxygenation levels, which relates to the presence of erythema. The map is computed from the ratio of the two spectral images. An advantage of the proposed approach is that the whole process is easy to manage by dental health care professionals in clinical environment.
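The erythema map reduces to a per-pixel ratio of the two spectral images within the segmented gingiva. The sketch below assumes the 630 nm image is divided by the 460 nm image and that a gingiva mask is already available from the segmentation step; both choices are assumptions about details the abstract does not specify.

```python
import numpy as np

def erythema_map(img_460, img_630, gingiva_mask):
    """Relative erythema/oxygenation map from two narrow-band images."""
    red = img_630.astype(float)
    blue = img_460.astype(float)
    ratio = np.where(blue > 0, red / blue, 0.0)   # per-pixel spectral ratio
    ratio[~gingiva_mask] = np.nan                 # keep only gingiva pixels
    return ratio
```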
Experiences with semiautomatic aerotriangulation on digital photogrammetric stations
NASA Astrophysics Data System (ADS)
Kersten, Thomas P.; Stallmann, Dirk
1995-12-01
With the development of higher-resolution scanners, faster image-handling capabilities, and higher-resolution screens, digital photogrammetric workstations promise to rival conventional analytical plotters in functionality, i.e. in the degree of automation in data capture and processing, and in accuracy. The availability of high quality digital image data and inexpensive, high-capacity, fast mass storage offers the capability to perform accurate semi-automatic or automatic triangulation of digital aerial photo blocks on digital photogrammetric workstations instead of analytical plotters. In this paper, we present our investigations and results on two photogrammetric triangulation blocks, the OEEPE (European Organisation for Experimental Photogrammetric Research) test block (scale 1:4'000) and a Swiss test block (scale 1:12'000), using digitized images. Twenty-eight images of the OEEPE test block were scanned on the Zeiss/Intergraph PS1 and the digital images were delivered with a resolution of 15 micrometer and 30 micrometer, while 20 images of the Swiss test block were scanned on the Desktop Publishing Scanner Agfa Horizon with a resolution of 42 micrometer and on the PS1 with 15 micrometer. Measurements in the digital images were performed on the commercial Digital Photogrammetric Station Leica/Helava DPW770 and with basic hard- and software components of the Digital Photogrammetric Station DIPS II, an experimental system of the Institute of Geodesy and Photogrammetry, ETH Zurich. As a reference, the analog images of both photogrammetric test blocks were measured at analytical plotters. On DIPS II, measurements of fiducial marks, signalized and natural tie points were performed by least squares template and image matching, while on the DPW770 all points were measured by the cross correlation technique. The observations were adjusted in a self-calibrating bundle adjustment. The comparisons between these results and the experiences with the functionality of the commercial and the experimental system are presented.
Pan, Bing; Jiang, Tianyun; Wu, Dafang
2014-11-01
In thermomechanical testing of hypersonic materials and structures, direct observation and quantitative strain measurement of the front surface of a test specimen directly exposed to severe aerodynamic heating has been considered a very challenging task. In this work, a novel quartz infrared heating device with an observation window is designed to reproduce the transient thermal environment experienced by hypersonic vehicles. The specially designed experimental system allows the capture of the test article's surface images at various temperatures using an optical system outfitted with a bandpass filter. The captured images are post-processed by digital image correlation to extract full-field thermal deformation. To verify the viability and accuracy of the established system, thermal strains of a chromium-nickel austenitic stainless steel sample heated from room temperature up to 600 °C were determined. The preliminary results indicate that the air disturbance between the camera and the specimen due to heat haze induces apparent distortions in the recorded images and large errors in the measured strains, but the average values of the measured strains are accurate enough. Limitations and further improvements of the proposed technique are discussed.
Semantic classification of business images
NASA Astrophysics Data System (ADS)
Erol, Berna; Hull, Jonathan J.
2006-01-01
Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features with high-level OCR output analysis. Several Support Vector Machine Classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.
Non-interferometric quantitative phase imaging of yeast cells
NASA Astrophysics Data System (ADS)
Poola, Praveen K.; Pandiyan, Vimal Prabhu; John, Renu
2015-12-01
Real-time imaging of live cells is quite difficult without the addition of external contrast agents. Various methods for quantitative phase imaging of living cells have been proposed, such as digital holographic microscopy and diffraction phase microscopy. In this paper, we report theoretical and experimental results of quantitative phase imaging of live yeast cells with nanometric precision using the transport of intensity equation (TIE). We demonstrate nanometric depth sensitivity in imaging live yeast cells using this technique. Being non-interferometric, the technique does not need any coherent light source, and images can be captured through a regular bright-field microscope. This real-time imaging technique would deliver the depth or 3-D volume information of cells and is highly promising for real-time digital pathology applications, screening of pathogens, and staging of diseases such as malaria, as it does not need any preprocessing of samples.
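For reference, the standard transport-of-intensity relation that underlies this kind of non-interferometric phase retrieval links the axial intensity derivative, estimated from slightly defocused bright-field images, to the transverse phase gradient:

```latex
% Transport of intensity equation (TIE)
\[
  -k\,\frac{\partial I(x,y,z)}{\partial z}
  \;=\;
  \nabla_{\!\perp}\!\cdot\!\bigl( I(x,y,z)\,\nabla_{\!\perp}\,\phi(x,y,z) \bigr),
\]
% where k = 2*pi/lambda, I is the measured intensity, phi is the sought phase,
% and the left-hand derivative is approximated by finite differences between
% defocused image planes.
```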
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
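The radial distortion fingerprint typically rests on a low-order polynomial model of the form r_d = r(1 + k1·r² + k2·r⁴), with the fitted coefficients per image feeding the SVM. The sketch below only shows the forward distortion model applied to normalized image coordinates; the coefficient-fitting step and the exact feature construction used by the authors are not reproduced and the function name is an assumption.

```python
import numpy as np

def radial_distort(xy, k1, k2):
    """Apply first/second-order radial lens distortion to normalized image
    points: r_d = r * (1 + k1*r^2 + k2*r^4). The per-image (k1, k2) estimates
    serve as distortion features in this style of source-camera classifier."""
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)   # squared radius per point
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)
```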
Smartphone adapters for digital photomicrography.
Roy, Somak; Pantanowitz, Liron; Amin, Milon; Seethala, Raja R; Ishtiaque, Ahmed; Yousem, Samuel A; Parwani, Anil V; Cucoranu, Ioan; Hartman, Douglas J
2014-01-01
Photomicrographs in Anatomic Pathology provide a means of quickly sharing information from a glass slide for consultation, education, documentation and publication. While static image acquisition historically involved the use of a permanently mounted camera unit on a microscope, such cameras may be expensive, need to be connected to a computer, and often require proprietary software to acquire and process images. Another novel approach for capturing digital microscopic images is to use smartphones coupled with the eyepiece of a microscope. Recently, several smartphone adapters have emerged that allow users to attach mobile phones to the microscope. The aim of this study was to test the utility of these various smartphone adapters. We surveyed the market for adapters to attach smartphones to the ocular lens of a conventional light microscope. Three adapters (Magnifi, Skylight and Snapzoom) were tested. We assessed the designs of these adapters and their effectiveness at acquiring static microscopic digital images. All adapters facilitated the acquisition of digital microscopic images with a smartphone. The optimal adapter was dependent on the type of phone used. The Magnifi adapters for iPhone were incompatible when using a protective case. The Snapzoom adapter was easiest to use with iPhones and other smartphones even with protective cases. Smartphone adapters are inexpensive and easy to use for acquiring digital microscopic images. However, they require some adjustment by the user in order to optimize focus and obtain good quality images. Smartphone microscope adapters provide an economically feasible method of acquiring and sharing digital pathology photomicrographs.
Optical character recognition of camera-captured images based on phase features
NASA Astrophysics Data System (ADS)
Diaz-Escobar, Julia; Kober, Vitaly
2015-09-01
Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains much of the important information, regardless of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.
2004-09-20
ISS009-E-23888 (20 September 2004) --- Downtown Pittsburgh, with its swollen, muddy rivers, is featured in this image photographed from the International Space Station (ISS). Astronaut Edward M. (Mike) Fincke, Expedition 9 NASA ISS science officer and flight engineer, who is a native of Emsworth, captured this image with a digital camera at 5 p.m. on Monday, September 20, 2004.
2013-01-15
S48-E-007 (12 Sept 1991) --- Astronaut James F. Buchli, mission specialist, catches snack crackers as they float in the weightless environment of the earth-orbiting Discovery. This image was transmitted by the Electronic Still Camera, Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE
A pathologist-designed imaging system for anatomic pathology signout, teaching, and research.
Schubert, E; Gross, W; Siderits, R H; Deckenbaugh, L; He, F; Becich, M J
1994-11-01
Pathology images are derived from gross surgical specimens, light microscopy, immunofluorescence, electron microscopy, molecular diagnostic gels, flow cytometry, image analysis data, and clinical laboratory data in graphic form. We have implemented a network of desktop personal computers (PCs) that allow us to easily capture, store, and retrieve gross and microscopic, anatomic, and research pathology images. System architecture involves multiple image acquisition and retrieval sites and a central file server for storage. The digitized images are conveyed via a local area network to and from image capture or display stations. Acquisition sites consist of a high-resolution camera connected to a frame grabber card in a 486-type personal computer, equipped with 16 MB (Table 1) RAM, a 1.05-gigabyte hard drive, and a 32-bit ethernet card for access to our anatomic pathology reporting system. We have designed a push-button workstation for acquiring and indexing images that does not significantly interfere with surgical pathology sign-out. Advantages of the system include the following: (1) Improving patient care: the availability of gross images at time of microscopic sign-out, verification of recurrence of malignancy from archived images, monitoring of bone marrow engraftment and immunosuppressive intervention after bone marrow/solid organ transplantation on repeat biopsies, and ability to seek instantaneous consultation with any pathologist on the network; (2) enhancing the teaching environment: building a digital surgical pathology atlas, improving the availability of images for conference support, and sharing cases across the network; (3) enhancing research: case study compilation, metastudy analysis, and availability of digitized images for quantitative analysis and permanent/reusable image records for archival study; and (4) other practical and economic considerations: storing case requisition images and hand-drawn diagrams deters the spread of gross room contaminants and results in considerable cost savings in photographic media for conferences, improved quality assurance by porting control stains across the network, and a multiplicity of other advantages that enhance image and information management in pathology.
A novel smartphone ophthalmic imaging adapter: User feasibility studies in Hyderabad, India
Ludwig, Cassie A; Murthy, Somasheila I; Pappuru, Rajeev R; Jais, Alexandre; Myung, David J; Chang, Robert T
2016-01-01
Aim of Study: To evaluate the ability of ancillary health staff to use a novel smartphone imaging adapter system (EyeGo, now known as Paxos Scope) to capture images of sufficient quality to exclude emergent eye findings. Secondary aims were to assess user and patient experiences during image acquisition, interuser reproducibility, and subjective image quality. Materials and Methods: The system captures images using a macro lens and an indirect ophthalmoscopy lens coupled with an iPhone 5S. We conducted a prospective cohort study of 229 consecutive patients presenting to L. V. Prasad Eye Institute, Hyderabad, India. Primary outcome measure was mean photographic quality (FOTO-ED study 1–5 scale, 5 best). 210 patients and eight users completed surveys assessing comfort and ease of use. For 46 patients, two users imaged the same patient's eyes sequentially. For 182 patients, photos taken with the EyeGo system were compared to images taken by existing clinic cameras: a BX 900 slit-lamp with a Canon EOS 40D Digital Camera and an FF 450 plus Fundus Camera with VISUPAC™ Digital Imaging System. Images were graded post hoc by a reviewer blinded to diagnosis. Results: Nine users acquired 719 useable images and 253 videos of 229 patients. Mean image quality was ≥ 4.0/5.0 (able to exclude subtle findings) for all users. 8/8 users and 189/210 patients surveyed were comfortable with the EyeGo device on a 5-point Likert scale. For 21 patients imaged with the anterior adapter by two users, a weighted κ of 0.597 (95% confidence interval: 0.389–0.806) indicated moderate reproducibility. High level of agreement between EyeGo and existing clinic cameras (92.6% anterior, 84.4% posterior) was found. Conclusion: The novel, ophthalmic imaging system is easily learned by ancillary eye care providers, well tolerated by patients, and captures high-quality images of eye findings. PMID:27146928
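A small illustration of the reproducibility statistic reported above, weighted Cohen's kappa, computed on hypothetical 1-5 image-quality grades assigned by two users to the same eyes; the study's exact weighting scheme is not stated, so linear weights are assumed here.

    from sklearn.metrics import cohen_kappa_score

    grades_user1 = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3]   # hypothetical quality grades, user 1
    grades_user2 = [5, 4, 3, 3, 5, 5, 2, 4, 4, 3]   # hypothetical quality grades, user 2

    kappa = cohen_kappa_score(grades_user1, grades_user2, weights="linear")
    print(f"weighted kappa = {kappa:.3f}")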
Topography changes monitoring of small islands using camera drone
NASA Astrophysics Data System (ADS)
Bang, E.
2017-12-01
Drone aerial photogrammetry was conducted to monitor topography changes of small islands in the east sea of Korea. Severe weather and sea waves are eroding the islands and sometimes cause landslides and rock falls. Because of rugged cliffs in all directions and poor accessibility, ground-based survey methods are less efficient for monitoring topography changes over the whole area. Camera drones can provide digital images and video of every corner of the islands, and drone aerial photogrammetry is a powerful way to obtain a precise digital surface model (DSM) for a limited area. We have acquired a set of digital images to construct a textured 3D model of the project area every year since 2014. Flight height is less than 100 m above the top of the islands to obtain a sufficient ground sampling distance (GSD). Most images were captured vertically with automatic flights, but we also flew drones around the islands with a camera angle of about 30°-45° to construct the 3D model better. Every digital image is geo-referenced, but we also set several ground control points (GCPs) on the islands and measured their coordinates with RTK surveying methods to increase the absolute accuracy of the project. We constructed a textured 3D model using a photogrammetry tool, which generates 3D spatial information from digital images. From the polygonal model, we obtained a DSM with contour lines. Thematic maps such as hill-shade relief, aspect and slope maps were also produced. These maps help us better understand the topographic conditions of the project area. The purpose of this project is to monitor topography changes of these small islands. An elevation difference map between the DSMs of each year was constructed. There are two regions showing large negative difference values. By comparing the constructed textured models and captured digital images around these regions, it was confirmed that one region experienced a real topographic change, caused by a large rock fall near the center of the east island. The size of the fallen rock can be measured exactly on the digital model, and is about 13 m * 6 m * 2 m (height * width * thickness). We believe that drone aerial photogrammetry can be an efficient method for detecting topography changes in complicated terrain areas.
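A minimal sketch of the change-detection step described above: two co-registered DSM grids from successive years are differenced and cells with a large negative elevation change (such as the rock fall) are flagged. The grids, the 2 m threshold and the cell size are illustrative assumptions, not the project's actual data.

    import numpy as np

    dsm_2016 = np.random.rand(500, 500) * 50.0     # placeholder elevation grid (m)
    dsm_2017 = dsm_2016.copy()
    dsm_2017[200:210, 300:306] -= 2.5              # simulated local loss of material

    diff = dsm_2017 - dsm_2016                     # negative values = elevation loss
    loss_mask = diff < -2.0                        # flag large drops only

    cell_area = 0.05 * 0.05                        # assumed 5 cm grid spacing (m^2)
    volume_lost = float(-diff[loss_mask].sum() * cell_area)
    print("cells flagged:", int(loss_mask.sum()), "volume lost (m^3):", volume_lost)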
A 256×256 low-light-level CMOS imaging sensor with digital CDS
NASA Astrophysics Data System (ADS)
Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin
2016-10-01
In order to achieve high sensitivity in low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. Because the pixel and column areas are highly constrained, it is difficult to implement analog correlated double sampling (CDS) to remove the noise in a low-light-level CIS. Therefore, digital CDS is adopted, which performs the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256×256 CIS with a CTIA array and digital CDS is implemented in a 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise with digital CDS is 24 LSB (RMS) under dark conditions, a 7.8× reduction compared to the same sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images at illumination levels down to 0.1 lux.
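A software illustration of the off-chip digital CDS step: the digitized reset frame is subtracted from the digitized signal frame, cancelling the reset (kTC) noise and column offsets that are common to both readings. The noise magnitudes below are placeholders, not the sensor's measured values.

    import numpy as np

    rng = np.random.default_rng(0)
    column_fpn  = rng.normal(0, 8, size=(1, 256))        # fixed column offsets (LSB)
    reset_noise = rng.normal(0, 5, size=(256, 256))      # kTC noise, frozen at reset

    reset_frame  = 512 + column_fpn + reset_noise        # digitized reset level
    signal_frame = reset_frame + 30                      # + photo signal of 30 LSB

    cds_frame = signal_frame - reset_frame               # digital CDS, done off-chip
    print(cds_frame.mean(), cds_frame.std())             # ~30 LSB signal, ~0 spread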
Measuring food intake with digital photography.
Martin, C K; Nicklas, T; Gunturk, B; Correa, J B; Allen, H R; Champagne, C
2014-01-01
The digital photography of foods method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared with images of 'standard' portions of food using computer software. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. In the present review, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a smartphone and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analysed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behaviour and to receive dietary recommendations for achieving weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children is also reviewed. In sum, the body of research reviewed demonstrates that digital imaging accurately estimates food intake in many environments and it has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and the incorporation of computer automation to improve the accuracy, efficiency and cost-effectiveness of the method. © 2013 The British Dietetic Association Ltd.
A new method for digital video documentation in surgical procedures and minimally invasive surgery.
Wurnig, P N; Hollaus, P H; Wurnig, C H; Wolf, R K; Ohtsuka, T; Pridun, N S
2003-02-01
Documentation of surgical procedures is limited to the accuracy of description, which depends on the vocabulary and the descriptive prowess of the surgeon. Even analog video recording could not solve the problem of documentation satisfactorily due to the abundance of recorded material. By capturing the video digitally, most problems are solved in the circumstances described in this article. We developed a cheap and useful digital video capturing system that consists of conventional computer components. Video images and clips can be captured intraoperatively and are immediately available. The system is a commercial personal computer specially configured for digital video capturing and is connected by wire to the video tower. Filming was done with a conventional endoscopic video camera. A total of 65 open and endoscopic procedures were documented in an orthopedic and a thoracic surgery unit. The median number of clips per surgical procedure was 6 (range, 1-17), and the median storage volume was 49 MB (range, 3-360 MB) in compressed form. The median duration of a video clip was 4 min 25 s (range, 45 s to 21 min). Median time for editing a video clip was 12 min for an advanced user (including cutting, title for the movie, and compression). The quality of the clips renders them suitable for presentations. This digital video documentation system allows easy capturing of intraoperative video sequences in high quality. All possibilities of documentation can be performed. With the use of an endoscopic video camera, no compromises with respect to sterility and surgical elbowroom are necessary. The cost is much lower than commercially available systems, and setting changes can be performed easily without trained specialists.
Note: In vivo pH imaging system using luminescent indicator and color camera
NASA Astrophysics Data System (ADS)
Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko
2012-07-01
A microscopic in vivo pH imaging system is developed that can capture both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo. The latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for the excitation-ratio method. The ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
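A minimal sketch of the excitation-ratio step: the ratio of the two luminescent images is mapped to pH through a calibration measured beforehand. The linear calibration coefficients below are purely illustrative, not those of the HPTS system in the paper.

    import numpy as np

    img_ex1 = np.random.rand(128, 128) + 0.5    # luminescence under excitation band 1
    img_ex2 = np.random.rand(128, 128) + 0.5    # luminescence under excitation band 2

    ratio = img_ex1 / img_ex2                   # excitation ratio, per pixel

    # a priori calibration, e.g. a polynomial fit of pH versus ratio
    calib = np.poly1d([0.9, 5.8])               # pH ~ 0.9 * ratio + 5.8 (illustrative)
    pH_map = calib(ratio)
    print(pH_map.min(), pH_map.max())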
Quantitative single-molecule imaging by confocal laser scanning microscopy.
Vukojevic, Vladana; Heidkamp, Marcus; Ming, Yu; Johansson, Björn; Terenius, Lars; Rigler, Rudolf
2008-11-25
A new approach to quantitative single-molecule imaging by confocal laser scanning microscopy (CLSM) is presented. It relies on fluorescence intensity distribution to analyze the molecular occurrence statistics captured by digital imaging and enables direct determination of the number of fluorescent molecules and their diffusion rates without resorting to temporal or spatial autocorrelation analyses. Digital images of fluorescent molecules were recorded by using fast scanning and avalanche photodiode detectors. In this way the signal-to-background ratio was significantly improved, enabling direct quantitative imaging by CLSM. The potential of the proposed approach is demonstrated by using standard solutions of fluorescent dyes, fluorescently labeled DNA molecules, quantum dots, and the Enhanced Green Fluorescent Protein in solution and in live cells. The method was verified by using fluorescence correlation spectroscopy. The relevance for biological applications, in particular, for live cell imaging, is discussed.
Chen, Zhenning; Shao, Xinxing; Xu, Xiangyang; He, Xiaoyuan
2018-02-01
The performance of digital image correlation (DIC), which has been widely used for noncontact deformation measurement in both science and engineering, is strongly affected by the quality of the speckle pattern. This study was concerned with optimizing the digital speckle pattern (DSP) for DIC with respect to both accuracy and efficiency. The root-mean-square error of the inverse compositional Gauss-Newton algorithm and the average number of iterations were used as quality metrics. The influence of subset size and image noise level, which are the basic parameters in the quality assessment formulations, was also considered. The simulated binary speckle patterns were first compared with Gaussian speckle patterns and captured DSPs. Both single-radius and multi-radius DSPs were optimized. Experimental tests and analyses were conducted to obtain the optimized and recommended DSP. The vector diagram of the optimized speckle pattern was also made available as a reference.
Superresolution with the focused plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew
2011-03-01
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
Getting the Bigger Picture With Digital Surveillance
NASA Technical Reports Server (NTRS)
2002-01-01
Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of-the-art surveillance product that uses motion detection for around-the-clock monitoring. AccuTrack captures digitally signed images and transaction data in real-time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new account areas, all from one central location.
1-Million droplet array with wide-field fluorescence imaging for digital PCR.
Hatch, Andrew C; Fisher, Jeffrey S; Tovar, Armando R; Hsieh, Albert T; Lin, Robert; Pentoney, Stephen L; Yang, David L; Lee, Abraham P
2011-11-21
Digital droplet reactors are useful as chemical and biological containers to discretize reagents into picolitre or nanolitre volumes for analysis of single cells, organisms, or molecules. However, most DNA-based assays require processing of samples on the order of tens of microlitres and contain as few as one to as many as millions of fragments to be detected. Presented in this work is a droplet microfluidic platform and fluorescence imaging setup designed to better meet high-throughput and high-dynamic-range needs by integrating multiple high-throughput droplet processing schemes on the chip. The design is capable of generating over 1 million monodisperse, 50 picolitre droplets in 2-7 minutes, which then self-assemble into high-density 3-dimensional sphere-packing configurations in a large viewing chamber for visualization and analysis. The device then undergoes on-chip polymerase chain reaction (PCR) amplification and fluorescence detection to digitally quantify the sample's nucleic acid contents. Wide-field fluorescence images are captured using a low-cost 21-megapixel digital camera and macro lens with an 8-12 cm² field-of-view at 1× to 0.85× magnification, respectively. We demonstrate both end-point and real-time imaging ability to perform on-chip quantitative digital PCR analysis of the entire droplet array. Compared to previous work, this highly integrated design yields a 100-fold increase in the number of on-chip digitized reactors with simultaneous fluorescence imaging for digital PCR-based assays.
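For context, positive-droplet counts are usually converted to a concentration with the generic digital PCR (Poisson) estimator sketched below; this estimator is standard practice in the field rather than something stated in the abstract, and the counts are illustrative.

    import math

    n_droplets = 1_000_000                    # droplets imaged in the array
    n_positive = 86_000                       # fluorescence-positive droplets
    droplet_volume_l = 50e-12                 # 50 picolitres per droplet

    p = n_positive / n_droplets
    lam = -math.log(1.0 - p)                  # mean target copies per droplet
    copies_per_ml = lam / droplet_volume_l / 1000.0
    print(f"lambda = {lam:.4f}, copies/mL = {copies_per_ml:.3e}")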
Automated Meteor Detection by All-Sky Digital Camera Systems
NASA Astrophysics Data System (ADS)
Suk, Tomáš; Šimberová, Stanislava
2017-12-01
We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
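One plausible first step of such a pipeline, sketched below purely as an illustration (the authors' actual detection methods are not reproduced here): difference consecutive all-sky frames and threshold the result to flag transient bright streaks for further analysis.

    import numpy as np

    rng = np.random.default_rng(1)
    frame_prev = rng.normal(100, 5, size=(1024, 1024))     # synthetic sky frames
    frame_curr = frame_prev + rng.normal(0, 5, size=(1024, 1024))
    frame_curr[500, 300:700] += 200.0                      # synthetic meteor trail

    diff = frame_curr - frame_prev
    threshold = diff.mean() + 5.0 * diff.std()
    candidate_pixels = np.argwhere(diff > threshold)
    print("candidate trail pixels:", len(candidate_pixels))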
Hadamard multimode optical imaging transceiver
Cooke, Bradly J; Guenther, David C; Tiee, Joe J; Kellum, Mervyn J; Olivas, Nicholas L; Weisse-Bernstein, Nina R; Judd, Stephen L; Braun, Thomas R
2012-10-30
Disclosed is a method and system for simultaneously acquiring and producing results for multiple image modes using a common sensor without optical filtering, scanning, or other moving parts. The system and method utilize the Walsh-Hadamard correlation detection process (e.g., functions/matrix) to provide an all-binary structure that permits seamless bridging between analog and digital domains. An embodiment may capture an incoming optical signal at an optical aperture, convert the optical signal to an electrical signal, pass the electrical signal through a Low-Noise Amplifier (LNA) to create an LNA signal, pass the LNA signal through one or more correlators where each correlator has a corresponding Walsh-Hadamard (WH) binary basis function, calculate a correlation output coefficient for each correlator as a function of the corresponding WH binary basis function in accordance with Walsh-Hadamard mathematical principles, digitize each correlation output coefficient by passing it through an Analog-to-Digital Converter (ADC), and perform image mode processing on the digitized correlation output coefficients as desired to produce one or more image modes. Some, but not all, potential image modes include: multi-channel access, temporal, range, three-dimensional, and synthetic aperture.
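A discrete software illustration of the Walsh-Hadamard correlation idea (in the patent the correlators operate on the analog LNA signal before digitization, so this shows only the underlying mathematics): a sampled signal is projected onto +1/-1 Walsh-Hadamard basis functions and then recovered from the resulting coefficients.

    import numpy as np
    from scipy.linalg import hadamard

    N = 64
    H = hadamard(N)                        # rows are +1/-1 Walsh-Hadamard functions
    signal = np.random.rand(N)             # placeholder sampled signal

    coeffs = H @ signal                    # one correlation coefficient per function
    reconstructed = (H.T @ coeffs) / N     # inverse transform (H @ H.T = N * I)
    print(np.allclose(reconstructed, signal))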
Bernardo, Theresa M; Malinowski, Robert P
2005-01-01
In this article, advances in the application of medical media to education, clinical care, and research are explored and illustrated with examples, and their future potential is discussed. Impact is framed in terms of the Sloan Consortium's five pillars of quality education: access; student and faculty satisfaction; learning effectiveness; and cost effectiveness. (Hiltz SR, Zhang Y, Turoff M. Studies of effectiveness of learning networks. In Bourne J, Moore J, ed. Elements of Quality Online Education. Needham, MA: Sloan-Consortium, 2002:15-45). The alternatives for converting analog media (text, photos, graphics, sound, video, animations, radiographs) to digital media and direct digital capture are covered, as are options for storing, manipulating, retrieving, and sharing digital collections. Diagnostic imaging is given particular attention, clarifying the difference between computerized radiography and digital radiography and explaining the accepted standard (DICOM) and the advantages of Web PACS. Some novel research applications of medical media are presented.
Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo
2008-01-01
Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634
Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo
2008-07-16
Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
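A minimal sketch of the spot-isolation and centroid step described above, using the difference of the unilluminated and laser-illuminated frames; the reference coordinate and the disparity-to-range constant are hypothetical stand-ins for the design or calibration values.

    import numpy as np

    frame_off = np.random.rand(480, 640) * 50.0
    frame_on  = frame_off.copy()
    frame_on[240:244, 400:404] += 180.0              # synthetic laser spot

    diff = np.clip(frame_on - frame_off, 0, None)    # common pixels cancel out
    mask = diff > 100.0
    ys, xs = np.nonzero(mask)
    weights = diff[mask]
    centroid_x = float((xs * weights).sum() / weights.sum())

    reference_x = 320.0                              # spot location at infinite range
    disparity = centroid_x - reference_x             # pixels
    range_m = 1500.0 / disparity                     # illustrative triangulation constant
    print(f"disparity = {disparity:.1f} px, range ~ {range_m:.2f} m")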
Characterization of a digital camera as an absolute tristimulus colorimeter
NASA Astrophysics Data System (ADS)
Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual
2003-01-01
An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSC) which allows them to be used as tele-colorimeters with CIE-XYZ color output, in cd/m². The spectral characterization consists of the calculation of the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute tristimulus values CIE-XYZ (in cd/m²) under variable and unknown spectroradiometric conditions. Thus, at the first stage, a gray balance has been applied to the RGB digital data to convert them into RGB relative colorimetric values. At the second stage, an algorithm of luminance adaptation versus lens aperture has been inserted into the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the DSC color analysis accuracy indexes, both in a raw state and with the corrections from a linear model of color correction, have been evaluated using the Pointer'86 color reproduction index with the unrelated Hunt'91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
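A simplified sketch of the colorimetric-characterization idea in its most basic linear form: gray-balanced, linear RGB values are mapped to CIE-XYZ by a 3x3 matrix fitted by least squares to measured training patches. The matrix and data below are illustrative and do not reproduce the paper's calibration or its luminance-adaptation step.

    import numpy as np

    rgb_train = np.random.rand(24, 3)                 # gray-balanced linear RGB patches
    M_true = np.array([[0.41, 0.36, 0.18],
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]])
    xyz_train = rgb_train @ M_true.T                  # reference XYZ measurements

    # Least-squares fit of the 3x3 RGB -> XYZ transform.
    M_fit, *_ = np.linalg.lstsq(rgb_train, xyz_train, rcond=None)

    xyz_new = np.random.rand(1, 3) @ M_fit            # apply to a new capture
    print(np.allclose(M_fit, M_true.T, atol=1e-8))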
Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy
NASA Astrophysics Data System (ADS)
Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.
2016-05-01
Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect, which induces geometric distortion in the image if the video camera or the object moves during image acquisition. In digital hologram recording in particular, while the sensor progressively captures each row of the hologram, the interferometric fringes can oscillate due to external vibrations and/or noise even when the object under study remains motionless. The sensor records each hologram row at a different instant of these disturbances. As a final effect, the phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is exemplified with holograms of static microscopic biological objects. The results encourage the adoption of CMOS sensors over CCDs in Digital Holographic Microscopy, owing to their better resolution and lower cost.
Client/server approach to image capturing
NASA Astrophysics Data System (ADS)
Tuijn, Chris; Stokes, Earle
1998-01-01
The diversity of digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define the scan jobs by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction between the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different forms the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.
Smartphone adapters for digital photomicrography
Roy, Somak; Pantanowitz, Liron; Amin, Milon; Seethala, Raja R.; Ishtiaque, Ahmed; Yousem, Samuel A.; Parwani, Anil V.; Cucoranu, Ioan; Hartman, Douglas J.
2014-01-01
Background: Photomicrographs in Anatomic Pathology provide a means of quickly sharing information from a glass slide for consultation, education, documentation and publication. While static image acquisition historically involved the use of a permanently mounted camera unit on a microscope, such cameras may be expensive, need to be connected to a computer, and often require proprietary software to acquire and process images. Another novel approach for capturing digital microscopic images is to use smartphones coupled with the eyepiece of a microscope. Recently, several smartphone adapters have emerged that allow users to attach mobile phones to the microscope. The aim of this study was to test the utility of these various smartphone adapters. Materials and Methods: We surveyed the market for adapters to attach smartphones to the ocular lens of a conventional light microscope. Three adapters (Magnifi, Skylight and Snapzoom) were tested. We assessed the designs of these adapters and their effectiveness at acquiring static microscopic digital images. Results: All adapters facilitated the acquisition of digital microscopic images with a smartphone. The optimal adapter was dependent on the type of phone used. The Magnifi adapters for iPhone were incompatible when using a protective case. The Snapzoom adapter was easiest to use with iPhones and other smartphones even with protective cases. Conclusions: Smartphone adapters are inexpensive and easy to use for acquiring digital microscopic images. However, they require some adjustment by the user in order to optimize focus and obtain good quality images. Smartphone microscope adapters provide an economically feasible method of acquiring and sharing digital pathology photomicrographs. PMID:25191623
Bautista, Pinky A; Yagi, Yukako
2011-01-01
In this paper we introduce a digital staining method for histopathology images captured with an n-band multispectral camera. The method consists of two major processes: enhancement of the original spectral transmittance and transformation of the enhanced transmittance to its target spectral configuration. Enhancement is accomplished by shifting the original transmittance by the scaled difference between the original transmittance and the transmittance estimated with m dominant principal component (PC) vectors; the m PC vectors were determined from the transmittance samples of the background image. Transformation of the enhanced transmittance to the target spectral configuration was done using an nxn transformation matrix, which was derived by applying a least squares method to the enhanced and target spectral training data samples of the different tissue components. Experimental results on the digital conversion of a hematoxylin and eosin (H&E) stained multispectral image to its Masson's trichrome stained (MT) equivalent show the viability of the method.
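A minimal sketch of deriving the n x n transformation matrix by least squares from paired training samples of enhanced (H&E) and target (MT) spectral transmittance; the band count and the spectra below are placeholders.

    import numpy as np

    n_bands, n_samples = 16, 500
    T_enhanced = np.random.rand(n_samples, n_bands)            # enhanced H&E spectra
    W_true = np.eye(n_bands) + 0.05 * np.random.randn(n_bands, n_bands)
    T_target = T_enhanced @ W_true                             # target MT spectra

    # Least-squares estimate of the n x n spectral transformation matrix.
    W_fit, *_ = np.linalg.lstsq(T_enhanced, T_target, rcond=None)

    T_converted = T_enhanced @ W_fit                           # digitally "stained" spectra
    print(np.allclose(W_fit, W_true, atol=1e-6))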
Holostrain system: a powerful tool for experimental mechanics
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1992-09-01
A portable holographic interferometer that can be used to measure displacements and strains in all kinds of mechanical components and structures is described. The holostrain system captures images on a TV camera that detects interference patterns produced by laser illumination. The video signals are digitized. The digitized interferograms are processed by a fast processing system. The outputs of the system are the strains or stresses of the observed mechanical component or structure.
Palmprint Recognition Across Different Devices.
Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming
2012-01-01
In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smart-phones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method. Experimental results show that orientation coding based methods achieved promising recognition performance for PRADD.
Palmprint Recognition across Different Devices
Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming
2012-01-01
In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smart-phones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method. Experimental results show that orientation coding based methods achieved promising recognition performance for PRADD. PMID:22969380
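A minimal sketch of the scale-normalization idea: once the palm width has been measured in pixels, the image is rescaled so that the width maps to a fixed value before feature extraction. The palm-width measurement itself is not shown, and all values below are placeholders.

    import numpy as np
    from PIL import Image

    TARGET_PALM_WIDTH_PX = 400

    gray = (np.random.rand(600, 800) * 255).astype(np.uint8)   # placeholder palm image
    img = Image.fromarray(gray)

    measured_palm_width_px = 520.0                   # hypothetical detected palm width
    scale = TARGET_PALM_WIDTH_PX / measured_palm_width_px

    normalized = img.resize((int(img.width * scale), int(img.height * scale)))
    print(normalized.size)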
Increasing the efficiency of digitization workflows for herbarium specimens.
Tulig, Melissa; Tarnowsky, Nicole; Bevans, Michael; Anthony Kirchgessner; Thiers, Barbara M
2012-01-01
The New York Botanical Garden Herbarium has been databasing and imaging its estimated 7.3 million plant specimens for the past 17 years. Due to the size of the collection, we have been selectively digitizing fundable subsets of specimens, making successive passes through the herbarium with each new grant. With this strategy, the average rate for databasing complete records has been 10 specimens per hour. With 1.3 million specimens databased, this effort has taken about 130,000 hours of staff time. At this rate, to complete the herbarium and digitize the remaining 6 million specimens, another 600,000 hours would be needed. Given the current biodiversity and economic crises, there is neither the time nor money to complete the collection at this rate. Through a combination of grants over the last few years, The New York Botanical Garden has been testing new protocols and tactics for increasing the rate of digitization through combinations of data collaboration, field book digitization, partial data entry and imaging, and optical character recognition (OCR) of specimen images. With the launch of the National Science Foundation's new Advancing Digitization of Biological Collections program, we hope to move forward with larger, more efficient digitization projects, capturing data from larger portions of the herbarium at a fraction of the cost and time.
Increasing the efficiency of digitization workflows for herbarium specimens
Tulig, Melissa; Tarnowsky, Nicole; Bevans, Michael; Anthony Kirchgessner; Thiers, Barbara M.
2012-01-01
Abstract The New York Botanical Garden Herbarium has been databasing and imaging its estimated 7.3 million plant specimens for the past 17 years. Due to the size of the collection, we have been selectively digitizing fundable subsets of specimens, making successive passes through the herbarium with each new grant. With this strategy, the average rate for databasing complete records has been 10 specimens per hour. With 1.3 million specimens databased, this effort has taken about 130,000 hours of staff time. At this rate, to complete the herbarium and digitize the remaining 6 million specimens, another 600,000 hours would be needed. Given the current biodiversity and economic crises, there is neither the time nor money to complete the collection at this rate. Through a combination of grants over the last few years, The New York Botanical Garden has been testing new protocols and tactics for increasing the rate of digitization through combinations of data collaboration, field book digitization, partial data entry and imaging, and optical character recognition (OCR) of specimen images. With the launch of the National Science Foundation’s new Advancing Digitization of Biological Collections program, we hope to move forward with larger, more efficient digitization projects, capturing data from larger portions of the herbarium at a fraction of the cost and time. PMID:22859882
Digital shaded-relief map of Venezuela
Garrity, Christopher P.; Hackley, Paul C.; Urbani, Franco
2004-01-01
The Digital Shaded-Relief Map of Venezuela is a composite of more than 20 tiles of 90 meter (3 arc second) pixel resolution elevation data, captured during the Shuttle Radar Topography Mission (SRTM) in February 2000. The SRTM, a joint project between the National Geospatial-Intelligence Agency (NGA) and the National Aeronautics and Space Administration (NASA), provides the most accurate and comprehensive international digital elevation dataset ever assembled. The 10-day flight mission aboard the U.S. Space Shuttle Endeavour obtained elevation data for about 80% of the world's landmass at 3-5 meter pixel resolution through the use of synthetic aperture radar (SAR) technology. SAR is desirable because it acquires data along continuous swaths, maintaining data consistency across large areas, independent of cloud cover. Swaths were captured at an altitude of 230 km, and are approximately 225 km wide with varying lengths. Rendering of the shaded-relief image required editing of the raw elevation data to remove numerous holes and anomalously high and low values inherent in the dataset. Customized ArcInfo Arc Macro Language (AML) scripts were written to interpolate areas of null values and generalize irregular elevation spikes and wells. Coastlines and major water bodies used as a clipping mask were extracted from 1:500,000-scale geologic maps of Venezuela (Bellizzia and others, 1976). The shaded-relief image was rendered with an illumination azimuth of 315° and an altitude of 65°. A vertical exaggeration of 2X was applied to the image to enhance land-surface features. Image post-processing techniques were accomplished using conventional desktop imaging software.
New concept high-speed and high-resolution color scanner
NASA Astrophysics Data System (ADS)
Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya
2003-05-01
We have developed a new concept high-speed and high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, image data of approximately 12 million pixels can be captured. This high-resolution imaging capability allows various uses such as OCR, color document reading, and document camera applications. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed "face up" on its scan stage, without any special illumination lights. Using Blinkscan, a high-resolution color document can be easily input into a PC at high speed, so a paperless system can be built easily. The device is small and has a small footprint, so it can be set up on an individual desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing. Several hundred Blinkscan units are now shipping, mainly for receptionist operations at banks and securities firms. We describe the high-speed and high-resolution architecture of Blinkscan. By comparing operation time with conventional image capture devices, the advantage of Blinkscan is made clear. We also present an image evaluation for a variety of conditions, such as geometric distortion and non-uniformity of brightness.
Kim, Dong-Keun; Yoo, Sun K; Kim, Sun H
2005-01-01
The instant transmission of radiological images may be important for making rapid clinical decisions about emergency patients. We have examined an instant image transfer system based on a personal digital assistant (PDA) phone with a built-in camera. Images displayed on a picture archiving and communication systems (PACS) monitor can be captured by the camera in the PDA phone directly. Images can then be transmitted from an emergency centre to a remote physician via a wireless high-bandwidth network (CDMA 1 x EVDO). We reviewed the radiological lesions in 10 normal and 10 abnormal cases produced by modalities such as computerized tomography (CT), magnetic resonance (MR) and digital angiography. The images were of 24-bit depth and 1,144 x 880, 1,120 x 840, 1,024 x 768, 800 x 600, 640 x 480 and 320 x 240 pixels. Three neurosurgeons found that for satisfactory remote consultation a minimum size of 640 x 480 pixels was required for CT and MR images and 1,024 x 768 pixels for angiography images. Although higher resolution produced higher clinical satisfaction, it also required more transmission time. At the limited bandwidth employed, higher resolutions could not be justified.
Computational photography with plenoptic camera and light field capture: tutorial.
Lam, Edmund Y
2015-11-01
Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
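A minimal shift-and-add refocusing sketch over an array of sub-aperture views L[u, v, y, x], illustrating the "adjust the focus after capture" idea in its simplest discrete form; the light-field array and the shift scale alpha are placeholders, and real implementations use sub-pixel interpolation rather than integer rolls.

    import numpy as np

    U = V = 5
    H = W = 64
    L = np.random.rand(U, V, H, W)           # hypothetical sub-aperture images

    def refocus(L, alpha):
        """Shift each view in proportion to its (u, v) offset and average."""
        U, V, H, W = L.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round(alpha * (u - U // 2)))
                dx = int(round(alpha * (v - V // 2)))
                out += np.roll(L[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)

    print(refocus(L, alpha=1.0).shape)       # refocused image at one focal depth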
Cryo-Scanning Electron Microscopy of Captured Cirrus Ice Particles
NASA Astrophysics Data System (ADS)
Magee, N. B.; Boaggio, K.; Bandamede, M.; Bancroft, L.; Hurler, K.
2016-12-01
We present the latest collection of high-resolution cryo-scanning electron microscopy images and microanalysis of cirrus ice particles captured by high-altitude balloon (ICE-Ball, see abstracts by K. Boaggio and M. Bandamede). Ice particle images and sublimation residues are derived from particles captured during approximately 15 balloon flights conducted in Pennsylvania and New Jersey over the past 12 months. Measurements include 3D digital elevation model reconstructions of ice particles, and associated statistical analyses of entire particles and particle sub-facets and surfaces. This 3D analysis reveals that the morphologies of most captured ice particles deviate significantly from ideal habits, and display geometric complexity and surface roughness at multiple measurable scales, ranging from hundreds of nanometers to hundreds of microns. The presentation suggests a potential path forward for representing scattering from a realistically complex array of ice particle shapes and surfaces.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces, and the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated using the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera at different poses.
Focusing and depth of field in photography: application in dermatology practice.
Taheri, Arash; Yentzer, Brad A; Feldman, Steven R
2013-11-01
Conventional photography obtains a sharp image of objects within a given 'depth of field'; objects not within the depth of field are out of focus. In recent years, digital photography revolutionized the way pictures are taken, edited, and stored. However, digital photography does not result in a deeper depth of field or better focusing. In this article, we briefly review the concept of depth of field and focus in photography as well as new technologies in this area. A deep depth of field is used to have more objects in focus; a shallow depth of field can emphasize a subject by blurring the foreground and background objects. The depth of field can be manipulated by adjusting the aperture size of the camera, with smaller apertures increasing the depth of field at the cost of lower levels of light capture. Light-field cameras are a new generation of digital cameras that offer several new features, including the ability to change the focus on any object in the image after taking the photograph. Understanding depth of field and camera technology helps dermatologists to capture their subjects in focus more efficiently. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
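As a rough quantitative illustration of the aperture trade-off described above (a standard thin-lens approximation, not taken from the article), the total depth of field for a subject at distance u, focal length f, f-number N, and acceptable circle of confusion c is approximately:

    % Approximate total depth of field, valid when the subject distance u is
    % much larger than the focal length f (standard photographic approximation)
    \mathrm{DOF} \approx \frac{2 N c\, u^{2}}{f^{2}}

Doubling the f-number N therefore roughly doubles the depth of field, at the cost of capturing about one quarter of the light.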
Quality and noise measurements in mobile phone video capture
NASA Astrophysics Data System (ADS)
Petrescu, Doina; Pincenti, John
2011-02-01
The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.
Park, Keunyeol; Song, Minkyu; Kim, Soo Youn
2018-02-24
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit, edge-detected image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
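The XOR-based edge detection idea can be illustrated in software: XOR-ing a binarized image with a one-pixel-shifted copy leaves 1s only where neighboring pixels differ. The NumPy sketch below is my own illustration of that principle, not the paper's on-chip circuit.

```python
# A minimal software sketch of XOR edge detection on single-bit image data.
import numpy as np

def xor_edges(binary_img: np.ndarray) -> np.ndarray:
    """binary_img: 2D array of 0/1 values; returns a 0/1 edge map."""
    horiz = np.zeros_like(binary_img)
    vert = np.zeros_like(binary_img)
    horiz[:, 1:] = binary_img[:, 1:] ^ binary_img[:, :-1]   # transitions between left/right neighbors
    vert[1:, :] = binary_img[1:, :] ^ binary_img[:-1, :]    # transitions between up/down neighbors
    return horiz | vert

# Toy example: a bright square on a dark background leaves only its outline
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 1
print(xor_edges(img))
```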
2012-11-08
S48-E-013 (15 Sept 1991) --- The Upper Atmosphere Research Satellite (UARS) in the payload bay of the earth-orbiting Discovery. UARS is scheduled for deployment on flight day three of the STS-48 mission. Data from UARS will enable scientists to study ozone depletion in the stratosphere, or upper atmosphere. This image was transmitted by the Electronic Still Camera (ESC), Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE.
Jafarian, Amir Hossein; Tasbandi, Aida; Mohamadian Roshan, Nema
2018-04-19
The aim of this study is to investigate digital image analysis of pleural effusion cytology samples and compare the results with conventional modalities. In this cross-sectional study, 53 pleural fluid cytology smears from the Qaem hospital pathology department, located in Mashhad, Iran, were investigated. Prior to digital analysis, all specimens were evaluated by two pathologists and categorized into three groups: benign, suspicious, and malignant. Digital images of the cytology slides were captured using an Olympus microscope and an Olympus DP3 digital camera. Appropriate images (n = 130) were separately imported into Adobe Photoshop CS5, and parameters including area, perimeter, circularity, gray value mean, integrated density, and nucleus-to-cytoplasm area ratio were analyzed. Gray value mean, nucleus-to-cytoplasm area ratio, and circularity showed the best sensitivity and specificity rates as well as significant differences between all groups. Nucleus area and perimeter also differed significantly between the benign group and both the suspicious and malignant groups, whereas there was no such difference between the suspicious and malignant groups. We conclude that digital image analysis is a welcome addition to research on pleural fluid smears, as it provides quantitative data for various comparisons and reduces interobserver variation, which could assist pathologists in achieving a more accurate diagnosis. © 2018 Wiley Periodicals, Inc.
Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror
2016-10-01
To evaluate the image quality generated by eight commercially available intraoral sensors. Eighteen clinicians ranked the quality of a bitewing acquired from one subject using eight different intraoral sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than the Sirona and Carestream-Kodak sensors, and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using the Dexis, Schick, and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons were non-significant. None of the sensors was considered to generate images of significantly better quality than all of the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trease, Lynn L.; Trease, Harold E.; Fowler, John
2007-03-15
One of the critical steps toward performing computational biology simulations using mesh-based integration methods is using topologically faithful geometry, derived from experimental digital image data, as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional; therefore, the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features with an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. "Quantitative" image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen"-quality images. For example, the surface geometry that we extract must represent a closed, water-tight surface.
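The isosurfacing step in the pipeline above can be sketched with an off-the-shelf marching-cubes routine; the snippet below is a minimal illustration (assuming scikit-image and a synthetic segmented data-cube), not the authors' tools.

```python
# A minimal sketch of extracting a triangle mesh from a segmented volumetric data-cube.
import numpy as np
from skimage import measure

# Synthetic "feature": a sphere embedded in a 64^3 data-cube
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = (x**2 + y**2 + z**2 <= 0.5**2).astype(np.float32)

# Extract the isosurface at the 0.5 level; verts/faces define the surface mesh
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```

For a closed feature such as this sphere, the extracted surface is water-tight, which is the property the abstract highlights as a requirement for mesh generation.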
Investigation of sparsity metrics for autofocusing in digital holographic microscopy
NASA Astrophysics Data System (ADS)
Fan, Xin; Healy, John J.; Hennelly, Bryan M.
2017-05-01
Digital holographic microscopy (DHM) is an optoelectronic technique that is made up of two parts: (i) the recording of the interference pattern of the diffraction pattern of an object and a known reference wavefield using a digital camera and (ii) the numerical reconstruction of the complex object wavefield using the recorded interferogram and a distance parameter as input. The latter is based on the simulation of optical propagation from the camera plane to a plane at any arbitrary distance from the camera. A key advantage of DHM over conventional microscopy is that both the phase and intensity information of the object can be recovered at any distance, using only one capture, and this facilitates the recording of scenes that may change dynamically and that may otherwise go in and out of focus. Autofocusing using traditional microscopy requires mechanical movement of the translation stage or the microscope objective, and multiple image captures that are then compared using some metric. Autofocusing in DHM is similar, except that the sequence of intensity images, to which the metric is applied, is generated numerically from a single capture. We recently investigated the application of a number of sparsity metrics for DHM autofocusing and in this paper we extend this work to include more such metrics, and apply them over a greater range of biological diatom cells and magnification/numerical apertures. We demonstrate for the first time that these metrics may be grouped together according to matching behavior following high pass filtering.
Simulated Lidar Images of Human Pose using a 3DS Max Virtual Laboratory
2015-12-01
developed in Autodesk 3DS Max, with an animated, biofidelic 3D human mesh biped character (avatar) as the subject. The biped animation modifies the digital human model through a time sequence of motion capture data representing an... AFB. Mr. Isiah Davenport from Infoscitex Corp developed the method for creating the biofidelic avatars from laboratory data and 3DS Max code for...
A digital rat atlas of sectional anatomy
NASA Astrophysics Data System (ADS)
Yu, Li; Liu, Qian; Bai, Xueling; Liao, Yinping; Luo, Qingming; Gong, Hui
2006-09-01
This paper describes a digital rat atlas of sectional anatomy made by milling. Two healthy Sprague-Dawley (SD) rats weighing 160-180 g were used for the generation of this atlas. The rats were depilated completely and then euthanized with CO2; one was prepared via vascular perfusion, the other was frozen directly at -85 °C for over 24 hours. After that, the frozen specimens were transferred into iron molds for embedding. A 3% gelatin solution colored blue was used to fill the molds, which were then frozen at -85 °C for one or two days. The frozen specimen blocks were subsequently sectioned on a cryosection-milling machine in a plane oriented approximately transverse to the long axis of the body. The surface of each specimen block was imaged by a scanner and digitized into a 4,600 × 2,580 × 24-bit array by a computer. In total, 9,475 sectional images (arterial vessels not perfused) and 1,646 sectional images (arterial vessels perfused) were captured, bringing the volume of the digital atlas to 369.35 Gbyte. This digital atlas covers the whole rat, and the rat arterial vessels are also presented. We have reconstructed this atlas. The information from the two-dimensional (2-D) images of serial sections and the three-dimensional (3-D) surface model shows that the digital rat atlas we constructed is of high quality. This work lays the foundation for deeper study of the digital rat.
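A back-of-the-envelope check (my own arithmetic, not from the paper) shows the quoted data volume is consistent with the section count and image format given above.

```python
# Rough consistency check of the quoted atlas volume.
sections = 9475 + 1646                       # perfused + non-perfused sectional images
bytes_per_section = 4600 * 2580 * 3          # 24 bit = 3 bytes per pixel
total_gib = sections * bytes_per_section / 1024**3
print(f"{total_gib:.1f} GiB")                # about 369 GiB, matching the quoted 369.35 Gbyte
```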
NASA Astrophysics Data System (ADS)
Cline, Julia Elaine
2011-12-01
Ultra-high temperature deformation measurements are required to characterize the thermo-mechanical response of material systems for thermal protection systems in aerospace applications. The use of conventional surface-contacting strain measurement techniques is not practical in elevated temperature conditions. Technological advancements in digital imaging provide the impetus to measure full-field displacements and determine strain fields with sub-pixel accuracy by image processing. In this work, an Instron electromechanical axial testing machine with a custom-designed high-temperature gripping mechanism is used to apply quasi-static tensile loads to graphite specimens heated to 2000°F (1093°C). Specimen heating via the Joule effect is achieved and maintained with a custom-designed temperature control system. Images are captured at monotonically increasing load levels throughout the test duration using an 18-megapixel Canon EOS Rebel T2i digital camera with a modified Schneider Kreuznach telecentric lens and a combination of blue-light illumination and a narrow band-pass filter. Images are processed using an open-source Matlab-based digital image correlation (DIC) code. Validation of the source code is performed using Mathematica-generated images with specified, known displacement fields in order to gain confidence in the software's tracking accuracy. Room temperature results are compared with extensometer readings. Ultra-high temperature strain measurements for graphite are obtained at low load levels, demonstrating the potential for non-contacting digital image correlation techniques to accurately determine full-field strain measurements at ultra-high temperature. Recommendations are given to improve the experimental set-up to achieve displacement field measurements accurate to 1/10 of a pixel and strain field accuracy better than 2%.
Digital PIV (DPIV) Software Analysis System
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A software package was developed to provide a Digital PIV (DPIV) capability for NASA LaRC. The system provides automated image capture, test correlation, and autocorrelation analysis capabilities for the Kodak Megaplus 1.4 digital camera system for PIV measurements. The package includes three separate programs that, when used together with the PIV data validation algorithm, constitute a complete DPIV analysis capability. The programs are run on an IBM PC/AT host computer running either Microsoft Windows 3.1 or Windows 95, using a 'quickwin' format that provides a simple user interface and output capabilities in the Windows environment.
Rectification of curved document images based on single view three-dimensional reconstruction.
Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang
2016-10-01
Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role in document digitization systems that use a camera for image capture. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text lines, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.
Image analysis of ocular fundus for retinopathy characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Cuadros, Jorge
2010-02-05
Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (the DRIVE database) and in hundreds of eye fundus images with non-macula-centered views and nonuniform illumination from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2) using deformable contours. Preliminary results show accurate segmentation of vessels and a high rate of true-positive microaneurysm detections.
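A minimal sketch of the kind of morphological enhancement mentioned above (my illustration, not the authors' code): a black top-hat with a structuring element wider than the vessels highlights thin dark structures in the green channel of a fundus image. The input file name and thresholding choice are hypothetical.

```python
# A minimal sketch of morphological vessel enhancement in a fundus image.
import numpy as np
from skimage import io, filters, morphology

img = io.imread("fundus.png")                  # hypothetical input file
green = img[..., 1] if img.ndim == 3 else img  # vessels show best contrast in the green channel
green = green.astype(np.float32) / 255.0

# Black top-hat: emphasizes dark structures thinner than the disk radius
enhanced = morphology.black_tophat(green, morphology.disk(8))
vessel_mask = enhanced > filters.threshold_otsu(enhanced)   # crude vessel segmentation
print(f"vessel pixels: {int(vessel_mask.sum())}")
```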
Tug-of-war lacunarity—A novel approach for estimating lacunarity
NASA Astrophysics Data System (ADS)
Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut
2016-11-01
Modern instrumentation provides us with massive repositories of digital images that will likely only grow in the future. Therefore, it has become increasingly important to automate the analysis of digital images, e.g., with methods from pattern recognition. These methods aim to characterize the visual appearance of captured textures with quantitative measures. Lacunarity is a useful multi-scale measure of a texture's heterogeneity, but it demands high computational effort. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real-world sample images and found that the investigated approach is able to estimate lacunarity with low uncertainties. We conclude that the proposed method combines low computational effort with high accuracy, and that its application may have utility in the analysis of high-resolution images.
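For reference, standard gliding-box lacunarity, the multi-pass quantity that the tug-of-war algorithm approximates, can be written as Lambda(r) = E[M^2] / E[M]^2, where M is the mass inside a box of side r. The sketch below is a conventional implementation for comparison, not the paper's single-pass estimator.

```python
# A minimal sketch of gliding-box lacunarity on a binary texture.
import numpy as np

def gliding_box_lacunarity(binary_img: np.ndarray, box_size: int) -> float:
    """Lambda(r) = E[M^2] / E[M]^2 over all box positions, M = box mass."""
    # Integral image makes each box sum O(1)
    ii = np.pad(binary_img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    r = box_size
    masses = (ii[r:, r:] - ii[:-r, r:] - ii[r:, :-r] + ii[:-r, :-r]).ravel()
    return float((masses**2).mean() / masses.mean()**2)

rng = np.random.default_rng(0)
texture = (rng.random((256, 256)) < 0.2).astype(np.int64)    # synthetic sparse texture
for r in (2, 8, 32):
    print(r, gliding_box_lacunarity(texture, r))              # lacunarity decreases with scale
```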
High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
2006-10-01
Information processing and communication technologies are progressing quickly and are spreading throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are stored electronically by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. In this way, we have succeeded in creating high-quality and small-capacity (HQ/SC) video-on-demand educational content with the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.
Data management and digital delivery of analog data
Miller, W.A.; Longhenry, Ryan; Smith, T.
2008-01-01
The U.S. Geological Survey's (USGS) data archive at the Earth Resources Observation and Science (EROS) Center is a comprehensive and impartial record of the Earth's changing land surface. USGS/EROS has been archiving and preserving land remote sensing data for over 35 years. This remote sensing archive continues to grow as aircraft and satellites acquire more imagery. As a world leader in preserving data, USGS/EROS has a reputation as a technological innovator in solving challenges and ensuring that access to these collections is available. Other agencies also call on the USGS to consider their collections for long-term archive support. To improve access to the USGS film archive, each frame on every roll of film is being digitized by automated high-performance digital camera systems. The system robotically captures a digital image from each film frame for the creation of browse and medium-resolution image files. Single-frame metadata records are also created to improve access, which otherwise requires interpreting flight indexes. USGS/EROS is responsible for over 8.6 million frames of aerial photographs and 27.7 million satellite images.
Stevenson, Paul; Finnane, Anna R; Soyer, H Peter
2016-03-21
Capturing clinical images is becoming more prevalent in everyday clinical practice, and dermatology lends itself to the use of clinical photographs and teledermatology. "Store-and-forward", whereby clinical images are forwarded to a specialist who later responds with an opinion on diagnosis and management is a popular form of teledermatology. Store-and-forward teledermatology has proven accurate and reliable, accelerating the process of diagnosis and treatment and improving patient outcomes. Practitioners' personal smartphones and other devices are often used to capture and communicate clinical images. Patient privacy can be placed at risk with the use of this technology. Practitioners should obtain consent for taking images, explain how they will be used, apply appropriate security in their digital communications, and delete images and other data on patients from personal devices after saving these to patient health records. Failing to use appropriate security precautions poses an emerging medico-legal risk for practitioners.
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a preprocessing step, we filter all possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching, and triangulation, the 3D point clouds and DSMs are obtained. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model, and a median filter is applied to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, completeness, and robustness are evaluated. The results show that the point cloud reconstructs the surface including small structures and that the fused DSM generated by our pipeline is accurate and robust.
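The fusion step can be sketched as a per-pixel median over the stack of co-registered pairwise DSMs; the snippet below is a simplified illustration of that idea with toy values, not the authors' processing chain.

```python
# A minimal sketch of per-pixel median fusion of aligned DSMs.
import numpy as np

def fuse_dsms(dsm_stack: np.ndarray) -> np.ndarray:
    """dsm_stack: (n_pairs, H, W) array of aligned DSMs, NaN where a pair has no data.
    Returns the per-pixel median height, ignoring missing values."""
    return np.nanmedian(dsm_stack, axis=0)

# Toy example: three 2x2 DSMs with one blunder (30.0) and one missing value
dsms = np.array([[[10.0, 11.0], [12.0, 13.0]],
                 [[10.2, 10.9], [30.0, 13.1]],
                 [[ 9.8, np.nan], [12.1, 12.9]]])
print(fuse_dsms(dsms))
```

The median is robust to isolated matching blunders in individual pairwise DSMs, which is why it is a common choice for this kind of multi-pair fusion.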
Vector-Based Ground Surface and Object Representation Using Cameras
2009-12-01
representations and it is a digital data structure used for the representation of a ground surface in geographical information systems (GIS). Figure... Vision API library, and the OpenCV library. Also, the Posix thread library was utilized to quickly capture the source images from cameras. Both...
Biwasaka, Hitoshi; Saigusa, Kiyoshi; Aoki, Yasuhiro
2005-03-01
In this study, the applicability of holography to the 3-dimensional recording of forensic objects such as skulls and mandibulae, and the accuracy of the reconstructed 3-D images, were examined. The virtual holographic image, which records the 3-dimensional data of the original object, is observed visually on the other side of the holographic plate and reproduces the 3-dimensional shape of the object well. Another type of holographic image, the real image, is focused on a frosted glass screen, and cross-sectional images of the object can be observed. When measuring the distances between anatomical reference points using image-processing software, the average deviations in the holographic images as compared to the actual objects were less than 0.1 mm. Therefore, holography could be useful as a 3-dimensional recording method for forensic objects. Two superimposition systems using holographic images were examined. In the 2D-3D system, the transparent virtual holographic image of an object is directly superimposed onto the digitized photograph of the same object on the LCD monitor. In the video system, on the other hand, the holographic image captured by the CCD camera is superimposed onto the digitized photographic image using a personal computer. We found that the discrepancy between the outlines of the superimposed holographic and photographic dental images using the video system was smaller than that using the 2D-3D system. Holography seemed to perform comparably to the computer graphics system; however, a fusion with digital techniques would expand the utility of holography in superimposition.
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality becomes very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through the meaningful combination, via information fusion, of images captured from different sensors or under different conditions. Here we specifically address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, and one more perceptible to humans and machines, is created from a time series of low-quality images based on image registration between different video frames.
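A minimal sketch of wavelet-domain fusion of two co-registered frames is given below (assuming PyWavelets is available); the max-absolute detail rule used here is a common generic choice, not necessarily the authors' exact rule.

```python
# A minimal sketch of wavelet-transform image fusion of two registered frames.
import numpy as np
import pywt

def fuse_pair(img_a: np.ndarray, img_b: np.ndarray, wavelet="db2", level=3) -> np.ndarray:
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                                   # average the approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)    # keep the stronger detail coefficient
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(1)
scene = rng.random((128, 128))
frame_a = scene + 0.05 * rng.standard_normal(scene.shape)             # two noisy observations of one scene
frame_b = scene + 0.05 * rng.standard_normal(scene.shape)
print(fuse_pair(frame_a, frame_b).shape)
```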
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary widely. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, above all, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras brings a new dimension to these quality factors, and new quality features also need to be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors which are still valid for presence capture cameras and assesses their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.
Multiresolution image registration in digital x-ray angiography with intensity variation modeling.
Nejati, Mansour; Pourghassem, Hossein
2014-02-01
Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly for the coronary location, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local search in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large and small scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
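The interpolation step can be sketched with SciPy's thin-plate-spline interpolator: sparse block-wise displacements are turned into a dense displacement field. The block positions and shifts below are synthetic placeholders; the block matching and intensity-variation modeling described above are omitted.

```python
# A minimal sketch of thin-plate spline interpolation of sparse block displacements.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(7)
block_centers = rng.uniform(0, 512, size=(40, 2))    # (y, x) centers of matched sub-image blocks
block_shifts = rng.normal(0, 2.0, size=(40, 2))      # local (dy, dx) found by the block search

tps = RBFInterpolator(block_centers, block_shifts,
                      kernel="thin_plate_spline", smoothing=1.0)

yy, xx = np.mgrid[0:512:64j, 0:512:64j]              # dense grid (subsampled here for speed)
dense = tps(np.column_stack([yy.ravel(), xx.ravel()])).reshape(64, 64, 2)
print(dense.shape)                                   # per-pixel displacement used to warp the mask image
```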
Advanced Prosthetic Gait Training Tool
2015-12-01
motion capture sequences was provided by MPL to CCAD and OGAL. CCAD's work focused on imposing these sequences on the Santos™ digital human avatar... manipulating the avatar image. These manipulations are accomplished in the context of reinforcing what is the more ideal position and relating... focus on the visual environment by asking users to manipulate a static image of the Santos avatar to represent their perception of what they observe...
Crystal surface analysis using matrix textural features classified by a probabilistic neural network
NASA Astrophysics Data System (ADS)
Sawyer, Curry R.; Quach, Viet; Nason, Donald; van den Berg, Lodewijk
1991-12-01
A system is under development in which surface quality of a growing bulk mercuric iodide crystal is monitored by video camera at regular intervals for early detection of growth irregularities. Mercuric iodide single crystals are employed in radiation detectors. A microcomputer system is used for image capture and processing. The digitized image is divided into multiple overlapping sub-images and features are extracted from each sub-image based on statistical measures of the gray tone distribution, according to the method of Haralick. Twenty parameters are derived from each sub-image and presented to a probabilistic neural network (PNN) for classification. This number of parameters was found to be optimal for the system. The PNN is a hierarchical, feed-forward network that can be rapidly reconfigured as additional training data become available. Training data is gathered by reviewing digital images of many crystals during their growth cycle and compiling two sets of images, those with and without irregularities.
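A minimal sketch of extracting gray-tone co-occurrence (Haralick-style) texture features from overlapping sub-images is shown below, using scikit-image; the paper derives 20 statistics per sub-image and feeds them to a PNN, whereas this illustration computes only a few standard properties on synthetic data.

```python
# A minimal sketch of Haralick-style texture features on overlapping sub-images.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def subimage_features(img_u8: np.ndarray, win=64, step=32):
    """Yield (row, col, feature_vector) for overlapping win x win sub-images."""
    props = ("contrast", "homogeneity", "energy", "correlation")
    for r in range(0, img_u8.shape[0] - win + 1, step):
        for c in range(0, img_u8.shape[1] - win + 1, step):
            sub = img_u8[r:r + win, c:c + win]
            glcm = graycomatrix(sub, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            feats = np.concatenate([graycoprops(glcm, p).ravel() for p in props])
            yield r, c, feats

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # stand-in for a video frame
for r, c, f in subimage_features(frame):
    pass   # in the paper, each feature vector is classified by a probabilistic neural network
```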
Optomechanical System Development of the AWARE Gigapixel Scale Camera
NASA Astrophysics Data System (ADS)
Son, Hui S.
Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.
High-Speed Binary-Output Image Sensor
NASA Technical Reports Server (NTRS)
Fossum, Eric; Panicacci, Roger A.; Kemeny, Sabrina E.; Jones, Peter D.
1996-01-01
Photodetector outputs digitized by circuitry on same integrated-circuit chip. Developmental special-purpose binary-output image sensor designed to capture up to 1,000 images per second, with resolution greater than 10^6 pixels per image. Lower-resolution but higher-frame-rate prototype of sensor contains 128 x 128 array of photodiodes on complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. In application for which it is being developed, sensor used to examine helicopter oil to determine whether amount of metal and sand in oil sufficient to warrant replacement.
A novel weighted-direction color interpolation
NASA Astrophysics Data System (ADS)
Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng
2013-08-01
A digital camera captures images by covering the sensor surface with a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge-adaptive interpolation with different weighting factors is proposed. Our method can effectively suppress undesirable artifacts. Experimental results based on Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
Integrated clinical workstations for image and text data capture, display, and teleconsultation.
Dayhoff, R; Kuzmak, P M; Kirin, G
1994-01-01
The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway.
NASA Astrophysics Data System (ADS)
Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.
2012-07-01
Stereovision-based mobile mapping systems enable the efficient capture of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3D mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3D mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks, or image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS-based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy to the project requirements.
High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral-shuttering, where a high-speed image sequence is captured using short duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of good quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.
FBI Fingerprint Image Capture System High-Speed-Front-End throughput modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rathke, P.M.
1993-09-01
The Federal Bureau of Investigation (FBI) has undertaken a major modernization effort called the Integrated Automated Fingerprint Identification System (IAFIS). This system will provide centralized identification services using automated fingerprint, subject descriptor, mugshot, and document processing. A high-speed Fingerprint Image Capture System (FICS) is under development as part of the IAFIS program. The FICS will capture digital and microfilm images of FBI fingerprint cards for input into a central database. One FICS design supports two front-end scanning subsystems, known as the High-Speed-Front-End (HSFE) and Low-Speed-Front-End, to supply image data to a common data processing subsystem. The production rate of the HSFE is critical to meeting the FBI's fingerprint card processing schedule. A model of the HSFE has been developed to help identify the issues driving the production rate, assist in the development of component specifications, and guide the evolution of an operations plan. A description of the model development is given, the assumptions are presented, and some HSFE throughput analysis is performed.
Electronic Still Camera image of Astronaut Claude Nicollier working with RMS
1993-12-05
S61-E-006 (5 Dec 1993) --- The robot arm control work of Swiss scientist Claude Nicollier was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. With the mission specialist's assistance, Endeavour's crew captured the Hubble Space Telescope (HST) on December 4, 1993. Four of the seven crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Kinect-based sign language recognition of static and dynamic hand movements
NASA Astrophysics Data System (ADS)
Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.
2017-02-01
A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and the Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers also discuss factors they encountered that caused some misclassification of signs.
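The template-matching core can be sketched with scikit-image's normalized cross-correlation; the template set, image sizes, and data below are hypothetical stand-ins for the study's database.

```python
# A minimal sketch of normalized cross-correlation template matching against a template database.
import numpy as np
from skimage.feature import match_template

def best_match(query: np.ndarray, templates: dict) -> tuple:
    """Return (label, peak_score) of the template with the highest NCC peak in the query frame."""
    scores = {label: match_template(query, tmpl).max() for label, tmpl in templates.items()}
    label = max(scores, key=scores.get)
    return label, scores[label]

rng = np.random.default_rng(3)
letters = {"A": rng.random((32, 32)), "B": rng.random((32, 32))}   # hypothetical fingerspelling templates
frame = rng.random((64, 64))
frame[10:42, 10:42] = letters["A"] + 0.05 * rng.standard_normal((32, 32))   # noisy copy of "A"
print(best_match(frame, letters))
```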
Introduction to Color Imaging Science
NASA Astrophysics Data System (ADS)
Lee, Hsien-Che
2005-04-01
Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.
CytometryML and other data formats
NASA Astrophysics Data System (ADS)
Leif, Robert C.
2006-02-01
Cytology automation and research will be enhanced by the creation of a common data format. This data format would provide the pathology and research communities with a uniform way for annotating and exchanging images, flow cytometry, and associated data. This specification and/or standard will include descriptions of the acquisition device, staining, the binary representations of the image and list-mode data, the measurements derived from the image and/or the list-mode data, and descriptors for clinical/pathology and research. An international, vendor-supported, non-proprietary specification will allow pathologists, researchers, and companies to develop and use image capture/analysis software, as well as list-mode analysis software, without worrying about incompatibilities between proprietary vendor formats. Presently, efforts to create specifications and/or descriptions of these formats include the Laboratory Digital Imaging Project (LDIP) Data Exchange Specification; extensions to the Digital Imaging and Communications in Medicine (DICOM); Open Microscopy Environment (OME); Flowcyt, an extension to the present Flow Cytometry Standard (FCS); and CytometryML. The feasibility of creating a common data specification for digital microscopy and flow cytometry in a manner consistent with its use for medical devices and interoperability with both hospital information and picture archiving systems has been demonstrated by the creation of the CytometryML schemas. The feasibility of creating a software system for digital microscopy has been demonstrated by the OME. CytometryML consists of schemas that describe instruments and their measurements. These instruments include digital microscopes and flow cytometers. Optical components including the instruments' excitation and emission parts are described. The description of the measurements made by these instruments includes the tagged molecule, data acquisition subsystem, and the format of the list-mode and/or image data. Many of the CytometryML data-types are based on the Digital Imaging and Communications in Medicine (DICOM). Binary files for images and list-mode data have been created and read.
Effect of sway on image fidelity in whole-body digitizing
NASA Astrophysics Data System (ADS)
Corner, Brian D.; Hu, Anmin
1998-03-01
For 3D digitizers to be useful data collection tools in scientific and human factors engineering applications, the models created from scan data must match the original object very closely. Factors such as ambient light, characteristics of the object's surface, and object movement, among others, can affect the quality of the image produced by any 3D digitizing system. Recently, Cyberware has developed a whole-body digitizer for collecting data on human size and shape. With a digitizing time of about 15 seconds, the effect of subject movement, or sway, on model fidelity is an important issue to be addressed. The effect of sway is best measured by comparing the dimensions of an object of known geometry to the model of the same object captured by the digitizer. Since it is difficult to know the geometry of a human body accurately, it was decided to compare an object of simple geometry to its digitized counterpart. Preliminary analysis showed that a single cardboard tube would provide the best artifact for detecting sway. A tube was attached to the subject using supports that allowed the cylinder to stand away from the body. The stand-off was necessary to minimize occluded areas. Multiple scans were taken of one subject, and the cylinder was extracted from the images. Comparison of the actual cylinder dimensions to those extracted from the whole-body images found the effect of sway to be minimal. This follows earlier findings that anthropometric dimensions extracted from whole-body scans are very close to the same dimensions measured using standard manual methods. Recommendations for subject preparation and stabilization are discussed.
NASA Astrophysics Data System (ADS)
Goiffon, Vincent; Rolando, Sébastien; Corbière, Franck; Rizzolo, Serena; Chabane, Aziouz; Girard, Sylvain; Baer, Jérémy; Estribeau, Magali; Magnan, Pierre; Paillet, Philippe; Van Uffelen, Marco; Mont Casellas, Laura; Scott, Robin; Gaillardin, Marc; Marcandella, Claude; Marcelot, Olivier; Allanche, Timothé
2017-01-01
The Total Ionizing Dose (TID) hardness of digital color Camera-on-a-Chip (CoC) building blocks is explored in the Multi-MGy range using 60Co gamma-ray irradiations. The performances of the following CoC subcomponents are studied: radiation hardened (RH) pixel and photodiode designs, RH readout chain, Color Filter Arrays (CFA) and column RH Analog-to-Digital Converters (ADC). Several radiation hardness improvements are reported (on the readout chain and on dark current). CFAs and ADCs degradations appear to be very weak at the maximum TID of 6 MGy(SiO2), 600 Mrad. In the end, this study demonstrates the feasibility of a MGy rad-hard CMOS color digital camera-on-a-chip, illustrated by a color image captured after 6 MGy(SiO2) with no obvious degradation. An original dark current reduction mechanism in irradiated CMOS Image Sensors is also reported and discussed.
Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico
2014-06-16
We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s.
Image analysis for microelectronic retinal prosthesis.
Hallum, L E; Cloherty, S L; Lovell, N H
2008-01-01
By way of extracellular stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete, luminous spots, so-called phosphenes, in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part thereof, a digital camera, or some other photosensitive array, captures frames, the frames are analyzed, and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method that allows the assessment of image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers and for their perceptual errors of omission and commission.
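A minimal sketch (my illustration, not the authors' exact formulation) of scoring an image-analysis scheme by the mutual information between an input image and the low-resolution phosphene rendering it produces, using a simple histogram estimate of I(X;Y):

```python
# A minimal sketch of a histogram-based mutual information score for a candidate scheme.
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins=16) -> float:
    """Histogram estimate of I(X;Y) in bits for two equally sized arrays."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(4)
scene = rng.random((64, 64))
# Candidate scheme: block-average to a coarse 16x16 "phosphene" grid, then render back up
phosphenes = scene.reshape(16, 4, 16, 4).mean(axis=(1, 3))
rendered = np.kron(phosphenes, np.ones((4, 4)))
print(f"I(scene; PI) = {mutual_information(scene, rendered):.2f} bits")
```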
Demosaicking algorithm for the Kodak-RGBW color filter array
NASA Astrophysics Data System (ADS)
Rafinazari, M.; Dubois, E.
2015-01-01
Digital cameras capture images through different color filter arrays and then reconstruct the full-color image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. During demosaicking, the two unknown color components are estimated at each pixel location. Most demosaicking algorithms use the Bayer RGB CFA pattern with red, green, and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper, we develop a new demosaicking algorithm for the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.
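For orientation, the sketch below shows plain bilinear interpolation on a Bayer RGGB mosaic, the simplest non-adaptive baseline; it is neither the least-squares luma-chroma demultiplexing method nor the proposed RGBW algorithm.

```python
# A minimal sketch of bilinear demosaicking for a Bayer RGGB mosaic (baseline only).
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic_rggb(cfa: np.ndarray) -> np.ndarray:
    """cfa: 2D mosaic with R at (even,even), G at (even,odd)/(odd,even), B at (odd,odd)."""
    h, w = cfa.shape
    rows, cols = np.mgrid[0:h, 0:w]
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0     # interpolates the quincunx G plane
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0    # interpolates the sparse R/B planes

    out = np.empty((h, w, 3), float)
    out[..., 0] = convolve(cfa * r_mask, k_rb, mode="mirror")
    out[..., 1] = convolve(cfa * g_mask, k_g, mode="mirror")
    out[..., 2] = convolve(cfa * b_mask, k_rb, mode="mirror")
    return out

rng = np.random.default_rng(5)
mosaic = rng.random((64, 64))                    # stand-in for a captured CFA frame
print(bilinear_demosaic_rggb(mosaic).shape)      # (64, 64, 3)
```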
Estimation of spectral distribution of sky radiance using a commercial digital camera.
Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao
2016-01-10
Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
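A minimal sketch of the core idea follows: represent the spectrum as a polynomial in wavelength and map RGB counts to the polynomial coefficients by a linear transformation fitted with least squares. The training data below are synthetic placeholders, and the calibration details differ from the paper's procedure.

```python
# A minimal sketch of fitting an RGB -> polynomial-coefficient map for spectral estimation.
import numpy as np

rng = np.random.default_rng(6)
wavelengths = np.linspace(430, 680, 26)                     # nm, the range quoted above
n_train, degree = 200, 3

# Hypothetical training data: RGB counts with co-measured spectra (e.g., from a spectrometer)
rgb = rng.random((n_train, 3))
true_T = rng.standard_normal((3, degree + 1))
coeffs = rgb @ true_T                                        # per-sample polynomial coefficients
basis = np.vander((wavelengths - 430) / 250, degree + 1, increasing=True)
spectra = coeffs @ basis.T + 0.01 * rng.standard_normal((n_train, len(wavelengths)))

# Fit: recover coefficients from spectra, then the linear RGB -> coefficient transformation
coeff_fit, *_ = np.linalg.lstsq(basis, spectra.T, rcond=None)   # (degree+1, n_train)
T_fit, *_ = np.linalg.lstsq(rgb, coeff_fit.T, rcond=None)       # (3, degree+1)

# Predict a sky radiance spectrum for a new RGB triple
new_rgb = np.array([0.4, 0.5, 0.3])
predicted_spectrum = (new_rgb @ T_fit) @ basis.T
print(predicted_spectrum.shape)                                  # one value per wavelength sample
```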
Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao
2014-11-17
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. An analysis of the DAC's linearity versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than that of the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
NASA Astrophysics Data System (ADS)
Lawi, Armin; Adhitya, Yudhi
2018-03-01
The objective of this research is to determine the quality of cocoa beans through the morphology of their digital images. Samples of cocoa beans were scattered on a bright white paper under a controlled lighting condition. A compact digital camera was used to capture the images. The images were then processed to extract their morphological parameters. Classification begins with an analysis of the cocoa bean images based on morphological feature extraction, using the parameters Area, Perimeter, Major Axis Length, Minor Axis Length, Aspect Ratio, Circularity, Roundness, and Feret Diameter. The cocoa beans are classified into 4 groups: Normal Beans, Broken Beans, Fractured Beans, and Skin Damaged Beans. The classification model used in this paper is the Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM), a proposed improvement of SVM using an ensemble method in which the separating hyperplanes are obtained by a least-squares approach and the multiclass procedure uses the One-Against-All method. The proposed model achieved a classification accuracy of 99.705% across the four classes using the morphological feature input parameters.
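A hedged sketch of the feature-extraction stage only (not the MELS-SVM classifier): beans are segmented from the bright background and the listed shape descriptors are computed per bean with scikit-image. The Otsu threshold, the darker-than-paper assumption, and the exact descriptor formulas are assumptions; feret_diameter_max requires a reasonably recent scikit-image.

```python
import numpy as np
from skimage import io, color, filters, measure

def bean_features(image_path):
    gray = color.rgb2gray(io.imread(image_path))
    mask = gray < filters.threshold_otsu(gray)      # beans assumed darker than paper
    labels = measure.label(mask)
    feats = []
    for r in measure.regionprops(labels):
        area, perim = r.area, r.perimeter
        feats.append({
            "area": area,
            "perimeter": perim,
            "major_axis": r.major_axis_length,
            "minor_axis": r.minor_axis_length,
            "aspect_ratio": r.major_axis_length / max(r.minor_axis_length, 1e-9),
            "circularity": 4 * np.pi * area / max(perim ** 2, 1e-9),
            "roundness": 4 * area / max(np.pi * r.major_axis_length ** 2, 1e-9),
            "feret_diameter": r.feret_diameter_max,
        })
    return feats   # one descriptor dictionary per detected bean
```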
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require such an imaging system to have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of the projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing tailored to the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present details of the construction, calibration, and images produced by this system. The link between projected pattern design and image processing algorithms is also discussed.
Design framework for a spectral mask for a plenoptic camera
NASA Astrophysics Data System (ADS)
Berkner, Kathrin; Shroff, Sapna A.
2012-01-01
Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs, which capture directional ray information, enable applications such as digital refocusing, rotation, or depth estimation. Only a few address capturing spectral information of the scene. It has been demonstrated that by modifying a plenoptic camera with a filter array containing different spectral filters inserted in the pupil plane of the main lens, the spectral dimension of the plenoptic function can be sampled. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information captured with a single sensor. Little work has been performed so far on analyzing the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally-coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimizing the spectral mask for a few sample applications.
Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong
2012-01-01
Train wheel sets must be periodically inspected for possible or actual premature failures, and it is important to record the wear history over the full service life of the wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on an opto-electronic measuring technique is presented in this paper. A charge-coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey-level values. The 'mapping function method' is used to transform image pixel coordinates into space coordinates. The images of wheel sets were captured when the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.
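The 'mapping function method' is essentially a calibrated transform from pixel to space coordinates. The sketch below fits a 2-D polynomial mapping from control points of known position; the polynomial order and variable names are assumptions, not the paper's exact calibration procedure.

```python
import numpy as np

def _design_matrix(uv, order):
    u, v = uv[:, 0], uv[:, 1]
    cols = [u**i * v**j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_mapping(uv, xz, order=2):
    """uv: (n, 2) pixel coordinates of control points; xz: (n, 2) space coords (mm)."""
    coeffs, *_ = np.linalg.lstsq(_design_matrix(uv, order), xz, rcond=None)
    return coeffs

def apply_mapping(coeffs, uv, order=2):
    """Map measured laser-profile pixel coordinates to space coordinates."""
    return _design_matrix(uv, order) @ coeffs
```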
Geyer, Stefan H.; Maurer-Gesek, Barbara; Reissig, Lukas F.; Weninger, Wolfgang J.
2017-01-01
We provide simple protocols for generating digital volume data with the high-resolution episcopic microscopy (HREM) method. HREM is capable of imaging organic materials with volumes up to 5 x 5 x 7 mm³ at typical numeric resolutions between 1 x 1 x 1 and 5 x 5 x 5 µm³. Specimens are embedded in methacrylate resin and sectioned on a microtome. After each section an image of the block surface is captured with a digital video camera that sits on the phototube connected to the compound microscope head. The optical axis passes through a green fluorescent protein (GFP) filter cube and is aligned with the position at which the block holder arm comes to rest after each section. In this way, a series of inherently aligned digital images displaying subsequent block surfaces is produced. Loading such an image series into three-dimensional (3D) visualization software facilitates the immediate conversion to digital volume data, which permit virtual sectioning in various orthogonal and oblique planes and the creation of volume- and surface-rendered computer models. We present three simple, tissue-specific protocols for processing various groups of organic specimens, including mouse, chick, quail, frog and zebrafish embryos, human biopsy material, uncoated paper and skin replacement material. PMID:28715372
NASA Astrophysics Data System (ADS)
Chen, Chao; Gao, Nan; Wang, Xiangjun; Zhang, Zonghua
2018-03-01
Phase-based fringe projection methods have been commonly used for three-dimensional (3D) measurements. However, image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors, and existing solutions are complex. This paper proposes an adaptive projection intensity adjustment method to avoid image saturation and maintain good fringe modulation when measuring objects with a wide range of surface reflectivities. The adapted fringe patterns are created using only one prior step of fringe-pattern projection and image capture. First, a set of phase-shifted fringe patterns with a maximum projection intensity value of 255 and a uniform gray-level pattern are projected onto the surface of an object. The patterns are reflected from and deformed by the object surface and captured by a digital camera. The best projection intensity corresponding to each saturated-pixel cluster is determined by fitting a polynomial function that transforms captured intensities into projected intensities. Subsequently, the adapted fringe patterns are constructed using the best projection intensities at the corresponding projector pixel coordinates. Finally, the adapted fringe patterns are projected for phase recovery and 3D shape calculation. The experimental results demonstrate that the proposed method achieves high measurement accuracy even for objects with a wide range of surface reflectivities.
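A hedged sketch of the intensity-adaptation step: a polynomial fitted to the prior capture maps captured gray levels back to projected levels, and saturated clusters are re-projected at the level predicted to land just below saturation. The target captured level, polynomial degree, sample values, and function names are assumptions for illustration only.

```python
import numpy as np

def fit_inverse_response(projected_levels, captured_levels, degree=3):
    """Fit projected = P(captured) from the prior uniform-pattern capture."""
    return np.polyfit(captured_levels, projected_levels, degree)

def adapted_intensity(poly, target_captured=250.0, max_level=255):
    """Best projection intensity so the captured value stays just below saturation."""
    best = np.polyval(poly, target_captured)
    return int(np.clip(best, 0, max_level))

# Example with made-up corresponding levels from a uniform-pattern capture.
projected = np.array([ 64, 128, 192, 255], float)
captured  = np.array([ 90, 170, 230, 255], float)
poly = fit_inverse_response(projected, captured)
new_level = adapted_intensity(poly)   # projection level for a saturated cluster
```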
A super resolution framework for low resolution document image OCR
NASA Astrophysics Data System (ADS)
Ma, Di; Agam, Gady
2013-01-01
Optical character recognition (OCR) is widely used for converting document images into digital media. Existing OCR algorithms and tools produce good results from high-resolution, good-quality document images. In this paper, we propose a machine-learning-based super resolution framework for low-resolution document image OCR. Two main techniques are used in our proposed approach: a document page segmentation algorithm and a modified K-means clustering algorithm. Using this approach, by exploiting coherence within the document, we reconstruct a higher-resolution image from a low-resolution document image and improve OCR results. Experimental results show a substantial gain on low-resolution documents such as those captured from video.
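A much-simplified, hedged illustration of exploiting in-document coherence: visually similar low-resolution glyph patches are grouped with K-means and each patch is replaced by an upscaled cluster centroid, averaging out noise before OCR. The patch size, cluster count, and upscaling factor are assumptions; this is not the authors' page segmentation or modified K-means algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.transform import resize

def cluster_upscale_patches(patches, k=64, scale=2):
    """patches: (n, h, w) array of low-resolution character patches."""
    n, h, w = patches.shape
    km = KMeans(n_clusters=k, n_init=10).fit(patches.reshape(n, -1))
    centroids = km.cluster_centers_.reshape(k, h, w)
    # Represent each patch by its (noise-averaged) cluster centroid, upscaled.
    return np.stack([
        resize(centroids[label], (h * scale, w * scale), order=3)
        for label in km.labels_
    ])
```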
Next-generation digital camera integration and software development issues
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Peters, Ken; Hecht, Richard
1998-04-01
This paper investigates the complexities associated with the development of next-generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market, the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating system. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data-flow software architecture, testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.
NASA Astrophysics Data System (ADS)
Jaiswal, Mayoore; Horning, Matt; Hu, Liming; Ben-Or, Yau; Champlin, Cary; Wilson, Benjamin; Levitz, David
2018-02-01
Cervical cancer is the fourth most common cancer among women worldwide and is especially prevalent in low-resource settings due to lack of screening and treatment options. Visual inspection with acetic acid (VIA) is a widespread and cost-effective screening method for cervical pre-cancer lesions, but accuracy depends on the experience level of the health worker. Digital cervicography, capturing images of the cervix, enables review by an off-site expert or potentially a machine learning algorithm. These reviews require images of sufficient quality; however, image quality varies greatly across users. A novel algorithm was developed to evaluate the sharpness of images captured with MobileODT's digital cervicography device (EVA System), in order to eventually provide feedback to the health worker. The key challenges are that the algorithm evaluates only a single image of each cervix, must be robust to the variability of cervix images, and must run in real time on a mobile device, while the machine learning model must be small enough to fit in a mobile device's memory and be trainable on a small, imbalanced dataset. In this paper, the focus scores of a preprocessed image and a Gaussian-blurred version of the image are calculated using established methods and used as features. A feature selection metric is proposed to select the top features, which are then used in a random forest classifier to produce the final focus score. The resulting model, based on nine calculated focus scores, achieved significantly better accuracy than any single focus measure when tested on a holdout set of images. The area under the receiver operating characteristic curve was 0.9459.
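A minimal sketch of the classifier stage: simple focus measures are computed on the image and on a Gaussian-blurred copy, stacked as a feature vector, and fed to a random forest. The specific focus measures, kernel size, and labels here are assumptions and do not reproduce the paper's nine focus scores or its feature selection metric.

```python
import numpy as np
import cv2
from sklearn.ensemble import RandomForestClassifier

def focus_features(gray):
    """Focus measures on the image and a blurred copy, stacked as features."""
    blurred = cv2.GaussianBlur(gray, (9, 9), 0)
    feats = []
    for img in (gray, blurred):
        lap = cv2.Laplacian(img, cv2.CV_64F)
        sob = cv2.Sobel(img, cv2.CV_64F, 1, 0)
        feats += [lap.var(), np.abs(sob).mean(), img.std()]
    return np.array(feats)

# X: (n_images, 6) stacked focus features; y: 0 = out of focus, 1 = sharp.
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# sharpness = clf.predict_proba(focus_features(new_gray)[None, :])[0, 1]
```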
Banavar, Spoorthi Ravi; Chippagiri, Prashanthi; Pandurangappa, Rohit; Annavajjula, Saileela; Rajashekaraiah, Premalatha Bidadi
2016-01-01
Background. Microscopes are omnipresent throughout the field of biological research. With microscopes one can see in detail what is going on at the cellular level in tissues. Though it is a ubiquitous tool, the limitation is that at high magnification there is a small field of view. It is often advantageous to see an entire sample at high magnification. Over the years, technological advancements in optics have helped to provide solutions to this limitation of microscopes by creating the so-called dedicated “slide scanners”, which can provide a “whole slide digital image.” These scanners can provide a seamless, large-field-of-view, high-resolution image of an entire tissue section. The only disadvantage of such whole slide imaging systems is their prohibitive cost, which hinders their practical use by most laboratories, especially in developing and low-resource countries. Methods. In a quest for a substitute, we tried the commonly used image editing software Adobe Photoshop along with a basic image capturing device attached to a trinocular microscope to create a digital pathology slide. Results. The seamless image created using Adobe Photoshop maintained its diagnostic quality. Conclusion. With time and effort, photomicrographs obtained from a basic camera-microscope setup can be combined and merged in Adobe Photoshop to create a whole slide digital image of practically usable quality at a negligible cost. PMID:27747147
Miniature, mobile X-ray computed radiography system
Watson, Scott A; Rose, Evan A
2017-03-07
A miniature, portable x-ray system may be configured to scan images stored on a phosphor. A flash circuit may be configured to project red light onto a phosphor and receive blue light from the phosphor. A digital monochrome camera may be configured to receive the blue light to capture an article near the phosphor.
Digital TAcy: proof of concept
NASA Astrophysics Data System (ADS)
Bubel, Annie; Sylvain, Jean-François; Martin, François
2009-06-01
Anthocyanins are water-soluble pigments in plants that are recognized for their antioxidant properties. These pigments are found in high concentration in cranberries, giving them their characteristic dark red color. The Total Anthocyanin concentration (TAcy) measurement process takes considerable time, consumes chemical products and needs to be continuously repeated during the harvesting period. The idea of the digital TAcy system is to explore the possibility of estimating TAcy by analysing the color of the fruits. A calibrated color image capture set-up was developed and characterized, allowing calibrated color data capture from hundreds of samples over two harvesting years (fall of 2007 and 2008). The acquisition system was designed to avoid specular reflections and provide good-resolution images with an extended range of color values representative of the different stages of fruit ripeness. The chemical TAcy value being known for every sample, a mathematical model was developed to predict TAcy based on color information. This model, which also takes into account bruised and rotten fruits, shows an RMS error of less than 6% over the TAcy range of interest [0-50].
Calibration for single multi-mode fiber digital scanning microscopy imaging system
NASA Astrophysics Data System (ADS)
Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong
2015-11-01
Single multimode fiber (MMF) digital scanning imaging systems represent a development trend for modern endoscopes. We concentrate on the calibration method for such an imaging system. The calibration method comprises two processes: forming scanning focused spots and calibrating the coupling factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the multimode fiber (MMF) output. Compared with other algorithms, APC has several merits: high speed, a small amount of computation and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the coupling factor. We set up a calibration experimental system to form the scanning focused spots and calculate the coupling factors for different object positions. The experimental results show that the coupling factor is higher in the center than at the edge.
Integrated clinical workstations for image and text data capture, display, and teleconsultation.
Dayhoff, R.; Kuzmak, P. M.; Kirin, G.
1994-01-01
The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway. PMID:7949899
Workflow Challenges of Enterprise Imaging: HIMSS-SIIM Collaborative White Paper.
Towbin, Alexander J; Roth, Christopher J; Bronkalla, Mark; Cram, Dawn
2016-10-01
With the advent of digital cameras, there has been an explosion in the number of medical specialties using images to diagnose or document disease and guide interventions. In many specialties, these images are not added to the patient's electronic medical record and are not distributed so that other providers caring for the patient can view them. As hospitals begin to develop enterprise imaging strategies, they have found that there are multiple challenges preventing the implementation of systems to manage image capture, image upload, and image management. This HIMSS-SIIM white paper will describe the key workflow challenges related to enterprise imaging and offer suggestions for potential solutions to these challenges.
NASA Astrophysics Data System (ADS)
Dorey, C. K.; Ebenstein, David B.
1988-10-01
Subcellular localization of multiple biochemical markers is readily achieved through their characteristic autofluorescence or through use of appropriately labelled antibodies. Recent development of specific probes has permitted elegant studies of calcium and pH in living cells. However, each of these methods measured fluorescence at one wavelength; precise quantitation of multiple fluorophores at individual sites within a cell has not been possible. Using DIFM, we have achieved spectral analysis of discrete subcellular particles 1-2 µm in diameter. The fluorescence emission is broken into narrow bands by an interference monochromator and visualized through the combined use of a silicon intensified target (SIT) camera, a microcomputer-based framegrabber with 8-bit resolution, and a color video monitor. Image acquisition, processing, analysis and display are under software control. The digitized image can be corrected for the spectral distortions induced by the wavelength-dependent sensitivity of the camera, and the displayed image can be enhanced or presented in pseudocolor to facilitate discrimination of variations in pixel intensity of individual particles. For rapid comparison of the fluorophore composition of granules, a ratio image is produced by dividing the image captured at one wavelength by that captured at another. In the resultant ratio image, a granule which has a fluorophore composition different from the majority is selectively colored. This powerful system has been utilized to obtain spectra of endogenous autofluorescent compounds in discrete cellular organelles of human retinal pigment epithelium, and to measure immunohistochemically labelled components of the extracellular matrix associated with the human optic nerve.
NASA Astrophysics Data System (ADS)
Huang, Hua-Wei; Zhang, Yang
2008-08-01
An attempt has been made to characterize the colour spectrum of methane flames under various burning conditions using RGB and HSV colour models instead of resolving the real physical spectrum. The results demonstrate that each type of flame has its own characteristic distribution in both RGB and HSV space. It has also been observed that the averaged B and G values in the RGB model represent well the CH* and C2* emission of premixed methane flames. These features may be utilized for flame measurement and monitoring. The great advantage of using a conventional camera for monitoring flame properties based on the colour spectrum is that it is readily available, easy to interface with a computer, cost-effective and offers a certain spatial resolution. Furthermore, it has been demonstrated that a conventional digital camera is able to image flames not only in the visible spectrum but also in the infrared. This feature is useful in avoiding the image saturation typically encountered in capturing very bright sooty flames. As a result, further digital image processing and quantitative information extraction is possible. It has been identified that an infrared image also has its own distribution in both RGB and HSV colour space in comparison with a flame image in the visible spectrum.
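A simple sketch of the colour-model computation described above: mean R, G, B values of a flame image and their HSV counterparts via OpenCV. The reported link between the averaged G/B channels and CH*/C2* emission is the paper's observation; this snippet only computes the statistics.

```python
import cv2

def flame_colour_stats(path):
    bgr = cv2.imread(path)                       # OpenCV loads images as BGR
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    b, g, r = cv2.split(bgr)
    h, s, v = cv2.split(hsv)
    return {"R": r.mean(), "G": g.mean(), "B": b.mean(),
            "H": h.mean(), "S": s.mean(), "V": v.mean()}
```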
Wegleitner, Eric J.; Isermann, Daniel A.
2017-01-01
Many biologists use digital images for estimating ages of fish, but the use of images could lead to differences in age estimates and precision because image capture can produce changes in light and clarity compared to directly viewing structures through a microscope. We used sectioned sagittal otoliths from 132 Largemouth Bass Micropterus salmoides and sectioned dorsal spines and otoliths from 157 Walleyes Sander vitreus to determine whether age estimates and among‐reader precision were similar when annuli were enumerated directly through a microscope or from digital images. Agreement of ages between viewing methods for three readers was highest for Largemouth Bass otoliths (75–89% among readers), followed by Walleye otoliths (63–70%) and Walleye dorsal spines (47–64%). Most discrepancies (72–96%) were ±1 year, and differences were more prevalent for age‐5 and older fish. With few exceptions, mean ages estimated from digital images were similar to ages estimated via directly viewing the structures through the microscope, and among‐reader precision did not vary between viewing methods for each structure. However, the number of disagreements we observed suggests that biologists should assess potential differences in age structure that could arise if images of calcified structures are used in the age estimation process.
Demonstrating Change with Astronaut Photography Using Object Based Image Analysis
NASA Technical Reports Server (NTRS)
Hollier, Andi; Jagge, Amy
2017-01-01
Every day, hundreds of images of Earth flood the Crew Earth Observations database as astronauts use hand held digital cameras to capture spectacular frames from the International Space Station. The variety of resolutions and perspectives provide a template for assessing land cover change over decades. We will focus on urban growth in the second fastest growing city in the nation, Houston, TX, using Object-Based Image Analysis. This research will contribute to the land change science community, integrated resource planning, and monitoring of the rapid rate of urban sprawl.
Lam, Christopher T.; Krieger, Marlee S.; Gallagher, Jennifer E.; Asma, Betsy; Muasher, Lisa C.; Schmitt, John W.; Ramanujam, Nimmi
2015-01-01
Introduction Current WHO guidelines for cervical cancer screening in low- and middle-income countries involve visual inspection with acetic acid (VIA) of the cervix, followed by treatment with cryotherapy during the same or a subsequent visit if a suspicious lesion is found. Implementation of these guidelines is hampered by a lack of trained health workers, reliable technology, and access to screening facilities. A low-cost, ultra-portable, point-of-care tampon-based digital colposcope (POCkeT Colposcope) for use in community-level settings has the unique form factor of a tampon and can be inserted into the vagina to capture images of the cervix that are on par with those of a state-of-the-art colposcope, at a fraction of the cost. A repository of images is to be compiled that can be used to empower front-line workers to become more effective through virtual dynamic training. By task shifting to the community setting, this technology could potentially provide significantly greater cervical screening access to where the most vulnerable women live. The POCkeT Colposcope's concentric LED ring provides comparable white and green field illumination at a fraction of the electrical power required by commercial colposcopes. The POCkeT Colposcope was evaluated with standard optical imaging targets against a state-of-the-art digital colposcope and other VIAM technologies. Results Our POCkeT Colposcope has comparable resolving power, color reproduction accuracy, minimal lens distortion, and illumination when compared to commercially available colposcopes. In vitro and pilot in vivo imaging results are promising, with our POCkeT Colposcope capturing images of comparable quality to commercial systems. Conclusion The POCkeT Colposcope is capable of capturing images suitable for cervical lesion analysis. Our portable low-cost system could potentially increase access to cervical cancer screening in limited-resource settings through task shifting to community health workers. PMID:26332673
Electronic Still Camera Project on STS-48
NASA Technical Reports Server (NTRS)
1991-01-01
On behalf of NASA, the Office of Commercial Programs (OCP) has signed a Technical Exchange Agreement (TEA) with Autometric, Inc. (Autometric) of Alexandria, Virginia. The purpose of this agreement is to evaluate and analyze a high-resolution Electronic Still Camera (ESC) for potential commercial applications. During the mission, Autometric will provide unique photo analysis and hard-copy production. Once the mission is complete, Autometric will furnish NASA with an analysis of the ESC's capabilities. Electronic still photography is a developing technology providing the means by which a hand held camera electronically captures and produces a digital image with resolution approaching film quality. The digital image, stored on removable hard disks or small optical disks, can be converted to a format suitable for downlink transmission, or it can be enhanced using image processing software. The on-orbit ability to enhance or annotate high-resolution images and then downlink these images in real-time will greatly improve Space Shuttle and Space Station capabilities in Earth observations and on-board photo documentation.
A 3D scanner prototype utilizing object profile imaging with a line laser and Octave software
NASA Astrophysics Data System (ADS)
Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus
2016-11-01
A three-dimensional scanner, or 3D scanner, is a device for reconstructing a real object in digital form on a computer. 3D scanning is a technology still under development, especially in developed countries, and current 3D scanner devices are advanced but very expensive. This study is basically a simple prototype of a 3D scanner with a very low investment cost. The 3D scanner prototype consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects whose radius is constant about their center point (the object pivot). Scanning is performed by imaging the object profile produced by the line laser, which is captured by the camera and processed on a computer (image processing) using Octave software. At each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so that for one full turn multiple images covering all sides are obtained. The profiles of all the images are then extracted in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard, a gauge block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstruction against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
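The abstract describes processing in Octave; below is a hedged Python sketch of one core step of such a pipeline: locating the laser line in a frame and converting it into 3D points for the current turntable angle. The pixel scale, image centre column, and brightest-pixel line detection are assumptions, not the authors' implementation.

```python
import numpy as np

def profile_to_points(frame_rgb, angle_deg, mm_per_pixel=0.2, center_col=320):
    """Convert one laser-line frame at a given turntable angle into 3D points."""
    red = frame_rgb[:, :, 0].astype(float)
    cols = red.argmax(axis=1)                    # laser column per image row
    radius = np.abs(cols - center_col) * mm_per_pixel
    z = np.arange(red.shape[0]) * mm_per_pixel   # height along the rotation axis
    theta = np.deg2rad(angle_deg)
    return np.column_stack([radius * np.cos(theta), radius * np.sin(theta), z])
```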
Case report: teledermatology and epiluminescence microscopy for the diagnosis of scabies.
Weinstock, M A; Kempton, S A
2000-07-01
We wish to share images from a patient seen in our teledermatology program. Due to the absence of on-site dermatology services at the Togus, Maine, Department of Veterans Affairs, and associated community clinics for veterans in Aroostook, Bangor, Calais, and Rumford, we created a program to provide dermatologic expertise from Providence, Rhode Island. Patients referred for this service were evaluated by a nurse practitioner, who obtained a history, performed a physical examination, and captured digital images of the affected area of skin, including epiluminescence microscopic images where indicated. These data were then retrieved at the Providence (host) site and reviewed by a dermatologist, who formulated an impression and plan that was then implemented by the remote site in Maine. This approach, which involves image capture at the remote site and later review of images at the host site, is the "store-and-forward" method, which appears to be a relatively cost-effective means of providing this service from a distance.
Whalen, T A; Demarco, A J
1999-10-01
A method is described for measuring the volume of individual specimens of Amoeba proteus which utilizes an easily constructed compressor to flatten the specimen to a known thickness. The microscopic image of the flattened specimen is captured on tape, digitized and analysed with the NIH Image software. The results from one specimen are given to illustrate the sources and magnitude of errors affecting these volume measurements.
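The volume computation implied above reduces to the segmented cross-sectional area times the known flattened thickness. A minimal sketch follows, with the Otsu threshold and calibration values as assumptions (the original work used NIH Image rather than Python).

```python
from skimage import filters

def flattened_volume(gray_image, thickness_um, um_per_pixel):
    """Volume of a specimen compressed to a known thickness, from one image."""
    mask = gray_image < filters.threshold_otsu(gray_image)   # specimen darker than field
    area_um2 = mask.sum() * um_per_pixel ** 2
    return area_um2 * thickness_um   # cubic micrometres
```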
NASA Astrophysics Data System (ADS)
Cunningham, Cindy C.; Peloquin, Tracy D.
1999-02-01
Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.
Koban, K C; Leitsch, S; Holzbach, T; Volkmer, E; Metz, P M; Giunta, R E
2014-04-01
A new approach of using photographs from smartphones for three-dimensional (3D) imaging has been introduced alongside the standard high-quality 3D camera systems. In this work, we investigated different capture preferences and compared the accuracy of this 3D reconstruction method with manual tape measurement and an established commercial 3D camera system. The facial region of one plastic mannequin head was labelled with 21 landmarks. A 3D reference model was captured with the Vectra 3D Imaging System®. In addition, 3D imaging was performed with the Autodesk 123D Catch® application using 16, 12, 9, 6 and 3 pictures from an Apple® iPhone 4s® and an iPad® 3rd generation. The accuracy of the 3D reconstruction was measured in 2 steps. First, 42 distance measurements from manual tape measurement and the 2 digital systems were compared. Second, the surface-to-surface deviation of different aesthetic units from the Vectra® reference model to the Catch®-generated models was analysed. For each 3D system the capturing and processing time was measured. The measurements showed no significant (p>0.05) differences between manual tape measurement and the digital distances from either the Catch® application or Vectra®. The surface-to-surface deviation from the Vectra® reference model showed sufficient results for the 3D reconstruction with Catch® using the 16, 12 and 9 picture sets. Use of 6 and 3 pictures resulted in large deviations. Lateral aesthetic units showed higher deviations than central units. Catch® needed 5 times longer to capture and compute 3D models (average 10 min vs. 2 min). The models computed by Autodesk 123D Catch® suggest good accuracy of the 3D reconstruction for a standard mannequin model, in comparison with manual tape measurement and the surface-to-surface analysis against a 3D reference model. However, the prolonged capture time with multiple pictures is prone to errors. Further studies are needed to investigate its application and quality in capturing volunteer models. Soon, mobile applications may offer plastic surgeons an alternative to today's cost-intensive, stationary 3D camera systems. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Merkel, Ronny; Breuhan, Andy; Hildebrandt, Mario; Vielhauer, Claus; Bräutigam, Anja
2012-06-01
In the field of crime scene forensics, current methods of evidence collection, such as the acquisition of shoe marks, tire impressions, palm prints or fingerprints, are in most cases still performed in an analogue way. For example, fingerprints are captured by powdering and sticky-tape lifting, ninhydrin bathing or cyanoacrylate fuming and subsequent photographing. Images of the evidence are then further processed by forensic experts. With the upcoming use of new multimedia systems for the digital capturing and processing of crime scene traces in forensics, higher resolutions can be achieved, leading to much better quality of forensic images. Furthermore, the fast and mostly automated preprocessing of such data using digital signal processing techniques is an emerging field. Also, by the optical and non-destructive lifting of forensic evidence, traces are not destroyed and can therefore be re-captured, e.g. by creating time series of a trace, to extract its aging behavior and perhaps determine the time the trace was left. However, such new methods and tools face different challenges, which need to be addressed before practical application in the field. Based on the example of fingerprint age determination, which has been an unresolved research challenge for forensic experts for decades, we evaluate the influences of different environmental conditions as well as different types of sweat and their implications for the capturing sensor, preprocessing methods and feature extraction. We use a Chromatic White Light (CWL) sensor as an example of such a new optical and contactless measurement device and investigate the influence of 16 different environmental conditions, 8 different sweat types and 11 different preprocessing methods on the aging behavior of 48 fingerprint time series (2592 fingerprint scans in total). We show the challenges that arise for such new multimedia systems capturing and processing forensic evidence.
Marcus, Inna; Tung, Irene T; Dosunmu, Eniolami O; Thiamthat, Warakorn; Freedman, Sharon F
2013-12-01
To compare anterior segment findings identified in young children using digital photographic images from the Lytro light field camera to those observed clinically. This was a prospective study of children <9 years of age with an anterior segment abnormality. Clinically observed anterior segment examination findings for each child were recorded, and several digital images of the anterior segment of each eye were captured with the Lytro camera. The images were later reviewed by a masked examiner. Sensitivity of abnormal examination findings on Lytro imaging was calculated, using the clinical examination as the gold standard. A total of 157 eyes of 80 children (mean age, 4.4 years; range, 0.1-8.9) were included. Clinical examination revealed 206 anterior segment abnormalities altogether: lids/lashes (n = 21 eyes), conjunctiva/sclera (n = 28 eyes), cornea (n = 71 eyes), anterior chamber (n = 14 eyes), iris (n = 43 eyes), and lens (n = 29 eyes). Review of Lytro photographs of eyes with clinically diagnosed anterior segment abnormalities correctly identified 133 of 206 (65%) of all abnormalities. Additionally, 185 abnormalities in 50 children were documented at examination under anesthesia. The Lytro camera was able to document most abnormal anterior segment findings in unsedated young children. Its unique ability to allow focus change after image capture is a significant improvement on prior technology. Copyright © 2013 American Association for Pediatric Ophthalmology and Strabismus. Published by Mosby, Inc. All rights reserved.
Low-cost, high-performance and efficiency computational photometer design
NASA Astrophysics Data System (ADS)
Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly
2014-05-01
Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance, high-efficiency, drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the arctic, including volcanic plumes, ice formation, and arctic marine life.
An image compression algorithm for a high-resolution digital still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an MHub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the MHub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and to quantify the aerodynamic trajectories of the debris.
Error-proofing test system of industrial components based on image processing
NASA Astrophysics Data System (ADS)
Huang, Ying; Huang, Tao
2018-05-01
With the improvement of modern industrial standards and accuracy requirements, conventional manual testing fails to satisfy enterprise test standards, so digital image processing techniques are needed to gather and analyze information from the surface of industrial components in order to perform the test. To test the installation of automotive engine components, this paper employs a camera to capture images of the components. After the images are preprocessed, including denoising, an image processing algorithm based on flood fill is used to test the installation of the components. The results show that this system has very high test accuracy.
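A hedged sketch of a flood-fill presence check in the spirit of the system described above: threshold the image, flood-fill from a seed inside the expected component region, and compare the filled area with a tolerance band. The seed point, Otsu thresholding, and area limits are assumptions for illustration only, not the paper's parameters.

```python
import cv2
import numpy as np

def component_installed(gray, seed=(120, 200), area_range=(800, 1500)):
    """gray: 8-bit image; seed: (x, y) inside the expected component region."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
    cv2.floodFill(binary, mask, seed, 128)       # fill the connected region at the seed
    filled_area = int((mask[1:-1, 1:-1] > 0).sum())
    return area_range[0] <= filled_area <= area_range[1]
```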
Suitability of digital camcorders for virtual reality image data capture
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola; Maas, Hans-Gerd
1998-12-01
Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a means to synchronize multiple devices, limiting their suitability for 3-D motion data capture. Moreover, the standard video format uses interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision-like equipment), this functionality could probably be included by the manufacturers at almost zero cost.
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively in the domain of image forensics, together with a classification of ongoing developments within the area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes, namely optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based. Furthermore, this paper investigates the challenging problems and the proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Kakudo, Natsuko; Kushida, Satoshi; Tanaka, Nobuko; Minakata, Tatsuya; Suzuki, Kenji; Kusumoto, Kenji
2011-11-01
Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological esthetic surgery. Conspicuous facial pores are one of the most frequently encountered skin problems in women of all ages. This study was performed to analyze the effectiveness of reducing conspicuous facial pores using glycolic acid chemical peeling (GACP) based on a novel computer analysis of digital-camera-captured images. GACP was performed a total of five times at 2-week intervals in 22 healthy women. Computerized image analysis of conspicuous, open, and darkened facial pores was performed using the Robo Skin Analyzer CS 50. The number of conspicuous facial pores decreased significantly in 19 (86%) of the 22 subjects, with a mean improvement rate of 34.6%. The number of open pores decreased significantly in 16 (72%) of the subjects, with a mean improvement rate of 11.0%. The number of darkened pores decreased significantly in 18 (81%) of the subjects, with a mean improvement rate of 34.3%. GACP significantly reduces the number of conspicuous facial pores. The Robo Skin Analyzer CS 50 is useful for the quantification and analysis of 'pore enlargement', a subtle finding in dermatological esthetic surgery. © 2011 John Wiley & Sons A/S.
Hirano, Masatsugu; Yamasaki, Katsuhito; Okada, Hiroshi; Kitazawa, Sohei; Kitazawa, Riko; Ohno, Yoshiharu; Sakurai, Takashi; Kondoh, Takeshi; Ohbayashi, Chiho; Katafuchi, Tetsuro; Maeda, Sakan; Sugimura, Kazuro; Tamura, Shinichi
2005-03-01
We discuss the usefulness of the refraction contrast method using highly parallel X-rays as a new approach to minute lung cancer detection. The advantages of refraction contrast images are discussed in terms of contrast, and a comparison is made with absorption images. We simulated refraction contrast imaging using globules with the density of water in air as models for minute lung cancer detection. The contrast, intensified by bright and dark lines at the globule boundary, was compared with the contrast of absorption images. We adopted Monte Carlo simulation to determine the strength of the profile curve of the photon counts at the detector. The obtained contrasts were more intense by two to three orders of magnitude than those obtainable with the absorption contrast imaging method. The contrast in refraction contrast imaging was more intense than that obtainable with absorption contrast imaging. A two to three order-of-magnitude improvement in contrast means that it is possible to greatly reduce the exposure dose necessary for imaging. Therefore, it is expected to become possible to detect the interfaces of soft tissues, which are difficult to capture with conventional absorption imaging, at low dosages and high resolution.
Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images
NASA Astrophysics Data System (ADS)
Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.
2016-03-01
Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, therefore minimizing user input, optimizing workflow, and reducing human error. The application begins with pre-processing the input images by removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method focuses primarily on the use of the anatomical content of the images to generate the panoramas as opposed to using external markers employed to aid with the alignment process. Currently, results show robust edge detection prior to registration and we have tested our approach by comparing the resulting automatically-stitched panoramas to the manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically- and manually-stitched images using 26 patient datasets.
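A hedged, simplified sketch of the stitching idea: the offset between consecutive sector images is estimated by phase correlation (pure translation, whereas the paper uses a rigid-body intensity-based registration) and the sectors are pasted into a panorama canvas with a naive last-image-wins overlap rule.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def stitch_sectors(sectors):
    """sectors: list of overlapping 2-D grayscale sector images of equal size, in order."""
    offsets = [np.zeros(2)]
    for prev, curr in zip(sectors[:-1], sectors[1:]):
        shift, _, _ = phase_cross_correlation(prev, curr)
        offsets.append(offsets[-1] + shift)      # accumulate each sector's placement
    offsets = np.round(np.array(offsets) - np.array(offsets).min(axis=0)).astype(int)
    h = max(o[0] + s.shape[0] for o, s in zip(offsets, sectors))
    w = max(o[1] + s.shape[1] for o, s in zip(offsets, sectors))
    panorama = np.zeros((h, w), dtype=sectors[0].dtype)
    for o, s in zip(offsets, sectors):
        panorama[o[0]:o[0] + s.shape[0], o[1]:o[1] + s.shape[1]] = s
    return panorama
```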
Super-resolved all-refocused image with a plenoptic camera
NASA Astrophysics Data System (ADS)
Wang, Xiang; Li, Lin; Hou, Guangqi
2015-12-01
This paper proposes an approach to produce super-resolved all-in-focus images with a plenoptic camera. A plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in one single shot. A sequence of digitally refocused images, focused at different depths, can be produced after processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of micro-lenses in the micro-lens array, so the limited number of micro-lenses results in low-resolution refocused images that lack detail. Such lost detail, which is often high-frequency information, is important for the in-focus part of the refocused image, so we super-resolve these in-focus parts. The result of an image segmentation method based on random walks, which operates on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image, and a focus evaluation function is employed to determine which refocused image has the clearest foreground and which has the clearest background. Subsequently, we employ a single-image super-resolution method based on sparse signal representation to process the in-focus parts of these selected refocused images. Finally, we obtain the super-resolved all-in-focus image by merging the in-focus background and foreground parts through digital signal processing, and more spatial detail is retained in these output images. Our method enhances the resolution of the refocused image, and only the refocused images having the clearest foreground and background need to be super-resolved.
Applications and challenges of digital pathology and whole slide imaging.
Higgins, C
2015-07-01
Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100 × or a combination of magnifications to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.
A review of the current state of digital plate reading of cultures in clinical microbiology.
Rhoads, Daniel D; Novak, Susan M; Pantanowitz, Liron
2015-01-01
Digital plate reading (DPR) is increasingly being adopted as a means to facilitate the analysis and improve the quality and efficiency within the clinical microbiology laboratory. This review discusses the role of DPR in the context of total laboratory automation and explores some of the platforms currently available or in development for digital image capturing of microbial growth on media. The review focuses on the advantages and challenges of DPR. Peer-reviewed studies describing the utility and quality of these novel DPR systems are largely lacking, and professional guidelines for DPR implementation and quality management are needed. Further development and more widespread adoption of DPR is anticipated.
Electronic Photography at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Holm, Jack; Judge, Nancianne
1995-01-01
An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.
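The tone-reproduction optimization described above amounts to remapping stored 8-bit levels through device-specific lookup tables. The sketch below is illustrative only; the actual LaRC tone curves and device characterizations are not given in the abstract, and the power-law curve here is a hypothetical placeholder.

```python
import numpy as np

def build_tone_lut(gamma=1.0 / 2.2):
    """Hypothetical 8-bit lookup table that redistributes levels with a simple
    power law, approximating a perceptually uniform tone scale."""
    x = np.arange(256) / 255.0
    return np.clip(np.round(255.0 * np.power(x, gamma)), 0, 255).astype(np.uint8)

def apply_lut(image_u8, lut):
    """Remap an 8-bit image through a per-level lookup table for a target output device."""
    return lut[image_u8]
```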
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is variance in the protocols and practices of the histology lab, the displayed color can also be affected by variation in capture parameters (for example, illumination and filters), image processing, and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters whose colors were selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both slides were scanned and the displayed images compared to a standard. The second approach is based on our previous multispectral imaging research. As a first step, the two-slide method above was used to identify inaccurate color display and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and to establish an image quality standard. This paper discusses one of the most important aspects of image quality: color.
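The abstract does not state how displayed colors are compared to the standard; one common choice, sketched here as an assumption, is a mean CIEDE2000 difference between each scanned calibration patch and its reference value.

```python
import numpy as np
from skimage import color

def mean_delta_e(captured_rgb, reference_rgb):
    """Mean CIEDE2000 colour difference between a scanned calibration patch
    (e.g., one of the nine filters) and its reference colour.

    Inputs are RGB arrays (uint8 or float in [0, 1]); both are converted to CIELAB.
    """
    lab_cap = color.rgb2lab(captured_rgb)
    lab_ref = color.rgb2lab(reference_rgb)
    return float(np.mean(color.deltaE_ciede2000(lab_ref, lab_cap)))
```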
Capturing latent fingerprints from metallic painted surfaces using UV-VIS spectroscope
NASA Astrophysics Data System (ADS)
Makrushin, Andrey; Scheidat, Tobias; Vielhauer, Claus
2015-03-01
In digital crime scene forensics, contactless, non-destructive detection and acquisition of latent fingerprints by means of optical devices such as a high-resolution digital camera, confocal microscope, or chromatic white-light sensor is the initial step prior to destructive chemical development. The applicability of an optical sensor to digitize latent fingerprints depends primarily on the reflection properties of the substrate. Metallic painted surfaces, for instance, pose a problem for conventional sensors that use visible light. Since metallic paint is a semi-transparent layer on top of the surface, visible light penetrates it and is reflected off the metallic flakes randomly dispersed in the paint. Fingerprint residues do not impede the light, leaving the ridges invisible. Latent fingerprints can be revealed, however, using ultraviolet light, which does not penetrate the paint. We apply a UV-VIS spectroscope capable of capturing images in the range from 163 to 844 nm using 2048 discrete levels. We show empirically that latent fingerprints left behind on metallic painted surfaces become clearly visible in the range from 205 to 385 nm. Our proposed streakiness score, a feature quantifying the proportion of ridge-valley pattern in an image, is applied for automatic assessment of a fingerprint's visibility and for distinguishing between fingerprint and empty regions. The experiments are carried out with 100 fingerprint and 100 non-fingerprint samples.
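The authors' streakiness score is not defined in the abstract. The sketch below is only an illustrative approximation of a ridge-pattern score: the fraction of spectral energy in the spatial-frequency band typical of fingerprint ridge spacing; the band limits and function name are assumptions.

```python
import numpy as np

def streakiness_score(patch, min_period_px=5, max_period_px=15):
    """Illustrative ridge-pattern score: fraction of spectral energy in the
    frequency band corresponding to typical ridge periods (in pixels).

    Approximation for demonstration only; not the authors' definition.
    """
    patch = patch - patch.mean()
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    # Convert ridge periods (pixels) to radial frequency bins of the FFT grid.
    lo, hi = min(h, w) / max_period_px, min(h, w) / min_period_px
    band = (radius >= lo) & (radius <= hi)
    return float(spec[band].sum() / (spec.sum() + 1e-12))
```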
A sunset Earth observation image taken during STS-100
2001-04-26
S100-E-5498 (26 April 2001) --- Earth's limb--the edge of the planet seen at twilight--was captured with a digital still camera by one of the STS-100 crew members aboard the Space Shuttle Endeavour. Near center frame the silhouette of cloud layers can be seen in the atmosphere, above which lies an airglow layer (left).
Rapid assessment of above-ground biomass of Giant Reed using visibility estimates
USDA-ARS?s Scientific Manuscript database
A method for the rapid estimation of biomass and density of giant reed (Arundo donax L.) was developed using estimates of visibility as a predictive tool. Visibility estimates were derived by capturing digital images of a 0.25 m2 polystyrene whiteboard placed a set distance (1m) from the edge of gia...
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
A new, open-source, multi-modality digital breast phantom
NASA Astrophysics Data System (ADS)
Graff, Christian G.
2016-03-01
An anthropomorphic digital breast phantom has been developed with the goal of generating random voxelized breast models that capture the anatomic variability observed in vivo. This is a new phantom and is not based on existing digital breast phantoms or segmentation of patient images. It has been designed at the outset to be modality agnostic (i.e., suitable for use in modeling x-ray based imaging systems, magnetic resonance imaging, and potentially other imaging systems) and open source so that users may freely modify the phantom to suit a particular study. In this work we describe the modeling techniques that have been developed, the capabilities and novel features of this phantom, and study simulated images produced from it. Starting from a base quadric, a series of deformations are performed to create a breast with a particular volume and shape. Initial glandular compartments are generated using a Voronoi technique and a ductal tree structure with terminal duct lobular units is grown from the nipple into each compartment. An additional step involving the creation of fat and glandular lobules using a Perlin noise function is performed to create more realistic glandular/fat tissue interfaces and generate a Cooper's ligament network. A vascular tree is grown from the chest muscle into the breast tissue. Breast compression is performed using a neo-Hookean elasticity model. We show simulated mammographic and T1-weighted MRI images and study properties of these images.
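One plausible reading of the "Voronoi technique" for initial glandular compartments is nearest-seed labeling of the voxelized breast volume; the sketch below follows that assumption, with hypothetical function and parameter names.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_compartments(breast_mask, n_seeds=20, rng=None):
    """Assign each voxel inside a binary breast mask to its nearest random seed,
    producing Voronoi-style initial glandular compartments (illustrative only)."""
    rng = np.random.default_rng(rng)
    voxels = np.argwhere(breast_mask)                 # (N, 3) voxel coordinates
    seeds = voxels[rng.choice(len(voxels), size=n_seeds, replace=False)]
    _, label = cKDTree(seeds).query(voxels)           # nearest-seed index per voxel
    labels = np.zeros(breast_mask.shape, dtype=np.int32)
    labels[tuple(voxels.T)] = label + 1               # 0 marks voxels outside the breast
    return labels
```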
Remote assessment of acne: the use of acne grading tools to evaluate digital skin images.
Bergman, Hagit; Tsai, Kenneth Y; Seo, Su-Jean; Kvedar, Joseph C; Watson, Alice J
2009-06-01
Digital imaging of dermatology patients is a novel approach to remote data collection. A number of assessment tools have been developed to grade acne severity and to track clinical progress over time. Although these tools have been validated when used in a face-to-face setting, their efficacy and reliability when used to assess digital images have not been examined. The main purpose of this study was to determine whether specific assessment tools designed to grade acne during face-to-face visits can be applied to the evaluation of digital images. The secondary purpose was to ascertain whether images obtained by subjects are of adequate quality to allow such assessments to be made. Three hundred (300) digital images of patients with mild to moderate facial inflammatory acne from an ongoing randomized-controlled study were included in this analysis. These images were obtained from 20 patients and consisted of sets of 3 images taken over time. Of these images, 120 images were captured by subjects themselves and 180 were taken by study staff. Subjects were asked to retake their photographs if the initial images were deemed of poor quality by study staff. Images were evaluated by two dermatologists-in-training using validated acne assessment measures: Total Inflammatory Lesion Count, Leeds technique, and the Investigator's Global Assessment. Reliability of raters was evaluated using correlation coefficients and kappa statistics. Of the different acne assessment measures tested, the inter-rater reliability was highest for the total inflammatory lesion count (r = 0.871), but low for the Leeds technique (kappa = 0.381) and global assessment (kappa = 0.3119). Raters were able to evaluate over 89% of all images using each type of acne assessment measure despite the fact that images obtained by study staff were of higher quality than those obtained by patients (p < 0.001). Several existing clinical assessment measures can be used to evaluate digital images obtained from subjects with inflammatory acne lesions. The level of inter-rater agreement is highly variable across assessment measures, and we found the Total Inflammatory Lesion Count to be the most reliable. This measure could be used to allow a dermatologist to remotely track a patient's progress over time.
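The inter-rater statistics reported above (correlation for lesion counts, kappa for ordinal grades) can be reproduced with standard library calls; the ratings below are hypothetical placeholders, not the study data, and the unweighted kappa shown may differ from the variant the authors used.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings from two raters (not the study's data)
lesion_counts_r1 = np.array([12, 7, 25, 3, 18])
lesion_counts_r2 = np.array([11, 8, 23, 4, 20])
leeds_r1 = [2, 1, 4, 1, 3]   # ordinal Leeds grades
leeds_r2 = [2, 2, 3, 1, 3]

r, _ = pearsonr(lesion_counts_r1, lesion_counts_r2)    # continuous lesion counts
kappa = cohen_kappa_score(leeds_r1, leeds_r2)          # categorical grades
print(f"lesion-count r = {r:.3f}, Leeds kappa = {kappa:.3f}")
```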
Three-dimensional reconstruction from serial sections in PC-Windows platform by using 3D_Viewer.
Xu, Yi-Hua; Lahvis, Garet; Edwards, Harlene; Pitot, Henry C
2004-11-01
Three-dimensional (3D) reconstruction from serial sections allows identification of objects of interest in 3D and clarifies the relationships among these objects. 3D_Viewer, developed in our laboratory for this purpose, has four major functions: image alignment, movie frame production, movie viewing, and shift-overlay image generation. Color images captured from serial sections were aligned; the contours of objects of interest were then highlighted in a semi-automatic manner. These 2D images were automatically stacked at different viewing angles, and their composite images on a projected plane were recorded by an image transform-shift-overlay technique. The composite images are used for the object-rotation movie display. The design considerations of the program and the procedures used for 3D reconstruction from serial sections are described. This program, with a digital image-capture system, a semi-automatic contour-highlighting method, and an automatic image transform-shift-overlay technique, greatly speeds up the reconstruction process. Since images generated by 3D_Viewer are in a general graphic format, sharing data with others is easy. 3D_Viewer is written in MS Visual Basic 6 and is obtainable from our laboratory on request.
Evaluation of color grading impact in restoration process of archive films
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Janout, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2016-09-01
Color grading of archive films is a very particular task in the process of their restoration. The ultimate goal of color grading here is to achieve the same look of the movie as intended at the time of its first presentation. The role of the expert restorer, expert group and a digital colorist in this complicated process is to find the optimal settings of the digital color grading system so that the resulting image look is as close as possible to the estimate of the original reference release print adjusted by the expert group of cinematographers. A methodology for subjective assessment of perceived differences between the outcomes of color grading is introduced, and results of a subjective study are presented. Techniques for objective assessment of perceived differences are discussed, and their performance is evaluated using ground truth obtained from the subjective experiment. In particular, a solution based on calibrated digital single-lens reflex camera and subsequent analysis of image features captured from the projection screen is described. The system based on our previous work is further developed so that it can be used for the analysis of projected images. It allows assessing color differences in these images and predict their impact on the perceived difference in image look.
Optical Measurement of In-plane Waves in Mechanical Metamaterials Through Digital Image Correlation
NASA Astrophysics Data System (ADS)
Schaeffer, Marshall; Trainiti, Giuseppe; Ruzzene, Massimo
2017-02-01
We report on a Digital Image Correlation-based technique for the detection of in-plane elastic waves propagating in structural lattices. The experimental characterization of wave motion in lattice structures is currently of great interest due to its relevance to the design of novel mechanical metamaterials with unique or unusual properties such as strongly directional behaviour, negative refractive indexes, and topologically protected wave motion. Assessment of these functionalities often requires the detection of highly spatially resolved in-plane wavefields, which for reticulated or porous structural assemblies is an open challenge. A Digital Image Correlation approach is implemented that tracks small displacements of the lattice nodes by centring image subsets about the lattice intersections. A high-speed camera records the motion of the points by properly interleaving subsequent frames, thus artificially enhancing the available sampling rate. This, along with an image stitching procedure, enables the capture of a field of view that is sufficiently large for subsequent processing. The transient response is recorded in the form of full wavefields, which are processed to unveil features of wave motion in a hexagonal lattice. Time snapshots and frequency contours in the spatial Fourier domain are compared with numerical predictions to illustrate the accuracy of the recorded wavefields.
Application of digital image correlation for long-distance bridge deflection measurement
NASA Astrophysics Data System (ADS)
Tian, Long; Pan, Bing; Cai, Youfa; Liang, Hui; Zhao, Yan
2013-06-01
Due to its advantages of non-contact, full-field and high-resolution measurement, digital image correlation (DIC) method has gained wide acceptance and found numerous applications in the field of experimental mechanics. In this paper, the application of DIC for real-time long-distance bridge deflection detection in outdoor environments is studied. Bridge deflection measurement using DIC in outdoor environments is more challenging than regular DIC measurements performed under laboratory conditions. First, much more image noise due to variations in ambient light will be presented in the images recorded in outdoor environments. Second, how to select the target area becomes a key factor because long-distance imaging results in a large field of view of the test object. Finally, the image acquisition speed of the camera must be high enough (larger than 100 fps) to capture the real-time dynamic motion of a bridge. In this work, the above challenging issues are addressed and several improvements were made to DIC method. The applicability was demonstrated by real experiments. Experimental results indicate that the DIC method has great potentials in motion measurement in various large building structures.
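The core of a DIC deflection measurement is tracking a subset around the selected target area from frame to frame. As a minimal sketch under assumed conditions (grayscale frames, a target well inside the image), the snippet below uses normalized cross-correlation within a local search window; subset size, search radius, and function names are illustrative, not the authors' settings.

```python
import cv2
import numpy as np

def track_subset(ref_frame, cur_frame, center, half=15, search=40):
    """Track one DIC subset (patch around a target point) between two frames
    using normalized cross-correlation in a local search window."""
    cy, cx = center
    tmpl = ref_frame[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(np.float32)
    y0, x0 = cy - search, cx - search
    window = cur_frame[y0:cy + search + 1, x0:cx + search + 1].astype(np.float32)
    res = cv2.matchTemplate(window, tmpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)          # (x, y) of the best match
    # Pixel displacement of the subset centre between the two frames.
    dy = (max_loc[1] + half) - (cy - y0)
    dx = (max_loc[0] + half) - (cx - x0)
    return dy, dx
```

Converting the pixel displacement to a physical deflection would additionally require the imaging scale (mm per pixel) at the bridge distance.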
Computational analysis of Pelton bucket tip erosion using digital image processing
NASA Astrophysics Data System (ADS)
Shrestha, Bim Prasad; Gautam, Bijaya; Bajracharya, Tri Ratna
2008-03-01
Erosion of hydro turbine components by sand-laden river water is one of the biggest problems in the Himalayas. Even with sediment trapping systems, complete removal of fine sediment from water is impossible and uneconomical; hence most turbine components in Himalayan rivers are exposed to sand-laden water and subject to erosion. Pelton buckets, which are widely used in different hydropower plants, erode under the continuous presence of sand particles in the water. The resulting erosion increases the splitter thickness, which is supposed to be theoretically zero. This increase in splitter thickness gives rise to back-hitting of water, followed by a decrease in turbine efficiency. This paper describes the process of measuring sharp edges such as the bucket tip using digital image processing. An image of each bucket is captured and the bucket is then run for 72 hours; the sand concentration in the water hitting the bucket is closely controlled and monitored. Afterwards, an image of the test bucket is taken under the same conditions, and the process is repeated 10 times. The digital image processing described here encompasses image enhancement in both the spatial and frequency domains, as well as processes that extract attributes from images, up to and including measurement of the splitter's tip. Image processing was done on the MATLAB 6.5 platform. The results show that edge erosion of sharp edges can be detected and quantified accurately, and the erosion profile can be generated, using image processing techniques.
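The abstract does not give the measurement algorithm; purely as an illustration, a splitter-tip width could be read off one calibrated image row from edge positions, as sketched below (the row selection, thresholds, and calibration factor are all hypothetical).

```python
import cv2
import numpy as np

def splitter_tip_width(bucket_gray_u8, row, px_per_mm):
    """Estimate splitter thickness (mm) on one image row from Canny edge positions.

    Illustrative only: assumes the splitter appears as two roughly vertical edges
    crossing the selected row of a calibrated, 8-bit grayscale bucket image.
    """
    edges = cv2.Canny(bucket_gray_u8, 50, 150)
    cols = np.flatnonzero(edges[row])
    if cols.size < 2:
        return None
    return (cols.max() - cols.min()) / px_per_mm
```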
Digital Intraoral Imaging Re-Exposure Rates of Dental Students.
Senior, Anthea; Winand, Curtis; Ganatra, Seema; Lai, Hollis; Alsulfyani, Noura; Pachêco-Pereira, Camila
2018-01-01
A guiding principle of radiation safety is ensuring that radiation dosage is as low as possible while yielding the necessary diagnostic information. Intraoral images taken with conventional dental film have a higher re-exposure rate when taken by dental students compared to experienced staff. The aim of this study was to examine the prevalence of and reasons for re-exposure of digital intraoral images taken by third- and fourth-year dental students in a dental school clinic. At one dental school in Canada, the total number of intraoral images taken by third- and fourth-year dental students, re-exposures, and error descriptions were extracted from patient clinical records for an eight-month period (September 2015 to April 2016). The data were categorized to distinguish between digital images taken with solid-state sensors or photostimulable phosphor plates (PSP). The results showed that 9,397 intraoral images were made, and 1,064 required re-exposure. The most common error requiring re-exposure for bitewing images was an error in placement of the receptor too far mesially or distally (29% for sensors and 18% for PSP). The most common error requiring re-exposure for periapical images was inadequate capture of the periapical area (37% for sensors and 6% for PSP). A retake rate of 11% was calculated, and the common technique errors causing image deficiencies were identified. Educational intervention can now be specifically designed to reduce the retake rate and radiation dose for future patients.
Fully automated three-dimensional microscopy system
NASA Astrophysics Data System (ADS)
Kerschmann, Russell L.
2000-04-01
Tissue-scale structures such as vessel networks are imaged at micron resolution with the Virtual Tissue System (VT System). VT System imaging of cubic millimeters of tissue and other material extends the capabilities of conventional volumetric techniques such as confocal microscopy, and allows for the first time the integrated 2D and 3D analysis of important tissue structural relationships. The VT System eliminates the need for glass slide-mounted tissue sections and instead captures images directly from the surface of a block containing the sample. Tissues are stained en bloc with fluorochrome compounds, embedded in an optically conditioned polymer that suppresses image signals from deep within the block, and serially sectioned for imaging. Thousands of fully registered 2D images are captured digitally and automatically, completely converting tissue samples into blocks of high-resolution information. The resulting multi-gigabyte data sets constitute the raw material for precision visualization and analysis. Cellular function may be seen in a larger anatomical context. VT System technology makes tissue metrics, accurate cell enumeration, and cell cycle analyses possible while preserving the full histologic setting.
Pre-slip and Localized Strain Band - A Study Based on Large Sample Experiment and DIC
NASA Astrophysics Data System (ADS)
Ji, Y.; Zhuo, Y. Q.; Liu, L.; Ma, J.
2017-12-01
The meta-instability stage (MIS) is the stage between a fault reaching the peak differential stress and the onset of the final stress drop. It is the crucial stage during which a fault transitions from "stick" to "slip". Therefore, quantitative analysis of the spatial and temporal characteristics of the deformation field of a fault at the MIS is of great significance both to fault mechanics and to earthquake prediction studies. To this end, a series of stick-slip experiments was conducted using a biaxial servo-controlled pressure machine. Digital images of the sample surfaces were captured by a high-speed camera and processed using a digital image correlation (DIC) method. If images of a rock sample are acquired before and after deformation, DIC can be used to infer the displacement and strain fields. In our study, sample images were captured at a rate of 1000 frames per second with a resolution of 2048 by 2048 pixels. The displacement field, strain field, and fault displacement were calculated from the captured images. Our data show that (1) pre-sliding can be a three-stage process, comprising a relatively long, slow first stage at a slipping rate of 7.9 nm/s, a relatively short, fast second stage at 3 µm/s, and a last stage that lasted only 0.2 s but reached a slipping rate as high as 220 µm/s; and (2) localized strain bands were observed nearly perpendicular to the fault. A possible mechanism is that pre-sliding is distributed heterogeneously along the fault, with relatively well-slipped segments and less-slipped ones that constrain the deformation of the adjacent subregions. The localized deformation bands tend to radiate from the discontinuity points of sliding. While the well-slipped segments compete with the less-slipped ones, the strain bands evolve accordingly.
TU-CD-207-09: Analysis of the 3-D Shape of Patients’ Breast for Breast Imaging and Surgery Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agasthya, G; Sechopoulos, I
2015-06-15
Purpose: Develop a method to accurately capture the 3-D shape of patients’ external breast surface before and during breast compression for mammography/tomosynthesis. Methods: During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3-D breast surface imaging during breast compression and imaging for the cranio-caudal (CC) view on a digital mammography/breast tomosynthesis system. Digital projectors and cameras mounted on tripods were used to acquire 3-D surface images of the breast in three conditions: (a) positioned on the support paddle before compression, (b) during compression by the compression paddle, and (c) the anterior-posterior view with the breast in its natural, unsupported position. The breast was compressed to standard full compression with the compression paddle and a tomosynthesis image was acquired simultaneously with the 3-D surface. The 3-D surface curvature and deformation with respect to the uncompressed surface was analyzed using contours. The 3-D surfaces were voxelized to capture breast shape in a format that can be manipulated for further analysis. Results: A protocol was developed to accurately capture the 3-D shape of patients’ breasts before and during compression for mammography. Using a pair of 3-D scanners, the 50 patient breasts were scanned in three conditions, resulting in accurate representations of the breast surfaces. The surfaces were post-processed, analyzed using contours, and voxelized with 1 mm³ voxels, converting the breast shape into a format that can be easily modified as required. Conclusion: Accurate characterization of the breast curvature and shape for the generation of 3-D models is possible. These models can be used for various applications such as improving breast dosimetry, accurate scatter estimation, conducting virtual clinical trials, and validating compression algorithms. Ioannis Sechopoulos is a consultant for Fuji Medical Systems USA.
Measuring digit lengths with 3D digital stereophotogrammetry: A comparison across methods.
Gremba, Allison; Weinberg, Seth M
2018-05-09
We compared digital 3D stereophotogrammetry to more traditional measurement methods (direct anthropometry and 2D scanning) to capture digit lengths and ratios. The length of the second and fourth digits was measured by each method and the second-to-fourth ratio was calculated. For each digit measurement, intraobserver agreement was calculated for each of the three collection methods. Further, measurements from the three methods were compared directly to one another. Agreement statistics included the intraclass correlation coefficient (ICC) and technical error of measurement (TEM). Intraobserver agreement statistics for the digit length measurements were high for all three methods; ICC values exceeded 0.97 and TEM values were below 1 mm. For digit ratio, intraobserver agreement was also acceptable for all methods, with direct anthropometry exhibiting lower agreement (ICC = 0.87) compared to indirect methods. For the comparison across methods, the overall agreement was high for digit length measurements (ICC values ranging from 0.93 to 0.98; TEM values below 2 mm). For digit ratios, high agreement was observed between the two indirect methods (ICC = 0.93), whereas indirect methods showed lower agreement when compared to direct anthropometry (ICC < 0.75). Digit measurements and derived ratios from 3D stereophotogrammetry showed high intraobserver agreement (similar to more traditional methods) suggesting that landmarks could be placed reliably on 3D hand surface images. While digit length measurements were found to be comparable across all three methods, ratios derived from direct anthropometry tended to be higher than those calculated indirectly from 2D or 3D images. © 2018 Wiley Periodicals, Inc.
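The agreement statistics named above have standard definitions; the sketch below computes the technical error of measurement (TEM) for paired repeat measurements and a two-way random, single-measure ICC(2,1) from an observations-by-raters array. Function names and the input layout are assumptions, not the authors' code.

```python
import numpy as np

def tem(m1, m2):
    """Technical error of measurement for paired repeated measurements."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return float(np.sqrt((d ** 2).sum() / (2 * len(d))))

def icc_2_1(x):
    """Two-way random, single-measure ICC(2,1) from an (n_subjects, k_raters) array."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters/methods
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```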
Operational experience with DICOM for the clinical specialties in the healthcare enterprise
NASA Astrophysics Data System (ADS)
Kuzmak, Peter M.; Dayhoff, Ruth E.
2004-04-01
A number of clinical specialties routinely use images in treating patients, for example ophthalmology, dentistry, cardiology, endoscopy, and surgery. These images are captured by a variety of commercial digital image acquisition systems. The US Department of Veterans Affairs has been working for several years on advancing the use of the Digital Imaging and Communications in Medicine (DICOM) Standard in these clinical specialties. This is an effort that has involved several facets: (1) working with the vendors to ensure that they satisfy existing DICOM requirements, (2) developing interface software to the VistA hospital information system (HIS), (3) field testing DICOM systems, (4) deploying these DICOM interfaces nation-wide to all VA medical centers, (5) working with the healthcare providers using the system, and (6) participating in the DICOM working groups to improve the standard. The VA is now beginning to develop clinical applications that make use of the DICOM interfaces in the clinical specialties. The first of these will be in ophthalmology to remotely screen patients for diabetic retinopathy.
Infrared Sky Imager (IRSI) Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Victor R.
2016-04-01
The Infrared Sky Imager (IRSI) deployed at the Atmospheric Radiation Measurement (ARM) Climate Research Facility is a Solmirus Corp. All Sky Infrared Visible Analyzer. The IRSI is an automatic, continuously operating, digital imaging and software system designed to capture hemispheric sky images and provide time series retrievals of fractional sky cover during both the day and night. The instrument provides diurnal, radiometrically calibrated sky imagery in the mid-infrared atmospheric window and imagery in the visible wavelengths for cloud retrievals during daylight hours. The software automatically identifies cloudy and clear regions at user-defined intervals and calculates fractional sky cover, providing a real-time display of sky conditions.
Multispectral high-resolution hologram generation using orthographic projection images
NASA Astrophysics Data System (ADS)
Muniraj, I.; Guo, C.; Sheridan, J. T.
2016-08-01
We present a new method for synthesizing a digital hologram of three-dimensional (3D) real-world objects from multiple orthographic projection images (OPIs). High-resolution multiple perspectives of the 3D objects (i.e., a two-dimensional elemental image array) are captured under incoherent white light using the synthetic aperture integral imaging (SAII) technique, and their OPIs are obtained. Each OPI is then multiplied by the reference beam and the results are integrated to form a Fourier hologram. Eventually, a modified phase retrieval algorithm (GS/HIO) is applied to reconstruct the hologram. The principle is validated experimentally and the results support the feasibility of the proposed method.
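The paper uses a modified GS/HIO phase retrieval algorithm; as background, the sketch below shows only a generic Gerchberg-Saxton iteration between an object plane and its Fourier (hologram) plane given measured amplitudes in both planes. It is not the authors' modified variant, and all names are illustrative.

```python
import numpy as np

def gerchberg_saxton(fourier_amplitude, object_amplitude, n_iter=100):
    """Basic Gerchberg-Saxton phase retrieval: alternate between the object and
    Fourier planes, enforcing the measured amplitude in each plane."""
    rng = np.random.default_rng(0)
    field = object_amplitude * np.exp(1j * 2 * np.pi * rng.random(object_amplitude.shape))
    for _ in range(n_iter):
        spectrum = np.fft.fft2(field)
        spectrum = fourier_amplitude * np.exp(1j * np.angle(spectrum))  # Fourier constraint
        field = np.fft.ifft2(spectrum)
        field = object_amplitude * np.exp(1j * np.angle(field))          # object constraint
    return np.angle(np.fft.fft2(field))   # retrieved phase in the hologram plane
```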
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range, and aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
Kullgren, A; Lie, A; Tingvall, C
1994-02-01
Vehicle deformations are important sources of information about the performance of safety systems. Photogrammetry has developed greatly in recent years. In this study, modern photogrammetric methods have been used for vehicle deformation analysis. The study describes the equipment for documentation and recording in the field (a semi-metric camera) and a system for photogrammetric measurement of the images in a laboratory environment (personal computer and digitizing tablet). The material used is approximately 500 collected and measured cases. The study shows that the reliability is high and that accuracies of around 15 mm can be achieved even with relatively simple equipment and routines. The effects of further development using video cameras for data capture and digital images for measurements are discussed.
NASA Astrophysics Data System (ADS)
Díaz, L.; Morales, Y.; Torres, C.
2015-01-01
The standard of esthetic dentistry in our society is determined by several factors; one that produces particular dissatisfaction is abnormal tooth color, or color that does not meet the patient's expectations. For this reason, an algorithm has been designed and implemented in MATLAB that captures, digitizes, pre-processes, and analyzes dental images, making it possible to evaluate the degree of bleaching caused by the use of hydrogen peroxide. The samples analyzed were extracted human teeth, which were subjected to different concentrations of hydrogen peroxide to determine whether these products whiten the teeth; different concentrations and time intervals were used to analyze the whitening of the teeth with hydrogen peroxide.
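The abstract does not detail how the degree of whitening is quantified. One plausible measure, sketched below as an assumption (the original analysis was done in MATLAB; this Python sketch is only illustrative), is the change in mean CIELAB lightness of the tooth region between pre- and post-treatment images.

```python
import numpy as np
from skimage import color

def whitening_delta(before_rgb, after_rgb, tooth_mask):
    """Change in mean CIELAB lightness (L*) of the tooth region between a
    pre-treatment and a post-treatment digital image (illustrative analysis)."""
    l_before = color.rgb2lab(before_rgb)[..., 0][tooth_mask].mean()
    l_after = color.rgb2lab(after_rgb)[..., 0][tooth_mask].mean()
    return float(l_after - l_before)   # positive values indicate a lighter tooth
```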
Improved grid-noise removal in single-frame digital moiré 3D shape measurement
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Kofman, Jonathan
2016-11-01
A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, to enable single-frame moiré-based measurement. Experiments using FFT, Discrete Wavelet Transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, minimizing artifacts, and preserving the moiré pattern without blurring and degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and lowest roughness index for all test objects, indicating best grid-noise removal in comparison to the other techniques.
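The sketch below illustrates the general idea of combining a stationary wavelet decomposition with Fourier-domain filtering of the detail subbands. It uses a blanket low-pass mask for simplicity, whereas the authors' SWT-FFT targets the grid-noise peaks specifically; the wavelet, level, and cut-off are illustrative assumptions.

```python
import numpy as np
import pywt

def swt_fft_denoise(image, wavelet="db4", level=2, keep_radius=0.25):
    """Illustrative SWT-FFT grid-noise suppression: SWT decomposition, Fourier-domain
    low-pass filtering of each detail subband, then inverse SWT reconstruction.

    Image dimensions must be divisible by 2**level for pywt.swt2.
    """
    def lowpass(band):
        h, w = band.shape
        spec = np.fft.fftshift(np.fft.fft2(band))
        yy, xx = np.mgrid[0:h, 0:w]
        r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
        spec[r > keep_radius] = 0.0            # suppress high-frequency grid content
        return np.fft.ifft2(np.fft.ifftshift(spec)).real

    coeffs = pywt.swt2(image.astype(float), wavelet, level=level)
    filtered = [(cA, tuple(lowpass(d) for d in details)) for cA, details in coeffs]
    return pywt.iswt2(filtered, wavelet)
```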
Device and Method of Scintillating Quantum Dots for Radiation Imaging
NASA Technical Reports Server (NTRS)
Burke, Eric R. (Inventor); DeHaven, Stanton L. (Inventor); Williams, Phillip A. (Inventor)
2017-01-01
A radiation imaging device includes a radiation source and a microstructured detector comprising a material defining a surface that faces the radiation source. The material includes a plurality of discrete cavities having openings in the surface. The detector also includes a plurality of quantum dots disposed in the cavities. The quantum dots are configured to interact with radiation from the radiation source and to emit visible photons that indicate the presence of radiation. A digital camera and optics may be used to capture images formed by the detector in response to exposure to radiation.
Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitus, B.R.; Goddard, J.S.; Jatko, W.B.
1993-06-01
The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.
Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite
NASA Astrophysics Data System (ADS)
Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi
2018-05-01
The LAPAN-IPB satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. The satellite carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of the image is an important factor for object classification in remote sensing. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to obtain its spectral response and the radiometric response characteristics of every pixel. Pre-flight test data acquired with a variety of line imager settings were used to examine the correlation between input radiance and the digital number of the output images. This input-output correlation is described by a radiance conversion model incorporating the imager settings and radiometric characteristics. The modelling process, from the hardware level through to the normalized radiance formula, is presented and discussed in this paper.
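A typical radiance conversion of this kind is a linear fit of radiance against digital number per band and imager setting; the sketch below assumes that form, and the calibration pairs shown are hypothetical placeholders, not the LAPAN-IPB data.

```python
import numpy as np

# Hypothetical pre-flight calibration pairs for one band and one imager setting:
# integrating-sphere radiance (W m^-2 sr^-1 um^-1) vs. mean digital number (DN).
radiance = np.array([10.0, 25.0, 50.0, 80.0, 120.0])
mean_dn = np.array([210.0, 520.0, 1030.0, 1640.0, 2450.0])

gain, offset = np.polyfit(mean_dn, radiance, deg=1)   # L = gain * DN + offset
print(f"gain = {gain:.5f}, offset = {offset:.3f}")

def dn_to_radiance(dn):
    """Convert image digital numbers to at-sensor radiance with the fitted model."""
    return gain * dn + offset
```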
NASA Astrophysics Data System (ADS)
Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang
2008-03-01
An unwrapping and correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, for processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which agrees much better with human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity, and the architecture for dynamic panoramic image processing has low hardware cost and power consumption. The proposed algorithm is valid.
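A software counterpart of the unwrapping step is a polar-to-Cartesian remapping with bilinear interpolation; the sketch below assumes a known annulus centre and inner/outer radii (the paper's CORDIC-based FPGA pipeline computes the trigonometric mapping in hardware, which this sketch does not reproduce).

```python
import cv2
import numpy as np

def unwrap_annular(pal_image, center, r_inner, r_outer, out_width=1024):
    """Unwrap a panoramic annular image into a rectangular panorama using a
    polar-to-Cartesian mapping with bilinear interpolation."""
    cx, cy = center
    out_height = int(r_outer - r_inner)
    theta = np.linspace(0, 2 * np.pi, out_width, endpoint=False)
    radius = np.linspace(r_inner, r_outer, out_height)
    rr, tt = np.meshgrid(radius, theta, indexing="ij")    # (out_height, out_width)
    map_x = (cx + rr * np.cos(tt)).astype(np.float32)
    map_y = (cy + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(pal_image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```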
Ethical implications of digital images for teaching and learning purposes: an integrative review.
Kornhaber, Rachel; Betihavas, Vasiliki; Baber, Rodney J
2015-01-01
Digital photography has simplified the process of capturing and utilizing medical images. The process of taking high-quality digital photographs has been recognized as efficient, timely, and cost-effective. In particular, the evolution of smartphone and comparable technologies has become a vital component in teaching and learning of health care professionals. However, ethical standards in relation to digital photography for teaching and learning have not always been of the highest standard. The inappropriate utilization of digital images within the health care setting has the capacity to compromise patient confidentiality and increase the risk of litigation. Therefore, the aim of this review was to investigate the literature concerning the ethical implications for health professionals utilizing digital photography for teaching and learning. A literature search was conducted utilizing five electronic databases, PubMed, Embase (Excerpta Medica Database), Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, and Scopus, limited to English language. Studies that endeavored to evaluate the ethical implications of digital photography for teaching and learning purposes in the health care setting were included. The search strategy identified 514 papers of which nine were retrieved for full review. Four papers were excluded based on the inclusion criteria, leaving five papers for final analysis. Three key themes were developed: knowledge deficit, consent and beyond, and standards driving scope of practice. The assimilation of evidence in this review suggests that there is value for health professionals utilizing digital photography for teaching purposes in health education. However, there is limited understanding of the process of obtaining and storage and use of such mediums for teaching purposes. Disparity was also highlighted related to policy and guideline identification and development in clinical practice. Therefore, the implementation of policy to guide practice requires further research.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposure images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
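The sketch below illustrates the sigmoidal-boost-and-fuse idea in the pixel domain for a two-exposure case; the paper itself operates on JPEG macroblocks in the compressed domain, and the gain, midpoint, and saturation threshold here are illustrative assumptions.

```python
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.35):
    """Sigmoidal boosting of a short-exposure image (values in [0, 1]) to lift
    shadow detail before fusion; parameters are illustrative."""
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse_exposures(short_exp, long_exp, saturation_thresh=0.95):
    """Fuse a short and a long exposure: prefer the long exposure (better SNR)
    except where it is saturated, where the boosted short exposure is used."""
    boosted = sigmoid_boost(short_exp)
    weight_long = (long_exp < saturation_thresh).astype(float)
    return weight_long * long_exp + (1.0 - weight_long) * boosted
```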
Matsushima, Kyoji
2008-07-01
Rotational transformation based on coordinate rotation in Fourier space is a useful technique for simulating wave field propagation between nonparallel planes. This technique is characterized by fast computation because the transformation only requires executing a fast Fourier transform twice and a single interpolation. It is proved that the formula of the rotational transformation mathematically satisfies the Helmholtz equation. Moreover, to verify the formulation and its usefulness in wave optics, it is also demonstrated that the transformation makes it possible to reconstruct an image on arbitrarily tilted planes from a wave field captured experimentally by using digital holography.
New technology in dietary assessment: a review of digital methods in improving food record accuracy.
Stumbo, Phyllis J
2013-02-01
Methods for conducting dietary assessment in the United States date back to the early twentieth century. Methods of assessment encompassed dietary records, written and spoken dietary recalls, FFQ using pencil and paper and more recently computer and internet applications. Emerging innovations involve camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records and two mobile phone applications using crowdsourcing. The techniques under development show promise for improving accuracy of food records.
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system capable of capturing images from an input video in real time. The input video can come from a PC, video camcorder, or DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and displayed on the PC. In the second mode, the current captured image from the video camcorder (or DVD player) is processed on the board but displayed on the LCD monitor. The major difference between our system and existing conventional systems is that the image-processing functions are performed on the board instead of the PC (so that the functions can be used for further development on the board). The user controls the operation of the board through a Graphic User Interface (GUI) on the PC. In order to have smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX) technology to create a link between them. For image processing, we developed three main groups of functions: (1) point processing, (2) filtering, and (3) 'others'. Point processing includes rotation, negation, and mirroring. The filter category provides median, adaptive, smoothing, and sharpening filters in the time domain. The 'others' category provides auto-contrast adjustment, edge detection, segmentation, and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. This demonstrated that our system is adequate for real-time image capture. Our system can be used or applied for applications such as medical imaging and video surveillance.
Land-markings: 12 Journeys through 9/11 Living Memorials [DVD
Erika S. Svendsen; Lindsay K. Campbell; Phu Duong
2007-01-01
The Land-markings DVD was created from a multimedia exhibition of 12 digitally authored journeys through more than 700 living memorials nationwide. Land-markings captures stories and images of how we use the landscape as a way to remember people, places, and events. Ranging from single tree plantings, to the creation of new parks, to the restoration of existing forests...
Automated complete slide digitization: a medium for simultaneous viewing by multiple pathologists.
Leong, F J; McGee, J O
2001-11-01
Developments in telepathology robotic systems have evolved the concept of a 'virtual microscope' handling 'digital slides'. Slide digitization is a method of archiving salient histological features in numerical (digital) form. The value and potential of this have begun to be recognized by several international centres. Automated complete slide digitization has application at all levels of clinical practice and will benefit undergraduate, postgraduate, and continuing education. Unfortunately, as the volume of potential data on a histological slide represents a significant problem in terms of digitization, storage, and subsequent manipulation, the reality of virtual microscopy to date has comprised limited views at inadequate resolution. This paper outlines a system refined in the authors' laboratory, which employs a combination of enhanced hardware, image capture, and processing techniques designed for telepathology. The system is able to scan an entire slide at high magnification and create a library of such slides that may exist on an internet server or be distributed on removable media (such as CD-ROM or DVD). A digital slide allows image data manipulation at a level not possible with conventional light microscopy. Combinations of multiple users, multiple magnifications, annotations, and addition of ancillary textual and visual data are now possible. This demonstrates that with increased sophistication, the applications of telepathology technology need not be confined to second opinion, but can be extended on a wider front. Copyright 2001 John Wiley & Sons, Ltd.
An Investigation of Surge in a High-Speed Centrifugal Compressor Using Digital PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Bright, Michelle M.; Skoch, Gary J.
2001-01-01
Compressor stall is a catastrophic breakdown of the flow in a compressor, which can lead to a loss of engine power, large pressure transients in the inlet/nacelle, and engine flameout. The implementation of active or passive strategies for controlling rotating stall and surge can significantly extend the stable operating range of a compressor without substantially sacrificing performance. It is crucial to identify the dynamic changes occurring in the flow field prior to rotating stall and surge in order to control these events successfully. Generally, pressure transducer measurements are made to capture the transient response of a compressor prior to rotating stall. In this investigation, Digital Particle Imaging Velocimetry (DPIV) is used in conjunction with dynamic pressure transducers to capture transient velocity and pressure measurements simultaneously in the nonstationary flow field during compressor surge. DPIV is an instantaneous, planar measurement technique that is ideally suited for studying transient flow phenomena in highspeed turbomachinery and has been used previously to map the stable operating point flow field in the diffuser of a high-speed centrifugal compressor. Through the acquisition of both DPIV images and transient pressure data, the time evolution of the unsteady flow during surge is revealed.
An Investigation of Surge in a High-Speed Centrifugal Compressor Using Digital PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Bright, Michelle M.; Skoch, Gary J.
2002-01-01
Compressor stall is a catastrophic breakdown of the flow in a compressor, which can lead to a loss of engine power, large pressure transients in the inlet/nacelle and engine flameout. The implementation of active or passive strategies for controlling rotating stall and surge can significantly extend the stable operating range of a compressor without substantially sacrificing performance. It is crucial to identify the dynamic changes occurring in the flow field prior to rotating stall and surge in order to successfully control these events. Generally, pressure transducer measurements are made to capture the transient response of a compressor prior to rotating stall. In this investigation, Digital Particle Imaging Velocimetry (DPIV) is used in conjunction with dynamic pressure transducers to simultaneously capture transient velocity and pressure measurements in the non-stationary flow field during compressor surge. DPIV is an instantaneous, planar measurement technique which is ideally suited for studying transient flow phenomena in high speed turbomachinery and has been used previously to successfully map the stable operating point flow field in the diffuser of a high speed centrifugal compressor. Through the acquisition of both DPIV images and transient pressure data, the time evolution of the unsteady flow during surge is revealed.
Fast words boundaries localization in text fields for low quality document images
NASA Astrophysics Data System (ADS)
Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry
2018-04-01
The paper examines the problem of precise word boundary localization in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation and recognition. When capturing an image with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions or glare may occur. Further document processing is complicated by the specifics of documents: layout elements, complex backgrounds, static text, document security elements, and a variety of text fonts. Moreover, the problem of word boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities under the specified restrictions. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited to images of high quality. Methods for text in the wild have an excessively high computational complexity and are thus hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than the task of finding text in natural images. It uses local features, a sliding window and a lightweight neural network in order to achieve an optimal speed-precision ratio. The algorithm takes 12 ms per field running on an ARM processor of a mobile device. The error rate for boundary localization on a test sample of 8000 fields is 0.3
High-resolution digital brain atlases: a Hubble telescope for the brain.
Jones, Edward G; Stone, James M; Karten, Harvey J
2011-05-01
We describe implementation of a method for digitizing at microscopic resolution brain tissue sections containing normal and experimental data and for making the content readily accessible online. Web-accessible brain atlases and virtual microscopes for online examination can be developed using existing computer and internet technologies. Resulting databases, made up of hierarchically organized, multiresolution images, enable rapid, seamless navigation through the vast image datasets generated by high-resolution scanning. Tools for visualization and annotation of virtual microscope slides enable remote and universal data sharing. Interactive visualization of a complete series of brain sections digitized at subneuronal levels of resolution offers fine grain and large-scale localization and quantification of many aspects of neural organization and structure. The method is straightforward and replicable; it can increase accessibility and facilitate sharing of neuroanatomical data. It provides an opportunity for capturing and preserving irreplaceable, archival neurohistological collections and making them available to all scientists in perpetuity, if resources could be obtained from hitherto uninterested agencies of scientific support. © 2011 New York Academy of Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chetvertkov, Mikhail A., E-mail: chetvertkov@wayne
2016-10-15
Purpose: To develop standard (SPCA) and regularized (RPCA) principal component analysis models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients and assess their potential use in adaptive radiation therapy, and for extracting quantitative information for treatment response assessment. Methods: Planning CT images of ten H&N patients were artificially deformed to create “digital phantom” images, which modeled systematic anatomical changes during radiation therapy. Artificial deformations closely mirrored patients’ actual deformations and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between the planning CT (pCT) and synthetic CBCTs (i.e., digital phantoms) and between the pCT and clinical CBCTs. Patient-specific SPCA and RPCA models were built from these synthetic and clinical DVF sets. EigenDVFs (EDVFs) having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Results: Principal component analysis (PCA) models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade PCA’s ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. Conclusions: Leading EDVFs from both PCA approaches have the potential to capture systematic anatomical change during H&N radiotherapy when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the RPCA approach appears to be more reliable at capturing systematic changes, enabling dosimetric consequences to be projected once trends are established early in a treatment course, or based on population models.
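As a rough illustration of the standard-PCA step described in this abstract (not the authors' code, and omitting the regularized variant), per-fraction DVFs can be flattened into rows and decomposed so the leading right singular vectors act as eigenDVFs; the array sizes below are placeholders.

```python
# Minimal sketch of standard PCA over deformation vector fields (DVFs):
# stack per-fraction DVFs as rows, centre them, and take the leading
# right singular vectors as "eigenDVFs". Array sizes are illustrative.
import numpy as np

n_fractions, nx, ny, nz = 35, 32, 32, 16
rng = np.random.default_rng(0)
dvfs = rng.normal(size=(n_fractions, nx * ny * nz * 3))   # flattened 3-component DVFs

mean_dvf = dvfs.mean(axis=0)
centered = dvfs - mean_dvf
# SVD of the centred data matrix gives the principal components directly.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigen_dvfs = Vt[:3]                                        # leading eigenDVFs
explained = (s**2 / np.sum(s**2))[:3]
print("variance explained by the first three eigenDVFs:", explained)
```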
NASA Astrophysics Data System (ADS)
Yang, Chang-Ying Joseph; Huang, Weidong
2009-02-01
Computed radiography (CR) is considered a drop-in addition or replacement for traditional screen-film (SF) systems in digital mammography. Unlike other technologies, CR has the advantage of being compatible with existing mammography units. One of the challenges, however, is to properly configure the automatic exposure control (AEC) on existing mammography units for CR use. Unlike analogue systems, the capture and display of digital CR images is decoupled. The function of AEC changes from ensuring proper and consistent optical density of the captured image on film to balancing image quality with the patient dose needed for CR. One of the preferences when acquiring CR images under AEC is to use the same patient dose as SF systems. The challenge is whether the existing AEC design and calibration processes (most of them proprietary to the X-ray system manufacturers and tailored specifically for SF response properties) can be adapted for CR cassettes, in order to compensate for their response and attenuation differences. This paper describes methods for configuring the AEC of three different mammography unit models to match the patient dose used for CR with that used for a KODAK MIN-R 2000 SF System. Based on phantom test results, these methods provide the dose level under AEC for the CR systems to match the dose of the SF system. These methods can be used in clinical environments that require the acquisition of CR images under AEC at the same dose levels as those used for SF systems.
Electronic data capture and DICOM data management in multi-center clinical trials
NASA Astrophysics Data System (ADS)
Haak, Daniel; Page, Charles-E.; Deserno, Thomas M.
2016-03-01
Providing eligibility, efficacy and security evaluation through quantitative and qualitative disease findings, medical imaging has become increasingly important in clinical trials. Subjects' data are today captured in electronic case report forms (eCRFs), which are offered by electronic data capture (EDC) systems. However, integration of subjects' medical image data into eCRFs is insufficiently supported: neither integration of digital imaging and communications in medicine (DICOM) data nor communication with picture archiving and communication systems (PACS) is possible. This complicates the workflow of the study personnel, especially for studies with distributed data capture across multiple sites. Hence, in this work, a system architecture is presented which connects an EDC system, a PACS and a DICOM viewer via the web access to DICOM objects (WADO) protocol. The architecture is implemented using the open source tools OpenClinica, DCM4CHEE and Weasis. The eCRF forms the primary endpoint for the study personnel, where the subject's image data is stored and retrieved. Background communication with the PACS is completely hidden from the users. Data privacy and consistency are ensured by automatic de-identification of DICOM data and by re-labelling it with context information (e.g. study and subject identifiers), respectively. The system is demonstrated in a clinical trial in which computed tomography (CT) data is captured de-centrally from the subjects and read centrally by a chief radiologist to decide on inclusion of the subjects in the trial. Errors, latency and costs in the EDC workflow are reduced, while a research database is implicitly built up in the background.
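The de-identification and re-labelling step described above can be sketched, for example, with the pydicom package; the tag selection and identifier names below are illustrative assumptions, not the cited OpenClinica/DCM4CHEE configuration.

```python
# Minimal sketch: de-identify a DICOM file and re-label it with trial context
# information (study and subject identifiers), assuming the pydicom package.
# Tag choices and identifier names are illustrative, not those of the cited system.
import pydicom

def deidentify_and_relabel(in_path, out_path, study_id, subject_id):
    ds = pydicom.dcmread(in_path)

    # Remove direct identifiers (a real de-identification profile covers many more tags).
    for tag_name in ("PatientName", "PatientBirthDate", "PatientAddress"):
        if tag_name in ds:
            setattr(ds, tag_name, "")
    ds.remove_private_tags()

    # Re-label with trial context so the PACS/EDC link is preserved.
    ds.PatientID = subject_id
    ds.StudyDescription = study_id

    ds.save_as(out_path)

deidentify_and_relabel("ct_slice.dcm", "ct_slice_anon.dcm",
                       study_id="TRIAL-001", subject_id="SUBJ-042")
```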
Veli, Muhammed; Ozcan, Aydogan
2018-03-27
We present a cost-effective and portable platform based on contact lenses for noninvasively detecting Staphylococcus aureus, which is part of the human ocular microbiome and resides on the cornea and conjunctiva. Using S. aureus-specific antibodies and a surface chemistry protocol that is compatible with human tears, contact lenses are designed to specifically capture S. aureus. After the bacteria capture on the lens and right before its imaging, the captured bacteria are tagged with surface-functionalized polystyrene microparticles. These microbeads provide sufficient signal-to-noise ratio for the quantification of the captured bacteria on the contact lens, without any fluorescent labels, by 3D imaging of the curved surface of each lens using only one hologram taken with a lens-free on-chip microscope. After the 3D surface of the contact lens is computationally reconstructed using rotational field transformations and holographic digital focusing, a machine learning algorithm is employed to automatically count the number of beads on the lens surface, revealing the count of the captured bacteria. To demonstrate its proof-of-concept, we created a field-portable and cost-effective holographic microscope, which weighs 77 g, controlled by a laptop. Using daily contact lenses that are spiked with bacteria, we demonstrated that this computational sensing platform provides a detection limit of ∼16 bacteria/μL. This contact-lens-based wearable sensor can be broadly applicable to detect various bacteria, viruses, and analytes in tears using a cost-effective and portable computational imager that might be used even at home by consumers.
Estimation of saturated pixel values in digital color imaging
Zhang, Xuemei; Brainard, David H.
2007-01-01
Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
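The core of the estimate described above can be sketched as follows: condition the multivariate Normal prior on the unsaturated channels, then take the mean of that conditional distribution truncated below at the saturation level. The prior parameters and pixel values below are illustrative placeholders.

```python
# Minimal sketch: given a multivariate Normal prior over (R, G, B) responses,
# compute the expected value of a saturated channel conditioned on the two
# unsaturated channels and on the fact that the true response exceeds the
# saturation level. Prior parameters are assumed to have been estimated from
# the unsaturated pixels of the same image.
import numpy as np
from scipy.stats import truncnorm

def estimate_saturated_channel(x, sat_idx, mu, Sigma, sat_level):
    idx = np.arange(len(mu))
    o = idx != sat_idx                     # observed (unsaturated) channels
    # Conditional Normal of the saturated channel given the observed channels.
    S_oo_inv = np.linalg.inv(Sigma[np.ix_(o, o)])
    w = Sigma[sat_idx, o] @ S_oo_inv
    mu_c = mu[sat_idx] + w @ (x[o] - mu[o])
    var_c = Sigma[sat_idx, sat_idx] - w @ Sigma[o, sat_idx]
    sd_c = np.sqrt(var_c)
    # Expected value of that conditional, truncated below at the saturation level.
    a = (sat_level - mu_c) / sd_c
    return truncnorm.mean(a, np.inf, loc=mu_c, scale=sd_c)

mu = np.array([120.0, 130.0, 110.0])                       # illustrative prior mean
Sigma = np.array([[400.0, 350.0, 300.0],
                  [350.0, 420.0, 320.0],
                  [300.0, 320.0, 380.0]])                  # illustrative prior covariance
pixel = np.array([255.0, 240.0, 215.0])                    # red channel clipped at 255
print(estimate_saturated_channel(pixel, 0, mu, Sigma, sat_level=255.0))
```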
Blue Beaufort Sea Ice from Operation IceBridge
2017-12-08
Mosaic image of sea ice in the Beaufort Sea created by the Digital Mapping System (DMS) instrument aboard the IceBridge P-3B. The dark area in the middle of the image is open water seen through a lead, or opening, in the ice. Light blue areas are thick sea ice and dark blue areas are thinner ice formed as water in the lead refreezes. Leads are formed when cracks develop in sea ice as it moves in response to wind and ocean currents. DMS uses a modified digital SLR camera that points down through a window in the underside of the plane, capturing roughly one frame per second. These images are then combined into an image mosaic using specialized computer software. Credit: NASA/DMS
Avila, Manuel; Graterol, Eduardo; Alezones, Jesús; Criollo, Beisy; Castillo, Dámaso; Kuri, Victoria; Oviedo, Norman; Moquete, Cesar; Romero, Marbella; Hanley, Zaida; Taylor, Margie
2012-06-01
The appearance of rice grain is a key aspect of quality determination. This analysis is mainly performed by expert analysts through visual observation; however, due to the subjective nature of the analysis, the results may vary among analysts. In order to evaluate the concordance between analysts from Latin-American rice quality laboratories in assessing rice grain appearance from digital images, an inter-laboratory test was performed with ten analysts and images of 90 grains captured with a high-resolution scanner. Rice grains were classified into four categories: translucent, chalky, white belly, and damaged grain. Data were characterized using statistical parameters such as the mode and its frequency, the relative concordance, and the reproducibility parameter kappa. Additionally, a reference image gallery of typical grains for each category was constructed based on mode frequency. Results showed a kappa value of 0.49, corresponding to moderate reproducibility, attributable to subjectivity in the visual analysis of grain images. These results reveal the need to standardize the evaluation criteria among analysts to improve the confidence of the determination of rice grain appearance.
Accuracy evaluation of optical distortion calibration by digital image correlation
NASA Astrophysics Data System (ADS)
Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan
2017-11-01
Due to its convenience of operation, camera calibration based on a planar template is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model remains an open problem; therefore, there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the images before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses under four commonly used distortion models.
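The correction step described above can be sketched with OpenCV: a planar-template calibration yields the distortion parameters, which are then used to undistort the tracked point coordinates before the displacement fields are compared. Board size, file names and point values are placeholders, not the authors' setup.

```python
# Minimal sketch, using OpenCV: calibrate a distortion model from checkerboard
# images, then undistort matched point coordinates so that displacement fields
# measured before and after correction can be compared.
import glob

import cv2
import numpy as np

pattern = (9, 6)                                       # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for fname in glob.glob("calib_*.png"):                 # placeholder image names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

# Undistort measured point coordinates (e.g. DIC subset centres) before
# computing the rigid-body displacement field.
pts = np.array([[[320.0, 240.0]], [[610.5, 410.2]]], np.float32)
pts_corrected = cv2.undistortPoints(pts, K, dist, P=K)
print(rms, pts_corrected.squeeze())
```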
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, which implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, making it suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile devices. In this paper, we selected the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. The descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we apply an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost, and comparisons with a number of other feature matching methods show that our method achieves better performance.
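The registration stage can be sketched with OpenCV as below; note that this uses OpenCV's standard RANSAC inside findHomography rather than the improved variant described in the paper, and the file names are placeholders.

```python
# Minimal sketch of the registration stage: ORB features, Hamming-distance
# matching, and RANSAC homography estimation to align one exposure to a
# reference before fusion.
import cv2
import numpy as np

ref = cv2.imread("exposure_ref.jpg", cv2.IMREAD_GRAYSCALE)
mov = cv2.imread("exposure_short.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(mov, None)

# Brute-force Hamming matching with cross-check, then RANSAC to reject outliers.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

# Warp the moving exposure onto the reference grid; the aligned stack can then
# be fused (the paper uses a stationary wavelet transform for that step).
aligned = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
```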
How Phoenix Creates Color Images (Animation)
NASA Technical Reports Server (NTRS)
2008-01-01
[figure removed for brevity, see original site] This simple animation shows how a color image is made from images taken by Phoenix. The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists. By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NASA Technical Reports Server (NTRS)
1972-01-01
The solar imaging X-ray telescope experiment (designated the S-056 experiment) is described. It will photograph the sun in the far ultraviolet or soft X-ray region. Because of the imaging characteristics of this telescope and the necessity of using special techniques for capturing images on film at these wave lengths, methods were developed for computer processing of the photographs. The problems of image restoration were addressed to develop and test digital computer techniques for applying a deconvolution process to restore overall S-056 image quality. Additional techniques for reducing or eliminating the effects of noise and nonlinearity in S-056 photographs were developed.
NASA Astrophysics Data System (ADS)
Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda
2009-11-01
In the present research, a digital image processing-based automated algorithm was developed to determine the phase height, hold-up, and statistical distribution of drop size in a two-phase water-air system, using pipes at 0°, 10°, and 90° of inclination. Digital images were acquired with a high-speed camera (up to 4500 fps), using an apparatus that consists of three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm. Each pipe is arranged in two sections of 8 m in length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using the image processing program designed in Matlab/Simulink, the captured images were processed to establish the parameters previously mentioned. The image processing algorithm is based on frequency-domain analysis of the source pictures, which allows the phase boundary to be found as the edge between the water and the air, through a Sobel filter that extracts the high-frequency components of the image. The drop size was found by calculating the Feret diameter. Three flow patterns were observed: annular, ST, and ST&MI.
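A minimal sketch of the two image-processing steps mentioned above, assuming OpenCV and SciPy: a Sobel gradient to locate the water-air interface, and the Feret diameter of a segmented drop taken as the maximum pairwise distance between its contour points. Thresholds and the file name are illustrative.

```python
# Minimal sketch: locate the interface from the high-frequency (Sobel) content
# of a frame, and estimate a drop's Feret diameter from its contour.
import cv2
import numpy as np
from scipy.spatial.distance import pdist

frame = cv2.imread("pipe_frame.png", cv2.IMREAD_GRAYSCALE)

# Sobel gradient magnitude highlights the interface between the phases.
gx = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(frame, cv2.CV_64F, 0, 1, ksize=3)
edges = (np.hypot(gx, gy) > 100).astype(np.uint8)

# Interface height per image column: the first strong edge from the top.
interface_rows = np.argmax(edges, axis=0)

# Feret diameter of a segmented drop: maximum pairwise distance of its contour.
drop_mask = (frame > 180).astype(np.uint8)          # placeholder segmentation
contours, _ = cv2.findContours(drop_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if contours:
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    print("Feret diameter (px):", pdist(pts).max())
```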
NASA Astrophysics Data System (ADS)
Yoon, J. S.; Culligan, P. J.; Germaine, J. T.
2003-12-01
Subsurface colloid behavior has recently drawn attention because colloids are suspected of enhancing contaminant transport in groundwater systems. To better understand the processes by which colloids move through the subsurface, and in particular the vadose zone, a new technique that enables real-time visualization of colloid particles as they move through a porous medium has been developed. This visualization technique involves the use of laser-induced fluorescent particles and digital image processing to directly observe particles moving through a porous medium consisting of soda-lime glass beads and water in a transparent experimental box of 10.0 cm × 27.9 cm × 2.38 cm. Colloid particles are simulated using commercially available micron-sized particles that fluoresce under argon-ion laser light. The fluorescent light given off from the particles is captured through a camera filter, which lets through only the emitted wavelength of the colloid particles. The intensity of the emitted light is proportional to the colloid particle concentration. The images of colloid movement are captured by a MagnaFire digital camera, a cooled CCD digital camera produced by Optronics. This camera enables real-time capture of images to a computer, thereby allowing the images to be processed immediately. The images taken by the camera are analyzed by the ImagePro software from Media Cybernetics, which contains a range of counting, sizing, measuring, and image enhancement tools for image processing. Laboratory experiments using the new technique have demonstrated the existence of both irreversible and reversible sites for colloid entrapment during uniform saturated flow in a homogeneous porous medium. These tests have also shown a dependence of colloid entrapment on velocity. Models for colloid transport currently available in the literature have proven to be inadequate predictors for the experimental observations, despite the simplicity of the system studied. To further extend the work, the visualization technique has been developed for use on the geo-centrifuge. The advantage that the geo-centrifuge has for investigating subsurface colloid behavior is the ability to simulate unsaturated transport mechanisms under well simulated field moisture profiles and in shortened periods of time. A series of tests to investigate colloid transport during uniform saturated flow is being used to examine basic scaling laws for colloid transport under enhanced gravity. The paper will describe the new visualization technique, its use in geo-centrifuge testing and observations on scaling relationships for colloid transport during geo-centrifuge experiments. Although the visualization technique has been developed for investigating subsurface colloid behavior, it does have application in other areas of investigation, including the investigation of microbial behavior in the subsurface.
Hasanreisoglu, Murat; Priel, Ethan; Naveh, Lili; Lusky, Moshe; Weinberger, Dov; Benjamini, Yoav; Gaton, Dan D
2013-03-01
One of the leading methods for optic nerve head assessment in glaucoma remains stereoscopic photography. This study compared conventional film and digital stereoscopy in the quantitative and qualitative assessment of the optic nerve head in glaucoma and glaucoma suspect patients. Fifty patients with glaucoma or suspected glaucoma underwent stereoscopic photography of the optic nerve head with a 35-mm color slide film and a digital camera. Photographs/images were presented in random order to 3 glaucoma specialists for independent analysis using a standardized assessment form. Findings for the following parameters were compared among assessors and between techniques: cup/disc (C/D) ratio, state of the optic rim, presence of peripapillary atrophy and appearance of the retinal nerve fiber layer, blood vessels, and lamina cribrosa. The film-based and image-based diagnoses (glaucoma yes/no) were compared as well. Despite high level of agreement across graders using the same method for the horizontal and vertical C/D ratio, (intraclass correlations 0.80 to 0.83), the agreement across graders was much lower for the other parameters using the same method. Similarly the agreement between the findings of the same grader using either method was high for horizontal and vertical C/D ratio, but low for the other parameters. The latter differences were reflected in the disagreement regarding the final diagnosis: The diagnoses differed by technique for each grader in 18% to 46% of eyes, resulting in 38.5% of eyes diagnosed with glaucoma by film photography that "lost" their diagnosis on the digital images, whereas 18.7% of eyes diagnosed as nonglaucomatous by film photography were considered to have glaucoma on the digital images. Although there is consistency between 35-mm film stereoscopy and digital stereoscopy in determining the cup/disc (C/D) ratio, in all other parameters large differences exist, leading to differences in diagnosis. Differences in capturing images between digital and film photography may lead to loss of information and misdiagnosis. Further studies are needed to determine the reliability of the new digital techniques.
McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca
2016-01-01
Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.
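A minimal sketch of the classification step, using scikit-learn's LinearDiscriminantAnalysis on per-sample median traits; the feature table and labels below are random placeholders standing in for the extracted colour, shape and size traits.

```python
# Minimal sketch of an LDA market-grade classifier over per-sample median
# colour, shape and size traits. The data here are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))        # median colour/shape/size traits per sample
y = rng.integers(0, 3, size=120)     # market grade labels (illustrative)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_cal, y_cal)
print("validation accuracy:", lda.score(X_val, y_val))
```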
NASA Astrophysics Data System (ADS)
Belloni, V.; Ravanelli, R.; Nascetti, A.; Di Rita, M.; Mattei, D.; Crespi, M.
2018-05-01
In the last few decades, there has been a growing interest in non-contact methods for full-field displacement and strain measurement. Among such techniques, Digital Image Correlation (DIC) has received particular attention, thanks to its ability to provide this information by comparing digital images of a sample surface before and after deformation. The method is now commonly adopted in civil, mechanical and aerospace engineering, and several companies and research groups have implemented 2D and 3D DIC software. In this work, a review of the status of DIC software is given first. Moreover, a free and open source 2D DIC software package is presented, named py2DIC and developed in Python at the Geodesy and Geomatics Division of DICEA of the University of Rome "La Sapienza"; its potential was evaluated by processing the images captured during tensile tests performed in the Structural Engineering Lab of the University of Rome "La Sapienza" and comparing the results to those obtained using the commercial software Vic-2D developed by Correlated Solutions Inc, USA. The agreement of these results at the one-hundredth-of-a-millimetre level demonstrates the possibility of using this open source software as a valuable 2D DIC tool to measure full-field displacements on the investigated sample surface.
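The basic DIC idea referred to above (tracking a reference subset into the deformed image) can be sketched with normalized cross-correlation in OpenCV; this is a generic illustration, not the py2DIC implementation, and the subset size and file names are assumptions.

```python
# Minimal sketch of the core 2D DIC idea: track a subset of the reference image
# into the deformed image by normalized cross-correlation, giving the
# integer-pixel displacement of that subset.
import cv2
import numpy as np

ref = cv2.imread("specimen_before.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
dfm = cv2.imread("specimen_after.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

y0, x0, half = 200, 150, 15                       # subset centre and half-size
subset = ref[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]

score = cv2.matchTemplate(dfm, subset, cv2.TM_CCOEFF_NORMED)
_, _, _, (x_best, y_best) = cv2.minMaxLoc(score)

u = x_best + half - x0                            # horizontal displacement (px)
v = y_best + half - y0                            # vertical displacement (px)
print("subset displacement (px):", u, v)
```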
Iterative current mode per pixel ADC for 3D SoftChip implementation in CMOS
NASA Astrophysics Data System (ADS)
Lachowicz, Stefan W.; Rassau, Alexander; Lee, Seung-Minh; Eshraghian, Kamran; Lee, Mike M.
2003-04-01
Mobile multimedia communication has rapidly become a significant area of research and development, constantly challenging boundaries on a variety of technological fronts. The processing requirements for the capture, conversion, compression, decompression, enhancement, display, etc. of increasingly higher quality multimedia content place heavy demands even on current ULSI (ultra large scale integration) systems, particularly for mobile applications where area and power are primary considerations. The ADC presented in this paper is designed for a vertically integrated (3D) system comprising two distinct layers bonded together using indium bump technology. The top layer is a CMOS imaging array containing analogue-to-digital converters, and a buffer memory. The bottom layer takes the form of a configurable array processor (CAP), a highly parallel array of soft programmable processors capable of carrying out complex processing tasks directly on data stored in the top plane. This paper presents an ADC scheme for the image capture plane. The analogue photocurrent or sampled voltage is transferred to the ADC via a column or a column/row bus. In the proposed system, an array of analogue-to-digital converters is distributed, so that a one-bit cell is associated with one sensor. The analogue-to-digital converters are algorithmic current-mode converters. Eight such cells are cascaded to form an 8-bit converter. Additionally, each photo-sensor is equipped with a current memory cell, and multiple conversions are performed with scaled values of the photocurrent for colour processing.
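A behavioural sketch of the algorithmic (cyclic) conversion described above is given below; it models only the ideal multiply-by-two, compare and subtract sequence and ignores current-mode circuit non-idealities.

```python
# Minimal behavioural model of an 8-bit algorithmic (cyclic) conversion:
# at each step the residue is doubled and compared with the reference,
# yielding one bit per iteration. Input values are illustrative.
def algorithmic_adc(sample, full_scale, n_bits=8):
    code = 0
    residue = sample
    for _ in range(n_bits):
        residue *= 2.0                 # multiply-by-two stage
        code <<= 1
        if residue >= full_scale:      # comparator decision
            code |= 1
            residue -= full_scale      # subtract the reference
    return code

print(algorithmic_adc(0.37, full_scale=1.0))   # 8-bit code for 0.37 of full scale
```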
Design and development of a smart aerial platform for surface hydrological measurements
NASA Astrophysics Data System (ADS)
Tauro, F.; Pagano, C.; Porfiri, M.; Grimaldi, S.
2013-12-01
Currently available experimental methodologies for surface hydrological monitoring rely on the use of intrusive sensing technologies which tend to provide local rather than distributed information on the flow physics. In this context, drawbacks deriving from the use of invasive instrumentation are partially alleviated by Large Scale Particle Image Velocimetry (LSPIV). LSPIV is based on the use of cameras mounted on masts along river banks which capture images of artificial tracers or naturally occurring objects floating on water surfaces. Images are then georeferenced and the displacement of groups of floating tracers statistically analyzed to reconstruct flow velocity maps at specific river cross-sections. In this work, we mitigate LSPIV spatial limitations and inaccuracies due to image calibration by designing and developing a smart platform which integrates a digital acquisition system and laser calibration units onboard a custom-built quadricopter. The quadricopter is designed to be lightweight, low cost as compared to kits available on the market, highly customizable, and stable to guarantee minimal vibrations during image acquisition. The onboard digital system includes an encased GoPro Hero 3 camera whose axis is constantly kept orthogonal to the water surface by means of an in-house developed gimbal. The gimbal is connected to the quadricopter through a shock absorber damping device which further reduces any residual vibrations. Image calibration is performed through laser units mounted at known distances on the quadricopter landing apparatus. The vehicle can be remotely controlled by the open-source Ardupilot microcontroller. Calibration tests and field experiments are conducted in outdoor environments to assess the feasibility of using the smart platform for acquisition of high quality images of natural streams. Captured images are processed by LSPIV algorithms and average flow velocities are compared to independently acquired flow estimates. Further, videos are presented where the smart platform captures the motion of environmentally-friendly buoyant fluorescent particle tracers floating on the surface of water bodies. Such fluorescent particles are in-house synthesized and their visibility and accuracy in tracing complex flows have been previously tested in laboratory and outdoor settings. Experimental results demonstrate the potential of the methodology in monitoring difficult-to-access and spatially extended environments. Improved accuracy in flow monitoring is accomplished by minimizing image orthorectification and introducing highly visible particle tracers. Future developments will aim at the autonomy of the vehicle through machine learning procedures for unmanned monitoring in the environment.
Bai, Jin-Shun; Cao, Wei-Dong; Xiong, Jing; Zeng, Nao-Hua; Shimizu, Katshyoshi; Rui, Yu-Kui
2013-12-01
In order to explore the feasibility of using image processing technology to diagnose nitrogen status and to predict maize yield, a field experiment with different nitrogen rates and green manure incorporation was conducted. Maize canopy digital images over a range of growth stages were captured with a digital camera. Maize nitrogen status and the relationships between camera-derived image color indices at different growth stages and maize nitrogen status indicators were analyzed. These image color indices at different growth stages were also regressed against maize grain yield at maturity. The results showed that plant nitrogen status for maize was improved by green manure application. The leaf chlorophyll content (SPAD value), aboveground biomass and nitrogen uptake for green manure treatments at different maize growth stages were all higher than those for chemical fertilization treatments. The correlations between spectral indices and plant nitrogen indicators for maize were weaker under green manure application than under chemical fertilization, and the correlation coefficients for green manure application varied with growth stage. The best spectral indices for diagnosing plant nitrogen status after green manure incorporation were the normalized blue value (B/(R+G+B)) at the 12-leaf (V12) stage and the normalized red value (R/(R+G+B)) at the grain-filling (R4) stage, respectively. The coefficients of determination based on linear regression were 0.45 and 0.46 for B/(R+G+B) at the V12 stage and R/(R+G+B) at the R4 stage, respectively, acting as predictors of maize yield response to nitrogen as affected by green manure incorporation. Our findings suggest that digital image techniques could be a potential tool for in-season prediction of nitrogen status and grain yield for maize after green manure incorporation, when suitable growth stages and spectral indices for diagnosis are selected.
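A minimal sketch of the index computation and regression described above; the canopy image handling and the yield values are placeholders, not the study's data.

```python
# Minimal sketch: per-image normalized colour indices from a canopy photograph
# and a simple linear fit of yield against one index. Data are placeholders.
import numpy as np

def normalized_rgb(image):
    """image: H x W x 3 array in R, G, B order; returns the three normalized indices."""
    rgb = image.reshape(-1, 3).astype(float)
    means = rgb.mean(axis=0)
    return means / means.sum()          # R/(R+G+B), G/(R+G+B), B/(R+G+B)

# Illustrative regression of yield on the normalized blue index at the V12 stage.
b_index = np.array([0.28, 0.30, 0.26, 0.31, 0.27, 0.29])
yield_t_ha = np.array([8.1, 7.4, 9.0, 7.1, 8.7, 7.9])
slope, intercept = np.polyfit(b_index, yield_t_ha, 1)
r2 = np.corrcoef(b_index, yield_t_ha)[0, 1] ** 2
print(f"yield ~ {slope:.1f} * B/(R+G+B) + {intercept:.1f},  R^2 = {r2:.2f}")
```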
Wide-field computational imaging of pathology slides using lens-free on-chip microscopy.
Greenbaum, Alon; Zhang, Yibo; Feizi, Alborz; Chung, Ping-Luen; Luo, Wei; Kandukuri, Shivani R; Ozcan, Aydogan
2014-12-17
Optical examination of microscale features in pathology slides is one of the gold standards to diagnose disease. However, the use of conventional light microscopes is partially limited owing to their relatively high cost, bulkiness of lens-based optics, small field of view (FOV), and requirements for lateral scanning and three-dimensional (3D) focus adjustment. We illustrate the performance of a computational lens-free, holographic on-chip microscope that uses the transport-of-intensity equation, multi-height iterative phase retrieval, and rotational field transformations to perform wide-FOV imaging of pathology samples with comparable image quality to a traditional transmission lens-based microscope. The holographically reconstructed image can be digitally focused at any depth within the object FOV (after image capture) without the need for mechanical focus adjustment and is also digitally corrected for artifacts arising from uncontrolled tilting and height variations between the sample and sensor planes. Using this lens-free on-chip microscope, we successfully imaged invasive carcinoma cells within human breast sections, Papanicolaou smears revealing a high-grade squamous intraepithelial lesion, and sickle cell anemia blood smears over a FOV of 20.5 mm(2). The resulting wide-field lens-free images had sufficient image resolution and contrast for clinical evaluation, as demonstrated by a pathologist's blinded diagnosis of breast cancer tissue samples, achieving an overall accuracy of ~99%. By providing high-resolution images of large-area pathology samples with 3D digital focus adjustment, lens-free on-chip microscopy can be useful in resource-limited and point-of-care settings. Copyright © 2014, American Association for the Advancement of Science.
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE <90%), as well as projected low noise (<2h+) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NVCMOS HD camera provides a substantial reduction in size, weight, and power (SWaP) , ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Acquisition of thin coronal sectional dataset of cadaveric liver.
Lou, Li; Liu, Shu Wei; Zhao, Zhen Mei; Tang, Yu Chun; Lin, Xiang Tao
2014-04-01
To obtain a thin coronal sectional anatomic dataset of the liver using a digital freezing-milling technique. The upper abdomen of one Chinese adult cadaver was selected as the specimen. After CT and MRI examinations verified the absence of liver lesions, the specimen was embedded in gelatin in an upright position and frozen under profound hypothermia, and then serially sectioned from anterior to posterior, layer by layer, with a digital milling machine in the freezing chamber. The sequential images were captured by means of a digital camera and the dataset was imported to an imaging workstation. The thin serial sections of the liver added up to 699 layers, each 0.2 mm thick. The shape, location, structure, intrahepatic vessels and adjacent structures of the liver were displayed clearly on each layer of the coronal sectional slices. CT and MR images through the body were obtained at 1.0 and 3.0 mm intervals, respectively. The methodology reported here is an adaptation of previously described milling methods and represents a new data acquisition approach for sectional anatomy. The thin coronal sectional anatomic dataset of the liver obtained by this technique is of high precision and good quality.
A practical introduction to skeletons for the plant sciences
Bucksch, Alexander
2014-01-01
Before the availability of digital photography resulting from the invention of charge-coupled devices in 1969, the measurement of plant architecture was a manual process either on the plant itself or on traditional photographs. The introduction of cheap digital imaging devices for the consumer market enabled the wide use of digital images to capture the shape of plant networks such as roots, tree crowns, or leaf venation. Plant networks contain geometric traits that can establish links to genetic or physiological characteristics, support plant breeding efforts, drive evolutionary studies, or serve as input to plant growth simulations. Typically, traits are encoded in shape descriptors that are computed from imaging data. Skeletons are one class of shape descriptors that are used to describe the hierarchies and extent of branching and looping plant networks. While the mathematical understanding of skeletons is well developed, their application within the plant sciences remains challenging because the quality of the measurement depends partly on the interpretation of the skeleton. This article is meant to bridge the skeletonization literature in the plant sciences and related technical fields by discussing best practices for deriving diameters and approximating branching hierarchies in a plant network. PMID:25202645
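A minimal sketch of deriving a skeleton and local branch diameters from a binary image of a plant network, assuming scikit-image; the file name is a placeholder and the branching hierarchy itself is not computed here.

```python
# Minimal sketch: skeletonize a binary plant-network image and estimate local
# branch diameters from the distance transform along the skeleton.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.io import imread
from skimage.morphology import skeletonize

mask = imread("root_network.png", as_gray=True) > 0.5   # binary plant network
skeleton = skeletonize(mask)

# Distance to the nearest background pixel approximates the local radius.
radius = distance_transform_edt(mask)
local_diameters = 2.0 * radius[skeleton]
print("median branch diameter (px):", np.median(local_diameters))
```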
An online detection system for aggregate sizes and shapes based on digital image processing
NASA Astrophysics Data System (ADS)
Yang, Jianhong; Chen, Sijia
2017-02-01
Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape based on a digital camera with a charge-coupled device, and subsequent digital image processing, have been developed to overcome these problems. The system captures images of aggregates while falling and flat lying. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, having good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt%, and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates, and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.
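One plausible sketch of the per-particle measurements such a system might compute, assuming OpenCV: equivalent-area diameter as the size measure and the elongation of the minimum-area rectangle as a simple shape measure. The threshold, area cut-off and file name are illustrative, not the cited system's parameters.

```python
# Minimal sketch: per-particle size and shape measures from a binarized frame
# of falling aggregates, via contour analysis.
import cv2
import numpy as np

frame = cv2.imread("aggregates.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
sizes, elongations = [], []
for c in contours:
    area = cv2.contourArea(c)
    if area < 50:                                     # ignore noise specks
        continue
    sizes.append(np.sqrt(4.0 * area / np.pi))         # equivalent-area diameter
    (_, _), (w, h), _ = cv2.minAreaRect(c)
    elongations.append(max(w, h) / max(min(w, h), 1e-6))

print("particles:", len(sizes))
print("median diameter (px):", np.median(sizes))
print("median elongation:", np.median(elongations))
```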
3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform
NASA Astrophysics Data System (ADS)
Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul
2018-03-01
This paper describes an approach to realizing the three-dimensional surfaces of objects with cylinder-based shapes, covering the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, by applying several digital image processing algorithms to the two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surfaces of the objects and is practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions are supported by an analysis in which the maximum percent error obtained from the computation is approximately 1.4% for the height and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
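The tomography-style reconstruction of a single cross-section from the 36 angular views can be illustrated with a filtered back-projection (inverse Radon transform) in scikit-image; the sinogram below is synthetic and stands in for the projections extracted from the captured images.

```python
# Minimal sketch: recover one horizontal cross-section from 36 projections,
# 10 degrees apart, by filtered back-projection. The sinogram is synthetic.
import numpy as np
from skimage.transform import iradon, radon

# Synthetic cross-section (a disc) stands in for one image row of the object.
size = 128
yy, xx = np.mgrid[:size, :size]
section = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)

angles = np.arange(0, 360, 10)                 # 36 projections, 10 degrees apart
sinogram = radon(section, theta=angles)        # forward model of the captures
reconstruction = iradon(sinogram, theta=angles)
print("mean absolute reconstruction error:", np.abs(reconstruction - section).mean())
```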
Gabara, Grzegorz; Sawicki, Piotr
2018-03-06
The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.
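The accuracy figure quoted above is an RMS of the differences between the point-cloud-derived and geodetically measured track parameters; a minimal sketch of that comparison, with placeholder values, is:

```python
# Minimal sketch: RMS difference between photogrammetric (point-cloud) gauge
# values and direct geodetic measurements over defined cross-sections.
import numpy as np

gauge_pointcloud = np.array([1435.2, 1435.6, 1434.9, 1435.4])   # mm (placeholders)
gauge_geodetic   = np.array([1435.0, 1435.3, 1435.1, 1435.2])   # mm (placeholders)

diff = gauge_pointcloud - gauge_geodetic
rms = np.sqrt(np.mean(diff ** 2))
print(f"RMS gauge difference: {rms:.2f} mm")
```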
2017-07-11
Commercial businesses and scientific researchers have a new capability to capture digital imagery of Earth, thanks to MUSES: the Multiple User System for Earth Sensing facility. This platform on the outside of the International Space Station is capable of holding four different payloads, ranging from high-resolution digital cameras to hyperspectral imagers, which will support Earth science observations in agricultural awareness, air quality, disaster response, fire detection, and many other research topics. MUSES program manager Mike Soutullo explains the system and its unique features including the ability to change and upgrade payloads using the space station’s Canadarm2 and Special Purpose Dexterous Manipulator. For more information about MUSES, please visit: https://www.nasa.gov/mission_pages/station/research/news/MUSES For more on ISS science, https://www.nasa.gov/mission_pages/station/research/index.html
Earth Observation as seen by Expedition Two crew
2001-04-16
ISS002-E-5656 (16 April 2001) --- Extreme southern topography of California, including inland portions of the San Diego area were captured in this digital still camera's image from the International Space Station's Expedition Two crew members. The previous frame (5655) and this one were both recorded with an 800mm lens, whereas the succeeding frame (5657) was shot with a 105mm lens.
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
Optical head tracking for functional magnetic resonance imaging using structured light.
Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D
2008-07-01
An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiducial marker on the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 µm for translations and 0.1 deg for rotations.
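The 6-DoF estimation step can be sketched, under the assumption of matched 3D feature positions between frames, with the SVD-based (Kabsch) rigid-body fit below; this is a generic illustration, not the cited system's algorithm.

```python
# Minimal sketch: estimate rotation R and translation t between two sets of
# matched 3D points (Kabsch algorithm). Point sets are synthetic placeholders.
import numpy as np

def rigid_transform(P, Q):
    """Return R (3x3) and t (3,) such that R @ p + t ~= q for matched rows of P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

P = np.random.default_rng(1).normal(size=(20, 3))          # pattern points, frame 1
true_R, true_t = np.eye(3), np.array([0.1, -0.05, 0.02])
Q = P @ true_R.T + true_t                                   # same points, frame 2
R, t = rigid_transform(P, Q)
print(np.allclose(R, true_R, atol=1e-6), t)
```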
Siegel, Nisan; Rosen, Joseph; Brooker, Gary
2013-10-01
Recent advances in Fresnel incoherent correlation holography (FINCH) increase the signal-to-noise ratio in hologram recording by interference of images from two diffractive lenses with focal lengths close to the image plane. Holograms requiring short reconstruction distances are created that reconstruct poorly with existing Fresnel propagation methods. Here we show a dramatic improvement in reconstructed fluorescent images when a 2D Hamming window function is substituted for the disk window typically used to bound the impulse response in the Fresnel propagation. Greatly improved image contrast and quality are shown for simulated and experimentally determined FINCH holograms using a 2D Hamming window without significant loss in lateral or axial resolution.
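A generic sketch of Fresnel propagation with a windowed impulse response is given below, with a 2D Hamming window (the outer product of two 1D Hamming windows) replacing a hard disk bound on the kernel; wavelength, pixel pitch and reconstruction distance are placeholders, and this is not the authors' FINCH reconstruction code.

```python
# Minimal sketch: FFT-based Fresnel propagation in which the quadratic-phase
# impulse response is bounded by a 2D Hamming window rather than a hard disk.
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, z):
    n = hologram.shape[0]
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    kernel = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    window = np.outer(np.hamming(n), np.hamming(n))   # smooth bound on the kernel
    kernel *= window
    # Convolution of the hologram with the windowed impulse response via FFT.
    field = np.fft.ifft2(np.fft.fft2(hologram) * np.fft.fft2(np.fft.ifftshift(kernel)))
    return np.abs(field)

rng = np.random.default_rng(2)
hologram = rng.normal(size=(512, 512)) + 1j * rng.normal(size=(512, 512))  # placeholder
image = fresnel_reconstruct(hologram, wavelength=532e-9, pitch=6.5e-6, z=0.02)
```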
A generic FPGA-based detector readout and real-time image processing board
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant
2016-07-01
For space-based astronomical observations, it is important to have a mechanism to capture the digital output from the standard detector for further on-board analysis and storage. We have developed a generic (application- wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time image processor system (on-chip) which can be used for various image processing tasks. The FPGA board is applied as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense) - instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in the future balloon and possible space flights.
Capturing migration phenology of terrestrial wildlife using camera traps
Tape, Ken D.; Gustine, David D.
2014-01-01
Remote photography, using camera traps, can be an effective and noninvasive tool for capturing the migration phenology of terrestrial wildlife. We deployed 14 digital cameras along a 104-kilometer longitudinal transect to record the spring migrations of caribou (Rangifer tarandus) and ptarmigan (Lagopus spp.) in the Alaskan Arctic. The cameras recorded images at 15-minute intervals, producing approximately 40,000 images, including 6685 caribou observations and 5329 ptarmigan observations. The northward caribou migration was evident because the median caribou observation (i.e., herd median) occurred later with increasing latitude; average caribou migration speed also increased with latitude (r2 = .91). Except at the northernmost latitude, a northward ptarmigan migration was similarly evident (r2 = .93). Future applications of this method could be used to examine the conditions proximate to animal movement, such as habitat or snow cover, that may influence migration phenology.
Digital Earth Watch And Picture Post Network: Measuring The Environment Through Digital Images
NASA Astrophysics Data System (ADS)
Schloss, A. L.; Beaudry, J.; Pickle, J.; Carrera, F.
2012-12-01
Digital Earth Watch (DEW) involves individuals, schools, organizations and communities in a systematic monitoring project of their local environment, especially vegetation health. The program offers people the means to join the Picture Post network and to study and analyze their own findings using DEW software. A Picture Post is an easy-to-use and inexpensive platform for repeatedly taking digital photographs as a standardized set of images of the entire 360° landscape, which then can be shared over the Internet on the Picture Post website. This simple concept has the potential to create a wealth of information and data on changing environmental conditions, which is important for a society grappling with the effects of environmental change. Picture Posts may be added by anyone interested in monitoring a particular location. The value of a Picture Post is in the commitment of participants to take repeated photographs - monthly, weekly, or even daily - to build up a long-term record over many years. This poster will show examples of Picture Post pictures being used for monitoring and research applications, and a DEW mobile app for capturing repeat digital photographs at a virtual post. We invite individuals, schools, informal education centers, groups and communities to join. [Figure captions: a new post and its website; creating a virtual post using the mobile app.]
Automated facial acne assessment from smartphone images
NASA Astrophysics Data System (ADS)
Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas
2018-02-01
A smartphone mobile medical application is presented, that provides analysis of the health of skin on the face using a smartphone image and cloud-based image processing techniques. The mobile application employs the use of the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: those that are papules and those that are pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.
Crew Earth Observations (CEO) taken during Expedition Five on the ISS
2002-08-18
ISS005-E-10000 (18 August 2002) --- This is the first of two images recently released by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center, showing the devastating European flooding in August. The images were captured by astronauts using a digital still camera onboard the International Space Station (ISS). The photographs show flooding around the Danube Bend area just north of Budapest near the city of Vác, Hungary. The flood peaked in Budapest the day after this photo was made, on August 19, at about 8.5 meters (28 feet), exceeding the previous 1965 flood record. This image shows the waters inundating farmland in the flood plain. Image no. ISS005-E-10926 shows the area four days later.
Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array.
Phillips, Zachary F; D'Ambrosio, Michael V; Tian, Lei; Rulison, Jared J; Patel, Hurshal S; Sadras, Nitin; Gande, Aditya V; Switz, Neil A; Fletcher, Daniel A; Waller, Laura
2015-01-01
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope--a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.
Roguev, Assen; Ryan, Colm J; Xu, Jiewei; Colson, Isabelle; Hartsuiker, Edgar; Krogan, Nevan
2018-02-01
This protocol describes computational analysis of genetic interaction screens, ranging from data capture (plate imaging) to downstream analyses. Plate imaging approaches using both digital cameras and office flatbed scanners are included, along with a protocol for the extraction of colony size measurements from the resulting images. A commonly used genetic interaction scoring method, calculation of the S-score, is discussed. These methods require minimal computer skills, but some familiarity with MATLAB and Linux/Unix is a plus. Finally, an outline for using clustering and visualization software for analysis of the resulting data sets is provided.
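The colony-size scoring step lends itself to a short numerical illustration. The sketch below computes a t-like interaction score from replicate colony-size measurements; it only outlines the general idea behind S-score-type statistics and is not the exact formulation referenced in the protocol. All array names and values are hypothetical.

```python
# Hedged sketch of a t-like interaction score from colony-size measurements.
# exp_sizes / ctrl_sizes are hypothetical replicate colony sizes for one
# double-mutant strain and its control, respectively.
import numpy as np

def interaction_score(exp_sizes, ctrl_sizes, min_var=0.01):
    exp_sizes = np.asarray(exp_sizes, dtype=float)
    ctrl_sizes = np.asarray(ctrl_sizes, dtype=float)
    n_e, n_c = len(exp_sizes), len(ctrl_sizes)
    mean_diff = exp_sizes.mean() - ctrl_sizes.mean()
    # Variances are floored so tiny replicate spread does not inflate the score.
    var_e = max(exp_sizes.var(ddof=1), min_var)
    var_c = max(ctrl_sizes.var(ddof=1), min_var)
    return mean_diff / np.sqrt(var_e / n_e + var_c / n_c)

# Negative scores suggest aggravating interactions, positive ones alleviating.
print(interaction_score([0.62, 0.58, 0.60], [0.95, 1.01, 0.99]))
```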
X-ray online detection for laser welding T-joint of Al-Li alloy
NASA Astrophysics Data System (ADS)
Zhan, Xiaohong; Bu, Xing; Qin, Tao; Yu, Haisong; Chen, Jie; Wei, Yanhong
2017-05-01
In order to detect weld defects in laser welding of T-joints of Al-Li alloy, a real-time X-ray imaging system is set up for quality inspection. Experiments on the real-time radiography procedure of the weldment are conducted using this system. A twin fillet weld seam radiographic arrangement is designed according to the structural characteristics of the weldment. The critical parameters, including magnification, focal length, tube current and tube voltage, are studied to acquire high quality weld images. Through theoretical and data analysis, optimum parameters are established and the expected digital images are captured, which is conducive to automatic defect detection.
Dental impressions using 3D digital scanners: virtual becomes reality.
Birnbaum, Nathan S; Aaronson, Heidi B
2008-10-01
The technologies that have made the use of three-dimensional (3D) digital scanners an integral part of many industries for decades have been improved and refined for application to dentistry. Since the introduction of the first dental impressioning digital scanner in the 1980s, development engineers at a number of companies have enhanced the technologies and created in-office scanners that are increasingly user-friendly and able to produce precisely fitting dental restorations. These systems are capable of capturing 3D virtual images of tooth preparations, from which restorations may be fabricated directly (ie, CAD/CAM systems) or fabricated indirectly (ie, dedicated impression scanning systems for the creation of accurate master models). The use of these products is increasing rapidly around the world and presents a paradigm shift in the way in which dental impressions are made. Several of the leading 3D dental digital scanning systems are presented and discussed in this article.
Method for the visualization of landform by mapping using low altitude UAV application
NASA Astrophysics Data System (ADS)
Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William
2018-05-01
Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are rapidly advancing mapping technology, and the significance of and need for digital landform mapping grow with each year. In this study, a mapping workflow is applied to obtain two different input data sets, an orthophoto and a DSM. Low Altitude Aerial Photography (LAAP) is captured with a low-altitude UAV (drone) carrying a fixed high-resolution camera, while digital photogrammetric processing in PhotoScan is applied for cartographic data collection. Data processing through photogrammetric and orthomosaic workflows is the main application. High image quality is essential for the effectiveness and quality of the usual mapping outputs, such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM) and orthoimages. The accuracy of the Ground Control Points (GCPs), the flight altitude and the resolution of the camera are essential for a good quality DEM and orthophoto.
Forensic characterization of camcorded movies: digital cinema vs. celluloid film prints
NASA Astrophysics Data System (ADS)
Rolland-Nevière, Xavier; Chupeau, Bertrand; Doërr, Gwenaël; Blondé, Laurent
2012-03-01
Digital camcording in the premises of cinema theaters is the main source of pirate copies of newly released movies. To trace such recordings, watermarking systems are exploited so that each projection is unique and thus identifiable. The forensic analysis to recover these marks is different for digital and legacy cinemas. To avoid running both detectors, a reliable oracle discriminating between cams originating from analog or digital projections is required. This article details a classification framework relying on three complementary features: the spatial uniformity of the screen illumination, the vertical (in)stability of the projected image, and the luminance artifacts due to the interplay between the display and acquisition devices. The system has been tuned with cams captured in a controlled environment and benchmarked against a medium-sized dataset (61 samples) composed of real-life pirate cams. Reported experimental results demonstrate that such a framework yields over 80% classification accuracy.
Metasurface optics for full-color computational imaging.
Colburn, Shane; Zhan, Alan; Majumdar, Arka
2018-02-01
Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
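Because the point spread function is spectrally invariant, reconstruction with a single digital filter amounts to one deconvolution applied to the captured image. A minimal sketch using standard Wiener deconvolution is shown below; it assumes a measured PSF and a hand-tuned noise parameter and is not the authors' specific filter.

```python
# Wiener-deconvolution sketch: recover an image from a blurred capture given a
# single, wavelength-invariant PSF. `captured` and `psf` are assumed to be 2D
# float arrays of the same shape (PSF centered); `k` is a hand-tuned noise term.
import numpy as np

def wiener_deconvolve(captured, psf, k=1e-2):
    H = np.fft.fft2(np.fft.ifftshift(psf))           # PSF -> transfer function
    G = np.fft.fft2(captured)
    W = np.conj(H) / (np.abs(H) ** 2 + k)             # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```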
Wave analysis of a plenoptic system and its applications
NASA Astrophysics Data System (ADS)
Shroff, Sapna A.; Berkner, Kathrin
2013-03-01
Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.
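The a posteriori refocusing described above is often illustrated, in the ray-optics limit, by shifting and summing the sub-aperture (multi-angle) images extracted from the lenslet data. The sketch below shows that shift-and-add idea only; it does not reproduce the wave-optics (diffraction-aware) analysis that is the subject of the paper, and the array layout is an assumption.

```python
# Shift-and-add refocusing sketch for a light field given as sub-aperture views.
# `views[u, v]` is the 2D image seen from angular position (u, v); `alpha` sets
# the synthetic focal plane (alpha = 0 keeps the captured focus).
import numpy as np

def refocus(views, alpha):
    n_u, n_v, h, w = views.shape
    out = np.zeros((h, w))
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            # Integer-pixel shifts for brevity; real pipelines interpolate.
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (n_u * n_v)
```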
Lensfree microscopy on a cellphone
Tseng, Derek; Mudanyali, Onur; Oztoprak, Cetin; Isikman, Serhan O.; Sencan, Ikbal; Yaglidere, Oguzhan; Ozcan, Aydogan
2010-01-01
We demonstrate lensfree digital microscopy on a cellphone. This compact and light-weight holographic microscope installed on a cellphone does not utilize any lenses, lasers or other bulky optical components and it may offer a cost-effective tool for telemedicine applications to address various global health challenges. Weighing ~38 grams (<1.4 ounces), this lensfree imaging platform can be mechanically attached to the camera unit of a cellphone where the samples are loaded from the side, and are vertically illuminated by a simple light-emitting diode (LED). This incoherent LED light is then scattered from each micro-object to coherently interfere with the background light, creating the lensfree hologram of each object on the detector array of the cellphone. These holographic signatures captured by the cellphone permit reconstruction of microscopic images of the objects through rapid digital processing. We report the performance of this lensfree cellphone microscope by imaging various sized micro-particles, as well as red blood cells, white blood cells, platelets and a waterborne parasite (Giardia lamblia). PMID:20445943
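Holographic reconstruction of this kind is commonly performed by numerically back-propagating the recorded intensity to the object plane with an angular-spectrum kernel. The snippet below sketches that single propagation step; the wavelength, pixel pitch, and distance are illustrative values, and twin-image suppression and other refinements used in practice are omitted. It is not the authors' reconstruction code.

```python
# Angular-spectrum back-propagation sketch for lensfree hologram reconstruction.
# `hologram` is a 2D array of recorded intensity; distances are in meters.
import numpy as np

def backpropagate(hologram, wavelength=530e-9, pixel=2.2e-6, z=1.0e-3):
    field = np.sqrt(np.maximum(hologram, 0)).astype(complex)  # crude amplitude
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pixel),
                         np.fft.fftfreq(ny, d=pixel))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    prop = np.exp(-2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    prop[arg < 0] = 0                     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * prop)
```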
Digital Compositing Techniques for Coronal Imaging (Invited review)
NASA Astrophysics Data System (ADS)
Espenak, F.
2000-04-01
The solar corona exhibits a huge range in brightness which cannot be captured in any single photographic exposure. Short exposures show the bright inner corona and prominences, while long exposures reveal faint details in equatorial streamers and polar brushes. For many years, radial gradient filters and other analog techniques have been used to compress the corona's dynamic range in order to study its morphology. Such techniques demand perfect pointing and tracking during the eclipse, and can be difficult to calibrate. In the past decade, the speed, memory and hard disk capacity of personal computers have rapidly increased as prices continue to drop. It is now possible to perform sophisticated image processing of eclipse photographs on commercially available CPU's. Software programs such as Adobe Photoshop permit combining multiple eclipse photographs into a composite image which compresses the corona's dynamic range and can reveal subtle features and structures. Algorithms and digital techniques used for processing 1998 eclipse photographs will be discussed which are equally applicable to the recent eclipse of 1999 August 11.
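Digitally, the compositing idea can be approximated by blending registered short and long exposures with weights that favor each exposure where it is well exposed, followed by an unsharp-mask style enhancement to reveal coronal structure. The sketch below is a generic illustration under those assumptions, not the Photoshop procedure used by the author.

```python
# Sketch: blend registered short/long eclipse exposures, then enhance fine
# coronal structure. Inputs are co-registered float images scaled to [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def composite(short_exp, long_exp):
    # Weight each exposure by how far it is from clipping at 0 or 1.
    w_short = 1.0 - 2.0 * np.abs(short_exp - 0.5)
    w_long = 1.0 - 2.0 * np.abs(long_exp - 0.5)
    blend = (w_short * short_exp + w_long * long_exp) / (w_short + w_long + 1e-6)
    # Unsharp mask to bring out streamers against the radial brightness falloff.
    blurred = gaussian_filter(blend, sigma=25)
    return np.clip(blend + 0.8 * (blend - blurred), 0.0, 1.0)
```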
Image Analysis Technique for Material Behavior Evaluation in Civil Structures.
Speranzini, Emanuela; Marsili, Roberto; Moretti, Michele; Rossi, Gianluca
2017-07-08
The article presents a hybrid monitoring technique for the measurement of the deformation field. The goal is to obtain information about crack propagation in existing structures, for the purpose of monitoring their state of health. The measurement technique is based on the capture and analysis of a digital image set. Special markers that can be removed without damaging existing structures, such as historical masonry, were applied to the surface of the structures. The digital image analysis was done using software specifically designed in Matlab to track the markers and determine the evolution of the deformation state. The method can be used on any type of structure but is particularly suitable when the surface of the structure must not be damaged. A series of experiments carried out on masonry walls of the Oliverian Museum (Pesaro, Italy) and Palazzo Silvi (Perugia, Italy) allowed validation of the elaborated procedure by comparing its results with those derived from traditional measuring techniques.
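Marker tracking of this kind is commonly implemented with template matching between frames. The sketch below follows a single marker using normalized cross-correlation from scikit-image; it stands in for the purpose-built Matlab software described in the article, and the window size and variable names are assumptions.

```python
# Sketch: track one marker between two frames with normalized cross-correlation.
# `frame0`, `frame1` are grayscale float arrays; (y0, x0) is the marker position
# in frame0 and `half` the template half-size (values are illustrative).
import numpy as np
from skimage.feature import match_template

def track_marker(frame0, frame1, y0, x0, half=15):
    template = frame0[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    score = match_template(frame1, template, pad_input=True)
    y1, x1 = np.unravel_index(np.argmax(score), score.shape)
    return (y1, x1), (y1 - y0, x1 - x0)   # new position and displacement
```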
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-03-04
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture rapid phenomena at both high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera.
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three dimensional digital image correlation (3D-DIC) has been widely used by industry to measure the 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, similarly to the conventional 3D-DIC system, the center of the two views stands in the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated by the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
Mexican sign language recognition using normalized moments and artificial neural networks
NASA Astrophysics Data System (ADS)
Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita
2014-09-01
This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static signs from the MSL, each recorded in 5 different versions, was captured using a digital camera under incoherent lighting conditions. Digital image processing was used to segment the hand gestures; a uniform background was selected to avoid the use of gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of the gray-scaled signs, and an Artificial Neural Network then performed the recognition, evaluated with 10-fold cross-validation in Weka. The best result achieved a recognition rate of 95.83%.
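The normalized geometric moments used as features have a compact standard definition, eta_pq = mu_pq / mu_00^(1 + (p + q)/2), where mu_pq are central moments. The sketch below computes them for a segmented grayscale sign image; the choice of moment orders in the example feature vector is illustrative, not necessarily the paper's exact feature set.

```python
# Normalized central (geometric) moments of a grayscale image:
#   eta_pq = mu_pq / mu_00 ** (1 + (p + q) / 2)
import numpy as np

def normalized_moment(img, p, q):
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
    mu_pq = ((x - xbar) ** p * (y - ybar) ** q * img).sum()
    return mu_pq / m00 ** (1 + (p + q) / 2.0)

# Illustrative low-order feature vector for one segmented gesture image `img`:
# features = [normalized_moment(img, p, q)
#             for p in range(4) for q in range(4) if 2 <= p + q <= 3]
```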
Ahmed, Laura; Seal, Leonard H; Ainley, Carol; De la Salle, Barbara; Brereton, Michelle; Hyde, Keith; Burthem, John; Gilmore, William Samuel
2016-08-11
Morphological examination of blood films remains the reference standard for malaria diagnosis. Supporting the skills required to make an accurate morphological diagnosis is therefore essential. However, providing support across different countries and environments is a substantial challenge. This paper reports a scheme supplying digital slides of malaria-infected blood within an Internet-based virtual microscope environment to users with different access to training and computing facilities. The feasibility of the approach was established, allowing users to test, record, and compare their own performance with that of other users. From Giemsa-stained thick and thin blood films, 56 large high-resolution digital slides were prepared, using high-quality image capture and a 63x oil-immersion objective lens. The individual images were combined using the photomerge function of Adobe Photoshop and then adjusted to ensure resolution and reproduction of essential diagnostic features. Web delivery employed the Digital Slidebox platform, providing digital microscope viewing facilities and image annotation with data gathering from participants. Engagement was high, with images viewed by 38 participants in five countries in a range of environments and a mean completion rate of 42/56 cases. The rate of parasite detection was 78% and accuracy of species identification was 53%, which was comparable with results of similar studies using glass slides. Data collection allowed users to compare performance with other users over time or for each individual case. Overall, these results demonstrate that users worldwide can effectively engage with the system in a range of environments, with the potential to enhance personal performance through education, external quality assessment, and personal professional development, especially in regions where educational resources are difficult to access.
Developing the Digital Kyoto Collection in Education and Research.
Hill, Mark Anthony
2018-04-16
The Kyoto embryo collection was begun in 1961 by Dr. Hideo Nishimura. The collection has been continuously developed and currently contains over 44,000 normal and abnormal human specimens. Beginning online in 1997, the internet provided an opportunity to make embryos from the collection widely available for research and educational purposes (http://tiny.cc/Embryo). These embryonic development resources have been continuously published and available from that time until today: in Japanese as an Atlas of Embryonic Development; online as the Kyoto Human Embryo Visualization Project (http://atlas.cac.med.kyoto-u.ac.jp) and as the Human Embryo Atlas (http://tiny.cc/Human_Embryo_Atlas); and now electronically as a digital eBook (http://tiny.cc/Kyoto_Collection_eBook). This new digital format allows incorporation of manipulable whole-embryo and histology images, labels, and a linked glossary. New imaging modalities, magnetic resonance imaging (MRI) and episcopic fluorescence image capture (EFIC), can also be easily displayed as animations. For research, the collection specimens and histological sections have been extensively studied and published in several hundred papers, discussed here and elsewhere in this special edition. I also describe how the Kyoto collection will now form a major partner of a new international embryology research group, the Digital Embryology Consortium (https://human-embryology.org). The digital Kyoto collection will be made available for remote researcher access, analysis, and comparison with other collections, allowing new research and educational applications. This work was presented at the 40th Anniversary Commemoration Symposium of the Congenital Anomaly Research Center, Graduate School of Medicine, Kyoto University, Japan, November 2015.
Design of integrated eye tracker-display device for head mounted systems
NASA Astrophysics Data System (ADS)
David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.
2009-08-01
We propose an Eye Tracker/Display system based on a novel, dual-function device termed the ETD, which allows the eye tracker and the display to share optical paths and provides on-chip processing. The proposed ETD design is based on a CMOS chip combining Liquid-Crystal-on-Silicon (LCoS) micro-display technology with a near-infrared (NIR) Active Pixel Sensor imager. The eye-tracking operation captures the NIR light back-reflected from the eye's retina. The retinal image is then used to detect the current direction of the eye's gaze. The design of the eye-tracking imager is based on the "deep p-well" pixel technology, providing low crosstalk while shielding the active pixel circuitry, which serves the imaging and display drivers, from the photo charges generated in the substrate. The use of the ETD in the HMD design enables a very compact design suitable for smart goggle applications. A preliminary optical, electronic and digital design of the goggle and its associated ETD chip and digital control are presented.
CMOS Image Sensor with a Built-in Lane Detector.
Hsiao, Pei-Yung; Cheng, Hsien-Chein; Huang, Shih-Shinh; Fu, Li-Chen
2009-01-01
This work develops a new current-mode mixed-signal Complementary Metal-Oxide-Semiconductor (CMOS) imager, which can capture images and simultaneously produce vehicle lane maps. The adopted lane detection algorithm, which was modified to be compatible with hardware requirements, can achieve a high recognition rate of up to approximately 96% under various weather conditions. Instead of a Personal Computer (PC) based system or an embedded platform equipped with an expensive high-performance Reduced Instruction Set Computer (RISC) or Digital Signal Processor (DSP) chip, the proposed imager, without extra Analog to Digital Converter (ADC) circuits to transform signals, is a compact, lower-cost key-component chip. It is also an innovative component device that can be integrated into intelligent automotive lane departure systems. The chip size is 2,191.4 × 2,389.8 μm, and the package uses a 40-pin Dual-In-Package (DIP). The pixel cell size is 18.45 × 21.8 μm and the core size of the photodiode is 12.45 × 9.6 μm; the resulting fill factor is 29.7%.
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image reconstruction has been mostly confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower-resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.
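Conceptually, the reconstruction applies a different least-squares-trained linear interpolator to each neighborhood depending on its context class. The sketch below shows only the application step for a 2x upsampling, assuming the classifier and the per-class coefficient matrices were estimated offline from high-resolution training imagery; both are placeholders.

```python
# Sketch: context-dependent 2x interpolation. Each low-resolution neighborhood
# is assigned a context class by `classify`, and that class's least-squares
# trained coefficient matrix maps the neighborhood to a 2x2 block of output
# pixels. `classify` and `coeffs` (class -> (4, k*k) matrix) are placeholders
# assumed to come from offline training on high-resolution imagery.
import numpy as np

def interpolate_2x(lowres, coeffs, classify, k=3):
    pad = k // 2
    padded = np.pad(lowres.astype(float), pad, mode="edge")
    h, w = lowres.shape
    out = np.zeros((2 * h, 2 * w))
    for i in range(h):
        for j in range(w):
            nbhd = padded[i:i + k, j:j + k].ravel()
            c = classify(nbhd)                         # context class
            block = (coeffs[c] @ nbhd).reshape(2, 2)   # per-class interpolator
            out[2 * i:2 * i + 2, 2 * j:2 * j + 2] = block
    return out
```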
Experience with a proposed teleradiology system for digital mammography
NASA Astrophysics Data System (ADS)
Saulnier, Emilie T.; Mitchell, Robert J.; Abdel-Malek, Aiman A.; Dudding, Kathryn E.
1995-05-01
Teleradiology offers significant improvement in efficiency and effectiveness over current practices in traditional film/screen-based diagnosis. In the context of digital mammography, the increasing number of women who need to be screened for breast cancer, including those in remote rural regions, makes the advantages of teleradiology especially attractive for digital mammography. At the same time, the size and resolution of digital mammograms are among the most challenging to support in a cost effective teleradiology system. This paper describes a teleradiology architecture developed for use with digital mammography by GE Corporate Research and Development in collaboration with Massachusetts General Hospital under National Cancer Institute (NCI/NIH) grant number R01 CA60246-01. Experience with a testbed prototype is described. The telemammography architecture is intended to consist of a main mammography diagnostic site serving several remote screening sites. As patient exams become available, they are forwarded by an image server to the diagnostic site over a WAN communications link. A radiologist at the diagnostic site views a patient exam as it arrives, interprets it, and then relays a report back to the technician at the remote site. A secondary future scenario consists of mobile units which forward images to a remote site, which then forwards them to the main diagnostic site. The testbed architecture is based on the Digital Imaging and Communications in Medicine (DICOM) standard, created by the American College of Radiology (ACR) and National Electrical Manufacturers Association (NEMA). A specification of vendor-independent data formats and data transfer services for digital medical images, DICOM specifies a protocol suite starting at the application layer downward, including the TCP/IP layers. The current DICOM definition does not provide an information element that is specifically tailored to mammography, so we have used the DICOM secondary capture data format for the mammography images. In conclusion, experience with the testbed is described, as is performance analysis related to selection of network components needed to extend this architecture to clinical evaluation. Recommendations are made as to the critical areas for future work.
2000-09-08
KENNEDY SPACE CENTER, FLA. -- This view of the shock wave condensation collars, backlit by the sun, occurred during the launch of Atlantis on STS-106 and was captured on engineering 35mm motion picture film. One frame was digitized to make this still image. Although the primary effect is created by the Orbiter forward fuselage, secondary effects can be seen on the SRB forward skirt, Orbiter vertical stabilizer and wing trailing edges (behind the SSMEs).
Full-color large-scaled computer-generated holograms for physical and non-physical objects
NASA Astrophysics Data System (ADS)
Matsushima, Kyoji; Tsuchiyama, Yasuhiro; Sonobe, Noriaki; Masuji, Shoya; Yamaguchi, Masahiro; Sakamoto, Yuji
2017-05-01
Several full-color high-definition CGHs are created for reconstructing 3D scenes that include real, physically existing objects. The fields of the physical objects are generated or captured by employing three techniques: a 3D scanner, synthetic aperture digital holography, and multi-viewpoint images. Full-color reconstruction of the high-definition CGHs is realized using RGB color filters. The optical reconstructions are presented to verify these techniques.
Mol, André; Dunn, Stanley M
2003-06-01
To assess the effect of the orientation of arbitrarily shaped bone chips on the correlation between radiographic estimates of bone loss and true mineral loss using digital subtraction radiography. Twenty arbitrarily shaped bone chips (dry weight 1-10 mg) were placed individually on the superior lingual aspect of the interdental alveolar bone of a dry dentate hemi-mandible. After acquiring the first baseline image, each chip was rotated 90 degrees and a second radiograph was captured. Follow-up images were created without the bone chips and after rotating the mandible 0, 1, 2, 4, and 6 degrees around a vertical axis. Aluminum step tablet intensities were used to normalize image intensities for each image pair. Follow-up images were registered and geometrically standardized using projective standardization. Bone chips were dry ashed and analyzed for calcium content using atomic absorption. No significant difference was found between the radiographic estimates of bone loss from the different bone chip orientations (Wilcoxon: P > 0.05). The correlation between the two series of estimates for all rotations was 0.93 (Spearman: P < 0.05). Linear regression analysis indicated that both correlates did not differ appreciably. It is concluded that the spatial orientation of arbitrarily shaped bone chips does not have a significant impact on quantitative estimates of changes in bone mass in digital subtraction radiography. These results were obtained in the presence of irreversible projection errors of up to six degrees and after application of projective standardization for image reconstruction and image registration.
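The quantitative core of digital subtraction radiography, normalizing gray levels against the aluminum step tablet, subtracting registered images, and summing the difference over a region of interest, can be sketched briefly. The snippet below assumes already-registered images and a single step-tablet correction factor per image; it is an illustration of the general workflow, not the software used in this study.

```python
# Sketch of quantitative digital subtraction radiography on registered images.
# `baseline` and `followup` are registered grayscale arrays; `tablet_base` and
# `tablet_follow` are mean gray levels of the same aluminum step in each image.
import numpy as np

def bone_change_estimate(baseline, followup, tablet_base, tablet_follow, roi_mask):
    # Normalize follow-up gray levels to the baseline via the step tablet.
    normalized = followup.astype(float) * (tablet_base / tablet_follow)
    diff = normalized - baseline.astype(float)
    # Summed gray-level change inside the region of interest tracks mineral change.
    return diff[roi_mask].sum()
```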
Ogawa, Y; Wada, B; Taniguchi, K; Miyasaka, S; Imaizumi, K
2015-12-01
This study clarifies the anthropometric variations of the Japanese face by presenting large-sample population data of photo anthropometric measurements. The measurements can be used as standard reference data for the personal identification of facial images in forensic practices. To this end, three-dimensional (3D) facial images of 1126 Japanese individuals (865 male and 261 female, aged 19-60 years) were acquired as samples using an already validated 3D capture system, and normative anthropometric analysis was carried out. In this anthropometric analysis, first, anthropological landmarks (22 items, i.e., entocanthion (en), alare (al), cheilion (ch), zygion (zy), gonion (go), sellion (se), gnathion (gn), labrale superius (ls), stomion (sto), labrale inferius (li)) were positioned on each 3D facial image (the direction of which had been adjusted to the Frankfort horizontal plane as the standard position for appropriate anthropometry), and anthropometric absolute measurements (19 items, i.e., bientocanthion breadth (en-en), nose breadth (al-al), mouth breadth (ch-ch), bizygomatic breadth (zy-zy), bigonial breadth (go-go), morphologic face height (se-gn), upper-lip height (ls-sto), lower-lip height (sto-li)) were exported using computer software for the measurement of a 3D digital object. Second, anthropometric indices (21 items, i.e., (se-gn)/(zy-zy), (en-en)/(al-al), (ls-li)/(ch-ch), (ls-sto)/(sto-li)) were calculated from these exported measurements. As a result, basic statistics, such as the mean values, standard deviations, and quartiles, and details of the distributions of these anthropometric results are shown. All of the results except the upper/lower lip ratio (ls-sto)/(sto-li) were normally distributed. The measurements were acquired as carefully as possible employing a 3D capture system and 3D digital imaging technologies, and the sample was much larger than any Japanese sample used before for the purpose of personal identification. The measurements will be useful as standard reference data for forensic practices and as material data for future studies in this field.
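Each absolute measurement is a Euclidean distance between two 3D landmarks, and each index is a ratio of two such distances. A minimal sketch follows, assuming landmark coordinates are stored per subject in a dictionary; the key names (e.g., paired landmarks suffixed _r/_l) are hypothetical, and only a few of the listed measurements and indices are shown.

```python
# Sketch: anthropometric distances and indices from 3D facial landmarks.
# `lm` maps landmark abbreviations to (x, y, z) coordinates in millimeters;
# paired landmarks are assumed to carry hypothetical _r/_l suffixes.
import numpy as np

def dist(lm, a, b):
    return float(np.linalg.norm(np.asarray(lm[a]) - np.asarray(lm[b])))

def facial_measures(lm):
    measures = {
        "al-al": dist(lm, "al_r", "al_l"),    # nose breadth
        "ch-ch": dist(lm, "ch_r", "ch_l"),    # mouth breadth
        "zy-zy": dist(lm, "zy_r", "zy_l"),    # bizygomatic breadth
        "se-gn": dist(lm, "se", "gn"),        # morphologic face height
        "ls-sto": dist(lm, "ls", "sto"),      # upper-lip height
        "sto-li": dist(lm, "sto", "li"),      # lower-lip height
    }
    indices = {
        "(se-gn)/(zy-zy)": measures["se-gn"] / measures["zy-zy"],
        "(ls-sto)/(sto-li)": measures["ls-sto"] / measures["sto-li"],
    }
    return measures, indices
```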
Parekh, Ruchi; Armañanzas, Rubén; Ascoli, Giorgio A
2015-04-01
Digital reconstructions of axonal and dendritic arbors provide a powerful representation of neuronal morphology in formats amenable to quantitative analysis, computational modeling, and data mining. Reconstructed files, however, require adequate metadata to identify the appropriate animal species, developmental stage, brain region, and neuron type. Moreover, experimental details about tissue processing, neurite visualization and microscopic imaging are essential to assess the information content of digital morphologies. Typical morphological reconstructions only partially capture the underlying biological reality. Tracings are often limited to certain domains (e.g., dendrites and not axons), may be incomplete due to tissue sectioning, imperfect staining, and limited imaging resolution, or can disregard aspects irrelevant to their specific scientific focus (such as branch thickness or depth). Gauging these factors is critical in subsequent data reuse and comparison. NeuroMorpho.Org is a central repository of reconstructions from many laboratories and experimental conditions. Here, we introduce substantial additions to the existing metadata annotation aimed to describe the completeness of the reconstructed neurons in NeuroMorpho.Org. These expanded metadata form a suitable basis for effective description of neuromorphological data.
Brain's tumor image processing using shearlet transform
NASA Astrophysics Data System (ADS)
Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander
2017-09-01
Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades there has been much research on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects tumor location in MR images; tumor features are extracted by the new shearlet transform.
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, is first interpolated using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
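The two processing orders can be illustrated with a simple median filter standing in for the filters studied in the paper. In the pre-CFAI order, each Bayer sample grid is filtered on the raw mosaic before demosaicing; in the post-CFAI order, the demosaiced RGB image is filtered instead. The sketch below shows the pre-CFAI variant under an assumed RGGB layout; the filter choice and layout are placeholders, not the paper's methods.

```python
# Sketch: noise reduction applied to raw Bayer data before CFA interpolation.
# `raw` is a single-channel Bayer mosaic with an assumed RGGB layout.
import numpy as np
from scipy.ndimage import median_filter

def denoise_bayer_rggb(raw, size=3):
    out = raw.astype(float).copy()
    # Filter each of the four Bayer sample grids separately so samples of
    # different colors are never mixed by the filter.
    for dy in (0, 1):
        for dx in (0, 1):
            out[dy::2, dx::2] = median_filter(out[dy::2, dx::2], size=size)
    return out

# A post-CFAI pipeline would instead demosaic `raw` first and then filter each
# channel of the interpolated RGB image.
```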
Active eye-tracking for an adaptive optics scanning laser ophthalmoscope
Sheehy, Christy K.; Tiruveedhula, Pavan; Sabesan, Ramkumar; Roorda, Austin
2015-01-01
We demonstrate a system that combines a tracking scanning laser ophthalmoscope (TSLO) and an adaptive optics scanning laser ophthalmoscope (AOSLO) system resulting in both optical (hardware) and digital (software) eye-tracking capabilities. The hybrid system employs the TSLO for active eye-tracking at a rate up to 960 Hz for real-time stabilization of the AOSLO system. AOSLO videos with active eye-tracking signals showed, at most, an amplitude of motion of 0.20 arcminutes for horizontal motion and 0.14 arcminutes for vertical motion. Subsequent real-time digital stabilization limited residual motion to an average of only 0.06 arcminutes (a 95% reduction). By correcting for high amplitude, low frequency drifts of the eye, the active TSLO eye-tracking system enabled the AOSLO system to capture high-resolution retinal images over a larger range of motion than previously possible with just the AOSLO imaging system alone. PMID:26203370
NASA Astrophysics Data System (ADS)
Themistocleous, K.; Agapiou, A.; Hadjimitsis, D.
2016-10-01
The documentation of architectural cultural heritage sites has traditionally been expensive and labor-intensive. New innovative technologies, such as Unmanned Aerial Vehicles (UAVs), provide an affordable, reliable and straightforward method of capturing cultural heritage sites, thereby providing a more efficient and sustainable approach to documentation of cultural heritage structures. In this study, hundreds of images of the Panagia Chryseleousa church in Foinikaria, Cyprus were taken using a UAV with an attached high resolution camera. The images were processed to generate an accurate digital 3D model using Structure from Motion techniques. A Building Information Model (BIM) was then used to generate drawings of the church. The methodology described in the paper provides an accurate, simple and cost-effective method of documenting cultural heritage sites and generating digital 3D models using novel techniques and innovative methods.
High-speed particle tracking in microscopy using SPAD image sensors
NASA Astrophysics Data System (ADS)
Gyongy, Istvan; Davies, Amy; Miguelez Crespo, Allende; Green, Andrew; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.
2018-02-01
Single photon avalanche diodes (SPADs) are used in a wide range of applications, from fluorescence lifetime imaging microscopy (FLIM) to time-of-flight (ToF) 3D imaging. SPAD arrays are becoming increasingly established, combining the unique properties of SPADs with widefield camera configurations. Traditionally, the photosensitive area (fill factor) of SPAD arrays has been limited by the in-pixel digital electronics. However, recent designs have demonstrated that by replacing the complex digital pixel logic with simple binary pixels and external frame summation, the fill factor can be increased considerably. A significant advantage of such binary SPAD arrays is the high frame rates offered by the sensors (>100 kFPS), which opens up new possibilities for capturing ultra-fast temporal dynamics in, for example, life science cellular imaging. In this work we consider the use of novel binary SPAD arrays for high-speed particle tracking in microscopy. We demonstrate the tracking of fluorescent microspheres undergoing Brownian motion, and of intracellular vesicle dynamics, at high frame rates. We thereby show how binary SPAD arrays can offer an important advance in live cell imaging in such fields as intercellular communication, cell trafficking and cell signaling.
Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information
NASA Astrophysics Data System (ADS)
Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.
2015-10-01
The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only the geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, the data acquisition was conducted at a mini replica landscape in Universiti Teknologi Malaysia (UTM), Skudai campus using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes is extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral threshold are identified as belonging to that specific feature class in the dataset. This terrain extraction process is implemented in purpose-written Matlab code. Results demonstrate that a passive image with higher spectral resolution is required in order to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
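The spectral filtering step, keeping only colored points whose RGB values fall inside a preset window for the terrain class, is straightforward to sketch in place of the Matlab code described. The thresholds below are illustrative, not the values used in the study.

```python
# Sketch: extract candidate terrain points from a colored point cloud by RGB
# thresholding. `points` is an (N, 3) XYZ array and `rgb` an (N, 3) array of
# 0-255 values; the threshold window is illustrative only.
import numpy as np

def extract_terrain(points, rgb, lo=(90, 60, 30), hi=(200, 160, 110)):
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=1)   # within the preset window
    return points[mask], mask
```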
Keyes, S D; Gillard, F; Soper, N; Mavrogordato, M N; Sinclair, I; Roose, T
2016-06-14
The mechanical impedance of soils inhibits the growth of plant roots, often being the most significant physical limitation to root system development. Non-invasive imaging techniques have recently been used to investigate the development of root system architecture over time, but the relationship with soil deformation is usually neglected. Correlative mapping approaches parameterised using 2D and 3D image data have recently gained prominence for quantifying physical deformation in composite materials including fibre-reinforced polymers and trabecular bone. Digital Image Correlation (DIC) and Digital Volume Correlation (DVC) are computational techniques which use the inherent material texture of surfaces and volumes, captured using imaging techniques, to map full-field deformation components in samples during physical loading. Here we develop an experimental assay and methodology for four-dimensional, in vivo X-ray Computed Tomography (XCT) and apply a Digital Volume Correlation (DVC) approach to the data to quantify deformation. The method is validated for a field-derived soil under conditions of uniaxial compression, and a calibration study is used to quantify thresholds of displacement and strain measurement. The validated and calibrated approach is then demonstrated for an in vivo test case in which an extending maize root in field-derived soil was imaged hourly using XCT over a growth period of 19h. This allowed full-field soil deformation data and 3D root tip dynamics to be quantified in parallel for the first time. This fusion of methods paves the way for comparative studies of contrasting soils and plant genotypes, improving our understanding of the fundamental mechanical processes which influence root system development.
A Digital Preclinical PET/MRI Insert and Initial Results.
Weissler, Bjoern; Gebhardt, Pierre; Dueppenbecker, Peter M; Wehner, Jakob; Schug, David; Lerche, Christoph W; Goldschmidt, Benjamin; Salomon, Andre; Verel, Iris; Heijman, Edwin; Perkuhn, Michael; Heberling, Dirk; Botnar, Rene M; Kiessling, Fabian; Schulz, Volkmar
2015-11-01
Combining Positron Emission Tomography (PET) with Magnetic Resonance Imaging (MRI) results in a promising hybrid molecular imaging modality as it unifies the high sensitivity of PET for molecular and cellular processes with the functional and anatomical information from MRI. Digital Silicon Photomultipliers (dSiPMs) are the digital evolution in scintillation light detector technology and promise high PET SNR. DSiPMs from Philips Digital Photon Counting (PDPC) were used to develop a preclinical PET/RF gantry with 1-mm scintillation crystal pitch as an insert for clinical MRI scanners. With three exchangeable RF coils, the hybrid field of view has a maximum size of 160 mm × 96.6 mm (transaxial × axial). 0.1 ppm volume-root-mean-square B0 homogeneity is kept within a spherical diameter of 96 mm (automatic volume shimming). Depending on the coil, MRI SNR is decreased by 13% or 5% by the PET system. PET count rates, energy resolution of 12.6% FWHM, and spatial resolution of 0.73 mm³ (isometric volume resolution at isocenter) are not affected by applied MRI sequences. PET time resolution of 565 ps (FWHM) degraded by 6 ps during an EPI sequence. Timing-optimized settings yielded 260 ps time resolution. PET and MR images of a hot-rod phantom show no visible differences when the other modality was in operation and both resolve 0.8-mm rods. Versatility of the insert is shown by successfully combining multi-nuclei MRI (¹H/¹⁹F) with simultaneously measured PET (¹⁸F-FDG). A longitudinal study of a tumor-bearing mouse verifies the operability, stability, and in vivo capabilities of the system. Cardiac- and respiratory-gated PET/MRI motion-capturing (CINE) images of the mouse heart demonstrate the advantage of simultaneous acquisition for temporal and spatial image registration.
Multiplexed phase-space imaging for 3D fluorescence microscopy.
Liu, Hsiou-Yuan; Zhong, Jingshan; Waller, Laura
2017-06-26
Optical phase-space functions describe spatial and angular information simultaneously; examples of optical phase-space functions include light fields in ray optics and Wigner functions in wave optics. Measurement of phase-space enables digital refocusing, aberration removal and 3D reconstruction. High-resolution capture of 4D phase-space datasets is, however, challenging. Previous scanning approaches are slow, light inefficient and do not achieve diffraction-limited resolution. Here, we propose a multiplexed method that solves these problems. We use a spatial light modulator (SLM) in the pupil plane of a microscope in order to sequentially pattern multiplexed coded apertures while capturing images in real space. Then, we reconstruct the 3D fluorescence distribution of our sample by solving an inverse problem via regularized least squares with a proximal accelerated gradient descent solver. We experimentally reconstruct a 101 Megavoxel 3D volume (1010×510×500µm with NA 0.4), demonstrating improved acquisition time, light throughput and resolution compared to scanning aperture methods. Our flexible patterning scheme further allows sparsity in the sample to be exploited for reduced data capture.
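The reconstruction described, regularized least squares solved with a proximal accelerated gradient method, can be outlined generically. The sketch below is a standard FISTA-style loop with an l1 proximal step, assuming a forward operator A and its adjoint At are supplied as callables; it omits the physical system model and the specific regularizer used by the authors.

```python
# Generic FISTA sketch for min_x 0.5*||A(x) - y||^2 + lam*||x||_1, where `A`
# and its adjoint `At` are supplied callables and `step` <= 1 / L(A^T A).
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(y, A, At, shape, lam=1e-2, step=1e-1, n_iter=100):
    x = np.zeros(shape)
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = At(A(z) - y)                            # gradient of the data term
        x_new = soft_threshold(z - step * grad, step * lam)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```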
Featured Image: Fireball After a Temporary Capture?
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-06-01
This image of a fireball was captured in the Czech Republic by cameras at a digital autonomous observatory in the village of Kunžak. This observatory is part of a network of stations known as the European Fireball Network, and this particular meteoroid detection, labeled EN130114, is notable because it has the lowest initial velocity of any natural object ever observed by the network. Led by David Clark (University of Western Ontario), the authors of a recent study speculate that before this meteoroid impacted Earth, it may have been a Temporarily Captured Orbiter (TCO). TCOs are near-Earth objects that make a few orbits of Earth before returning to heliocentric orbits. Only one has ever been observed to date, and though they are thought to make up 0.1% of all meteoroids, EN130114 is the first event ever detected that exhibits conclusive behavior of a TCO. For more information on EN130114 and why TCOs are important to study, check out the paper below. Citation: David L. Clark et al. 2016, AJ, 151, 135. doi:10.3847/0004-6256/151/6/135
Evidence and diagnostic reporting in the IHE context.
Loef, Cor; Truyen, Roel
2005-05-01
Capturing clinical observations and findings during the diagnostic imaging process is increasingly becoming a critical step in diagnostic reporting. Standards developers, notably HL7 and DICOM, are making significant progress toward standards that enable exchanging clinical observations and findings among the various information systems of the healthcare enterprise. DICOM, like the HL7 Clinical Document Architecture (CDA), uses templates and constrained, coded vocabulary (SNOMED, LOINC, etc.). Such a representation facilitates automated software recognition of findings and observations, intrapatient comparison, correlation to norms, and outcomes research. The scope of DICOM Structured Reporting (SR) includes many findings that products routinely create in digital form (measurements, computed estimates, etc.). In the Integrating the Healthcare Enterprise (IHE) framework, two Integration Profiles are defined for clinical data capture and diagnostic reporting: Evidence Document, and Simple Image and Numeric Report. This report describes these two DICOM SR-based integration profiles in the diagnostic reporting process.
Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis.
Myburgh, Hermanus C; van Zijl, Willemien H; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude
2016-03-01
Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analyzing system that could assist in making accurate otitis media diagnoses anywhere. A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low cost custom-made video-otoscope. The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnosis of otitis media in medically underserved populations.
Landman, Adam; Emani, Srinivas; Carlile, Narath; Rosenthal, David I; Semakov, Simon; Pallin, Daniel J; Poon, Eric G
2015-01-02
Photographs are important tools to record, track, and communicate clinical findings. Mobile devices with high-resolution cameras are now ubiquitous, giving clinicians the opportunity to capture and share images from the bedside. However, secure and efficient ways to manage and share digital images are lacking. The aim of this study is to describe the implementation of a secure application for capturing and storing clinical images in the electronic health record (EHR), and to describe initial user experiences. We developed CliniCam, a secure Apple iOS (iPhone, iPad) application that allows for user authentication, patient selection, image capture, image annotation, and storage of images as a Portable Document Format (PDF) file in the EHR. We leveraged our organization's enterprise service-oriented architecture to transmit the image file from CliniCam to our enterprise clinical data repository. There is no permanent storage of protected health information on the mobile device. CliniCam also required connection to our organization's secure WiFi network. Resident physicians from emergency medicine, internal medicine, and dermatology used CliniCam in clinical practice for one month. They were then asked to complete a survey on their experience. We analyzed the survey results using descriptive statistics. Twenty-eight physicians participated and 19/28 (68%) completed the survey. Of the respondents who used CliniCam, 89% found it useful or very useful for clinical practice and easy to use, and wanted to continue using the app. Respondents provided constructive feedback on location of the photos in the EHR, preferring to have photos embedded in (or linked to) clinical notes instead of storing them as separate PDFs within the EHR. Some users experienced difficulty with WiFi connectivity which was addressed by enhancing CliniCam to check for connectivity on launch. CliniCam was implemented successfully and found to be easy to use and useful for clinical practice. CliniCam is now available to all clinical users in our hospital, providing a secure and efficient way to capture clinical images and to insert them into the EHR. Future clinical image apps should more closely link clinical images and clinical documentation and consider enabling secure transmission over public WiFi or cellular networks.
Crew Earth Observations (CEO) taken during Expedition Five on the ISS
2002-08-23
ISS005-E-10926 (23 August 2002) --- This is the second of two images recently released by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center, showing some of the devastating late summer 2002 European flooding. The images were captured by astronauts using a digital still camera onboard the International Space Station (ISS). The photographs show flooding around the Danube Bend area just north of Budapest near the city of Vác, Hungary. The flood peaked in Budapest four days before this photo was made, on August 19, at about 8.5 meters (28 feet), exceeding the previous 1965 flood record. Water had begun to recede when this image was made. Image no. ISS005-E-10000 shows the area four days earlier.
High-Speed Observer: Automated Streak Detection in SSME Plumes
NASA Technical Reports Server (NTRS)
Rieckoff, T. J.; Covan, M.; OFarrell, J. M.
2001-01-01
A high frame rate digital video camera installed on test stands at Stennis Space Center has been used to capture images of Space Shuttle main engine plumes during test. These plume images are processed in real time to detect and differentiate anomalous plume events occurring during a time interval on the order of 5 msec. Such speed yields near instantaneous availability of information concerning the state of the hardware. This information can be monitored by the test conductor or by other computer systems, such as the integrated health monitoring system processors, for possible test shutdown before occurrence of a catastrophic engine failure.
Input Scanners: A Growing Impact In A Diverse Marketplace
NASA Astrophysics Data System (ADS)
Marks, Kevin E.
1989-08-01
Just as newly invented photographic processes revolutionized the printing industry at the turn of the century, electronic imaging has affected almost every computer application today. To completely emulate traditionally mechanical means of information handling, computer-based systems must be able to capture graphic images. Thus, there is a widespread need for the electronic camera, the digitizer, the input scanner. This paper will review how various types of input scanners are being used in many diverse applications. The following topics will be covered:
- Historical overview of input scanners
- New applications for scanners
- Impact of scanning technology on select markets
- Scanning systems issues
Novel computer-based endoscopic camera
NASA Astrophysics Data System (ADS)
Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia
1995-05-01
We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed on all the area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.
Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement.
Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi
2015-03-01
Various methods have been introduced for evaluation of tooth movement in orthodontics. The challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement, and to assess the reliability of this method. Eighteen patients were evaluated in this study. Three intraoral digital images from the buccal view were captured from each patient at half-hour intervals. All photographs were imported into AutoCAD 2011 software and calibrated, and the distance between the canine and molar hooks was measured. The data were analyzed using the intraclass correlation coefficient. Photographs were found to have a high reliability coefficient (P > 0.05). The introduced method is an accurate, efficient and reliable method for evaluation of tooth movement.
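A minimal sketch of the measurement idea described above: calibrate image pixels against a visible reference of known length, then convert the canine-to-molar hook distance from pixels to millimetres. The coordinates, reference length, and names below are illustrative assumptions, not values from the study.

```python
import numpy as np

def pixel_distance(p1, p2):
    """Euclidean distance between two (x, y) pixel coordinates."""
    return float(np.hypot(p2[0] - p1[0], p2[1] - p1[1]))

# Hypothetical calibration: a ruler segment of known length visible in the photo.
ruler_px = pixel_distance((120, 480), (520, 482))   # measured in pixels
ruler_mm = 10.0                                      # known length in mm (assumed)
mm_per_px = ruler_mm / ruler_px

# Hypothetical hook positions marked on the calibrated photograph.
canine_hook = (310, 255)
molar_hook = (845, 262)
distance_mm = pixel_distance(canine_hook, molar_hook) * mm_per_px
print(f"canine-to-molar hook distance: {distance_mm:.2f} mm")
```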
No-reference multiscale blur detection tool for content based image retrieval
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark
2014-06-01
In recent years, digital cameras have been widely used for image capture. These devices are built into cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is normally required as a reference; in that case, Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure the quality of the images. However, these methods are not possible if there is no reference image. In our approach, a discrete wavelet transform is applied to the blurred image, which decomposes it into the approximate image and three detail sub-images, namely the horizontal, vertical, and diagonal images. We then focus on noise-measuring the detail images and blur-measuring the approximate image to assess the image quality. We compute a noise mean and noise ratio from the detail images, and a blur mean and blur ratio from the approximate image. The Multi-scale Blur Detection (MBD) metric provides an assessment of both the noise and blur content. These values are weighted based on a linear regression against full-reference quality values. From these statistics, we can compare to normal useful image statistics for image quality without needing a reference image. We then test the validity of the obtained weights by R² analysis, as well as by using them to estimate the image quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
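A rough sketch of the wavelet decomposition step described above, using PyWavelets: a single-level 2-D DWT splits the image into an approximation band and three detail bands, from which simple per-band statistics can be taken. The statistics computed here are simplified stand-ins for the paper's noise/blur means and ratios, and the regression weights are not reproduced.

```python
import numpy as np
import pywt

def mbd_statistics(image):
    """Single-level 2-D DWT split into approximation and detail sub-images,
    followed by simple per-band statistics (a simplified stand-in for the
    paper's noise and blur measures)."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "db2")
    detail_energy = np.mean([np.abs(cH).mean(), np.abs(cV).mean(), np.abs(cD).mean()])
    approx_gradient = np.mean(np.abs(np.gradient(cA)))
    return {"noise_mean": detail_energy,    # high-frequency content as a noise proxy
            "blur_mean": approx_gradient}   # weak low-band gradients as a blur proxy

rng = np.random.default_rng(0)
test_image = rng.normal(128, 20, size=(256, 256))
print(mbd_statistics(test_image))
```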
Soller, David R.
1997-01-01
Introduction: From June 2-5, 1997, selected technical representatives of the USGS and State geological surveys participated in the 'AASG/USGS Digital Mapping Techniques' workshop in Lawrence, Kansas. The workshop was initiated by the AASG/USGS Data Capture Working Group, and was hosted by the Kansas Geological Survey (KGS). With a focus on methods for data capture and digital map production, the goal was to help move the state surveys and the USGS toward development of more cost-effective, flexible, and useful systems for digital mapping and GIS analysis.
SFM Technique and Focus Stacking for Digital Documentation of Archaeological Artifacts
NASA Astrophysics Data System (ADS)
Clini, P.; Frapiccini, N.; Mengoni, M.; Nespeca, R.; Ruggeri, L.
2016-06-01
Digital documentation and high-quality 3D representation are increasingly requested in many disciplines and areas, thanks to the large number of technologies and data available for fast, detailed and quick documentation. This work investigates the area of medium and small sized artefacts and presents a fast and low-cost acquisition system that guarantees the creation of 3D models with a high level of detail, making the digitization of cultural heritage a simple and fast procedure. The 3D models of the artefacts are created with the photogrammetric technique Structure From Motion, which makes it possible to obtain, in addition to three-dimensional models, high-definition images for a deeper study and understanding of the artefacts. For the survey of small objects (only a few centimetres), a macro lens is used together with focus stacking, a photographic technique that consists of capturing a stack of images at different focus planes for each camera pose, so that a final image with a greater depth of field can be obtained. The focus-stacking acquisition has finally been validated against an acquisition with a Minolta laser triangulation scanner, which demonstrates that the results are compatible with the allowable error in relation to the expected precision.
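A compact sketch of the focus-stacking idea used for the macro acquisitions: for each pixel, keep the value from the frame with the strongest local sharpness (Laplacian response). The kernel sizes and file names are placeholder assumptions, and this is only a generic OpenCV illustration, not the authors' processing chain.

```python
import cv2
import numpy as np

def focus_stack(frames, blur_ksize=5, lap_ksize=5):
    """Merge a stack of images taken at different focus planes by selecting,
    per pixel, the frame with the highest absolute Laplacian (sharpness)."""
    sharpness = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
        sharpness.append(np.abs(cv2.Laplacian(gray, cv2.CV_64F, ksize=lap_ksize)))
    best = np.argmax(np.stack(sharpness), axis=0)   # index of sharpest frame per pixel
    stack = np.stack(frames)                        # shape (n, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                  # fused image

# Synthetic demo with three random frames; in practice the frames would be the
# macro shots captured at different focus distances for one camera pose.
frames = [np.random.default_rng(i).integers(0, 255, (64, 64, 3), dtype=np.uint8)
          for i in range(3)]
print(focus_stack(frames).shape)
```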
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets have demonstrated the effectiveness of our proposed method.
A method to perform a fast fourier transform with primitive image transformations.
Sheridan, Phil
2007-05-01
The Fourier transform is one of the most important transformations in image processing. A major component of this influence comes from the ability to implement it efficiently on a digital computer. This paper describes a new methodology to perform a fast Fourier transform (FFT). This methodology emerges from considerations of the natural physical constraints imposed by image capture devices (camera/eye). The novel aspects of the specific FFT method described include: 1) a bit-wise reversal re-grouping operation of the conventional FFT is replaced by the use of lossless image rotation and scaling and 2) the usual arithmetic operations of complex multiplication are replaced with integer addition. The significance of the FFT presented in this paper is introduced by extending a discrete and finite image algebra, named Spiral Honeycomb Image Algebra (SHIA), to a continuous version, named SHIAC.
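For contrast with the rotation/scaling-based regrouping described above, the sketch below shows a conventional iterative radix-2 FFT in which the bit-reversal reordering step is explicit. This is textbook background included for orientation only; it is not the SHIA/SHIAC method of the paper.

```python
import cmath

def fft_radix2(x):
    """Iterative radix-2 Cooley-Tukey FFT with explicit bit-reversal reordering
    (the regrouping step that the paper replaces with image rotation/scaling)."""
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    bits = n.bit_length() - 1
    # Bit-reversal permutation of the input samples.
    a = [x[int(format(i, f"0{bits}b")[::-1], 2)] for i in range(n)]
    size = 2
    while size <= n:
        w_step = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            w = 1.0
            for k in range(size // 2):
                u = a[start + k]
                v = a[start + k + size // 2] * w
                a[start + k] = u + v
                a[start + k + size // 2] = u - v
                w *= w_step
        size *= 2
    return a

print(fft_radix2([1, 0, 0, 0, 1, 0, 0, 0]))  # expect [2, 0, 2, 0, 2, 0, 2, 0]
```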
Earth Observation taken by the Expedition 11 crew
2005-07-07
ISS011-E-10214 (7 July 2005) --- At the time this Expedition 11 digital still camera image was taken, Hurricane Dennis was churning northwestward through the Caribbean Sea between Jamaica and eastern Cuba, packing winds of up to 115 miles per hour. Even though the hurricane had just attained Category 3 intensity, the eye had not yet cleared. This oblique view, captured with a 70mm lens at 21:12:01 GMT, is looking west.
Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views
2014-11-10
These datasets were collected using different aircraft. The Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of carrying high-resolution, cinema-grade cameras. The Blackmagic Production Camera, another high-resolution, cinema-grade camera, was used for crowd counting, capturing 4K video at 30 frames per second.
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
2009-08-01
Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance by all viewers on the same image capture and anti-cues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image by changes in the spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audio Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
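A simplified sketch of the hands-free display control described above: head yaw and pitch pan a crop window inside the captured wide-angle frame, and head distance from the transmitter drives the zoom. The angles, gains, and frame sizes are assumed values, and the proprietary fish-eye de-warping is not reproduced here.

```python
import numpy as np

def select_view(frame, yaw_deg, pitch_deg, distance_mm,
                pan_gain=8.0, ref_distance_mm=600.0, base_crop=0.5):
    """Return the sub-image of `frame` selected by head orientation.
    Yaw/pitch pan the crop window; moving toward the transmitter zooms in."""
    h, w = frame.shape[:2]
    zoom = np.clip(ref_distance_mm / max(distance_mm, 1.0), 0.5, 3.0)
    crop_w = int(w * base_crop / zoom)
    crop_h = int(h * base_crop / zoom)
    cx = int(w / 2 + pan_gain * yaw_deg)      # pan horizontally with yaw
    cy = int(h / 2 + pan_gain * pitch_deg)    # pan vertically with pitch
    x0 = int(np.clip(cx - crop_w // 2, 0, w - crop_w))
    y0 = int(np.clip(cy - crop_h // 2, 0, h - crop_h))
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]

# Hypothetical usage with a synthetic 1080p frame and tracker readings.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
view = select_view(frame, yaw_deg=12.0, pitch_deg=-5.0, distance_mm=450.0)
print(view.shape)
```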
Hubble Space Telescope photographed by Electronic Still Camera
1993-12-04
S61-E-008 (4 Dec 1993) --- This view of the Earth-orbiting Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view was taken during rendezvous operations. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Kamimura, Emi; Tanaka, Shinpei; Takaba, Masayuki; Tachi, Keita; Baba, Kazuyoshi
2017-01-01
The aim of this study was to evaluate and compare the inter-operator reproducibility of three-dimensional (3D) images of teeth captured by a digital impression technique and a conventional impression technique in vivo. Twelve participants with complete natural dentition were included in this study. A digital impression of the mandibular molars of these participants was made by two operators with different levels of clinical experience, 3 or 16 years, using an intra-oral scanner (Lava COS, 3M ESPE). A silicone impression also was made by the same operators using the double mix impression technique (Imprint3, 3M ESPE). Stereolithography (STL) data were directly exported from the Lava COS system, while STL data of a plaster model made from the silicone impression were captured by a three-dimensional (3D) laboratory scanner (D810, 3shape). The STL datasets recorded by the two operators were compared using 3D evaluation software and superimposed using the best-fit-algorithm method (least-squares method, PolyWorks, InnovMetric Software) for each impression technique. Inter-operator reproducibility, evaluated as the average discrepancy of corresponding 3D data, was compared between the two techniques (Wilcoxon signed-rank test). Visual inspection of the superimposed datasets revealed that discrepancies between repeated digital impressions were smaller than those observed with silicone impressions. This was confirmed by statistical analysis, which revealed a significantly smaller average inter-operator discrepancy with the digital impression technique (0.014 ± 0.02 mm) than with the conventional impression technique (0.023 ± 0.01 mm). The results of this in vivo study suggest that inter-operator reproducibility with a digital impression technique may be better than that of a conventional impression technique and is independent of the clinical experience of the operator.
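A small sketch of the least-squares (best-fit) superimposition used to compare the two operators' scans: a Kabsch-style rigid alignment of corresponding points followed by the mean residual distance. It assumes point correspondences are already established, which the commercial software handles internally, and the synthetic data below are illustrative only.

```python
import numpy as np

def best_fit_discrepancy(points_a, points_b):
    """Rigidly align points_b onto points_a by least squares (Kabsch algorithm)
    and return the mean point-to-point distance after alignment."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    u, _, vt = np.linalg.svd((b - cb).T @ (a - ca))
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    aligned = (b - cb) @ rot.T + ca
    return float(np.linalg.norm(aligned - a, axis=1).mean())

# Hypothetical corresponding points sampled from two scans of the same molars.
rng = np.random.default_rng(1)
scan_a = rng.uniform(0, 10, size=(500, 3))
scan_b = scan_a @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + 2.5
scan_b += rng.normal(0, 0.01, scan_b.shape)          # ~10 µm measurement noise
print(f"mean discrepancy: {best_fit_discrepancy(scan_a, scan_b):.4f} mm")
```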
Integrating user profile in medical CBIR systems to answer perceptual similarity queries
NASA Astrophysics Data System (ADS)
Bugatti, Pedro H.; Kaster, Daniel S.; Ponciano-Silva, Marcelo; Traina, Agma J. M.; Traina, Caetano, Jr.
2011-03-01
Techniques for Content-Based Image Retrieval (CBIR) have been intensively explored due to the increase in the amount of captured images and the need for fast retrieval of them. The medical field is a specific example that generates a large flow of information, especially digital images employed for diagnosing. One issue that still remains unsolved is how to reach perceptual similarity. That is, to achieve effective retrieval, one must characterize and quantify the perceptual similarity as judged by the specialist in the field. Therefore, the present paper was conceived to fill this gap, creating consistent support to perform similarity queries over medical images while maintaining the semantics of a given query desired by the user. CBIR systems relying on relevance feedback techniques usually request the users to label relevant images. In this paper, we present a simple but highly effective strategy to survey user profiles, taking advantage of such labeling to implicitly gather the user's perceptual similarity. The user profiles maintain the settings desired for each user, allowing tuning of the similarity assessment, which encompasses dynamically changing the distance function employed through an interactive process. Experiments using computed tomography lung images show that the proposed approach is effective in capturing the users' perception.
NASA Astrophysics Data System (ADS)
Coffey, Stephen; Connell, Joseph
2005-06-01
This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.
Use of Digital Image Technology to 'Clearly' Depict Global Change
NASA Astrophysics Data System (ADS)
Molnia, B. F.; Carbo, C. L.
2014-12-01
Earth is dynamic and beautiful. Understanding why, when, how, and how fast its surface changes yields information and serves as a source of inspiration. The artistic use of geoscience information can inform the public about what is happening to their planet in a non-confrontational and apolitical way. While individual images may clearly depict a landscape, photographic comparisons are necessary to clearly capture and display annual, decadal, or century-scale impacts of climate and environmental change on Earth's landscapes. After years of effort to artistically communicate geoscience concepts with unenhanced individual photographs or pairs of images, the authors have partnered to maximize this process by using digital image enhancement technology. This is done not to manipulate the inherent artistic content or information content of the photographs, but to ensure that the comparative photo pairs produced are geometrically correct and unambiguous. For comparative photography, information-rich historical photographs are selected from archives, websites, and other sources. After determining the geographic location from which the historical photograph was made, the original site is identified and eventually revisited. There, the historical photo's field of view is again photographed, ideally from the original location. From nearly 250 locations revisited, about 175 pairs have been produced. Every effort is made to reoccupy the original historical site. However, vegetation growth, visibility reduction, and co-seismic level change may make this impossible. Also, inherent differences in lens optics, camera construction, and image format may result in differences in the geometry of the new photograph when compared to the old. Upon selection, historical photos are cleaned, contrast stretched, brightness adjusted, and sharpened to maximize site identification and information extraction. To facilitate matching historical and new images, digital files of each are overlain in an image enhancement program. The new image is resized to match the historical photo and then, using a pixel warping tool, portions of the new image are reconfigured and matched to historical pixels to create a perfect match. Through the use of digital image technology we are able to 'clearly' convey the realities of our changing planet.
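A condensed sketch of the geometric matching step described above, using feature matching and a homography to warp the new photograph onto the historical one. The authors used an interactive pixel-warping tool, so this automated OpenCV pipeline is only an approximation of the workflow, and the file names are placeholders.

```python
import cv2
import numpy as np

def align_to_historical(historical_path, modern_path, max_features=4000):
    """Warp the modern photograph into the pixel frame of the historical one
    using ORB features, ratio-test matching, and a RANSAC homography."""
    hist = cv2.imread(historical_path, cv2.IMREAD_GRAYSCALE)
    new = cv2.imread(modern_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(max_features)
    k1, d1 = orb.detectAndCompute(new, None)
    k2, d2 = orb.detectAndCompute(hist, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(new, H, (hist.shape[1], hist.shape[0]))

# Hypothetical usage (file names are placeholders):
# aligned = align_to_historical("glacier_1909.png", "glacier_2014.png")
# cv2.imwrite("glacier_2014_registered.png", aligned)
```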
Image registration for multi-exposed HDRI and motion deblurring
NASA Astrophysics Data System (ADS)
Lee, Seok; Wey, Ho-Cheon; Lee, Seong-Deok
2009-02-01
In multi-exposure based image fusion, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which have different brightness and contain over- or under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not in a linear relationship; we cannot perfectly equalize or normalize the brightness of each image, and this leads to unstable and inaccurate alignment results. To solve this problem, we applied a probabilistic measure, mutual information, to represent the similarity between images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the viewpoint of registration and also analyze the magnitude of camera hand shake. By exploiting the luminance independence of mutual information, we propose a fast and practically useful image registration technique for multiple capturing. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over a 90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined by various experiments on real HDR or motion deblurring cases using a hand-held camera.
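A minimal sketch of the mutual-information similarity measure discussed above, computed from the joint histogram of two differently exposed images. The bin count and the brute-force integer translation search are simplifications of the paper's registration scheme, and the synthetic exposure values are assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information of two images from their joint intensity histogram.
    MI tolerates monotonic brightness differences, which is why it suits
    long/short exposure pairs better than plain intensity correlation."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_shift(ref, moving, search=8):
    """Brute-force integer translation that maximizes MI (illustration only)."""
    h, w = ref.shape
    best = (0, 0, -np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(ref[16:h-16, 16:w-16], shifted[16:h-16, 16:w-16])
            if mi > best[2]:
                best = (dy, dx, mi)
    return best

rng = np.random.default_rng(2)
scene = rng.uniform(0, 1, size=(128, 128))
short_exp = (scene * 80).astype(np.uint8)                                   # underexposed
long_exp = np.roll((scene * 250).clip(0, 255), (3, -2), (0, 1)).astype(np.uint8)
print(best_shift(short_exp, long_exp))   # expect a shift near (-3, 2)
```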
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
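A bare-bones sketch of the PCA idea behind the denoiser described above, applied here to grayscale patches rather than to the CFA mosaic: estimate the local principal components, discard the components whose variance sits at the noise floor, and reconstruct. The CFA-specific spatial-spectral handling of the paper is not reproduced, and the threshold and noise variance are assumed values.

```python
import numpy as np

def pca_denoise_patches(image, patch=8, noise_var=100.0):
    """Denoise non-overlapping patches by PCA: project onto the local principal
    components and drop components whose variance is near the noise floor."""
    h, w = (np.array(image.shape) // patch) * patch
    img = image[:h, :w].astype(float)
    # Collect non-overlapping patches as row vectors.
    blocks = img.reshape(h // patch, patch, w // patch, patch).swapaxes(1, 2)
    X = blocks.reshape(-1, patch * patch)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / len(Xc)
    evals, evecs = np.linalg.eigh(cov)               # eigenvalues ascending
    keep = evals > 3.0 * noise_var                    # crude noise-floor threshold
    coeffs = Xc @ evecs
    Xd = coeffs[:, keep] @ evecs[:, keep].T + mean
    return Xd.reshape(h // patch, w // patch, patch, patch).swapaxes(1, 2).reshape(h, w)

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean + rng.normal(0, 10, clean.shape)
print(f"noise std before: {np.std(noisy - clean):.1f}, "
      f"after: {np.std(pca_denoise_patches(noisy) - clean):.1f}")
```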
Robustness of an artificially tailored fisheye imaging system with a curvilinear image surface
NASA Astrophysics Data System (ADS)
Lee, Gil Ju; Nam, Won Il; Song, Young Min
2017-11-01
Curved image sensors inspired by animal and insect eyes have provided a new development direction for next-generation digital cameras. It is known that natural fish eyes afford extremely wide field-of-view (FOV) imaging owing to the geometrical properties of the spherical lens and hemispherical retina. However, inherent drawbacks, such as low off-axis illumination and the difficulty of fabricating a 'dome-like' hemispherical imager, have limited the development of bio-inspired wide-FOV cameras. Here, a new type of fisheye imaging system is introduced that has a simple lens configuration with a curvilinear image surface, while maintaining high off-axis illumination and a wide FOV. Moreover, comparisons with commercial conventional fisheye designs show that the volume and the required number of optical elements of the proposed design are practical while preserving the fundamental optical performance. Detailed design guidelines for tailoring the proposed optical system are also discussed.
High-resolution CCD imaging alternatives
NASA Astrophysics Data System (ADS)
Brown, D. L.; Acker, D. E.
1992-08-01
High-resolution CCD color cameras have recently stimulated the interest of a large number of potential end users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used, or considered for use, in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still-image systems electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Extending CCD technology beyond broadcast: most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, and the basis for this discussion, offers an array of roughly 750 x 580 picture elements (pixels), or a total of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interline CCD with an overall spatial structure several times larger than the photosensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.
NASA Astrophysics Data System (ADS)
Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-02-01
Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
Multispectral digital holographic microscopy with applications in water quality assessment
NASA Astrophysics Data System (ADS)
Kazemzadeh, Farnoud; Jin, Chao; Yu, Mei; Amelard, Robert; Haider, Shahid; Saini, Simarjeet; Emelko, Monica; Clausi, David A.; Wong, Alexander
2015-09-01
Safe drinking water is essential for human health, yet over a billion people worldwide do not have access to it. The presence and accumulation of biological contaminants in natural waters (e.g., pathogens and the neuro-, hepato-, and cytotoxins associated with algal blooms) remain a critical challenge in the provision of safe drinking water globally. It is not financially feasible or practical to monitor and quantify water quality frequently enough to identify the potential health risk due to contamination, especially in developing countries. We propose a low-cost, small-profile multispectral (MS) system based on Digital Holographic Microscopy (DHM) and investigate methods for rapidly capturing holographic data of natural water samples. We have developed a test-bed for an MSDHM instrument to produce and capture holographic data of the sample at different wavelengths in the visible and near-infrared spectral regions, allowing for resolution improvement in the reconstructed images. Additionally, we have developed high-speed statistical signal processing and analysis techniques to facilitate rapid reconstruction and assessment of the MS holographic data captured by the MSDHM instrument. The proposed system is used to examine cyanobacteria as well as Cryptosporidium parvum oocysts, which remain important and difficult-to-treat microbiological contaminants that must be addressed for the provision of safe drinking water globally.
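A short sketch of the numerical refocusing step that underlies digital holographic microscopy, using the standard angular-spectrum propagator. The wavelength, pixel pitch, and propagation distance below are placeholder values, and the multispectral resolution enhancement and statistical analysis pipeline of the instrument are not reproduced.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z using the
    angular-spectrum method (standard numerical refocusing in DHM)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    under = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(under, 0.0))
    transfer = np.exp(1j * kz * z) * (under > 0)      # evanescent terms suppressed
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Hypothetical reconstruction: treat the recorded hologram intensity as the
# field amplitude and refocus by -2.5 mm (illustrative values only).
rng = np.random.default_rng(4)
hologram = rng.uniform(0, 1, size=(512, 512))
obj_plane = angular_spectrum_propagate(np.sqrt(hologram),
                                       wavelength=532e-9, dx=1.1e-6, z=-2.5e-3)
print(np.abs(obj_plane).shape)
```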
Macromolecular Topography Leaps into the Digital Age
NASA Technical Reports Server (NTRS)
Lovelace, J.; Bellamy, H.; Snell, E. H.; Borgstahl, G.
2003-01-01
A low-cost, real-time digital topography system is under development which will replace x-ray film and nuclear emulsion plates. The imaging system is based on an inexpensive surveillance camera that offers a 1000x1000 array of 8 µm square pixels, anti-blooming circuitry, and very quick readout. Currently, the system directly converts x-rays to an image with no phosphor. The system is small and light and can be easily adapted to work with other crystallographic equipment. Preliminary images have been acquired of cubic insulin at the NSLS x26c beam line. NSLS x26c was configured for unfocused monochromatic radiation. Six reflections were collected with stills spaced from 0.002 to 0.001 degrees apart across the entire oscillation range over which the reflections were in diffracting condition. All of the reflections were rotated to the vertical to reduce Lorentz and beam-related effects. This particular CCD is designed for short exposure applications (much less than 1 sec) and so has a relatively high dark current, leading to noisy raw images. The images are processed to remove background and other system noise with a multi-step approach including the use of wavelets, histogram, and mean window filtering. After processing, animations were constructed with the corresponding reflection profile to show the diffraction of the crystal volume vs. the oscillation angle, as well as composite images showing the parts of the crystal with the strongest diffraction for each reflection. The final goal is to correlate features seen in reflection profiles captured with fine phi slicing to those seen in the topography images. With this development, macromolecular topography finally comes into the digital age.
Vibration measurement by temporal Fourier analyses of a digital hologram sequence.
Fu, Yu; Pedrini, Giancarlo; Osten, Wolfgang
2007-08-10
A method for whole-field noncontact measurement of displacement, velocity, and acceleration of a vibrating object based on image-plane digital holography is presented. A series of digital holograms of a vibrating object are captured by use of a high-speed CCD camera. The result of the reconstruction is a three-dimensional complex-valued matrix with noise. We apply Fourier analysis and windowed Fourier analysis in both the spatial and the temporal domains to extract the displacement, the velocity, and the acceleration. The instantaneous displacement is obtained by temporal unwrapping of the filtered phase map, whereas the velocity and acceleration are evaluated by Fourier analysis and by windowed Fourier analysis along the time axis. The combination of digital holography and temporal Fourier analyses allows for evaluation of the vibration, without a phase ambiguity problem, and smooth spatial distribution of instantaneous displacement, velocity, and acceleration of each instant are obtained. The comparison of Fourier analysis and windowed Fourier analysis in velocity and acceleration measurements is also presented.
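A compact sketch of the temporal processing described above: unwrap the reconstructed phase along the time axis to get instantaneous displacement, then differentiate for velocity and acceleration. The frame rate, wavelength, and phase-to-displacement sensitivity are assumptions, and the windowed Fourier filtering of the paper is replaced here by plain finite differences.

```python
import numpy as np

def displacement_velocity_acceleration(phase_stack, frame_rate, wavelength):
    """phase_stack: (T, H, W) wrapped phase maps from the hologram sequence.
    Returns out-of-plane displacement, velocity, and acceleration per pixel,
    assuming a simple lambda/(4*pi) phase-to-displacement sensitivity."""
    dt = 1.0 / frame_rate
    phase = np.unwrap(phase_stack, axis=0)            # temporal phase unwrapping
    disp = phase * wavelength / (4 * np.pi)           # metres
    vel = np.gradient(disp, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    return disp, vel, acc

# Synthetic 200 Hz vibration sampled at 4 kHz (illustrative values only).
t = np.arange(400) / 4000.0
true_disp = 0.3e-6 * np.sin(2 * np.pi * 200 * t)
phase = (4 * np.pi / 532e-9) * true_disp
wrapped = np.angle(np.exp(1j * phase))[:, None, None] * np.ones((1, 4, 4))
d, v, a = displacement_velocity_acceleration(wrapped, 4000.0, 532e-9)
print(f"peak displacement ~{np.max(np.abs(d)):.2e} m, peak velocity ~{np.max(np.abs(v)):.2e} m/s")
```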
Exploring of PST-TBPM in Monitoring Dynamic Deformation of Steel Structure in Vibration
NASA Astrophysics Data System (ADS)
Chen, Mingzhi; Zhao, Yongqian; Hai, Hua; Yu, Chengxin; Zhang, Guojian
2018-01-01
In order to monitor the dynamic deformation of a steel structure in real time, digital photography is used in this paper. Firstly, the grid method is used to correct the distortion of the digital camera. Then the digital cameras are used to capture the initial and experimental images of the steel structure to obtain its relative deformation. PST-TBPM (photographing scale transformation-time baseline parallax method) is used to eliminate the parallax error and convert the pixel change values of the deformation points into actual displacement values. In order to visualize the deformation trend of the steel structure, deformation curves are drawn based on the deformation values of the deformation points. Results show that the average absolute accuracy and relative accuracy of PST-TBPM are 0.28 mm and 1.1‰, respectively. Digital photography as used in this study can meet the accuracy requirements of steel structure deformation monitoring. It can also provide early warning of structural safety issues and give data support for managers' on-site safety decisions based on the deformation curves.
Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan
2011-04-07
We report a portable lensless on-chip microscope that can achieve <1 µm resolution over a wide field-of-view of ∼24 mm² without the use of any mechanical scanning. This compact on-chip microscope weighs ∼95 g and is based on partially coherent digital in-line holography. Multiple fiber-optic waveguides are butt-coupled to light emitting diodes, which are controlled by a low-cost micro-controller to sequentially illuminate the sample. The resulting lensfree holograms are then captured by a digital sensor-array and are rapidly processed using a pixel super-resolution algorithm to generate much higher resolution holographic images (both phase and amplitude) of the objects. This wide-field and high-resolution on-chip microscope, being compact and light-weight, would be important for global health problems such as diagnosis of infectious diseases in remote locations. Toward this end, we validate the performance of this field-portable microscope by imaging human malaria parasites (Plasmodium falciparum) in thin blood smears. Our results constitute the first time that a lensfree on-chip microscope has successfully imaged malaria parasites.
Luster measurements of lips treated with lipstick formulations.
Yadav, Santosh; Issa, Nevine; Streuli, David; McMullen, Roger; Fares, Hani
2011-01-01
In this study, digital photography in combination with image analysis was used to measure the luster of several lipstick formulations containing varying amounts and types of polymers. A weighed amount of lipstick was applied to a mannequin's lips and the mannequin was illuminated by a uniform beam of a white light source. Digital images of the mannequin were captured with a high-resolution camera and the images were analyzed using image analysis software. Luster analysis was performed using the Stamm (L(Stamm)) and Reich-Robbins (L(R-R)) luster parameters. Statistical analysis was performed on each luster parameter (L(Stamm) and L(R-R)), peak height, and peak width. Peak heights for the lipstick formulations containing 11% and 5% VP/eicosene copolymer were statistically different from those of the control. The L(Stamm) and L(R-R) parameters for the treatment containing 11% VP/eicosene copolymer were statistically different from those of the control. Based on the results obtained in this study, we are able to determine whether a polymer is a good pigment dispersant and contributes to the visually detected shine of a lipstick upon application. The methodology presented in this paper could serve as a tool for investigators to screen their ingredients for shine in lipstick formulations.
Enhanced Video-Oculography System
NASA Technical Reports Server (NTRS)
Moore, Steven T.; MacDougall, Hamish G.
2009-01-01
A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.
Video and thermal imaging system for monitoring interiors of high temperature reaction vessels
Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL
2012-01-10
A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
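The patent abstract above describes forming thermal images from intensity ratios of the color-filter channels. The sketch below shows generic two-color (ratio) pyrometry under the Wien approximation; the effective filter wavelengths and calibration constant are assumptions rather than values from the patent.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(red, green, lam_red=620e-9, lam_green=540e-9, k_cal=1.0):
    """Estimate temperature (K) from the red/green channel ratio using the
    Wien approximation for graybody emission (generic two-color pyrometry)."""
    ratio = k_cal * red.astype(float) / np.clip(green.astype(float), 1e-6, None)
    numerator = C2 * (1.0 / lam_green - 1.0 / lam_red)
    denominator = np.log(ratio) + 5.0 * np.log(lam_red / lam_green)
    return numerator / denominator

def wien_radiance(lam, T):
    """Spectral radiance up to a constant factor, Wien approximation."""
    return lam ** -5 * np.exp(-C2 / (lam * T))

# Synthetic check: generate channel signals for a 1500 K surface and recover it.
T_true, lam_r, lam_g = 1500.0, 620e-9, 540e-9
red = np.full((4, 4), wien_radiance(lam_r, T_true))
green = np.full((4, 4), wien_radiance(lam_g, T_true))
print(ratio_temperature(red, green).mean())   # ~1500 K
```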
NASA Astrophysics Data System (ADS)
da Silva Nunes, L. C.; dos Santos, Paulo Acioly M.
2004-10-01
We present an application of the stereomicroscope to recovering obliterated firearm serial numbers. We investigate a promising new, inexpensive combined method using both non-destructive and destructive techniques. With the use of a stereomicroscope coupled with a digital camera and a flexible cold light source, we can capture the image of the damaged area, and with continuous polishing, and sometimes with the help of image processing techniques, we can enhance the observed images; they can also be recorded as evidence. This method has already proven to be useful, in certain cases, on aluminum dotted pistol frames whose serial number is printed with a laser, when etching techniques are not successful. We can also observe acid-treated steel surfaces and enhance the images of recovered serial numbers, which sometimes lack definition.
Ultra-high throughput real-time instruments for capturing fast signals and rare events
NASA Astrophysics Data System (ADS)
Buckley, Brandon Walter
Wide-band signals play important roles in the most exciting areas of science, engineering, and medicine. To keep up with the demands of exploding internet traffic, modern data centers and communication networks are employing increasingly faster data rates. Wide-band techniques such as pulsed radar jamming and spread spectrum frequency hopping are used on the battlefield to wrestle control of the electromagnetic spectrum. Neurons communicate with each other using transient action potentials that last for only milliseconds at a time. And in the search for rare cells, biologists flow large populations of cells single file down microfluidic channels, interrogating them one-by-one, tens of thousands of times per second. Studying and enabling such high-speed phenomena pose enormous technical challenges. For one, parasitic capacitance inherent in analog electrical components limits their response time. Additionally, converting these fast analog signals to the digital domain requires enormous sampling speeds, which can lead to significant jitter and distortion. State-of-the-art imaging technologies, essential for studying biological dynamics and cells in flow, are limited in speed and sensitivity by finite charge transfer and read rates, and by the small numbers of photo-electrons accumulated in short integration times. And finally, ultra-high throughput real-time digital processing is required at the backend to analyze the streaming data. In this thesis, I discuss my work in developing real-time instruments, employing ultrafast optical techniques, which overcome some of these obstacles. In particular, I use broadband dispersive optics to slow down fast signals to speeds accessible to high-bit depth digitizers and signal processors. I also apply telecommunication multiplexing techniques to boost the speeds of confocal fluorescence microscopy. The photonic time stretcher (TiSER) uses dispersive Fourier transformation to slow down analog signals before digitization and processing. The act of time-stretching effectively boosts the performance of the back-end electronics and digital signal processors. The slowed down signals reach the back-end electronics with reduced bandwidth, and are therefore less affected by high-frequency roll-off and distortion. Time-stretching also increases the effective sampling rate of analog-to-digital converters and reduces aperture jitter, thereby improving resolution. Finally, the instantaneous throughputs of digital signal processors are enhanced by the stretch factor to otherwise unattainable speeds. Leveraging these unique capabilities, TiSER becomes the ideal tool for capturing high-speed signals and characterizing rare phenomena. For this thesis, I have developed techniques to improve the spectral efficiency, bandwidth, and resolution of TiSER using polarization multiplexing, all-optical modulation, and coherent dispersive Fourier transformation. To reduce the latency and improve the data handling capacity, I have also designed and implemented a real-time digital signal processing electronic backend, achieving 1.5 tera-bit per second instantaneous processing throughput. Finally, I will present results from experiments highlighting TiSER's impact in real-world applications. Confocal fluorescence microscopy is the most widely used method for unveiling the molecular composition of biological specimens. 
However, the weak optical emission of fluorescent probes and the tradeoff between imaging speed and sensitivity are problematic for acquiring blur-free images of fast phenomena and cells flowing at high speed. Here I introduce a new fluorescence imaging modality, which leverages techniques from wireless communication to reach record pixel and frame rates. Termed Fluorescence Imaging using Radio-frequency tagged Emission (FIRE), this new imaging modality is capable of resolving never-before-seen dynamics in living cells, such as action potentials in neurons and metabolic waves in astrocytes, as well as performing high-content image assays of cells and particles in high-speed flow.
Unmanned aerial vehicles (UAVs) for surveying marine fauna: a dugong case study.
Hodgson, Amanda; Kelly, Natalie; Peel, David
2013-01-01
Aerial surveys of marine mammals are routinely conducted to assess and monitor species' habitat use and population status. In Australia, dugongs (Dugong dugon) are regularly surveyed and long-term datasets have formed the basis for defining habitat of high conservation value and risk assessments of human impacts. Unmanned aerial vehicles (UAVs) may facilitate more accurate, human-risk free, and cheaper aerial surveys. We undertook the first Australian UAV survey trial in Shark Bay, western Australia. We conducted seven flights of the ScanEagle UAV, mounted with a digital SLR camera payload. During each flight, ten transects covering a 1.3 km² area frequently used by dugongs, were flown at 500, 750 and 1000 ft. Image (photograph) capture was controlled via the Ground Control Station and the capture rate was scheduled to achieve a prescribed 10% overlap between images along transect lines. Images were manually reviewed post hoc for animals and scored according to sun glitter, Beaufort Sea state and turbidity. We captured 6243 images, 627 containing dugongs. We also identified whales, dolphins, turtles and a range of other fauna. Of all possible dugong sightings, 95% (CI = 90%, 98%) were subjectively classed as 'certain' (unmistakably dugongs). Neither our dugong sighting rate, nor our ability to identify dugongs with certainty, were affected by UAV altitude. Turbidity was the only environmental variable significantly affecting the dugong sighting rate. Our results suggest that UAV systems may not be limited by sea state conditions in the same manner as sightings from manned surveys. The overlap between images proved valuable for detecting animals that were masked by sun glitter in the corners of images, and identifying animals initially captured at awkward body angles. This initial trial of a basic camera system has successfully demonstrated that the ScanEagle UAV has great potential as a tool for marine mammal aerial surveys.
Tyurin readies the NASDA exposure experiment cases for their EVA
2001-10-14
ISS003-E-6623 (14 October 2001) --- Cosmonaut Mikhail Tyurin, Expedition Three flight engineer representing Rosaviakosmos, works with hardware for the Micro-Particles Capturer (MPAC) and Space Environment Exposure Device (SEED) experiment and fixture mechanism in the Zvezda Service Module on the International Space Station (ISS). MPAC and SEED were developed by Japan's National Space Development Agency (NASDA), and Russia developed the fixture mechanism. This image was taken with a digital still camera.
MS Linnehan watches EVA 2 from aft flight deck
2002-03-05
STS109-E-5621 (5 March 2002) --- Astronaut Richard M. Linnehan, mission specialist, monitors the STS-109 mission's second space walk from the aft flight deck of the Space Shuttle Columbia. Astronauts James H. Newman and Michael J. Massimino were working on the Hubble Space Telescope (HST), temporarily captured in the shuttle's cargo bay. Linnehan had participated in the mission's first space walk on the previous day. This image was recorded with a digital still camera.
Daaboul, George G; Lopez, Carlos A; Chinnala, Jyothsna; Goldberg, Bennett B; Connor, John H; Ünlü, M Selim
2014-06-24
Rapid, sensitive, and direct label-free capture and characterization of nanoparticles from complex media such as blood or serum will broadly impact medicine and the life sciences. We demonstrate identification of virus particles in complex samples for replication-competent wild-type vesicular stomatitis virus (VSV), defective VSV, and Ebola- and Marburg-pseudotyped VSV with high sensitivity and specificity. Size discrimination of the imaged nanoparticles (virions) allows differentiation between modified viruses having different genome lengths and facilitates a reduction in the counting of nonspecifically bound particles to achieve a limit-of-detection (LOD) of 5 × 10³ pfu/mL for the Ebola and Marburg VSV pseudotypes. We demonstrate the simultaneous detection of multiple viruses in a single sample (composed of serum or whole blood) for screening applications and uncompromised detection capabilities in samples contaminated with high levels of bacteria. By employing affinity-based capture, size discrimination, and a "digital" detection scheme to count single virus particles, we show that a robust and sensitive virus/nanoparticle sensing assay can be established for targets in complex samples. The nanoparticle microscopy system is termed the Single Particle Interferometric Reflectance Imaging Sensor (SP-IRIS) and is capable of high-throughput and rapid sizing of large numbers of biological nanoparticles on an antibody microarray for research and diagnostic applications.
Simulation of a complete X-ray digital radiographic system for industrial applications.
Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H
2018-05-19
Simulating X-ray images is of great importance in industry and medicine. Such simulation permits optimization of the parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) image plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and current of 5 mA was simulated. A three-layer uniform plate including a polymer overcoat layer, a phosphor layer and a polycarbonate backing layer was defined and simulated as the CR imaging plate. To model image formation in the image plate, the absorbed dose was first calculated in each pixel inside the phosphor layer of the CR imaging plate using the mesh tally in the MCNPX code and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed and images of two step wedges, one aluminum and one steel, were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system.
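The abstract states that the per-pixel absorbed dose from the MCNPX mesh tally was converted to gray values through a mathematical relationship determined in a separate procedure, but does not give that relationship. The sketch below assumes, purely for illustration, a linear dose-to-gray calibration fitted to paired measurements; the calibration numbers and array sizes are invented.

```python
# Sketch: map a simulated per-pixel dose distribution to gray values using a
# calibration curve fitted from paired (dose, gray value) measurements.
# The linear form and the sample numbers are assumptions, not the paper's model.
import numpy as np

dose_cal = np.array([0.0, 0.5, 1.0, 2.0, 4.0])          # calibration doses (arbitrary units)
gray_cal = np.array([10., 1020., 2050., 4100., 8150.])  # measured gray values

coeffs = np.polyfit(dose_cal, gray_cal, deg=1)           # fit gray = a * dose + b

dose_map = np.random.default_rng(0).uniform(0.0, 4.0, size=(64, 64))  # stand-in mesh tally
gray_map = np.polyval(coeffs, dose_map).clip(0, 2**16 - 1).astype(np.uint16)
print(gray_map.min(), gray_map.max())
```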
Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement
Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi
2015-01-01
Objectives: Various methods have been introduced for evaluation of tooth movement in orthodontics. The challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement and to assess the reliability of this method. Materials and Methods: Eighteen patients were evaluated in this study. Three intraoral digital images from the buccal view were captured from each patient at half-hour intervals. All photos were imported into AutoCAD 2011 software and calibrated, and the distance between the canine and molar hooks was measured. The data were analyzed using the intraclass correlation coefficient. Results: Photographs were found to have a high reliability coefficient (P > 0.05). Conclusion: The introduced method is an accurate, efficient and reliable method for evaluation of tooth movement. PMID:26622272
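The core measurement step, once a photograph is calibrated, is a scaled distance between two landmarks (the canine and molar hooks). A minimal stand-alone sketch of that calibration-and-measurement idea is shown below; the pixel coordinates and the 10 mm reference length are hypothetical, and this is not the AutoCAD workflow itself.

```python
# Sketch: calibrate a photograph with an object of known length, then measure
# the distance between two landmarks. Coordinates and lengths are hypothetical.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Two points on a reference of known physical length (e.g. a ruler segment)
ref_px = distance((120, 310), (520, 318))
mm_per_px = 10.0 / ref_px            # assume the reference spans 10 mm

canine_hook = (233, 402)
molar_hook = (611, 418)
print(round(distance(canine_hook, molar_hook) * mm_per_px, 3), "mm")
```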
Kakudo, Natsuko; Kushida, Satoshi; Suzuki, Kenji; Kusumoto, Kenji
2013-12-01
Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological cosmetic medicine. However, the improvements seen with chemical peeling are often very minor, and it is difficult to conduct a quantitative assessment of pre- and post-treatment appearance. We report the pre- and post-peeling effects for facial pigment deposition using a novel computer analysis method for digital-camera-captured images. Glycolic acid chemical peeling was performed a total of 5 times at 2-week intervals in 23 healthy women. We conducted a computer image analysis by utilizing the Robo Skin Analyzer CS 50 and Clinical Suite 2.1 and then reviewed each parameter for the area of facial pigment deposition pre- and post-treatment. Parameters were pigmentation size and four pigmentation categories: little pigmentation and three levels of marked pigmentation (Lv1, 2, and 3) based on detection threshold. Each parameter was measured, and the total area of facial pigmentation was calculated. The total area of little pigmentation and marked pigmentation (Lv1) was significantly reduced. On the other hand, a significant difference was not observed for the total area of marked pigmentation Lv2 and Lv3. This suggests that glycolic acid chemical peeling has an effect on small facial pigment deposition or on light pigment deposition. As the Robo Skin Analyzer is useful for objectively quantifying and analyzing minor changes in facial skin, it is considered to be an effective tool for accumulating treatment evidence in the cosmetic and esthetic skin field.
Mapping Land and Water Surface Topography with instantaneous Structure from Motion
NASA Astrophysics Data System (ADS)
Dietrich, J.; Fonstad, M. A.
2012-12-01
Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slow-moving objects. However, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene, it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include any setting where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
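The comparison of repeated water-surface models by "traditional digital surface model differencing" amounts to subtracting co-registered elevation grids. A minimal DEM-of-difference sketch, assuming two grids on the same cells (synthetic data, assumed 0.1 m cell size), is given below.

```python
# Sketch: DEM differencing for two co-registered surface models (same grid).
# Arrays and the 0.1 m cell size are placeholders, not survey data.
import numpy as np

rng = np.random.default_rng(1)
dem_t0 = rng.normal(10.0, 0.05, size=(200, 300))               # water surface at time 0 (m)
dem_t1 = dem_t0 + rng.normal(0.02, 0.01, size=dem_t0.shape)    # later surface

cell_area_m2 = 0.1 * 0.1
dod = dem_t1 - dem_t0                         # DEM of difference (m)
volume_change_m3 = dod.sum() * cell_area_m2   # net change over the grid
print(f"mean dZ = {dod.mean():.3f} m, net volume change = {volume_change_m3:.2f} m^3")
```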
Clayton, Gemma
2013-06-01
This project was undertaken as part of the PhD research project of Paul Malone, Principal Investigator, Covance plc, Harrogate. Mr Malone approached the photography department for involvement in the study with the aim of settling the current debate on the anatomical and histological features of the distal radioulnar ligaments by capturing the anatomy photographically throughout the process of dissection via a microtome. The author was approached to lead on the photographic protocol as part of her post-graduate certificate training at Staffordshire University. High-resolution digital images of an entire human arm were required, the main area of interest being the distal radioulnar joint of the wrist. Images were to be taken at 40 μm intervals as the specimen was sliced. When microtomy was undertaken through the ligaments, images were made at 20 μm intervals. A method of suspending a camera approximately 1 metre above the specimen was devised, together with the preparation for the capture, processing and storage of images. The resulting images were then to be subject to further analysis in the form of 3-dimensional reconstruction, using computer modelling techniques and software. The possibility of merging the images with sequences obtained from both CT and MRI using image handling software is also an area of exploration, in collaboration with the University of Manchester's Visualisation Centre.
Fluorescent Microscopy Enhancement Using Imaging
NASA Astrophysics Data System (ADS)
Conrad, Morgan P.; Recktenwald, Diether J.; Woodhouse, Bryan S.
1986-06-01
To enhance our capabilities for observing fluorescent stains in biological systems, we are developing a low cost imaging system based around an IBM AT microcomputer and a commercial image capture board compatible with a standard RS-170 format video camera. The image is digitized in real time with 256 grey levels, while being displayed and also stored in memory. The software allows for interactive processing of the data, such as histogram equalization or pseudocolor enhancement of the display. The entire image, or a quadrant thereof, can be averaged over time to improve the signal to noise ratio. Images may be stored to disk for later use or comparison. The camera may be selected for better response in the UV or near IR. Combined with signal averaging, this increases the sensitivity relative to that of the human eye, while still allowing for the fluorescence distribution on either the surface or internal cytoskeletal structure to be observed.
Mars Cameras Make Panoramic Photography a Snap
NASA Technical Reports Server (NTRS)
2008-01-01
If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4 megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.
NASA Astrophysics Data System (ADS)
de Oliveira, Helder C. R.; Moraes, Diego R.; Reche, Gustavo A.; Borges, Lucas R.; Catani, Juliana H.; de Barros, Nestor; Melo, Carlos F. E.; Gonzaga, Adilson; Vieira, Marcelo A. C.
2017-03-01
This paper presents a new local micro-pattern texture descriptor for the detection of Architectural Distortion (AD) in digital mammography images. AD is a subtle contraction of breast parenchyma that may represent an early sign of breast cancer. Due to its subtlety and variability, AD is more difficult to detect than microcalcifications and masses, and is commonly found in retrospective evaluations of false-negative mammograms. Several computer-based systems have been proposed for automatic detection of AD, but their performance is still unsatisfactory. The proposed descriptor, Local Mapped Pattern (LMP), is a generalization of the Local Binary Pattern (LBP), which is considered one of the most powerful feature descriptors for texture classification in digital images. Compared to LBP, the LMP descriptor captures more effectively the minor differences between the local image pixels. Moreover, LMP is a parametric model which can be optimized for the desired application. In our work, the LMP performance was compared to the LBP and four of Haralick's texture descriptors for the classification of 400 regions of interest (ROIs) extracted from clinical mammograms. ROIs were selected and divided into four classes: AD, normal tissue, microcalcifications and masses. Feature vectors were used as input to a multilayer perceptron neural network with a single hidden layer. Results showed that LMP is a good descriptor to distinguish AD from other anomalies in digital mammography. LMP performance was slightly better than the LBP and comparable to Haralick's descriptors (mean classification accuracy = 83%).
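The LMP mapping itself is parametric and its exact form is not reproduced in the abstract, so the sketch below shows only the comparable baseline pipeline described there: LBP features fed to a single-hidden-layer multilayer perceptron. The histogram featurization, ROIs, and labels are stand-ins, not the study's data.

```python
# Sketch: LBP texture features fed to a single-hidden-layer MLP, as a stand-in
# for the LBP baseline described in the abstract. ROIs here are random noise.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
rois = (rng.random((80, 64, 64)) * 255).astype(np.uint8)   # placeholder ROIs
labels = rng.integers(0, 4, size=80)   # 4 classes: AD, normal, calcifications, masses

def lbp_histogram(img, p=8, r=1):
    codes = local_binary_pattern(img, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

features = np.array([lbp_histogram(img) for img in rois])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```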
Legally compatible design of digital dactyloscopy in future surveillance scenarios
NASA Astrophysics Data System (ADS)
Pocs, Matthias; Schott, Maik; Hildebrandt, Mario
2012-06-01
Innovation in multimedia systems impacts on our society. For example, surveillance camera systems combine video and audio information. Currently, a new sensor for capturing fingerprint traces is being researched. It combines greyscale images, to determine the intensity of the image signal, with topographic information, to determine fingerprint texture on a variety of surface materials. This research proposes new application areas, which will be analyzed from a technical-legal viewpoint. It assesses how technology design can promote the legal criteria of German and European privacy and data protection. For this we focus on one technology goal as an example.
Triaxial testing system for pressure core analysis using image processing technique
NASA Astrophysics Data System (ADS)
Yoneda, J.; Masui, A.; Tenma, N.; Nagao, J.
2013-11-01
In this study, a newly developed triaxial testing system to investigate the strength, deformation behavior, and/or permeability of gas hydrate-bearing sediments in the deep sea is described. Transport of the pressure core from the storage chamber to the interior of the sealing sleeve of a triaxial cell without depressurization was achieved. An image processing technique was used to capture the motion and local deformation of a specimen in a transparent acrylic triaxial pressure cell, and digital photographs were obtained at each strain level during the compression test. The material strength was successfully measured and the failure mode was evaluated under high confining and pore water pressures.
NASA Astrophysics Data System (ADS)
Czermak, A.; Zalewska, A.; Dulny, B.; Sowicki, B.; Jastrząb, M.; Nowak, L.
2004-07-01
The needs for real-time monitoring of the hadrontherapy beam intensity and profile, as well as requirements for fast dosimetry using Monolithic Active Pixel Sensors (MAPS), led the SUCIMA collaboration to design a unique Data Acquisition System (DAQ SUCIMA Imager). The DAQ system has been developed on one of the most advanced Xilinx Field Programmable Gate Array chips, the Virtex-II. A dedicated multifunctional electronic board for capturing the detector's analogue signals, processing them digitally in parallel, and compressing and transmitting the final data through a high-speed USB 2.0 port has been prototyped and tested.
Depth estimation using a lightfield camera
NASA Astrophysics Data System (ADS)
Roper, Carissa
The latest innovation in camera design has come in the form of the lightfield, or plenoptic, camera, which uses microlens arrays to capture 4-D radiance data rather than just the 2-D scene image. With the spatial and angular light ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth of field in different portions of a given scene. There are limitations to the precision due to hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on a commercially available plenoptic camera.
NASA Astrophysics Data System (ADS)
Abdelsalam, D. G.; Shaalan, M. S.; Eloker, M. M.; Kim, Daesuk
2010-06-01
In this paper, a method is presented to accurately measure the radius of curvature of different types of curved surfaces, with radii of curvature of 38 000, 18 000 and 8000 mm, using multiple-beam interference fringes in reflection. The images captured by the digital detector were corrected by the flat-fielding method. The corrected images were analyzed and the form of the surfaces was obtained. A 3D profile for the three types of surfaces was obtained using Zernike polynomial fitting. Some sources of uncertainty in measurement were calculated by means of ray-tracing simulations and the uncertainty budget was estimated within λ/40.
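Flat fielding, used here to correct the captured fringe images, divides out the detector's pixel-to-pixel response using dark and uniformly illuminated reference frames. A generic sketch with synthetic frames (not the paper's data) follows.

```python
# Sketch: generic flat-field correction of a raw detector image.
# raw, dark and flat frames here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
dark = rng.normal(100, 2, (512, 512))            # dark (offset) frame
flat = dark + rng.normal(3000, 50, (512, 512))   # uniformly illuminated frame
raw = dark + rng.normal(1500, 30, (512, 512))    # raw fringe image (placeholder)

gain = flat - dark
corrected = (raw - dark) * gain.mean() / gain     # remove pixel-to-pixel response
print(corrected.mean(), corrected.std())
```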
Eskandarloo, Amir; Yousefi, Arman; Soheili, Setareh; Ghazikhanloo, Karim; Amini, Payam; Mohammadpoor, Haniyeh
2017-01-01
Background: Nowadays, digital radiography is widely used in dental practice. One of the most common types is the Photo Stimulated Phosphor Plate (PSP). Objective: The aims of this experimental study were to evaluate the impacts of different combinations of storage conditions and varying delays in reading of digital images captured using PSPs. Methods: Standardized images of a step wedge were obtained using PSPs from the Digora digital system. Plates were exposed and immediately scanned to produce the baseline gold standard. The plates were re-exposed and stored in four different storage conditions: white light, yellow light, natural light environment and dark room, then scanned after 10 and 30 minutes and 4 and 8 hours. Objective analysis was conducted by density measurements and the data were analyzed statistically using the GEE test. Subjective analysis was performed by two oral and maxillofacial radiologists and the results were analyzed using McNemar's test. Results: The results from the GEE analysis show that in the natural light environment, the densities at 10 minutes did not differ from the baseline. The mean densities decreased significantly over time in all environments. The mean densities in step 2 for the dark room environment decreased with a significantly slighter slope in comparison to the yellow environment. Conclusion: PSP images showed a significant decrease in density in plates scanned 10 minutes or longer after exposure, which may not be detected clinically. The yellow light environment had a different impact on the quality of PSP images. The spatial resolution did not change significantly with time. PMID:29430262
Laser-induced fluorescence imaging of bacteria
NASA Astrophysics Data System (ADS)
Hilton, Peter J.
1998-12-01
This paper outlines a method for optically detecting bacteria on various backgrounds, such as meat, by imaging their laser induced auto-fluorescence response. This method can potentially operate in real-time, which is many times faster than current bacterial detection methods, which require culturing of bacterial samples. This paper describes the imaging technique employed whereby a laser spot is scanned across an object while capturing, filtering, and digitizing the returned light. Preliminary results of the bacterial auto-fluorescence are reported and plans for future research are discussed. The results to date are encouraging with six of the eight bacterial strains investigated exhibiting auto-fluorescence when excited at 488 nm. Discrimination of these bacterial strains against red meat is shown and techniques for reducing background fluorescence discussed.
Correlation applied to the recognition of regular geometric figures
NASA Astrophysics Data System (ADS)
Lasso, William; Morales, Yaileth; Vega, Fabio; Díaz, Leonardo; Flórez, Daniel; Torres, Cesar
2013-11-01
We developed a system capable of recognizing regular geometric figures. Images are captured automatically by the software after validating that a figure is presented to the camera lens; the digitized image is compared with a database of previously captured images, and the figure is then recognized and finally identified using spoken words referring to its name. The contribution of the proposed system is that data acquisition is done in real time using spy smart glasses with a USB interface, offering a system that is equally effective but much more economical. This tool may be useful as an application through which visually impaired people can obtain information about their surrounding environment.
In vitro imaging of ophthalmic tissue by digital interference holography
NASA Astrophysics Data System (ADS)
Potcoava, Mariana C.; Kay, Christine N.; Kim, Myung K.; Richards, David W.
2010-01-01
We used digital interference holography (DIH) for in vitro imaging of the human optic nerve head and retina. Samples of peripheral retina, macula, and optic nerve head from two formaldehyde-preserved human eyes were dissected and mounted onto slides. Holograms were captured by a monochrome CCD camera (Sony XC-ST50, with 780 × 640 pixels and a pixel size of ∼9 µm). The light source was a solid-state-pumped dye laser with a tunable wavelength range of 560-605 nm. Using about 50 wavelengths in this band, holograms were obtained and numerically reconstructed using custom software based on NI LabVIEW. Tomographic images were produced by superposition of holograms. Holograms of all tissue samples were obtained with a signal-to-noise ratio of approximately 50 dB. Optic nerve head characteristics (shape, diameter, cup depth, and cup width) were quantified with a resolution of a few microns (4.06-4.8 µm). Multiple layers were distinguishable in cross-sectional images of the macula. To our knowledge, this is the first report of DIH being used to image human macular and optic nerve tissue. DIH has the potential to become a useful tool for researchers and clinicians in the diagnosis and treatment of many ocular diseases, including glaucoma and a variety of macular diseases.
NASA Astrophysics Data System (ADS)
Abu-Zaid, N. A. M.
2017-11-01
In many circumstances, it is difficult for humans to reach certain areas, due to their topography, personal safety concerns, or security regulations in the country. Governments and individuals need to measure these areas and classify the green parts for reclamation in order to benefit from them. To solve this problem, this research proposes using a Phantom aircraft to capture a digital image of the targeted area, then using a segmentation algorithm to separate the green space and calculate its area. It was necessary to deal with two problems. The first is the variable elevation at which an image was taken, which leads to a change in the physical area covered by each pixel. To overcome this problem, a fourth-degree polynomial was fitted to experimental data. The second problem was the existence of different unconnected pieces of green area in a single image when we might be interested in only one of them. To solve this problem, the probability of classifying the targeted area as green was increased, while the probability for other, untargeted sections was decreased by marking parts of them as non-green. A practical rule was also devised to measure the target area in the digital image for comparison with field measurements and the polynomial fit.
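The described pipeline has two parts: a colour-based segmentation of green pixels and a conversion from pixel count to physical area via a fourth-degree polynomial in flight elevation. The sketch below illustrates both steps; the HSV thresholds and polynomial coefficients are assumptions, not the study's calibration.

```python
# Sketch: segment green vegetation in an aerial image and convert the pixel
# count to physical area. The HSV thresholds and the 4th-degree polynomial
# coefficients are illustrative assumptions, not the paper's calibration.
import numpy as np
import cv2

def green_area_m2(image_bgr, elevation_m, poly_coeffs):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))   # "green" band in HSV
    pixel_area_m2 = np.polyval(poly_coeffs, elevation_m)    # area of one pixel at this altitude
    n_green = int(np.count_nonzero(mask))
    return n_green, n_green * pixel_area_m2

coeffs = [1e-12, -3e-10, 4e-8, 1e-6, 0.0]      # placeholder 4th-degree fit
img = np.zeros((480, 640, 3), dtype=np.uint8)
img[100:300, 200:400] = (40, 180, 60)          # synthetic green patch (BGR)
print(green_area_m2(img, elevation_m=50.0, poly_coeffs=coeffs))
```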
NASA Astrophysics Data System (ADS)
Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi
2001-04-01
We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) X 1200 (V) pixels and a wide imaging area of 28.1 X 13.8 mm, while the type II chip has 1776 X 1626 pixels and an active imaging area of 20.4 X 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 X 200 pixel-images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
AMUC: Associated Motion capture User Categories.
Norman, Sally Jane; Lawson, Sian E M; Olivier, Patrick; Watson, Paul; Chan, Anita M-A; Dade-Robertson, Martyn; Dunphy, Paul; Green, Dave; Hiden, Hugo; Hook, Jonathan; Jackson, Daniel G
2009-07-13
The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, time and position signals obtained from the sketch being mapped to the properties of data streams stored in the motion capture repository. As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design and navigation, and digital image processing, underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing identification of rich potential development areas.
Rapid mapping of landslide disaster using UAV- photogrammetry
NASA Astrophysics Data System (ADS)
Cahyono, A. B.; Zayd, R. A.
2018-03-01
Unmanned Aerial Vehicle (UAV) systems offer many advantages in several mapping applications such as slope mapping, geohazard studies, etc. This study utilizes a UAV system for a landslide disaster that occurred in Jombang Regency, East Java. The study uses a rotary-wing UAV because rotary-wing units are stable and able to capture images easily. Aerial photographs were acquired in strips following standard aerial photography procedure; 60 photos were taken. Secondary data consisting of ground control points surveyed with geodetic GPS and check points established with a total station were used. The digital camera was calibrated using close-range photogrammetric software and the recovered camera calibration parameters were then used in the processing of the digital images. All the aerial photographs were processed using digital photogrammetric software and an orthophoto was produced. The final result is a 1:1500-scale orthophoto map produced with the SfM algorithm, with a GSD accuracy of 3.45 cm, and a volume of 10527.03 m3 calculated from contour-line delineation. This differs from the result of the terrestrial method by 964.67 m3, or 8.4%.
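The reported accuracy is expressed as a ground sample distance (GSD). The standard relation between GSD, flying height, focal length, and sensor pixel pitch is sketched below; the camera parameters are assumptions chosen only to give a value of the same order as the 3.45 cm reported.

```python
# Sketch: ground sample distance of a nadir image from flying height, focal
# length and sensor pixel pitch. The parameter values are assumptions.
def gsd_cm(flying_height_m, focal_length_mm, pixel_pitch_um):
    return (pixel_pitch_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3) * 100.0

print(round(gsd_cm(flying_height_m=100.0, focal_length_mm=5.0, pixel_pitch_um=1.7), 2), "cm")
```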
NASA Astrophysics Data System (ADS)
Russell, E.; Chi, J.; Waldo, S.; Pressley, S. N.; Lamb, B. K.; Pan, W.
2017-12-01
Diurnal and seasonal gas fluxes vary by crop growth stage. Digital cameras are increasingly being used to monitor inter-annual changes in vegetation phenology in a variety of ecosystems. These cameras are not designed as scientific instruments, but the information they gather can add value to established measurement techniques (i.e. eddy covariance). This work combined deconstructed digital images with eddy covariance data from five agricultural sites (1 fallow, 4 cropped) in the inland Pacific Northwest, USA. The data were broken down with respect to crop stage and management activities. The fallow field highlighted the camera response to changing net radiation, illumination, and rainfall. At the cropped sites, the net ecosystem exchange, gross primary production, and evapotranspiration were correlated with the greenness and redness values derived from the images over the growing season. However, the color values do not change quickly enough to respond to day-to-day variability in the flux exchange, as the two measurement types are based on different processes. The management practices and changes in phenology through the growing season were not visible within the camera data, though the camera did capture the general evolution of the ecosystem fluxes.
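The "greenness" and "redness" values derived from camera images are commonly computed as chromatic coordinates over a fixed region of interest; the formulation below (green and red chromatic coordinates) is a typical choice and is assumed here rather than taken from the paper.

```python
# Sketch: green and red chromatic coordinates (GCC, RCC) from a region of
# interest in a camera image, a common way to derive camera "greenness".
# The image and ROI are placeholders.
import numpy as np

def chromatic_coordinates(rgb_image, roi):
    r0, r1, c0, c1 = roi
    patch = rgb_image[r0:r1, c0:c1].astype(float)
    r, g, b = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
    total = r + g + b
    return g / total, r / total          # (GCC, RCC)

img = np.random.default_rng(3).integers(0, 255, size=(480, 640, 3))
gcc, rcc = chromatic_coordinates(img, roi=(100, 300, 100, 500))
print(f"GCC = {gcc:.3f}, RCC = {rcc:.3f}")
```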
Computerized morphometry as an aid in distinguishing recurrent versus nonrecurrent meningiomas.
Noy, Shawna; Vlodavsky, Euvgeni; Klorin, Geula; Drumea, Karen; Ben Izhak, Ofer; Shor, Eli; Sabo, Edmond
2011-06-01
To use novel digital and morphometric methods to identify variables able to better predict the recurrence of intracranial meningiomas. Histologic images from 30 previously diagnosed meningioma tumors that recurred over 10 years of follow-up were consecutively selected from the Rambam Pathology Archives. Images were captured and morphometrically analyzed. Novel algorithms of digital pattern recognition using Fourier transformation and fractal and nuclear texture analyses were applied to evaluate the overall growth pattern complexity of the tumors, as well as the chromatin texture of individual tumor nuclei. The extracted parameters were then correlated with patient prognosis. Kaplan-Meier analyses revealed statistically significant associations between tumor morphometric parameters and recurrence times. Tumors with less nuclear orientation, more nuclear density, higher fractal dimension, and less regular chromatin textures tended to recur faster than those with a higher degree of nuclear order, less pattern complexity, lower density, and more homogeneous chromatin nuclear textures (p < 0.01). To our knowledge, these digital morphometric methods were used for the first time to accurately predict tumor recurrence in patients with intracranial meningiomas. The use of these methods may bring additional valuable information to the clinician regarding the optimal management of these patients.
Turchini, John; Buckland, Michael E; Gill, Anthony J; Battye, Shane
2018-05-30
Three-dimensional (3D) photogrammetry is a method of image-based modeling in which data points in digital images, taken from offset viewpoints, are analyzed to generate a 3D model. This modeling technique has been widely used in the context of geomorphology and artificial imagery, but has yet to be used within the realm of anatomic pathology. To describe the application of a 3D photogrammetry system capable of producing high-quality 3D digital models and its uses in routine surgical pathology practice as well as medical education, we modeled specimens received in the 2 participating laboratories. The capture and photogrammetry process was automated using user control software, a digital single-lens reflex camera, and a digital turntable, to generate a 3D model with the output in a PDF file. The entity demonstrated in each specimen was well demarcated and easily identified. Adjacent normal tissue could also be easily distinguished. Colors were preserved. The concave shapes of any cystic structures and normal convex rounded structures were discernable. Surgically important regions were identifiable. Macroscopic 3D modeling of specimens can be achieved through Structure-From-Motion photogrammetry technology and can be applied quickly and easily in routine laboratory practice. There are numerous advantages to the use of 3D photogrammetry in pathology, including improved clinicopathologic correlation for the surgeon and enhanced medical education, revolutionizing the digital pathology museum with virtual reality environments and 3D-printing of specimen models.
NASA Astrophysics Data System (ADS)
Parraman, Carinna
2012-01-01
This presentation highlights issues relating to the digital capture and printing of 2D and 3D artefacts and accurate colour reproduction of 3D objects. There is a range of opportunities and technologies for the scanning and printing of two-dimensional and three-dimensional artefacts [1]. A successful approach, the Polynomial Texture Mapping (PTM) technique used to create a Reflectance Transformation Image (RTI) [2-4], is being used for the conservation and heritage of artworks, as these methods are non-invasive and non-destructive to fragile artefacts. This approach captures the surface detail of two-dimensional artworks using a multidimensional method: a hemispherical dome comprising 64 lamps is used to capture an entire surface topography. The benefit of this approach is to provide a highly detailed visualization of the surface of materials and objects.
HST High Gain Antennae photographed by Electronic Still Camera
1993-12-04
S61-E-021 (7 Dec 1993) --- This close-up view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members have been working in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Hubble Space Telescope photographed by Electronic Still Camera
1993-12-04
S61-E-001 (4 Dec 1993) --- This medium close-up view of the top portion of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
HST Solar Arrays photographed by Electronic Still Camera
1993-12-07
S61-E-020 (7 Dec 1993) --- This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993, in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Common-path digital holographic microscopy based on a beam displacer unit
NASA Astrophysics Data System (ADS)
Di, Jianglei; Zhang, Jiwei; Song, Yu; Wang, Kaiqiang; Wei, Kun; Zhao, Jianlin
2018-02-01
Digital holographic microscopy (DHM) has become a novel tool with the advantages of full-field, non-destructive, high-resolution and 3D imaging, which captures the quantitative amplitude and phase information of microscopic specimens. It is a well-established method for digitally recording and numerically reconstructing the full complex wavefront of a sample, with a diffraction-limited lateral resolution down to 0.3 μm depending on the numerical aperture of the microscope objective. Meanwhile, its axial resolution is better than 10 nm due to the interferometric nature of phase imaging. Compared with typical optical configurations such as the Mach-Zehnder interferometer and the Michelson interferometer, common-path DHM has the advantages of a simple and compact configuration, high stability, and so on. Here, a simple, compact, and low-cost common-path DHM based on a beam displacer unit is proposed for quantitative phase imaging of biological cells. The beam displacer unit is completely compatible with a commercial microscope and can easily be set up at the output port of the microscope as a compact, independent device. The technique achieves quantitative phase measurement of biological cells with an excellent temporal stability of 0.51 nm, which gives it good prospects in the fields of biological and medical science. Living mouse osteoblastic cells are quantitatively measured with the system to demonstrate its capability and applicability.
Lu, Hangwen; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei
2016-01-01
Differential phase contrast (DPC) is a non-interferometric quantitative phase imaging method achieved using an asymmetric imaging procedure. We report a pupil-modulation differential phase contrast (PMDPC) imaging method that filters a sample's Fourier domain with half-circle pupils. A phase gradient image is captured with each half-circle pupil, and a quantitative high-resolution phase image is obtained after a deconvolution process using a minimum of two phase gradient images. Here, we introduce the PMDPC quantitative phase image reconstruction algorithm and realize it experimentally in a 4f system with an SLM placed at the pupil plane. In our current experimental setup, with a numerical aperture of 0.36, we obtain a quantitative phase image with a resolution of 1.73 μm after computationally removing system aberrations and refocusing. We also extend the depth of field digitally by 20 times, to ±50 μm, with a resolution of 1.76 μm. PMID:27828473
STS-42 Earth observation of Kamchatka Peninsula
NASA Technical Reports Server (NTRS)
1992-01-01
STS-42 Earth observation taken aboard Discovery, Orbiter Vehicle (OV) 103, with an electronic still camera (ESC) is of the Kamchatka Peninsula in Russia. Mid-afternoon sun projects long shadows from volcanoes on the Kamchatka Peninsula. This flat-topped volcano with the sharp summit crater is Tobachinsky, over 3,085 meters high. Its last major eruption was in 1975 and 1976, but it has been very active since the middle of the sixteenth century. The shadows cast by the low sunlight bring out the dramatic relief of the volcano as well as the smaller morphologic features. For example, the small hills in the foreground and behind the central volcano are cinder cones, approximately only 200 meters high. Note the sharp triangular shadow from the conical volcano at right. Electronic still photography is a relatively new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital images from STS-42 were stored on a disk and brought home with the flight crewmembers for processing.
STS-42 Earth observation of Kamchatka Peninsula
NASA Technical Reports Server (NTRS)
1992-01-01
STS-42 Earth observation taken aboard Discovery, Orbiter Vehicle (OV) 103, with an electronic still camera (ESC) is of the Kamchatka Peninsula in Russia. Mid-afternoon sun projects long shadows from volcanoes on the Kamchatka Peninsula. This flat-topped volcano with the sharp summit crater is Tobachinsky, over 3,085 meters high. Its last major eruption was in 1975 and 1976, but it has been very active since the middle of the sixteenth century. The shadows cast by the low sunlight bring out the dramatic relief of the volcano as well as the smaller morphologic features. Electronic still photography is a relatively new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital images from STS-42 were stored on a disk and brought home with the flight crewmembers for processing. ESC was developed by the JSC Man-Systems Division, and this mission's application of it is part of a continuing evolutionary development.
A Photometric Technique for Determining Fluid Concentration using Consumer-Grade Hardware
NASA Technical Reports Server (NTRS)
Leslie, F.; Ramachandran, N.
1999-01-01
In support of a separate study to produce an exponential concentration gradient in a magnetic fluid, a noninvasive technique for determining species concentration from off-the-shelf hardware has been developed. The approach uses a backlighted fluid test cell photographed with a commercial digital camcorder. Because the light extinction coefficient is wavelength dependent, tests were conducted to determine the best filter color to use, although some guidance was also provided using an absorption spectrophotometer. With the appropriate filter in place, the attenuation of the light passing through the test cell was captured by the camcorder. The digital image was analyzed for intensity using software from Scion Image Corp. downloaded from the Internet. The analysis provides a two-dimensional array of concentration with an average error of 0.0095 ml/ml. This technique is superior to invasive techniques, which require extraction of a sample that disturbs the concentration distribution in the test cell. Refinements of this technique using a true monochromatic laser light source are also discussed.
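If the wavelength-filtered attenuation is read under a Beer-Lambert assumption (a hedged reading; the abstract does not state the model explicitly), concentration follows from the log ratio of transmitted to reference intensity. The extinction coefficient, path length, and intensities below are illustrative, not the study's calibration values.

```python
# Sketch: species concentration from backlit image intensity via Beer-Lambert,
# I = I0 * exp(-eps * c * L). All parameter values are illustrative.
import numpy as np

def concentration_map(intensity, i0, epsilon, path_length_cm):
    """Per-pixel concentration from transmitted intensity."""
    transmittance = np.clip(intensity / i0, 1e-6, 1.0)
    return -np.log(transmittance) / (epsilon * path_length_cm)

i0 = 200.0                                   # reference (no-absorber) intensity
img = np.full((100, 100), 120.0)             # placeholder filtered-channel image
c = concentration_map(img, i0, epsilon=4.2, path_length_cm=1.0)
print(round(float(c.mean()), 4), "ml/ml (illustrative units)")
```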
Tracking and Quantifying Developmental Processes in C. elegans Using Open-source Tools.
Dutta, Priyanka; Lehmann, Christina; Odedra, Devang; Singh, Deepika; Pohl, Christian
2015-12-16
Quantitatively capturing developmental processes is crucial to derive mechanistic models and key to identify and describe mutant phenotypes. Here protocols are presented for preparing embryos and adult C. elegans animals for short- and long-term time-lapse microscopy and methods for tracking and quantification of developmental processes. The methods presented are all based on C. elegans strains available from the Caenorhabditis Genetics Center and on open-source software that can be easily implemented in any laboratory independently of the microscopy system used. A reconstruction of a 3D cell-shape model using the modelling software IMOD, manual tracking of fluorescently-labeled subcellular structures using the multi-purpose image analysis program Endrov, and an analysis of cortical contractile flow using PIVlab (Time-Resolved Digital Particle Image Velocimetry Tool for MATLAB) are shown. It is discussed how these methods can also be deployed to quantitatively capture other developmental processes in different models, e.g., cell tracking and lineage tracing, tracking of vesicle flow.
Geometric correction and digital elevation extraction using multiple MTI datasets
Mercier, Jeffrey A.; Schowengerdt, Robert A.; Storey, James C.; Smith, Jody L.
2007-01-01
Digital Elevation Models (DEMs) are traditionally acquired from a stereo pair of aerial photographs sequentially captured by an airborne metric camera. Standard DEM extraction techniques can be naturally extended to satellite imagery, but the particular characteristics of satellite imaging can cause difficulties. The spacecraft ephemeris with respect to the ground site during image collects is the most important factor in the elevation extraction process. When the angle of separation between the stereo images is small, the extraction process typically produces measurements with low accuracy, while a large angle of separation can cause an excessive number of erroneous points in the DEM from occlusion of ground areas. The use of three or more images registered to the same ground area can potentially reduce these problems and improve the accuracy of the extracted DEM. The pointing capability of some sensors, such as the Multispectral Thermal Imager (MTI), allows for multiple collects of the same area from different perspectives. This functionality of MTI makes it a good candidate for the implementation of a DEM extraction algorithm using multiple images for improved accuracy. Evaluation of this capability and development of algorithms to geometrically model the MTI sensor and extract DEMs from multi-look MTI imagery are described in this paper. An RMS elevation error of 6.3 meters is achieved using 11 ground test points, while the MTI band has a 5-meter ground sample distance.
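The quoted 6.3 m figure is an RMS elevation error over 11 ground test points. Computing such a figure from extracted versus surveyed elevations is straightforward; the values below are invented for illustration.

```python
# Sketch: RMS elevation error of an extracted DEM against surveyed check points.
# The elevation values are invented for illustration.
import numpy as np

surveyed = np.array([152.1, 148.7, 160.3, 155.0, 149.9, 151.2,
                     158.8, 147.5, 153.3, 150.6, 156.1])
extracted = surveyed + np.array([4.0, -7.1, 5.5, -6.2, 8.0, -3.1,
                                 6.6, -5.0, 7.3, -4.4, 5.9])

rmse = np.sqrt(np.mean((extracted - surveyed) ** 2))
print(f"RMS elevation error = {rmse:.1f} m")
```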
A simple procedure for retrieval of a cement-retained implant-supported crown: a case report.
Buzayan, Muaiyed Mahmoud; Mahmood, Wan Adida; Yunus, Norsiah Binti
2014-02-01
Retrieval of cement-retained implant prostheses can be more demanding than retrieval of screw-retained prostheses. This case report describes a simple and predictable procedure to locate the abutment screw access openings of cement-retained implant-supported crowns in cases of fractured ceramic veneer. A conventional periapical radiography image was captured using a digital camera, transferred to a computer, and manipulated using Microsoft Word document software to estimate the location of the abutment screw access.
STS-109 MS Linnehan with laser range finder on aft flight deck
2002-03-02
STS109-E-5003 (3 March 2002) --- Astronaut Richard M. Linnehan, mission specialist, uses a laser ranging device designed to measure the range between two spacecraft. Linnehan positioned himself on the cabin's aft flight deck as the Space Shuttle Columbia approached the Hubble Space Telescope. A short time later, the STS-109 crew captured and latched down the giant telescope in the vehicle's cargo bay for several days of work on the Hubble. The image was recorded with a digital still camera.
STS-109 MS Linnehan with laser range finder on aft flight deck
2002-03-02
STS109-E-5002 (3 March 2002) --- Astronaut Richard M. Linnehan, mission specialist, uses a laser ranging device designed to measure the range between two spacecraft. Linnehan positioned himself on the cabin's aft flight deck as the Space Shuttle Columbia approached the Hubble Space Telescope. A short time later, the STS-109 crew captured and latched down the giant telescope in the vehicle's cargo bay for several days of work on the Hubble. The image was recorded with a digital still camera.
Endoscopic measurements using a panoramic annular lens
NASA Technical Reports Server (NTRS)
Gilbert, John A.; Matthys, Donald R.
1992-01-01
The objective of this project was to design, build, demonstrate, and deliver a prototype system for making measurements within cavities. The system was to utilize structured lighting as the means for making measurements and was to rely on a stationary probe, equipped with a unique panoramic annular lens, to capture a cylindrical view of the illuminated cavity. Panoramic images, acquired with a digitizing camera and stored in a desk top computer, were to be linearized and analyzed by mouse-driven interactive software.
Evaluation of DICOM viewer software for workflow integration in clinical trials
NASA Astrophysics Data System (ADS)
Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.
2015-03-01
The Digital Imaging and Communications in Medicine (DICOM) protocol is nowadays the leading standard for capture, exchange and storage of image data in medical applications. A broad range of commercial, free, and open-source software tools supporting a variety of DICOM functionality exists. However, unlike in hospital patient care, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Due to this missing integration, even simple visualization of a patient's image data in electronic case report forms (eCRFs) is impossible. Four increasing levels of integration of DICOM components into EDCS are conceivable, raising functionality but also demands on interfaces with each level. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Concerning general, integration, and viewing requirements, the survey involves the criteria of (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open-source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. Concerning workflow optimization in multi-centered clinical trials, we suggest the open-source viewer Weasis. Covering most use cases, an EDCS and PACS interconnection with Weasis is suggested.
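Independent of which viewer is chosen, the minimal building block for previewing DICOM image data inside an eCRF-style form is decoding the pixel data. A hedged Python sketch using pydicom is shown below; the file path is hypothetical and this is not one of the surveyed viewers.

```python
# Sketch: read a DICOM file and render its pixel data, the minimal step any
# EDCS/eCRF image preview needs. The file path is hypothetical.
import pydicom
import matplotlib.pyplot as plt

ds = pydicom.dcmread("example_ct_slice.dcm")      # hypothetical path
print(ds.Modality, ds.Rows, ds.Columns)           # a few standard attributes

plt.imshow(ds.pixel_array, cmap="gray")
plt.title(f"{ds.get('PatientID', 'anon')} / {ds.get('SeriesDescription', '')}")
plt.axis("off")
plt.show()
```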
A digital-signal-processor-based optical tomographic system for dynamic imaging of joint diseases
NASA Astrophysics Data System (ADS)
Lasker, Joseph M.
Over the last decade, optical tomography (OT) has emerged as a viable biomedical imaging modality. Various imaging systems have been developed that are employed in preclinical as well as clinical studies, mostly targeting breast imaging, brain imaging, and cancer-related studies. Of particular interest are so-called dynamic imaging studies, in which one attempts to image changes in optical properties and/or physiological parameters as they occur during a system perturbation. To successfully perform dynamic imaging studies, great effort is put towards system development that offers increasingly enhanced signal-to-noise performance at ever shorter data acquisition times, thus capturing high-fidelity tomographic data within narrower time periods. Towards this goal, I have developed in this thesis a dynamic optical tomography system that is, unlike currently available analog instrumentation, based on digital data acquisition and filtering techniques. At the core of this instrument is a digital signal processor (DSP) that collects, collates, and processes the digitized data set. Complementary protocols between the DSP and a complex programmable logic device synchronize the sampling process and organize data flow. Instrument control is implemented through a comprehensive graphical user interface which integrates automated calibration, data acquisition, and signal post-processing. Real-time data is generated at frame rates as high as 140 Hz. An extensive dynamic range (˜190 dB) accommodates a wide scope of measurement geometries and tissue types. Performance analysis demonstrates very low system noise (˜1 pW rms noise-equivalent power), excellent signal precision (˜0.04%-0.2%) and long-term system stability (˜1% over 40 min). Experiments on tissue phantoms validate the spatial and temporal accuracy of the system. As a potential new application of dynamic optical imaging, I present the first application of this method using vascular hemodynamics as a means of characterizing joint diseases, especially the effects of rheumatoid arthritis (RA) in the proximal interphalangeal finger joints. Using a dual-wavelength tomographic imaging system and a previously implemented reconstruction scheme, I have performed initial dynamic imaging case studies on healthy volunteers and patients diagnosed with RA. These studies support our hypothesis that differences in vascular and metabolic reactivity exist between affected and unaffected joints and can be used for diagnostic purposes.
Classification of pollen species using autofluorescence image analysis.
Mitsumoto, Kotaro; Yabusaki, Katsumi; Aoyagi, Hideki
2009-01-01
A new method to classify pollen species was developed by monitoring autofluorescence images of pollen grains. The pollens of nine species were selected, and their autofluorescence images were captured by a microscope equipped with a digital camera. The pollen size and the ratio of the blue to red pollen autofluorescence spectra (the B/R ratio) were calculated by image processing. The B/R ratios and pollen size varied among the species. Furthermore, the scatter-plot of pollen size versus the B/R ratio showed that pollen could be classified to the species level using both parameters. The pollen size and B/R ratio were confirmed by means of particle flow image analysis and the fluorescence spectra, respectively. These results suggest that a flow system capable of measuring both scattered light and the autofluorescence of particles could classify and count pollen grains in real time.
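The classifier uses two per-grain features, grain size and the blue-to-red (B/R) autofluorescence ratio. A sketch of extracting both from a segmented grain in an RGB fluorescence image follows; the synthetic grain and the segmentation threshold are illustrative, not the study's processing chain.

```python
# Sketch: per-grain size and blue/red (B/R) autofluorescence ratio from an RGB
# fluorescence image. Thresholds and the synthetic grain are illustrative.
import numpy as np

rng = np.random.default_rng(4)
img = rng.integers(0, 10, size=(200, 200, 3)).astype(float)   # dark background
yy, xx = np.mgrid[:200, :200]
grain = (yy - 100) ** 2 + (xx - 100) ** 2 < 30 ** 2            # circular "grain"
img[grain] = [40.0, 20.0, 160.0]                               # R, G, B intensities

mask = img.sum(axis=2) > 60                                    # simple segmentation
size_px = int(mask.sum())                                      # grain area in pixels
b_over_r = img[mask, 2].mean() / img[mask, 0].mean()           # B/R ratio
print(f"size = {size_px} px, B/R = {b_over_r:.2f}")
```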
Sonorous images through digital holographic images
NASA Astrophysics Data System (ADS)
Azevedo, Isabel; Sandford-Richardson, Elizabeth
2017-03-01
The art of the last fifty years has been significantly concerned with the presence of the body and the relationship between humans and interactive technologies. Today in interactive art there are not only representations that speak of the body but also actions and behaviours that involve the body. In holography, the image appears and disappears from the observer's field of vision; because the holographic image is light, we can see multidimensional spaces, shapes and colours existing at the same time, a presence and absence of the image on the holographic plate. The image can float in front of the plate, so that people sometimes try to touch it with their hands. For the viewer, these become interactive events with no beginning or end that can be perceived in either direction, forward or backward, depending on the relative position and the time the viewer spends in front of the hologram. To explore this feature we propose an installation with four holograms and several sources of different kinds of sound connected with each hologram. When viewers move in front of each hologram, they activate different sources of sound. The work is not only about the images in the holograms but also about the search for the different types of sound that the installation requires. The digital holograms were produced using the HoloCam Portable Light System with the Canon 700D 35 mm camera to capture image information; the material was then edited on a computer using the Motion 5 and Final Cut Pro X programs.
Biocular vehicle display optical designs
NASA Astrophysics Data System (ADS)
Chu, H.; Carter, Tom
2012-06-01
Biocular vehicle display optics consist of a fast collimating lens (f/# < 0.9) that presents the image of the display at infinity to both eyes of the viewer. Each eye captures the scene independently, and the brain merges the two images into one through the overlapping portions of the images. With the recent conversion from analog CRT-based displays to lighter, more compact active-matrix organic light-emitting diode (AMOLED) digital image sources, display optical designs have evolved to take advantage of the higher resolution AMOLED image sources. To maximize the field of view of the display optics and fully resolve the smaller pixels, the digital image source is pre-magnified by relay optics or a coherent taper fiber optics plate. Coherent taper fiber optics plates are used extensively to: 1. Convert plano focal planes to spherical focal planes in order to eliminate Petzval field curvature; this elimination enables faster lens speed and/or a larger field of view for eyepieces and display optics. 2. Provide pre-magnification to lighten the workload of the optics and further increase the numerical aperture and/or field of view. 3. Improve light flux collection efficiency and field of view by collecting all the light emitted by the image source and guiding imaging light bundles toward the lens aperture stop. 4. Reduce the complexity of the optical design and the overall packaging volume by replacing pre-magnification optics with a compact taper fiber optics plate. This paper reviews and compares the performance of biocular vehicle display designs with and without a taper fiber optics plate.
Motion detection and compensation in infrared retinal image sequences.
Scharcanski, J; Schardosim, L R; Santos, D; Stuchi, A
2013-01-01
Infrared image data captured by non-mydriatic digital retinography systems are often used in the diagnosis and treatment of diabetic macular edema (DME). Infrared illumination is less aggressive to the patient's retina, and retinal studies can be carried out without pupil dilation. However, sequences of infrared eye fundus images of static scenes tend to present pixel intensity fluctuations over time, and noise and background illumination changes pose a challenge to most motion detection methods proposed in the literature. In this paper, we present a retinal motion detection method that is adaptive to background noise and illumination changes. Our experimental results indicate that this method is suitable for detecting retinal motion in infrared image sequences and for compensating the detected motion, which is relevant in retinal laser treatment systems for DME. Copyright © 2013 Elsevier Ltd. All rights reserved.
Wang, Ling-jia; Kissler, Hermann J; Wang, Xiaojun; Cochet, Olivia; Krzystyniak, Adam; Misawa, Ryosuke; Golab, Karolina; Tibudan, Martin; Grzanka, Jakub; Savari, Omid; Grose, Randall; Kaufman, Dixon B; Millis, Michael; Witkowski, Piotr
2015-01-01
Pancreatic islet mass, represented by islet equivalents (IEQ), is the most important parameter in decision making for clinical islet transplantation. To obtain the IEQ, a sample of islets is routinely counted manually under a microscope and discarded thereafter. Islet purity, another parameter in islet processing, is routinely acquired by estimation only. In this study, we validated our digital image analysis (DIA) system, developed using Image Pro Plus software, for islet mass and purity assessment. Application of DIA allows better compliance with current good manufacturing practice (cGMP) standards. Human islet samples were captured as calibrated digital images for the permanent record. Five trained technicians participated in the determination of IEQ and purity by the manual counting method and by DIA. IEQ counts showed statistically significant correlations between the manual method and DIA in all sample comparisons (r > 0.819 and p < 0.0001). A statistically significant difference in IEQ between the two methods was found only in the high-purity 100 μL sample group (p = 0.029). As for purity determination, statistically significant differences between manual assessment and DIA measurement were found in the high- and low-purity 100 μL samples (p < 0.005). In addition, the islet particle number (IPN) and the IEQ/IPN ratio did not differ statistically between the manual counting method and DIA. In conclusion, the DIA used in this study is a reliable technique for the determination of IEQ and purity. Islet samples preserved as digital images and results produced by DIA can be permanently stored for verification, technical training and islet information exchange between different islet centers. Therefore, DIA complies better with cGMP requirements than the manual counting method. We propose DIA as a quality control tool to supplement the established standard manual method for islet counting and purity estimation. PMID:24806436
Fast Fourier single-pixel imaging via binary illumination.
Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang
2017-09-20
Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refresh rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates the image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
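The binarization strategy described above can be illustrated with a short sketch: a grayscale Fourier fringe pattern is upsampled and then converted to a DMD-ready binary pattern by error-diffusion dithering. The fringe parameters, upsampling factor and Floyd-Steinberg kernel below are assumptions for illustration, not the authors' exact implementation.

```python
# Hedged sketch of binarizing a grayscale Fourier basis (fringe) pattern by
# upsampling followed by Floyd-Steinberg error-diffusion dithering.
import numpy as np

def fourier_pattern(n, fx, fy, phase):
    """Grayscale Fourier basis pattern with values in [0, 1]."""
    y, x = np.mgrid[:n, :n] / n
    return 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x + fy * y) + phase)

def floyd_steinberg(img):
    """Binarize a [0, 1] image by error diffusion; returns a 0/1 array."""
    work = img.astype(float).copy()
    h, w = work.shape
    out = np.zeros_like(work)
    for i in range(h):
        for j in range(w):
            old = work[i, j]
            new = 1.0 if old >= 0.5 else 0.0
            out[i, j] = new
            err = old - new
            if j + 1 < w:
                work[i, j + 1] += err * 7 / 16
            if i + 1 < h and j > 0:
                work[i + 1, j - 1] += err * 3 / 16
            if i + 1 < h:
                work[i + 1, j] += err * 5 / 16
            if i + 1 < h and j + 1 < w:
                work[i + 1, j + 1] += err * 1 / 16
    return out

gray = fourier_pattern(64, fx=4, fy=0, phase=0.0)
upsampled = np.kron(gray, np.ones((4, 4)))   # simple 4x nearest-neighbour upsampling
binary = floyd_steinberg(upsampled)          # DMD-ready 0/1 pattern
```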
Hdr Imaging for Feature Detection on Detailed Architectural Scenes
NASA Astrophysics Data System (ADS)
Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.
2015-02-01
3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which demand high-quality 3D models without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and in this way increase the amount of detail contained in the image. Experimental results of this study support this assumption, as they examine state-of-the-art feature detectors applied to both standard dynamic range and HDR images.
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass interference filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is further divided into four components: an FPN row component, an FPN column component, a defects component and an effective photo-response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated from the inverse of the pixel-wise linear radiance to DC model, the spatial uniformity of the corrected image can be enhanced to seven times that of the raw image, and the larger the image DC value within its dynamic range, the better the enhancement.
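A minimal sketch of a pixel-wise linear fixed-pattern-noise correction of the general form described above, DC = G * L + Z. The calibration shown (dark and flat-field frames, re-application of a uniform mean gain) is an assumed simplification rather than the paper's four-component decomposition.

```python
# Hedged sketch: per-pixel gain G and dark-signal offset Z are estimated from
# dark and flat-field frames, then a raw frame is corrected to uniform gain.
# Frame counts and the uniform-gain step are illustrative assumptions.
import numpy as np

def estimate_gain_offset(dark_frames, flat_frames, flat_radiance):
    """dark_frames, flat_frames: stacks of shape (n, H, W); flat_radiance: scalar."""
    Z = dark_frames.mean(axis=0)                         # per-pixel dark offset (DSNU)
    G = (flat_frames.mean(axis=0) - Z) / flat_radiance   # per-pixel conversion gain
    return G, Z

def correct_frame(raw, G, Z):
    """Estimate incident radiance per pixel, then re-apply a uniform mean gain."""
    radiance = (raw - Z) / G
    return radiance * G.mean()                           # FPN-reduced digital counts

# Synthetic demonstration
rng = np.random.default_rng(0)
H, W, L_flat = 32, 32, 100.0
G_true = 1.0 + 0.05 * rng.standard_normal((H, W))        # gain non-uniformity
Z_true = 5.0 + rng.standard_normal((H, W))               # offset non-uniformity
darks = Z_true + rng.standard_normal((16, H, W))
flats = G_true * L_flat + Z_true + rng.standard_normal((16, H, W))
G, Z = estimate_gain_offset(darks, flats, L_flat)
raw = G_true * 80.0 + Z_true                             # scene at radiance 80
print(correct_frame(raw, G, Z).std())                    # spatial non-uniformity shrinks
```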
Information technology in the foxhole.
Eyestone, S M
1995-08-01
The importance of digital data capture at the point of health care service within the military environment is highlighted. Current paper-based data capture does not allow for efficient data reuse throughout the medical support information domain. A simple, high-level process and data flow model is used to demonstrate the importance of data capture at point of service. The Department of Defense is developing a personal digital assistant, called MEDTAG, that accomplishes point of service data capture in the field using a prototype smart card as a data store in austere environments.
Digital dissection system for medical school anatomy training
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Pawlina, Wojciech; Carmichael, Stephen W.; Korinek, Mark J.; Schroeder, Kathryn K.; Segovis, Colin M.; Robb, Richard A.
2003-05-01
As technology advances, new and innovative ways of viewing and visualizing the human body are developed. Medicine has benefited greatly from imaging modalities that provide ways for us to visualize anatomy that cannot be seen without invasive procedures. As long as medical procedures include invasive operations, students of anatomy will benefit from the cadaveric dissection experience. Teaching proper technique for dissection of human cadavers is a challenging task for anatomy educators. Traditional methods, which have not changed significantly for centuries, include the use of textbooks and pictures to show students what a particular dissection specimen should look like. The ability to properly carry out such highly visual and interactive procedures is significantly constrained by these methods. The student receives a single view and has no idea how the procedure was carried out. The Department of Anatomy at Mayo Medical School recently built a new, state-of-the-art teaching laboratory, including data ports and power sources above each dissection table. This feature allows students to access the Mayo intranet from a computer mounted on each table. The vision of the Department of Anatomy is to replace all paper-based resources in the laboratory (dissection manuals, anatomic atlases, etc.) with a more dynamic medium that will direct students in dissection and in learning human anatomy. Part of that vision includes the use of interactive 3-D visualization technology. The Biomedical Imaging Resource (BIR) at Mayo Clinic has developed, in collaboration with the Department of Anatomy, a system for the control and capture of high resolution digital photographic sequences which can be used to create 3-D interactive visualizations of specimen dissections. The primary components of the system include a Kodak DC290 digital camera, a motorized controller rig from Kaidan, a PC, and custom software to synchronize and control the components. For each dissection procedure, the images are captured automatically, and then processed to generate a Quicktime VR sequence, which permits users to view an object from multiple angles by rotating it on the screen. This provides 3-D visualizations of anatomy for students without the need for special '3-D glasses' that would be impractical to use in a laboratory setting. In addition, a digital video camera may be mounted on the rig for capturing video recordings of selected dissection procedures being carried out by expert anatomists for playback by the students. Anatomists from the Department of Anatomy at Mayo have captured several sets of dissection sequences and processed them into Quicktime VR sequences. The students are able to look at these specimens from multiple angles using this VR technology. In addition, the student may zoom in to obtain high-resolution close-up views of the specimen. They may interactively view the specimen at varying stages of dissection, providing a way to quickly and intuitively navigate through the layers of tissue. Electronic media has begun to impact all areas of education, but a 3-D interactive visualization of specimen dissections in the laboratory environment is a unique and powerful means of teaching anatomy. When fully implemented, anatomy education will be enhanced significantly by comparison to traditional methods.
Detection of fresh bruises in apples by structured-illumination reflectance imaging
NASA Astrophysics Data System (ADS)
Lu, Yuzhen; Li, Richard; Lu, Renfu
2016-05-01
Detection of fresh bruises in apples remains a challenging task due to the absence of visual symptoms and significant chemical alterations of fruit tissues during the initial stage after the fruit have been bruised. This paper reports on a new structured-illumination reflectance imaging (SIRI) technique for enhanced detection of fresh bruises in apples. Using a digital light projector engine, sinusoidally-modulated illumination at spatial frequencies of 50, 100, 150 and 200 cycles/m was generated. A digital camera was then used to capture the reflectance images from 'Gala' and 'Jonagold' apples immediately after they had been subjected to two levels of bruising by impact tests. A conventional three-phase demodulation (TPD) scheme was applied to the acquired images to obtain the planar (direct component, or DC) and amplitude (alternating component, or AC) images. Bruises were identified in the amplitude images with varying image contrast, depending on spatial frequency. The bruise visibility was further enhanced through post-processing of the amplitude images. Furthermore, three spiral phase transform (SPT)-based demodulation methods, using a single image, two images, and two phase-shifted images, were proposed for obtaining AC images. Results showed that the demodulation methods greatly enhanced the contrast and spatial resolution of the AC images, making it feasible to detect fresh bruises that otherwise could not be detected by a conventional imaging technique with planar or uniform illumination. The effectiveness of image enhancement, however, varied with spatial frequency. Both the two-image and two-phase SPT methods achieved performance similar to that of conventional TPD. The SIRI technique has demonstrated the capability of detecting fresh bruises in apples, and it has the potential to serve as a new imaging modality for enhancing food quality and safety detection.
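The conventional TPD step mentioned above can be summarized with a short sketch: from three images captured under fringes phase-shifted by 2π/3, the DC and AC images follow pixel-wise from standard formulas. The synthetic fringe data below are purely illustrative.

```python
# Hedged sketch of conventional three-phase demodulation (TPD) for
# structured-illumination reflectance imaging.
import numpy as np

def tpd_demodulate(i1, i2, i3):
    """Planar (DC) and amplitude (AC) images from three phase-shifted frames."""
    dc = (i1 + i2 + i3) / 3.0
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    return dc, ac

# Synthetic test: flat reflectance 0.8, modulation depth 0.3, fringes along x
x = np.linspace(0, 4 * np.pi, 256)
phases = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
frames = [0.8 * (1.0 + 0.3 * np.cos(x + p))[None, :].repeat(64, axis=0) for p in phases]
dc, ac = tpd_demodulate(*frames)
print(dc.mean(), ac.mean())   # ≈ 0.8 and ≈ 0.24
```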
Compressive Coded-Aperture Multimodal Imaging Systems
NASA Astrophysics Data System (ADS)
Rueda-Chacon, Hoover F.
Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time by the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, to date, multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all of the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, in a single or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded aperture-based CSI architectures.
On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) in as little as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. The aim is to create super-human sensing that will enable the perception of our world in new and exciting ways. With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because it would allow an artificial intelligence to make informed decisions not only about the location of objects within a scene but also about their material properties.
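As a rough illustration of the block-unblock coded-aperture measurement process that this work generalizes, the sketch below applies a random binary aperture to a toy spatio-spectral cube and shears each band before integrating on the detector. The one-pixel-per-band shear and the 50% random code are assumptions for illustration, not the dissertation's actual architecture.

```python
# Hedged sketch of a block-unblock coded-aperture CSI forward model: a binary
# coded aperture modulates the scene spatially, a dispersive element shears
# each spectral band, and the detector integrates over wavelength.
import numpy as np

def csi_measurement(cube, code):
    """cube: (H, W, L) spatio-spectral data; code: (H, W) binary aperture.
    Returns a single (H, W + L - 1) compressive snapshot."""
    H, W, L = cube.shape
    y = np.zeros((H, W + L - 1))
    for k in range(L):
        y[:, k:k + W] += code * cube[:, :, k]   # code, then shear band k by k pixels
    return y

rng = np.random.default_rng(1)
cube = rng.random((32, 32, 8))                  # toy 8-band scene
code = (rng.random((32, 32)) > 0.5).astype(float)
snapshot = csi_measurement(cube, code)          # far fewer samples than the full cube
print(cube.size, snapshot.size)
```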
Real-time unmanned aircraft systems surveillance video mosaicking using GPU
NASA Astrophysics Data System (ADS)
Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.
2010-04-01
Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the Moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All of this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an infrared (IR) camera and one from an electro-optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
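For reference, the registration pipeline described above (SIFT features, ratio-test matching, RANSAC homography, warping) can be sketched with OpenCV on the CPU; this is only an illustration of the steps, not the paper's GPU implementation, and the naive overwrite blending is an assumed simplification.

```python
# Hedged sketch of frame-to-frame registration for video mosaicking using
# OpenCV (CPU). Expects single-channel uint8 frames.
import cv2
import numpy as np

def register_pair(prev_gray, curr_gray):
    """Estimate the homography mapping the current frame into the previous one."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def warp_into_mosaic(mosaic, frame, H):
    """Naive blend: overwrite mosaic pixels with the warped frame."""
    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped > 0
    mosaic[mask] = warped[mask]
    return mosaic
```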
Modeling human faces with multi-image photogrammetry
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2002-03-01
Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded-light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method and the results achieved with this technique are depicted here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters. Moreover, a color texture image can be draped over the model to achieve a photorealistic visualization. The advantage of the presented method over laser scanning and coded-light range digitizers is the acquisition of the source data in a fraction of a second, allowing the measurement of human faces with higher accuracy and the possibility of measuring dynamic events such as the speech of a person.
Sodickson, Aaron; Warden, Graham I; Farkas, Cameron E; Ikuta, Ichiro; Prevedello, Luciano M; Andriole, Katherine P; Khorasani, Ramin
2012-08-01
To develop and validate an informatics toolkit that extracts anatomy-specific computed tomography (CT) radiation exposure metrics (volume CT dose index and dose-length product) from existing digital image archives through optical character recognition of CT dose report screen captures (dose screens) combined with Digital Imaging and Communications in Medicine attributes. This institutional review board-approved HIPAA-compliant study was performed in a large urban health care delivery network. Data were drawn from a random sample of CT encounters that occurred between 2000 and 2010; images from these encounters were contained within the enterprise image archive, which encompassed images obtained at an adult academic tertiary referral hospital and its affiliated sites, including a cancer center, a community hospital, and outpatient imaging centers, as well as images imported from other facilities. Software was validated by using 150 randomly selected encounters for each major CT scanner manufacturer, with outcome measures of dose screen retrieval rate (proportion of correctly located dose screens) and anatomic assignment precision (proportion of extracted exposure data with correctly assigned anatomic region, such as head, chest, or abdomen and pelvis). The 95% binomial confidence intervals (CIs) were calculated for discrete proportions, and CIs were derived from the standard error of the mean for continuous variables. After validation, the informatics toolkit was used to populate an exposure repository from a cohort of 54 549 CT encounters, of which 29 948 had available dose screens. Validation yielded a dose screen retrieval rate of 99% (597 of 605 CT encounters; 95% CI: 98%, 100%) and an anatomic assignment precision of 94% (summed DLP fraction correct, 563 of 600 CT encounters; 95% CI: 92%, 96%). Patient safety applications of the resulting data repository include benchmarking between institutions, CT protocol quality control and optimization, and cumulative patient- and anatomy-specific radiation exposure monitoring. Large-scale anatomy-specific radiation exposure data repositories can be created with high fidelity from existing digital image archives by using open-source informatics tools.
Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera
NASA Astrophysics Data System (ADS)
Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.
2007-09-01
We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements by heterodyning intensity-modulated illumination with a gain modulation intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m has been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
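A minimal sketch of how per-pixel range follows from the phase of the heterodyne beat signal in such an indirect time-of-flight system; the modulation frequency, beat frequency and frame rate used below are assumed values for illustration, not the instrument's settings, and the range ambiguity that the paper addresses is only noted in a comment.

```python
# Hedged sketch: recover the per-pixel beat phase with an FFT over the frame
# sequence and convert it to range. All frequencies here are assumptions.
import numpy as np

C = 299_792_458.0            # speed of light, m/s
F_MOD = 80e6                 # illumination modulation frequency (assumed)
F_BEAT = 1.0                 # beat frequency after heterodyning, Hz (assumed)
FRAME_RATE = 29.0            # camera frame rate, Hz (assumed)

def range_from_frames(frames):
    """frames: (N, H, W) intensity samples of the beat signal per pixel."""
    n = frames.shape[0]
    spectrum = np.fft.rfft(frames, axis=0)
    k = int(round(F_BEAT * n / FRAME_RATE))        # FFT bin of the beat frequency
    phase = np.mod(np.angle(spectrum[k]), 2 * np.pi)  # per-pixel beat phase, radians
    return C * phase / (4 * np.pi * F_MOD)         # ambiguity interval: C / (2 * F_MOD)

# Synthetic single-pixel check: a target at 1.2 m (within the ambiguity interval)
n = 116
t = np.arange(n) / FRAME_RATE
true_phase = 4 * np.pi * F_MOD * 1.2 / C
signal = 1 + 0.5 * np.cos(2 * np.pi * F_BEAT * t + true_phase)
print(range_from_frames(signal[:, None, None])[0, 0])   # ≈ 1.2
```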
Static sign language recognition using 1D descriptors and neural networks
NASA Astrophysics Data System (ADS)
Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César
2012-10-01
A framework for static sign language recognition using descriptors that represent 2D images as 1D data and artificial neural networks is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers report a special color in gloves or background for hand shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture images. Static signs for the digits 1 to 9 of American Sign Language were used, and a multilayer perceptron reached 100% recognition with cross-validation.
Three Dimensional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data
NASA Astrophysics Data System (ADS)
Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.
2016-06-01
This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) in the 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, which is equipped with a consumer digital camera, is used to collect dynamic videos to overcome its limited endurance capacity. A set of 3D point clouds is then generated from the video image sequences using automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information that is beyond the reach of UAV imaging, e.g., parts of the building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds, from UAV image reconstruction and TLS scanning, into a complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm that uses local terrain-invariant regions is introduced for the combined registration. The experimental study is conducted on the Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
Web conferencing systems: Skype and MSN in telepathology
Klock, Clóvis; Gomes, Regina de Paula Xavier
2008-01-01
Virtual pathology is a very important tool that can be used in several ways, including interconsultations with specialists in many areas and for frozen sections. In this work we considered the use of Windows Live Messenger and Skype for image transmission. The conference was conducted over broadband Internet using a Nikon E 200 microscope and a Samsung SCC-131 digital colour camera. Internet transmission speed varied from 400 Kb to 2.0 Mb. Both programs allow voice transmission concomitant with the image, so communication between the involved pathologists was possible using microphones and speakers. A live image could be seen by the receiving pathologist, who was able to ask for the field to be moved or the magnification to be increased or decreased. No phone call or typing was required. The programs MSN and Skype can be used in many ways and with different operating systems installed on the computer. The capture system is simple and relatively cheap, which demonstrates the viability of the system for use in developing countries and in cities where no pathologists are available. With the improvement of software and of digital image quality, associated with the use of high-speed broadband Internet, this may become a new modality in surgical pathology. PMID:18673501
NASA Astrophysics Data System (ADS)
Holohan, E. P.; Walter, T. R.; Schöpfer, M. P. J.; Walsh, J. J.; Orr, T.; Poland, M.
2012-04-01
In March 2011, a spectacular fissure eruption on Kilauea was associated with a major collapse event in the highly active Puu Oo crater. Time-lapse cameras maintained by the Hawaiian Volcano Observatory captured views of the crater in the moments before, during, and after the collapse. The 2011 event hence represents a unique opportunity to characterize the surface deformation related to the onset of a pit crater collapse and to understand what factors influence it. To do so, we used two approaches. First, we analyzed the available series of camera images by means of digital image correlation techniques. This enabled us to gain a semi-quantitative (pixel-unit) description of the surface displacements and the structural development of the collapsing crater floor. Second, we ran a series of 'true-scale' numerical pit-crater collapse simulations based on the two-dimensional Distinct Element Method (2D-DEM). This enabled us to gain insights into which geometric and mechanical factors could have controlled the observed surface displacement pattern and structural development. Our analysis of the time-lapse images reveals that the crater floor initially sagged gently and then rapidly collapsed in association with the appearance of a large ring-like fault scarp. The observed structural development and surface displacement patterns of the March 2011 Puu Oo collapse are best reproduced in DEM models with a relatively shallow, vertically elongated magma reservoir and a reasonably strong crater floor rock mass. By combining digital image correlation with DEM modeling, our study highlights the future potential of these relatively new techniques for understanding physical processes at active volcanoes.
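The core of the image-correlation step can be sketched as follows: the integer-pixel displacement of a patch between two time-lapse frames is taken from the peak of an FFT-based cross-correlation. The actual study's DIC processing (patch handling, sub-pixel refinement) is more elaborate; this is only the basic idea on synthetic data.

```python
# Hedged sketch of patch displacement estimation by FFT-based cross-correlation.
import numpy as np

def patch_displacement(ref, cur):
    """Return (dy, dx) displacement of `cur` relative to `ref` (same-size patches)."""
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape)
    peak = np.where(peak > dims / 2, peak - dims, peak)   # unwrap circular shifts
    return -peak

# Synthetic check: a blob moved down 3 px and right 2 px between frames
y, x = np.mgrid[:64, :64]
frame0 = np.exp(-((y - 30) ** 2 + (x - 30) ** 2) / 50.0)
frame1 = np.exp(-((y - 33) ** 2 + (x - 32) ** 2) / 50.0)
print(patch_displacement(frame0, frame1))                 # ≈ [3. 2.]
```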
Sea Ice in the Bellingshausen Sea
2017-12-08
Antarctica—the continent at the southernmost reach of the planet—is fringed by cold, often frozen waters of the Southern Ocean. The extent of sea ice around the continent typically reaches a peak in September and a minimum in February. The photograph shows Antarctic sea ice on November 5, 2014, during the annual cycle of melt. The image was acquired by the Digital Mapping System (DMS), a digital camera installed in the belly of research aircraft to capture images of terrain below. In this case, the system flew on the DC-8 during a flight as part of NASA’s Operation IceBridge. Most of the view shows first-year sea ice in the Bellingshausen Sea, as it appeared from an altitude of 328 meters (1,076 feet). The block of ice on the right side of the image is older, thicker, and was once attached to the Antarctic Ice Sheet. By the time this image was acquired, however, the ice had broken away to form an iceberg. Given its close proximity to the ice sheet, this could have been a relatively new berg. Read more: earthobservatory.nasa.gov/IOTD/view.php?id=86721 Credit: NASA Earth Observatory; DMS L0 Raw Imagery courtesy of the Digital Mapping System (DMS) team and the NASA DAAC at the National Snow and Ice Data Center
NASA Astrophysics Data System (ADS)
Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo
2017-03-01
Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in the background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a Convolutional Neural Network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as a training set and the remaining 100 mammograms were used as a test set. We designed a CNN architecture suitable for learning the imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissues. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. The image set was composed of the original mammogram and an eigen-image, which was able to capture the X-ray characteristics, notwithstanding the fact that CNNs are well known to effectively extract features from the original image. The 100 test images, which were not used in training the CNN, were used to validate the performance. The correlation coefficient between the breast density estimates by the CNN and those by the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.
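A minimal sketch of a CNN patch classifier of the kind described above, labelling sub-images as dense or fatty tissue; the layer sizes, 64×64 patch size and two-channel input (original plus eigen-image patch) are illustrative assumptions, not the study's architecture.

```python
# Hedged sketch: small PyTorch CNN that classifies mammogram sub-image patches
# as dense vs. fatty. Architecture and patch size are assumptions.
import torch
import torch.nn as nn

class DensityPatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)     # dense vs. fatty

    def forward(self, x):                      # x: (N, 2, 64, 64) patches
        return self.classifier(self.features(x).flatten(1))

model = DensityPatchCNN()
patches = torch.randn(8, 2, 64, 64)            # batch of original + eigen-image patches
logits = model(patches)
# Percent density of a mammogram ≈ (# patches classified dense) / (# breast patches)
print(logits.argmax(dim=1))
```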
Maestre-Rendon, J Rodolfo; Rivera-Roman, Tomas A; Sierra-Hernandez, Juan M; Cruz-Aceves, Ivan; Contreras-Medina, Luis M; Duarte-Galvan, Carlos; Fernandez-Jaramillo, Arturo A
2017-11-22
Manual measurements of foot anthropometry can lead to errors, since this task involves the experience of the specialist who performs them, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses that are given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The smart sensor implemented required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, where a graphical interface was created for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive and correlated at 0.99 with the measurements from the digitized image of the ink mat.
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
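The spatial/angular encoding described above can be made concrete with a short sketch that unpacks a raw plenoptic image into the 4D light field L(s, t, u, v), assuming an ideal square lenslet grid aligned with the sensor; real plenoptic images require calibration of the lenslet centers, which this sketch omits.

```python
# Hedged sketch: lenslet index (s, t) carries spatial information, pixel
# position (u, v) under each lenslet carries angular information.
import numpy as np

def raw_to_lightfield(raw, p):
    """raw: (S*p, T*p) sensor image; p: pixels per lenslet. Returns (S, T, p, p)."""
    S, T = raw.shape[0] // p, raw.shape[1] // p
    return raw[:S * p, :T * p].reshape(S, p, T, p).swapaxes(1, 2)

def subaperture_view(lf, u, v):
    """Fix the angular coordinate (u, v): one perspective view of the scene."""
    return lf[:, :, u, v]

raw = np.random.rand(60 * 8, 80 * 8)           # toy sensor: 60x80 lenslets, 8x8 pixels each
lf = raw_to_lightfield(raw, 8)                 # shape (60, 80, 8, 8)
view = subaperture_view(lf, 3, 4)              # 60x80 image from one pupil position
print(lf.shape, view.shape)
```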
NASA Astrophysics Data System (ADS)
Jain, Pranay; Sarma, Sanjay E.
2015-05-01
Milk is an emulsion of fat globules and casein micelles dispersed in an aqueous medium with dissolved lactose, whey proteins and minerals. Quantification of the constituents in milk is important at various stages of the dairy supply chain for proper process control and quality assurance. In field-level applications, spectrophotometric analysis is an economical option due to the low cost of silicon photodetectors, which are sensitive to UV/Vis radiation with wavelengths between 300 and 1100 nm. Both absorption and scattering occur as incident UV/Vis radiation interacts with the dissolved and dispersed constituents in milk. These effects can in turn be used to characterize the chemical and physical composition of a milk sample. However, in order to simplify analysis, most existing instruments require dilution of samples to avoid the effects of multiple scattering. The sample preparation steps are usually expensive, prone to human error and unsuitable for field-level and online analysis. This paper introduces a novel digital-imaging-based method for online spectrophotometric measurements on raw milk without any sample preparation. Multiple LEDs of different emission spectra are used as discrete light sources and a digital CMOS camera is used as the image sensor. The extinction characteristic of the samples is derived from the captured images. The dependence of multiple scattering on the power of the incident radiation is exploited to quantify scattering. The method has been validated with experiments characterizing the response to varying fat concentrations and fat globule sizes. Despite the presence of multiple scattering, the method is able to unequivocally quantify the extinction of incident radiation and relate it to the fat concentrations and globule sizes of the samples.
Optimal time following fluorescein instillation to evaluate rigid gas permeable contact lens fit.
Wolffsohn, James S; Tharoo, Ali; Lakhlani, Nikita
2015-04-01
To examine the optimum time at which fluorescein patterns of gas permeable lenses (GPs) should be evaluated. Aligned, 0.2 mm steep and 0.2 mm flat GPs were fitted to 17 patients (aged 20.6 ± 1.1 years, 10 male). Fluorescein was applied to their upper temporal bulbar conjunctiva with a moistened fluorescein strip. Digital slit lamp images (CSO, Italy) at 10× magnification of the fluorescein pattern, viewed with blue light through a yellow filter, were captured every 15 s. Fluorescein intensity in the central, mid-peripheral and edge regions of the superior, inferior, temporal and nasal quadrants of the lens was graded subjectively using a +2 to -2 scale and using ImageJ software on the simultaneously captured images. Subjectively graded and objectively image-analysed fluorescein intensity changed with time (p < 0.001) and lens region (centre, mid-periphery and edge: p < 0.05), and there was an interaction between lens region and lens fit (p < 0.001). For edge band width, there was a significant effect of time (F = 118.503, p < 0.001) and lens fit (F = 5.1249, p = 0.012). The expected alignment, flat and steep fitting patterns could be seen from approximately 30 to 180 s subjectively and from 15 to 105 s in the captured images. Although the stability of fluorescein intensity can start to decline in as little as 45 s post fluorescein instillation, the diagnostic pattern of alignment, steep or flat fit is seen in each meridian by subjective observation from about 30 s to 3 min, indicating that this is the most appropriate time window to evaluate GP lenses in clinical practice. Copyright © 2014 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tian, J.; Krauß, T.; d'Angelo, P.
2017-05-01
Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing low-level pixels and those high-level pixels with a higher probability of being trees or shadows. This boundary then serves as the initial level-set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, an edge-based active contour model is adopted and implemented using edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.
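A minimal sketch of the height-based pre-selection step, assuming nDSM = DSM - DTM and a simple vegetation test; the 3 m height threshold and the NDVI criterion are illustrative assumptions, not the thresholds or the tree/shadow tests used in the paper.

```python
# Hedged sketch: initial building candidates from height above ground and a
# vegetation index. Thresholds are assumed values for illustration.
import numpy as np

def initial_building_mask(dsm, dtm, nir, red, min_height=3.0, ndvi_max=0.3):
    ndsm = dsm - dtm                                   # height above ground
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)   # vegetation index
    return (ndsm > min_height) & (ndvi < ndvi_max)     # elevated and not vegetated

# Toy example: a 5 m "building" block on flat ground, weak vegetation signal
dsm = np.zeros((50, 50)); dsm[10:20, 10:20] = 5.0
dtm = np.zeros((50, 50))
nir = np.full((50, 50), 0.4); red = np.full((50, 50), 0.35)
mask = initial_building_mask(dsm, dtm, nir, red)
print(mask.sum())                                      # 100 candidate rooftop pixels
```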
Three-dimensional reconstruction of Roman coins from photometric image sets
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay; Moitinho de Almeida, Vera; Hess, Mona
2017-01-01
A method is presented for increasing the spatial resolution of the three-dimensional (3-D) digital representation of coins by combining fine photometric detail derived from a set of photographic images with accurate geometric data from a 3-D laser scanner. 3-D reconstructions were made of the obverse and reverse sides of two ancient Roman denarii by processing sets of images captured under directional lighting in an illumination dome. Surface normal vectors were calculated by a "bounded regression" technique, excluding both shadow and specular components of reflection from the metallic surface. Because of the known difficulty in achieving geometric accuracy when integrating photometric normals to produce a digital elevation model, the low spatial frequencies were replaced by those derived from the point cloud produced by a 3-D laser scanner. The two datasets were scaled and registered by matching the outlines and correlating the surface gradients. The final result was a realistic rendering of the coins at a spatial resolution of 75 pixels/mm (13-μm spacing), in which the fine detail modulated the underlying geometric form of the surface relief. The method opens the way to obtain high quality 3-D representations of coins in collections to enable interactive online viewing.
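The photometric part of the pipeline can be sketched with classical least-squares photometric stereo, which recovers per-pixel normals and albedo from images taken under known directional lights; the paper's "bounded regression" additionally excludes shadowed and specular samples, which this sketch omits.

```python
# Hedged sketch: Lambertian photometric stereo, I = L @ (rho * n), solved
# per pixel by least squares. Light directions and data below are synthetic.
import numpy as np

def photometric_normals(images, light_dirs):
    """images: (K, H, W) intensities; light_dirs: (K, 3) unit vectors.
    Returns (H, W, 3) unit normals and (H, W) albedo."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                            # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W), G = rho * n
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)

# Toy check: a flat surface tilted towards +x, viewed under three lights
n_true = np.array([0.3, 0.0, 0.954])
lights = np.array([[0, 0, 1], [0.5, 0, 0.866], [0, 0.5, 0.866]])
imgs = np.clip(lights @ n_true, 0, None)[:, None, None] * np.ones((3, 4, 4))
normals, albedo = photometric_normals(imgs, lights)
print(normals[0, 0])                                     # ≈ n_true
```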
New long-zoom lens for 4K super 35mm digital cameras
NASA Astrophysics Data System (ADS)
Thorpe, Laurence J.; Usui, Fumiaki; Kamata, Ryuhei
2015-05-01
The world of television production is beginning to adopt 4K Super 35 mm (S35) image capture for a widening range of program genres that seek both the unique imaging properties of that large image format and the protection of their program assets in a world anticipating future 4K services. Documentary and natural history production in particular are transitioning to this form of production. The nature of their shooting demands long zoom lenses. In their traditional world of 2/3-inch digital HDTV cameras they have a broad choice in portable lenses - with zoom ranges as high as 40:1. In the world of Super 35mm the longest zoom lens is limited to 12:1 offering a telephoto of 400mm. Canon was requested to consider a significantly longer focal range lens while severely curtailing its size and weight. Extensive computer simulation explored countless combinations of optical and optomechanical systems in a quest to ensure that all operational requests and full 4K performance could be met. The final lens design is anticipated to have applications beyond entertainment production, including a variety of security systems.
Kamimura, Emi; Tanaka, Shinpei; Takaba, Masayuki; Tachi, Keita; Baba, Kazuyoshi
2017-01-01
Purpose: The aim of this study was to evaluate and compare the inter-operator reproducibility of three-dimensional (3D) images of teeth captured by a digital impression technique and a conventional impression technique in vivo. Materials and methods: Twelve participants with complete natural dentition were included in this study. A digital impression of the mandibular molars of these participants was made by two operators with different levels of clinical experience, 3 or 16 years, using an intra-oral scanner (Lava COS, 3M ESPE). A silicone impression was also made by the same operators using the double mix impression technique (Imprint3, 3M ESPE). Stereolithography (STL) data were directly exported from the Lava COS system, while STL data of a plaster model made from the silicone impression were captured by a three-dimensional (3D) laboratory scanner (D810, 3shape). The STL datasets recorded by the two different operators were compared using 3D evaluation software and superimposed using the best-fit-algorithm method (least-squares method, PolyWorks, InnovMetric Software) for each impression technique. Inter-operator reproducibility, as evaluated by average discrepancies of corresponding 3D data, was compared between the two techniques (Wilcoxon signed-rank test). Results: Visual inspection of the superimposed datasets revealed that discrepancies between repeated digital impressions were smaller than those observed with silicone impressions. Confirmation was forthcoming from statistical analysis revealing significantly smaller average inter-operator reproducibility for the digital impression technique (0.014 ± 0.02 mm) than for the conventional impression technique (0.023 ± 0.01 mm). Conclusion: The results of this in vivo study suggest that inter-operator reproducibility with a digital impression technique may be better than that of a conventional impression technique and is independent of the clinical experience of the operator. PMID:28636642
Procedures and Guidelines for Digitization (Scanning)
These documents establish EPA's approach for creating digitized versions of Agency documents and establish standards for capturing digitized content from paper and microform Agency documents and records.
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-010 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-005 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-004 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Quantifying biodiversity using digital cameras and automated image analysis.
NASA Astrophysics Data System (ADS)
Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.
2009-04-01
Monitoring the effects of extensive grazing on biodiversity in complex semi-natural habitats is labour-intensive, and there are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect: the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the north of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into composite statistical data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention only for those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step towards a hierarchical image processing framework, in which situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.
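The abstract reports 85-95% categorization accuracy from supervised neural methods but gives no implementation detail. The following is a minimal, hypothetical sketch of such a pipeline: colour-histogram features fed to a small scikit-learn multilayer perceptron, demonstrated on synthetic image arrays. The class labels ("empty", "sheep", "bird"), the feature choice and all parameters are assumptions, not the authors' method.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def colour_histogram(image, bins=8):
    """Concatenated per-channel histogram as a simple global feature vector."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(image.shape[-1])]
    return np.concatenate(feats)

# Synthetic stand-ins for trail-camera frames: (H, W, 3) uint8 arrays whose
# colour balance depends on a hypothetical class label.
rng = np.random.default_rng(1)
features, labels = [], []
for cls, shift in [("empty", 0), ("sheep", 40), ("bird", 80)]:
    for _ in range(60):
        img = np.clip(rng.normal(100 + shift, 30, size=(64, 64, 3)), 0, 255).astype(np.uint8)
        features.append(colour_histogram(img))
        labels.append(cls)

X, y = np.array(features), np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Small supervised neural classifier over the histogram features.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

In a deployment along the lines described above, images with low classifier confidence could be routed to a human observer, while frames whose predicted content is empty (e.g. triggered by cloud movement) could be deleted automatically.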
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2015-07-01
In the field of orthodontic planning, the creation of a complete digital dental model to simulate and predict treatments is of utmost importance. Nowadays, orthodontists use panoramic radiographs (PAN) and dental crown representations obtained by optical scanning. However, these data do not contain any 3D information regarding tooth root geometries. A reliable orthodontic treatment should instead take into account entire geometrical models of dental shapes in order to better predict tooth movements. This paper presents a methodology to create complete 3D patient dental anatomies by combining digital mouth models and panoramic radiographs. The modeling process is based on crown surfaces, reconstructed by optical scanning, and root geometries, obtained by adapting anatomical CAD templates to patient-specific information extracted from the radiographic data. The radiographic process is virtually replicated on the digital crown geometries through the Discrete Radon Transform (DRT). The resulting virtual PAN image is used to integrate the actual radiographic data and the digital mouth model. This procedure provides root references on the 3D digital crown models, which guide a shape adjustment of the dental CAD templates. The complete geometrical models are finally created by merging the dental crowns, captured by optical scanning, and the root geometries, obtained from the CAD templates. Copyright © 2015 Elsevier Ltd. All rights reserved.
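The key computational step above is replicating the radiographic projection with a Discrete Radon Transform. As a small 2D illustration of a DRT only (the published method operates on 3D crown geometries along a panoramic trajectory; the phantom, angle sampling and function names here are assumptions), the sketch below sums a rotated image along one axis for each projection angle:

import numpy as np
from scipy.ndimage import rotate

def discrete_radon(image, angles_deg):
    """Discrete Radon transform: line integrals (column sums) of the image
    rotated to each projection angle. Returns shape (len(angles_deg), width)."""
    sinogram = []
    for angle in angles_deg:
        rotated = rotate(image, angle, reshape=False, order=1)
        sinogram.append(rotated.sum(axis=0))
    return np.array(sinogram)

# Toy phantom: a bright disk standing in for a tooth cross-section.
n = 128
yy, xx = np.mgrid[:n, :n]
phantom = ((xx - 70) ** 2 + (yy - 60) ** 2 < 15 ** 2).astype(float)

angles = np.linspace(0.0, 180.0, 90, endpoint=False)
virtual_projection = discrete_radon(phantom, angles)
print(virtual_projection.shape)   # (90, 128)

Each row of the resulting sinogram is the simulated attenuation profile for one projection angle; in the paper's pipeline, an analogous projection of the crown geometry yields the virtual PAN image that is matched against the real radiograph.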
Raghunath, Vignesh; Braxton, Melissa O.; Gagnon, Stephanie A.; Brunyé, Tad T.; Allison, Kimberly H.; Reisch, Lisa M.; Weaver, Donald L.; Elmore, Joann G.; Shapiro, Linda G.
2012-01-01
Context: Digital pathology has the potential to dramatically alter the way pathologists work, yet little is known about pathologists’ viewing behavior while interpreting digital whole slide images. While tracking pathologist eye movements when viewing digital slides may be the most direct method of capturing pathologists’ viewing strategies, this technique is cumbersome and technically challenging to use in remote settings. Tracking pathologist mouse cursor movements may serve as a practical method of studying digital slide interpretation, and mouse cursor data may illuminate pathologists’ viewing strategies and time expenditures in their interpretive workflow. Aims: To evaluate the utility of mouse cursor movement data, in addition to eye-tracking data, in studying pathologists’ attention and viewing behavior. Settings and Design: Pathologists (N = 7) viewed 10 digital whole slide images of breast tissue that were selected using a random stratified sampling technique to include a range of breast pathology diagnoses (benign/atypia, carcinoma in situ, and invasive breast cancer). A panel of three expert breast pathologists established a consensus diagnosis for each case using a modified Delphi approach. Materials and Methods: Participants’ foveal vision was tracked using SensoMotoric Instruments RED 60 Hz eye-tracking system. Mouse cursor movement was tracked using a custom MATLAB script. Statistical Analysis Used: Data on eye-gaze and mouse cursor position were gathered at fixed intervals and analyzed using distance comparisons and regression analyses by slide diagnosis and pathologist expertise. Pathologists’ accuracy (defined as percent agreement with the expert consensus diagnoses) and efficiency (accuracy and speed) were also analyzed. Results: Mean viewing time per slide was 75.2 seconds (SD = 38.42). Accuracy (percent agreement with expert consensus) by diagnosis type was: 83% (benign/atypia); 48% (carcinoma in situ); and 93% (invasive). Spatial coupling was close between eye-gaze and mouse cursor positions (highest frequency ∆x was 4.00px (SD = 16.10), and ∆y was 37.50px (SD = 28.08)). Mouse cursor position moderately predicted eye gaze patterns (Rx = 0.33 and Ry = 0.21). Conclusions: Data detailing mouse cursor movements may be a useful addition to future studies of pathologists’ accuracy and efficiency when using digital pathology. PMID:23372984
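The statistical analysis above compares time-aligned gaze and cursor coordinates using distance measures and per-axis regressions. Below is a minimal sketch of that kind of analysis on synthetic traces; the sampling rate, noise model, and variable names are assumptions (the study used an SMI eye tracker and a custom MATLAB script), and the synthetic data will not reproduce the published offsets or R values.

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
n = 4500                                   # e.g. roughly 75 s of samples at 60 Hz

# Synthetic viewer traces: cursor loosely follows gaze with an offset and noise.
gaze = np.cumsum(rng.normal(0, 5, size=(n, 2)), axis=0) + 500.0
cursor = gaze + rng.normal([4.0, 37.0], [16.0, 28.0], size=(n, 2)) \
         + rng.normal(0, 60, size=(n, 2))

# Spatial coupling: per-sample offsets between cursor and gaze positions.
delta = cursor - gaze
print("mean ∆x, ∆y (px):", delta.mean(axis=0).round(2))

# Per-axis regression of gaze position on cursor position (cf. Rx, Ry above).
rx = linregress(cursor[:, 0], gaze[:, 0]).rvalue
ry = linregress(cursor[:, 1], gaze[:, 1]).rvalue
print(f"Rx = {rx:.2f}, Ry = {ry:.2f}")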