Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-26
... Images, and Components Thereof; Receipt of Complaint; Solicitation of Comments Relating to the Public... Devices for Capturing and Transmitting Images, and Components Thereof, DN 2869; the Commission is... importation of certain electronic devices for capturing and transmitting images, and components thereof. The...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-15
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-831] Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof; Commission Determination Not To Review an Initial... certain electronic devices for capturing and transmitting images, and components thereof. The complaint...
Simultaneous acquisition of differing image types
Demos, Stavros G
2012-10-09
A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.
Micro-optical system based 3D imaging for full HD depth image capturing
NASA Astrophysics Data System (ADS)
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
2012-03-01
A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full HD depth image with depth accuracy on the mm scale, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a concurrent color/depth sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
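The time-of-flight principle that such a system relies on maps the measured phase delay of the modulated illumination to distance. A minimal Python sketch of that relation, with illustrative numbers only (the paper's actual calibration is not reproduced here):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad, mod_freq_hz):
    """Depth from the phase delay of a modulated light signal:
    depth = c * phase / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Maximum depth before the measured phase wraps past 2*pi."""
    return C / (2.0 * mod_freq_hz)

# At 20 MHz modulation the unambiguous range is roughly 7.5 m.
```

A phase delay of pi corresponds to half the unambiguous range, which is why higher modulation frequencies trade measurement range for depth precision.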
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
Design and implementation of a contactless multiple hand feature acquisition system
NASA Astrophysics Data System (ADS)
Zhao, Qiushi; Bu, Wei; Wu, Xiangqian; Zhang, David
2012-06-01
In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner; that is, users need not touch any part of the device during capture. The palmprint is imaged under visible illumination, while the palm vein and palm dorsal vein are imaged under near-infrared (NIR) illumination. Capture is controlled by computer and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted-sum fusion rule. Experimental results show that although the verification accuracy of each single modality is not as high as the state of the art, the fusion result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is competent for contactless multimodal hand-based biometrics.
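The weighted-sum score fusion described above can be sketched in a few lines; the weights and score ranges below are illustrative, not those used by the authors:

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matching score into [0, 1] before fusion."""
    return (score - lo) / (hi - lo)

def fuse_scores(scores, weights):
    """Weighted-sum fusion of per-modality matching scores.
    Assumes scores are already normalized and weights sum to 1."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))
```

In practice the weights are tuned on a development set so that more reliable modalities contribute more to the fused score.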
In situ characterization of the brain-microdevice interface using Device Capture Histology
Woolley, Andrew J.; Desai, Himanshi A.; Steckbeck, Mitchell A.; Patel, Neil K.; Otto, Kevin J.
2011-01-01
Accurate assessment of brain-implantable microdevice bio-integration remains a formidable challenge. Prevailing histological methods require device extraction prior to tissue processing, often disrupting and removing the tissue of interest that had been surrounding the device. The Device-Capture Histology method, presented here, overcomes many limitations of the conventional Device-Explant Histology method by collecting the device and surrounding tissue intact for subsequent labeling. With the implant remaining in situ, accurate and precise images of the morphologically preserved tissue at the brain/microdevice interface can then be collected and quantified. First, this article presents the Device-Capture Histology method for obtaining and processing the intact, undisturbed microdevice-tissue interface and imaging it using fluorescent labeling and confocal microscopy. Second, this article gives examples of how to quantify features found in the captured peridevice tissue. We also share histological data capturing 1) the impact of microdevice implantation on tissue, 2) the effects of an experimental anti-inflammatory coating, 3) a dense grouping of cell nuclei encapsulating a long-term implant, and 4) atypical oligodendrocyte organization neighboring a long-term implant. Data sets collected using the Device-Capture Histology method are presented to demonstrate the significant advantages of processing the intact microdevice-tissue interface, and to underscore the utility of the method in understanding the effects of brain-implantable microdevices on nearby tissue. PMID:21802446
Stable image acquisition for mobile image processing applications
NASA Astrophysics Data System (ADS)
Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker
2015-02-01
Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users. Their performance as well as versatility increases over time. This creates the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object, as well as on the image quality that can be achieved under consideration of motion and environmental effects.
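One common way to automate a quality-gated capture trigger of the kind described above is a focus measure such as the variance of a Laplacian response; the threshold below is a hypothetical stand-in for the paper's quality criteria, not the authors' actual method:

```python
def laplacian_variance(img):
    """Focus measure: variance of a 4-neighbour Laplacian over interior pixels.
    A sharp image produces strong local contrast and a high variance."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = 4 * img[y][x] - img[y-1][x] - img[y+1][x] - img[y][x-1] - img[y][x+1]
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def should_capture(img, threshold):
    """Trigger the automated capture only once the image is sharp enough."""
    return laplacian_variance(img) >= threshold
```

A real system would combine such an image-quality gate with the pose and motion estimates from the device sensors before firing the shutter.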
Real-time computer treatment of THz passive device images with the high image quality
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device, and to active THz imaging systems as well. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We developed original spatial filters that allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we developed an approach that suppresses the noise after computer processing and yields a good quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate, and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
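A spatial filter of the kind discussed above can be illustrated with a plain median filter, which suppresses isolated noise pixels; the authors' own filters are proprietary and not reproduced here, so this is only a generic sketch:

```python
def median_filter(img, k=3):
    """k x k median filter over a 2-D list of pixel values.
    Border pixels are handled by clamping indices to the image edge."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(img[yy][xx])
            vals.sort()
            out[y][x] = vals[len(vals) // 2]
    return out
```

The median is a natural choice for impulse-like sensor noise because, unlike a mean filter, it discards outlier values entirely rather than smearing them into neighbouring pixels.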
[Electronic Device for Retinal and Iris Imaging].
Drahanský, M; Kolář, R; Mňuk, T
This paper describes the design and construction of a new device for automatic capture of the eye retina and iris. The device has two possible uses: biometric purposes (person recognition based on eye characteristics) or medical purposes, as a supporting diagnostic device. Keywords: eye retina, eye iris, device, acquisition, image.
A smartphone-based chip-scale microscope using ambient illumination.
Lee, Seung Ah; Yang, Changhuei
2014-08-21
Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the image resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
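The pixel super-resolution step can be sketched with a simple shift-and-add reconstruction onto a finer grid; the paper's actual reconstruction is more sophisticated, and the sub-pixel shifts here are assumed known rather than estimated from the tilting motion:

```python
def shift_and_add(frames, shifts, factor):
    """Place low-res frames onto a grid `factor` times finer, according to
    known sub-pixel (dy, dx) shifts, and average overlapping samples."""
    h, w = len(frames[0]), len(frames[0][0])
    H, W = h * factor, w * factor
    acc = [[0.0] * W for _ in range(H)]
    cnt = [[0] * W for _ in range(H)]
    for frame, (sy, sx) in zip(frames, shifts):
        oy, ox = round(sy * factor), round(sx * factor)
        for y in range(h):
            for x in range(w):
                Y, X = y * factor + oy, x * factor + ox
                if 0 <= Y < H and 0 <= X < W:
                    acc[Y][X] += frame[y][x]
                    cnt[Y][X] += 1
    # average accumulated samples; unobserved cells stay at 0
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(W)] for y in range(H)]
```

With enough diverse illumination angles, most high-resolution grid cells receive at least one sample, which is what lifts the resolution below the physical pixel pitch.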
Agreement and reading time for differently-priced devices for the digital capture of X-ray films.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-03-01
We assessed the reliability of three digital capture devices: a film digitizer (which cost US$15,000), a flat-bed scanner (US$1,800) and a digital camera (US$450). Reliability was measured as the agreement between six observers when reading images acquired from a single device, and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading time between devices: the mean reading time was 93 s for the film digitizer, 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.
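Inter-observer agreement of the kind reported above is typically quantified with Cohen's kappa; a minimal two-reader sketch (the study's own statistic may differ, e.g. a weighted kappa for ordinal variables such as nodule size):

```python
def cohens_kappa(a, b, categories):
    """Cohen's kappa between two readers' categorical ratings:
    observed agreement corrected for agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n)         # chance agreement
             for c in categories)
    return (po - pe) / (1 - pe)
```

A kappa of 1 means perfect agreement, 0 means no better than chance; "moderate" agreement is conventionally around 0.4 to 0.6.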
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
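The "consistent projection transformation" test at the core of the claim can be illustrated with a pinhole projection and a reprojection-error check; the pose convention (rotation matrix R, translation t) and focal length f are assumptions made for this sketch, not taken from the patent:

```python
def project(R, t, f, point):
    """Pinhole projection of a 3-D point under camera pose (R, t)
    with focal length f; returns image coordinates (u, v)."""
    # camera coordinates: Xc = R @ X + t
    xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    return (f * xc[0] / xc[2], f * xc[1] / xc[2])

def reprojection_error(R, t, f, point, observed):
    """Pixel distance between a projected model point and its observed feature.
    A candidate pose is 'consistent' when this error is small for the
    matched subset of feature points."""
    u, v = project(R, t, f, point)
    return ((u - observed[0]) ** 2 + (v - observed[1]) ** 2) ** 0.5
```

Systems of this kind typically score many candidate correspondence subsets and keep the pose whose reprojection error is small across the largest consistent subset.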
NASA Astrophysics Data System (ADS)
Boaggio, K.; Bandamede, M.; Bancroft, L.; Hurler, K.; Magee, N. B.
2016-12-01
We report on details of continuing instrument development and deployment of a novel balloon-borne device for capturing and characterizing atmospheric ice and aerosol particles, the Ice Cryo Encapsulator by Balloon (ICE-Ball). The device is designed to capture and preserve cirrus ice particles, maintaining them at cold equilibrium temperatures, so that high-altitude particles can be recovered, transferred intact, and then imaged under SEM at an unprecedented resolution (approximately 3 nm maximum resolution). In addition to cirrus ice particles, high-altitude aerosol particles are also captured, imaged, and analyzed for geometry, chemical composition, and activity as ice nucleating particles. Prototype versions of ICE-Ball have successfully captured and preserved high-altitude ice particles and aerosols, then returned them for recovery, SEM imaging, and analysis. New improvements include 1) the ability to capture particles from multiple narrowly-defined altitudes on a single payload, 2) high-quality measurements of coincident temperature, humidity, and high-resolution video at capture altitude, 3) the ability to capture particles during both ascent and descent, 4) better characterization of particle collection volume and collection efficiency, and 5) improved isolation and characterization of the capture-cell cryo environment. This presentation provides detailed capability specifications for anyone interested in using measurements, collaborating on continued instrument development, or including this instrument in ongoing or future field campaigns.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. The application of this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 G samples/s and 8-bit resolution for capturing the output voltage signal that carries grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices for accelerating the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
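The pre-processing stage, reconstructing image lines from the serial ADC stream and flagging frames that contain an object, can be sketched as below; line length and threshold are illustrative, and the real system performs this on FPGA/GPU hardware rather than in Python:

```python
def reconstruct_frames(signal, line_len):
    """Cut a serial 1-D sample stream into image lines of fixed length,
    mimicking the reshaping of the STEAM camera's time-encoded output."""
    return [signal[i:i + line_len]
            for i in range(0, len(signal) - line_len + 1, line_len)]

def detect_object(frame, threshold):
    """Flag a frame if any pixel exceeds the threshold (particle present),
    so only interesting frames are passed on for full identification."""
    return any(v > threshold for row in frame for v in row)
```

Gating frames this way is what keeps the downstream identification workload tractable at multi-GB/s input rates.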
Palmprint Recognition Across Different Devices.
Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming
2012-01-01
In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smartphones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method. Experimental results show that orientation coding based methods achieved promising recognition performance for PRADD.
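Scale normalization by measured palm width amounts to resampling each image so palms appear at a common width; a nearest-neighbour sketch (the authors' exact interpolation and measurement procedure are not specified here):

```python
def normalize_scale(img, measured_width, target_width):
    """Rescale a 2-D image by the ratio of a target palm width to the
    palm width measured in the image (nearest-neighbour resampling)."""
    scale = target_width / measured_width
    h, w = len(img), len(img[0])
    H, W = max(1, round(h * scale)), max(1, round(w * scale))
    return [[img[min(h - 1, int(y / scale))][min(w - 1, int(x / scale))]
             for x in range(W)] for y in range(H)]
```

After this step, palms imaged at different distances by different devices occupy comparable pixel extents, which is a precondition for matching.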
Optical cell monitoring system for underwater targets
NASA Astrophysics Data System (ADS)
Moon, SangJun; Manzur, Fahim; Manzur, Tariq; Demirci, Utkan
2008-10-01
We demonstrate a cell-based detection system that could be used for monitoring an underwater target volume and environment using a microfluidic chip and a charge-coupled device (CCD). This technique allows us to capture specific cells and enumerate them over a large area on a microchip. The microfluidic chip and a lens-less imaging platform were then merged to monitor cell populations and morphologies as a system that may find use in distributed sensor networks. The chip, featuring surface chemistry and automatic cell imaging, was fabricated from a cover glass slide, double-sided adhesive film and a transparent poly(methyl methacrylate) (PMMA) slab. The optically clear chip allows detection of cells with a CCD sensor. These chips were fabricated with a laser cutter without the use of photolithography. We utilized CD4+ cells that are captured on the floor of a microfluidic chip due to the ability to address specific target cells using antibody-antigen binding. Captured CD4+ cells were imaged with a fluorescence microscope to verify the chip specificity and efficiency. We achieved a capture efficiency of 70.2 +/- 6.5% and a specificity of 88.8 +/- 5.4% for CD4+ T lymphocytes (n = 9 devices). Bright field images of the captured cells in the 24 mm × 4 mm × 50 μm microfluidic chip were obtained with the CCD sensor in one second. We achieved an inexpensive system that rapidly captures cells and images them using a lens-less CCD system. This microfluidic device can be modified for use in single cell detection utilizing a cheap light-emitting diode (LED) chip instead of a wide range CCD system.
Dynamic integral imaging technology for 3D applications (Conference Presentation)
NASA Astrophysics Data System (ADS)
Huang, Yi-Pai; Javidi, Bahram; Martínez-Corral, Manuel; Shieh, Han-Ping D.; Jen, Tai-Hsiang; Hsieh, Po-Yuan; Hassanfiroozi, Amir
2017-05-01
Depth and resolution are always a trade-off in integral imaging technology. With dynamic adjustable devices, the two factors of integral imaging can be fully compensated with time-multiplexed addressing. These dynamic devices can be mechanically or electrically driven. In this presentation, we will mainly focus on discussing various liquid crystal devices that can change the focal length, scan and shift the image position, or switch between 2D/3D modes. By using liquid crystal devices, dynamic integral imaging has been successfully applied to 3D display, capturing, and bio-imaging applications.
NASA Astrophysics Data System (ADS)
Zordan, Michael D.; Grafton, Meggie M. G.; Park, Kinam; Leary, James F.
2010-02-01
The rapid detection of foodborne pathogens is increasingly important due to the rising occurrence of contaminated food supplies. We have previously demonstrated the design of a hybrid optical device that has the capability to perform real-time surface plasmon resonance (SPR) and epi-fluorescence imaging. We now present the design of a microfluidic biochip consisting of a two-dimensional array of functionalized gold spots. The spots on the array have been functionalized with capture peptides that specifically bind E. coli O157:H7 or Salmonella enterica. This array is enclosed by a PDMS microfluidic flow cell. A magnetically pre-concentrated sample is injected into the biochip, and whole pathogens bind to the capture array. The previously constructed optical device is being used to detect the presence and identity of captured pathogens using SPR imaging. This detection occurs in a label-free manner and does not require the culture of bacterial samples. Molecular imaging can also be performed using the epi-fluorescence capabilities of the device to determine pathogen state, or to validate the identity of the captured pathogens using fluorescently labeled antibodies. We demonstrate the real-time screening of a sample for the presence of E. coli O157:H7 and Salmonella enterica. Additionally, the mechanical properties of the microfluidic flow cell will be assessed, and the effect of these properties on pathogen capture will be examined.
Járvás, Gábor; Varga, Tamás; Szigeti, Márton; Hajba, László; Fürjes, Péter; Rajta, István; Guttman, András
2018-02-01
As a continuation of our previously published work, this paper presents a detailed evaluation of a microfabricated cell capture device utilizing a doubly tilted micropillar array. The device was fabricated using a novel hybrid technology based on the combination of proton beam writing and conventional lithography techniques. Tilted pillars offer unique flow characteristics and support enhanced fluidic interaction for improved immunoaffinity-based cell capture. The performance of the microdevice was evaluated by an in-house developed single-cell tracking system based on image sequence analysis. Individual cell tracking allowed in-depth analysis of the cell-chip surface interaction mechanism from a hydrodynamic point of view. Simulation results were validated using the hybrid device and the optimized surface functionalization procedure. Finally, the cell capture capability of this new generation microdevice was demonstrated by efficiently arresting cells from an HT29 cell-line suspension.
Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu
2014-01-01
As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on the automated stage of a microscope. Images are captured at multiple spots on the device during the operations to monitor samples in the microchambers in parallel; yet the device position may vary at different time points throughout the operations as the device moves back and forth on the motorized microscope stage. Here, we report an image-based positioning strategy to realign the chamber positions before every microscopic image is recorded. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248
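The realignment step, locating an alignment mark in the captured image and computing the stage correction back to its preset position, can be sketched with sum-of-squared-differences template matching; the paper's actual image processing algorithms may differ:

```python
def find_mark(image, template):
    """Locate a template in an image by minimum sum-of-squared-differences.
    Returns the (row, col) of the best-matching top-left corner."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum((image[y + dy][x + dx] - template[dy][dx]) ** 2
                      for dy in range(th) for dx in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

def realignment_offset(found, preset):
    """Stage correction that moves the detected mark back to its preset position."""
    return (preset[0] - found[0], preset[1] - found[1])
```

Applying the returned offset to the motorized stage before each recording keeps every chamber at a fixed position in the image series, which is what makes long-term parallel monitoring comparable across time points.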
Preliminary experiments on quantification of skin condition
NASA Astrophysics Data System (ADS)
Kitajima, Kenzo; Iyatomi, Hitoshi
2014-03-01
In this study, we investigated a preliminary assessment method for skin conditions, such as moisturizing property and fineness of the skin, using image analysis alone. We captured facial images from volunteer subjects aged between their 30s and 60s with a Pocket Micro (R) device (Scalar Co., Japan). This device has two image capturing modes: a normal mode and a non-reflection mode that uses an equipped polarization filter. We captured skin images from a total of 68 spots on the subjects' faces using both modes (a total of 136 skin images). The moisture-retaining property of the skin and a subjective evaluation score of skin fineness on a 5-point scale were also obtained in advance as a gold standard (mean +/- SD: 35.15 +/- 3.22 μS and 3.45 +/- 1.17, respectively). We extracted a total of 107 image features from each image and built linear regression models estimating the abovementioned criteria with stepwise feature selection. The developed model for skin moisture achieved an MSE of 1.92 μS with 6 selected parameters, while the model for skin fineness achieved an MSE of 0.51 scale points with 7 parameters under leave-one-out cross-validation. We confirmed that the developed models predicted the moisture-retaining property and fineness of the skin appropriately from captured images alone.
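The regression-with-leave-one-out evaluation used above can be sketched for a single feature; the stepwise selection over 107 features is omitted, and the data in the test are synthetic:

```python
def fit_ols(xs, ys):
    """Ordinary least squares for one feature: y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loo_mse(xs, ys):
    """Leave-one-out cross-validated mean squared error: refit the model
    with each sample held out and score the prediction on that sample."""
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_ols(tx, ty)
        errs.append((ys[i] - (a * xs[i] + b)) ** 2)
    return sum(errs) / len(errs)
```

Leave-one-out is a sensible choice at this study's sample size (68 spots), since it uses nearly all data for each fit while still scoring on unseen samples.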
Hyperchromatic lens for recording time-resolved phenomena
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frayer, Daniel K.
A method and apparatus for the capture of a high number of quasi-continuous effective frames of 2-D data from an event at very short time scales (from less than 10^-12 to more than 10^-8 seconds) is disclosed, which allows for short recording windows and a high effective number of frames. Active illumination from a chirped laser pulse directed at the event creates a reflection in which wavelength depends upon time and spatial position; this is utilized to encode temporal phenomena onto wavelength. A hyperchromatic lens system receives the reflection and maps wavelength onto axial position. An image capture device, such as a holography or plenoptic imaging device, captures the resultant focal stack from the hyperchromatic lens system in both the spatial (imaging) and longitudinal (temporal) axes. The hyperchromatic lens system incorporates a combination of diffractive and refractive components to maximally separate focal position as a function of wavelength.
Face verification system for Android mobile devices using histogram based features
NASA Astrophysics Data System (ADS)
Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu
2016-07-01
This paper proposes a face verification system that runs on Android mobile devices. In this system, facial image is captured by a built-in camera on the Android device firstly, and then face detection is implemented using Haar-like features and AdaBoost learning algorithm. The proposed system verify the detected face using histogram based features, which are generated by binary Vector Quantization (VQ) histogram using DCT coefficients in low frequency domains, as well as Improved Local Binary Pattern (Improved LBP) histogram in spatial domain. Verification results with different type of histogram based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm by using publicly available ORL database and facial images captured by an Android tablet.
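The histogram features and score fusion mentioned above can be sketched as follows. This uses a plain 8-neighbour LBP rather than the paper's Improved LBP, and the function names and the equal default weight are assumptions:

```python
def lbp_histogram(img):
    # img: 2-D list of grayscale values; returns a normalized
    # 256-bin histogram of basic 8-neighbour LBP codes
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # 8 neighbours, clockwise from the top-left pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    n = sum(hist)
    return [v / n for v in hist]

def fused_score(s_dct, s_lbp, w=0.5):
    # weighted average of the two matchers' similarity scores
    return w * s_dct + (1 - w) * s_lbp
```

In the paper, the DCT/VQ score and the LBP score are computed separately and combined exactly by such a weighted average.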
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-02-01
A common teleradiology practice is digitizing films. Because the costs of specialized digitizers are very high, there is a trend toward using conventional scanners and digital cameras. Statistical clinical studies, which are very difficult to carry out, are required to determine the accuracy of these devices. The purpose of this study was to compare three capture devices in terms of their capacity to detect several image characteristics. Spatial resolution, contrast, gray levels, and geometric deformation were compared for a specialized ICR digitizer (US$ 15,000), a conventional UMAX scanner (US$ 1,800), and a LUMIX digital camera (US$ 450, but requiring an additional support system and a light box for about US$ 400). Test patterns printed on films were used. The results showed gray levels lower than the real values for all three devices, and acceptable contrast and low geometric deformation for all three. All three devices are appropriate solutions, but a digital camera requires more operator training and more settings.
Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling
NASA Astrophysics Data System (ADS)
Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.
2016-04-01
Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured with the device's digital camera, and an interface is available for annotating (interpreting) the image using lines and polygons. Image-to-geometry registration is then performed using a developed algorithm, initialised using the coarse pose from the on-board orientation and positioning sensors. The annotations made on the captured images are then available in the 3D model coordinate system for overlay and export. This workflow allows geologists to make interpretations and conceptual models in the field, which can then be linked to and refined in office workflows for later MPS property modelling.
Stevenson, Paul; Finnane, Anna R; Soyer, H Peter
2016-03-21
Capturing clinical images is becoming more prevalent in everyday clinical practice, and dermatology lends itself to the use of clinical photographs and teledermatology. "Store-and-forward", whereby clinical images are forwarded to a specialist who later responds with an opinion on diagnosis and management, is a popular form of teledermatology. Store-and-forward teledermatology has proven accurate and reliable, accelerating the process of diagnosis and treatment and improving patient outcomes. Practitioners' personal smartphones and other devices are often used to capture and communicate clinical images. Patient privacy can be placed at risk with the use of this technology. Practitioners should obtain consent for taking images, explain how they will be used, apply appropriate security in their digital communications, and delete images and other data on patients from personal devices after saving these to patient health records. Failing to use appropriate security precautions poses an emerging medico-legal risk for practitioners.
Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images
Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong
2015-01-01
In this paper, we present a reliable and robust biometric verification method based on the bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages. First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and an infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposition levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically, and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of our proposed method compared to other methods. PMID:26703596
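The wavelet-based fusion of step (2) can be illustrated with a single-level 2-D Haar transform and a simple hybrid rule (average the approximation band, keep the stronger detail coefficient). This is a sketch of the general technique, not the authors' exact transform or fusing rule:

```python
import numpy as np

def haar2d(a):
    # one-level 2-D Haar analysis: returns (LL, LH, HL, HH)
    e, o = a[::2], a[1::2]               # even / odd rows
    lo, hi = (e + o) / 2, (e - o) / 2    # row-wise low / high pass
    def cols(m):
        e, o = m[:, ::2], m[:, 1::2]
        return (e + o) / 2, (e - o) / 2
    LL, LH = cols(lo)
    HL, HH = cols(hi)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # exact inverse of haar2d
    def icols(lo, hi):
        m = np.empty((lo.shape[0], lo.shape[1] * 2))
        m[:, ::2], m[:, 1::2] = lo + hi, lo - hi
        return m
    lo, hi = icols(LL, LH), icols(HL, HH)
    a = np.empty((lo.shape[0] * 2, lo.shape[1]))
    a[::2], a[1::2] = lo + hi, lo - hi
    return a

def fuse(img1, img2):
    # hybrid rule: average the approximation band, and keep the
    # larger-magnitude coefficient in each detail band
    c1, c2 = haar2d(img1), haar2d(img2)
    fused = [(c1[0] + c2[0]) / 2]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2d(*fused)
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check on the transform pair.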
Omniview motionless camera orientation system
NASA Technical Reports Server (NTRS)
Zimmermann, Steven D. (Inventor); Martin, H. Lee (Inventor)
1999-01-01
A device for omnidirectional image viewing providing pan-and-tilt orientation, rotation, and magnification within a hemispherical field-of-view that utilizes no moving parts. The imaging device is based on the principle that the image from a fisheye lens, which produces a circular image of an entire hemispherical field-of-view, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical field-of-view without the need for any mechanical mechanisms. The preferred embodiment of the image transformation device can provide corrected images at real-time rates, compatible with standard video equipment. The device can be used for any application where a conventional pan-and-tilt or orientation mechanism might be considered, including inspection, monitoring, surveillance, and target acquisition.
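The pan-and-tilt correction amounts to mapping each requested viewing direction back to a source pixel in the circular fisheye image. A sketch under an assumed equidistant fisheye model (the patent's actual transform may differ):

```python
import math

def fisheye_pixel(pan, tilt, cx, cy, R):
    # Map a viewing direction (pan/tilt in radians) to coordinates in
    # the circular fisheye image, assuming an equidistant fisheye whose
    # circle of radius R (centred at cx, cy) covers one hemisphere.
    # Unit direction vector of the viewing ray (z = optical axis):
    x = math.cos(tilt) * math.sin(pan)
    y = math.sin(tilt)
    z = math.cos(tilt) * math.cos(pan)
    alpha = math.acos(max(-1.0, min(1.0, z)))   # angle from optical axis
    r = R * alpha / (math.pi / 2)               # equidistant projection
    phi = math.atan2(y, x)                      # azimuth in the image plane
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```

A software dewarper evaluates this mapping (or its inverse) for every output pixel of the corrected view, which is exactly the per-region transformation the patent performs in hardware.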
Precision disablement aiming system
Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott
2016-02-16
A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.
Image analysis for microelectronic retinal prosthesis.
Hallum, L E; Cloherty, S L; Lovell, N H
2008-01-01
By way of extracellular stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete luminous spots, so-called phosphenes, in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part thereof, a digital camera, or some other photosensitive array, captures frames, the frames are analyzed, and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method that allows the assessment of image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers, and their perceptual errors of omission and commission.
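The mutual-information measure used to score image analysis schemes can be sketched for discrete intensity sequences with a generic plug-in estimator; the authors' treatment of stimulus statistics is more elaborate than this:

```python
import math
from collections import Counter

def mutual_information(a, b):
    # Mutual information in bits between two equal-length sequences of
    # discrete intensities (e.g. flattened input image vs. phosphene image).
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = 0.0
    for (x, y), c in pab.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts c, pa[x], pb[y]
        mi += (c / n) * math.log2(c * n / (pa[x] * pb[y]))
    return mi
```

MI is maximal (equal to the entropy) when the low-resolution output preserves everything about the input, and zero when the two are independent, which is what makes it a natural score for candidate downsampling schemes.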
Seamless presentation capture, indexing, and management
NASA Astrophysics Data System (ADS)
Hilbert, David M.; Cooper, Matthew; Denoue, Laurent; Adcock, John; Billsus, Daniel
2005-10-01
Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.
Client/server approach to image capturing
NASA Astrophysics Data System (ADS)
Tuijn, Chris; Stokes, Earle
1998-01-01
The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed, as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile).
Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan-server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.
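A generic input device model with absolute scan parameters, as described above, might look like the following sketch; the field names and units are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScanJob:
    """Device-independent ("absolute") scan job description."""
    x_mm: float               # origin of the scan area on the bed
    y_mm: float
    width_mm: float           # physical extent of the area to scan
    height_mm: float
    resolution_dpi: int       # requested optical resolution
    color_space: str          # e.g. "RGB", "Gray"
    icc_profile: Optional[str] = None  # device characterization to apply

def to_device_pixels(job: ScanJob):
    # translate absolute millimetre units into device pixels,
    # as a driver for a concrete scanner would have to do
    w = round(job.width_mm / 25.4 * job.resolution_dpi)
    h = round(job.height_mm / 25.4 * job.resolution_dpi)
    return w, h
```

Because the job is expressed in physical units plus an ICC profile, the same definition can be dispatched to any scanner the server knows how to drive.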
Block Copolymer Membranes for Efficient Capture of a Chemotherapy Drug
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, X. Chelsea; Oh, Hee Jeung; Yu, Jay F.
In this paper, we introduce the use of block copolymer membranes for an emerging application, "drug capture". The polymer is incorporated in a new class of biomedical devices, referred to as ChemoFilter, which is an image-guided, temporarily deployable endovascular device designed to increase the efficacy of chemotherapy-based cancer treatment. We show that block copolymer membranes consisting of functional sulfonated polystyrene end blocks and a structural polyethylene middle block (S-SES) are capable of capturing doxorubicin, a chemotherapy drug. We focus on the relationship between the morphology of the membrane in the ChemoFilter device and the efficacy of doxorubicin capture measured in vitro. Using small-angle X-ray scattering and cryogenic scanning transmission electron microscopy, we discovered that rapid doxorubicin capture is associated with the presence of water-rich channels in the lamellar-forming S-SES membranes in an aqueous environment.
Image processing system design for microcantilever-based optical readout infrared arrays
NASA Astrophysics Data System (ADS)
Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu
2012-12-01
Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts the technology's high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper mainly focuses on the image capturing and processing system of this new optical-readout uncooled infrared imaging technology. The image capturing and processing system consists of software and hardware. We build the core image-processing hardware platform on TI's high-performance DSP chip, the TMS320DM642, and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power-consumption CMOS sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver based on TI's class/mini-driver model, and the network output program based on the NDK kit, for image capturing, processing, and transmission. Experiments show that the system has the advantages of high capture resolution and fast processing speed. The speed of the network transmission is up to 100 Mbps.
Medical photography: current technology, evolving issues and legal perspectives.
Harting, M T; DeWees, J M; Vela, K M; Khirallah, R T
2015-04-01
Medical photographic image capture and data management has undergone a rapid and compelling change in complexity over the last 20 years. This is because of multiple factors, including significant advances in ease of photograph capture, alongside an evolution of mechanisms of data portability/dissemination, combined with governmental focus on health information privacy. Literature to guide medical, legal, governmental and business professionals when dealing with issues related to medical photography is virtually nonexistent. Herein, we will address the breadth of uses of medical photography, device properties/specific devices utilised for image capture, methods of data transfer and dissemination and patient perceptions and attitudes regarding photography in a medical setting. In addition, we will address the legal implications, including legal precedent, copyright and privacy law, informed consent, protected health information and the Health Insurance Portability and Accountability Act (HIPAA), as they pertain to medical photography. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bar-Am, Kfir; Cataldo, Leigh; Bolton, Frank J.; Kahn, Bruce S.; Levitz, David
2018-02-01
Cervical cancer is a leading cause of death for women in low-resource settings. In order to better detect cervical dysplasia, a low-cost multi-spectral colposcope was developed utilizing low-cost LEDs and an area scan camera. The device is capable of both traditional colposcopic imaging and multi-spectral image capture. Following initial bench testing, the device was deployed to a gynecology clinic, where it was used to image patients in a colposcopy setting. Both traditional colposcopic images and spectral data from patients were uploaded to a cloud server for remote analysis. Multi-spectral imaging (30-second capture) took place before any clinical procedure; the standard of care was followed thereafter. If acetic acid was used in the standard of care, a post-acetowhitening colposcopic image was also captured. In analyzing the data, normal and abnormal regions were identified in the colposcopic images by an expert clinician. Spectral data were fit to a theoretical model based on diffusion theory, yielding information on scattering and absorption parameters. Data were grouped according to the clinician's labeling of the tissue, as well as any additional clinical test results available (Pap, HPV, biopsy). Altogether, N=20 patients were imaged in this study, 9 of them abnormal. In comparing normal and abnormal regions of interest from patients, substantial differences were measured in blood content, while differences in oxygen saturation parameters were more subtle. These results suggest that optical measurements made using low-cost spectral imaging systems can distinguish between normal and pathological tissues.
Mobile app for chemical detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klunder, Gregory; Cooper, Chadway R.; Satcher, Jr., Joe H.
The present invention incorporates the camera of a mobile device (phone, iPad, etc.) to capture an image from a chemical test kit and process the image to provide chemical information. A simple user interface enables automatic evaluation of the image, data entry, and GPS information, and maintains records from previous analyses.
Single-Image Distance Measurement by a Smart Mobile Device.
Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling
2017-12-01
Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
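The core back-projection step, turning a pixel row into a ground distance using the accelerometer-derived tilt, can be sketched with a simplified pinhole model. The parameter names are assumptions, and the paper's two-distance calibration and magnification-ratio model are not reproduced here:

```python
import math

def ground_distance(row, rows, vfov, pitch, height):
    # Back-project an image row to a distance on the ground plane.
    # row:    pixel row (0 = top of image)
    # rows:   image height in pixels
    # vfov:   vertical field of view in radians
    # pitch:  downward camera tilt from horizontal, in radians
    #         (from the accelerometer)
    # height: camera height above the ground
    # Angular offset of the ray through this row from the optical axis:
    off = ((rows / 2 - row) / (rows / 2)) * (vfov / 2)
    ray = pitch - off          # downward angle of the ray from horizontal
    if ray <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return height / math.tan(ray)
```

Rows lower in the image give steeper rays and therefore shorter ground distances, which is the geometric fact the calibration exploits.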
Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro
2012-09-10
We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using the fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The fast Fourier transforms are performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
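The two enlargement methods, increasing the pixels of each elemental image versus increasing the number of elemental images, can be sketched with nearest-neighbour replication. The interpolation choice is an assumption, and the authors' GPU/FFT pipeline is not shown:

```python
import numpy as np

def enlarge_pixels(ip, k=2):
    # Method 1: multiply the pixel count of every elemental image by k
    # ip: array of shape (rows, cols, h, w) holding the elemental images
    return ip.repeat(k, axis=2).repeat(k, axis=3)

def enlarge_elementals(ip, k=2):
    # Method 2: multiply the number of elemental images by k along both
    # grid axes (here by nearest-neighbour replication of the grid)
    return ip.repeat(k, axis=0).repeat(k, axis=1)
```

Either route turns a 4K elemental-image array into an 8K one; the two differ in whether the extra resolution goes into each elemental image or into the lens-array sampling grid.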
OC ToGo: bed site image integration into OpenClinica with mobile devices
NASA Astrophysics Data System (ADS)
Haak, Daniel; Gehlen, Johan; Jonas, Stephan; Deserno, Thomas M.
2014-03-01
Imaging and image-based measurements nowadays play an essential role in controlled clinical trials, but electronic data capture (EDC) systems insufficiently support the integration of images captured by mobile devices (e.g., smartphones and tablets). The web application OpenClinica has established itself as one of the world's leading EDC systems and is used to collect, manage and store data of clinical trials in electronic case report forms (eCRFs). In this paper, we present a mobile application for instantaneous integration of images into OpenClinica directly during examination at the patient's bedside. The communication between the Android application and OpenClinica is based on the simple object access protocol (SOAP) and representational state transfer (REST) web services for metadata, and the secure file transfer protocol (SFTP) for image transfer, respectively. OpenClinica's web services are used to query context information (e.g., existing studies, events and subjects) and to import data into the eCRF, as well as to export eCRF metadata and structural information. A stable image transfer is ensured and progress information (e.g., remaining time) is visualized to the user. The workflow is demonstrated for a European multi-center registry, in which patients with calciphylaxis disease are included. Our approach improves the EDC workflow, saves time, and reduces costs. Furthermore, data privacy is enhanced, since storage of private health data on the imaging devices becomes obsolete.
NASA Astrophysics Data System (ADS)
Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-02-01
Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
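The minimum average correlation energy (MACE) filter referenced above has a closed form in the frequency domain, h = D⁻¹X(X⁺D⁻¹X)⁻¹u, where X stacks the DFTs of the training images, D is the diagonal average power spectrum, and u holds the constrained correlation peaks. A small numpy sketch (the regularization constant and unit peak values are assumptions):

```python
import numpy as np

def mace_filter(images, peaks=None):
    # images: list of equal-size 2-D training images
    # returns the frequency-domain MACE filter H satisfying X^H h = u
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)  # d x N
    d, N = X.shape
    u = np.ones(N, dtype=complex) if peaks is None else np.asarray(peaks, dtype=complex)
    # diagonal of D: average power spectrum of the training set
    # (small constant added to avoid division by zero)
    D = np.mean(np.abs(X) ** 2, axis=1) + 1e-12
    Dinv_X = X / D[:, None]
    A = X.conj().T @ Dinv_X                  # N x N matrix X^H D^-1 X
    h = Dinv_X @ np.linalg.solve(A, u)       # closed-form MACE solution
    return h.reshape(images[0].shape)
```

By construction, correlating any training image with the filter yields exactly its constrained peak value, while the average correlation-plane energy is minimized, which is what gives MACE filters their sharp peaks.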
Plenoptic imaging with second-order correlations of light
NASA Astrophysics Data System (ADS)
Pepe, Francesco V.; Scarcelli, Giuliano; Garuccio, Augusto; D'Angelo, Milena
2016-01-01
Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable tridimensional imaging in a single shot. We demonstrate that it is possible to implement plenoptic imaging through second-order correlations of chaotic light, thus making it possible to overcome the typical limitations of classical plenoptic devices.
Mobile computing device configured to compute irradiance, glint, and glare of the sun
Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib
2014-03-11
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
Galletti, Giuseppe; Sung, Matthew S; Vahdat, Linda T; Shah, Manish A; Santana, Steven M; Altavilla, Giuseppe; Kirby, Brian J; Giannakakou, Paraskevi
2014-01-07
Circulating tumor cells (CTCs) have emerged as a reliable source of tumor cells, and their concentration has prognostic implications. CTC capture offers real-time access to cancer tissue without the need for an invasive biopsy, while their phenotypic and molecular interrogation can provide insight into the biological changes of the tumor that occur during treatment. The majority of CTC capture methods are based on EpCAM expression as a surface marker of tumor-derived cells. However, EpCAM protein expression levels can be significantly downregulated during cancer progression as a consequence of the process of epithelial-to-mesenchymal transition. In this paper, we describe a novel HER2 (Human Epidermal Receptor 2)-based microfluidic device for the isolation of CTCs from the peripheral blood of patients with HER2-expressing solid tumors. We selected HER2 as an alternative to EpCAM because the receptor is biologically and therapeutically relevant in several solid tumors, such as breast cancer (BC), where it is overexpressed in 30% of patients and expressed in 90%, and gastric cancer (GC), in which HER2 presence is identified in more than 60% of cases. We tested the performance of various anti-HER2 antibodies in a panel of nine different BC cell lines with varying HER2 protein expression levels, using immunoblotting, confocal microscopy, live-cell imaging and flow cytometry analyses. The antibody associated with the highest capture efficiency and sensitivity for HER2-expressing cells on the microfluidic device was the one that performed best in the live-cell imaging and flow cytometry assays, as opposed to the fixed-cell analyses, suggesting that recognition of the native conformation of the HER2 extracellular epitope on living cells was essential for the specificity and sensitivity of CTC capture. Next, we tested the performance of the HER2 microfluidic device using blood from metastatic breast and gastric cancer patients.
The HER2 microfluidic device captured CTCs in 9/9 blood samples. Thus, the described HER2-based microfluidic device can be considered a valid, clinically relevant method for CTC capture in HER2-expressing solid cancers.
Evaluation of Night Vision Devices for Image Fusion Studies
2004-12-01
July 2004. http://www.sensorsmag.com/articles/0400/34/main.shtml Task, Harry L., Hartman, Richard T., Marasco, Peter L., Methods for Measuring...Press, Bellingham, Washington, 1998. Burt, Peter J. & Kolczynski, Raymond J., David Sarnoff Research Center, Enhanced Image Capture through Fusion
NASA Technical Reports Server (NTRS)
1998-01-01
PixelVision, Inc., has developed a series of integrated imaging engines capable of high-resolution image capture at dynamic speeds. This technology was used originally at Jet Propulsion Laboratory in a series of imaging engines for a NASA mission to Pluto. By producing this integrated package, Charge-Coupled Device (CCD) technology has been made accessible to a wide range of users.
Scanning computed confocal imager
George, John S.
2000-03-14
There is provided a confocal imager comprising a light source emitting light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs it onto a target, and passes light reflected from the target to a video capture device, which receives the reflected light and transfers a digital image of it to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation, the invention omits the beam splitter and captures light passed through the target.
Objective analysis of image quality of video image capture systems
NASA Astrophysics Data System (ADS)
Rowberg, Alan H.
1990-07-01
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems digitize the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each image has been designed to support certain tests, including noise, accuracy, linearity, gray-scale range, stability, slew rate, and pixel alignment. The image capture systems vary widely in these characteristics, as well as in the presence or absence of other artifacts, such as shading and moiré pattern. Other accessories, such as video distribution amplifiers and noise filters, can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide.
While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray and fail to resolve the lines, or produce horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are also clinical implications: some anatomy or pathology may not be visualized if an image capture system is used improperly.
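The slew-rate test with the one-pixel alternating line pattern lends itself to a simple numerical check. The sketch below is illustrative only (it is not part of the original study, and the data layout is an assumption): it computes the Michelson contrast of a captured row; a system that blurs the pattern to an average gray drives this value toward zero.

```python
import numpy as np

def line_pattern_contrast(row):
    """Michelson contrast of a captured one-pixel-wide alternating
    black/white line pattern.  A slew rate too slow to track the
    pattern blurs it toward uniform gray, pushing contrast to 0."""
    even, odd = row[0::2].mean(), row[1::2].mean()
    hi, lo = max(even, odd), min(even, odd)
    return (hi - lo) / (hi + lo) if (hi + lo) else 0.0
```

A perfectly tracked 0/255 pattern yields a contrast of 1.0, while a fully blurred row of constant gray yields 0.0.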
An electronic pan/tilt/zoom camera system
NASA Technical Reports Server (NTRS)
Zimmermann, Steve; Martin, H. Lee
1991-01-01
A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the distorted image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different magnifications and pan-tilt-rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
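The correction described above can be sketched as an inverse mapping: for each pixel of the desired perspective view, compute where it falls in the circular fisheye image. The following is a minimal illustration under an assumed equidistant fisheye model (radial distance proportional to the off-axis angle); the actual transformation used in the device is not specified in the abstract.

```python
import numpy as np

def perspective_to_fisheye(u, v, pan, tilt, zoom, R):
    """Map a virtual perspective-view coordinate (u, v) to fisheye
    image coordinates.  Assumes an equidistant fisheye covering a
    hemisphere within a circle of radius R pixels; pan/tilt are in
    radians, zoom is the virtual focal length in the units of u, v."""
    ray = np.array([u, v, zoom], dtype=float)
    ray /= np.linalg.norm(ray)                    # unit viewing ray
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])  # tilt about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pan about y
    x, y, z = Ry @ Rx @ ray
    theta = np.arccos(np.clip(z, -1.0, 1.0))      # angle off optical axis
    phi = np.arctan2(y, x)
    r = R * theta / (np.pi / 2)                   # equidistant model
    return r * np.cos(phi), r * np.sin(phi)
```

Looking straight down the optical axis maps to the center of the fisheye circle, while a 90° pan maps to its rim, matching the hemispherical FOV.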
Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.
Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei
2017-09-22
The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a type of solid-state image sensor widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion, and other problems. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in a traditional MRF is extended to a non-neighboring clique defined on locally consistent blocks, based on two clues: both the atmospheric light and the transmission map satisfy the property of local consistency. In this framework, our model can strengthen the restriction over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus effectively addressing inadequate detail recovery and alleviating color distortion. Moreover, the locally consistent MRF framework recovers detail while maintaining better dehazing results, which effectively improves the quality of images captured by a CMOS image sensor. Experimental results verified that the proposed method combines the advantages of detail recovery and color preservation.
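The MRF estimation itself is beyond an abstract-level sketch, but the haze formation model it builds on is the standard one, I = J·t + A·(1 − t). Assuming the atmospheric light A and transmission map t have already been estimated (in the paper, via the locally consistent MRF), recovering the scene radiance J is a per-pixel inversion:

```python
import numpy as np

def recover_radiance(I, A, t, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t).
    I: hazy image, shape (H, W, 3), values in [0, 1]
    A: atmospheric light, shape (3,)
    t: transmission map, shape (H, W); clamped to t_min so that
       near-zero transmission does not amplify noise."""
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```

With t = 1 everywhere (no haze) the input is returned unchanged, which is a quick sanity check on the inversion.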
The role of electron irradiation history in liquid cell transmission electron microscopy.
Moser, Trevor H; Mehta, Hardeep; Park, Chiwoo; Kelly, Ryan T; Shokuhfar, Tolou; Evans, James E
2018-04-01
In situ liquid cell transmission electron microscopy (LC-TEM) allows dynamic nanoscale characterization of systems in a hydrated state. Although powerful, this technique remains impaired by issues of repeatability that limit experimental fidelity and hinder the identification and control of some variables underlying observed dynamics. We detail new LC-TEM devices that improve experimental reproducibility by expanding available imaging area and providing a platform for investigating electron flux history on the sample. Irradiation history is an important factor influencing LC-TEM results that has, to this point, been largely qualitatively and not quantitatively described. We use these devices to highlight the role of cumulative electron flux history on samples from both nanoparticle growth and biological imaging experiments and demonstrate capture of time-zero, low-dose images on beam-sensitive samples. In particular, the ability to capture pristine images of biological samples, where the acquired image is the first time that the cell experiences significant electron flux, allowed us to determine that nanoparticle movement compared to the cell membrane was a function of cell damage and therefore an artifact rather than visualizing cell dynamics in action. These results highlight just a subset of the new science that is accessible with LC-TEM through the new multiwindow devices with patterned focusing aids.
Compressive Coded-Aperture Multimodal Imaging Systems
NASA Astrophysics Data System (ADS)
Rueda-Chacon, Hoover F.
Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, and polarization. For instance, spectral images are modeled as 3D cubes with two spatial coordinates and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to three spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, multimodal imaging to date requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge the individual measurements into a single image. Therefore, new ways to efficiently capture more than three spectral channels of 3D time-varying spatial information, with a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS).
To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures either block or pass the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture CSI architectures. On another front, owing to the rich information contained in the infrared spectrum as well as the depth domain, this thesis also extends the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) with as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways.
With this, we intend to advance the state of the art in compressive sensing systems that extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because it would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
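The optimization step mentioned above is typically an l1-regularized least-squares problem. As a hedged illustration (not the thesis's actual reconstruction algorithm, whose details are not given here), the classic ISTA iteration solves min_x 0.5||Hx − y||² + λ||x||₁ for a sensing matrix H and compressed measurements y:

```python
import numpy as np

def ista(H, y, lam=0.01, n_iter=200):
    """Iterative Shrinkage-Thresholding Algorithm for the problem
    min_x 0.5*||H x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = x - H.T @ (H @ x - y) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

With H = I the iteration reduces to soft-thresholding of y, which serves as a quick sanity check of the implementation.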
Lipecz, Agnes; Tsorbatzoglou, Alexis; Hassan, Ziad; Berta, Andras; Modis, Laszlo; Nemeth, Gabor
2017-05-11
To analyze the effect of accommodation on anterior segment data (corneal and anterior chamber parameters) induced by short-term reading in a healthy, nonpresbyopic adult patient group. Images of both eyes of nonpresbyopic volunteers were captured with a Scheimpflug device (Pentacam HR) in a nonaccommodative state. Fifteen minutes of reading followed; further accommodation was then achieved through fixation of the built-in target of the Pentacam HR, and new images were captured by the device. Anterior segment parameters were observed and the differences were analyzed. Fifty-two healthy eyes of 26 subjects (age range 20.04-28.58 years) were analyzed. No significant difference was observed in the keratometric values before and after the accommodative task (p = 0.35). A statistically significant difference was measured in the 5.0-mm-diameter and the 7.0-mm-diameter corneal volume (p = 0.01 and p = 0.03) between accommodation states. Corneal aberrometric data did not change significantly during short-term accommodation. Significant differences were observed between nonaccommodative and accommodative states of the eyes for all measured anterior chamber parameters. Among the parameters of the cornea, only corneal volume changed during the short-term accommodation process, showing some fine changes of the cornea with accommodation in young, emmetropic patients. The position of the pupil and the anterior chamber parameters were observed to change with accommodation as captured by a Scheimpflug device.
Up-Close Look at 'Bread-Basket'
NASA Technical Reports Server (NTRS)
2004-01-01
NASA's Mars Exploration Rover Spirit took this image with its front hazard-avoidance camera on sol 175 (June 30, 2004). It captures the instrument deployment device in perfect position as the rover uses its microscopic imager to get an up-close look at the rock target 'Bread-Basket.'
A Foundation for Enterprise Imaging: HIMSS-SIIM Collaborative White Paper.
Roth, Christopher J; Lannum, Louis M; Persons, Kenneth R
2016-10-01
Care providers today routinely obtain valuable clinical multimedia with mobile devices, scope cameras, ultrasound, and many other modalities at the point of care. Image capture and storage workflows may be heterogeneous across an enterprise, and as a result, they often are not well incorporated in the electronic health record. Enterprise Imaging refers to a set of strategies, initiatives, and workflows implemented across a healthcare enterprise to consistently and optimally capture, index, manage, store, distribute, view, exchange, and analyze all clinical imaging and multimedia content to enhance the electronic health record. This paper is intended to introduce Enterprise Imaging as an important initiative to clinical and informatics leadership, and outline its key elements of governance, strategy, infrastructure, common multimedia content, acquisition workflows, enterprise image viewers, and image exchange services.
Initial results from a video-laser rangefinder device
Neil A. Clark
2000-01-01
Three hundred and nine width measurements at various heights to 10 m on a metal light pole were calculated from video images captured with a prototype video-laser rangefinder instrument. Data were captured at distances from 6 to 15 m. The endpoints for the width measurements were manually selected to the nearest pixel from individual video frames. Chi-square...
Introducing the depth transfer curve for 3D capture system characterization
NASA Astrophysics Data System (ADS)
Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas
2011-03-01
3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements in the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
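The disparity cue described above obeys the standard rectified-stereo relation Z = f·B/d, which is the quantity a depth-transfer-curve evaluation would probe. A minimal sketch follows (the paper's actual methodology is not reproduced here):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z of a scene point from its horizontal disparity d in a
    rectified stereo pair: Z = f * B / d (pinhole cameras with
    parallel optical axes).  Zero disparity means a point at infinity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, a camera pair with f = 800 px and a 65 mm baseline that measures a disparity of 13 px implies a depth of 4.0 m.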
A High-Resolution Minimicroscope System for Wireless Real-Time Monitoring.
Wang, Zongjie; Boddeda, Akash; Parker, Benjamin; Samanipour, Roya; Ghosh, Sanjoy; Menard, Frederic; Kim, Keekyoung
2018-07-01
A compact, cost-effective, high-performance microscope that enables real-time imaging of cells and lab-on-a-chip devices is in high demand in cell biology and biomedical engineering. This paper presents the design and application of an inexpensive wireless minimicroscope with resolution up to 2592 × 1944 pixels and speed up to 90 frames/s. The minimicroscope system was built on a commercial embedded system (Raspberry Pi). We modified a camera module and adopted an inverse dual-lens system to obtain a clear field of view and appropriate magnification for objects tens of micrometers in size. The system was capable of capturing time-lapse images and transferring image data wirelessly. The entire system can be operated wirelessly and cordlessly in a conventional cell culture incubator. The developed minimicroscope was used to monitor the attachment and proliferation of NIH-3T3 and HEK 293 cells inside an incubator for 50 h. In addition, the minimicroscope was used to monitor a droplet generation process in a microfluidic device. The high-quality images captured by the minimicroscope enabled automated analysis of experimental parameters. These successful applications demonstrate the great potential of the developed minimicroscope for monitoring various biological samples and microfluidic devices. This paper presents the design of a high-resolution minimicroscope system that enables wireless real-time imaging of cells inside an incubator. The system has been verified as a useful tool for obtaining high-quality images and videos for long-term automated quantitative analysis of biological samples and lab-on-a-chip devices.
NASA Astrophysics Data System (ADS)
Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin
2006-02-01
A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.
Web surveillance system using platform-based design
NASA Astrophysics Data System (ADS)
Lin, Shin-Yo; Tsai, Tsung-Han
2004-04-01
A methodology for an SOPC platform-based design environment for multimedia communications is developed. We embed a soft-core processor to perform image compression in an FPGA, then plug an Ethernet daughter board into the SOPC development platform. On this basis, a web surveillance platform system is presented. The web surveillance system consists of three parts: image capture, web server, and JPEG compression. In this architecture, the user can control the surveillance system remotely. By configuring an IP address for the Ethernet daughter board, the user can access the surveillance system via a browser. When the user accesses the surveillance system, the CMOS sensor captures the remote image and feeds it to the embedded processor, which immediately performs JPEG compression. The user then receives the compressed data via Ethernet. The whole system is implemented on an APEX20K200E484-2X device.
Pre-Capture Privacy for Small Vision Sensors.
Pittaluga, Francesco; Koppal, Sanjeev Jagannatha
2017-11-01
The next wave of micro and nano devices will create a world with trillions of small networked cameras. This will lead to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving high-quality field-of-view and resolution within the constraints of mass and volume. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.) our theory has impact for smaller devices.
Metrological characterization of 3D imaging devices
NASA Astrophysics Data System (ADS)
Guidi, G.
2013-04-01
Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for the metrological parameters that identify a device's capability to capture a real scene. For this reason, several national and international organizations have, over the last ten years, been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, which also works on laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper surveys the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy, and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g., angles between planes, distances between barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both triangulation and direct range detection.
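The comparison between measured points and a certified shape can be illustrated for the common case of a certified sphere. The sketch below is an illustrative method, not one prescribed by any of the cited protocols: it fits a sphere by linear least squares and reports each point's signed radial deviation from the fit.

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere fit to an (N, 3) point cloud, using the
    linearization |p|^2 = 2 c.p + (r^2 - |c|^2).  Returns (center, radius)."""
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def radial_deviations(pts, center, radius):
    """Signed deviation of each measured point from the fitted sphere."""
    return np.linalg.norm(pts - center, axis=1) - radius
```

Statistics of the residuals (RMS, peak-to-valley) then serve as the uncertainty and accuracy figures the protocols are after.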
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.
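The bi-directional transformations mentioned for each device are, in the simplest linear case, a 3×3 matrix between linear device RGB and a reference space, with the matrix inverse giving the return path. A hedged sketch, using the well-known sRGB-to-XYZ (D65) matrix as a stand-in for a measured device characterization:

```python
import numpy as np

# sRGB (linear) -> CIE XYZ under D65, used here as a stand-in for a
# device profile; a real workflow would use the device's own measured
# primaries, as the metadata described in the paper would record.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def device_to_reference(rgb_linear):
    """Forward transform: linear device RGB -> reference XYZ."""
    return RGB_TO_XYZ @ rgb_linear

def reference_to_device(xyz):
    """Inverse transform: reference XYZ -> linear device RGB."""
    return np.linalg.solve(RGB_TO_XYZ, xyz)
```

A round trip through the reference space returns the original device values, which is exactly the consistency across devices that the film metadata is meant to guarantee.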
Microscopy imaging device with advanced imaging properties
Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei
2015-11-24
Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm², and to direct epifluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.
Orthoscopic real-image display of digital holograms.
Makowski, P L; Kozacki, T; Zaperty, W
2017-10-01
We present a practical solution for the long-standing problem of depth inversion in real-image holographic display of digital holograms. It relies on a field lens inserted in front of the spatial light modulator device addressed by a properly processed hologram. The processing algorithm accounts for pixel size and wavelength mismatch between capture and display devices in a way that prevents image deformation. Complete images of large dimensions are observable from one position with a naked eye. We demonstrate the method experimentally on a 10-cm-long 3D object using a single full-HD spatial light modulator, but it can supplement most holographic displays designed to form a real image, including circular wide angle configurations.
Melanoma detection using smartphone and multimode hyperspectral imaging
NASA Astrophysics Data System (ADS)
MacKinnon, Nicholas; Vasefi, Fartash; Booth, Nicholas; Farkas, Daniel L.
2016-04-01
This project's goal is to determine how to effectively implement a technology continuum, from a low-cost, remotely deployable imaging device to a more sophisticated multimode imaging system, within standard clinical practice. In this work a smartphone is used in conjunction with an optical attachment to capture cross-polarized and collinear color images of a nevus, which are analyzed to quantify chromophore distribution. The nevus is also imaged by a multimode hyperspectral system, our proprietary SkinSpect™ device. The relative accuracy and biological plausibility of the two systems' algorithms are compared to assess the feasibility of in-home or primary-care-practitioner smartphone screening prior to rigorous clinical analysis via the SkinSpect.
Microscopy imaging device with advanced imaging properties
Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei
2016-10-25
Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm², and to direct epifluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.
Microscopy imaging device with advanced imaging properties
Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei
2016-11-22
Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm², and to direct epifluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.
Microscopy imaging device with advanced imaging properties
Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei
2017-04-25
Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm², and to direct epifluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.
NASA Astrophysics Data System (ADS)
Higgins, Laura M.; Pierce, Mark C.
2014-08-01
A compact handpiece combining high resolution fluorescence (HRF) imaging with optical coherence tomography (OCT) was developed to provide real-time assessment of oral lesions. This multimodal imaging device simultaneously captures coregistered en face images with subcellular detail alongside cross-sectional images of tissue microstructure. The HRF imaging acquires a 712×594 μm² field-of-view at the sample with a spatial resolution of 3.5 μm. The OCT images were acquired to a depth of 1.5 mm with axial and lateral resolutions of 9.3 and 8.0 μm, respectively. HRF and OCT images are simultaneously displayed at 25 fps. The handheld device was used to image a healthy volunteer, demonstrating the potential for in vivo assessment of the epithelial surface for dysplastic and neoplastic changes at the cellular level, while simultaneously evaluating submucosal involvement. We anticipate potential applications in real-time assessment of oral lesions for improved surveillance and surgical guidance.
Nonlinear filtering for character recognition in low quality document images
NASA Astrophysics Data System (ADS)
Diaz-Escobar, Julia; Kober, Vitaly
2014-09-01
Optical character recognition in scanned printed documents is a well-studied task in which capture conditions such as sheet position, illumination, contrast and resolution are controlled. Nowadays it is often more practical to use a mobile device for document capture than a scanner. As a consequence, the quality of document images is often poor owing to geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for the detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.
Device and Method of Scintillating Quantum Dots for Radiation Imaging
NASA Technical Reports Server (NTRS)
Burke, Eric R. (Inventor); DeHaven, Stanton L. (Inventor); Williams, Phillip A. (Inventor)
2017-01-01
A radiation imaging device includes a radiation source and a microstructured detector comprising a material defining a surface that faces the radiation source. The material includes a plurality of discrete cavities having openings in the surface. The detector also includes a plurality of quantum dots disposed in the cavities. The quantum dots are configured to interact with radiation from the radiation source and to emit visible photons that indicate the presence of radiation. A digital camera and optics may be used to capture images formed by the detector in response to exposure to radiation.
A New View of Civil War Photography
ERIC Educational Resources Information Center
Percoco, James A.
2014-01-01
Students today are used to a rich visual dimension of living. Students carry with them to school each day devices that allow them to capture their lives in real time. This is possible because of the hard labor of men who toiled for hours to capture for time immemorial images that have become engrained in the American narrative. When teaching the…
A low cost mobile phone dark-field microscope for nanoparticle-based quantitative studies.
Sun, Dali; Hu, Tony Y
2018-01-15
Dark-field microscope (DFM) analysis of nanoparticle binding signal is highly useful for a variety of research and biomedical applications, but current applications for nanoparticle quantification rely on expensive DFM systems. The cost, size, and limited robustness of these DFMs limit their utility for non-laboratory settings. Most nanoparticle analyses use high-magnification DFM images, which are labor intensive to acquire and subject to operator bias. Low-magnification DFM image capture is faster, but is subject to background from surface artifacts and debris, although image processing can partially compensate for background signal. We thus mated an LED light source, a dark-field condenser and a 20× objective lens with a mobile phone camera to create an inexpensive, portable and robust DFM system suitable for use in non-laboratory conditions. This proof-of-concept mobile DFM device weighs less than 400 g and costs less than $2000, but analysis of images captured with this device reveals nanoparticle quantitation results similar to those acquired with a much larger and more expensive desktop DFM system. Our results suggest that similar devices may be useful for quantification of stable, nanoparticle-based activity and quantitation assays in resource-limited areas where conventional assay approaches are not practical. Copyright © 2017 Elsevier B.V. All rights reserved.
Cloud-based processing of multi-spectral imaging data
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David
2017-03-01
Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would bring this technology, and with it knowledge, to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without demanding too much of the system resources, computation time or battery of the end-point device. Cloud environments were designed to ease these problems by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare this format to other image formats in size, noise and correctness. We present the cloud configuration used for segmenting movies into frames that can later be used for further analysis.
A new 4-dimensional imaging system for jaw tracking.
Lauren, Mark
2014-01-01
A non-invasive 4D imaging system that produces high resolution time-based 3D surface data has been developed to capture jaw motion. Fluorescent microspheres are brushed onto both tooth and soft-tissue areas of the upper and lower arches to be imaged. An extraoral hand-held imaging device, operated about 12 cm from the mouth, captures a time-based set of perspective image triplets of the patch areas. Each triplet, containing both upper and lower arch data, is converted to a high-resolution 3D point mesh using photogrammetry, providing the instantaneous relative jaw position. Eight 3D positions per second are captured. Using one of the 3D frames as a reference, a 4D model can be constructed to describe the incremental free body motion of the mandible. The surface data produced by this system can be registered to conventional 3D models of the dentition, allowing them to be animated. Applications include integration into prosthetic CAD and CBCT data.
Landman, Adam; Emani, Srinivas; Carlile, Narath; Rosenthal, David I; Semakov, Simon; Pallin, Daniel J; Poon, Eric G
2015-01-02
Photographs are important tools to record, track, and communicate clinical findings. Mobile devices with high-resolution cameras are now ubiquitous, giving clinicians the opportunity to capture and share images from the bedside. However, secure and efficient ways to manage and share digital images are lacking. The aim of this study is to describe the implementation of a secure application for capturing and storing clinical images in the electronic health record (EHR), and to describe initial user experiences. We developed CliniCam, a secure Apple iOS (iPhone, iPad) application that allows for user authentication, patient selection, image capture, image annotation, and storage of images as a Portable Document Format (PDF) file in the EHR. We leveraged our organization's enterprise service-oriented architecture to transmit the image file from CliniCam to our enterprise clinical data repository. There is no permanent storage of protected health information on the mobile device. CliniCam also required connection to our organization's secure WiFi network. Resident physicians from emergency medicine, internal medicine, and dermatology used CliniCam in clinical practice for one month. They were then asked to complete a survey on their experience. We analyzed the survey results using descriptive statistics. Twenty-eight physicians participated and 19/28 (68%) completed the survey. Of the respondents who used CliniCam, 89% found it useful or very useful for clinical practice and easy to use, and wanted to continue using the app. Respondents provided constructive feedback on location of the photos in the EHR, preferring to have photos embedded in (or linked to) clinical notes instead of storing them as separate PDFs within the EHR. Some users experienced difficulty with WiFi connectivity which was addressed by enhancing CliniCam to check for connectivity on launch. CliniCam was implemented successfully and found to be easy to use and useful for clinical practice. 
CliniCam is now available to all clinical users in our hospital, providing a secure and efficient way to capture clinical images and to insert them into the EHR. Future clinical image apps should more closely link clinical images and clinical documentation and consider enabling secure transmission over public WiFi or cellular networks.
Optical character recognition of camera-captured images based on phase features
NASA Astrophysics Data System (ADS)
Diaz-Escobar, Julia; Kober, Vitaly
2015-09-01
Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images, and many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadow and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase carries much of the important information in an image, independently of the Fourier magnitude. In this work we therefore propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.
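The premise that Fourier phase, not magnitude, carries the structural information can be illustrated with a minimal sketch (not the authors' system): in phase-only correlation the cross-power spectrum is whitened so only phase contributes, making the alignment peak insensitive to a global illumination change.

```python
import numpy as np

def phase_correlation_peak(a, b):
    """Phase-only correlation: whiten the cross-power spectrum so that only
    the Fourier phase contributes, then locate the correlation peak."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * B.conj()
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(1)
img = rng.random((32, 32))
dim = 0.3 * img  # globally darker copy of the same pattern
print(phase_correlation_peak(img, dim))  # (0, 0): alignment unaffected by dimming
```

Because the uniform 0.3× dimming only scales the Fourier magnitude, the whitened spectrum is unchanged and the peak stays at zero offset.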
Development of a vision-based pH reading system
NASA Astrophysics Data System (ADS)
Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon
2015-10-01
pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radioisotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may suffer errors due to limitations of eyesight and inaccurate readings. In this paper we report a new device for pH reading and the related software. The proposed pH reading system is built around a vision algorithm based on an RGB library, and is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera and a data acquisition (DAQ) board. To improve sensitivity, we utilize the three primary colors of an LED (light-emitting diode) in the reading device; using three separate colors is better than a single white LED because the response at each wavelength can be assessed independently. The second is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program inserts the color codes of the pH paper into a database; in reading mode, the CCD camera captures the pH paper and the software compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images and library images, and saves them as Excel files.
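The reading-mode comparison against an RGB library can be sketched as a nearest-colour lookup. The pH values and reference colours below are hypothetical placeholders, not the paper's calibration data:

```python
import numpy as np

# Hypothetical reference library of pH-paper colours (RGB); a real system
# would populate this from calibrated captures, as the abstract describes.
PH_LIBRARY = {
    4.0: (220, 150, 60),
    7.0: (150, 180, 90),
    10.0: (60, 100, 160),
}

def read_ph(rgb):
    """Return the library pH whose reference colour is nearest
    (Euclidean distance in RGB) to the captured patch colour."""
    rgb = np.asarray(rgb, dtype=float)
    return min(PH_LIBRARY, key=lambda ph: np.linalg.norm(rgb - PH_LIBRARY[ph]))

print(read_ph((140, 175, 95)))  # 7.0
```

A production version would average over the patch region and work in a perceptually uniform space such as CIEL*a*b* rather than raw RGB.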
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-30
... capture unit, as required by the claims under the ALJ's construction of the limitations ``first... construction of the claim limitation, i.e., ``storing the image data in a buffer in one of two directions such...'' under the ALJ's construction of the
NASA Astrophysics Data System (ADS)
Gui, Chen; Wang, Kan; Li, Chao; Dai, Xuan; Cui, Daxiang
2014-02-01
Immunochromatographic assays are widely used to detect many analytes. CagA has been shown to be closely associated with the initiation of gastric carcinoma. Here, we report the development of a charge-coupled device (CCD)-based test strip reader combined with CdS quantum-dot-labeled lateral flow strips for the quantitative detection of CagA; the reader uses a 365-nm ultraviolet LED as the excitation light source and captures the test strip images through an acquisition module. The captured image is then transferred to a computer and processed by a software system. A revised weighted threshold histogram equalization (WTHE) image processing algorithm was applied to analyze the result. CdS quantum-dot-labeled lateral flow strips for the detection of CagA were prepared. One hundred serum samples from clinical patients with gastric cancer and from healthy people were prepared for detection, which demonstrated that the device enables rapid, stable, point-of-care detection with a sensitivity of 20 pg/mL.
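The WTHE step can be sketched roughly as follows. This is a simplified illustration of weighted threshold histogram equalisation (clamp and compress the grey-level probabilities before building the mapping), not the paper's revised algorithm; the `upper` and `power` parameters are illustrative assumptions.

```python
import numpy as np

def wthe_equalize(img, upper=0.01, power=0.5):
    """Simplified WTHE sketch: grey-level probabilities are clamped to
    `upper` and compressed by `power` before the cumulative mapping is
    built, taming the over-enhancement of plain histogram equalisation."""
    img = np.asarray(img, dtype=np.uint8)
    p = np.bincount(img.ravel(), minlength=256) / img.size
    w = np.minimum(p, upper) ** power      # weight-and-threshold step
    cdf = np.cumsum(w) / w.sum()           # normalised cumulative weights
    return (cdf[img] * 255).astype(np.uint8)

# A low-contrast ramp gains dynamic range after equalisation.
low = np.tile(np.arange(100, 140, dtype=np.uint8), (20, 1))
out = wthe_equalize(low)
print(int(out.min()), int(out.max()))
```

The clamped weights keep any single dominant grey level from consuming the whole output range, which matters for strip images dominated by background membrane.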
Mobility and orientation aid for blind persons using artificial vision
NASA Astrophysics Data System (ADS)
Costa, Gustavo; Gusberti, Adrián; Graffigna, Juan Pablo; Guzzo, Martín; Nasisi, Oscar
2007-11-01
Blind or vision-impaired persons are limited in their normal life activities. Mobility and orientation of blind persons is an ever-present research subject, because no complete solution has yet been reached for these activities, which pose certain risks for the affected persons. The current work presents the design and development of a device conceived to capture environment information through stereoscopic vision. The images captured by a pair of video cameras are transferred to and processed by configurable and sequential FPGA and DSP devices that issue action signals to a tactile feedback system. Optimal processing algorithms are implemented to perform this feedback in real time. The selected components make the device portable, so that the user can readily get used to wearing it.
NASA Astrophysics Data System (ADS)
Carbinatto, Fernanda M.; Inada, Natalia Mayumi; Lombardi, Welington; Cossetin, Natália Fernandez; Varoto, Cinthia; Kurachi, Cristina; Bagnato, Vanderlei Salvador
2015-06-01
The use of portable electronic devices, in particular mobile phones such as smartphones, is increasing not only for all the familiar applications but also for the diagnosis of diseases and the monitoring of treatments such as topical photodynamic therapy. The aim of this study is to evaluate the production of the photosensitizer protoporphyrin IX (PpIX) after topical application of a cream containing methyl aminolevulinate (MAL) to the cervix of patients diagnosed with cervical intraepithelial neoplasia (CIN), through fluorescence images captured after one and three hours, and to compare the images obtained with two devices (a Sony Xperia® mobile phone and an Apple iPod®). An increase in the fluorescence intensity of the cervix was observed three hours after cream application with both portable devices. However, because a dedicated image-processing program was used with the iPod® device, those images showed better resolution than the ones from the Sony phone, which lacked such a program. The one-hour group presented more selective fluorescence than the three-hour group. In conclusion, the use of portable devices to obtain images of PpIX fluorescence proved to be an effective tool, although improved software is needed to achieve better results.
NASA Astrophysics Data System (ADS)
Jaiswal, Mayoore; Horning, Matt; Hu, Liming; Ben-Or, Yau; Champlin, Cary; Wilson, Benjamin; Levitz, David
2018-02-01
Cervical cancer is the fourth most common cancer among women worldwide and is especially prevalent in low-resource settings due to a lack of screening and treatment options. Visual inspection with acetic acid (VIA) is a widespread and cost-effective screening method for cervical pre-cancer lesions, but its accuracy depends on the experience level of the health worker. Digital cervicography, capturing images of the cervix, enables review by an off-site expert or potentially a machine learning algorithm. These reviews require images of sufficient quality; however, image quality varies greatly across users. A novel algorithm was developed to evaluate the sharpness of images captured with MobileODT's digital cervicography device (the EVA System), in order to eventually provide feedback to the health worker. The key challenges are that the algorithm evaluates only a single image of each cervix; it needs to be robust to the variability of cervix images and fast enough to run in real time on a mobile device; and the machine learning model needs to be small enough to fit in a mobile device's memory, train on a small imbalanced dataset, and run in real time. In this paper, the focus scores of a preprocessed image and of a Gaussian-blurred version of the image are calculated using established methods and used as features. A feature selection metric is proposed to select the top features, which are then used in a random forest classifier to produce the final focus score. The resulting model, based on nine calculated focus scores, achieved significantly better accuracy than any single focus measure when tested on a holdout set of images. The area under the receiver operating characteristic curve was 0.9459.
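One of the "established" single-image focus measures referred to above can be sketched as follows. The variance-of-Laplacian score is a common choice for no-reference sharpness assessment; the paper does not identify its nine specific measures, so this is an illustrative stand-in.

```python
import numpy as np

def variance_of_laplacian(img):
    """Sharpness score: variance of a discrete Laplacian response.
    Higher values indicate more high-frequency detail, i.e. a sharper image."""
    img = np.asarray(img, dtype=float)
    # 4-neighbour discrete Laplacian via shifted views (interior pixels only)
    lap = (img[1:-1, :-2] + img[1:-1, 2:] +
           img[:-2, 1:-1] + img[2:, 1:-1] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

# A high-contrast checkerboard scores higher than a featureless patch.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
flat = np.full((32, 32), 128.0)
print(variance_of_laplacian(sharp) > variance_of_laplacian(flat))  # True
```

A classifier such as the paper's random forest would combine several such scores, computed on the raw and Gaussian-blurred image, into a single focus decision.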
Novel shadowless imaging for eyes-like diagnosis in vivo
NASA Astrophysics Data System (ADS)
Xue, Ning; Jiang, Kai; Li, Qi; Zhang, Lili; Ma, Li; Huang, Guoliang
2016-10-01
Eyes-like diagnosis is a traditional Chinese medicine method for many diseases, such as chronic gastritis, diabetes and hypertension. There is a close relationship between the viscera and the appearance of the eye: the white of the eye is divided into fourteen sections, each corresponding to different viscera, so the eye's appearance reflects the status of the viscera; in other words, it is an epitome of visceral health. In this paper, we developed a novel shadowless imaging technology and system for eyes-like diagnosis in vivo, consisting of an optical shadowless imaging device for capturing and saving images of patients' eyes and a computer linked to the device for image processing. A character-matching algorithm was developed to extract the characteristics of the white of the eye in the corresponding sections of the images taken by the optical shadowless imaging device; according to these characteristics, the presence of visceral disease can be inferred. A series of assays was carried out, and the results verified the feasibility of the eyes-like diagnosis technique.
ERIC Educational Resources Information Center
Arreguin-Anderson, Maria G.; Ruiz, Elsa Cantu
2013-01-01
The exploration into cultural practices occurring in households has become more fluid and transparent process thanks to the presence of mobile technologies that allow members of a group to capture daily occurrences. This case study explored ways in which three Latino preservice teachers used mobile devices to discover connections between…
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
2009-08-01
Light-weight, low-profile, high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing regardless of position at the operating table and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance of all viewers on the same image capture, and anti-cues (i.e., a reality disconnect) when the projected image remains static despite changes in head position, which can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image by changes in the spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software that enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and a receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image.
This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
Technical Challenges of Enterprise Imaging: HIMSS-SIIM Collaborative White Paper.
Clunie, David A; Dennison, Don K; Cram, Dawn; Persons, Kenneth R; Bronkalla, Mark D; Primo, Henri Rik
2016-10-01
This white paper explores the technical challenges and solutions for acquiring (capturing) and managing enterprise images, particularly those involving visible light applications. The types of acquisition devices used for various general-purpose photography and specialized applications including dermatology, endoscopy, and anatomic pathology are reviewed. The formats and standards used, and the associated metadata requirements and communication protocols for transfer and workflow are considered. Particular emphasis is placed on the importance of metadata capture in both order- and encounter-based workflow. The benefits of using DICOM to provide a standard means of recording and accessing both metadata and image and video data are considered, as is the role of IHE and FHIR.
Feedback mechanism for smart nozzles and nebulizers
Montaser, Akbar [Potomac, MD; Jorabchi, Kaveh [Arlington, VA; Kahen, Kaveh [Kleinburg, CA
2009-01-27
Nozzles and nebulizers able to produce aerosol of optimum and reproducible quality based on feedback information obtained using laser imaging techniques. Two laser-based imaging techniques, based on particle image velocimetry (PIV) and optical patternation, map and contrast size and velocity distributions for indirect and direct pneumatic nebulizations in plasma spectrometry. Two pulses from a thin laser sheet with a known time difference illuminate the droplet flow field. A charge-coupled device (CCD) captures the scattering of laser light from the droplets, providing two instantaneous particle images. Pointwise cross-correlation of corresponding images yields a two-dimensional map of the aerosol velocity field. For droplet size distribution studies, the solution is doped with a fluorescent dye, and both laser-induced fluorescence (LIF) and Mie scattering images are captured simultaneously by two CCDs with the same field of view. The ratio of the LIF/Mie images provides relative droplet size information, which is then scaled by a point calibration method via a phase Doppler particle analyzer.
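The PIV cross-correlation step can be sketched for a single interrogation window. This toy version (assumptions: an exact integer-pixel circular shift between the two exposures, FFT-based correlation) recovers the displacement that, divided by the inter-pulse time, gives the local velocity.

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Estimate the integer-pixel displacement of frame_b relative to
    frame_a by locating the peak of their circular cross-correlation,
    computed via FFTs, as in basic PIV interrogation."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to signed shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return int(dy), int(dx)

# Synthetic particle field shifted by (3, -2) pixels between exposures
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, -2), axis=(0, 1))
print(piv_displacement(a, b))  # (3, -2)
```

Real PIV software refines this with sub-pixel peak fitting and processes many small windows to build the full velocity map.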
Smartphone-based quantitative measurements on holographic sensors.
Khalili Moghaddam, Gita; Lowe, Christopher Robin
2017-01-01
The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real-time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: Recognition of the hologram in the image with a complex background using a template-based approach, conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values, use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case, and finally, application of a machine learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals.
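The device-RGB to CIEXYZ to CIEL*a*b* chain described above can be sketched as follows. This minimal version assumes the standard sRGB matrix and D65 white point; the paper instead fits a per-camera polynomial model for the RGB-to-XYZ step.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer function (gamma)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear sRGB -> CIEXYZ (D65) via the standard matrix
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    # Normalise by the D65 white point and apply the CIE f() nonlinearity
    xyz_n = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz_n > (6/29)**3, np.cbrt(xyz_n), xyz_n / (3*(6/29)**2) + 4/29)
    L = 116*f[1] - 16
    a = 500*(f[0] - f[1])
    b = 200*(f[1] - f[2])
    return L, a, b

print(rgb_to_lab([1.0, 1.0, 1.0]))  # ~ (100, 0, 0): white maps to L*=100, neutral a*/b*
```

Working in L*a*b* makes the subsequent segmentation and colour-to-concentration mapping approximately perceptually uniform and device independent.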
Smartphone-based quantitative measurements on holographic sensors
Khalili Moghaddam, Gita
2017-01-01
The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real-time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: Recognition of the hologram in the image with a complex background using a template-based approach, conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values, use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case, and finally, application of a machine learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals. PMID:29141008
System and method for controlling a combustor assembly
York, William David; Ziminsky, Willy Steve; Johnson, Thomas Edward; Stevenson, Christian Xavier
2013-03-05
A system and method for controlling a combustor assembly are disclosed. The system includes a combustor assembly. The combustor assembly includes a combustor and a fuel nozzle assembly. The combustor includes a casing. The fuel nozzle assembly is positioned at least partially within the casing and includes a fuel nozzle. The fuel nozzle assembly further defines a head end. The system further includes a viewing device configured for capturing an image of at least a portion of the head end, and a processor communicatively coupled to the viewing device, the processor configured to compare the image to a standard image for the head end.
Image enhancement and quality measures for dietary assessment using mobile devices
NASA Astrophysics Data System (ADS)
Xu, Chang; Zhu, Fengqing; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.
2012-03-01
Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. We are developing a system, known as the mobile device food record (mdFR), to automatically identify and quantify foods and beverages consumed based on analyzing meal images captured with a mobile device. The mdFR makes use of a fiducial marker and other contextual information to calibrate the imaging system so that accurate amounts of food can be estimated from the scene. Food identification is a difficult problem since foods can dramatically vary in appearance. Such variations may arise not only from non-rigid deformations and intra-class variability in shape, texture, color and other visual properties, but also from changes in illumination and viewpoint. To address the color consistency problem, this paper describes illumination quality assessment methods implemented on a mobile device and three post color correction methods.
Image Enhancement and Quality Measures for Dietary Assessment Using Mobile Devices
Xu, Chang; Zhu, Fengqing; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.
2016-01-01
Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. We are developing a system, known as the mobile device food record (mdFR), to automatically identify and quantify foods and beverages consumed based on analyzing meal images captured with a mobile device. The mdFR makes use of a fiducial marker and other contextual information to calibrate the imaging system so that accurate amounts of food can be estimated from the scene. Food identification is a difficult problem since foods can dramatically vary in appearance. Such variations may arise not only from non-rigid deformations and intra-class variability in shape, texture, color and other visual properties, but also from changes in illumination and viewpoint. To address the color consistency problem, this paper describes illumination quality assessment methods implemented on a mobile device and three post color correction methods. PMID:28572695
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-27
.... 1337, on behalf of Eastman Kodak Company of Rochester, New York. Letters supplementing the complaint...: Eastman Kodak Company, 343 State Street Rochester, NY 14650. (b) The respondent is the following entity...
NASA Astrophysics Data System (ADS)
Coffey, Stephen; Connell, Joseph
2005-06-01
This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality, making them ideal for media-based applications. Using MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates, thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.
Ng, David C; Tamura, Hideki; Tokuda, Takashi; Yamamoto, Akio; Matsuo, Masamichi; Nunoshita, Masahiro; Ishikawa, Yasuyuki; Shiosaka, Sadao; Ohta, Jun
2006-09-30
The aim of the present study is to demonstrate the application of complementary metal-oxide semiconductor (CMOS) imaging technology for studying the mouse brain. By using a dedicated CMOS image sensor, we have successfully imaged and measured brain serine protease activity in vivo, in real-time, and for an extended period of time. We have developed a biofluorescence imaging device by packaging the CMOS image sensor in a way that enables an on-chip imaging configuration. In this configuration, no optics are required; an excitation filter is applied directly onto the sensor to replace the filter cube block found in conventional fluorescence microscopes. The fully packaged device measures 350 µm thick × 2.7 mm wide, consists of an array of 176 × 144 pixels, and is small enough for measurement inside a single hemisphere of the mouse brain, while still providing sufficient imaging resolution. In the experiment, intraperitoneally injected kainic acid induced upregulation of serine protease activity in the brain. These events were captured in real time by imaging and measuring the fluorescence from a fluorogenic substrate that detected this activity. The entire device, which weighs less than 1% of the body weight of the mouse, holds promise for studying freely moving animals.
Clegg, G; Roebuck, S; Steedman, D
2001-01-01
Objectives—To develop a computer based storage system for clinical images—radiographs, photographs, ECGs, text—for use in teaching, training, reference and research within an accident and emergency (A&E) department. Exploration of methods to access and utilise the data stored in the archive. Methods—Implementation of a digital image archive using flatbed scanner and digital camera as capture devices. A sophisticated coding system based on ICD 10. Storage via an "intelligent" custom interface. Results—A practical solution to the problems of clinical image storage for teaching purposes. Conclusions—We have successfully developed a digital image capture and storage system, which provides an excellent teaching facility for a busy A&E department. We have revolutionised the practice of the "hand-over meeting". PMID:11435357
Design of a smartphone-camera-based fluorescence imaging system for the detection of oral cancer
NASA Astrophysics Data System (ADS)
Uthoff, Ross
Shown is the design of the Smartphone Oral Cancer Detection System (SOCeeDS). The SOCeeDS attaches to a smartphone and utilizes its embedded imaging optics and sensors to capture images of the oral cavity to detect oral cancer. Violet illumination sources excite the oral tissues to induce fluorescence. Images are captured with the smartphone's onboard camera. Areas where the tissues of the oral cavity are darkened signify an absence of fluorescence signal, indicating breakdown in tissue structure brought by precancerous or cancerous conditions. With this data the patient can seek further testing and diagnosis as needed. Proliferation of this device will allow communities with limited access to healthcare professionals a tool to detect cancer in its early stages, increasing the likelihood of cancer reversal.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; on textureless or low-textured regions the depth map contains numerous holes and large ambiguities. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera at different poses.
Implementation of Nearest Neighbor using HSV to Identify Skin Disease
NASA Astrophysics Data System (ADS)
Gerhana, Y. A.; Zulfikar, W. B.; Ramdani, A. H.; Ramdhani, M. A.
2018-01-01
Today, Android is one of the most widely used operating systems in the world. Most Android devices have a camera that can capture an image, and this feature can be used to identify skin disease. Skin disease is a health problem caused by bacteria, fungi, and viruses, and its symptoms are usually visible. In this work, the symptoms are captured as an image, and the HSV values of every pixel are extracted and used to compute Euclidean distances. These distances are compared using the nearest neighbour algorithm to find the closest match between the test image and the training images; the closest match determines the class label, i.e. the type of skin disease. The testing results show that 166 of 200 cases, or about 83%, were classified accurately. Factors that influence the performance of the classification model include the number of training images and the quality of the Android device's camera.
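The HSV nearest-neighbour classification described above can be sketched in a few lines; the pixel values, class labels, and single-sample training set below are hypothetical stand-ins, not data from the study:

```python
import colorsys

def mean_hsv(pixels):
    """Average HSV feature vector for a list of (r, g, b) pixels in [0, 255]."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    n = len(hsv)
    return tuple(sum(c[i] for c in hsv) / n for i in range(3))

def classify(test_pixels, training):
    """Nearest-neighbour label by Euclidean distance in HSV space.

    `training` maps a disease label to that class's pixel samples."""
    test_feature = mean_hsv(test_pixels)
    def dist(label):
        feature = mean_hsv(training[label])
        return sum((a - b) ** 2 for a, b in zip(test_feature, feature)) ** 0.5
    return min(training, key=dist)
```

A real system would extract the pixels from camera images and hold many training samples per class; the 1-NN decision rule itself is unchanged.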
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. One-chip image systems, in which the image sensor has a fully digital interface, bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes images more realistic and colorful. We can say that the color filter makes life more colorful. What is a color filter? A color filter blocks all of the image light source except the color whose wavelength and transmittance match those of the filter itself. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matched pixels in the image sensing array. From the signal caught at each pixel, the image of the environment can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the future of the color filter bright. Although it poses challenges, developing the color filter process is very worthwhile. We provide the best service: shorter cycle time, excellent color quality, and high, stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.
Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing
Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge
2011-01-01
This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor. This, in turn, runs a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739
Remote media vision-based computer input device
NASA Astrophysics Data System (ADS)
Arabnia, Hamid R.; Chen, Ching-Yi
1991-11-01
In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.
Nazarian, Saman; Hansford, Rozann; Roguin, Ariel; Goldsher, Dorith; Zviman, Menekhem M.; Lardo, Albert C.; Caffo, Brian S.; Frick, Kevin D.; Kraut, Michael A.; Kamel, Ihab R.; Calkins, Hugh; Berger, Ronald D.; Bluemke, David A.; Halperin, Henry R.
2015-01-01
Background: Magnetic resonance imaging (MRI) is avoided in most patients with implanted cardiac devices because of safety concerns. Objective: To define the safety of a protocol for MRI at the commonly used magnetic strength of 1.5 T in patients with implanted cardiac devices. Design: Prospective nonrandomized trial (ClinicalTrials.gov registration number: NCT01130896). Setting: One center in the United States (94% of examinations) and one in Israel. Patients: 438 patients with devices (54% with pacemakers and 46% with defibrillators) who underwent 555 MRI studies. Intervention: Pacing mode was changed to asynchronous for pacemaker-dependent patients and to demand for others. Tachyarrhythmia functions were disabled. Blood pressure, electrocardiography, oximetry, and symptoms were monitored by a nurse with experience in cardiac life support and device programming who had immediate backup from an electrophysiologist. Measurements: Activation or inhibition of pacing, symptoms, and device variables. Results: In 3 patients (0.7% [95% CI, 0% to 1.5%]), the device reverted to a transient back-up programming mode without long-term effects. Right ventricular (RV) sensing (median change, 0 mV [interquartile range {IQR}, −0.7 to 0 mV]) and atrial and right and left ventricular lead impedances (median change, −2 Ω [IQR, −13 to 0 Ω], −4 Ω [IQR, −16 to 0 Ω], and −11 Ω [IQR, −40 to 0 Ω], respectively) were reduced immediately after MRI. At long-term follow-up (61% of patients), decreased RV sensing (median, 0 mV [IQR, −1.1 to 0.3 mV]), decreased RV lead impedance (median, −3 Ω [IQR, −29 to 15 Ω]), increased RV capture threshold (median, 0 V [IQR, 0 to 0.2 V]), and decreased battery voltage (median, −0.01 V [IQR, −0.04 to 0 V]) were noted. The observed changes did not require device revision or reprogramming. Limitations: Not all available cardiac devices have been tested. Long-term in-person or telephone follow-up was unavailable in 43 patients (10%), and some data were missing. Those with missing long-term capture threshold data had higher baseline right atrial and right ventricular capture thresholds and were more likely to have undergone thoracic imaging. Defibrillation threshold testing and random assignment to a control group were not performed. Conclusion: With appropriate precautions, MRI can be done safely in patients with selected cardiac devices. Because changes in device variables and programming may occur, electrophysiologic monitoring during MRI is essential. Primary Funding Source: National Institutes of Health. PMID:21969340
High speed color imaging through scattering media with a large field of view
NASA Astrophysics Data System (ADS)
Zhuang, Huichang; He, Hexiang; Xie, Xiangsheng; Zhou, Jianying
2016-09-01
Optical imaging through complex media has many important applications. Although research progress has been made in recovering optical images through various turbid media, widespread application of the technology is hampered by recovery speed, requirements on specific illumination, poor image quality, and a limited field of view. Here we demonstrate that the above-mentioned drawbacks can be essentially overcome. High-speed color imaging through turbid media is achieved by taking into account the media memory effect, the point spread function, the exit pupil of the optical system, and an optimized signal-to-noise ratio. By retrieving selected speckles with an enlarged field of view, a high-quality image is recovered at a speed limited only by the frame rate of the image capture device. An immediate application of the technique is imaging static and dynamic scenes under human skin to recover information with a wearable device.
Multispectral Imaging for Determination of Astaxanthin Concentration in Salmonids
Dissing, Bjørn S.; Nielsen, Michael E.; Ersbøll, Bjarne K.; Frosch, Stina
2011-01-01
Multispectral imaging has been evaluated for characterizing the concentration of a specific carotenoid pigment, astaxanthin. 59 fillets of rainbow trout, Oncorhynchus mykiss, were imaged using a rapid multispectral imaging device for quantitative analysis. The multispectral imaging device captures reflection properties in 19 distinct wavelength bands, prior to determination of the true concentration of astaxanthin. The samples ranged from 0.20 to 4.34 µg astaxanthin per g fish. A PLSR model was calibrated to predict astaxanthin concentration from novel images and showed good results, with an RMSEP of 0.27. For comparison, a similar model was built for normal color images, which yielded an RMSEP of 0.45. The acquisition speed of the multispectral imaging system and the accuracy of the PLSR model obtained suggest this method as a promising technique for rapid in-line estimation of astaxanthin concentration in rainbow trout fillets. PMID:21573000
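RMSEP, the figure of merit used above to compare the multispectral model (0.27) against the colour-image model (0.45), is simply the root-mean-square error of prediction on held-out samples:

```python
def rmsep(predicted, observed):
    """Root-mean-square error of prediction: lower values mean a better model."""
    n = len(predicted)
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5
```

Computed on the same validation set, two models' RMSEP values are directly comparable, which is how the multispectral and RGB calibrations are ranked here.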
Stent deployment protocol for optimized real-time visualization during endovascular neurosurgery.
Silva, Michael A; See, Alfred P; Dasenbrock, Hormuzdiyar H; Ashour, Ramsey; Khandelwal, Priyank; Patel, Nirav J; Frerichs, Kai U; Aziz-Sultan, Mohammad A
2017-05-01
Successful application of endovascular neurosurgery depends on high-quality imaging to define the pathology and the devices as they are being deployed. This is especially challenging in the treatment of complex cases, particularly in proximity to the skull base or in patients who have undergone prior endovascular treatment. The authors sought to optimize real-time image guidance using a simple algorithm that can be applied to any existing fluoroscopy system. Exposure management (exposure level, pulse management) and image post-processing parameters (edge enhancement) were modified from traditional fluoroscopy to improve visualization of device position and material density during deployment. Examples include the deployment of coils in small aneurysms, coils in giant aneurysms, the Pipeline embolization device (PED), the Woven EndoBridge (WEB) device, and carotid artery stents. The authors report on the development of the protocol and their experience using representative cases. The stent deployment protocol is an image capture and post-processing algorithm that can be applied to existing fluoroscopy systems to improve real-time visualization of device deployment without hardware modifications. Improved image guidance facilitates aneurysm coil packing and proper positioning and deployment of carotid artery stents, flow diverters, and the WEB device, especially in the context of complex anatomy and an obscured field of view.
Automated cloud classification using a ground based infra-red camera and texture analysis techniques
NASA Astrophysics Data System (ADS)
Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.
2013-10-01
Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
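The POD and POFA figures quoted above can be computed from a confusion table. Definitions of "false alarm" vary between the false-alarm rate and the false-alarm ratio; the ratio form used here is an assumption, not necessarily the paper's exact definition:

```python
def pod(hits, misses):
    """Probability of Detection: fraction of actual events that were detected."""
    return hits / (hits + misses)

def pofa(false_alarms, hits):
    """Probability of False Alarm (ratio form): fraction of positive
    classifications that did not correspond to a real event."""
    return false_alarms / (false_alarms + hits)
```

With 90 detected events out of 100, and 16 false alarms alongside 84 correct positives, these give the 90% and 16% figures of the form reported above.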
Text recognition and correction for automated data collection by mobile devices
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-03-01
Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
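A knowledge-based correction of the kind described can be sketched as rule-based substitution of common OCR digit confusions followed by validation against an expected field pattern. The substitution table and price regex below are illustrative stand-ins, not the paper's actual KBC rules:

```python
import re

# Common OCR confusions when a field is known to be numeric (e.g. a price
# on a store receipt). Table is illustrative only.
DIGIT_FIXES = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1",
                             "S": "5", "B": "8", "Z": "2"})

# Hypothetical expected shape of a price field: digits, separator, two digits.
PRICE = re.compile(r"\d+[.,]\d{2}")

def correct_price(token):
    """Apply digit substitutions, then accept only tokens that now look like a price."""
    fixed = token.translate(DIGIT_FIXES)
    return fixed if PRICE.fullmatch(fixed) else None
```

For example, an OCR output of "1O.5O" would be repaired to "10.50", while a token that cannot be made to fit the pattern is rejected for manual review.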
Quantifying cell mono-layer cultures by video imaging.
Miller, K S; Hook, L A
1996-04-01
A method is described in which the relative number of adherent cells in multi-well tissue-culture plates is assayed by staining the cells with Giemsa and capturing the image of the stained cells with a video camera and charge-coupled device. The resultant image is quantified using the associated video imaging software. The method is shown to be sensitive and reproducible and should be useful for studies where quantifying relative cell numbers and/or proliferation in vitro is required.
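The quantification step amounts to thresholding the captured image and counting stained pixels. A minimal sketch, where the image is a nested list of 0-255 greyscale values and the threshold is an arbitrary assumption rather than the published protocol:

```python
def stained_fraction(image, threshold=128):
    """Fraction of pixels darker than `threshold` in a greyscale image
    (nested lists of 0-255 values); Giemsa-stained cells appear as dark pixels."""
    total = stained = 0
    for row in image:
        for value in row:
            total += 1
            if value < threshold:
                stained += 1
    return stained / total
```

Comparing this fraction across wells gives the relative cell numbers the method reports; absolute counts would need calibration against a known standard.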
Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.
Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin
2018-04-02
Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile and real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capturing. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found through maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that complementary information captured by multimodal sensors can be utilized to improve performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array.
Phillips, Zachary F; D'Ambrosio, Michael V; Tian, Lei; Rulison, Jared J; Patel, Hurshal S; Sadras, Nitin; Gande, Aditya V; Switz, Neil A; Fletcher, Daniel A; Waller, Laura
2015-01-01
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope--a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.
Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array
Phillips, Zachary F.; D'Ambrosio, Michael V.; Tian, Lei; Rulison, Jared J.; Patel, Hurshal S.; Sadras, Nitin; Gande, Aditya V.; Switz, Neil A.; Fletcher, Daniel A.; Waller, Laura
2015-01-01
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope—a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities. PMID:25969980
Live HDR video streaming on commodity hardware
NASA Astrophysics Data System (ADS)
McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan
2015-09-01
High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
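The pipeline above confines tone mapping to the display. One common display-side choice, shown here purely as an illustrative example rather than the system's actual operator, is the global Reinhard curve L/(1+L):

```python
def reinhard_tonemap(luminance, white=None):
    """Global Reinhard operator, optionally with a burn-out level `white`.

    `luminance` is a flat list of scene-referred HDR values; the operator
    compresses them into [0, 1) for a display-referred image."""
    def tm(L):
        if white is None:
            return L / (1.0 + L)
        # Variant that lets values at or above `white` burn out to 1.
        return L * (1.0 + L / (white * white)) / (1.0 + L)
    return [tm(L) for L in luminance]
```

Because the operator is monotonic and asymptotic to 1, arbitrarily bright captured values (the sunlit pitch) remain displayable alongside deep shadow detail, which is the point of deferring tone mapping to the display.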
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-23
... 14157 (2009). The complainant named Eastman Kodak Company of Rochester, New York (``Kodak'') as the respondent. On December 16, 2009, LG and Kodak jointly moved to terminate the investigation based on a...
Integral image rendering procedure for aberration correction and size measurement.
Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion
2014-05-20
The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.
Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A
2017-08-01
For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges; followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.
NASA Astrophysics Data System (ADS)
Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart
2016-04-01
Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement on a microscopic level has not been possible so far. This would allow better individualized treatment planning and improved device design. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a large number of different anatomic shapes and the corresponding effects of different devices, would require a fast and reliable method offering high-throughput assessment at low cost. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. Online flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences.
Averaging across a sequence of single, double or triple shots of flashed images enables reconstruction of the real-time corpuscular flow through the vessel system before and after device placement. This approach could enable 3D insight into microscopic flow within blood vessels and aneurysms at submillimeter resolution. We present an approach that allows real-time assessment of 3D particle flow by high-speed light field image analysis, including a solution that addresses the high computational load of the image processing. The imaging set-up accomplishes fast and reliable PIV analysis in transparent 3D models of brain aneurysms at low cost. High-throughput microscopic flow assessment of differently shaped brain aneurysms may therefore become possible, as required for patient-specific device designs.
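At the core of the PIV analysis described above is a cross-correlation step: finding the shift that best aligns a small interrogation window between two successive particle images. The following is a minimal pure-Python sketch of that step, not the authors' implementation; window sizes and the single-particle frames are illustrative.

```python
def cross_correlate(win_a, win_b, max_shift):
    """Return the (dy, dx) shift maximising the correlation of win_a against win_b."""
    h, w = len(win_a), len(win_a[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        score += win_a[y][x] * win_b[y2][x2]
            if best is None or score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# A single bright "particle" at (2, 2) moves to (3, 4) in the next frame.
frame1 = [[0.0] * 8 for _ in range(8)]
frame2 = [[0.0] * 8 for _ in range(8)]
frame1[2][2] = 1.0
frame2[3][4] = 1.0
print(cross_correlate(frame1, frame2, 3))  # → (1, 2)
```

In a real PIV pipeline this search runs per interrogation window over the whole frame pair (usually via FFT-based correlation for speed), yielding a dense velocity field.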
Classification of human carcinoma cells using multispectral imagery
NASA Astrophysics Data System (ADS)
Çinar, Umut; Y. Çetin, Yasemin; Çetin-Atalay, Rengul; Çetin, Enis
2016-03-01
In this paper, we present a technique for automatically classifying human carcinoma cell images using textural features. An image dataset containing microscopy biopsy images from different patients for 14 distinct cancer cell line types is studied. The images are captured using an RGB camera attached to an inverted microscopy device. Texture-based Gabor features are extracted from the multispectral input images. An SVM classifier is used to generate a descriptive model for the purpose of cell line classification. The experimental results show satisfactory performance, and the proposed method is versatile across various microscopy magnification options.
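The Gabor feature extraction mentioned above can be sketched as follows: build a Gabor kernel (a Gaussian envelope modulated by a cosine carrier) and summarize the filter response over an image as one texture feature. This is an illustrative pure-Python sketch with assumed parameter values, not the paper's pipeline; in practice the responses from a bank of such kernels would feed the SVM.

```python
import math

def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=1.0):
    """Real Gabor kernel: Gaussian envelope times cosine carrier."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / lambd + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def mean_abs_response(image, kernel):
    """Valid-mode convolution, then average absolute response (one texture feature)."""
    kh, kw = len(kernel), len(kernel[0])
    total, count = 0.0, 0
    for y in range(len(image) - kh + 1):
        for x in range(len(image[0]) - kw + 1):
            acc = sum(kernel[j][i] * image[y + j][x + i]
                      for j in range(kh) for i in range(kw))
            total += abs(acc)
            count += 1
    return total / count

k = gabor_kernel(size=7, sigma=2.0, theta=0.0, lambd=4.0)
flat = [[1.0] * 7 for _ in range(7)]
feature = mean_abs_response(flat, k)
```

A full feature vector would repeat this for several orientations and wavelengths, optionally per spectral band of the multispectral input.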
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stefanczyk, Ludomir; Elgalal, Marcin T., E-mail: telgalal@yahoo.co.uk; Szubert, Wojciech
2013-10-15
A case of femoral artery obstruction following application of a StarClose type arterial puncture closing device (APCD) is presented. Ultrasonographic and angiographic imaging of this complication was obtained. The posterior wall of the vessel was accidentally caught in the anchoring element of the nitinol clip. This complication was successfully resolved by endovascular treatment and the implantation of a stent.
PlenoPatch: Patch-Based Plenoptic Image Manipulation.
Zhang, Fang-Lue; Wang, Jue; Shechtman, Eli; Zhou, Zi-Ye; Shi, Jia-Xin; Hu, Shi-Min
2017-05-01
Patch-based image synthesis methods have been successfully applied for various editing tasks on still images, videos and stereo pairs. In this work we extend patch-based synthesis to plenoptic images captured by consumer-level lenslet-based devices for interactive, efficient light field editing. In our method the light field is represented as a set of images captured from different viewpoints. We decompose the central view into different depth layers, and present it to the user for specifying the editing goals. Given an editing task, our method performs patch-based image synthesis on all affected layers of the central view, and then propagates the edits to all other views. Interaction is done through a conventional 2D image editing user interface that is familiar to novice users. Our method correctly handles object boundary occlusion with semi-transparency, and thus can generate more realistic results than previous methods. We demonstrate compelling results on a wide range of applications such as hole-filling, object reshuffling and resizing, changing object depth, light field upscaling and parallax magnification.
Expansion of the visual angle of a car rear-view image via an image mosaic algorithm
NASA Astrophysics Data System (ADS)
Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng
2015-05-01
The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, studies by both domestic and foreign researchers have been based on a single image capture device used while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car in three light conditions (sunny outdoors, cloudy outdoors, and an underground garage) to capture rear-view heterologous images of the car. These rear-view heterologous images were then rapidly registered through the scale invariant feature transform (SIFT) algorithm. Combined with the random sample consensus (RANSAC) algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms can mosaic rear-view images well in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. Compared to the mean value method (MVM) and segmental fusion method (SFM), the rear-view image mosaic algorithm presented had the best information preservation, the shortest computation time and the most complete preservation of image detail features from the source images, and it also performed better in real time.
The method introduced in this paper provides the basis for research on expanding the visual angle of a car rear-view image in all-weather conditions.
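The "linear weighted gradated in-and-out" fusion step described above can be sketched in miniature: across the overlap region, the weight of one registered image ramps linearly down while the other ramps up. A one-dimensional pure-Python sketch of the assumed blend (not the authors' code):

```python
def blend_overlap(row_a, row_b):
    """Linearly fade from row_a to row_b across an overlap region (one image row)."""
    n = len(row_a)
    if n == 1:
        return [0.5 * (row_a[0] + row_b[0])]
    out = []
    for i in range(n):
        w = i / (n - 1)          # 0 at A-side edge, 1 at B-side edge
        out.append((1 - w) * row_a[i] + w * row_b[i])
    return out

print(blend_overlap([10, 10, 10, 10, 10], [20, 20, 20, 20, 20]))
# → [10.0, 12.5, 15.0, 17.5, 20.0]
```

The linear ramp guarantees each image contributes fully at its own edge of the overlap, which is what produces a seam-free transition after SIFT/RANSAC registration.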
Fast words boundaries localization in text fields for low quality document images
NASA Astrophysics Data System (ADS)
Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry
2018-04-01
The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation and recognition. While capturing an image with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions or glares may occur. Further document processing is complicated by the document's specifics: layout elements, complex background, static text, document security elements, and a variety of text fonts. Moreover, the problem of word boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities under the specified restrictions. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited to images of high quality. Methods for text in the wild have an excessively high computational complexity and thus are hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than the task of finding text in natural images. It uses local features, a sliding window and a lightweight neural network in order to achieve an optimal speed-precision ratio. The runtime of the algorithm is 12 ms per field on an ARM processor of a mobile device. The error rate for boundaries localization on a test sample of 8000 fields is 0.3
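A classical baseline for the word-boundary problem above, on clean binarized text lines, is a vertical ink-projection profile with gap thresholding; the paper's sliding-window neural scorer replaces exactly this kind of heuristic. The sketch below is that simpler stand-in, with an assumed `min_gap` parameter, not the paper's method.

```python
def word_spans(binary_rows, min_gap=2):
    """Return (start, end) column spans of words; pixels are 1 = ink, 0 = background."""
    width = len(binary_rows[0])
    # Column-wise ink count: the vertical projection profile.
    profile = [sum(row[x] for row in binary_rows) for x in range(width)]
    spans, start, gap = [], None, 0
    for x, ink in enumerate(profile):
        if ink:
            if start is None:
                start = x
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:          # a gap wide enough to split words
                spans.append((start, x - gap))
                start, gap = None, 0
    if start is not None:
        spans.append((start, width - 1 - gap))
    return spans

# A tiny 2-row "text line": one word in columns 0-1, another in columns 5-7.
line = [
    [1, 1, 0, 0, 0, 1, 1, 1, 0],
    [1, 1, 0, 0, 0, 1, 0, 1, 0],
]
print(word_spans(line))  # → [(0, 1), (5, 7)]
```

Under glare, noise and complex backgrounds this profile heuristic breaks down, which is why the paper scores windows with a lightweight network instead.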
NASA Astrophysics Data System (ADS)
Everard, Colm D.; Kim, Moon S.; Lee, Hoyoung
2014-05-01
The production of contaminant-free fresh fruit and vegetables is needed to reduce foodborne illnesses and related costs. Leafy greens grown in the field can be susceptible to fecal matter contamination from uncontrolled livestock and wild animals entering the field. Pathogenic bacteria can be transferred via fecal matter, and several outbreaks of E. coli O157:H7 have been associated with the consumption of leafy greens. This study examines the use of hyperspectral fluorescence imaging coupled with multivariate image analysis to detect fecal contamination on spinach leaves (Spinacia oleracea). Hyperspectral fluorescence images from 464 to 800 nm were captured; ultraviolet excitation was supplied by two LED-based line light sources at 370 nm. Key wavelengths and algorithms useful for a contaminant screening optical imaging device were identified and developed, respectively. A non-invasive screening device has the potential to reduce the harmful consequences of foodborne illnesses.
Thege, Fredrik I; Lannin, Timothy B; Saha, Trisha N; Tsai, Shannon; Kochman, Michael L; Hollingsworth, Michael A; Rhim, Andrew D; Kirby, Brian J
2014-05-21
We have developed and optimized a microfluidic device platform for the capture and analysis of circulating pancreatic cells (CPCs) and pancreatic circulating tumor cells (CTCs). Our platform uses parallel anti-EpCAM and cancer-specific mucin 1 (MUC1) immunocapture in a silicon microdevice. Using a combination of anti-EpCAM and anti-MUC1 capture in a single device, we are able to achieve efficient capture while extending immunocapture beyond single-marker recognition. We have also detected a known oncogenic KRAS mutation in cells spiked into whole blood using immunocapture, RNA extraction, RT-PCR and Sanger sequencing. To allow for downstream single-cell genetic analysis, intact nuclei were released from captured cells by using targeted membrane lysis. We have developed a staining protocol for clinical samples, including standard CTC markers (DAPI, cytokeratin (CK) and CD45) and a novel marker of carcinogenesis in CPCs, mucin 4 (MUC4). We have also demonstrated a semi-automated approach to image analysis and CPC identification, suitable for clinical hypothesis generation. Initial results from immunocapture of a clinical pancreatic cancer patient sample show that parallel capture may better capture the heterogeneity of the CPC population. With this platform, we aim to develop a diagnostic biomarker for early pancreatic carcinogenesis and patient risk stratification.
Large beam deflection using cascaded prism array
NASA Astrophysics Data System (ADS)
Wang, Wei-Chih; Tsui, Chi-Leung
2012-04-01
Endoscopes have been utilized in the medical field to observe the internals of the human body and to assist in the diagnosis of diseases such as breathing disorders, internal bleeding, stomach ulcers, and urinary tract infections. Endoscopy is also utilized in biopsy procedures for the diagnosis of cancer. Conventional endoscopes suffer from a compromise between overall size and image quality, due to the sensor size required for acceptable image quality. To overcome the size constraint while maintaining captured image quality, we propose an electro-optic beam steering device based on a thermoplastic polymer, which has a small footprint (~5 mm x 5 mm) and can be easily fabricated using conventional hot-embossing and micro-fabrication techniques. The proposed device can be implemented as an imaging device inside endoscopes to allow a reduction in the overall system size. In our previous work, a single-prism design was used to amplify the deflection generated by the index change of the thermoplastic polymer when a voltage is applied; it yields a deflection of 5.6°. To further amplify the deflection, a new design utilizing a cascading three-prism array has been implemented, and a deflection angle of 29.2° is observed. The new design amplifies the beam deflection while keeping the advantage of the simple fabrication made possible by the thermoplastic polymer. Also, a photoresist-based collimator lens array has been added to provide collimation of the beam for high-quality imaging. The collimator is able to collimate the exiting beam at 4 μm diameter for up to 25 mm, which potentially allows high-resolution image capture.
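The amplification from cascading prisms can be illustrated with a back-of-envelope model: in the thin-prism approximation, a prism of apex angle A deflects a beam by roughly (n - 1)·A, so an index change dn shifts the deflection by about dn·A per prism, and N prisms in cascade multiply that shift. This is a hedged first-order sketch with illustrative values, not the device's actual (larger-angle) geometry, where the thin-prism approximation underestimates the effect.

```python
def thin_prism_deflection_deg(n, apex_deg):
    """Thin-prism approximation: deflection ≈ (n - 1) * apex angle."""
    return (n - 1.0) * apex_deg

def cascaded_deflection_shift_deg(dn, apex_deg, num_prisms):
    """Approximate change in total deflection when the refractive index changes by dn."""
    return num_prisms * dn * apex_deg

one = cascaded_deflection_shift_deg(dn=0.05, apex_deg=30.0, num_prisms=1)
three = cascaded_deflection_shift_deg(dn=0.05, apex_deg=30.0, num_prisms=3)
print(one, three)  # three cascaded prisms triple the deflection shift
```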
Photonic crystal resonances for sensing and imaging
NASA Astrophysics Data System (ADS)
Pitruzzello, Giampaolo; Krauss, Thomas F.
2018-07-01
This review provides an insight into the recent developments of photonic crystal (PhC)-based devices for sensing and imaging, with a particular emphasis on biosensors. We focus on two main classes of devices, namely sensors based on PhC cavities and those based on guided mode resonances (GMRs). This distinction captures the richness of possibilities that PhCs offer in this space. We present recent examples highlighting applications where PhCs can offer new capabilities, open up new applications or enable improved performance, with a clear emphasis on the different types of structures and photonic functions. We provide a critical comparison between cavity-based devices and GMR devices by highlighting strengths and weaknesses. We also compare PhC technologies and their sensing mechanism to surface plasmon resonance, microring resonators and integrated interferometric sensors.
Smartphones as image processing systems for prosthetic vision.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J
2013-01-01
The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and the multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
Smartphone-Based Endoscope System for Advanced Point-of-Care Diagnostics: Feasibility Study
Bae, Jung Kweon; Vavilin, Andrey; You, Joon S; Kim, Hyeongeun; Ryu, Seon Young; Jang, Jeong Hun
2017-01-01
Background: Endoscopic techniques are often applied for the diagnosis of diseases affecting internal organs and for image guidance of surgical procedures. Although the endoscope has become an indispensable tool in the clinic, its utility has been limited to medical offices or operating rooms because of the large size of its ancillary devices. In addition, the basic design and imaging capability of the system have remained relatively unchanged for decades. Objective: The objective of this study was to develop a smartphone-based endoscope system capable of advanced endoscopic functionalities in a compact size and at an affordable cost, and to demonstrate its point-of-care feasibility through human subject imaging. Methods: We designed and developed a smartphone-based endoscope system incorporating a portable light source, relay lens, custom adapter, and home-built Android app. We attached three different types of existing rigid or flexible endoscopic probes to our system and captured the endoscopic images using the app. Both the smartphone-based endoscope system and a commercial clinical endoscope system were used to compare imaging quality and performance. By connecting a head-mounted display (HMD) wirelessly, the smartphone-based endoscope system could superimpose an endoscopic image onto the real-world view. Results: A total of 15 volunteers accepted into our study were imaged using our smartphone-based endoscope system as well as the commercial clinical endoscope system. The imaging performance of our device was of acceptable quality compared with that of the conventional endoscope system in the clinical setting. In addition, images viewed on the HMD used with the smartphone-based endoscope system improved eye-hand coordination between the manipulating site and the smartphone screen, which in turn reduced spatial disorientation.
Conclusions: The performance of our endoscope system was evaluated against a commercial system in routine otolaryngology examinations. We also demonstrated and evaluated the feasibility of conducting endoscopic procedures through a custom HMD. PMID:28751302
NASA Astrophysics Data System (ADS)
Ying, Changsheng; Zhao, Peng; Li, Ye
2018-01-01
The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by an ICCD suffer from low spatial resolution and contrast, and target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to obtain photon images, so that the overlapping and overstacking effects of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under regularity constraints. The accurate position information of the incident photons in the reconstructed SR image is obtained by weighted centroid calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
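The weighted centroid calculation mentioned above localizes each photon event from its dispersed spot on the intensifier with sub-pixel accuracy. A minimal sketch with an illustrative spot (not the authors' data or screening logic):

```python
def weighted_centroid(spot):
    """Intensity-weighted centroid (y, x) of a small photon-spot patch."""
    total = cy = cx = 0.0
    for y, row in enumerate(spot):
        for x, v in enumerate(row):
            total += v
            cy += v * y
            cx += v * x
    return cy / total, cx / total

# A dispersed photon spot: brightest at (1, 2), symmetric neighbours.
spot = [
    [0, 1, 2, 1, 0],
    [1, 2, 8, 2, 1],
    [0, 1, 2, 1, 0],
]
print(weighted_centroid(spot))  # → (1.0, 2.0)
```

Because the centroid is computed over intensities rather than a single brightest pixel, the recovered photon position is finer than the pixel grid, which is what enables the super-resolution reconstruction.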
Assessment of image quality in x-ray radiography imaging using a small plasma focus device
NASA Astrophysics Data System (ADS)
Kanani, A.; Shirani, B.; Jabbari, I.; Mokhtari, J.
2014-08-01
This paper offers a comprehensive investigation of image quality parameters for a small plasma focus used as a pulsed hard x-ray source for radiography applications. A set of images was captured from metal objects and electronic circuits using a low-energy plasma focus at different capacitor bank voltages and different argon gas pressures. The x-ray source focal spot of this device was measured to be about 0.6 mm using the penumbra imaging method. The image quality was studied through several parameters such as image contrast, line spread function (LSF) and modulation transfer function (MTF). Results showed that the contrast changes with variations in gas pressure. The best contrast was obtained at a pressure of 0.5 mbar and 3.75 kJ stored energy. The x-ray dose measurements from the device showed that about 0.6 mGy is sufficient to obtain acceptable images on the film. The measurements of the LSF and MTF parameters were carried out by means of a thin stainless steel wire 0.8 mm in diameter, and the cut-off frequency was found to be about 1.5 cycles/mm.
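The MTF is conventionally obtained as the normalized magnitude of the Fourier transform of the measured LSF. A small pure-Python sketch of that relation (the LSF samples and spacing are illustrative, not the paper's measurements):

```python
import cmath

def mtf_from_lsf(lsf):
    """MTF = |DFT(LSF)| normalized so that MTF at zero frequency is 1."""
    n = len(lsf)
    mags = []
    for k in range(n):
        acc = sum(lsf[x] * cmath.exp(-2j * cmath.pi * k * x / n) for x in range(n))
        mags.append(abs(acc))
    return [m / mags[0] for m in mags]

lsf = [0, 1, 4, 8, 4, 1, 0, 0]   # illustrative line spread samples
mtf = mtf_from_lsf(lsf)
print(mtf[0])  # → 1.0
```

The cut-off frequency reported in the paper corresponds to the spatial frequency at which this normalized curve drops to the chosen resolution criterion.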
Comparison of three-dimensional surface-imaging systems.
Tzou, Chieh-Han John; Artner, Nicole M; Pona, Igor; Hold, Alina; Placheta, Eva; Kropatsch, Walter G; Frey, Manfred
2014-04-01
In recent decades, three-dimensional (3D) surface-imaging technologies have gained popularity worldwide, but because most published articles that mention them are technical, clinicians often have difficulties gaining a proper understanding of them. This article aims to provide the reader with relevant information on 3D surface-imaging systems. In it, we compare the most recent technologies to reveal their differences. We have accessed five international companies with the latest technologies in 3D surface-imaging systems: 3dMD, Axisthree, Canfield, Crisalix and Dimensional Imaging (Di3D; in alphabetical order). We evaluated their technical equipment, independent validation studies and corporate backgrounds. The fastest capturing devices are the 3dMD and Di3D systems, capable of capturing images within 1.5 and 1 ms, respectively. All companies provide software for tissue modifications. Additionally, 3dMD, Canfield and Di3D can fuse computed tomography (CT)/cone-beam computed tomography (CBCT) images into their 3D surface-imaging data. 3dMD and Di3D provide 4D capture systems, which allow capturing the movement of a 3D surface over time. Crisalix greatly differs from the other four systems as it is purely web based and realised via cloud computing. 3D surface-imaging systems are becoming important in today's plastic surgical set-ups, taking surgeons to a new level of communication with patients, surgical planning and outcome evaluation. Technologies used in 3D surface-imaging systems and their intended fields of application vary among the companies evaluated. Potential users should define their requirements and the intended role of 3D surface-imaging systems in their clinical and research environments before making the final decision to purchase. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong
2012-01-01
Train wheel sets must be periodically inspected for possible or actual premature failures, and it is important to record the wear history over the full service life of the wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on opto-electronic measuring techniques is presented in this paper. A charge-coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey-level values. The 'mapping function method' is used to transform image pixel coordinates to space coordinates. The images of wheel sets were captured as the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.
A four-lens based plenoptic camera for depth measurements
NASA Astrophysics Data System (ADS)
Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe
2015-04-01
In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, to measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for working with fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device characterized by four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device becomes a plenoptic or light field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
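The four sub-images behave like a small multi-view rig, so the underlying geometry can be illustrated with the standard depth-from-disparity relation for a rectified lens pair, Z = f·B/d. This is a hedged stand-in for intuition with illustrative numbers, not the paper's variable-homography formulation.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Rectified two-view depth: Z = f * B / d (same units as baseline)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

z = depth_from_disparity(focal_px=1000.0, baseline_mm=10.0, disparity_px=25.0)
print(z)  # → 400.0 (mm)
```

As expected, a nearer object produces a larger disparity between sub-images and therefore a smaller computed depth.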
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye. This implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile device applications. In this paper, we selected the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. The descriptor selected for a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of the images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
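The RANSAC match-rejection step above can be sketched for the simplest motion model, a pure translation: repeatedly fit the model from one randomly chosen match and keep the hypothesis that gathers the most inliers. This pure-Python sketch (with synthetic matches and a fixed seed) is a stand-in for the paper's improved RANSAC, which fits a richer geometric model.

```python
import random

def ransac_translation(matches, iters=100, tol=2.0, seed=0):
    """Fit a (dx, dy) translation from point matches, rejecting outliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (ax, ay), (bx, by) = rng.choice(matches)   # minimal sample: one match
        dx, dy = bx - ax, by - ay
        inliers = sum(1 for (px, py), (qx, qy) in matches
                      if abs(qx - px - dx) <= tol and abs(qy - py - dy) <= tol)
        if inliers > best_inliers:
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

# 30 correct matches shifted by (5, 3), plus two gross outliers.
good = [((x, y), (x + 5, y + 3)) for x in range(10) for y in range(3)]
bad = [((0, 0), (40, 17)), ((2, 1), (-9, 30))]
model, inliers = ransac_translation(good + bad)
print(model, inliers)  # → (5, 3) 30
```

Matches inconsistent with the winning model are dropped before the SWT-based fusion, which prevents ghosting from misregistered exposures.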
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool positioning techniques, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and image processing technique are described for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge-coupled device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show the better capability of AIS over PSO.
3D image processing architecture for camera phones
NASA Astrophysics Data System (ADS)
Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje
2011-03-01
Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.
Real-time image sequence segmentation using curve evolution
NASA Astrophysics Data System (ADS)
Zhang, Jun; Liu, Weisong
2001-04-01
In this paper, we describe a novel approach to image sequence segmentation and its real-time implementation. This approach uses the 3D structure tensor to produce a more robust frame difference signal and uses curve evolution to extract whole objects. Our algorithm is implemented on a standard PC running the Windows operating system with video capture from a USB camera that is a standard Windows video capture device. Using the Windows standard video I/O functionalities, our segmentation software is highly portable and easy to maintain and upgrade. In its current implementation on a Pentium 400, the system can perform segmentation at 5 frames/sec with a frame resolution of 160 by 120.
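The robust frame-difference signal described above can be approximated in miniature: instead of thresholding the raw per-pixel difference, average the squared temporal gradient over a small spatial window (the temporal-temporal component of the 3D structure tensor). This is a simplified illustrative stand-in for the paper's tensor computation, with assumed window radius and threshold.

```python
def motion_mask(prev, curr, radius=1, thresh=0.5):
    """Binary motion mask from spatially averaged squared frame differences."""
    h, w = len(prev), len(prev[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            # Average (curr - prev)^2 over a (2*radius+1)^2 neighbourhood.
            for j in range(max(0, y - radius), min(h, y + radius + 1)):
                for i in range(max(0, x - radius), min(w, x + radius + 1)):
                    d = curr[j][i] - prev[j][i]
                    acc += d * d
                    cnt += 1
            mask[y][x] = 1 if acc / cnt > thresh else 0
    return mask

prev = [[0.0] * 6 for _ in range(6)]
curr = [[0.0] * 6 for _ in range(6)]
for y, x in ((2, 2), (2, 3), (3, 2), (3, 3)):
    curr[y][x] = 9.0                    # a small block moves into view
mask = motion_mask(prev, curr)
print(sum(sum(r) for r in mask))        # pixels flagged around the block
```

The spatial averaging suppresses isolated noise spikes; in the paper, the resulting robust difference signal then seeds a curve-evolution step that recovers whole object contours rather than scattered pixels.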
Kinect-based sign language recognition of static and dynamic hand movements
NASA Astrophysics Data System (ADS)
Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.
2017-02-01
A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and the Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers also discuss factors they encountered that caused misclassification of some signs.
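Normalized correlation matching, as used above to compare captured hand images against database templates, can be sketched as follows: slide the template over the image and score each patch with the zero-mean normalized cross-correlation (NCC), which is 1.0 for a perfect match and insensitive to brightness offsets. A pure-Python illustration with tiny assumed arrays, not the study's MATLAB implementation:

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between equal-sized patches."""
    pm = sum(map(sum, patch)) / (len(patch) * len(patch[0]))
    tm = sum(map(sum, template)) / (len(template) * len(template[0]))
    num = sp = st = 0.0
    for prow, trow in zip(patch, template):
        for p, t in zip(prow, trow):
            num += (p - pm) * (t - tm)
            sp += (p - pm) ** 2
            st += (t - tm) ** 2
    denom = math.sqrt(sp * st)
    return num / denom if denom > 0 else 0.0   # flat patches score 0

def best_match(image, template):
    """Slide the template over the image; return the best (y, x) and its score."""
    th, tw = len(template), len(template[0])
    best, loc = -2.0, None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            score = ncc(patch, template)
            if score > best:
                best, loc = score, (y, x)
    return loc, best

image = [
    [0, 0, 0, 0, 0],
    [0, 5, 9, 0, 0],
    [0, 9, 5, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[5, 9], [9, 5]]
loc, score = best_match(image, template)
print(loc, score)  # → (1, 1) 1.0
```

For recognition, the captured gesture image would be scored against every stored template and labeled with the sign whose template yields the highest correlation.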
[Medical image compression: a review].
Noreña, Tatiana; Romero, Eduardo
2013-01-01
Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support for medical procedures in diagnosis and follow-up. However, the amount of information generated by image capture devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.
NASA Astrophysics Data System (ADS)
Walker, Justin A.
The introduction of complex treatment modalities such as IMRT and VMAT has led to the development of many devices for plan verification. One such innovation in this field is the repurposing of the portal imager, using it not only for tumor localization but also for recording dose distributions. Several advantages make portal imagers attractive options for this purpose. Very high spatial resolution allows for better verification of small-field plans than may be possible with commercially available devices. Because the portal imager is attached to the gantry, setup is simpler than with any other available method, requiring no additional accessories, and can often be accomplished from outside the treatment room. Dose images captured by the portal imager are in digital format, making permanent records that can be analyzed immediately. Portal imaging suffers from a few limitations, however, that must be overcome. Captured images contain dose information, and a calibration must be maintained for image-to-dose conversion. Dose images can only be taken perpendicular to the treatment beam, allowing only planar dose comparison. Planar dose files are themselves difficult to obtain for VMAT treatments, and an in-house script had to be developed to create such a file before analysis could be performed. Using the methods described in this study, excellent agreement between the generated planar dose files and the dose images taken was found. The average agreement for the IMRT fields analyzed was greater than 97% for non-normalized images at 3 mm and 3%. Comparable agreement was found for VMAT plans as well, with the average agreement being greater than 98%.
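Planar dose agreement at a distance/dose criterion such as 3 mm / 3% is conventionally quantified with a gamma analysis. As a rough illustration of the idea only (not the analysis used in the study), a pixel can be scored as passing when its dose matches the plan within the dose tolerance, either in place or at some planned pixel within the distance tolerance:

```python
import numpy as np

def pass_rate(measured, planned, dose_tol=0.03, dist_px=3):
    # simplified gamma-style test: a pixel passes if its dose difference is
    # within dose_tol of the max planned dose, either at the pixel itself or
    # versus some planned pixel within dist_px pixels
    tol = dose_tol * planned.max()
    ok = np.abs(measured - planned) <= tol
    for dy in range(-dist_px, dist_px + 1):
        for dx in range(-dist_px, dist_px + 1):
            if dy * dy + dx * dx <= dist_px * dist_px:
                shifted = np.roll(np.roll(planned, dy, 0), dx, 1)
                ok |= np.abs(measured - shifted) <= tol
    return float(ok.mean())
```

A true gamma index interpolates dose and distance into a single metric; this shifted-comparison shortcut only approximates it (and `np.roll` wraps at image borders), but it conveys why a small spatial misalignment still counts as agreement.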
Video System for Viewing From a Remote or Windowless Cockpit
NASA Technical Reports Server (NTRS)
Banerjee, Amarnath
2009-01-01
A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
Electronic Photography at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Holm, Jack; Judge, Nancianne
1995-01-01
An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.
Introduction to Color Imaging Science
NASA Astrophysics Data System (ADS)
Lee, Hsien-Che
2005-04-01
Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.
Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.
Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael
2016-11-01
To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.
Performance characterization of structured light-based fingerprint scanner
NASA Astrophysics Data System (ADS)
Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.
2013-05-01
Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.
Finger vein verification system based on sparse representation.
Xin, Yang; Liu, Zhi; Zhang, Haixia; Zhang, Hong
2012-09-01
Finger vein verification is a promising biometric pattern for personal identification in terms of security and convenience. The recognition performance of this technology heavily relies on the quality of finger vein images and on the recognition algorithm. To achieve efficient recognition performance, a special finger vein imaging device is developed, and a finger vein recognition method based on sparse representation is proposed. The motivation for the proposed method is that finger vein images exhibit a sparse property. In the proposed system, the regions of interest (ROIs) in the finger vein images are segmented and enhanced. Sparse representation and sparsity preserving projection on ROIs are performed to obtain the features. Finally, the features are measured for recognition. An equal error rate of 0.017% was achieved based on the finger vein image database, which contains images that were captured by using the near-IR imaging device that was developed in this study. The experimental results demonstrate that the proposed method is faster and more robust than previous methods.
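Sparse-representation classification of this kind typically encodes a probe feature vector over a dictionary of training features and assigns the class whose atoms best reconstruct it. The sketch below uses orthogonal matching pursuit (OMP) for the sparse-coding step; it is a generic illustration of the principle, not the solver or features used in the paper:

```python
import numpy as np

def omp(D, y, n_nonzero):
    # Orthogonal Matching Pursuit: greedily build a sparse code of y over
    # dictionary D (columns = atoms, assumed roughly unit-norm)
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx in support:
            break
        support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(D, labels, y, n_nonzero=1):
    # keep only each class's coefficients in turn; pick the smallest residual
    x = omp(D, y, n_nonzero)
    labels = np.asarray(labels)
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in set(labels.tolist())}
    return min(residuals, key=residuals.get)
```

The class-wise residual test is what makes the representation discriminative: a genuine finger vein is reconstructed almost entirely from its own class's atoms, so its residual there is small.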
Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy
2018-04-01
Optical tomography (OPT) is a method of capturing a cross-sectional image based on data obtained by sensors distributed around the periphery of the analyzed system. The system is based on measuring the final light attenuation, or absorption of radiation, after crossing the measured objects. The number of sensor views affects the results of image reconstruction: a higher number of sensor views per projection gives higher image quality. This research presents an application of a charge-coupled device linear sensor and a laser diode in an OPT system. Experiments in detecting solid and transparent objects in crystal-clear water were conducted. Two numbers of sensor views, 160 and 320, were evaluated for reconstructing the images. The image reconstruction algorithm used was a filtered linear back-projection algorithm. Analysis comparing the simulated and experimental image results shows that 320 views give a smaller area error than 160 views. This suggests that a higher number of views results in higher-resolution image reconstruction.
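Linear back projection reconstructs the cross-section by smearing each 1-D sensor projection back across the image along its view direction and averaging; filtering is then applied to the resulting image. A minimal NumPy sketch of the back-projection step (an illustration with nearest-neighbour binning, not the authors' code):

```python
import numpy as np

def linear_back_projection(projections, angles, n):
    # smear each 1-D projection across an n x n grid along its view angle
    img = np.zeros((n, n))
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    for proj, theta in zip(projections, angles):
        proj = np.asarray(proj, dtype=float)
        # detector-bin coordinate of every pixel for this view
        t = xs * np.cos(theta) + ys * np.sin(theta) + (len(proj) - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, len(proj) - 1)
        img += proj[idx]
    return img / len(projections)
```

With only two orthogonal views, a point object back-projects to a cross whose peak sits at the object; adding views (160 or 320 in the study) sharpens the localization, which is why the higher view count gives a smaller area error.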
Capture and X-ray diffraction studies of protein microcrystals in a microfluidic trap array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyubimov, Artem Y.; Stanford University, Stanford, CA 94305
A microfluidic platform has been developed for the capture and X-ray analysis of protein microcrystals, affording a means to improve the efficiency of XFEL and synchrotron experiments. X-ray free-electron lasers (XFELs) promise to enable the collection of interpretable diffraction data from samples that are refractory to data collection at synchrotron sources. At present, however, more efficient sample-delivery methods that minimize the consumption of microcrystalline material are needed to allow the application of XFEL sources to a wide range of challenging structural targets of biological importance. Here, a microfluidic chip is presented in which microcrystals can be captured at fixed, addressable points in a trap array from a small volume (<10 µl) of a pre-existing slurry grown off-chip. The device can be mounted on a standard goniostat for conducting diffraction experiments at room temperature without the need for flash-cooling. Proof-of-principle tests with a model system (hen egg-white lysozyme) demonstrated the high efficiency of the microfluidic approach for crystal harvesting, permitting the collection of sufficient data from only 265 single-crystal still images to permit determination and refinement of the structure of the protein. This work shows that microfluidic capture devices can be readily used to facilitate data collection from protein microcrystals grown in traditional laboratory formats, enabling analysis when cryopreservation is problematic or when only small numbers of crystals are available. Such microfluidic capture devices may also be useful for data collection at synchrotron sources.
Bio-inspired color image enhancement
NASA Astrophysics Data System (ADS)
Meylan, Laurence; Susstrunk, Sabine
2004-06-01
Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is due to the fact that the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex. This model determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than existing ones. The presented results show that our method suitably enhances high dynamic range images.
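A single-filter Retinex-style operator on the luminance channel can be sketched as the difference between the log of the luminance and the log of a Gaussian-blurred (locally averaged) copy of it, which compresses bright windows and dim interiors toward each other. The sketch below is a generic single-scale illustration under that assumption, not the adaptive, parameter-free filter proposed in the paper:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def blur(img, sigma):
    # separable Gaussian blur: filter rows, then columns
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def single_scale_retinex(lum, sigma=15.0, eps=1e-6):
    # log(signal) minus log(local average) approximates local adaptation
    lum = lum.astype(float)
    return np.log(lum + eps) - np.log(blur(lum, sigma) + eps)
```

On a uniform region the local average equals the signal, so the output is near zero; strong output only appears where the signal deviates from its surround, which is the local-adaptation behavior the paper builds on.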
Design and Construction of a Field Capable Snapshot Hyperspectral Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Arik, Glenda H.
2005-01-01
The computed-tomography imaging spectrometer (CTIS) is a device that captures the spatial and spectral content of a rapidly evolving scene in a single image frame. The most recent CTIS design is optically all-reflective and uses as its dispersive device a state-of-the-art reflective computer-generated hologram (CGH). This project focuses on the instrument's transition from laboratory to field; the design will enable the CTIS to withstand a harsh desert environment. The system is modeled in optical design software using a tolerance analysis. The tolerances guide the design of the athermal mount and component parts. The parts are assembled into a working mount shell, where the performance of the mounts is tested for thermal integrity. An interferometric analysis of the reflective CGH is also performed.
Safety of Magnetic Resonance Imaging in Patients with Cardiac Devices.
Nazarian, Saman; Hansford, Rozann; Rahsepar, Amir A; Weltin, Valeria; McVeigh, Diana; Gucuk Ipek, Esra; Kwan, Alan; Berger, Ronald D; Calkins, Hugh; Lardo, Albert C; Kraut, Michael A; Kamel, Ihab R; Zimmerman, Stefan L; Halperin, Henry R
2017-12-28
Patients who have pacemakers or defibrillators are often denied the opportunity to undergo magnetic resonance imaging (MRI) because of safety concerns, unless the devices meet certain criteria specified by the Food and Drug Administration (termed "MRI-conditional" devices). We performed a prospective, nonrandomized study to assess the safety of MRI at a magnetic field strength of 1.5 Tesla in 1509 patients who had a pacemaker (58%) or an implantable cardioverter-defibrillator (42%) that was not considered to be MRI-conditional (termed a "legacy" device). Overall, the patients underwent 2103 thoracic and nonthoracic MRI examinations that were deemed to be clinically necessary. The pacing mode was changed to asynchronous mode for pacing-dependent patients and to demand mode for other patients. Tachyarrhythmia functions were disabled. Outcome assessments included adverse events and changes in the variables that indicate lead and generator function and interaction with surrounding tissue (device parameters). No long-term clinically significant adverse events were reported. In nine MRI examinations (0.4%; 95% confidence interval, 0.2 to 0.7), the patient's device reset to a backup mode. The reset was transient in eight of the nine examinations. In one case, a pacemaker with less than 1 month left of battery life reset to ventricular inhibited pacing and could not be reprogrammed; the device was subsequently replaced. The most common notable change in device parameters (>50% change from baseline) immediately after MRI was a decrease in P-wave amplitude, which occurred in 1% of the patients. At long-term follow-up (results of which were available for 63% of the patients), the most common notable changes from baseline were decreases in P-wave amplitude (in 4% of the patients), increases in atrial capture threshold (4%), increases in right ventricular capture threshold (4%), and increases in left ventricular capture threshold (3%). 
The observed changes in lead parameters were not clinically significant and did not require device revision or reprogramming. We evaluated the safety of MRI, performed with the use of a prespecified safety protocol, in 1509 patients who had a legacy pacemaker or a legacy implantable cardioverter-defibrillator system. No long-term clinically significant adverse events were reported. (Funded by Johns Hopkins University and the National Institutes of Health; ClinicalTrials.gov number, NCT01130896 .).
Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan
2017-06-01
Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited, rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases. Novel digital image analysis algorithms can be utilized to automate sample analysis. Evaluation of the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and training of a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole slide-scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm in the stool samples. Results were compared between the digital and visual analysis of the images showing helminth eggs. Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide-scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3-100%) in the test set (n = 217) of manually labeled helminth eggs. In this proof-of-concept study, the imaging performance of a mobile, digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. 
Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images.
A Soft, Wearable Microfluidic Device for the Capture, Storage, and Colorimetric Sensing of Sweat
Koh, Ahyeon; Kang, Daeshik; Xue, Yeguang; Lee, Seungmin; Pielak, Rafal M.; Kim, Jeonghyun; Hwang, Taehwan; Min, Seunghwan; Banks, Anthony; Bastien, Philippe; Manco, Megan C.; Wang, Liang; Ammann, Kaitlyn R.; Jang, Kyung-In; Won, Phillip; Han, Seungyong; Ghaffari, Roozbeh; Paik, Ungyu; Slepian, Marvin J.; Balooch, Guive; Huang, Yonggang; Rogers, John A.
2017-01-01
Capabilities in health monitoring via capture and quantitative chemical analysis of sweat could complement, or potentially obviate the need for, approaches based on sporadic assessment of blood samples. Established sweat monitoring technologies use simple fabric swatches and are limited to basic analysis in controlled laboratory or hospital settings. We present a collection of materials and device designs for soft, flexible and stretchable microfluidic systems, including embodiments that integrate wireless communication electronics, which can intimately and robustly bond to the surface of skin without chemical and mechanical irritation. This integration defines access points for a small set of sweat glands such that perspiration spontaneously initiates routing of sweat through a microfluidic network and set of reservoirs. Embedded chemical analyses respond in colorimetric fashion to markers such as chloride and hydronium ions, glucose and lactate. Wireless interfaces to digital image capture hardware serve as a means for quantitation. Human studies demonstrated the functionality of this microfluidic device during fitness cycling in a controlled environment and during long-distance bicycle racing in arid, outdoor conditions. The results include quantitative values for sweat rate, total sweat loss, pH and concentration of both chloride and lactate. PMID:27881826
MPCM: a hardware coder for super slow motion video sequences
NASA Astrophysics Data System (ADS)
Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.
2013-12-01
In the last decade, improvements in VLSI and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices have been designed to capture real-time video in high-resolution formats at frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization, that demand real-time video capture at extremely high frame rates with high-definition formats. Therefore, data storage capacity, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce bandwidth requirements by up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture continuously over a 40-Gbit Ethernet point-to-point link.
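Modulo-PCM transmits only the k low-order bits of each sample; because consecutive samples of a high-speed video stream change slowly, the decoder can recover the full value by choosing the candidate congruent to the received bits that lies closest to its prediction (here, simply the previous reconstructed sample). A minimal sketch of that principle follows; it is not the FPGA codec of the paper, and it assumes sample-to-sample changes stay below 2^(k-1):

```python
def mpcm_encode(samples, k):
    # keep only the k low-order bits of each sample
    mask = (1 << k) - 1
    return [s & mask for s in samples]

def mpcm_decode(codes, k, first_sample):
    # resolve each code to the value congruent mod 2^k nearest the prediction
    mask, half = (1 << k) - 1, 1 << (k - 1)
    recon = [first_sample]
    for c in codes[1:]:
        pred = recon[-1]
        diff = (c - (pred & mask)) & mask
        if diff >= half:          # wrap to the nearest congruent value
            diff -= (1 << k)
        recon.append(pred + diff)
    return recon
```

With k = 4 on 8-bit samples this halves the transmitted bit rate, which is in the spirit of the roughly 1.7x reduction reported above.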
Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier
2015-08-28
Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the flower number in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower(®), firstly guides the user to appropriately take an inflorescence photo using the smartphone's camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower(®) has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of flowers in the captures were found, with a precision exceeding 94%. Additionally, the application's efficiency on four different devices covering a wide range of the market's spectrum was also studied. The results of this benchmarking study showed significant differences among devices, although indicating that the application is efficiently usable even with low-range devices. vitisFlower is one of the first applications for viticulture that is currently freely available on Google Play.
Design and preliminary analysis of a vaginal inserter for speculum-free cervical cancer screening
Agudogo, Júlia; Krieger, Marlee S.; Miros, Robert; Proeschold-Bell, Rae Jean; Schmitt, John W.; Ramanujam, Nimmi
2017-01-01
Objective Cervical cancer screening usually requires use of a speculum to provide a clear view of the cervix. The speculum is one potential barrier to screening due to fear of pain, discomfort and embarrassment. The aim of this paper is to present and demonstrate the feasibility of a tampon-sized inserter and the POCkeT Colposcope, a miniature pen-sized colposcope, for comfortable, speculum-free and potentially self-colposcopy. Study design We explored different designs using 3D computer-aided design (CAD) software and performed mechanical testing simulations on each. Designs were rapidly prototyped and tested using a custom vaginal phantom across a range of vaginal pressures and uterine tilts to select an optimal design. Two final designs were tested with fifteen volunteers to assess cervix visualization, comfort and usability compared to the speculum, and the optimal design, the curved-tip inserter, was selected for testing in volunteers. Results We present a vaginal inserter as an alternative to the standard speculum for use with the POCkeT Colposcope. The device has a slim tubular body with a funnel-like curved tip measuring approximately 2.5 cm in diameter. The inserter has a channel through which a 2 megapixel (MP) mini camera with LED illumination fits to enable image capture. Mechanical finite element testing simulations with an applied pressure of 15 cm H2O indicated a high factor of safety (90.9) for the inserter. Testing of the device with a custom vaginal phantom, across a range of supine vaginal pressures and uterine tilts (retroverted, anteverted and sideverted), demonstrated image capture with a visual area comparable to the speculum for normal/axially positioned uteri and significantly better than the speculum for anteverted and sideverted uteri (p<0.00001). Volunteer studies with self-insertion and physician-assisted cervix image capture showed adequate cervix visualization for 83% of patients. 
In addition, questionnaire responses from volunteers indicated a 92.3% overall preference for the inserter over the speculum and all indicated that the inserter was more comfortable than the speculum. The inserter provides a platform for self-cervical cancer screening and also enables acetic acid/Lugol’s iodine application and insertion of swabs for Pap smear sample collection. Conclusion This study demonstrates the feasibility of an inserter and miniature-imaging device for comfortable cervical image capture of women with potential for synergistic HPV and Pap smear sample collection. PMID:28562669
Capture and X-ray diffraction studies of protein microcrystals in a microfluidic trap array
Lyubimov, Artem Y.; Murray, Thomas D.; Koehl, Antoine; ...
2015-03-27
X-ray free-electron lasers (XFELs) promise to enable the collection of interpretable diffraction data from samples that are refractory to data collection at synchrotron sources. At present, however, more efficient sample-delivery methods that minimize the consumption of microcrystalline material are needed to allow the application of XFEL sources to a wide range of challenging structural targets of biological importance. Here, a microfluidic chip is presented in which microcrystals can be captured at fixed, addressable points in a trap array from a small volume (<10 µl) of a pre-existing slurry grown off-chip. The device can be mounted on a standard goniostat for conducting diffraction experiments at room temperature without the need for flash-cooling. Proof-of-principle tests with a model system (hen egg-white lysozyme) demonstrated the high efficiency of the microfluidic approach for crystal harvesting, permitting the collection of sufficient data from only 265 single-crystal still images to permit determination and refinement of the structure of the protein. This work shows that microfluidic capture devices can be readily used to facilitate data collection from protein microcrystals grown in traditional laboratory formats, enabling analysis when cryopreservation is problematic or when only small numbers of crystals are available. Such microfluidic capture devices may also be useful for data collection at synchrotron sources.
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-01-01
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023
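The "three-element median" step named in the abstract above can be illustrated in isolation. The sketch below is a hypothetical, branch-only median-of-three (not the authors' implementation): the kind of cheap selection used to pick a robust per-pixel value from three coded-exposure samples without a full sort.

```python
def median_of_three(a, b, c):
    # Branch-only median of three samples: no full sort needed,
    # which matters when the operation runs once per pixel.
    if a > b:
        a, b = b, a   # ensure a <= b
    if b > c:
        b = c         # b becomes min(b, c)
    return max(a, b)  # middle value of the original three

# e.g. three coded-exposure readouts of one pixel (made-up values)
middle = median_of_three(118, 240, 131)
```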
Endoscopic device for functional imaging of the retina
NASA Astrophysics Data System (ADS)
Barriga, Simon; Lohani, Sweyta; Martell, Bret; Soliz, Peter; Ts'o, Dan
2011-03-01
Non-invasive imaging of retinal function based on the recording of spatially distributed reflectance changes evoked by visual stimuli has to date been performed primarily using modified commercial fundus cameras. We have constructed a prototype retinal functional imager, using a commercial endoscope (Storz) for the front-end optics, and a low-cost back-end that includes the needed dichroic beam splitter to separate the stimulus path from the imaging path. This device has been tested to demonstrate its performance for the delivery of adequate near infrared (NIR) illumination, intensity of the visual stimulus, and reflectance return in the imaging path. The current device was found to be capable of imaging reflectance changes of 0.1%, similar to that observable using the modified commercial fundus camera approach. The visual stimulus (a 505 nm spot of 0.5 s) was used with an interrogation illumination of 780 nm, and a sequence of images was captured. At each pixel, the imaged signal was subtracted and normalized by the baseline reflectance, so that the measurement was ΔR/R. The typical retinal activity signal observed had a ΔR/R of 0.3-1.0%. The noise levels, measured when no stimulus was applied, varied within +/- 0.05%. Functional imaging has been suggested as a means to provide objective information on retina function that may be a preclinical indicator of ocular diseases, such as age-related macular degeneration (AMD), glaucoma, and diabetic retinopathy. The endoscopic approach promises to yield a significantly more economical retinal functional imaging device that would be clinically important.
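The per-pixel normalization described above (subtract the baseline reflectance, then divide by it) is simple to state in code. A minimal sketch with invented frame values rather than real sensor data:

```python
def delta_r_over_r(baseline, stimulus):
    # Per-pixel fractional reflectance change dR/R:
    # (stimulus - baseline) / baseline, over 2D frames
    # given as nested lists of reflectance values.
    return [[(s - b) / b for s, b in zip(s_row, b_row)]
            for s_row, b_row in zip(stimulus, baseline)]

baseline = [[100.0, 200.0]]
stimulus = [[100.5, 199.0]]  # +0.5% and -0.5% changes
signal = delta_r_over_r(baseline, stimulus)
```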
Low-cost, high-performance and efficiency computational photometer design
NASA Astrophysics Data System (ADS)
Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly
2014-05-01
Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance, high-efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time correlate read-out, capture, and image process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the arctic including volcanic plumes, ice formation, and arctic marine life.
A Novel, Real-Time, In Vivo Mouse Retinal Imaging System.
Butler, Mark C; Sullivan, Jack M
2015-11-01
To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and assess system capabilities by evaluating various animal models. Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination due to off-axis illumination and poor optical train optimization. Therefore, we designed a paraxial illumination system for a Greenough-type stereo dissecting microscope incorporating an optimized optical launch and an efficiently coupled fiber optic delivery system. Excitation and emission filters control spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated for by eye positioning in order to observe the entire retina. The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time, intravenous fluorescein angiography for the live mouse has been achieved. A novel device is established for real-time viewing and image capture of the small animal retina during subretinal injections for preclinical gene therapy studies.
Virtual view image synthesis for eye-contact in TV conversation system
NASA Astrophysics Data System (ADS)
Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae
2010-02-01
Eye-contact plays an important role for human communications in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach to achieve eye-contact by techniques of arbitrary view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method by using a single camera to save the computational costs, in which the only one real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are separately generated by comparison with pre-captured frontal face images. Experimental results of both the methods show that the synthesized virtual images enable the eye-contact favorably.
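The core geometric step in the method above is warping captured pixels into the virtual (display-centered) viewpoint through a homography. A self-contained sketch of the warp itself, using an arbitrary example matrix (the real matrices come from camera calibration, which is not shown here):

```python
def apply_homography(H, x, y):
    # Maps pixel (x, y) through the 3x3 homography H
    # (row-major nested lists) via homogeneous coordinates.
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # perspective divide

# the identity homography leaves points unchanged
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```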
Code of Federal Regulations, 2014 CFR
2014-07-01
... capture system and add-on control device operating limits during the performance test? 63.3546 Section 63... of key parameters of the valve operating system (e.g., solenoid valve operation, air pressure... minimum operating limit for that specific capture device or system of multiple capture devices. The...
Code of Federal Regulations, 2012 CFR
2012-07-01
... capture system and add-on control device operating limits during the performance test? 63.3546 Section 63... of key parameters of the valve operating system (e.g., solenoid valve operation, air pressure... minimum operating limit for that specific capture device or system of multiple capture devices. The...
An update on carbon nanotube-enabled X-ray sources for biomedical imaging.
Puett, Connor; Inscoe, Christina; Hartman, Allison; Calliste, Jabari; Franceschi, Dora K; Lu, Jianping; Zhou, Otto; Lee, Yueh Z
2018-01-01
A new imaging technology has emerged that uses carbon nanotubes (CNT) as the electron emitter (cathode) for the X-ray tube. Since the performance of the CNT cathode is controlled by simple voltage manipulation, CNT-enabled X-ray sources are ideal for the repetitive imaging steps needed to capture three-dimensional information. As such, they have allowed the development of a gated micro-computed tomography (CT) scanner for small animal research as well as stationary tomosynthesis, an experimental technology for large field-of-view human imaging. The small animal CT can acquire images at specific points in the respiratory and cardiac cycles. Longitudinal imaging therefore becomes possible and has been applied to many research questions, ranging from tumor response to the noninvasive assessment of cardiac output. Digital tomosynthesis (DT) is a low-dose and low-cost human imaging tool that captures some depth information. Known as three-dimensional mammography, DT is now used clinically for breast imaging. However, the resolution of currently-approved DT is limited by the need to swing the X-ray source through space to collect a series of projection views. An array of fixed and distributed CNT-enabled sources provides the solution and has been used to construct stationary DT devices for breast, lung, and dental imaging. To date, over 100 patients have been imaged on Institutional Review Board-approved study protocols. Early experience is promising, showing an excellent conspicuity of soft-tissue features, while also highlighting technical and post-acquisition processing limitations that are guiding continued research and development. Additionally, CNT-enabled sources are being tested in miniature X-ray tubes that are capable of generating adequate photon energies and tube currents for clinical imaging. 
Although there are many potential applications for these small field-of-view devices, initial experience has been with an X-ray source that can be inserted into the mouth for dental imaging. Conceived less than 20 years ago, CNT-enabled X-ray sources are now being manufactured on a commercial scale and are powering both research tools and experimental human imaging devices. WIREs Nanomed Nanobiotechnol 2018, 10:e1475. doi: 10.1002/wnan.1475
Multi-image acquisition-based distance sensor using agile laser spot beam.
Riza, Nabeel A; Amin, M Junaid
2014-09-01
We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture different laser spot size images on a target, with these beam spot sizes different from the minimal spot size possible at that target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution via the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal-length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
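The spot-size-versus-focal-length relationship above can be caricatured with a coarse geometric-optics sketch. This is a hypothetical simplification, not the paper's estimator: a beam focused by a lens of focal length f reaches its minimum spot near z = f, so the target distance is read off as the focal setting that produced the smallest measured spot (the real sensor fits the full curve to reach sub-cm resolution).

```python
def estimate_distance(focal_lengths, spot_sizes):
    # Pick the focal setting whose measured spot is smallest:
    # under the collimated-input thin-lens approximation, the
    # minimum spot occurs when the focus lands on the target.
    size, focal = min(zip(spot_sizes, focal_lengths))
    return focal

# spot diameters (mm, invented) measured at three ECVFL settings (cm)
dist_cm = estimate_distance([40, 50, 60], [3.0, 1.1, 2.4])
```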
Lara-Capi, Cynthia; Lingström, Peter; Lai, Gianfranco; Cocco, Fabio; Simark-Mattsson, Charlotte; Campus, Guglielmo
2017-01-01
Objectives: This article aimed to evaluate: (a) the agreement between a near-infrared light transillumination device and clinical and radiographic examinations in caries lesion detection and (b) the reliability of images captured by the transillumination device. Methods: Two calibrated examiners evaluated the caries status in premolars and molars of 52 randomly selected subjects by comparing the transillumination device with a clinical examination for the occlusal surfaces and by comparing the transillumination device with a radiographic examination (bitewing radiographs) for the approximal surfaces. Forty-eight trained dental hygienists evaluated 30 randomly selected images and reevaluated them 1 month later. Results: A high concordance between the transillumination method and clinical examination (kappa = 0.99) was detected for occlusal caries lesions, while for approximal surfaces, the transillumination device identified more lesions than bitewing radiography (kappa = 0.91). At the dentinal level, the two methods identified the same number of caries lesions (kappa = 1), whereas more approximal lesions were recorded using the transillumination device in the enamel (kappa = 0.24). The intraexaminer reliability was substantial/almost perfect in 59.4% of the participants. Conclusions: The transillumination method showed a high concordance compared with traditional methods (clinical examination and bitewing radiographs). Caries detection reliability using the transillumination device images showed a high intraexaminer agreement. Transillumination proved to be a reliable method and as effective as traditional methods in caries detection. PMID:28191797
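The kappa values reported above measure chance-corrected agreement between two examinations. A minimal Cohen's kappa implementation for two raters scoring the same items (toy labels, not the study data):

```python
def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the
    # observed agreement and p_e the agreement expected by chance
    # from the two raters' marginal label frequencies.
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in labels)
    return (p_o - p_e) / (1 - p_e)
```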
Roguin, Ariel; Zviman, Menekhem M.; Meininger, Glenn R.; Rodrigues, E. Rene; Dickfeld, Timm M.; Bluemke, David A.; Lardo, Albert; Berger, Ronald D.; Calkins, Hugh; Halperin, Henry R.
2011-01-01
Background MRI has unparalleled soft-tissue imaging capabilities. The presence of devices such as pacemakers and implantable cardioverter/defibrillators (ICDs), however, is historically considered a contraindication to MRI. These devices are now smaller, with less magnetic material and improved electromagnetic interference protection. Our aim was to determine whether these modern systems can be used in an MR environment. Methods and Results We tested in vitro and in vivo lead heating, device function, force acting on the device, and image distortion at 1.5 T. Clinical MR protocols and in vivo measurements yielded temperature changes <0.5°C. Older (manufactured before 2000) ICDs were damaged by the MR scans. Newer ICD systems and most pacemakers, however, were not. The maximal force acting on newer devices was <100 g. Modern (manufactured after 2000) ICD systems were implanted in dogs (n=18), and after 4 weeks, 3- to 4-hour MR scans were performed (n=15). No device dysfunction occurred. The images were of high quality with distortion dependent on the scan sequence and plane. Pacing threshold and intracardiac electrogram amplitude were unchanged over the 8 weeks, except in 1 animal that, after MRI, had a transient (<12 hours) capture failure. Pathological data of the scanned animals revealed very limited necrosis or fibrosis at the tip of the lead area, which was not different from controls (n=3) not subjected to MRI. Conclusions These data suggest that certain modern pacemaker and ICD systems may indeed be MRI safe. This may have major clinical implications for current imaging practices. PMID:15277324
3D Capturing Performances of Low-Cost Range Sensors for Mass-Market Applications
NASA Astrophysics Data System (ADS)
Guidi, G.; Gonizzi, S.; Micoli, L.
2016-06-01
Since the advent of the first Kinect as a motion-controller device for the Microsoft XBOX platform (November 2010), several similar active, low-cost range-sensing devices have been introduced on the mass market for purposes including gesture-based interfaces, 3D multimedia interaction, robot navigation, finger tracking, 3D body scanning for garment design, and automotive proximity sensing. However, given their capability to generate a real-time stream of range images, these have also been used in some projects as general-purpose range devices, with performance that may be satisfactory for some applications. This paper describes the working principle of the various devices, analyzing their systematic and random errors to explore their applicability to standard 3D capturing problems. Five devices featuring three different technologies have been tested: i) Kinect V1 by Microsoft, Structure Sensor by Occipital, and Xtion PRO by ASUS, all based on different implementations of the Primesense sensor; ii) F200 by Intel/Creative, implementing the Realsense pattern-projection technology; iii) Kinect V2 by Microsoft, equipped with the Canesta TOF camera. A critical analysis of the results first compares the devices and then identifies the range of applications for which they could work as a viable solution.
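The systematic/random error decomposition used above to compare the sensors can be written down directly: bias of the mean measurement against ground truth, and RMS scatter about the mean. A sketch with invented repeated readings of a target at known distance:

```python
def range_errors(measurements, ground_truth):
    # Systematic error: bias of the mean measurement vs. ground truth.
    # Random error: RMS scatter of the measurements about their mean.
    n = len(measurements)
    mean = sum(measurements) / n
    systematic = mean - ground_truth
    random_err = (sum((m - mean) ** 2 for m in measurements) / n) ** 0.5
    return systematic, random_err

# repeated range readings (m, invented) of a wall at 1.0 m
bias, noise = range_errors([1.5, 0.5, 1.0], 1.0)
```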
Processing, Cataloguing and Distribution of Uas Images in Near Real Time
NASA Astrophysics Data System (ADS)
Runkel, I.
2013-08-01
Why are UAS generating such hype? UAS make data capture flexible, fast, and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture holds through to the end of the processing chain, all intermediate steps, such as data processing and data dissemination to the customer, need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution; this is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format, and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, respectively the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing, or direct interpretation via web applications, wherever you want. The whole processing chain is built in a generic manner and can be adapted to a magnitude of applications. The UAV imagery can be processed and catalogued as single orthoimages or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (Web Processing Services), image enhancement and image analysis workflows such as change detection layers can be calculated and provided to the image analysts. The WPS processing runs directly on the raster data management server; the image analyst has no data and no software on his local computer. This workflow has proven to be fast, stable, and accurate.
It is designed to support time-critical applications for security demands: the images can be checked and interpreted in near real time. For sensitive areas it gives you the possibility to inform remote decision makers or interpretation experts in order to provide them situational awareness, wherever they are. For monitoring and inspection tasks it speeds up the process of data capture and data interpretation. The fully automated workflow of data pre-processing, georeferencing, cataloguing, and dissemination in near real time was developed based on the Intergraph products ERDAS IMAGINE, ERDAS APOLLO, and GEOSYSTEMS METAmorph!IT. It is offered as an adaptable solution by GEOSYSTEMS GmbH.
Development of on line automatic separation device for apple and sleeve
NASA Astrophysics Data System (ADS)
Xin, Dengke; Ning, Duo; Wang, Kangle; Han, Yuhang
2018-04-01
Based on an STM32F407 single-chip microcomputer as the control core, an automatic fruit-sleeve separation device is designed. The design comprises hardware and software. The hardware includes a mechanical tooth separator and a three-degree-of-freedom manipulator, as well as an industrial control computer, an image data acquisition card, an end effector, and other components. The software system, built in the Visual C++ development environment, localizes and recognizes the fruit sleeve using image-processing and machine-vision techniques, and drives the manipulator to capture the foam-net sleeve, transfer it, and place it at the designated position. Tests show that the automatic fruit-sleeve separation device responds quickly and achieves a high separation success rate; it can separate the apple from its plastic foam sleeve, and it lays the foundation for further study and for application on enterprise production lines.
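At its simplest, the localization step above reduces to finding where the sleeve sits in a segmented image so the manipulator can be driven there. A toy stand-in (not the authors' Visual C++ pipeline): the centroid of foreground pixels in a binary mask.

```python
def mask_centroid(mask):
    # Centroid (x, y) of foreground pixels in a binary mask given as
    # nested lists of 0/1; in the simplest scheme this is the grasp
    # point handed to the manipulator.
    points = [(x, y)
              for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)

# a 2x2 "sleeve" blob in the lower-right of a 4x4 frame
mask = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 1, 1],
        [0, 0, 1, 1]]
```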
Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs must not only be unaltered and authentic, capture context-relevant images, and meet certain minimum requirements for image sharpness and information density; color accuracy also plays an important role, for instance in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person; as a discrete property of an image, color in digital photos is also influenced to a considerable extent by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and speed of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white-balance tool or an automatic flash. We therefore recommend that the use of a color management tool be considered for the acquisition of all images that demand high true-color accuracy (in particular in the setting of injury documentation).
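To make concrete why a calibration target beats a simple white balance: the most basic correction derives per-channel gains from a patch known to be neutral. A hypothetical sketch with invented values (real color management, as with the SpyderCheckr, fits a full correction matrix from many colored patches rather than one gray patch):

```python
def neutral_patch_gains(r, g, b):
    # Per-channel gains that map a captured neutral-gray patch
    # (mean channel values r, g, b) back to equal channel values.
    target = (r + g + b) / 3
    return target / r, target / g, target / b

def apply_gains(pixel, gains):
    # Correct one RGB pixel with the derived gains.
    return tuple(v * k for v, k in zip(pixel, gains))

# a gray card photographed with a red cast (invented values)
gains = neutral_patch_gains(180.0, 120.0, 120.0)
corrected = apply_gains((180.0, 120.0, 120.0), gains)
```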
Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier
2015-01-01
Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the flower number in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower®, firstly guides the user to appropriately take an inflorescence photo using the smartphone’s camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower® has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of flowers in the captures were found, with a precision exceeding 94%. Additionally, the application’s efficiency on four different devices covering a wide range of the market’s spectrum was also studied. The results of this benchmarking study showed significant differences among devices, although indicating that the application is efficiently usable even with low-range devices. vitisFlower is one of the first applications for viticulture that is currently freely available on Google Play. PMID:26343664
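The detection-and-count stage above can be caricatured as a connected-components pass over a thresholded image. This is a pure-Python stand-in for what vitisFlower® does with OpenCV, run on a toy mask rather than a real inflorescence photo:

```python
def count_components(mask):
    # Counts 4-connected foreground blobs in a binary mask
    # (nested lists of 0/1) with an explicit flood-fill stack.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w \
                            and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

# three separate "flowers" in a toy 3x4 mask
flowers = count_components([[1, 0, 0, 1],
                            [0, 0, 0, 1],
                            [1, 1, 0, 0]])
```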
Precision platform for convex lens-induced confinement microscopy
NASA Astrophysics Data System (ADS)
Berard, Daniel; McFaul, Christopher M. J.; Leith, Jason S.; Arsenault, Adriel K. J.; Michaud, François; Leslie, Sabrina R.
2013-10-01
We present the conception, fabrication, and demonstration of a versatile, computer-controlled microscopy device which transforms a standard inverted fluorescence microscope into a precision single-molecule imaging station. The device uses the principle of convex lens-induced confinement [S. R. Leslie, A. P. Fields, and A. E. Cohen, Anal. Chem. 82, 6224 (2010)], which employs a tunable imaging chamber to enhance background rejection and extend diffusion-limited observation periods. Using nanopositioning stages, this device achieves repeatable and dynamic control over the geometry of the sample chamber on scales as small as the size of individual molecules, enabling regulation of their configurations and dynamics. Using microfluidics, this device enables serial insertion as well as sample recovery, facilitating temporally controlled, high-throughput measurements of multiple reagents. We report on the simulation and experimental characterization of this tunable chamber geometry, and its influence upon the diffusion and conformations of DNA molecules over extended observation periods. This new microscopy platform has the potential to capture, probe, and influence the configurations of single molecules, with dramatically improved imaging conditions in comparison to existing technologies. These capabilities are of immediate interest to a wide range of research and industry sectors in biotechnology, biophysics, materials, and chemistry.
A soft, wearable microfluidic device for the capture, storage, and colorimetric sensing of sweat.
Koh, Ahyeon; Kang, Daeshik; Xue, Yeguang; Lee, Seungmin; Pielak, Rafal M; Kim, Jeonghyun; Hwang, Taehwan; Min, Seunghwan; Banks, Anthony; Bastien, Philippe; Manco, Megan C; Wang, Liang; Ammann, Kaitlyn R; Jang, Kyung-In; Won, Phillip; Han, Seungyong; Ghaffari, Roozbeh; Paik, Ungyu; Slepian, Marvin J; Balooch, Guive; Huang, Yonggang; Rogers, John A
2016-11-23
Capabilities in health monitoring enabled by capture and quantitative chemical analysis of sweat could complement, or potentially obviate the need for, approaches based on sporadic assessment of blood samples. Established sweat monitoring technologies use simple fabric swatches and are limited to basic analysis in controlled laboratory or hospital settings. We present a collection of materials and device designs for soft, flexible, and stretchable microfluidic systems, including embodiments that integrate wireless communication electronics, which can intimately and robustly bond to the surface of the skin without chemical and mechanical irritation. This integration defines access points for a small set of sweat glands such that perspiration spontaneously initiates routing of sweat through a microfluidic network and set of reservoirs. Embedded chemical analyses respond in colorimetric fashion to markers such as chloride and hydronium ions, glucose, and lactate. Wireless interfaces to digital image capture hardware serve as a means for quantitation. Human studies demonstrated the functionality of this microfluidic device during fitness cycling in a controlled environment and during long-distance bicycle racing in arid, outdoor conditions. The results include quantitative values for sweat rate, total sweat loss, pH, and concentration of chloride and lactate. Copyright © 2016, American Association for the Advancement of Science.
NASA Astrophysics Data System (ADS)
Starks, Michael R.
1990-09-01
A variety of low-cost devices for capturing, editing and displaying field sequential 60-cycle stereoscopic video have recently been marketed by 3D TV Corp. and others. When properly used, they give very high quality images with most consumer and professional equipment. Our stereoscopic multiplexers for creating and editing field sequential video in NTSC or component formats (SVHS, Betacam, RGB), together with our Home 3D Theater system employing LCD eyeglasses, have made 3D movies and television available to a large audience.
Portable smartphone based quantitative phase microscope
NASA Astrophysics Data System (ADS)
Meng, Xin; Tian, Xiaolin; Yu, Wei; Kong, Yan; Jiang, Zhilong; Liu, Fei; Xue, Liang; Liu, Cheng; Wang, Shouyu
2018-01-01
To realize a portable device with high-contrast imaging capability, we designed a quantitative phase microscope based on a smartphone, using the transport of intensity equation method. The whole system employs an objective and an eyepiece as the imaging system and a cost-effective LED as the illumination source. A 3-D printed cradle is used to align these components. Images of different focal planes are captured by manual focusing, followed by calculation of the sample phase via a self-developed Android application. To validate its accuracy, we first tested the device by measuring a random phase plate with known phases; red blood cell smears, Pap smears, broad bean epidermis sections and monocot roots were then measured to show its performance. Owing to its accuracy, high contrast, cost-effectiveness and portability, the portable smartphone based quantitative phase microscope is a promising tool that could be adopted in remote healthcare and medical diagnosis.
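The phase calculation described in this abstract is based on the transport of intensity equation (TIE). A minimal, generic FFT-based TIE solver under a uniform-intensity assumption is sketched below; this is an illustration of the standard technique, not the app's actual Android implementation, and the function and parameter names are hypothetical.

```python
import numpy as np

def tie_phase(dIdz, I0, wavelength, pixel):
    """Recover phase from the axial intensity derivative via the transport
    of intensity equation, assuming uniform in-focus intensity I0:
        dI/dz = -(I0 / k) * laplacian(phi)
    solved with an FFT-based inverse Laplacian."""
    k = 2 * np.pi / wavelength
    ny, nx = dIdz.shape
    fx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel)
    fy = 2 * np.pi * np.fft.fftfreq(ny, d=pixel)
    k2 = fx[None, :] ** 2 + fy[:, None] ** 2
    k2[0, 0] = np.inf  # the mean phase (DC term) is unrecoverable; force it to 0
    phi_hat = k * np.fft.fft2(dIdz) / (I0 * k2)
    return np.real(np.fft.ifft2(phi_hat))
```

In practice dI/dz is estimated by finite differences between the manually captured defocused images, e.g. (I(+dz) - I(-dz)) / (2 dz).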
Willmott, Jon R.; Mims, Forrest M.; Parisi, Alfio V.
2018-01-01
Smartphones are playing an increasing role in the sciences, owing to the ubiquitous proliferation of these devices, their relatively low cost, their increasing processing power and their suitability for integrated data acquisition and processing in a ‘lab in a phone’ capacity. There is furthermore the potential to deploy these units as nodes within Internet of Things architectures, enabling massive networked data capture. Hitherto, considerable attention has been focused on imaging applications of these devices. However, within just the last few years, another possibility has emerged: to use smartphones as a means of capturing spectra, mostly by coupling various classes of fore-optics to these units, with data capture achieved using the smartphone camera. These highly novel approaches have the potential to become widely adopted across a broad range of scientific application areas, e.g., biomedical, chemical and agricultural. In this review, we detail the exciting recent development of smartphone spectrometer hardware, in addition to covering the applications to which these units have hitherto been deployed. The paper also points forward to the potentially highly influential impacts that such units could have on the sciences in the coming decades. PMID:29342899
A Novel, Real-Time, In Vivo Mouse Retinal Imaging System
Butler, Mark C.; Sullivan, Jack M.
2015-01-01
Purpose To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and to assess system capabilities by evaluating various animal models. Methods Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination due to off-axis illumination and poor optical train optimization. Therefore, we designed a paraxial illumination system for a Greenough-type stereo dissecting microscope, incorporating an optimized optical launch and an efficiently coupled fiber optic delivery system. Excitation and emission filters control spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated for by eye positioning in order to observe the entire retina. Results The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time intravenous fluorescein angiography in the live mouse has been achieved. Conclusions A novel device is established for real-time viewing and image capture of the small animal retina during subretinal injections for preclinical gene therapy studies. PMID:26551329
NASA Astrophysics Data System (ADS)
Mun, Seong K.; Freedman, Matthew T.; Gelish, Anthony; de Treville, Robert E.; Sheehy, Monet R.; Hansen, Mark; Hill, Mac; Zacharia, Elisabeth; Sullivan, Michael J.; Sebera, C. Wayne
1993-01-01
An image management and communications (IMAC) network, also known as a picture archiving and communication system (PACS), consists of (1) digital image acquisition, (2) image review station(s), (3) image storage device(s) and image reading workstation(s), and (4) communication capability. When these subsystems are integrated over a high-speed communication technology, there are numerous possibilities for improving the timeliness and quality of diagnostic services within a hospital or at remote clinical sites. A teleradiology system uses basically the same hardware configuration together with a long-distance communication capability. Functional characteristics of the components are highlighted. Many medical imaging systems are already in digital form. These digital images constitute approximately 30% of the total volume of images produced in a radiology department. The remaining 70% of images include conventional x-ray films of the chest, skeleton, abdomen, and GI tract. Unless one develops a method of handling these conventional film images, global improvement in productivity in image management and radiology service throughout a hospital cannot be achieved. Currently, there are two methods of producing digital information from these conventional analog images for IMAC: film digitizers that scan the conventional films, and computed radiography (CR), which captures x-ray images using a storage phosphor plate that is subsequently scanned by a laser beam.
NASA Astrophysics Data System (ADS)
Hall, D. J.; Skottfelt, J.; Soman, M. R.; Bush, N.; Holland, A.
2017-12-01
Charge-Coupled Devices (CCDs) have been the detector of choice for imaging and spectroscopy in space missions for several decades, such as those being used for the Euclid VIS instrument and baselined for the SMILE SXI. Despite the many positive properties of CCDs, such as the high quantum efficiency and low noise, when used in a space environment the detectors suffer damage from the often-harsh radiation environment. High-energy particles can create defects in the silicon lattice which act to trap the signal electrons being transferred through the device, reducing the signal measured and effectively increasing the noise. We can reduce the impact of radiation on the devices through four key methods: increased radiation shielding, device design considerations, optimisation of operating conditions, and image correction. Here, we concentrate on device design considerations, investigating the impact of narrowing the charge-transfer channel in the device with the aim of minimising the impact of traps during readout. Previous studies for the Euclid VIS instrument considered two devices, the e2v CCD204 and CCD273, the serial register of the former having a 50 μm channel and the latter a 20 μm channel. The reduction in channel width was previously modelled to give an approximate 1.6× reduction in charge storage volume, and was verified experimentally to give a 1.7× reduction in charge transfer inefficiency. The methods used to simulate the reduction approximated the charge cloud to a sharp-edged volume within which the probability of capture by traps was 100%. For high signals and slow readout speeds, this is a reasonable approximation. However, for low signals and higher readout speeds, the approximation falls short. Here we discuss a new method of simulating and calculating charge storage variations with device design changes, considering the absolute probability of capture across the pixel, bringing validity to all signal sizes and readout speeds.
Using this method, we can optimise the device design to suffer minimum impact from radiation damage effects, here using detector development for the SMILE mission to demonstrate the process.
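The "absolute probability of capture" framing above can be illustrated with the standard Shockley-Read-Hall capture time constant, where the chance of a trap capturing an electron grows with the time the charge cloud dwells over it. This is a generic sketch; the parameter values are illustrative assumptions, not those of the CCD204/CCD273 study.

```python
import numpy as np

def capture_probability(dwell_time_s, sigma_cm2=1e-15, v_th_cm_s=1e7, n_e_cm3=1e15):
    """Probability that an empty trap captures a signal electron while the
    charge cloud dwells over it, using the Shockley-Read-Hall capture time
    constant tau_c = 1 / (sigma * v_th * n_e), where sigma is the capture
    cross-section, v_th the electron thermal velocity, and n_e the local
    electron density (which depends on signal size and channel volume)."""
    tau_c = 1.0 / (sigma_cm2 * v_th_cm_s * n_e_cm3)
    return 1.0 - np.exp(-np.asarray(dwell_time_s) / tau_c)
```

Summing such probabilities over the trap population in each pixel, rather than assuming 100% capture inside a sharp-edged cloud, is what extends the model's validity to low signals and fast readout.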
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Image-Capture Devices Extend Medicine's Reach; Medical Devices Assess, Treat Balance Disorders; NASA Bioreactors Advance Disease Treatments; Robotics Algorithms Provide Nutritional Guidelines; "Anti-Gravity" Treadmills Speed Rehabilitation; Crew Management Processes Revitalize Patient Care; Hubble Systems Optimize Hospital Schedules; Web-based Programs Assess Cognitive Fitness; Electrolyte Concentrates Treat Dehydration; Tools Lighten Designs, Maintain Structural Integrity; Insulating Foams Save Money, Increase Safety; Polyimide Resins Resist Extreme Temperatures; Sensors Locate Radio Interference; Surface Operations Systems Improve Airport Efficiency; Nontoxic Resins Advance Aerospace Manufacturing; Sensors Provide Early Warning of Biological Threats; Robot Saves Soldiers' Lives Overseas (MarcBot); Apollo-Era Life Raft Saves Hundreds of Sailors; Circuits Enhance Scientific Instruments and Safety Devices; Tough Textiles Protect Payloads and Public Safety Officers; Forecasting Tools Point to Fishing Hotspots; Air Purifiers Eliminate Pathogens, Preserve Food; Fabrics Protect Sensitive Skin from UV Rays; Phase Change Fabrics Control Temperature; Tiny Devices Project Sharp, Colorful Images; Star-Mapping Tools Enable Tracking of Endangered Animals; Nanofiber Filters Eliminate Contaminants; Modeling Innovations Advance Wind Energy Industry; Thermal Insulation Strips Conserve Energy; Satellite Respondent Buoys Identify Ocean Debris; Mobile Instruments Measure Atmospheric Pollutants; Cloud Imagers Offer New Details on Earth's Health; Antennas Lower Cost of Satellite Access; Feature Detection Systems Enhance Satellite Imagery; Chlorophyll Meters Aid Plant Nutrient Management; Telemetry Boards Interpret Rocket, Airplane Engine Data; Programs Automate Complex Operations Monitoring; Software Tools Streamline Project Management; Modeling Languages Refine Vehicle Design; Radio Relays Improve Wireless Products; Advanced Sensors Boost Optical Communication, 
Imaging; Tensile Fabrics Enhance Architecture Around the World; Robust Light Filters Support Powerful Imaging Devices; Thermoelectric Devices Cool, Power Electronics; Innovative Tools Advance Revolutionary Weld Technique; Methods Reduce Cost, Enhance Quality of Nanotubes; Gauging Systems Monitor Cryogenic Liquids; Voltage Sensors Monitor Harmful Static; and Compact Instruments Measure Heat Potential.
Secure steganography designed for mobile platforms
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath; Sifuentes, Ronnie R.
2006-05-01
Adaptive steganography, an intelligent approach to message hiding, integrated with matrix encoding and pn-sequences serves as a promising resolution to recent security assurance concerns. Incorporating the above data hiding concepts with established cryptographic protocols in wireless communication would greatly increase the security and privacy of transmitting sensitive information. We present an algorithm which will address the following problems: 1) low embedding capacity in mobile devices due to fixed image dimensions and memory constraints, 2) compatibility between mobile and land based desktop computers, and 3) detection of stego images by widely available steganalysis software [1-3]. Consistent with the smaller available memory, processor capabilities, and limited resolution associated with mobile devices, we propose a more magnified approach to steganography by focusing adaptive efforts at the pixel level. This deeper method, in comparison to the block processing techniques commonly found in existing adaptive methods, allows an increase in capacity while still offering a desired level of security. Based on computer simulations using high resolution, natural imagery and mobile device captured images, comparisons show that the proposed method securely allows an increased amount of embedding capacity but still avoids detection by varying steganalysis techniques.
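Matrix encoding, mentioned in this abstract, can be sketched with the classic Hamming-based scheme that hides three message bits in seven cover LSBs while changing at most one cover bit. This is a generic illustration of the technique, not the authors' exact algorithm; all names are hypothetical.

```python
import numpy as np

# Parity-check matrix of the Hamming (7,4) code: column j (1-based) is the
# binary representation of j, with the first row as the most significant bit.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def matrix_embed(cover_bits, message_bits):
    """Embed 3 message bits into 7 cover LSBs, flipping at most one bit.
    The bit to flip is the column of H equal to syndrome XOR message."""
    syndrome = H.dot(cover_bits) % 2
    diff = syndrome ^ message_bits
    pos = 4 * diff[0] + 2 * diff[1] + diff[2]  # 1-based column index; 0 = no change
    stego = cover_bits.copy()
    if pos:
        stego[pos - 1] ^= 1
    return stego

def matrix_extract(stego_bits):
    """The receiver recovers the message as the syndrome of the stego bits."""
    return H.dot(stego_bits) % 2
```

This yields an embedding efficiency well above plain LSB replacement: 3 bits conveyed per at most 1 changed pixel.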
ERIC Educational Resources Information Center
Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol
2011-01-01
The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
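The estimate described above reduces to a simple proportionality: for the same grating, order, and imaging geometry, d·sin(θ) = m·λ, so at small angles the fringe spacing on the sensor scales linearly with wavelength. A sketch with hypothetical pixel spacings (the function name and example numbers are illustrative, not from the article):

```python
def estimate_wavelength(ref_wavelength_nm, ref_spacing_px, unknown_spacing_px):
    """Estimate an unknown wavelength from fringe spacings measured in one
    image: d*sin(theta) = m*lambda, so at small angles the fringe spacing
    is proportional to wavelength for the same grating and geometry."""
    return ref_wavelength_nm * unknown_spacing_px / ref_spacing_px
```

For example, with a 650 nm red laser producing fringes 100 px apart and the remote-control fringes 145 px apart, the estimate is 942.5 nm, close to the ~940 nm typical of consumer IR LEDs.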
Integrated Device for Circulating Tumor Cell Capture, Characterization, and Lens-Free Microscopy
2012-08-01
The prototype, as shown in Fig. 1(a), used an Excelsior-532-200-CDRH laser (Spectra Physics; wavelength 532 nm, power 200 mW) as the light source, together with simple Thorlabs optics; Fig. 2(a) showed a wide-FOV image from the demonstration.
NASA Astrophysics Data System (ADS)
Kim, Moon Sung; Lee, Kangjin; Chao, Kaunglin; Lefcourt, Alan; Cho, Byung-Kwan; Jun, Won
We developed a push-broom, line-scan imaging system capable of simultaneous measurements of reflectance and fluorescence. The system allows multitasking inspections for quality and safety attributes of apples owing to its dynamic capability to capture fluorescence and reflectance simultaneously, and its selectivity in multispectral bands. A multitasking image-based inspection system for online applications is suggested, in which a single imaging device can serve a multitude of both safety and quality inspection needs. The presented multitask inspection approach may provide an economically viable means for a number of food processing industries to meet their dynamic and specific inspection and sorting needs in online applications.
Code of Federal Regulations, 2013 CFR
2013-07-01
... system and add-on control device operating limits during the performance test? 63.3546 Section 63.3546... of key parameters of the valve operating system (e.g., solenoid valve operation, air pressure... minimum operating limit for that specific capture device or system of multiple capture devices. The...
Metasurface optics for full-color computational imaging.
Colburn, Shane; Zhan, Alan; Majumdar, Arka
2018-02-01
Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
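A spectrally invariant point spread function allowing reconstruction "with a single digital filter", as described above, is typically realized with a Wiener-type deconvolution. A minimal sketch follows; the paper's actual filter and regularization may differ, and the names here are hypothetical.

```python
import numpy as np

def wiener_deconvolve(image, psf, snr=100.0):
    """Recover an in-focus image from a blurred capture with one Wiener
    filter, valid when the PSF is the same at every wavelength.
    `psf` must have the same shape as `image` and be centred; ifftshift
    moves its peak to the (0, 0) origin expected by the FFT."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

The 1/snr term regularizes frequencies where the PSF transfers little energy, trading residual blur against noise amplification.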
Smartstones: a small e-compass, accelerometer and gyroscope embedded in stones
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Hiller, Priska H.; Wirtz, Stefan; Becker, Kerstin; Iserloh, Thomas; Aberle, Jochen; Casper, Markus C.
2015-04-01
Pebbles or rock fragments influence soil erosion processes in various ways: they can protect the soil, but they can also enhance erosion as soon as they are moved by water and impact the soil. So far, stone-embedded devices to measure these movements have been quite big, up to several decimetres, which does not allow for the analysis of pebbles from the medium and coarse gravel classes. In this study, we used a novel, significantly smaller device called Smartstones. The Smartstone device's dimensions are 55 mm in length and 8 mm in diameter, with an approximately 70 mm long flexible antenna (device developer: SMART-RFID solutions, Rheinberg, Germany). It is powered by two button cells, contains its own data storage and can wait inactive for long periods until it is activated by movement. It communicates via active RFID (radio frequency identification) technology with a Linux gateway, which stores the sensor data in a database after transmission and can handle several devices simultaneously. The device contains a Bosch sensor that measures magnetic flux density, acceleration and rotation, in each case along or around three axes. In our study, the device was used in a laboratory flume (270 cm in length, 5° to 10° slope, approx. 2 cm water level, mean flow velocities between 0.66 and 1 m s-1) in combination with a high-speed camera to capture the movement of the pebbles. The simultaneous use of two capture devices allows for a comparison of the results: movement patterns derived from image analysis and from sensor data analysis. In the device's first software version, all three sensors - acceleration, compass, and gyroscope - were active. The acquisition of all values resulted in a sampling rate of 10 Hz. After the experiments using this setup, analysis of the high-speed images and the device's data showed that the pebble reached rotation velocities beyond 5 rotations per second, even in the relatively short flume and at low water levels. 
Thus, the device produced only sub-Nyquist sampling values and the rotation velocity of the pebble could not be derived correctly using solely the device's data. Consequently, the device's software was adapted by the developers: the second (and current) version of the device only acquires acceleration and compass, as the acquisition of the gyroscope's value does not allow for higher sampling rates. The second version samples every 12 ms. All aforementioned experiments have been repeated using the adapted device. For data analysis, the high-speed camera images were merged with the device data using a MATLAB script. Furthermore, the derived relative pebble orientation - yaw, pitch and roll - is illustrated using a rotated CAD model of the pebble. The pebble's orientation is derived from compass and accelerometer data using sensor fusion and algorithms for tilt compensated compasses. The results show that the device is perfectly able to capture the movement of the pebble such as rotation (including the rotation axis), sliding or saltation. The interacting forces between the pebble and the underground can be calculated from the acceleration data. However, the accelerometer data also showed that the range of the sensor is not sufficiently large: clipping of values occurred. According to present instrument specifications, the sensor is able to capture up to 4 g for each axis but the resulting vectors for acceleration along all three axes showed values greater than 4 g, even up to the theoretical maximum of approximately 6.9 g. Thus, an impact of this strength that only stresses one axis cannot be measured. As a result of this clipping, the derivation of the pebble's absolute position using double integration of acceleration values is associated with flaws. Besides this clipping, the derived position will deviate from the true position for larger distances or longer experiment durations as the noise of the data will be integrated, too. 
Several requirements for the next device version were formulated: the range of the accelerometer will be set to the sensor's maximum of 16 g; the device will be waterproof; and data analysis will include further methods such as Hidden Markov Models or Kalman filtering, since tilt compensation is not actually intended for irregularly moving devices. These techniques are well established for other devices and purposes, such as navigation using GPS. In the near future, the Smartstone device will be used outside the laboratory in natural rills and rill experiments. In these experiments the water is turbid and the pebble will not be visible at all, which rules out use of the high-speed camera. However, the present results showed that the movement of the pebble, together with the forces applied to the bed and the rill's sidewalls, can be captured solely by the Smartstone.
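The sampling and clipping problems reported above follow directly from the Nyquist criterion and the ±4 g per-axis range. The two checks are sketched below; this is an illustration of the reasoning, not the authors' processing code.

```python
import numpy as np

def sampling_ok(rotation_hz, sample_rate_hz):
    """A rotation signal is resolvable only if sampled above the Nyquist
    rate, i.e. more than twice per rotation period."""
    return sample_rate_hz > 2.0 * rotation_hz

def clipped_magnitude(ax, ay, az, range_g=4.0):
    """Combined acceleration magnitude per sample, plus a flag marking
    samples where any axis sits at the sensor limit (so the magnitude,
    which can theoretically reach range_g * sqrt(3), is unreliable)."""
    a = np.stack([ax, ay, az])
    clipped = np.any(np.abs(a) >= range_g, axis=0)
    mag = np.sqrt(np.sum(a ** 2, axis=0))
    return mag, clipped
```

At 10 Hz the first version could not resolve 5 rotations per second, while the 12 ms (≈83 Hz) version can; and with all three axes saturated at 4 g the magnitude reaches 4·√3 ≈ 6.9 g, the theoretical maximum quoted in the abstract.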
NASA Astrophysics Data System (ADS)
Chen, Y. L.
2015-12-01
Measurement technologies for the velocity of river flow are divided into intrusive and nonintrusive methods. Intrusive methods require in-field operations; their measuring processes are time consuming and are likely to cause damage to both operator and instrument. Nonintrusive methods require fewer operators and can reduce instrument damage by avoiding direct contact with the flow. Nonintrusive measurements may use radar or image velocimetry to measure velocities at the water surface. Image velocimetry, such as large-scale particle image velocimetry (LSPIV), accesses not only point velocities but the flow velocities over an area simultaneously. Flow properties over an area hold the promise of providing spatial information on flow fields. This study constructs a mobile system, UAV-LSPIV, which combines an unmanned aerial vehicle (UAV) with LSPIV to measure flows in the field. The mobile system consists of a six-rotor UAV helicopter, a Sony NEX-5T camera, a gimbal, an image transfer device, a ground station and a remote control device. The actuated gimbal helps maintain the camera lens orthogonal to the water surface and reduces the extent to which images are distorted. The image transfer device allows the captured images to be monitored instantly. The operator controls the UAV with the remote control device through the ground station and can retrieve flight data such as flying height and the GPS coordinates of the UAV. The mobile system was then applied to field experiments. The deviation between velocities measured by UAV-LSPIV in the field experiments and by a handheld Acoustic Doppler Velocimeter (ADV) is under 8%. The results of the field experiments suggest that UAV-LSPIV can be effectively applied to surface flow studies.
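The core of an LSPIV measurement is locating the cross-correlation peak between interrogation windows from successive frames; the peak offset is the particle-pattern displacement, which divided by the frame interval gives the surface velocity. A minimal FFT-based sketch (integer-pixel only, no subpixel fit, and not the study's actual code):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows via
    circular cross-correlation computed with FFTs."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # Map wrap-around peaks to negative displacements.
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return dx, dy
```

Production PIV codes add a subpixel (e.g. Gaussian) peak fit and window overlap, but the correlation step is the same.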
Reed, Terrie L; Drozda, Joseph P; Baskin, Kevin M; Tcheng, James; Conway, Karen; Wilson, Natalia; Marinac-Dabic, Danica; Heise, Theodore; Krucoff, Mitchell W
2017-12-01
The Medical Device Epidemiology Network (MDEpiNet) is a public-private partnership (PPP) that provides a platform for collaboration on medical device evaluation and depth of expertise for supporting pilots to capture, exchange and use device information for improving device safety and protecting public health. The MDEpiNet SMART Think Tank, held in February 2013, sought to engage expert stakeholders who were committed to improving the capture of device data, including Unique Device Identification (UDI), in key electronic health information. Prior to the Think Tank there was limited collaboration among stakeholders beyond a few single health care organizations engaged in electronic capture and exchange of device data. The Think Tank resulted in what have become two sustainable multi-stakeholder device data capture initiatives, BUILD and VANGUARD. These initiatives continue to mature within the MDEpiNet PPP structure and are well aligned with the goals outlined in recent FDA-initiated National Medical Device Planning Board and Medical Device Registry Task Force white papers, as well as the vision for the National Evaluation System for health Technology. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Bucher, François-Xavier; Cao, Frédéric; Viard, Clément; Guichard, Frédéric
2014-03-01
We present in this paper a novel capacitive device that stimulates the touchscreen interface of a smartphone (or of any imaging device equipped with a capacitive touchscreen) and synchronizes triggering with the DxO LED Universal Timer to measure shooting time lag and shutter lag according to ISO 15781:2013. The device and protocol extend the time lag measurement beyond the standard by including negative shutter lag, a phenomenon that is more and more commonly found in smartphones. The device is computer-controlled, and this feature, combined with measurement algorithms, makes it possible to automate a large series of captures so as to provide more refined statistical analyses when, for example, the shutter lag of "zero shutter lag" devices is limited by the frame time, as our measurements confirm.
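Shutter lag in such a protocol is the exposure start time minus the touch-stimulus time, and a negative value indicates the saved frame came from a buffer filled before the touch. A sketch over hypothetical timestamps (illustrative only, not the DxO measurement code):

```python
import numpy as np

def shutter_lag_stats(touch_times_s, exposure_start_times_s):
    """Per-capture shutter lag: exposure start minus touch-stimulus time.
    Negative lags correspond to 'negative shutter lag', where the device
    saves a frame captured before the touch from a running buffer."""
    lags = np.asarray(exposure_start_times_s) - np.asarray(touch_times_s)
    return lags.mean(), lags.std()
```

Automating many triggers and aggregating the per-capture lags is what exposes frame-time quantization in "zero shutter lag" devices.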
A novel smartphone ophthalmic imaging adapter: User feasibility studies in Hyderabad, India
Ludwig, Cassie A; Murthy, Somasheila I; Pappuru, Rajeev R; Jais, Alexandre; Myung, David J; Chang, Robert T
2016-01-01
Aim of Study: To evaluate the ability of ancillary health staff to use a novel smartphone imaging adapter system (EyeGo, now known as Paxos Scope) to capture images of sufficient quality to exclude emergent eye findings. Secondary aims were to assess user and patient experiences during image acquisition, interuser reproducibility, and subjective image quality. Materials and Methods: The system captures images using a macro lens and an indirect ophthalmoscopy lens coupled with an iPhone 5S. We conducted a prospective cohort study of 229 consecutive patients presenting to L. V. Prasad Eye Institute, Hyderabad, India. Primary outcome measure was mean photographic quality (FOTO-ED study 1–5 scale, 5 best). 210 patients and eight users completed surveys assessing comfort and ease of use. For 46 patients, two users imaged the same patient's eyes sequentially. For 182 patients, photos taken with the EyeGo system were compared to images taken by existing clinic cameras: a BX 900 slit-lamp with a Canon EOS 40D Digital Camera and an FF 450 plus Fundus Camera with VISUPAC™ Digital Imaging System. Images were graded post hoc by a reviewer blinded to diagnosis. Results: Nine users acquired 719 useable images and 253 videos of 229 patients. Mean image quality was ≥ 4.0/5.0 (able to exclude subtle findings) for all users. 8/8 users and 189/210 patients surveyed were comfortable with the EyeGo device on a 5-point Likert scale. For 21 patients imaged with the anterior adapter by two users, a weighted κ of 0.597 (95% confidence interval: 0.389–0.806) indicated moderate reproducibility. High level of agreement between EyeGo and existing clinic cameras (92.6% anterior, 84.4% posterior) was found. Conclusion: The novel, ophthalmic imaging system is easily learned by ancillary eye care providers, well tolerated by patients, and captures high-quality images of eye findings. PMID:27146928
Arabic word recognizer for mobile applications
NASA Astrophysics Data System (ADS)
Khanna, Nitin; Abdollahian, Golnaz; Brame, Ben; Boutin, Mireille; Delp, Edward J.
2011-03-01
When traveling in a region where the local language is not written using a "Roman alphabet," translating written text (e.g., documents, road signs, or placards) is a particularly difficult problem since the text cannot be easily entered into a translation device or searched using a dictionary. To address this problem, we are developing the "Rosetta Phone," a handheld device (e.g., PDA or mobile telephone) capable of acquiring an image of the text, locating the region (word) of interest within the image, and producing both an audio and a visual English interpretation of the text. This paper presents a system targeted for interpreting words written in Arabic script. The goal of this work is to develop an autonomous, segmentation-free Arabic phrase recognizer, with computational complexity low enough to deploy on a mobile device. A prototype of the proposed system has been deployed on an iPhone with a suitable user interface. The system was tested on a number of noisy images, in addition to the images acquired from the iPhone's camera. It identifies Arabic words or phrases by extracting appropriate features and assigning "codewords" to each word or phrase. On a dictionary of 5,000 words, the system uniquely mapped (word-image to codeword) 99.9% of the words. The system has an 82% recognition accuracy on images of words captured using the iPhone's built-in camera.
Park, Sung Pyo; Siringo, Frank S.; Pensec, Noelle; Hong, In Hwan; Sparrow, Janet; Barile, Gaetano; Tsang, Stephen H.; Chang, Stanley
2015-01-01
BACKGROUND AND OBJECTIVE To compare fundus autofluorescence (FAF) imaging via fundus camera (FC) and confocal scanning laser ophthalmoscope (cSLO). PATIENTS AND METHODS FAF images were obtained with a digital FC (530 to 580 nm excitation) and a cSLO (488 nm excitation). Two authors evaluated correlation of autofluorescence pattern, atrophic lesion size, and image quality between the two devices. RESULTS In 120 eyes, the autofluorescence pattern correlated in 86% of lesions. By lesion subtype, correlation rates were 100% in hemorrhage, 97% in geographic atrophy, 82% in flecks, 75% in drusen, 70% in exudates, 67% in pigment epithelial detachment, 50% in fibrous scars, and 33% in macular hole. The mean lesion size in geographic atrophy was 4.57 ± 2.3 mm2 via cSLO and 3.81 ± 1.94 mm2 via FC (P < .0001). Image quality favored cSLO in 71 eyes. CONCLUSION FAF images were highly correlated between the FC and cSLO. Differences between the two devices were evident in image contrast: multiple image capture and confocal optics yielded higher image contrast with the cSLO, although acquisition and exposure time were longer. PMID:24221461
NASA Astrophysics Data System (ADS)
Saager, Rolf B.; Baldado, Melissa L.; Rowland, Rebecca A.; Kelly, Kristen M.; Durkin, Anthony J.
2018-04-01
With recent proliferation in compact and/or low-cost clinical multispectral imaging approaches and commercially available components, questions remain whether they adequately capture the requisite spectral content of their applications. We present a method to emulate the spectral range and resolution of a variety of multispectral imagers, based on in-vivo data acquired from spatial frequency domain spectroscopy (SFDS). This approach simulates spectral responses over 400 to 1100 nm. Comparing emulated data with full SFDS spectra of in-vivo tissue affords the opportunity to evaluate whether the sparse spectral content of these imagers can (1) account for all sources of optical contrast present (completeness) and (2) robustly separate and quantify sources of optical contrast (crosstalk). We validate the approach over a range of tissue-simulating phantoms, comparing the SFDS-based emulated spectra against measurements from an independently characterized multispectral imager. Emulated results match the imager across all phantoms (<3 % absorption, <1 % reduced scattering). In-vivo test cases (burn wounds and photoaging) illustrate how SFDS can be used to evaluate different multispectral imagers. This approach provides an in-vivo measurement method to evaluate the performance of multispectral imagers specific to their targeted clinical applications and can assist in the design and optimization of new spectral imaging devices.
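The emulation step described above can be illustrated by integrating a full-resolution spectrum against an assumed channel response. The Gaussian band shape, the centre/FWHM parameters, and the normalization below are illustrative assumptions, not the SFDS pipeline itself:

```python
import numpy as np

def emulate_channel(wavelengths, spectrum, center_nm, fwhm_nm):
    """Emulate one channel of a multispectral imager by weighting a
    full-resolution spectrum with a Gaussian band response (illustrative)."""
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    response = np.exp(-0.5 * ((wavelengths - center_nm) / sigma) ** 2)
    # Response-weighted mean: a flat spectrum of 1.0 maps to a value of 1.0
    return np.sum(response * spectrum) / np.sum(response)

wl = np.linspace(400.0, 1100.0, 701)   # nm grid spanning the stated SFDS range
flat = np.ones_like(wl)
val = emulate_channel(wl, flat, center_nm=650.0, fwhm_nm=40.0)
```

Repeating this per channel yields the sparse spectra whose completeness and crosstalk can then be compared against the full SFDS measurement.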
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side to side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another potential approach under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
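The head-tracked cropping step described above can be sketched as a window cut from the wide-field mosaic, with horizontal wrap-around for a full 360-degree map. The panorama layout, field-of-view mapping, and function name below are hypothetical assumptions, not the prototype's actual design:

```python
import numpy as np

def crop_view(panorama, yaw_deg, pitch_deg, out_w, out_h,
              hfov_deg=360.0, vfov_deg=90.0):
    """Cut a display-sized window out of a wide-field-of-view mosaic.

    The window centre follows head yaw/pitch; columns wrap around
    (360-degree panorama), rows are clamped to the image."""
    h, w = panorama.shape[:2]
    cx = int((yaw_deg % hfov_deg) / hfov_deg * w)
    cy = int(np.clip((pitch_deg + vfov_deg / 2) / vfov_deg, 0, 1) * (h - 1))
    xs = np.arange(cx - out_w // 2, cx + out_w // 2) % w   # horizontal wrap
    y0 = int(np.clip(cy - out_h // 2, 0, h - out_h))
    return panorama[y0:y0 + out_h][:, xs]

pano = np.arange(100 * 400).reshape(100, 400)  # stand-in 360-degree mosaic
view = crop_view(pano, yaw_deg=180.0, pitch_deg=0.0, out_w=80, out_h=50)
```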
Imaging Beyond What Man Can See
NASA Technical Reports Server (NTRS)
May, George; Mitchell, Brian
2004-01-01
Three lightweight, portable hyperspectral sensor systems have been built that capture energy from 200 to 1700 nanometers (ultraviolet to shortwave infrared). The sensors incorporate a line scanning technique that requires no relative movement between the target and the sensor. This unique capability, combined with portability, opens up new uses of hyperspectral imaging for laboratory and field environments. Each system has a GUI-based software package that allows the user to communicate with the imaging device for setting spatial resolution, spectral bands and other parameters. NASA's Space Partnership Development has sponsored these innovative developments and their application to human problems on Earth and in space. Hyperspectral datasets have been captured and analyzed in numerous areas including precision agriculture, food safety, biomedical imaging, and forensics. Discussion of research results will include real-time detection of food contaminants, mold and toxin research on corn, identifying counterfeit documents, non-invasive wound monitoring and aircraft applications. Future research will include development of a thermal infrared hyperspectral sensor that will support natural resource applications on Earth and thermal analyses during long-duration space flight. This paper incorporates a variety of disciplines and imaging technologies that have been linked together to allow the expansion of remote sensing across both traditional and non-traditional boundaries.
Pan, Bing; Jiang, Tianyun; Wu, Dafang
2014-11-01
In thermomechanical testing of hypersonic materials and structures, direct observation and quantitative strain measurement of the front surface of a test specimen directly exposed to severe aerodynamic heating has been considered a very challenging task. In this work, a novel quartz infrared heating device with an observation window is designed to reproduce the transient thermal environment experienced by hypersonic vehicles. The specially designed experimental system allows the capture of the test article's surface images at various temperatures using an optical system outfitted with a bandpass filter. The captured images are post-processed by digital image correlation to extract full-field thermal deformation. To verify the viability and accuracy of the established system, thermal strains of a chromium-nickel austenite stainless steel sample heated from room temperature up to 600 °C were determined. The preliminary results indicate that the air disturbance between the camera and the specimen due to heat haze induces apparent distortions in the recorded images and large errors in the measured strains, but the average values of the measured strains are accurate enough. Limitations and further improvements of the proposed technique are discussed.
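At the core of digital image correlation is subset matching, commonly done with zero-mean normalized cross-correlation between a reference subset and candidate windows in the deformed image. The following is a minimal, integer-pixel NumPy sketch of that criterion (an illustration of the principle, not the authors' implementation, which would add sub-pixel refinement):

```python
import numpy as np

def ncc_displacement(ref, cur, y0, x0, size, search):
    """Integer-pixel displacement of a square subset via zero-mean
    normalized cross-correlation over a small search window."""
    sub = ref[y0:y0 + size, x0:x0 + size]
    sub = sub - sub.mean()
    best, best_uv = -np.inf, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            win = cur[y0 + dv:y0 + dv + size, x0 + du:x0 + du + size]
            win = win - win.mean()
            score = np.sum(sub * win) / (
                np.linalg.norm(sub) * np.linalg.norm(win) + 1e-12)
            if score > best:
                best, best_uv = score, (dv, du)
    return best_uv

rng = np.random.default_rng(0)
ref = rng.random((40, 40))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))  # rigid shift: 2 px down, 3 right
dv, du = ncc_displacement(ref, cur, y0=10, x0=10, size=12, search=4)
```

Applied over a grid of subsets, such displacements give the full-field deformation from which thermal strains are computed.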
Calibration of Action Cameras for Photogrammetric Purposes
Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo
2014-01-01
The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, that can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898
Calibration of action cameras for photogrammetric purposes.
Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo
2014-09-18
The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, that can provide still images up to 12 Mp and video up to 8 Mp resolution.
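The distortion that makes such short-focal, wide-angle lenses difficult to use photogrammetrically is typically modelled with the radial terms a calibration estimates. The NumPy sketch below shows a two-term radial model and its inversion by fixed-point iteration; it illustrates the principle only and is not the OpenCV-based software described above (the coefficients are invented):

```python
import numpy as np

def distort_points(pts, k1, k2):
    """Apply a two-term radial distortion model (the r^2/r^4 terms a
    camera calibration estimates) to normalized image points (N, 2)."""
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort_points(pts, k1, k2, iters=10):
    """Invert the distortion by fixed-point iteration, a common way
    undistortion is implemented in practice."""
    und = pts.copy()
    for _ in range(iters):
        r2 = np.sum(und ** 2, axis=1, keepdims=True)
        und = pts / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return und

pts = np.array([[0.3, -0.2], [0.0, 0.5]])
k1, k2 = -0.25, 0.05   # hypothetical barrel-distortion coefficients
recovered = undistort_points(distort_points(pts, k1, k2), k1, k2)
```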
Rhoads, Daniel D.; Mathison, Blaine A.; Bishop, Henry S.; da Silva, Alexandre J.; Pantanowitz, Liron
2016-01-01
Context Microbiology laboratories are continually pursuing means to improve the quality, rapidity, and efficiency of specimen analysis in the face of limited resources. One means by which to achieve these improvements is through the remote analysis of digital images. Telemicrobiology enables the remote interpretation of images of microbiology specimens. To date, the practice of clinical telemicrobiology has not been thoroughly reviewed. Objective Identify the various methods that can be employed for telemicrobiology, including emerging technologies that may provide value to the clinical laboratory. Data Sources Peer-reviewed literature, conference proceedings, meeting presentations, and expert opinions pertaining to telemicrobiology have been evaluated. Results A number of modalities have been employed for telemicroscopy, including static capture techniques, whole slide imaging, video telemicroscopy, mobile devices, and hybrid systems. Telemicrobiology has been successfully implemented for applications including routine primary diagnosis, expert teleconsultation, and proficiency testing. Emerging areas include digital culture plate reading, mobile health applications, and computer-augmented analysis of digital images. Conclusions Static image capture techniques have to date been the most widely used modality for telemicrobiology, despite the fact that other newer technologies are available and may produce better-quality interpretations. Increased adoption of telemicrobiology offers added value, quality, and efficiency to the clinical microbiology laboratory. PMID:26317376
Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki
2017-11-01
Light field imaging is an emerging technique that is employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose a concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), which is a device that converts spectra information of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place the OPCF having the green spectral sensitivity onto the micro-lens array of the conventional light field camera. The OPCF allows us to acquire the green spectra information only at the center viewpoint with the full resolution of the image sensor. In contrast, the optical system of the light field camera in our imaging system captures the other spectra information (red and blue) at multiple viewpoints (sub-aperture images) but with low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at a high spatial resolution as well as the direction information of the incoming light. By exploiting these advantages of our imaging system, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms other previous methods.
Use of technology in children’s dietary assessment
Boushey, CJ; Kerr, DA; Wright, J; Lutes, KD; Ebert, DS; Delp, EJ
2010-01-01
Background Information on dietary intake provides some of the most valuable insights for mounting intervention programmes for the prevention of chronic diseases. With the growing concern about adolescent overweight, the need to accurately measure diet becomes imperative. Assessment among adolescents is problematic as this group has irregular eating patterns and they have less enthusiasm for recording food intake. Subjects/Methods We used qualitative and quantitative techniques among adolescents to assess their preferences for dietary assessment methods. Results Dietary assessment methods using technology, for example, a personal digital assistant (PDA) or a disposable camera, were preferred over the pen and paper food record. Conclusions There was a strong preference for using methods that incorporate technology such as capturing images of food. This suggests that for adolescents, dietary methods that incorporate technology may improve cooperation and accuracy. Current computing technology includes higher resolution images, improved memory capacity and faster processors that allow small mobile devices to process information not previously possible. Our goal is to develop, implement and evaluate a mobile device (for example, PDA, mobile phone) food record that will translate to an accurate account of daily food and nutrient intake among adolescents. This mobile computing device will include digital images, a nutrient database and image analysis for identification and quantification of food consumption. Mobile computing devices provide a unique vehicle for collecting dietary information that reduces the burden on record keepers. Images of food can be marked with a variety of input methods that link the item for image processing and analysis to estimate the amount of food. Images before and after the foods are eaten can estimate the amount of food consumed. The initial stages and potential of this project will be described. PMID:19190645
Use of technology in children's dietary assessment.
Boushey, C J; Kerr, D A; Wright, J; Lutes, K D; Ebert, D S; Delp, E J
2009-02-01
Information on dietary intake provides some of the most valuable insights for mounting intervention programmes for the prevention of chronic diseases. With the growing concern about adolescent overweight, the need to accurately measure diet becomes imperative. Assessment among adolescents is problematic as this group has irregular eating patterns and they have less enthusiasm for recording food intake. We used qualitative and quantitative techniques among adolescents to assess their preferences for dietary assessment methods. Dietary assessment methods using technology, for example, a personal digital assistant (PDA) or a disposable camera, were preferred over the pen and paper food record. There was a strong preference for using methods that incorporate technology such as capturing images of food. This suggests that for adolescents, dietary methods that incorporate technology may improve cooperation and accuracy. Current computing technology includes higher resolution images, improved memory capacity and faster processors that allow small mobile devices to process information not previously possible. Our goal is to develop, implement and evaluate a mobile device (for example, PDA, mobile phone) food record that will translate to an accurate account of daily food and nutrient intake among adolescents. This mobile computing device will include digital images, a nutrient database and image analysis for identification and quantification of food consumption. Mobile computing devices provide a unique vehicle for collecting dietary information that reduces the burden on record keepers. Images of food can be marked with a variety of input methods that link the item for image processing and analysis to estimate the amount of food. Images before and after the foods are eaten can estimate the amount of food consumed. The initial stages and potential of this project will be described.
Code of Federal Regulations, 2010 CFR
2010-07-01
... system and add-on control device operating limits during the performance test? 63.3546 Section 63.3546... device or system of multiple capture devices. The average duct static pressure is the maximum operating... Add-on Controls Option § 63.3546 How do I establish the emission capture system and add-on control...
Code of Federal Regulations, 2011 CFR
2011-07-01
... system and add-on control device operating limits during the performance test? 63.3546 Section 63.3546... device or system of multiple capture devices. The average duct static pressure is the maximum operating... Add-on Controls Option § 63.3546 How do I establish the emission capture system and add-on control...
Imaging and controlling plasmonic interference fields at buried interfaces
NASA Astrophysics Data System (ADS)
Lummen, Tom T. A.; Lamb, Raymond J.; Berruto, Gabriele; Lagrange, Thomas; Dal Negro, Luca; García de Abajo, F. Javier; McGrouther, Damien; Barwick, B.; Carbone, F.
2016-10-01
Capturing and controlling plasmons at buried interfaces with nanometre and femtosecond resolution has yet to be achieved and is critical for next generation plasmonic devices. Here we use light to excite plasmonic interference patterns at a buried metal-dielectric interface in a nanostructured thin film. Plasmons are launched from a photoexcited array of nanocavities and their propagation is followed via photon-induced near-field electron microscopy (PINEM). The resulting movie directly captures the plasmon dynamics, allowing quantification of their group velocity at ~0.3 times the speed of light, consistent with our theoretical predictions. Furthermore, we show that the light polarization and nanocavity design can be tailored to shape transient plasmonic gratings at the nanoscale. This work, demonstrating dynamical imaging with PINEM, paves the way for the femtosecond and nanometre visualization and control of plasmonic fields in advanced heterostructures based on novel two-dimensional materials such as graphene, MoS2, and ultrathin metal films.
Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking
NASA Astrophysics Data System (ADS)
Antonya, C.
2017-12-01
Optical tracking of users and of various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capturing devices and image processing algorithms. The returned data contain mainly point clouds, coordinates of markers or coordinates of points of interest. These data can be used for retrieving information related to the geometry of the objects, but also to extract parameters for the analytical model of the system, useful in a variety of computer-aided engineering simulations. The parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the markers’ positions. The least squares method was used for fitting the data to different geometrical shapes (ellipse, circle, plane) and for obtaining the position and orientation of revolute joints.
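For the circle case, the least-squares fit of tracked marker positions has a closed-form algebraic solution (the Kåsa formulation), whose centre and radius directly give the revolute joint's axis position in the tracking plane. A NumPy sketch on synthetic marker data; the function name and test geometry are illustrative, and the paper's exact fitting variant is not specified here:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit.

    Solves  x^2 + y^2 = 2*a*x + 2*b*y + c  for centre (a, b) and
    radius sqrt(c + a^2 + b^2), from tracked marker positions (N, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

# Synthetic marker track: quarter revolution about (2, -1), radius 0.5
t = np.linspace(0.0, np.pi / 2, 50)
track = np.column_stack([2 + 0.5 * np.cos(t), -1 + 0.5 * np.sin(t)])
center, radius = fit_circle(track)
```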
Imaging and controlling plasmonic interference fields at buried interfaces
Lummen, Tom T. A.; Lamb, Raymond J.; Berruto, Gabriele; LaGrange, Thomas; Dal Negro, Luca; García de Abajo, F. Javier; McGrouther, Damien; Barwick, B.; Carbone, F.
2016-01-01
Capturing and controlling plasmons at buried interfaces with nanometre and femtosecond resolution has yet to be achieved and is critical for next generation plasmonic devices. Here we use light to excite plasmonic interference patterns at a buried metal–dielectric interface in a nanostructured thin film. Plasmons are launched from a photoexcited array of nanocavities and their propagation is followed via photon-induced near-field electron microscopy (PINEM). The resulting movie directly captures the plasmon dynamics, allowing quantification of their group velocity at ∼0.3 times the speed of light, consistent with our theoretical predictions. Furthermore, we show that the light polarization and nanocavity design can be tailored to shape transient plasmonic gratings at the nanoscale. This work, demonstrating dynamical imaging with PINEM, paves the way for the femtosecond and nanometre visualization and control of plasmonic fields in advanced heterostructures based on novel two-dimensional materials such as graphene, MoS2, and ultrathin metal films. PMID:27725670
Deep learning application: rubbish classification with aid of an android device
NASA Astrophysics Data System (ADS)
Liu, Sijiang; Jiang, Bo; Zhan, Jie
2017-06-01
Deep learning is currently a very hot topic in pattern recognition and artificial intelligence research. Aiming at the practical problem that people often do not know the correct category to which a piece of rubbish belongs, and building on the powerful image classification ability of deep learning methods, we have designed a prototype system to help users classify rubbish. First, the CaffeNet model was adopted for training our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Second, an Android app was developed for users to capture images of unclassified rubbish, upload them to the web server for backstage analysis, and retrieve the feedback, so that users can conveniently obtain classification guidance on an Android device. Tests on our prototype system show that an image of a single type of rubbish in its original shape can be used reliably to judge its classification, while an image containing several kinds of rubbish, or rubbish with a changed shape, may fail to help users decide its classification. However, the system still shows promise as an aid to rubbish classification if the network training strategy can be optimized further.
A real-time 3D end-to-end augmented reality system (and its representation transformations)
NASA Astrophysics Data System (ADS)
Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois
2016-09-01
The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.
Simultaneous immersion Mirau interferometry.
Lyulko, Oleksandra V; Randers-Pehrson, Gerhard; Brenner, David J
2013-05-01
A novel technique for label-free imaging of live biological cells in aqueous medium that is insensitive to ambient vibrations is presented. This technique is a spin-off from previously developed immersion Mirau interferometry. Both approaches utilize a modified Mirau interferometric attachment for a microscope objective that can be used both in air and in immersion mode, when the device is submerged in cell medium and has its internal space filled with liquid. Immersion Mirau interferometry involves capturing a series of images sequentially, so the resulting images are potentially distorted by ambient vibrations. Overcoming these serial-acquisition challenges, simultaneous immersion Mirau interferometry incorporates polarizing elements into the optics to allow simultaneous acquisition of two interferograms. The system design and production are described, and images produced with the developed techniques are presented.
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional methods of linear-based imaging limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters on the mirror, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field data can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. The generation of virtual, interpolated, and constructed data is also discussed.
Particle Capture Devices and Methods of Use Thereof
NASA Technical Reports Server (NTRS)
Voldman, Joel (Inventor); Skelley, Alison M. (Inventor); Kirak, Oktay (Inventor); Jaenisch, Rudolf (Inventor)
2015-01-01
The present invention provides a device and methods of use thereof in microscale particle capture and particle pairing. This invention provides a particle patterning device, which mechanically traps individual particles within the first chambers of capture units, transfers the particles to the second chambers of opposing capture units, and traps a second type of particle in the same second chamber. The device and methods allow for high-yield assaying of trapped cells, high-yield fusion of trapped, paired cells, controlled binding of particles to cells, and specific chemical reactions between particle interfaces and particle contents. The device and method provide a means of identifying the particle population and a facile route to particle collection.
How Willing Are Adolescents to Record Their Dietary Intake? The Mobile Food Record
Harray, Amelia J; Kerr, Deborah Anne; Paterson, Stacey; Aflague, Tanisha; Bosch Ruiz, Marc; Ahmad, Ziad; Delp, Edward J
2015-01-01
Background Accurately assessing the diets of children and adolescents can be problematic. Use of technologies, such as mobile apps designed to capture food and beverages consumed at eating occasions with images taken using device-embedded cameras, may address many of the barriers to gathering accurate dietary intake data from adolescents. Objective The objectives of this study were to assess the willingness of adolescents to take images of food and beverages at their eating occasions using a novel mobile food record (mFR) and to evaluate the usability of the user confirmation component of the mFR app, referred to as the “review process.” Methods Mixed methods combining quantitative and qualitative protocols were used in this study. Adolescents (11-15-year olds) attending a summer camp were recruited to participate in the study. First, the participants were asked to take images of foods and beverages consumed as meals and snacks for 2 consecutive days using the mFR app running on an iPhone and the number of images taken was noted. This was followed by focus group sessions to evaluate usability, which was analyzed by content and themes. After using the mFR, a think-aloud method was used to evaluate the usability of the mFR method for reviewing system-identified foods (ie, the review process). A usability questionnaire was administered at the end of all activities. Results The mFR was accepted by the majority of the 24 boys and 17 girls (n=41) but varied according to gender and eating occasion. Girls were significantly more likely than boys to capture images of their eating occasions (Fisher exact test, P=.03). Participants were more likely to take images of their breakfasts (90%, 36/40) and lunches (90%, 72/80) and least likely to capture afternoon and evening snacks, 54% (43/80) and 40% (32/80), respectively. The major themes from the focus groups with regard to using the mFR were games, rewards, and the need to know more about why they were using the app. 
Results of the usability questionnaire indicated that including a game component would be important to increase willingness to use the mFR, and a high majority of the participants indicated a willingness to use the mFR for 7 days or more. The image review process was found to be easy to use except for some confusion with overlapping markers on the screen. Conclusions The adolescents’ experiences with and feedback about the mFR highlighted the importance of increased training, reminders, entertainment (eg, games), and training with practice in using the device to capture complete dietary intake as part of their active lifestyles. PMID:26024996
Meng, Xin; Huang, Huachuan; Yan, Keding; Tian, Xiaolin; Yu, Wei; Cui, Haoyang; Kong, Yan; Xue, Liang; Liu, Cheng; Wang, Shouyu
2016-12-20
In order to realize high contrast imaging with portable devices for potential mobile healthcare, we demonstrate a hand-held smartphone based quantitative phase microscope using the transport of intensity equation method. With a cost-effective illumination source and compact microscope system, multi-focal images of samples can be captured by the smartphone's camera via manual focusing. Phase retrieval is performed using a self-developed Android application, which calculates sample phases from multi-plane intensities via solving the Poisson equation. We test the portable microscope using a random phase plate with known phases, and to further demonstrate its performance, a red blood cell smear, a Pap smear and monocot root and broad bean epidermis sections are also successfully imaged. Considering its advantages as an accurate, high-contrast, cost-effective and field-portable device, the smartphone based hand-held quantitative phase microscope is a promising tool which can be adopted in the future in remote healthcare and medical diagnosis.
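Under the simplifying assumption of near-uniform intensity, the transport of intensity equation reduces to a Poisson equation linking the axial intensity derivative to the Laplacian of the phase, which can be solved with FFTs. The NumPy sketch below illustrates that standard reduction only; it is not the app's actual solver, and all parameters are invented:

```python
import numpy as np

def tie_phase(i_minus, i_plus, dz, wavelength, pixel):
    """Recover phase from two defocused intensity images via the TIE,
    assuming near-uniform intensity so the TIE reduces to a Poisson
    equation solved in Fourier space (illustrative sketch)."""
    k = 2.0 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2.0 * dz)          # axial derivative dI/dz
    rhs = -k * didz / np.clip(0.5 * (i_plus + i_minus), 1e-9, None)
    ny, nx = rhs.shape
    fy = np.fft.fftfreq(ny, d=pixel) * 2.0 * np.pi
    fx = np.fft.fftfreq(nx, d=pixel) * 2.0 * np.pi
    lap = -(fx[None, :] ** 2 + fy[:, None] ** 2)    # Laplacian in Fourier space
    lap[0, 0] = 1.0            # zero frequency: the phase offset is arbitrary
    phi_hat = np.fft.fft2(rhs) / lap
    phi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(phi_hat))

# Identical defocused images imply zero axial derivative, hence flat phase
phase = tie_phase(np.ones((64, 64)), np.ones((64, 64)), dz=1e-6,
                  wavelength=550e-9, pixel=1e-6)
```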
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of FPGAs' low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL language and the synchronous design method are utilized to perform a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. The contrast experiments of various fusion algorithms show that preferable image quality of heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
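The three fusion rules compared above operate pixel-wise and are easy to prototype in software before committing them to VHDL. A NumPy sketch of the same rules (an illustration, not the RTL implementation; the function name and sample values are invented):

```python
import numpy as np

def fuse(visible, infrared, mode="weighted", w=0.5):
    """Pixel-wise fusion rules matching the three algorithms compared in
    the text: gray-scale weighted averaging, maximum and minimum selection."""
    if mode == "weighted":
        return w * visible + (1.0 - w) * infrared
    if mode == "max":
        return np.maximum(visible, infrared)
    if mode == "min":
        return np.minimum(visible, infrared)
    raise ValueError(mode)

# Toy 1x2 visible and infrared frames
vis = np.array([[10.0, 200.0]])
ir = np.array([[50.0, 100.0]])
```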
Design of integrated eye tracker-display device for head mounted systems
NASA Astrophysics Data System (ADS)
David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.
2009-08-01
We propose an Eye Tracker/Display system, based on a novel, dual function device termed ETD, which allows sharing the optical paths of the Eye tracker and the display and on-chip processing. The proposed ETD design is based on a CMOS chip combining a Liquid-Crystal-on-Silicon (LCoS) micro-display technology with near infrared (NIR) Active Pixel Sensor imager. The ET operation allows capturing the Near IR (NIR) light, back-reflected from the eye's retina. The retinal image is then used for the detection of the current direction of eye's gaze. The design of the eye tracking imager is based on the "deep p-well" pixel technology, providing low crosstalk while shielding the active pixel circuitry, which serves the imaging and the display drivers, from the photo charges generated in the substrate. The use of the ETD in the HMD Design enables a very compact design suitable for Smart Goggle applications. A preliminary optical, electronic and digital design of the goggle and its associated ETD chip and digital control, are presented.
Public health advocacy in action: the case of unproven breast cancer screening in Australia.
Johnson, Rebecca S; Croager, Emma J; Kameron, Caitlin B; Pratt, Iain S; Vreugdenburg, Thomas D; Slevin, Terry
2016-09-30
In recent years, nonmammographic breast imaging devices, such as thermography, electrical impedance scanning and elastography, have been promoted directly to consumers, which has captured the attention of governments, researchers and health organisations. These devices are not supported by evidence and risk undermining existing mammographic breast cancer screening services. During a 5-year period, Cancer Council Western Australia (CCWA) used strategic research combined with legal, policy and media advocacy to contest claims that these devices were proven alternatives to mammography for breast cancer screening. The campaign was successful because it had input from people with public health, academic, clinical and legal backgrounds, and took advantage of existing legal and regulatory avenues. CCWA's experience provides a useful advocacy model for public health practitioners who are concerned about unsafe consumer products, unproven medical devices, and misleading health information and advertising.
A method to perform a fast fourier transform with primitive image transformations.
Sheridan, Phil
2007-05-01
The Fourier transform is one of the most important transformations in image processing. A major component of this influence comes from the ability to implement it efficiently on a digital computer. This paper describes a new methodology to perform a fast Fourier transform (FFT). This methodology emerges from considerations of the natural physical constraints imposed by image capture devices (camera/eye). The novel aspects of the specific FFT method described include: 1) a bit-wise reversal re-grouping operation of the conventional FFT is replaced by the use of lossless image rotation and scaling and 2) the usual arithmetic operations of complex multiplication are replaced with integer addition. The significance of the FFT presented in this paper is introduced by extending a discrete and finite image algebra, named Spiral Honeycomb Image Algebra (SHIA), to a continuous version, named SHIAC.
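For contrast with the paper's rotation-and-scaling approach, the conventional bit-reversal regrouping it replaces looks like this in a textbook iterative radix-2 Cooley-Tukey FFT (a standard sketch, not the SHIA/SHIAC method):

```python
import numpy as np

def fft_radix2(x):
    """Iterative radix-2 Cooley-Tukey FFT. The first loop is the
    conventional bit-reversal permutation; the second performs the
    butterfly stages with complex twiddle factors."""
    x = np.asarray(x, dtype=complex).copy()
    n = x.size
    assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # butterfly stages
    m = 2
    while m <= n:
        w_m = np.exp(-2j * np.pi / m)
        for k in range(0, n, m):
            w = 1.0 + 0j
            for t in range(m // 2):
                u = x[k + t]
                v = w * x[k + t + m // 2]
                x[k + t] = u + v
                x[k + t + m // 2] = u - v
                w *= w_m
        m *= 2
    return x
```

The paper's contribution is to replace the bit-reversal step with lossless rotation and scaling on the spiral honeycomb lattice, and the complex multiplications with integer additions; neither is shown here.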
Sky Radiance Distributions for Thermal Imaging Backgrounds.
1987-12-01
background noise limited system. In infrared devices we have a spectral discrimination which is due to the spectral response of the detector /filter...cannot apply the central limit theorem [Ref.]- because the detector can capture only a few shots of the cloud form and the characteristics of the...objects most infrared systems can be used as detectors or target designators. Since infrared systems are passive the advantages of such systems are enormous
Forensic print extraction using 3D technology and its processing
NASA Astrophysics Data System (ADS)
Rajeev, Srijith; Shreyas, Kamath K. M.; Panetta, Karen; Agaian, Sos S.
2017-05-01
Biometric evidence plays a crucial role in criminal scene analysis. Forensic prints can be extracted from any solid surface such as firearms, doorknobs, carpets and mugs. Prints such as fingerprints, palm prints, footprints and lip-prints can be classified into patent, latent, and three-dimensional plastic prints. Traditionally, law enforcement officers capture these forensic traits using an electronic device or extract them manually, and save the data electronically using special scanners. The reliability and accuracy of the method depends on the ability of the officer or the electronic device to extract and analyze the data. Furthermore, the 2-D acquisition and processing system is laborious and cumbersome. This can lead to the increase in false positive and true negative rates in print matching. In this paper, a method and system to extract forensic prints from any surface, irrespective of its shape, is presented. First, a suitable 3-D camera is used to capture images of the forensic print, and then the 3-D image is processed and unwrapped to obtain 2-D equivalent biometric prints. Computer simulations demonstrate the effectiveness of using 3-D technology for biometric matching of fingerprints, palm prints, and lip-prints. This system can be further extended to other biometric and non-biometric modalities.
Design and implementation of modular home security system with short messaging system
NASA Astrophysics Data System (ADS)
Budijono, Santoso; Andrianto, Jeffri; Axis Novradin Noor, Muhammad
2014-03-01
Today we live in the 21st century, where crime is increasing and everyone wants to secure the assets in their home. In this situation, users need a system with advanced technology so that they need not worry when away from home. The purpose of this design is therefore to provide a home security device that quickly sends information to the user's GSM (Global System for Mobile) mobile device using SMS (Short Messaging System), and that can also be activated and deactivated by SMS. The modular design of this home security system makes its capability expandable by adding more sensors to the system. The hardware of this system has been designed using an ATmega328 microcontroller, a PIR (Passive Infra Red) motion sensor as the primary sensor for motion detection, a camera for capturing images, a GSM module for sending and receiving SMS, and a buzzer for the alarm. For the software, the system uses the Arduino IDE for the Arduino and PuTTY for testing the connection programming of the GSM module. This home security system can monitor the home area surrounding the PIR sensor and send SMS, save images captured by the camera, and deter intruders by turning on the buzzer when trespassing in the surrounding area is detected by the PIR sensor. The modular home security system has been tested and successfully detects human movement.
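The alarm path described above (PIR trigger, then SMS, image capture and buzzer) can be sketched as event logic with the hardware calls injected so it can be exercised off-device. All names here are illustrative, not the authors' firmware:

```python
def on_motion(armed, send_sms, capture_image, sound_buzzer):
    """Alarm-path logic: when the system is armed and the PIR sensor
    fires, notify the owner by SMS, save a camera frame and raise the
    buzzer. Hardware actions are passed in as callables (hypothetical
    names) so the decision logic can be unit-tested without hardware."""
    if not armed:
        return []                      # disarmed: ignore the trigger
    actions = []
    send_sms("Motion detected at home")
    actions.append("sms")
    capture_image()
    actions.append("image")
    sound_buzzer()
    actions.append("buzzer")
    return actions
```

On the real device these callables would wrap the GSM module's AT commands, the camera driver and a GPIO pin; the arm/disarm flag would itself be toggled by incoming SMS.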
Code of Federal Regulations, 2014 CFR
2014-07-01
... capture system and add-on control device operating limits during the performance test? 63.4966 Section 63... system and add-on control device operating limits during the performance test? During the performance... outlet gas temperature is the maximum operating limit for your condenser. (e) Emission capture system...
Code of Federal Regulations, 2012 CFR
2012-07-01
... capture system and add-on control device operating limits during the performance test? 63.4966 Section 63... system and add-on control device operating limits during the performance test? During the performance... outlet gas temperature is the maximum operating limit for your condenser. (e) Emission capture system...
The 3D scanner prototype utilize object profile imaging using line laser and octave software
NASA Astrophysics Data System (ADS)
Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus
2016-11-01
A three-dimensional scanner, or 3D scanner, is a device for reconstructing a real object into digital form on a computer. 3D scanning is a technology still being developed, especially in developed countries, where current 3D scanner devices are advanced but carry very high prices. This study is basically a simple prototype of a 3D scanner with a very low investment cost. The 3D scanner prototype consists of a webcam, a rotating-desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects with the same radius from their center point (object pivot). Scanning is performed by imaging the object profile with the line laser, which is then captured by the camera and processed on a computer (image processing) using Octave software. For each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so that over one full turn multiple images covering every side are obtained. The profile in each image is then extracted in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard, called a gage block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstructed object against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
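The per-frame processing described above — locating the laser line in each camera image and rotating the extracted profile by the turntable angle — can be sketched as follows. The brightness threshold and pixel-to-millimeter factor are assumed placeholders, not values from the paper:

```python
import numpy as np

def extract_profile(frame):
    """Return, for each image row, the column of the brightest pixel
    (the laser line), or NaN where the line is absent. The brightness
    cutoff of 50 is an illustrative assumption."""
    frame = frame.astype(np.float32)
    cols = np.argmax(frame, axis=1).astype(np.float32)
    cols[frame.max(axis=1) < 50] = np.nan
    return cols

def profiles_to_points(profiles, deg_step, px_to_mm):
    """Convert per-angle line profiles into 3-D points by rotating each
    radial profile by the turntable angle (cylindrical reconstruction)."""
    pts = []
    for i, prof in enumerate(profiles):
        theta = np.deg2rad(i * deg_step)
        for z, c in enumerate(prof):
            if not np.isnan(c):
                r = c * px_to_mm          # radial distance from the pivot
                pts.append((r * np.cos(theta), r * np.sin(theta), z * px_to_mm))
    return np.array(pts)
```

Calibration against the gage block would fix `px_to_mm`; the percentage errors reported in the abstract are computed after this conversion.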
Image-Capture Devices Extend Medicine's Reach
NASA Technical Reports Server (NTRS)
2009-01-01
Johnson Space Center, Henry Ford Hospital in Detroit, and Houston-based Wyle Laboratories collaborated on NASA's Advanced Diagnostic Ultrasound in Microgravity (ADUM) experiment, which developed revolutionary medical ultrasound diagnostic techniques for long-distance use. Mediphan, a Canadian company with U.S. operations in Springfield, New Jersey, drew on NASA expertise to create frame-grabber and data-archiving technology that enables ultrasound users with minimal training to send diagnostic-quality ultrasound images and video to medical professionals via the Internet in near real time, allowing patients as varied as professional athletes, Olympians, and mountain climbers to receive medical attention as soon as it is needed.
Autebert, Julien; Coudert, Benoit; Champ, Jérôme; Saias, Laure; Guneri, Ezgi Tulukcuoglu; Lebofsky, Ronald; Bidard, François-Clément; Pierga, Jean-Yves; Farace, Françoise; Descroix, Stéphanie; Malaquin, Laurent; Viovy, Jean-Louis
2015-05-07
A new generation of the Ephesia cell capture technology optimized for CTC capture and genetic analysis is presented, characterized in depth and compared with the CellSearch system as a reference. This technology uses magnetic particles bearing tumour-cell specific EpCAM antibodies, self-assembled in a regular array in a microfluidic flow cell. 48,000 high aspect-ratio columns are generated using a magnetic field in a high throughput (>3 ml h(-1)) device and act as sieves to specifically capture the cells of interest through antibody-antigen interactions. Using this device optimized for CTC capture and analysis, we demonstrated the capture of epithelial cells with capture efficiency above 90% for concentrations as low as a few cells per ml. We showed the high specificity of capture with only 0.26% of non-epithelial cells captured for concentrations above 10 million cells per ml. We investigated the capture behavior of cells in the device, and correlated the cell attachment rate with the EpCAM expression on the cell membranes for six different cell lines. We developed and characterized a two-step blood processing method to allow for rapid processing of 10 ml blood tubes in less than 4 hours, and showed a capture rate of 70% for as low as 25 cells spiked in 10 ml blood tubes, with less than 100 contaminating hematopoietic cells. Using this device and procedure, we validated our system on patient samples using an automated cell immunostaining procedure and a semi-automated cell counting method. Our device captured CTCs in 75% of metastatic prostate cancer patients and 80% of metastatic breast cancer patients, and showed similar or better results than the CellSearch device in 10 out of 13 samples. Finally, we demonstrated the possibility of detecting cancer-related PIK3CA gene mutation in 20 cells captured in the chip with a good correlation between the cell count and the quantitation value Cq of the post-capture qPCR.
Hydrogel nanoparticle based immunoassay
Liotta, Lance A; Luchini, Alessandra; Petricoin, Emanuel F; Espina, Virginia
2015-04-21
An immunoassay device incorporating porous polymeric capture nanoparticles within either the sample collection vessel or pre-impregnated into a porous substratum within the fluid flow path of the analytical device is presented. This incorporation of capture particles within the immunoassay device improves sensitivity while removing the requirement for pre-processing of samples prior to loading the immunoassay device. A preferred embodiment is core-shell, bait-containing capture nanoparticles which perform three functions in one step, in solution: a) molecular size sieving, b) target analyte sequestration and concentration, and c) protection from degradation. The polymeric matrix of the capture particles may be made of co-polymeric materials having a structural monomer and an affinity monomer, the affinity monomer having properties that attract the analyte to the capture particle. This device is useful for point-of-care diagnostic assays for biomedical applications and as a field-deployable assay for environmental, pathogen, and chemical or biological threat identification.
Visible camera imaging of plasmas in Proto-MPEX
NASA Astrophysics Data System (ADS)
Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.
2015-11-01
The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area, of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter, for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
Optical diagnostics of mercury jet for an intense proton target.
Park, H; Tsang, T; Kirk, H G; Ladeinde, F; Graves, V B; Spampinato, P T; Carroll, A J; Titus, P H; McDonald, K T
2008-04-01
An optical diagnostic system is designed and constructed for imaging a free mercury jet interacting with a high-intensity proton beam in a pulsed high-field solenoid magnet. The optical imaging system employs a back-illuminated laser shadow photography technique. Object illumination and image capture are transmitted through radiation-hard multimode optical fibers and flexible coherent imaging fibers. A retroreflected illumination design allows the entire passive imaging system to fit inside the bore of the solenoid magnet. A sequence of synchronized short laser light pulses is used to freeze the transient events, and the images are recorded by several high-speed charge-coupled devices. Quantitative and qualitative data analysis using image processing based on a probabilistic approach is described. The characteristics of the free mercury jet as a high-power target for beam-jet interaction at various levels of the magnetic induction field are reported in this paper.
NASA Astrophysics Data System (ADS)
Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.
2016-06-01
Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used UAVs remotely controlled through a ground control system over a radio frequency (RF) modem link at about 430 MHz. As mentioned earlier, however, the existing RF-modem method has limitations for long-distance communication. Using a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, we implemented a UAV with the developed communication module system and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system is an image-capturing device for the drone in areas that need image capture, together with software for loading and managing a smart camera. This system is composed of automatic shooting using the smart camera's sensors and a shooting catalog manager that manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using a smart camera as the payload for a photogrammetric UAV system. The open-source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.
Code of Federal Regulations, 2010 CFR
2010-07-01
... capture system and add-on control device operating limits during the performance test? 63.9324 Section 63... Requirements § 63.9324 How do I establish the emission capture system and add-on control device operating... the operating limits required by § 63.9302 according to this section, unless you have received...
Code of Federal Regulations, 2012 CFR
2012-07-01
... capture system and add-on control device operating limits during the performance test? 63.9324 Section 63... Requirements § 63.9324 How do I establish the emission capture system and add-on control device operating... the operating limits required by § 63.9302 according to this section, unless you have received...
Code of Federal Regulations, 2012 CFR
2012-07-01
... capture system and add-on control device operating limits during the performance test? 63.4167 Section 63... capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.4092 according to this section unless you have received approval...
Code of Federal Regulations, 2013 CFR
2013-07-01
... capture system and add-on control device operating limits during the performance test? 63.9324 Section 63... Requirements § 63.9324 How do I establish the emission capture system and add-on control device operating... the operating limits required by § 63.9302 according to this section, unless you have received...
Code of Federal Regulations, 2014 CFR
2014-07-01
... capture system and add-on control device operating limits during the performance test? 63.9324 Section 63... Requirements § 63.9324 How do I establish the emission capture system and add-on control device operating... the operating limits required by § 63.9302 according to this section, unless you have received...
Code of Federal Regulations, 2014 CFR
2014-07-01
... capture system and add-on control device operating limits during the performance test? 63.4567 Section 63... emission capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.4492 according to this section, unless you have received...
Code of Federal Regulations, 2014 CFR
2014-07-01
... capture system and add-on control device operating limits during the performance test? 63.3967 Section 63... establish the emission capture system and add-on control device operating limits during the performance test... must establish the operating limits required by § 63.3892 according to this section, unless you have...
Code of Federal Regulations, 2011 CFR
2011-07-01
... capture system and add-on control device operating limits during the performance test? 63.9324 Section 63... Requirements § 63.9324 How do I establish the emission capture system and add-on control device operating... the operating limits required by § 63.9302 according to this section, unless you have received...
Code of Federal Regulations, 2014 CFR
2014-07-01
... capture system and add-on control device operating limits during the performance test? 63.4167 Section 63... capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.4092 according to this section unless you have received approval...
Code of Federal Regulations, 2014 CFR
2014-07-01
... capture system and add-on control device operating limits during the performance test? 63.4767 Section 63... capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.4692 according to this section, unless you have received...
Code of Federal Regulations, 2012 CFR
2012-07-01
... capture system and add-on control device operating limits during the performance test? 63.4767 Section 63... capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.4692 according to this section, unless you have received...
Code of Federal Regulations, 2012 CFR
2012-07-01
... capture system and add-on control device operating limits during the performance test? 63.3967 Section 63... establish the emission capture system and add-on control device operating limits during the performance test... must establish the operating limits required by § 63.3892 according to this section, unless you have...
Code of Federal Regulations, 2012 CFR
2012-07-01
... capture system and add-on control device operating limits during the performance test? 63.4567 Section 63... emission capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.4492 according to this section, unless you have received...
NASA Astrophysics Data System (ADS)
Hoi, Jennifer W.; Kim, Hyun K.; Khalil, Michael A.; Fong, Christopher J.; Marone, Alessandro; Shrikhande, Gautam; Hielscher, Andreas H.
2015-03-01
Dynamic optical tomographic imaging has shown promise in diagnosing and monitoring peripheral arterial disease (PAD), which affects 8 to 12 million people in the United States. PAD is the narrowing of the arteries that supply blood to the lower extremities. Prolonged reduced blood flow to the foot leads to ulcers and gangrene, which makes placement of optical fibers for contact-based optical tomography systems difficult and cumbersome. Since many diabetic PAD patients have foot wounds, a non-contact interface is highly desirable. We present a novel non-contact dynamic continuous-wave optical tomographic imaging system that images the vasculature in the foot for evaluating PAD. The system images at up to 1 Hz by delivering 2 wavelengths of light to the top of the foot at up to 20 source positions through collimated source fibers. Transmitted light is collected with an electron-multiplying charge-coupled device (EMCCD) camera. We demonstrate that the system can resolve absorbers at various locations in a phantom study and show the system's first clinical 3D images of total hemoglobin changes in the foot during venous occlusion at the thigh. Our initial results indicate that this system is effective in capturing the vascular dynamics within the foot and can be used to diagnose and monitor treatment of PAD in diabetic patients.
New concept high-speed and high-resolution color scanner
NASA Astrophysics Data System (ADS)
Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya
2003-05-01
We have developed a new-concept high-speed and high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, approximately 12 million pixels of image data can be captured. The high-resolution imaging capability allows various uses such as OCR, color document reading, and use as a document camera. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed "face up" on its scan stage, without any special illumination lights. Using Blinkscan, a high-resolution color document can be easily input into a PC at high speed, and a paperless system can be built easily. It is small, and since its footprint is also small, it can be set up on an individual desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing. Several hundred Blinkscan units are now shipping, mainly for receptionist operations in banks and securities firms. We will show the high-speed and high-resolution architecture of Blinkscan. Comparing operation time with conventional image capture devices makes the advantage of Blinkscan clear. An image evaluation for a variety of environmental conditions, such as geometric distortions or non-uniformity of brightness, is also made.
Student-Built Underwater Video and Data Capturing Device
NASA Astrophysics Data System (ADS)
Whitt, F.
2016-12-01
The Stockbridge High School Robotics Team students' invention is a low-cost underwater video and data capturing device. This system is capable of shooting time-lapse photography and/or video for up to 3 days at a time. It can be used in remote locations without having to change batteries or add external hard drives for data storage. The video capturing device has a unique base and mounting system which houses a Pi Drive and a programmable Raspberry Pi with a camera module. This system is powered by two 12-volt batteries, which makes it easier for users to recharge after use. Our data capturing device has the same unique base and mounting system as the underwater camera. The data capturing device consists of an Arduino and an SD card shield that is capable of collecting continuous temperature and pH readings underwater. These data are then logged onto the SD card for easy access and recording. The low-cost underwater video and data capturing device can reach depths of up to 100 meters while recording 36 hours of video on 1 terabyte of storage. It also features night-vision infrared light capabilities. The cost to build our invention is $500. The goal was to provide a device that can easily be used by marine biologists, teachers, researchers, and citizen scientists to capture photographic and water-quality data in marine environments over extended periods of time.
Computational and design methods for advanced imaging
NASA Astrophysics Data System (ADS)
Birch, Gabriel C.
This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.
Application of automatic threshold in dynamic target recognition with low contrast
NASA Astrophysics Data System (ADS)
Miao, Hua; Guo, Xiaoming; Chen, Yu
2014-11-01
A hybrid photoelectric joint transform correlator can realize automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing targets with low contrast using a photoelectric joint transform correlator, because of differences in attitude, brightness, and grayscale between the target and the template, only four to five frames of a dynamic target can be recognized without any processing. A CCD camera is used to capture the dynamic target images at a rate of 25 frames per second. Automatic thresholding has many advantages, such as fast processing speed, effective shielding of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so this method plays a very important role in target recognition with the optical correlation method. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because the outline information is broken to some extent; in most cases the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic threshold processing program is implemented by multiplying the OTSU threshold of the target and template by a scale coefficient of the processed image, combined with mathematical morphology. With this improved automatic threshold processing, the optimal threshold can be achieved automatically for dynamic low-contrast target images. The recognition rate of dynamic targets is improved through decreased background noise and increased correlation information. A series of dynamic tank images with a speed of about 70 km/h is adopted as the target images. Without any processing, the 1st frame of this series can correlate only with the 3rd frame. With the OTSU threshold, the 80th frame can be recognized, and with automatic threshold processing of the joint images this number increases to 89 frames. Experimental results show that the improved automatic threshold processing has special application value for the recognition of dynamic targets with low contrast.
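The improved thresholding described above — an Otsu threshold multiplied by a scale coefficient, followed by mathematical morphology — can be sketched as below. The scale coefficient default and the 3x3 opening are illustrative placeholders; the paper derives its coefficient from the processed image:

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu threshold for an 8-bit grayscale image, chosen to
    maximize the between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def _shift_stack(binary):
    """All nine 3x3-neighborhood shifts of a binary image (edge-padded)."""
    p = np.pad(binary, 1, mode="edge")
    h, w = binary.shape
    return [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def binarize_scaled(img, scale=0.9):
    """Binarize with a scaled Otsu threshold, then clean up with a 3x3
    morphological opening (erosion followed by dilation)."""
    binary = img > scale * otsu_threshold(img)
    eroded = np.all(_shift_stack(binary), axis=0)
    return np.any(_shift_stack(eroded), axis=0)
```

Scaling the threshold below the plain Otsu value keeps more of the faint target outline, and the opening then removes the isolated noise pixels that the lower threshold admits.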
In-vessel visible inspection system on KSTAR
NASA Astrophysics Data System (ADS)
Chung, Jinil; Seo, D. C.
2008-08-01
To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korean superconducting tokamak advanced research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and a resolution of 640×480 at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operation condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system with the images of the KSTAR Ohmic discharges during the first plasma campaign.
Code of Federal Regulations, 2014 CFR
2014-07-01
... to Subpart OOOO of Part 63—Operating Limits if Using Add-On Control Devices and Capture System If you... 40 Protection of Environment 13 2014-07-01 2014-07-01 false Operating Limits if Using Add-On Control Devices and Capture System 2 Table 2 to Subpart OOOO of Part 63 Protection of Environment...
Code of Federal Regulations, 2014 CFR
2014-07-01
...—Operating Limits if Using Add-On Control Devices and Capture System If you are required to comply with... 40 Protection of Environment 13 2014-07-01 2014-07-01 false Operating Limits if Using Add-On Control Devices and Capture System 1 Table 1 to Subpart JJJJ of Part 63 Protection of Environment...
Code of Federal Regulations, 2014 CFR
2014-07-01
...—Operating Limits if Using Add-on Control Devices and Capture System If you are required to comply with... 40 Protection of Environment 13 2014-07-01 2014-07-01 false Operating Limits if Using Add-on Control Devices and Capture System 1 Table 1 to Subpart SSSS of Part 63 Protection of Environment...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Limits if Using Add-On Control Devices and Capture System If you are required to comply with operating... 40 Protection of Environment 12 2011-07-01 2009-07-01 true Operating Limits if Using Add-On Control Devices and Capture System 1 Table 1 to Subpart JJJJ of Part 63 Protection of Environment...
Code of Federal Regulations, 2012 CFR
2012-07-01
...—Operating Limits if Using Add-On Control Devices and Capture System If you are required to comply with... 40 Protection of Environment 13 2012-07-01 2012-07-01 false Operating Limits if Using Add-On Control Devices and Capture System 1 Table 1 to Subpart JJJJ of Part 63 Protection of Environment...
Code of Federal Regulations, 2013 CFR
2013-07-01
... to Subpart OOOO of Part 63—Operating Limits if Using Add-On Control Devices and Capture System If you... 40 Protection of Environment 13 2013-07-01 2012-07-01 true Operating Limits if Using Add-On Control Devices and Capture System 2 Table 2 to Subpart OOOO of Part 63 Protection of Environment...
Code of Federal Regulations, 2013 CFR
2013-07-01
...—Operating Limits if Using Add-on Control Devices and Capture System If you are required to comply with... 40 Protection of Environment 13 2013-07-01 2012-07-01 true Operating Limits if Using Add-on Control Devices and Capture System 1 Table 1 to Subpart SSSS of Part 63 Protection of Environment...
Code of Federal Regulations, 2012 CFR
2012-07-01
... to Subpart OOOO of Part 63—Operating Limits if Using Add-On Control Devices and Capture System If you... 40 Protection of Environment 13 2012-07-01 2012-07-01 false Operating Limits if Using Add-On Control Devices and Capture System 2 Table 2 to Subpart OOOO of Part 63 Protection of Environment...
Code of Federal Regulations, 2010 CFR
2010-07-01
... OOOO of Part 63—Operating Limits if Using Add-On Control Devices and Capture System If you are required... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Operating Limits if Using Add-On Control Devices and Capture System 2 Table 2 to Subpart OOOO of Part 63 Protection of Environment...
Code of Federal Regulations, 2012 CFR
2012-07-01
...—Operating Limits if Using Add-on Control Devices and Capture System If you are required to comply with... 40 Protection of Environment 13 2012-07-01 2012-07-01 false Operating Limits if Using Add-on Control Devices and Capture System 1 Table 1 to Subpart SSSS of Part 63 Protection of Environment...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Using Add-on Control Devices and Capture System If you are required to comply with operating limits by... 40 Protection of Environment 12 2011-07-01 2009-07-01 true Operating Limits if Using Add-on Control Devices and Capture System 1 Table 1 to Subpart SSSS of Part 63 Protection of Environment...
Code of Federal Regulations, 2011 CFR
2011-07-01
... OOOO of Part 63—Operating Limits if Using Add-On Control Devices and Capture System If you are required... 40 Protection of Environment 12 2011-07-01 2009-07-01 true Operating Limits if Using Add-On Control Devices and Capture System 2 Table 2 to Subpart OOOO of Part 63 Protection of Environment...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Using Add-on Control Devices and Capture System If you are required to comply with operating limits by... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Operating Limits if Using Add-on Control Devices and Capture System 1 Table 1 to Subpart SSSS of Part 63 Protection of Environment...
Code of Federal Regulations, 2013 CFR
2013-07-01
...—Operating Limits if Using Add-On Control Devices and Capture System If you are required to comply with... 40 Protection of Environment 13 2013-07-01 2012-07-01 true Operating Limits if Using Add-On Control Devices and Capture System 1 Table 1 to Subpart JJJJ of Part 63 Protection of Environment...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Limits if Using Add-On Control Devices and Capture System If you are required to comply with operating... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Operating Limits if Using Add-On Control Devices and Capture System 1 Table 1 to Subpart JJJJ of Part 63 Protection of Environment...
Simultaneous immersion Mirau interferometry
Lyulko, Oleksandra V.; Randers-Pehrson, Gerhard; Brenner, David J.
2013-01-01
A novel technique for label-free imaging of live biological cells in aqueous medium that is insensitive to ambient vibrations is presented. This technique is a spin-off from previously developed immersion Mirau interferometry. Both approaches utilize a modified Mirau interferometric attachment for a microscope objective that can be used both in air and in immersion mode, when the device is submerged in cell medium and has its internal space filled with liquid. Because immersion Mirau interferometry captures a series of images sequentially, the resulting images are potentially distorted by ambient vibrations. Simultaneous immersion Mirau interferometry overcomes this serial-acquisition limitation by incorporating polarizing elements into the optics to allow simultaneous acquisition of two interferograms. The system design and production are described and images produced with the developed techniques are presented. PMID:23742552
NASA Astrophysics Data System (ADS)
Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo
An automatic welding system for aluminum pipe, using Tungsten Inert Gas (TIG) welding with a vision sensor, was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in fixed position, with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
Hong, Xun Jie Jeesmond; Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke Matham; Baskaran, Mani; Aung, Tin
2017-01-01
A flexible handheld imaging probe consisting of a 3 mm×3 mm charge-coupled device camera, light-emitting diode light sources, and near-infrared laser source is designed and developed. The imaging probe is designed with specifications to capture the iridocorneal angle images and posterior segment images. Light propagation from the anterior chamber of the eye to the exterior is considered analytically using Snell’s law. Imaging of the iridocorneal angle region and fundus is performed on ex vivo porcine samples and subsequently on small laboratory animals, such as the New Zealand white rabbit and nonhuman primate, in vivo. The integrated flexible handheld probe demonstrates high repeatability in iridocorneal angle and fundus documentation. The proposed concept and methodology are expected to find potential application in the diagnosis, prognosis, and management of glaucoma. PMID:28413809
Personal authentication through dorsal hand vein patterns
NASA Astrophysics Data System (ADS)
Hsu, Chih-Bin; Hao, Shu-Sheng; Lee, Jen-Chun
2011-08-01
Biometric identification is an emerging technology that can solve security problems in our networked society. A reliable and robust personal verification approach using dorsal hand vein patterns is proposed in this paper. The approach has low computational and memory requirements and high recognition accuracy. In our work, a near-infrared charge-coupled device (CCD) camera is adopted as the input device for capturing dorsal hand vein images; it has the advantages of low cost and noncontact imaging. In the proposed approach, two finger-peaks are automatically selected as the datum points to define the region of interest (ROI) in the dorsal hand vein images. The modified two-directional two-dimensional principal component analysis, which performs an alternate two-dimensional PCA (2DPCA) in the column direction of images in the 2DPCA subspace, is proposed to exploit the correlation of vein features inside the ROI between images. The major advantage of the proposed method is that it requires fewer coefficients for efficient dorsal hand vein image representation and recognition. The experimental results on our large dorsal hand vein database show that the presented scheme achieves promising performance (false reject rate: 0.97% and false acceptance rate: 0.05%) and is feasible for dorsal hand vein recognition.
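The two-directional 2DPCA feature extraction described above can be sketched roughly as follows. The numbers of retained eigenvectors (`k_row`, `k_col`) are illustrative choices, not the paper's parameters:

```python
import numpy as np

def two_directional_2dpca(images, k_row=4, k_col=4):
    """(2D)^2-PCA sketch: 2DPCA in the row direction combined with an
    alternate 2DPCA in the column direction, yielding a compact
    k_col x k_row feature matrix per image."""
    X = np.stack([im.astype(float) for im in images])
    A = X - X.mean(axis=0)                 # center the image set
    G_row = sum(a.T @ a for a in A)        # row-direction scatter matrix
    G_col = sum(a @ a.T for a in A)        # column-direction scatter matrix
    # eigh returns eigenvalues in ascending order; take leading eigenvectors
    _, V = np.linalg.eigh(G_row)
    _, U = np.linalg.eigh(G_col)
    V = V[:, ::-1][:, :k_row]
    U = U[:, ::-1][:, :k_col]
    return [U.T @ a @ V for a in A]        # bidirectionally projected features
```

Each feature matrix has only k_col×k_row coefficients, which is the compactness advantage the abstract emphasizes.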
Design and fabrication of vertically-integrated CMOS image sensors.
Skorka, Orit; Joseph, Dileepan
2011-01-01
Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.
Solution processed integrated pixel element for an imaging device
NASA Astrophysics Data System (ADS)
Swathi, K.; Narayan, K. S.
2016-09-01
We demonstrate the implementation of a solid state circuit/structure comprising a high-performing polymer field-effect transistor (PFET), utilizing an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric, and a bulk-heterostructure based organic photodiode as a CMOS-like pixel element for an imaging sensor. Practical usage of functional organic photon detectors requires on-chip components for image capture and signal transfer, as in the CMOS/CCD architecture, rather than simple photodiode arrays, in order to increase the speed and sensitivity of the sensor. The availability of high-performing PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisite to implement a CMOS-type image sensing device structure based on organic electronic devices. Solution processing routes in organic electronics offer relatively facile procedures to integrate these components, combined with the unique features of large area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for effective photocurrent response. The implemented design enables photocharge generation along with on-chip charge-to-voltage conversion with performance parameters comparable to traditional counterparts. Charge integration analysis for the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.
Low vision goggles: optical design studies
NASA Astrophysics Data System (ADS)
Levy, Ofer; Apter, Boris; Efron, Uzi
2006-08-01
Low Vision (LV) due to Age Related Macular Degeneration (AMD), Glaucoma or Retinitis Pigmentosa (RP) is a growing problem, which will affect more than 15 million people in the U.S. alone in 2010. Low Vision Aid Goggles (LVG) have been under development at Ben-Gurion University and the Holon Institute of Technology. The device is based on a unique Image Transceiver Device (ITD), combining both functions of imaging and display in a single chip. Using the ITD-based goggles, specifically designed for the visually impaired, our aim is to develop a head-mounted device that will allow the capture of the ambient scenery, perform the necessary image enhancement and processing, and re-direct it to the healthy part of the patient's retina. This design methodology will allow the Goggles to be mobile, multi-task and environment-adaptive. In this paper we present the optical design considerations of the Goggles, including a preliminary performance analysis. Common vision deficiencies of LV patients are usually divided into two main categories: peripheral vision loss (PVL) and central vision loss (CVL), each requiring a different Goggles design. A set of design principles was defined for each category. Four main optical designs are presented and compared according to the design principles. Each of the designs is presented in two main optical configurations: a see-through system and a video imaging system. The use of full-color ITD-based Goggles is also discussed.
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real-world space. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in the living room, so that they can easily design the living room interior without placing real furniture and articles, viewing the scene from many different locations and orientations in real time. In our system, two base images of a real-world space are captured from two different views to define a projective coordinate frame for the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real-world space is captured by a hand-held camera, with non-metric measured feature points tracked for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationship between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene nearly at video rate (20 frames per second).
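The core re-projection operation implied above, applying an estimated 3×3 planar homography to 2D points, might look like the following generic sketch (not the authors' exact registration code):

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of 2D points via
    homogeneous coordinates: the basic step for re-projecting a virtual
    object into each new frame once the inter-image relationship is known."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    proj = pts_h @ H.T                                # apply the homography
    return proj[:, :2] / proj[:, 2:3]                 # perspective division
```

In a full AR overlay pipeline, H would be estimated per frame from the tracked feature points and used to warp the virtual object's reference view into the current camera view.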
Visual Estimation of Bacterial Growth Level in Microfluidic Culture Systems.
Kim, Kyukwang; Kim, Seunggyu; Jeon, Jessie S
2018-02-03
Microfluidic devices are an emerging platform for a variety of experiments involving bacterial cell culture, with advantages including low cost and convenience. One inevitable step during bacterial cell culture is the measurement of cell concentration in the channel. The optical density measurement technique is generally used for bacterial growth estimation, but it is not applicable to microfluidic devices due to the small sample volumes in microfluidics. Alternatively, cell counting or colony-forming unit methods may be applied, but these do not work in situ, nor do they show measurement results immediately. To this end, we present a new vision-based method to estimate the growth level of bacteria in microfluidic channels. We use the Fast Fourier transform (FFT) to detect changes in the frequency content of the microscopic image, exploiting the fact that the image becomes rougher as the number of cells in the field of view increases, adding high frequencies to the spectrum of the image. Two types of microfluidic devices are used to culture bacteria in liquid and agar gel medium, and time-lapsed images are captured. The images obtained are analyzed using FFT, showing an increase in high-frequency noise proportional to the elapsed time. Furthermore, we apply the developed method in a microfluidic antibiotic susceptibility test by recognizing the regional concentration change of bacteria cultured in an antibiotic gradient. Finally, a deep learning-based data regression is performed on the data obtained by the proposed vision-based method for robust reporting of data.
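A minimal sketch of the FFT roughness measure described above might look as follows; the radial cutoff of 0.25 of the Nyquist frequency is an assumed parameter, not one reported in the abstract:

```python
import numpy as np

def high_freq_energy(img, cutoff=0.25):
    """Fraction of spectral energy beyond a radial cutoff (in units of the
    Nyquist frequency). As cells accumulate, the image gets rougher and
    this fraction rises, serving as a growth-level proxy."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))  # centered spectrum
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized radial frequency coordinate, 1.0 at the Nyquist edge
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    high = power[r > cutoff].sum()
    total = power.sum()
    return high / total if total > 0 else 0.0
```

A time-lapse series would be reduced to one scalar per frame with this function, and the resulting curve tracked against culture time.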
Backside imaging of a microcontroller with common-path digital holography
NASA Astrophysics Data System (ADS)
Finkeldey, Markus; Göring, Lena; Schellenberg, Falk; Gerhardt, Nils C.; Hofmann, Martin
2017-03-01
The investigation of integrated circuits (ICs), such as microcontrollers (MCUs) and system on a chip (SoC) devices, is a topic of growing interest. The need for fast and non-destructive imaging methods is driven by the increasing importance of hardware Trojans, reverse engineering and further security-related analysis of integrated cryptographic devices. In the field of side-channel attacks, for instance, the precise spot for laser fault attacks is important and could be determined by using modern high-resolution microscopy methods. Digital holographic microscopy (DHM) is a promising technique to achieve high-resolution phase images of surface structures. These phase images provide information about the change of the refractive index in the media and the topography. To enable high phase stability, we use the common-path geometry to create the interference pattern. The interference pattern, or hologram, is captured with a water-cooled sCMOS camera. This provides a fast readout while maintaining a low level of noise. A challenge for these types of holograms is the interference of the waves reflected from the different interfaces inside the media. To distinguish between the phase signals from the buried layer and the surface reflection we use specific numeric filters. To demonstrate the performance of our setup, we show results for devices under test (DUTs), using a 1064 nm laser diode as the light source. The DUTs are modern microcontrollers thinned to different levels of Si-substrate thickness. The effect of the numeric filter compared to unfiltered images is analyzed.
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost and portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially-spectrally correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is divided into four components: an FPN row component, an FPN column component, a defects component and an effective photo response signal component. The conversion gain is then corrected by averaging out the FPN row and column components and the defects component, so that the conversion gain is uniform across the sensor. Based on the corrected conversion gain and the image incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image, and the larger the image DC value within its dynamic range, the better the enhancement.
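The pixel-wise linear correction described above can be sketched as follows, assuming the per-pixel gain and DSNU maps have already been calibrated (the function and variable names are illustrative):

```python
import numpy as np

def correct_fpn(dc, gain, dsnu):
    """Invert the pixel-wise linear model DC = G * L + DSNU to estimate the
    incident radiance L, then re-render the image with the mean gain so the
    conversion gain is uniform across the array. `gain` and `dsnu` are
    per-pixel calibration maps obtained in a separate characterization step."""
    radiance = (dc - dsnu) / gain          # per-pixel radiance estimate
    return radiance * gain.mean()          # image rendered with uniform gain
```

Under this model, a scene of uniform radiance imaged through a sensor with column-patterned gain comes out flat after correction, which is the uniformity enhancement the paper quantifies.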
NASA Astrophysics Data System (ADS)
Gupta, S.; Lohani, B.
2014-05-01
Mobile augmented reality is a next-generation technology for intelligently visualising the 3D real world, and it is expanding at a fast pace, upgrading the smart phone to an intelligent device. The research problem identified and addressed in the current work is to view the actual dimensions of objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image captured by the mobile device. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR points that lie in the viewshed of the mobile camera. A pseudo-intensity image is generated from the LiDAR points and their intensities. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, generating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method, uses an experimental setup to mimic the mobile phone and server system, and presents some initial but encouraging results.
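Once the registration pipeline above yields a pixel-to-point lookup, the dimension computation reduces to a 3D Euclidean distance. A hypothetical sketch, where the `pixel_to_point` mapping structure is an assumption for illustration:

```python
import numpy as np

def dimension_between(pixels, pixel_to_point):
    """Given two user-selected pixels on the mobile image and the
    pixel -> LiDAR-point mapping produced by the registration step,
    return the real-world distance between the corresponding 3D points.
    `pixel_to_point` is a hypothetical dict {(u, v): (X, Y, Z)}."""
    p1, p2 = (np.asarray(pixel_to_point[p], dtype=float) for p in pixels)
    return float(np.linalg.norm(p1 - p2))
```

The returned distance is what the system would render on top of the mobile image between the two tapped points.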
Lee, Ho Suk; Chu, Wai Keung; Zhang, Kun; Huang, Xiaohua
2013-09-07
We report a method for fabricating permeable polymer microstructure barriers in polydimethylsiloxane (PDMS) microfluidic devices and the use of the devices to capture and transport DNA and cells. The polymer microstructure in a desired location in a fluidic channel is formed in situ by the polymerization of acrylamide and polyethylene diacrylate cross-linker (PEG-DA) monomer in a solution which is trapped in the location using a pair of PDMS valves. The porous polymer microstructure provides a mechanical barrier to convective fluid flow in the channel or between two microfluidic chambers while it still conducts ions or small charged species under an electric field, allowing for the rapid capture and transport of biomolecules and cells by electrophoresis. We have demonstrated the application of the devices for the rapid capture and efficient release of bacteriophage λ genomic DNA, solution exchange and for the transport and capture of HeLa cells. Our devices will enable the multi-step processing of biomolecules and cells or individual cells within a single microfluidic chamber.
Micro-Hall devices for magnetic, electric and photo-detection
NASA Astrophysics Data System (ADS)
Gilbertson, A.; Sadeghi, H.; Panchal, V.; Kazakova, O.; Lambert, C. J.; Solin, S. A.; Cohen, L. F.
Multifunctional mesoscopic sensors capable of detecting local magnetic (B), electric (E), and optical fields can greatly facilitate image capture in nano-arrays that address a multitude of disciplines. The use of micro-Hall devices as B-field sensors and, more recently, as E-field sensors is well established. Here we report the real-space voltage response of InSb/AlInSb micro-Hall devices not only to local E- and B-fields but also to photo-excitation, using scanning probe microscopy. We show that the ultrafast generation of localised photocarriers results in conductance perturbations analogous to those produced by local E-fields. Our experimental results are in good agreement with tight-binding transport calculations in the diffusive regime. At room temperature, samples exhibit a magnetic sensitivity of >500 nT/√Hz, an optical noise equivalent power of >20 pW/√Hz (λ = 635 nm) comparable to commercial photoconductive detectors, and a charge sensitivity of >0.04 e/√Hz comparable to that of single electron transistors. Work done while on sabbatical from Washington University. Co-founder of PixelEXX, a start-up whose focus is imaging nano-arrays.
Periscope for noninvasive two-photon imaging of murine retina in vivo
Stremplewski, Patrycjusz; Komar, Katarzyna; Palczewski, Krzysztof; Wojtkowski, Maciej; Palczewska, Grazyna
2015-01-01
Two-photon microscopy allows visualization of subcellular structures in the living animal retina. In previously reported experiments it was necessary to apply a contact lens to each subject. Extending this technology to larger animals would require fitting a custom contact lens to each animal and cumbersome placement of the living animal head on microscope stage. Here we demonstrate a new device, periscope, for coupling light energy into mouse eye and capturing emitted fluorescence. Using this periscope we obtained images of the RPE and their subcellular organelles, retinosomes, with larger field of view than previously reported. This periscope provides an interface with a commercial microscope, does not require contact lens and its design could be modified to image retina in larger animals. PMID:26417507
Google glass based immunochromatographic diagnostic test analysis
NASA Astrophysics Data System (ADS)
Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan
2015-03-01
Integration of optical imagers and sensors into recently emerging wearable computational devices allows for simpler and more intuitive methods of integrating biomedical imaging and medical diagnostics tasks into existing infrastructures. Here we demonstrate the ability of one such device, the Google Glass, to perform qualitative and quantitative analysis of immunochromatographic rapid diagnostic tests (RDTs) using a voice-commandable hands-free software-only interface, as an alternative to larger and more bulky desktop or handheld units. Using the built-in camera of Glass to image one or more RDTs (labeled with Quick Response (QR) codes), our Glass software application uploads the captured image and related information (e.g., user name, GPS, etc.) to our servers for remote analysis and storage. After digital analysis of the RDT images, the results are transmitted back to the originating Glass device, and made available through a website in geospatial and tabular representations. We tested this system on qualitative human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) RDTs. For qualitative HIV tests, we demonstrate successful detection and labeling (i.e., yes/no decisions) for up to 6-fold dilution of HIV samples. For quantitative measurements, we activated and imaged PSA concentrations ranging from 0 to 200 ng/mL and generated calibration curves relating the RDT line intensity values to PSA concentration. By providing automated digitization of both qualitative and quantitative test results, this wearable colorimetric diagnostic test reader platform on Google Glass can reduce operator errors caused by poor training, provide real-time spatiotemporal mapping of test results, and assist with remote monitoring of various biomedical conditions.
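The calibration-curve step described above, relating RDT line intensity to PSA concentration, might be sketched as a simple linear fit; linearity here is an assumption for illustration, as the abstract only states that calibration curves were generated:

```python
import numpy as np

def calibrate(intensities, concentrations):
    """Fit a linear calibration relating measured RDT line intensity to
    analyte concentration, returning a function that maps a new intensity
    reading to an estimated concentration."""
    slope, intercept = np.polyfit(intensities, concentrations, 1)
    return lambda i: slope * i + intercept
```

In the described system, such a curve would be fitted on the server from the imaged reference concentrations (0 to 200 ng/mL) and then applied to intensity values extracted from each uploaded Glass image.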
Suitability of digital camcorders for virtual reality image data capture
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola; Maas, Hans-Gerd
1998-12-01
Today's consumer market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine vision type CCD camera, and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the missing possibility to synchronize multiple devices, limiting the suitability for 3-D motion data capture. Moreover, the standard video format contains interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. Further disadvantages are computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine vision like equipment), this functionality could probably be included by the manufacturers at almost zero cost.
Visual object recognition for mobile tourist information systems
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander
2005-03-01
We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location and context aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia-type data about related history, the architecture, or other related cultural context of historic or artistic relevance might be explored by a mobile user who is intending to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.
Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin
2016-01-01
Skinning injury on potato tubers is a kind of superficial wound generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. Skinning injury was identified by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, calculations of BA using varied numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Squares Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capturing and processing efficiency can be sped up in biospeckle imaging, with the 512 captured frames reduced to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging at different stages: visible imaging is suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging. PMID:27763555
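The abstract does not define how BA is computed; a common estimator in biospeckle analysis is the generalized-differences measure, sketched below in NumPy. The function name and the synthetic speckle frames are illustrative, not the authors' actual procedure:

```python
import numpy as np

def generalized_difference(frames):
    """Biospeckle activity map via the generalized-differences measure:
    the sum of absolute differences between every pair of frames in a
    speckle time series. Brighter pixels indicate more activity (e.g.
    metabolically active tissue under a fresh wound)."""
    f = np.stack([np.asarray(x, dtype=np.float64) for x in frames])
    gd = np.zeros(f.shape[1:])
    for i in range(len(f)):
        for j in range(i + 1, len(f)):
            gd += np.abs(f[i] - f[j])
    return gd

# Fluctuating speckle (active tissue) vs. a frozen pattern (inactive)
rng = np.random.default_rng(0)
active = [rng.normal(100.0, 20.0, (16, 16)) for _ in range(8)]
frozen = [np.full((16, 16), 100.0) for _ in range(8)]
ba_active = generalized_difference(active)
ba_frozen = generalized_difference(frozen)
```

A static surface yields zero activity everywhere, while a fluctuating speckle field yields a strictly positive map, which is the contrast the classifier exploits.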
Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham
2012-01-01
Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. For consumer electronics devices, such as webcams, to be useful for biomedical analysis, their sensitivity must be increased. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a webcam in video mode, instead of the single image typically used in cooled CCD devices. We then used a computational approach consisting of an image stacking algorithm that removes noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g. movies or time-lapse photography), it is not normally used to capture a single static image; doing so removes noise and increases sensitivity more than thirtyfold. The portable, battery-operated webcam-based fluorometer system developed here consists of five modules: (1) a low-cost CMOS webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In single-frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers.
Numerous medical diagnostic assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low-cost optical detectors based on an inexpensive webcam (<$10). It has the potential to form the basis for high-sensitivity, low-cost medical diagnostics in resource-poor settings.
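The stacking enhancement described above is, at its core, frame averaging: for zero-mean sensor noise, combining N frames reduces the noise standard deviation by about √N. A minimal NumPy sketch (the synthetic frames below stand in for the webcam video and are purely illustrative):

```python
import numpy as np

def stack_frames(frames):
    """Average a sequence of noisy frames into one denoised image.

    For zero-mean noise, averaging N frames cuts the noise standard
    deviation by a factor of sqrt(N), raising the SNR accordingly."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Synthetic demo: a faint uniform "fluorescence" signal buried in noise.
rng = np.random.default_rng(0)
truth = np.full((64, 64), 5.0)                 # weak true signal
frames = [truth + rng.normal(0.0, 20.0, truth.shape) for _ in range(400)]
denoised = stack_frames(frames)
```

With 400 frames the per-pixel noise drops from σ=20 to roughly σ=1, which is the mechanism behind the reported thirtyfold-plus sensitivity gain.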
Novel Three-Dimensional Understanding of Maxillary Cleft Distraction.
Vaughan, Stephen Michael; Kau, Chung How; Waite, Peter Daniel
2016-09-01
To set forth a universal standard methodology for quantifying volumetric and linear changes in the craniofacial complex, utilizing three-dimensional data captured from a cleft lip and palate patient who underwent rigid external device (RED) distraction. Cone beam computed tomography images of a 14-year-old patient were captured using a Kodak 9500 (Atlanta, GA) Cone Beam system device and a stereophotogrammetric system (3dMDface(TM), Atlanta, GA). The subject was a nonsyndromic unilateral cleft lip and palate patient who received RED distraction as part of maxillary advancement in conjunction with orthodontic treatment. Preop (T1) and postop (T2) images were superimposed using Invivo 5.2.3 (San Jose, CA) software. Volumetric renderings of the airway, bone, and soft tissues, as well as linear measurements, were analyzed. Each measurement was captured 10 times to ensure reliability and reproducibility of the methodology. Data from T1 to T2 revealed mean differences as follows: airway total volume +5250 mm³, minimum cross-sectional area +67.84 mm²; bone +1719 mm³; soft tissue +44,432 mm³. Means of linear measurements: Pronasale 1.98 mm, Subnasale 3.35 mm, Labiale superius 10.79 mm, Labiale inferius 4.13 mm, Right alare 5.71 mm, Right cheilion 7.83 mm, Left alare 4.97 mm, Left cheilion 5.50 mm, Pogonion 3.01 mm, B-point 2.49 mm, U1-U1 9.77 mm, and L1-L1 0.00 mm. P values are <0.001 for each analysis. This paper presents a novel and innovative way to examine pre- and post-RED distraction in a three-dimensional format. A universal standard analysis of the craniofacial complex can be implemented using the techniques and method outlined in this study.
Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lu, Jian
Technology for capturing panoramic (360 degree) three-dimensional information in a real environment has many applications in fields such as virtual and mixed reality, security, and robot navigation. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama with spatial resolution equal to that of the original images acquired by the regular camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selected camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
Kim, Bumsoo; Koh, Jong Kwan; Park, Junyong; Ahn, Changui; Ahn, Joonmo; Kim, Jong Hak; Jeon, Seokwoo
2015-01-01
This paper reports a new type of transmitting-mode electrochromic device that uses the high-contrast electrochromism of poly(3,4-ethylenedioxythiophene) (PEDOT) and operates in the long-wavelength infrared (8-12 μm). To maximize the transmittance contrast and transmittance contrast ratio of the device for thermal camouflage, we control the thickness of the thin PEDOT layer from 25 nm to 400 nm and develop a design of grid-type counter electrodes. The cyclability can be greatly improved by selective deposition of the PEDOT film on grid electrodes as an ion storage layer without any loss of overall transmittance. The device with optimized architecture shows a high transmittance contrast ratio of 83% at a wavelength of 10 μm with a response time under 1.4 s when an alternating voltage is applied. Captured images of an LED lamp behind the device demonstrate the possibility of active, film-type camouflage against thermal detection.
Enrichment of cancer cells using aptamers immobilized on a microfluidic channel
Phillips, Joseph A.; Xu, Ye; Xia, Zheng
2009-01-01
This work describes the development and investigation of an aptamer-modified microfluidic device that captures rare cells to achieve a rapid assay without pre-treatment of the cells. To accomplish this, aptamers are first immobilized on the surface of a poly(dimethylsiloxane) microchannel, and a mixture of cells is then pumped through the device. This process permits the use of optical microscopy to measure the cell-surface density, from which we calculate the percentage of cells captured as a function of cell and aptamer concentration, flow velocity, and incubation time. This aptamer-based device was demonstrated to capture target cells with >97% purity and >80% efficiency. Since the cell capture assay is completed within minutes and requires no pre-treatment of cells, the device promises to play a key role in the early detection and diagnosis of cancer, where rare diseased cells can first be enriched and then captured for detection. PMID:19115856
Ranade, Manisha K; Lynch, Bart D; Li, Jonathan G; Dempsey, James F
2006-01-01
We have developed an electronic portal imaging device (EPID) employing a fast scintillator and a high-speed camera. The device is designed to accurately and independently characterize the fluence delivered by a linear accelerator during intensity-modulated radiation therapy (IMRT) with either step-and-shoot or dynamic multileaf collimator (MLC) delivery. Our aim is to accurately obtain the beam shape and fluence of all segments delivered during IMRT, in order to study the nature of discrepancies between the planned and delivered doses. A commercial high-speed camera was combined with a terbium-doped gadolinium-oxy-sulfide (Gd2O2S:Tb) scintillator to form an EPID for the unaliased capture of two-dimensional fluence distributions of each beam in an IMRT delivery. The high-speed EPID was synchronized to the accelerator pulse-forming network and gated to capture every possible pulse emitted from the accelerator, at an approximate frame rate of 360 frames per second (fps). A 62-segment beam from a head-and-neck IMRT treatment plan requiring 68 s to deliver was recorded with our high-speed EPID, producing approximately 6 Gbytes of imaging data. The EPID data were compared with the MLC instruction files and the MLC controller log files. The frames were binned to provide a frame rate of 72 fps with a signal-to-noise ratio sufficient to resolve leaf positions and segment fluence. The fractional fluence from the log files and EPID data agreed well. An ambiguity in the motion of the MLC during beam-on was resolved: the log files reported leaf motions at the end of 33 of the 42 segments, while the EPID observed leaf motions in only 7 of the 42 segments. The static IMRT segment shapes observed by the high-speed EPID were in good agreement with the shapes reported in the log files. The leaf motions observed during beam-on for step-and-shoot delivery were not temporally resolved by the log files.
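The frame-binning step above (360 fps raw capture summed down to 72 fps to recover SNR) is simple to sketch. The array shapes and synthetic Poisson frames below are illustrative, not the actual EPID data format:

```python
import numpy as np

def bin_frames(frames, factor=5):
    """Sum consecutive groups of `factor` frames, trading temporal
    resolution for signal-to-noise (e.g. 360 fps -> 72 fps)."""
    n = (len(frames) // factor) * factor           # drop any remainder
    stack = np.stack(frames[:n]).astype(np.float64)
    return stack.reshape(-1, factor, *stack.shape[1:]).sum(axis=1)

# 360 synthetic photon-counting frames -> 72 binned frames
rng = np.random.default_rng(1)
frames = [rng.poisson(10, (32, 32)) for _ in range(360)]
binned = bin_frames(frames, factor=5)
```

For Poisson-limited signals, summing 5 frames raises the per-pixel SNR by √5, which is why leaf positions become resolvable at 72 fps when they are not at 360 fps.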
Tyurin readies the NASDA exposure experiment cases for their EVA
2001-10-14
ISS003-E-6623 (14 October 2001) --- Cosmonaut Mikhail Tyurin, Expedition Three flight engineer representing Rosaviakosmos, works with hardware for the Micro-Particles Capturer (MPAC) and Space Environment Exposure Device (SEED) experiment and fixture mechanism in the Zvezda Service Module on the International Space Station (ISS). MPAC and SEED were developed by Japan's National Space Development Agency (NASDA), and Russia developed the fixture mechanism. This image was taken with a digital still camera.
A simple novel device for air sampling by electrokinetic capture
Gordon, Julian; Gandhi, Prasanthi; Shekhawat, Gajendra; Frazier, Angel; Hampton-Marcell, Jarrad; Gilbert, Jack A
2015-12-27
A variety of different sampling devices are currently available to acquire air samples for the study of the microbiome of the air. All have a degree of technical complexity that limits deployment. Here, we evaluate the use of a novel device, which has no technical complexity and is easily deployable. An air-cleaning device powered by electrokinetic propulsion has been adapted to provide a universal method for collecting samples of the aerobiome. Plasma-induced charge in aerosol particles causes propulsion to, and capture on, a counter-electrode. The flow of ions creates net bulk airflow, with no moving parts. A device and electrode assembly have been re-designed from air-cleaning technology to provide an average air flow of 120 lpm. This compares favorably with current air sampling devices based on physical air pumping. Capture efficiency was determined by comparison with a 0.4 μm polycarbonate reference filter, using fluorescent latex particles in a controlled environment chamber. Performance was compared with the same reference filter method in field studies in three different environments. For 23 common fungal species by quantitative polymerase chain reaction (qPCR), there was 100% sensitivity and an apparent specificity of 87%, with the reference filter taken as "gold standard." Further, bacterial analysis of 16S rRNA by amplicon sequencing showed equivalent community structure captured by the electrokinetic device and the reference filter. Unlike other current air sampling methods, capture of particles is determined by charge and so is not controlled by particle mass. We analyzed particle sizes captured from air, without regard to specific analyte, by atomic force microscopy: particles at least as small as 100 nm could be captured from ambient air. This work introduces a very simple plug-and-play device that can sample air at a high-volume flow rate with no moving parts and collect particles down to the sub-micron range.
In conclusion, the performance of the device is substantially equivalent to capture by pumping through a filter for microbiome analysis by quantitative PCR and amplicon sequencing.
NASA Technical Reports Server (NTRS)
Pope, Alan T. (Inventor); Stephens, Chad L. (Inventor); Habowski, Tyler (Inventor)
2017-01-01
A method for physiologically modulating videogames and simulations utilizes input from a motion-sensing videogame system and input from a physiological signal acquisition device. The inputs from the physiological sensors are used to change the response of a user's avatar to inputs from the motion sensors. The motion-sensing system comprises a 3D sensor system providing full-body 3D motion capture of the user's body. This arrangement encourages health-enhancing physiological self-regulation skills or therapeutic amplification of healthful physiological characteristics. The system provides increased motivation for users to utilize biofeedback, as may be desired for treatment of various conditions.
Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope
Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok
2017-01-01
Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243
Collision judgment when viewing minified images through a HMD visual field expander
NASA Astrophysics Data System (ADS)
Luo, Gang; Lichtenstein, Lee; Peli, Eli
2007-02-01
Purpose: Patients with tunnel vision have great difficulty with mobility. We have developed an augmented-vision head-mounted device that can provide patients a 5× expanded field by superimposing minified edge images of a wider field, captured by a miniature video camera, over the natural view seen through the display. In the minified display, objects appear closer to the heading direction than they really are. This might cause users to overestimate collision risks, and therefore to perform unnecessary obstacle-avoidance maneuvers. A study was conducted in a virtual environment to test the impact of the minified view on collision judgment. Methods: Simulated scenes were presented to subjects as if they were walking in a shopping mall corridor. Subjects reported whether they would make any contact with stationary obstacles that appeared at variable distances from their walking path. Perceived safe passing distance (PSPD) was calculated by finding the transition point from reports of yes to no. Decision uncertainty was quantified by the sharpness of the transition. Collision envelope (CE) size was calculated by summing the PSPD for the left and right sides. Ten normally sighted subjects were tested (1) when not using the device and with one eye patched, and (2) when the see-through view of the device was blocked and only minified images were visible. Results: The use of the 5× minification device caused only an 18% increase in CE (13 cm, p=0.048). No significant impact of the device on judgment uncertainty was found (p=0.089). Conclusion: Minification had only a small impact on collision judgment. This supports the use of such a minifying device as an effective field expander for patients with tunnel vision.
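The PSPD-and-uncertainty analysis described in the Methods can be approximated by fitting a logistic psychometric function to the yes/no collision reports: the 50% point gives the PSPD and the fitted spread quantifies transition sharpness. The grid-search fit and simulated observer below are illustrative, not the authors' actual procedure:

```python
import numpy as np

def fit_pspd(distances, said_collision):
    """Fit P("collision") = 1 / (1 + exp((d - mu) / s)) by maximum
    likelihood over a coarse grid. Returns (mu, s): mu is the 50%
    transition point (PSPD); s is the spread (larger = less sharp
    transition = more decision uncertainty). No SciPy required."""
    d = np.asarray(distances, dtype=np.float64)
    y = np.asarray(said_collision, dtype=np.float64)
    best = None
    for mu in np.linspace(d.min(), d.max(), 201):    # candidate PSPD
        for s in np.linspace(0.05, 0.5, 40):          # candidate spread
            p = 1.0 / (1.0 + np.exp((d - mu) / s))
            p = np.clip(p, 1e-9, 1.0 - 1e-9)
            ll = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
            if best is None or ll > best[0]:
                best = (ll, mu, s)
    _, pspd, spread = best
    return pspd, spread

# Simulated observer who reports collision when the obstacle is ~0.5 m away
rng = np.random.default_rng(2)
dist = rng.uniform(0.0, 1.5, 300)
resp = (dist + rng.normal(0.0, 0.1, dist.size) < 0.5).astype(int)
pspd, spread = fit_pspd(dist, resp)
```

Comparing PSPD (and spread) fitted per side and per viewing condition yields exactly the CE and uncertainty comparisons reported in the Results.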
Fast Fourier single-pixel imaging via binary illumination.
Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang
2017-09-20
Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
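The binarization step can be illustrated with classic Floyd-Steinberg error diffusion applied to an upsampled grayscale fringe; the paper's exact upsampling-plus-dithering pipeline may differ, so treat this as a minimal sketch:

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale pattern (values in [0, 1]) by Floyd-Steinberg
    error diffusion: threshold each pixel and push its quantization error
    onto unvisited neighbours, so the binary output preserves the local
    mean intensity of the grayscale input."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A low-frequency cosine fringe, like a single Fourier basis pattern
x = np.linspace(0.0, 2.0 * np.pi, 256)
pattern = 0.5 + 0.5 * np.cos(np.outer(np.ones(256), x))
binary = floyd_steinberg(pattern)
```

Because the binary pattern matches the grayscale pattern's local mean, a DMD can flicker it at its full binary refresh rate while the scene effectively integrates the intended grayscale illumination.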
Electron imaging with an EBSD detector.
Wright, Stuart I; Nowell, Matthew M; de Kloe, René; Camus, Patrick; Rampton, Travis
2015-01-01
Electron backscatter diffraction (EBSD) has proven to be a useful tool for characterizing the crystallographic orientation aspects of microstructures at length scales ranging from tens of nanometers to millimeters in the scanning electron microscope (SEM). With the advent of high-speed digital cameras for EBSD use, it has become practical to use the EBSD detector as an imaging device similar to a backscatter (or forward-scatter) detector. Using the EBSD detector in this manner enables images exhibiting topographic, atomic density and orientation contrast to be obtained at rates similar to slow scanning in the conventional SEM manner. The high-speed acquisition is achieved through extreme binning of the camera, enough to result in a 5 × 5 pixel pattern. At such high binning, the captured patterns are not suitable for indexing. However, no indexing is required to use the detector as an imaging device. Rather, a 5 × 5 array of images is formed by essentially using each pixel in the 5 × 5 pixel pattern as an individual scattered-electron detector. The images can be formed at traditional EBSD scanning rates by recording the image data during a scan, or through post-processing of patterns recorded at each point in the scan. Such images lend themselves to correlative analysis of image data with the usual orientation data and with chemical data obtained simultaneously via X-ray energy dispersive spectroscopy (XEDS).
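The imaging trick described above — treating each pixel of the heavily binned pattern as its own scattered-electron detector — amounts to an axis transpose over the scan data. The array layout and synthetic scan below are hypothetical, for illustration only:

```python
import numpy as np

def virtual_detector_images(patterns):
    """Turn a scan of extremely binned EBSD patterns into per-pixel images.

    `patterns` has shape (rows, cols, 5, 5): one 5x5 binned pattern per
    scan point. Transposing the axes to (5, 5, rows, cols) makes each of
    the 25 pattern pixels a full-scan image, each with its own mix of
    topographic, atomic-density and orientation contrast."""
    p = np.asarray(patterns, dtype=np.float64)
    return p.transpose(2, 3, 0, 1)

# Synthetic 40 x 30 scan of 5 x 5 binned patterns
rng = np.random.default_rng(3)
scan = rng.integers(0, 255, size=(40, 30, 5, 5))
images = virtual_detector_images(scan)
```

No indexing is involved anywhere in this pipeline, which is why it works even though 5 × 5 patterns are far too coarse to index.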
Fluorescent detection of C-reactive protein using polyamide beads
NASA Astrophysics Data System (ADS)
Jagadeesh, Shreesha; Chen, Lu; Aitchison, Stewart
2016-03-01
Bacterial infection causes sepsis, one of the leading causes of mortality in hospitals. This infection can be quantified from blood plasma using C-reactive protein (CRP). A quick diagnosis at the patient's location through point-of-care (POC) testing could give doctors the confidence to prescribe antibiotics. In this paper, the development and testing of a bead-based procedure for CRP quantification is described. The size of the beads enables them to be trapped in wells without the need for magnetic methods of immobilization. Large (1.5 mm diameter) polyamide nylon beads were used as the substrate for capturing CRP from pure analyte samples. The beads captured CRP either directly through adsorption or indirectly via specific capture antibodies on their surface. Both methods used fluorescent imaging techniques to quantify the protein. The amount of CRP needed to give a sufficient fluorescent signal through the direct capture method was found suitable for identifying bacterial causes of infection. Similarly, viral infections could be quantified by the more sensitive indirect capture method. This bead-based assay can potentially be integrated as a disposable cartridge in a POC device due to its passive nature and the small sample quantities needed.
NASA Astrophysics Data System (ADS)
Birkbeck, Aaron L.
A new technology is developed that functionally integrates arrays of lasers and micro-optics into microfluidic systems for the purpose of imaging, analyzing, and manipulating objects and biological cells. In general, the devices and technologies emerging from this area either lack functionality through the reliance on mechanical systems or provide a serial-based, time consuming approach. As compared to the current state of the art, our all-optical design methodology has several distinguishing features, such as parallelism, high efficiency, low power, auto-alignment, and high yield fabrication methods, which all contribute to minimizing the cost of the integration process. The potential use of vertical cavity surface emitting lasers (VCSELs) for the creation of two-dimensional arrays of laser optical tweezers that perform independently controlled, parallel capture and transport of large numbers of individual objects and biological cells is investigated. One of the primary biological applications for which VCSEL array sourced laser optical tweezers are considered is the formation of engineered tissues through the manipulation and spatial arrangement of different types of cells in a co-culture. Creating devices that combine laser optical tweezers with select micro-optical components permits optical imaging and analysis functions to take place inside the microfluidic channel. One such device is a micro-optical spatial filter whose motion and alignment is controlled using a laser optical tweezer. Unlike conventional spatial filter systems, our device utilizes a refractive optical element that is directly incorporated onto the lithographically patterned spatial filter. This allows the micro-optical spatial filter to automatically align itself in three dimensions to the focal point of the microscope objective, where it then filters out the higher frequency additive noise components present in the laser beam.
As a means of performing high resolution imaging in the microfluidic channel, we developed a novel technique that integrates the capacity of a laser tweezer to optically trap and manipulate objects in three-dimensions with the resolution-enhanced imaging capabilities of a solid immersion lens (SIL). In our design, the SIL is a free-floating device whose imaging beam, motion control and alignment is provided by a laser optical tweezer, which allows the microfluidic SIL to image in areas that are inaccessible to traditional solid immersion microscopes.
Co-registered photoacoustic, thermoacoustic, and ultrasound mouse imaging
NASA Astrophysics Data System (ADS)
Reinecke, Daniel R.; Kruger, Robert A.; Lam, Richard B.; DelRio, Stephen P.
2010-02-01
We have constructed and tested a prototype test bed that allows us to form 3D photoacoustic CT images using near-infrared (NIR) irradiation (700 - 900 nm), 3D thermoacoustic CT images using microwave irradiation (434 MHz), and 3D ultrasound images from a commercial ultrasound scanner. The device utilizes a vertically oriented, curved array to capture the photoacoustic and thermoacoustic data. In addition, an 8-MHz linear array fixed in a horizontal position provides the ultrasound data. The photoacoustic and thermoacoustic data sets are co-registered exactly because they use the same detector. The ultrasound data set requires only simple corrections to co-register its images. The photoacoustic, thermoacoustic, and ultrasound images of mouse anatomy reveal complementary anatomic information as they exploit different contrast mechanisms. The thermoacoustic images differentiate between muscle, fat and bone. The photoacoustic images reveal the hemoglobin distribution, which is localized predominantly in the vascular space. The ultrasound images provide detailed information about the bony structures. Superposition of all three images onto a co-registered hybrid image shows the potential of a trimodal photoacoustic-thermoacoustic-ultrasound small-animal imaging system.
Technical advances of interventional fluoroscopy and flat panel image receptor.
Lin, Pei-Jan Paul
2008-11-01
In the past decade, various radiation reducing devices and control circuits have been implemented on fluoroscopic imaging equipment. Because of the potential for lengthy fluoroscopic procedures in interventional cardiovascular angiography, these devices and control circuits have been developed for the cardiac catheterization laboratories and interventional angiography suites. Additionally, fluoroscopic systems equipped with image intensifiers have benefited from technological advances in x-ray tube, x-ray generator, and spectral shaping filter technologies. The high heat capacity x-ray tube, the medium frequency inverter generator with high performance switching capability, and the patient dose reduction spectral shaping filter had already been implemented on the image intensified fluoroscopy systems. These three underlying technologies together with the automatic dose rate and image quality (ADRIQ) control logic allow patients undergoing cardiovascular angiography procedures to benefit from "lower patient dose" with "high image quality." While photoconductor (or phosphor plate) x-ray detectors and signal capture thin film transistor (TFT) and charge coupled device (CCD) arrays are analog in nature, the advent of the flat panel image receptor allowed for fluoroscopy procedures to become more streamlined. With the analog-to-digital converter built into the data lines, the flat panel image receptor appears to become a digital device. While the transition from image intensified fluoroscopy systems to flat panel image receptor fluoroscopy systems is part of the on-going "digitization of imaging," the value of a flat panel image receptor may have to be evaluated with respect to patient dose, image quality, and clinical application capabilities. The advantage of flat panel image receptors has yet to be fully explored. 
For instance, the flat panel image receptor has its disadvantages as compared to the image intensifiers; the cost of the equipment is probably the most obvious. On the other hand, due to its wide dynamic range and linearity, lowering of patient dose beyond current practice could be achieved through the calibration process of the flat panel input dose rate being set to, for example, one half or less of current values. In this article various radiation saving devices and control circuits are briefly described. This includes various types of fluoroscopic systems designed to strive for reduction of patient exposure with the application of spectral shaping filters. The main thrust is to understand the ADRIQ control logic, through equipment testing, as it relates to clinical applications, and to show how this ADRIQ control logic "ties" those three technological advancements together to provide low radiation dose to the patient with high quality fluoroscopic images. Finally, rotational angiography with computed tomography (CT) and three dimensional (3-D) images utilizing flat panel technology will be reviewed as they pertain to diagnostic imaging in cardiovascular disease.
40 CFR 63.4291 - What are my options for meeting the emission limits?
Code of Federal Regulations, 2010 CFR
2010-07-01
... emission capture systems and add-on controls, the organic HAP emission rate for the web coating/printing... demonstrate that all capture systems and control devices for the web coating/printing operation(s) meet the... capture systems and control devices for the web coating/printing operation(s) meet the operating limits...
40 CFR 63.1459 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... device that collects particulate matter by filtering the gas stream through bags. A baghouse is also... converter bath. Capture system means the collection of components used to capture gases and fumes released from one or more emission points, and to convey the captured gases and fumes to a control device. A...
40 CFR 63.4181 - What definitions apply to this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... commercial or industrial HVAC systems. Manufacturer's formulation data means data on a material (such as a... capture system efficiency means the portion (expressed as a percentage) of the pollutants from an emission source that is delivered to an add-on control device. Capture system means one or more capture devices...
40 CFR 63.4181 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... commercial or industrial HVAC systems. Manufacturer's formulation data means data on a material (such as a... capture system efficiency means the portion (expressed as a percentage) of the pollutants from an emission source that is delivered to an add-on control device. Capture system means one or more capture devices...
40 CFR 63.11516 - What are my standards and management practices?
Code of Federal Regulations, 2013 CFR
2013-07-01
... capture emissions and vent them to a filtration control device. You must operate the filtration control... requirement by maintaining a record of the manufacturer's specifications for the filtration control devices... to emit MFHAP. (1) You must capture emissions and vent them to a filtration control device. You must...
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but also, with gated on-chip integration, has the capability to record low-light-level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
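As an illustration of the contrast-enhancement step this report discusses, a linear percentile stretch for 256-gray-level images might look like the following (a generic sketch, not the authors' code; the function name and percentile defaults are assumptions):

```python
import numpy as np

def stretch_contrast(img, low_pct=1.0, high_pct=99.0, out_max=255):
    """Linear contrast stretch for 256-gray-level images: map the given
    percentile range of input levels onto the full 0..out_max scale."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) * out_max / max(hi - lo, 1e-9)
    return np.clip(out, 0, out_max).astype(np.uint8)

# A low-contrast ramp occupying levels 100..150 fills 0..255 afterwards
img = np.linspace(100, 150, 256).reshape(16, 16).astype(np.uint8)
out = stretch_contrast(img)
assert out.min() == 0 and out.max() == 255
```

Clipping at the chosen percentiles rather than the absolute extremes keeps a few hot or dead pixels from compressing the useful gray range.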
STS-109 MS Linnehan with laser range finder on aft flight deck
2002-03-02
STS109-E-5003 (3 March 2002) --- Astronaut Richard M. Linnehan, mission specialist, uses a laser ranging device designed to measure the range between two spacecraft. Linnehan positioned himself on the cabin's aft flight deck as the Space Shuttle Columbia approached the Hubble Space Telescope. A short time later, the STS-109 crew captured and latched down the giant telescope in the vehicle's cargo bay for several days of work on the Hubble. The image was recorded with a digital still camera.
STS-109 MS Linnehan with laser range finder on aft flight deck
2002-03-02
STS109-E-5002 (3 March 2002) --- Astronaut Richard M. Linnehan, mission specialist, uses a laser ranging device designed to measure the range between two spacecraft. Linnehan positioned himself on the cabin's aft flight deck as the Space Shuttle Columbia approached the Hubble Space Telescope. A short time later, the STS-109 crew captured and latched down the giant telescope in the vehicle's cargo bay for several days of work on the Hubble. The image was recorded with a digital still camera.
Vasconcelos, Jayro Thadeu Paiva de; Martins, Sebastião; Sousa, João Francisco de; Portela, Antenor
2005-08-01
Takotsubo Cardiomyopathy is a rare cause of acute left ventricular aneurysm in the absence of coronary artery disease, only recently described in the world literature. Symptoms may be similar to those of acute myocardial infarction, with typical thoracic pain. The dumbbell- or takotsubo-shaped image of ventricular ballooning (a takotsubo is a pot used in Japan to capture octopus) is characteristic of this new syndrome, and the dyskinetic movement usually disappears by around the 18th day after the onset of symptoms, on average.
Unsupervised color normalisation for H and E stained histopathology image analysis
NASA Astrophysics Data System (ADS)
Celis, Raúl; Romero, Eduardo
2015-12-01
In histology, each dye component attempts to specifically characterise different microscopic structures. In the case of the Hematoxylin-Eosin (H&E) stain, universally used for routine examination, quantitative analysis may often require the inspection of different morphological signatures related mainly to nuclei patterns, but also to stroma distribution. Nevertheless, computer systems for automatic diagnosis are often confounded by colour variations ranging from the capturing device to the laboratory-specific staining protocol and stains. This paper presents a novel colour normalisation method for H&E stained histopathology images. This method is based upon the opponent process theory and blindly estimates the best colour basis for the Hematoxylin and Eosin stains without relying on prior knowledge. Stain normalisation and colour separation are transversal to any framework of histopathology image analysis.
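The colour-basis estimation this abstract describes is blind; purely for illustration, the sketch below shows the standard downstream step of separating H&E stains once a basis is known, using Beer-Lambert optical-density mixing and a fixed Ruifrok-style basis (all names and numeric values here are illustrative assumptions, not the paper's method):

```python
import numpy as np

def to_optical_density(rgb, i0=255.0):
    """Beer-Lambert conversion: in optical density (OD) space, stain
    contributions combine approximately linearly."""
    rgb = np.maximum(np.asarray(rgb, dtype=float), 1.0)  # avoid log(0)
    return -np.log10(rgb / i0)

def separate_stains(od_pixels, stain_basis):
    """Least-squares projection of OD pixels (N x 3) onto a stain basis
    (rows = OD vectors for Hematoxylin and Eosin), giving per-pixel
    stain concentrations (N x 2)."""
    coeffs, *_ = np.linalg.lstsq(stain_basis.T, od_pixels.T, rcond=None)
    return coeffs.T

# Illustrative Ruifrok-style basis vectors (hypothetical values)
basis = np.array([[0.65, 0.70, 0.29],   # Hematoxylin
                  [0.07, 0.99, 0.11]])  # Eosin
conc = np.array([[0.5, 0.2],            # two synthetic pixels
                 [0.1, 0.8]])
od = conc @ basis
assert np.allclose(separate_stains(od, basis), conc, atol=1e-8)
```

A blind method such as the one proposed would estimate `basis` from the image itself instead of assuming it.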
An acoustic charge transport imager for high definition television applications
NASA Technical Reports Server (NTRS)
Hunt, W. D.; Brennan, K. F.; Summers, C. J.
1994-01-01
The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels/frame. This imager will offer an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to UV. The technical approach of the project focuses on the development of the three basic components of the imager and their subsequent integration. The camera chip can be divided into three distinct functions: (1) image capture via an array of avalanche photodiodes (APD's); (2) charge collection, storage, and overflow control via a charge transfer transistor device (CTD); and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front-end gain at low noise and low operating voltages, while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently, work is progressing toward the optimization of each of these component devices. In addition to the development of each of the three distinct components, work toward their integration and manufacturability is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system-level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrain the design as well. The progress made during this period is described in detail.
Advantages and Disadvantages in Image Processing with Free Software in Radiology.
Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan
2018-01-15
Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and the source code is freely available, so they can be easily obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, these free applications have limitations in their use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices. These radiological devices are basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost which always depends on the software provider and is always subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We will compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.
Code of Federal Regulations, 2010 CFR
2010-07-01
... systems or add-on control devices which you choose not to take into account when demonstrating compliance... system or add-on control device which is not taken into account when demonstrating compliance with the....3169 What are the requirements for a capture system or add-on control device which is not taken into...
Code of Federal Regulations, 2010 CFR
2010-07-01
... control device which is not taken into account when demonstrating compliance with the applicable emission limitations? You may have capture systems or add-on control devices which you choose not to take into account... system or add-on control device which is not taken into account when demonstrating compliance with the...
40 CFR 63.3100 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... be in compliance with the operating limits for emission capture systems and add-on control devices...) You must maintain a log detailing the operation and maintenance of the emission capture systems, add... capture system and add-on control device performance tests have been completed, as specified in § 63.3160...
40 CFR 63.3100 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... be in compliance with the operating limits for emission capture systems and add-on control devices...) You must maintain a log detailing the operation and maintenance of the emission capture systems, add... capture system and add-on control device performance tests have been completed, as specified in § 63.3160...
40 CFR 63.3100 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... be in compliance with the operating limits for emission capture systems and add-on control devices...) You must maintain a log detailing the operation and maintenance of the emission capture systems, add... capture system and add-on control device performance tests have been completed, as specified in § 63.3160...
Rasooly, Reuven; Bruck, Hugh Alan; Balsam, Joshua; Prickril, Ben; Ossandon, Miguel; Rasooly, Avraham
2016-05-17
Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt for use in these settings. These challenges can be mitigated by taking advantage of affordable consumer electronics such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications, including detection of microbial toxins such as C. botulinum type A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivities of these devices limit their clinical utility. We have developed several approaches to improving their sensitivity, presented here for webcam-based fluorescence detectors, including (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help to overcome some of the limitations of other low-cost optical detection technologies, such as CCD or phone-based detectors (like high noise levels or low sensitivities), and provide for their use in low-cost medical diagnostics in resource-poor settings.
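The image-stacking approach listed as item (1) rests on a simple statistical fact: averaging N co-registered frames leaves the signal unchanged while uncorrelated sensor noise falls roughly as the square root of N. A minimal sketch (synthetic data; not the authors' code):

```python
import numpy as np

def stack_frames(frames):
    """Average N co-registered frames; uncorrelated sensor noise falls
    roughly as sqrt(N) while the fluorescence signal is preserved."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

rng = np.random.default_rng(1)
signal = np.full((100, 100), 50.0)                  # true fluorescence level
frames = [signal + rng.normal(0, 10, signal.shape) for _ in range(64)]

single_noise = np.std(frames[0] - signal)
stacked_noise = np.std(stack_frames(frames) - signal)
# Stacking 64 frames should cut the noise roughly 8-fold
assert stacked_noise < single_noise / 5
```

This is what lets a noisy webcam sensor approach the sensitivity of a far more expensive detector, at the cost of acquisition time.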
Validating models of target acquisition performance in the dismounted soldier context
NASA Astrophysics Data System (ADS)
Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.
2018-04-01
The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. 
These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
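The V50 calibration discussed in this abstract enters through the target transfer probability function associated with the TTP metric. The standard NVESD logistic form is sketched below as an assumption about how NV-IPM maps a resolvable-cycle value V and a task-difficulty constant V50 to a task-completion probability (a sketch for orientation, not NV-IPM source code):

```python
def ttpf(v, v50):
    """Target transfer probability function (standard NVESD logistic
    form): probability of completing a detection or identification
    task given the cycle metric value V and task constant V50."""
    e = 1.51 + 0.24 * (v / v50)          # empirical slope exponent
    r = (v / v50) ** e
    return r / (1.0 + r)

# By construction, P = 0.5 when V equals V50.
assert abs(ttpf(10.0, 10.0) - 0.5) < 1e-12
```

Fitting V50 to behavioral data, as the authors do, shifts this whole curve along the V axis; the shallow-slope mismatch they report concerns the exponent term.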
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
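The linear correction described above combines a dark image, a base correction image, and a reference digital level. One common form consistent with those ingredients is sketched below (the function name, the exact normalization, and the default reference choice are assumptions about the published algorithm):

```python
import numpy as np

def flat_field_correct(raw, dark, base, reference=None):
    """Linear spatial non-uniformity correction.

    raw, dark, base : 2-D arrays (captured image, dark image, base
    correction image). reference : scalar reference digital level;
    here assumed to default to the mean net level of the base image.
    """
    base_net = base.astype(float) - dark
    if reference is None:
        reference = base_net.mean()
    return (raw.astype(float) - dark) * reference / np.maximum(base_net, 1e-9)

# A uniform field imaged through a non-uniform gain map should come
# out flat after correction.
rng = np.random.default_rng(2)
gain = 1.0 + 0.2 * rng.random((32, 32))     # pixel-to-pixel sensitivity
dark = np.full((32, 32), 5.0)
base = 100.0 * gain + dark                   # base image of a uniform field
raw = 80.0 * gain + dark
corrected = flat_field_correct(raw, dark, base)
assert corrected.std() < 1e-6
```

Dividing by the net base image cancels the per-pixel gain, which is why a nonzero dark image and a base image in the camera's linear range matter.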
Multimode optical dermoscopy (SkinSpect) analysis for skin with melanocytic nevus
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf; Kelly, Kristen M.; Maly, Tyler; Chave, Robert; Booth, Nicholas; Durkin, Anthony J.; Farkas, Daniel L.
2016-04-01
We have developed a multimode dermoscope (SkinSpect™) capable of illuminating human skin samples in-vivo with spectrally-programmable linearly-polarized light at 33 wavelengths between 468nm and 857 nm. Diffusely reflected photons are separated into collinear and cross-polarized image paths and images captured for each illumination wavelength. In vivo human skin nevi (N = 20) were evaluated with the multimode dermoscope and melanin and hemoglobin concentrations were compared with Spatially Modulated Quantitative Spectroscopy (SMoQS) measurements. Both systems show low correlation between their melanin and hemoglobin concentrations, demonstrating the ability of the SkinSpect™ to separate these molecular signatures and thus act as a biologically plausible device capable of early onset melanoma detection.
Automated camera-phone experience with the frequency of imaging necessary to capture diet.
Arab, Lenore; Winter, Ashley
2010-08-01
Camera-enabled cell phones provide an opportunity to strengthen dietary recall through automated imaging of foods eaten during a specified period. To explore the frequency of imaging needed to capture all foods eaten, we examined the number of images of individual foods consumed in a pilot study of automated imaging using camera phones set to an image-capture frequency of one snapshot every 10 seconds. Food images were tallied from 10 young adult subjects who wore the phone continuously during the work day and consented to share their images. Based on the number of images received for each eating experience, the pilot data suggest that automated capturing of images at a frequency of once every 10 seconds is adequate for recording foods consumed during regular meals, whereas a greater frequency of imaging is necessary to capture snacks and beverages eaten quickly. 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
Enciso, R; Memon, A; Mah, J
2003-01-01
The research goal at the Craniofacial Virtual Reality Laboratory of the School of Dentistry in conjunction with the Integrated Media Systems Center, School of Engineering, University of Southern California, is to develop computer methods to accurately visualize patients in three dimensions using advanced imaging and data acquisition devices such as cone-beam computerized tomography (CT) and mandibular motion capture. Data from these devices were integrated for three-dimensional (3D) patient-specific visualization, modeling and animation. Generic methods are in development that can be used with common CT image format (DICOM), mesh format (STL) and motion data (3D position over time). This paper presents preliminary descriptive studies on: 1) segmentation of the lower and upper jaws with two types of CT data--(a) traditional whole head CT data and (b) the new dental Newtom CT; 2) manual integration of accurate 3D tooth crowns with the segmented lower jaw 3D model; 3) realistic patient-specific 3D animation of the lower jaw.
Photorefraction of eyes: history and future prospects.
Howland, Howard C
2009-06-01
A brief history of photorefraction, i.e., the refraction of eyes by photography or computer image capture, is given. The method of photorefraction originated from an optical scheme for secret communication across the Berlin wall. This scheme used a lens whose focus about infinity was modulated by a movable reflecting surface. From this device, it was recognized that the vertebrate eye was such a reflector and that its double-pass point spread could be used to compute its degree of defocus. Subsequently, a second, totally independent invention, more accurately termed "photoretinoscopy," used an eccentric light source and obtained retinoscopic-like images of the reflex in the pupil of the subject's eyes. Photoretinoscopy has become the preferred method of photorefraction and has been instantiated in a wide variety of devices used in vision screening and research. This has been greatly helped by the parallel development of computer and digital camera technology. It seems likely that photorefractive methods will continue to be refined and may eventually become ubiquitous in clinical practice.
High-speed AFM for scanning the architecture of living cells
NASA Astrophysics Data System (ADS)
Li, Jing; Deng, Zhifeng; Chen, Daixie; Ao, Zhuo; Sun, Quanmei; Feng, Jiantao; Yin, Bohua; Han, Li; Han, Dong
2013-08-01
We address the modelling of tip-cell membrane interactions under high speed atomic force microscopy. Using a home-made device with a scanning area of 100 × 100 μm², in situ imaging of living cells is successfully performed under loading rates from 1 to 50 Hz, intending to enable detailed descriptions of physiological processes in living samples. Electronic supplementary information (ESI) available: movie of the real-time change of the inner surface within a fresh blood vessel, captured at a speed of 30 Hz in the range of 80 μm × 80 μm. See DOI: 10.1039/c3nr01464a
Elliott, Amicia D.; Gao, Liang; Ustione, Alessandro; Bedard, Noah; Kester, Robert; Piston, David W.; Tkaczyk, Tomasz S.
2012-01-01
The development of multi-colored fluorescent proteins, nanocrystals and organic fluorophores, along with the resulting engineered biosensors, has revolutionized the study of protein localization and dynamics in living cells. Hyperspectral imaging has proven to be a useful approach for such studies, but this technique is often limited by low signal and insufficient temporal resolution. Here, we present an implementation of a snapshot hyperspectral imaging device, the image mapping spectrometer (IMS), which acquires full spectral information simultaneously from each pixel in the field without scanning. The IMS is capable of real-time signal capture from multiple fluorophores with high collection efficiency (∼65%) and image acquisition rate (up to 7.2 fps). To demonstrate the capabilities of the IMS in cellular applications, we have combined fluorescent protein (FP)-FRET and [Ca2+]i biosensors to measure simultaneously intracellular cAMP and [Ca2+]i signaling in pancreatic β-cells. Additionally, we have compared quantitatively the IMS detection efficiency with a laser-scanning confocal microscope. PMID:22854044
Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software
NASA Technical Reports Server (NTRS)
Ruiz, Ronald P.
2003-01-01
Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was used to make the image of the target and background appear as unity. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.
Air-coupled acoustic thermography for in-situ evaluation
NASA Technical Reports Server (NTRS)
Zalameda, Joseph N. (Inventor); Winfree, William P. (Inventor); Yost, William T. (Inventor)
2010-01-01
Acoustic thermography uses a housing configured for thermal, acoustic and infrared radiation shielding. For in-situ applications, the housing has an open side adapted to be sealingly coupled to a surface region of a structure such that an enclosed chamber filled with air is defined. One or more acoustic sources are positioned to direct acoustic waves through the air in the enclosed chamber and towards the surface region. To activate and control each acoustic source, a pulsed signal is applied thereto. An infrared imager focused on the surface region detects a thermal image of the surface region. A data capture device records the thermal image in synchronicity with each pulse of the pulsed signal such that a time series of thermal images is generated. For enhanced sensitivity and/or repeatability, sound and/or vibrations at the surface region can be used in feedback control of the pulsed signal applied to the acoustic sources.
Real-time biscuit tile image segmentation method based on edge detection.
Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter
2018-05-01
In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods, and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
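As an illustrative aside, the row-wise signal-change idea behind edge-based tile segmentation can be sketched in a few lines of Python. The `segment_tile_rows` helper and the synthetic frame are hypothetical; the published BTS method adds contour tracing and GPU parallelization, neither of which is reproduced here.

```python
import numpy as np

def segment_tile_rows(image, grad_thresh=30):
    """Toy 1-D edge-based segmentation: in each row, mark the span
    between the first and last strong intensity change as tile."""
    mask = np.zeros(image.shape, dtype=bool)
    for r, row in enumerate(image.astype(int)):
        edges = np.flatnonzero(np.abs(np.diff(row)) > grad_thresh)
        if edges.size >= 2:
            mask[r, edges[0] + 1 : edges[-1] + 1] = True
    return mask

# synthetic frame: dark conveyor background with a bright 40x60 tile
frame = np.full((100, 100), 20, dtype=np.uint8)
frame[30:70, 20:80] = 200
mask = segment_tile_rows(frame)
```

A real production-line implementation would also trace the tile contour and run the per-row scans in parallel on the GPU.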
Isolation of circulating tumor cells using a microvortex-generating herringbone-chip.
Stott, Shannon L; Hsu, Chia-Hsien; Tsukrov, Dina I; Yu, Min; Miyamoto, David T; Waltman, Belinda A; Rothenberg, S Michael; Shah, Ajay M; Smas, Malgorzata E; Korir, George K; Floyd, Frederick P; Gilman, Anna J; Lord, Jenna B; Winokur, Daniel; Springer, Simeon; Irimia, Daniel; Nagrath, Sunitha; Sequist, Lecia V; Lee, Richard J; Isselbacher, Kurt J; Maheswaran, Shyamala; Haber, Daniel A; Toner, Mehmet
2010-10-26
Rare circulating tumor cells (CTCs) present in the bloodstream of patients with cancer provide a potentially accessible source for detection, characterization, and monitoring of nonhematological cancers. We previously demonstrated the effectiveness of a microfluidic device, the CTC-Chip, in capturing these epithelial cell adhesion molecule (EpCAM)-expressing cells using antibody-coated microposts. Here, we describe a high-throughput microfluidic mixing device, the herringbone-chip, or "HB-Chip," which provides an enhanced platform for CTC isolation. The HB-Chip design applies passive mixing of blood cells through the generation of microvortices to significantly increase the number of interactions between target CTCs and the antibody-coated chip surface. Efficient cell capture was validated using defined numbers of cancer cells spiked into control blood, and clinical utility was demonstrated in specimens from patients with prostate cancer. CTCs were detected in 14 of 15 (93%) patients with metastatic disease (median = 63 CTCs/mL, mean = 386 ± 238 CTCs/mL), and the tumor-specific TMPRSS2-ERG translocation was readily identified following RNA isolation and RT-PCR analysis. The use of transparent materials allowed for imaging of the captured CTCs using standard clinical histopathological stains, in addition to immunofluorescence-conjugated antibodies. In a subset of patient samples, the low shear design of the HB-Chip revealed microclusters of CTCs, previously unappreciated tumor cell aggregates that may contribute to the hematogenous dissemination of cancer.
Her, Ae-Young; Lim, Kyung-Hun; Shin, Eun-Seok
2018-01-27
This case study describes the successful percutaneous transcatheter retrieval of an embolized Amplatzer occluder device using the "waist capture technique" in a patient with an atrial septal defect. This technique allowed for stability of the Amplatzer device, compression of the atrial discs for easier removal, prevention of further embolization, and minimal injury to vasculature during device retrieval. This novel and effective technique can be used safely for the retrieval of Amplatzer devices in the venous system.
NASA Astrophysics Data System (ADS)
Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.
2016-03-01
Today, subject's medical data in controlled clinical trials is captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support integration of subject's image data, although medical imaging is looming large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system in clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images, which have been collected in the study so far, have been correctly identified and successfully integrated into the corresponding subject's eCRF. Using this system, manual steps for the study personnel are reduced and, therefore, errors, latency, and costs are decreased. Our approach also increases data security and privacy.
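The color-calibration step described above can be illustrated with a minimal per-channel gain correction. `calibrate_colors` and its gray-patch target are hypothetical stand-ins, not the App's actual algorithm (which also corrects geometry and contrast from the reference cards).

```python
import numpy as np

def calibrate_colors(image, patch_box, target=(200, 200, 200)):
    """Map the mean RGB of a known reference patch to its nominal
    value and apply the same per-channel gains to the whole image.
    patch_box = (row0, row1, col0, col1) of the reference patch."""
    r0, r1, c0, c1 = patch_box
    measured = image[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)
    gains = np.asarray(target, dtype=float) / measured
    return np.clip(image * gains, 0, 255).astype(np.uint8)

# a uniformly underexposed frame whose gray patch should read 200
img = np.full((10, 10, 3), 100, dtype=np.uint8)
out = calibrate_colors(img, (0, 5, 0, 5))
```

This simple linear model ignores nonlinear camera response; production pipelines typically fit a fuller color transform from several patches.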
Lopez-Ruiz, Nuria; Curto, Vincenzo F; Erenas, Miguel M; Benito-Lopez, Fernando; Diamond, Dermot; Palma, Alberto J; Capitan-Vallvey, Luis F
2014-10-07
In this work, an Android application for measurement of nitrite concentration and pH determination in combination with a low-cost paper-based microfluidic device is presented. The application uses seven sensing areas, containing the corresponding immobilized reagents, to produce selective color changes when a sample solution is placed in the sampling area. Under controlled conditions of light, using the flash of the smartphone as a light source, the image captured with the built-in camera is processed using a customized algorithm for multidetection of the colored sensing areas. The developed image processing reduces the influence of the light source and of the positioning of the microfluidic device in the picture. Then, the H (hue) and S (saturation) coordinates of the HSV color space are extracted and related to pH and nitrite concentration, respectively. A complete characterization of the sensing elements has been carried out as well as a full description of the image analysis for detection. The results demonstrate the viability of a mobile phone as an analytical instrument. For the pH, the resolution obtained is 0.04 pH units, with an accuracy of 0.09 and a mean squared error of 0.167. With regard to nitrite, a resolution of 0.51% at 4.0 mg L(-1) and a limit of detection of 0.52 mg L(-1) were achieved.
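The extraction of H and S coordinates from a sensing area can be sketched with Python's standard `colorsys` module. The patch-averaging helper below is an assumption for illustration; the actual application additionally controls illumination with the flash and corrects for device positioning.

```python
import colorsys
import numpy as np

def patch_hue_saturation(rgb_patch):
    """Mean hue and saturation (both in 0..1) of an RGB patch,
    e.g. one colored sensing area cropped from the photo."""
    pixels = rgb_patch.reshape(-1, 3) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in pixels])
    return hsv[:, 0].mean(), hsv[:, 1].mean()

# a fully saturated red patch: hue 0.0, saturation 1.0
h, s = patch_hue_saturation(np.full((4, 4, 3), (255, 0, 0), dtype=np.uint8))
```

The mean H would then be read against a pH calibration curve and the mean S against a nitrite calibration curve.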
An acoustic charge transport imager for high definition television applications
NASA Technical Reports Server (NTRS)
Hunt, W. D.; Brennan, Kevin F.
1994-01-01
The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels per frame. This imager offers an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their integration. The imager chip can be divided into three distinct components: (1) image capture via an array of avalanche photodiodes (APD's), (2) charge collection, storage and overflow control via a charge transfer transistor device (CTD), and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front end gain at low noise and low operating voltages while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently work is progressing towards the development of manufacturable designs for each of these component devices. In addition to the development of each of the three distinct components, work towards their integration is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrains the design as well. The progress made during this period is described in detail in Sections 2-4.
Code of Federal Regulations, 2014 CFR
2014-07-01
... capture system and add-on control device operating limits during the performance test? 63.3556 Section 63... system and add-on control device operating limits during the performance test? During the performance... of key parameters of the valve operating system (e.g., solenoid valve operation, air pressure...
Code of Federal Regulations, 2012 CFR
2012-07-01
... capture system and add-on control device operating limits during the performance test? 63.3556 Section 63... system and add-on control device operating limits during the performance test? During the performance... of key parameters of the valve operating system (e.g., solenoid valve operation, air pressure...
Cameras and settings for optimal image capture from UAVs
NASA Astrophysics Data System (ADS)
Smith, Mike; O'Connor, James; James, Mike R.
2017-04-01
Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
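One of the underpinning concepts such a contribution revisits, the coupling between flying height, focal length, and exposure time, can be made concrete with the standard ground-sample-distance and motion-blur relations (the function names here are ours, not the paper's):

```python
def ground_sample_distance(pixel_pitch_m, altitude_m, focal_length_m):
    """Ground footprint of one pixel (m/px) for a nadir-pointing camera:
    GSD = pixel pitch * altitude / focal length."""
    return pixel_pitch_m * altitude_m / focal_length_m

def motion_blur_px(speed_m_s, exposure_s, gsd_m):
    """Forward motion of the platform during the exposure, in pixels."""
    return speed_m_s * exposure_s / gsd_m

# e.g. 4.4 um pixels, 120 m altitude, 24 mm lens -> 2.2 cm/px;
# at 10 m/s and 1/1000 s the blur stays under half a pixel
gsd = ground_sample_distance(4.4e-6, 120, 24e-3)
blur = motion_blur_px(10, 1 / 1000, gsd)
```

Keeping the blur well below one pixel is one of the requirements for the sharp, well exposed imagery the authors derive.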
Casperson, Shanon L; Sieling, Jared; Moon, Jon; Johnson, LuAnn; Roemmich, James N; Whigham, Leah
2015-03-13
Mobile technologies are emerging as valuable tools to collect and assess dietary intake. Adolescents readily accept and adopt new technologies; thus, a food record app (FRapp) may be a useful tool to better understand adolescents' dietary intake and eating patterns. We sought to determine the amenability of adolescents, in a free-living environment with minimal parental input, to use the FRapp to record their dietary intake. Eighteen community-dwelling adolescents (11-14 years) received detailed instructions to record their dietary intake for 3-7 days using the FRapp. Participants were instructed to capture before and after images of all foods and beverages consumed and to include a fiducial marker in the image. Participants were also asked to provide text descriptors including amount and type of all foods and beverages consumed. Eight of 18 participants were able to follow all instructions: included pre- and post-meal images, a fiducial marker, and a text descriptor and collected diet records on 2 weekdays and 1 weekend day. Dietary intake was recorded on average for 3.2 (SD 1.3 days; 68% weekdays and 32% weekend days) with an average of 2.2 (SD 1.1) eating events per day per participant. A total of 143 eating events were recorded, of which 109 had at least one associated image and 34 were recorded with text only. Of the 109 eating events with images, 66 included all foods, beverages and a fiducial marker and 44 included both a pre- and post-meal image. Text was included with 78 of the captured images. Of the meals recorded, 36, 33, 35, and 39 were breakfasts, lunches, dinners, and snacks, respectively. These data suggest that mobile devices equipped with an app to record dietary intake will be used by adolescents in a free-living environment; however, a minority of participants followed all directions. User-friendly mobile food record apps may increase participant amenability, increasing our understanding of adolescent dietary intake and eating patterns. 
To improve data collection, the FRapp should deliver prompts for tasks, such as capturing images before and after each eating event, including the fiducial marker in the image, providing complete and accurate text information, and ensuring all eating events are recorded and should be customizable to individuals and to different situations. Clinicaltrials.gov NCT01803997. http://clinicaltrials.gov/ct2/show/NCT01803997 (Archived at: http://www.webcitation.org/6WiV1vxoR).
Beck, Adam W; Lombardi, Joseph V; Abel, Dorothy B; Morales, J Pablo; Marinac-Dabic, Danica; Wang, Grace; Azizzadeh, Ali; Kern, John; Fillinger, Mark; White, Rodney; Cronenwett, Jack L; Cambria, Richard P
2017-05-01
United States Food and Drug Administration (FDA)-mandated postapproval studies have long been a mainstay of the continued evaluation of high-risk medical devices after initial marketing approval; however, these studies often present challenges related to patient/physician recruitment and retention. Retrospective single-center studies also do not fully represent the spectrum of real-world performance, nor are they likely to have a sufficiently large sample size to detect important signals. In recent years, The FDA Center for Devices and Radiological Health has been promoting the development and use of patient registries to advance infrastructure and methodologies for medical device investigation. The FDA 2012 document, "Strengthening the National System for Medical Device Post-market Surveillance," highlighted registries as a core foundational infrastructure when linked to other complementary data sources, including embedded unique device identification. The Vascular Quality Initiative (VQI) thoracic endovascular aortic repair for type B aortic dissection project is an innovative method of using quality improvement registries to meet the needs of device evaluation after market approval. Here we report the organization and background of this project and highlight the innovation facilitated by collaboration of physicians, the FDA, and device manufacturers. This effort used an existing national network of VQI participants to capture patients undergoing thoracic endovascular aortic repair for acute type B aortic dissection within a registry that aligns with standard practice and existing quality efforts. The VQI captures detailed patient, device, and procedural data for consecutive eligible cases under the auspices of a Patient Safety Organization (PSO). Patients were divided into a 5-year follow-up group (200 acute; 200 chronic dissections) and a 1-year follow-up group (100 acute; 100 chronic). 
The 5-year cohort required additional imaging details, and the 1-year group required standard VQI registry data entry. The sample size of patients in each of the 5-year acute and chronic dissection arms was achieved within 24 months of project initiation, and data capture for the 1-year follow-up group is also nearly complete. Data completeness and follow-up have been excellent, and the two FDA-approved devices for dissection are equally represented. Although the completeness of long-term follow-up is yet to be determined, the rapidity of data collection supports the use of this construct for device assessment after market approval. The alignment of this effort with routine clinical practice and ongoing quality improvement initiatives is critical and has required minimal additional effort by practitioners, thus facilitating patient inclusion. Importantly, the success and development of this unique project has helped inform FDA strategy for future device evaluation after market approval. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Scheimpflug with computational imaging to extend the depth of field of iris recognition systems
NASA Astrophysics Data System (ADS)
Sinharoy, Indranil
Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume-the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with the modern computational imaging techniques to extend the capture volume? The solution we found is, surprisingly, simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration, and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporates the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. 
The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
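The final blending step of a focus-stacking pipeline such as AFS, keeping at each pixel the frame with the strongest local focus measure, can be sketched as follows. This toy Laplacian-based blend assumes the frames are already registered (the step AFS makes analytic by pivoting at the entrance pupil) and is not the authors' implementation.

```python
import numpy as np

def laplacian_sharpness(img):
    """Absolute 4-neighbor Laplacian response as a focus measure."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return np.abs(lap)

def blend_stack(stack):
    """Per pixel, take the value from the sharpest (most in-focus) frame."""
    stack = np.asarray(stack, dtype=float)
    sharp = np.stack([laplacian_sharpness(f) for f in stack])
    best = np.argmax(sharp, axis=0)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# two toy frames, each with detail in focus in a different region
a = np.zeros((6, 10)); a[2, 2] = 100.0   # left detail sharp in frame 0
b = np.zeros((6, 10)); b[2, 7] = 100.0   # right detail sharp in frame 1
fused = blend_stack([a, b])
```

Real pipelines smooth the per-pixel selection map to avoid seams, but the hard argmax above captures the core idea.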
NASA Astrophysics Data System (ADS)
Merkel, Ronny; Breuhan, Andy; Hildebrandt, Mario; Vielhauer, Claus; Bräutigam, Anja
2012-06-01
In the field of crime scene forensics, current methods of evidence collection, such as the acquisition of shoe-marks, tire impressions, palm-prints or fingerprints, are in most cases still performed in an analogue way. For example, fingerprints are captured by powdering and sticky tape lifting, ninhydrine bathing or cyanoacrylate fuming and subsequent photographing. Images of the evidence are then further processed by forensic experts. With the upcoming use of new multimedia systems for the digital capturing and processing of crime scene traces in forensics, higher resolutions can be achieved, leading to a much better quality of forensic images. Furthermore, the fast and mostly automated preprocessing of such data using digital signal processing techniques is an emerging field. Also, by the optical and non-destructive lifting of forensic evidence, traces are not destroyed and therefore can be re-captured, e.g. by creating time series of a trace, to extract its aging behavior and maybe determine the time the trace was left. However, such new methods and tools face different challenges, which need to be addressed before practical application in the field. Based on the example of fingerprint age determination, which has been an unresolved research challenge to forensic experts for decades, we evaluate the influences of different environmental conditions as well as different types of sweating and their implications for the capture sensors, preprocessing methods and feature extraction. We use a Chromatic White Light (CWL) sensor to exemplarily represent such a new optical and contactless measurement device and investigate the influence of 16 different environmental conditions, 8 different sweat types and 11 different preprocessing methods on the aging behavior of 48 fingerprint time series (2592 fingerprint scans in total). We show the challenges that arise for such new multimedia systems capturing and processing forensic evidence.
LED induced autofluorescence (LIAF) imager with eight multi-filters for oral cancer diagnosis
NASA Astrophysics Data System (ADS)
Huang, Ting-Wei; Cheng, Nai-Lun; Tsai, Ming-Hsui; Chiou, Jin-Chern; Mang, Ou-Yang
2016-03-01
Oral cancer is one of the serious and growing problems in many developing and developed countries. Simple oral visual screening by a clinician can prevent 37,000 oral cancer deaths annually worldwide. However, conventional oral examination, with visual inspection and palpation of oral lesions, is not an objective and reliable approach for oral cancer diagnosis; it may delay hospital treatment for oral cancer patients or allow the cancer to progress out of control to a late stage. Therefore, a device for oral cancer detection was developed for early diagnosis and treatment. A portable LED-induced autofluorescence (LIAF) imager has been developed by our group. It contains multiple wavelengths of LED excitation light and a rotary filter ring of eight channels to capture ex-vivo oral tissue autofluorescence images. The advantages of the LIAF imager compared to other devices for oral cancer diagnosis are that it has an L-shaped probe for fixing the object distance, shielding against ambient light, and observing blind spots in the deep pockets between the gums (gingiva) and the lining of the mouth. Besides, the multiple LED excitation wavelengths can induce multiple autofluorescence signals, and the eight-channel rotary filter ring can detect spectral images in multiple narrow bands. The prototype of a portable LIAF imager has been applied in clinical trials for several cases in Taiwan, and the clinical trial images under specific excitation show significant differences between normal tissue and oral cancer tissue in these cases.
Verification and compensation of respiratory motion using an ultrasound imaging system.
Chuang, Ho-Chiao; Hsu, Hsiao-Yu; Chiu, Wei-Hung; Tien, Der-Chi; Wu, Ren-Hong; Hsu, Chung-Hsien
2015-03-01
The purpose of this study was to determine if it is feasible to use ultrasound imaging as an aid for moving the treatment couch during diagnosis and treatment procedures associated with radiation therapy, in order to offset organ displacement caused by respiratory motion. A noninvasive ultrasound system was used to replace the C-arm device during diagnosis and treatment with the aims of reducing the x-ray radiation dose on the human body while simultaneously being able to monitor organ displacements. This study used a proposed respiratory compensating system combined with an ultrasound imaging system to monitor the compensation effect of respiratory motion. The accuracy of the compensation effect was verified by fluoroscopy, which means that fluoroscopy could be replaced so as to reduce unnecessary radiation dose on patients. A respiratory simulation system was used to simulate the respiratory motion of the human abdomen and a strain gauge (respiratory signal acquisition device) was used to capture the simulated respiratory signals. The target displacements could be detected by an ultrasound probe and used as a reference for adjusting the gain value of the respiratory signal used by the respiratory compensating system. This ensured that the amplitude of the respiratory compensation signal was a faithful representation of the target displacement. The results show that performing respiratory compensation with the assistance of the ultrasound images reduced the compensation error of the respiratory compensating system to 0.81-2.92 mm, both for sine-wave input signals with amplitudes of 5, 10, and 15 mm, and human respiratory signals; this represented compensation of the respiratory motion by up to 92.48%. In addition, the respiratory signals of 10 patients were captured in clinical trials, while their diaphragm displacements were observed simultaneously using ultrasound. 
Using the respiratory compensating system to offset the diaphragm displacement resulted in compensation rates of 60%-84.4%. This study has shown that a respiratory compensating system combined with noninvasive ultrasound can provide real-time compensation of the respiratory motion of patients.
Method for eliminating artifacts in CCD imagers
Turko, B.T.; Yates, G.J.
1992-06-09
An electronic method for eliminating artifacts in a video camera employing a charge coupled device (CCD) as an image sensor is disclosed. The method comprises the step of initializing the camera prior to normal read out and includes a first dump cycle period for transferring radiation generated charge into the horizontal register while the decaying image on the phosphor being imaged is being integrated in the photosites, and a second dump cycle period, occurring after the phosphor image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers. Image charge is then transferred from the photosites to the vertical registers and read out in conventional fashion. The inventive method allows the video camera to be used in environments having high ionizing radiation content, and to capture images of events of very short duration and occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers, and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites. 3 figs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Operating Limits for Capture Systems... 63—Operating Limits for Capture Systems and Add-On Control Devices If you are required to comply with operating limits by § 63.3093, you must comply with the applicable operating limits in the following table...
Code of Federal Regulations, 2013 CFR
2013-07-01
... system and add-on control device operating limits during the performance test? 63.4966 Section 63.4966... outlet gas temperature is the maximum operating limit for your condenser. (e) Emission capture system... Emission Rate with Add-on Controls Option § 63.4966 How do I establish the emission capture system and add...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 12 2011-07-01 2009-07-01 true Operating Limits for Capture Systems... 63—Operating Limits for Capture Systems and Add-On Control Devices If you are required to comply with operating limits by § 63.3093, you must comply with the applicable operating limits in the following table...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 13 2013-07-01 2012-07-01 true Operating Limits for Capture Systems... Subpart IIII of Part 63—Operating Limits for Capture Systems and Add-On Control Devices If you are required to comply with operating limits by § 63.3093, you must comply with the applicable operating limits...
Code of Federal Regulations, 2011 CFR
2011-07-01
... system and add-on control device operating limits during the performance test? 63.4966 Section 63.4966... outlet gas temperature is the maximum operating limit for your condenser. (e) Emission capture system... with Add-on Controls Option § 63.4966 How do I establish the emission capture system and add-on control...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 13 2012-07-01 2012-07-01 false Operating Limits for Capture Systems... Subpart IIII of Part 63—Operating Limits for Capture Systems and Add-On Control Devices If you are required to comply with operating limits by § 63.3093, you must comply with the applicable operating limits...
Code of Federal Regulations, 2010 CFR
2010-07-01
... system and add-on control device operating limits during the performance test? 63.4966 Section 63.4966... outlet gas temperature is the maximum operating limit for your condenser. (e) Emission capture system... with Add-on Controls Option § 63.4966 How do I establish the emission capture system and add-on control...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 13 2014-07-01 2014-07-01 false Operating Limits for Capture Systems... Subpart IIII of Part 63—Operating Limits for Capture Systems and Add-On Control Devices If you are required to comply with operating limits by § 63.3093, you must comply with the applicable operating limits...
NASA Astrophysics Data System (ADS)
de Carvalho, Luis Alberto V.; Carvalho, Valeria
2014-02-01
One of the main problems with glaucoma throughout the world is that there are typically no symptoms in the early stages. Many people who have the disease do not know they have it and by the time one finds out, the disease is usually in an advanced stage. Most retinal cameras available in the market today use sophisticated optics and have several other features/capabilities (wide-angle optics, red-free and angiography filters, etc) that make them expensive for the general practice or for screening purposes. Therefore, it is important to develop instrumentation that is fast, effective and economic, in order to reach the mass public in the general eye-care centers. In this work, we have constructed the hardware and software of a cost-effective and non-mydriatic prototype device that allows fast capturing and plotting of high-resolution quantitative 3D images and videos of the optical disc head and neighboring region (30° of field of view). The main application of this device is for glaucoma screening, although it may also be useful for the diagnosis of other pathologies related to the optic nerve.
A practical introduction to skeletons for the plant sciences
Bucksch, Alexander
2014-01-01
Before the availability of digital photography resulting from the invention of charge-coupled devices in 1969, the measurement of plant architecture was a manual process performed either on the plant itself or on traditional photographs. The introduction of cheap digital imaging devices for the consumer market enabled the wide use of digital images to capture the shape of plant networks such as roots, tree crowns, or leaf venation. Plant networks contain geometric traits that can establish links to genetic or physiological characteristics, support plant breeding efforts, drive evolutionary studies, or serve as input to plant growth simulations. Typically, traits are encoded in shape descriptors that are computed from imaging data. Skeletons are one class of shape descriptors that are used to describe the hierarchies and extent of branching and looping plant networks. While the mathematical understanding of skeletons is well developed, their application within the plant sciences remains challenging because the quality of the measurement depends partly on the interpretation of the skeleton. This article is meant to bridge the skeletonization literature in the plant sciences and related technical fields by discussing best practices for deriving diameters and approximating branching hierarchies in a plant network. PMID:25202645
An online detection system for aggregate sizes and shapes based on digital image processing
NASA Astrophysics Data System (ADS)
Yang, Jianhong; Chen, Sijia
2017-02-01
Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and while lying flat. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, which showed good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt% and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.
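The wt% error figures quoted above can be read as the largest absolute difference between a measured and a reference grade-by-grade weight distribution. A minimal sketch of that metric (the grade bins and numbers below are hypothetical, not the authors' data):

```python
import numpy as np

def max_distribution_error(measured_wt, reference_wt):
    """Maximum absolute difference between two per-grade weight
    distributions, each expressed in wt% (summing to 100)."""
    m = np.asarray(measured_wt, dtype=float)
    r = np.asarray(reference_wt, dtype=float)
    return float(np.max(np.abs(m - r)))

# Hypothetical three-grade example: the measured distribution deviates
# from the reference by at most 2 wt%.
print(max_distribution_error([30, 40, 30], [32, 38, 30]))  # 2.0
```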
Wei, Wanchun; Broussard, Leah J.; Hoffbauer, Mark Arles; ...
2016-05-16
Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m₀gδx. Here, the symbols δE, δx, m₀, and g are the energy resolution, the spatial resolution, the neutron rest mass, and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. As a result, this method allows different types of UCN spectroscopy and other applications.
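The quoted figures can be cross-checked with the stated relation δE = m₀gδx; a quick sketch using standard values for the neutron rest mass and g (these constants are textbook values, not taken from the paper):

```python
# Convert the reported UCN spatial resolution into an energy resolution
# via dE = m0 * g * dx. The constants below are standard physical values.
M_NEUTRON = 1.6749e-27   # neutron rest mass, kg
G = 9.81                 # gravitational acceleration, m/s^2
EV = 1.6022e-19          # joules per electron-volt

def ucn_energy_resolution_peV(dx_m: float) -> float:
    """Energy resolution in pico-electron-volts for a spatial resolution dx (m)."""
    return M_NEUTRON * G * dx_m / EV * 1e12

# A 15 um spatial resolution corresponds to ~1.5 peV, consistent with
# the "below 2 peV" figure in the abstract.
print(round(ucn_energy_resolution_peV(15e-6), 2))
```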
Code of Federal Regulations, 2010 CFR
2010-07-01
... system and add-on control device operating limits during the performance test? 63.3967 Section 63.3967... capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.3892 according to this section, unless you have received...
Code of Federal Regulations, 2011 CFR
2011-07-01
... system and add-on control device operating limits during the performance test? 63.3967 Section 63.3967... capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.3892 according to this section, unless you have received...
Code of Federal Regulations, 2013 CFR
2013-07-01
... system and add-on control device operating limits during the performance test? 63.3967 Section 63.3967... emission capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.3892 according to this section, unless you have received...
Code of Federal Regulations, 2013 CFR
2013-07-01
... system and add-on control device operating limits during the performance test? 63.4167 Section 63.4167... Emission Rate with Add-on Controls Option § 63.4167 How do I establish the emission capture system and add-on control device operating limits during the performance test? During the performance test required...
Code of Federal Regulations, 2013 CFR
2013-07-01
... system and add-on control device operating limits during the performance test? 63.4567 Section 63.4567... capture system and add-on control device operating limits during the performance test? During the... the operating limits required by § 63.4492 according to this section, unless you have received...
Remote volume rendering pipeline for mHealth applications
NASA Astrophysics Data System (ADS)
Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald
2014-03-01
We introduce a novel remote volume rendering pipeline for medical visualization targeted for mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners with respect to the complexity of the volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.
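The data sizes cited above are easy to sanity-check. The sketch below assumes 16-bit voxels and 24 temporal frames; these are illustrative figures, not values from the paper:

```python
# Back-of-envelope check of the CTA data sizes mentioned above.
voxels = 512 ** 3                 # 512^3 spatial resolution
bytes_per_frame = voxels * 2      # assuming 16-bit voxels
gib_per_frame = bytes_per_frame / 2**30
frames = 24                       # e.g. one volume per cardiac phase (assumed)
total_gib = gib_per_frame * frames
print(gib_per_frame, total_gib)   # 0.25 GiB per frame, 6.0 GiB over time
```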
NASA Astrophysics Data System (ADS)
Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.
2015-03-01
Non-contact camera-based imaging photoplethysmography (iPPG) is useful for measuring heart rate in conditions where contact devices are problematic due to issues such as mobility, comfort, and sanitation. Existing iPPG methods analyse the light-tissue interaction of either active or passive (ambient) illumination. Many active iPPG methods assume the incident ambient light is negligible to the active illumination, resulting in high power requirements, while many passive iPPG methods assume near-constant ambient conditions. These assumptions can only be achieved in environments with controlled illumination and thus constrain the use of such devices. To increase the number of possible applications of iPPG devices, we propose a dual-mode active iPPG system that is robust to changes in ambient illumination variations. Our system uses a temporally-coded illumination sequence that is synchronized with the camera to measure both active and ambient illumination interaction for determining heart rate. By subtracting the ambient contribution, the remaining illumination data can be attributed to the controlled illuminant. Our device comprises a camera and an LED illuminant controlled by a microcontroller. The microcontroller drives the temporal code via synchronizing the frame captures and illumination time at the hardware level. By simulating changes in ambient light conditions, experimental results show our device is able to assess heart rate accurately in challenging lighting conditions. By varying the temporal code, we demonstrate the trade-off between camera frame rate and ambient light compensation for optimal blood pulse detection.
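The ambient-subtraction idea can be sketched as follows, assuming the simplest temporal code of strictly alternating active and ambient-only frames (this frame layout is an assumption for illustration, not the authors' exact code):

```python
import numpy as np

def active_component(frames: np.ndarray) -> np.ndarray:
    """frames: (2N, H, W) stack alternating [active, ambient, active, ...].
    Subtracting each ambient-only frame from its active neighbour leaves
    only the contribution of the controlled illuminant."""
    active = frames[0::2].astype(float)
    ambient = frames[1::2].astype(float)
    return active - ambient   # (N, H, W) ambient-compensated frames
```

With this decomposition, the blood pulse signal is extracted from the compensated frames rather than the raw ones, which is what makes the method robust to ambient variation.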
2013-01-15
S48-E-007 (12 Sept 1991) --- Astronaut James F. Buchli, mission specialist, catches snack crackers as they float in the weightless environment of the earth-orbiting Discovery. This image was transmitted by the Electronic Still Camera, Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man- Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE
Intra-cavity upconversion to 631 nm of images illuminated by an eye-safe ASE source at 1550 nm.
Torregrosa, A J; Maestre, H; Capmany, J
2015-11-15
We report an image wavelength upconversion system. The system mixes an incoming image at around 1550 nm (eye-safe region), illuminated by an amplified spontaneous emission (ASE) fiber source, with a Gaussian beam at 1064 nm generated in a continuous-wave diode-pumped Nd(3+):GdVO(4) laser. Mixing takes place in a periodically poled lithium niobate (PPLN) crystal placed intra-cavity. The upconverted image obtained by sum-frequency mixing falls around the 631 nm red spectral region, well within the spectral response of the standard silicon focal-plane-array sensors commonly used in charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras, and of most image intensifiers. The use of ASE illumination benefits from a noticeable increase in the field of view (FOV) that can be upconverted relative to coherent laser illumination. The upconverted power allows us to capture real-time video with a standard non-intensified CCD camera.
Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J
2016-10-01
Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order-of-magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than others. Additionally, significant (P < 0.05) losses in performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need for a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.
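The area under the ROC curve used to compare the algorithms can be computed without any ML library via the Mann-Whitney U statistic; a minimal sketch (not the authors' implementation):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive scores higher
    than a randomly chosen negative (ties count as half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Perfectly separated classifier scores give AUC = 1.0.
print(auc([0.9, 0.8], [0.1, 0.2]))  # 1.0
```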
Lin, Kao-Han; Young, Sun-Yi; Hsu, Ming-Chuan; Chan, Hsu; Chen, Yung-Yaw; Lin, Win-Li
2008-01-01
In this study, we developed a focused ultrasound (FUS) thermal therapy system with ultrasound image guidance and thermocouple temperature measurement feedback. Hydraulic positioning devices and computer-controlled servo motors were used to move the FUS transducer to the desired location, with the actual movement measured by a linear scale. The entire system integrated the automatic positioning devices, FUS transducer, power amplifier, ultrasound imaging system, and thermocouple temperature measurement into a graphical user interface. For the treatment procedure, a thermocouple was implanted into a targeted treatment region in a tissue-mimicking phantom under ultrasound image guidance, and then the acoustic interference pattern formed by the imaging ultrasound beam and a low-power FUS beam was employed as guidance to move the FUS transducer so that its focal zone coincided with the thermocouple tip. The thermocouple temperature rise was used to determine the sonication duration for a suitable thermal lesion once high power was turned on, and ultrasound imaging was used to capture the thermal lesion formation. For multiple lesion formation, the FUS transducer was moved under acoustic interference guidance to a new location and sonicated with the same power level and duration. The system was evaluated, and the results showed that it could perform two-dimensional motion control to deliver two-dimensional thermal therapy with a small localization error of 0.5 mm. Through the user interface, the FUS transducer could be moved to heat the target region under the guidance of the ultrasound image and the acoustic interference pattern. The preliminary phantom experimental results demonstrated that the system could carry out the desired treatment plan satisfactorily.
The Quanta Image Sensor: Every Photon Counts
Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel
2016-01-01
The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
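The cubicle idea can be sketched as a sum over an (x, y, t) block of binary bit planes; the 4×4×4 cubicle below is an illustrative choice, not a value from the paper:

```python
import numpy as np

def cubicle_image(bit_planes: np.ndarray, cx=4, cy=4, ct=4) -> np.ndarray:
    """bit_planes: (T, H, W) array of 0/1 jot samples. Each output pixel
    is the photoelectron count summed over a ct x cy x cx cubicle."""
    t, h, w = bit_planes.shape
    # Trim so every dimension divides evenly into cubicles.
    b = bit_planes[: t - t % ct, : h - h % cy, : w - w % cx]
    b = b.reshape(b.shape[0] // ct, ct,
                  b.shape[1] // cy, cy,
                  b.shape[2] // cx, cx)
    return b.sum(axis=(1, 3, 5))
```

Because the cubicle dimensions are just reshape parameters, they can be changed after acquisition, which is the post-capture image-quality trade-off the abstract describes.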
A device for characterising the mechanical properties of the plantar soft tissue of the foot.
Parker, D; Cooper, G; Pearson, S; Crofts, G; Howard, D; Busby, P; Nester, C
2015-11-01
The plantar soft tissue is a highly functional viscoelastic structure involved in transferring load to the human body during walking. A Soft Tissue Response Imaging Device was developed to apply a vertical compression to the plantar soft tissue whilst measuring the mechanical response via a combined load cell and ultrasound imaging arrangement. The device was assessed for: accuracy of motion compared to input profiles; validation of the response measured for standard materials in compression; variability of force and displacement measures over consecutive compressive cycles; and implementation in vivo with five healthy participants. Static displacement displayed an average error of 0.04 mm (range of 15 mm), and static load displayed an average error of 0.15 N (range of 250 N). Validation tests showed acceptable agreement with a Hounsfield tensometer for both displacement (CMC > 0.99, RMSE < 0.18 mm) and load (CMC > 0.95, RMSE < 4.86 N). Device motion was highly repeatable for bench-top tests (ICC = 0.99) and participant trials (CMC = 1.00). The soft tissue response was found to be repeatable both intra-trial (CMC > 0.98) and inter-trial (CMC > 0.70). The device has been shown to be capable of implementing complex loading patterns similar to gait, and of capturing the compressive response of the plantar soft tissue for a range of loading conditions in vivo. Copyright © 2015. Published by Elsevier Ltd.
A real-time monitoring system for night glare protection
NASA Astrophysics Data System (ADS)
Ma, Jun; Ni, Xuxiang
2010-11-01
When capturing a dark scene containing a very bright object, a monitoring camera saturates in some regions, and details are lost in and near these saturated regions because of glare. This work aims at developing a real-time night monitoring system that decreases the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system is made up of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. The LCoS, a reflective liquid crystal device, can modulate the intensity of the reflected light at every pixel digitally. Through the modulation function of the LCoS, the CCD is exposed region by region. Under DSP control, the light intensity is reduced to a minimum in the glare regions, while in the other regions it is regulated by negative feedback based on PID theory. In this way, more details of the object are imaged on the CCD and glare protection of the monitoring system is achieved. In the experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality, but also enhances the dynamic range of the image. High-quality, high-dynamic-range images are captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare can be removed.
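The per-region negative-feedback modulation described above can be sketched as a textbook discrete PID loop driving each LCoS region so the corresponding CCD region tracks a target brightness; the gains and the region-mean interface below are assumptions for illustration, not values from the paper:

```python
class PID:
    """Textbook discrete PID controller; one instance per LCoS region."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, target, measured):
        """Return a correction to the region's modulation level given the
        target brightness and the measured CCD region mean."""
        err = target - measured
        self.integral += err
        derivative = err - self.prev_err
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * derivative
```

Each frame, the measured mean of a CCD region feeds `update()`, and the returned correction adjusts that region's LCoS transmittance; glare regions would instead be clamped to minimum transmittance.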
Resolution analysis of archive films for the purpose of their optimal digitization and distribution
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2017-09-01
With the recent high demand for ultra-high-definition (UHD) content to be screened in high-end digital movie theaters and in the home environment, film archives full of movies in high definition and above are within the scope of UHD content providers. Movies captured with traditional film technology represent a virtually unlimited source of UHD content. The goal of maintaining complete image information is also related to the choice of scanning resolution and of the spatial resolution for further distribution. It might seem that scanning the film material at the highest possible resolution using state-of-the-art film scanners, and also distributing it at this resolution, is the right choice. The information content of the digitized images is, however, limited, and various degradations moreover lead to its further reduction. Digital distribution of the content at the highest image resolution might therefore be unnecessary or uneconomical. In other cases, the highest possible resolution is inevitable if we want to preserve fine scene details or film grain structure for archiving purposes. This paper deals with the analysis of image detail content in archive film records. The resolution limit of the captured scene image and the factors that lower the final resolution are discussed. Methods are proposed to determine the spatial details of the film picture based on the analysis of its digitized image data. These procedures allow deriving recommendations for optimal distribution of digitized video content intended for various display devices with lower resolutions. The obtained results are illustrated on a spatial downsampling use-case scenario, and a performance evaluation of the proposed techniques is presented.
A lateral electrophoretic flow diagnostic assay
Lin, Robert; Skandarajah, Arunan; Gerver, Rachel E.; Neira, Hector D.; Fletcher, Daniel A.
2015-01-01
Immunochromatographic assays are a cornerstone tool in disease screening. To complement existing lateral flow assays (based on wicking flow) we introduce a lateral flow format that employs directed electrophoretic transport. The format is termed a “lateral e-flow assay” and is designed to support multiplexed detection using immobilized reaction volumes of capture antigen. To fabricate the lateral e-flow device, we employ mask-based UV photopatterning to selectively immobilize unmodified capture antigen along the microchannel in a barcode-like pattern. The channel-filling polyacrylamide hydrogel incorporates a photoactive moiety (benzophenone) to immobilize capture antigen to the hydrogel without a priori antigen modification. We report a heterogeneous sandwich assay using low-power electrophoresis to drive biospecimen through the capture antigen barcode. Fluorescence barcode readout is collected via a low-resource appropriate imaging system (CellScope). We characterize lateral e-flow assay performance and demonstrate a serum assay for antibodies to the hepatitis C virus (HCV). In a pilot study, the lateral e-flow assay positively identifies HCV+ human sera in 60 min. The lateral e-flow assay provides a flexible format for conducting multiplexed immunoassays relevant to confirmatory diagnosis in near-patient settings. PMID:25608872
PROCEDURE FOR ESTIMATING PERMANENT TOTAL ENCLOSURE COSTS
The paper discusses a procedure for estimating permanent total enclosure (PTE) costs. (NOTE: Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use PTEs, enclosures that mee...
An efficient intensity-based ready-to-use X-ray image stitcher.
Wang, Junchen; Zhang, Xiaohui; Sun, Zhen; Yuan, Fuzhen
2018-06-14
The limited field of view of the X-ray image intensifier makes it difficult to cover a large target area with a single X-ray image. X-ray image stitching techniques have been proposed to produce a panoramic X-ray image. This paper presents an efficient intensity-based X-ray image stitcher which does not rely on accurate C-arm motion control or auxiliary devices and hence is ready to use in the clinic. The stitcher consumes sequentially captured X-ray images with overlap areas and automatically produces a panoramic image. The gradient information for the optimization of image alignment is obtained using a back-propagation scheme, so it is convenient to adopt various image warping models. The proposed stitcher has the following advantages over existing methods: (1) no additional hardware modification or auxiliary markers are needed; (2) it is more robust than feature-based approaches; (3) arbitrary warping models and shapes of the region of interest are supported; (4) seamless stitching is achieved using multi-band blending. Experiments have been performed to confirm the effectiveness of the proposed method. The proposed X-ray image stitcher is efficient, accurate, and ready to use in the clinic. Copyright © 2018 John Wiley & Sons, Ltd.
Piatti, Filippo; Palumbo, Maria Chiara; Consolo, Filippo; Pluchinotta, Francesca; Greiser, Andreas; Sturla, Francesco; Votta, Emiliano; Siryk, Sergii V; Vismara, Riccardo; Fiore, Gianfranco Beniamino; Lombardi, Massimo; Redaelli, Alberto
2018-02-08
The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect in their design and optimization. To this aim, two approaches are currently adopted: computational fluid-dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve the direct video-acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid-dynamics, which would be unlikely observed by current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Novel Device for Grasping Assessment during Functional Tasks: Preliminary Results
Rocha, Ana Carolinne Portela; Tudella, Eloisa; Pedro, Leonardo M.; Appel, Viviane Cristina Roma; da Silva, Louise Gracelli Pereira; Caurin, Glauco Augusto de Paula
2016-01-01
This paper presents a methodology and the first results obtained in a study with a novel device that allows the analysis of grasping quality. The device acquires motion information of the upper limbs, allowing kinetic analysis of manipulation as well. A pilot experiment was carried out with six groups of typically developing children aged between 5 and 10 years, with seven to eight children in each. The device, designed to emulate a glass, has an optical system composed of a digital camera and a special convex mirror that together allow image acquisition of the grasping hand posture while the device is grasped and manipulated. It also carries an Inertial Measurement Unit that captures motion data such as acceleration, orientation, and angular velocities. The novel instrumented object is used in our approach to evaluate functional task performance in quantitative terms. During the tests, each child was invited to grasp the cylindrical part of the device, which was placed on the top of a table, simulating the task of drinking a glass of water. In the sequence, the child was instructed to transport the device back to the starting position and release it. The task was repeated three times for each child. A grasping hand posture evaluation is presented as an example of evaluating grasping quality. Additionally, motion patterns obtained in the trials performed with the different groups are presented and discussed. This device is attractive due to its portability, its small size, and its ability to evaluate grasping form. The results may also be useful for analyzing the evolution of the rehabilitation process through the reach-to-grasp movement and the analysis of grasping images. PMID:26942178
NASA Technical Reports Server (NTRS)
Hunt, W. D.; Brennan, K. F.; Summers, C. J.; Yun, Ilgu
1994-01-01
Reliability modeling and parametric yield prediction of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiodes (APDs), which are of interest as an ultra-low noise image capture mechanism for high definition systems, have been investigated. First, the effect of various doping methods on the reliability of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiode (APD) structures fabricated by molecular beam epitaxy is investigated. Reliability is examined by accelerated life tests by monitoring dark current and breakdown voltage. Median device lifetime and the activation energy of the degradation mechanism are computed for undoped, doped-barrier, and doped-well APD structures. Lifetimes for each device structure are examined via a statistically designed experiment. Analysis of variance shows that dark-current is affected primarily by device diameter, temperature and stressing time, and breakdown voltage depends on the diameter, stressing time and APD type. It is concluded that the undoped APD has the highest reliability, followed by the doped well and doped barrier devices, respectively. To determine the source of the degradation mechanism for each device structure, failure analysis using the electron-beam induced current method is performed. This analysis reveals some degree of device degradation caused by ionic impurities in the passivation layer, and energy-dispersive spectrometry subsequently verified the presence of ionic sodium as the primary contaminant. However, since all device structures are similarly passivated, sodium contamination alone does not account for the observed variation between the differently doped APDs. This effect is explained by the dopant migration during stressing, which is verified by free carrier concentration measurements using the capacitance-voltage technique.
Terrain detection and classification using single polarization SAR
Chow, James G.; Koch, Mark W.
2016-01-19
The various technologies presented herein relate to identifying manmade and/or natural features in a radar image. Two radar images (e.g., single-polarization SAR images) can be captured for a common scene. The first image is captured at a first instance and the second image at a second instance, whereby the duration between the captures is of sufficient time that temporal decorrelation occurs for natural surfaces in the scene, and only manmade surfaces, e.g., a road, produce correlated pixels. An LCCD image comprising the correlated and decorrelated pixels can be generated from the two radar images. A median image can be generated from a plurality of radar images, whereby any features in the median image can be identified. A superpixel operation can be performed on the LCCD image and the median image, thereby enabling a feature(s) in the LCCD image to be classified.
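The correlated-versus-decorrelated distinction rests on a normalized correlation between co-registered patches of the two images; a minimal sketch (the windowing and LCCD thresholding details are omitted and are not taken from the patent):

```python
import numpy as np

def coherence(a: np.ndarray, b: np.ndarray) -> float:
    """Magnitude of the normalized cross-correlation of two co-registered
    patches: near 1 for temporally stable (manmade) surfaces, near 0 for
    decorrelated (natural) ones."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float(abs((a * b).sum()) / denom) if denom else 0.0
```

Evaluating this statistic in a sliding window over the two acquisitions would yield a per-pixel change/coherence map of the kind the LCCD image is built from.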
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Gallegos, C.H.; Ogle, J.W.; Stokes, J.L.
1992-11-24
A method and apparatus for capturing and recording indications of the frequency content of electromagnetic signals and radiation are disclosed, including a laser light source and a Bragg cell for deflecting a light beam at a plurality of deflection angles dependent upon the frequency content of the signal. A streak camera and a microchannel plate intensifier are used to project the Bragg cell output onto either a photographic film or a charge-coupled device (CCD) imager. Timing markers are provided by a comb generator and a one-shot generator, the outputs of which are also routed through the streak camera onto the film or the CCD imager. Using the inventive method, the full range of the output of the Bragg cell can be recorded as a function of time. 5 figs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... system and add-on control device operating limits during the performance test? 63.4767 Section 63.4767... Rate with Add-on Controls Option § 63.4767 How do I establish the emission capture system and add-on control device operating limits during the performance test? During the performance test required by § 63...
Code of Federal Regulations, 2010 CFR
2010-07-01
... system and add-on control device operating limits during the performance test? 63.4767 Section 63.4767... Rate with Add-on Controls Option § 63.4767 How do I establish the emission capture system and add-on control device operating limits during the performance test? During the performance test required by § 63...
Code of Federal Regulations, 2010 CFR
2010-07-01
... system and add-on control device operating limits during the performance test? 63.4167 Section 63.4167... with Add-on Controls Option § 63.4167 How do I establish the emission capture system and add-on control device operating limits during the performance test? During the performance test required by § 63.4160...
Code of Federal Regulations, 2011 CFR
2011-07-01
... system and add-on control device operating limits during the performance test? 63.4167 Section 63.4167... with Add-on Controls Option § 63.4167 How do I establish the emission capture system and add-on control device operating limits during the performance test? During the performance test required by § 63.4160...
Platform control for space-based imaging: the TOPSAT mission
NASA Astrophysics Data System (ADS)
Dungate, D.; Morgan, C.; Hardacre, S.; Liddle, D.; Cropp, A.; Levett, W.; Price, M.; Steyn, H.
2004-11-01
This paper describes the imaging mode ADCS design for the TOPSAT satellite, an Earth observation demonstration mission targeted at military applications. The baselined orbit for TOPSAT is a 600-700 km sun-synchronous orbit from which images up to 30° off track can be captured. For this baseline, the imaging camera provides a resolution of 2.5 m and a nominal image size of 15 x 15 km. The ADCS design solution for the imaging mode uses a moving demand approach to enable a single control algorithm solution for both the preparatory reorientation prior to image capture and the post-capture return to nadir pointing. During image capture proper, control is suspended to minimise the disturbances experienced by the satellite from the wheels. Prior to each imaging sequence, the moving demand attitude and rate profiles are calculated such that the correct attitude and rate are achieved at the correct orbital position, enabling the correct target area to be captured.
NASA Astrophysics Data System (ADS)
Arp, Trevor; Pleskot, Dennis; Gabor, Nathaniel
We have developed a new photoresponse imaging technique that utilizes extensive data acquisition over a large parameter space. By acquiring a multi-dimensional data set, we fully capture the intrinsic optoelectronic response of two-dimensional heterostructure devices. Using this technique we have investigated the behavior of heterostructures consisting of molybdenum ditelluride (MoTe2) sandwiched between graphene top and bottom contacts. Under near-infrared optical excitation, the ultra-thin heterostructure devices exhibit sub-linear photocurrent response that recovers within several dozen picoseconds. As the optical power increases, the dynamics of the photoresponse, consistent with 3-body annihilation, precede a sudden suppression of photocurrent. The observed dynamics near the threshold to photocurrent suppression may indicate the onset to a strongly interacting population of electrons and holes.
NASA Astrophysics Data System (ADS)
Khosravi, Farhad; Trainor, Patrick J.; Lambert, Christopher; Kloecker, Goetz; Wickstrom, Eric; Rai, Shesh N.; Panchapakesan, Balaji
2016-11-01
We demonstrate the rapid and label-free capture of breast cancer cells spiked in blood using nanotube-antibody micro-arrays. 76-element single-wall carbon nanotube arrays were manufactured using photolithography, metal deposition, and etching techniques. Anti-epithelial cell adhesion molecule (anti-EpCAM), anti-human epithelial growth factor receptor 2 (anti-Her2), and non-specific IgG antibodies were functionalized to the surface of the nanotube devices using 1-pyrenebutanoic acid succinimidyl ester. Following device functionalization, blood spiked with SKBR3, MCF7, and MCF10A cells (100/1000 cells per 5 μl per device, 170 elements totaling 0.85 ml of whole blood) was adsorbed onto the nanotube device arrays. Electrical signatures were recorded from each device to screen the samples for differences in interaction (specific or non-specific) between samples and devices. A zone classification scheme enabled the classification of all 170 elements in a single map. A kernel-based statistical classifier for the 'liquid biopsy' was developed to create a predictive model, based on dynamic-time-warping series comparison, that classified device electrical signals corresponding to plain blood (control) or SKBR3-spiked blood (case) on anti-Her2 functionalized devices with ~90% sensitivity and 90% specificity in the capture of 1000 SKBR3 breast cancer cells in blood. Screened devices that gave positive electrical signatures were confirmed by optical/confocal microscopy to hold spiked cancer cells. Confocal microscopic analysis of devices classified as holding spiked blood based on their electrical signatures confirmed the presence of cancer cells through staining for DAPI (nuclei), cytokeratin (cancer cells), and CD45 (hematologic cells) with single-cell sensitivity.
We report a 55%-100% cancer cell capture yield depending on the active device area for blood adsorption, with a mean of 62% (~12 500 captured out of 20 000 spiked cells in 0.1 ml of blood), in this first nanotube-CTC chip study.
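The dynamic-time-warping series comparison underlying such a classifier can be illustrated with the textbook DTW recurrence. This is a simplified stand-in for the study's kernel-based classifier; the signals and cost function here are illustrative only:

```python
import numpy as np

def dtw_distance(a, b):
    """Textbook O(n*m) dynamic time warping distance between 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-shifted copy of a signal scores no worse than the lockstep
# (sample-by-sample) distance, because warping can realign the shift.
t = np.linspace(0, 2 * np.pi, 50)
sig = np.sin(t)
shifted = np.sin(t + 0.3)
```

In a classification setting, DTW distances between a device's electrical time series and labeled reference series (control vs. case) would feed a kernel or nearest-neighbor decision rule.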
Water Microbiology Kit/Microbial Capture Devices (WMK MCD)
2009-08-04
ISS020-E-027318 (4 Aug. 2009) --- Canadian Space Agency astronaut Robert Thirsk, Expedition 20 flight engineer, performs a subsequent in-flight analysis with a Water Microbiology Kit/Microbial Capture Devices (WMK MCD) for microbial traces in the Destiny laboratory of the International Space Station.
Water Capture Device Signal Integration Board
NASA Technical Reports Server (NTRS)
Chamberlin, Kathryn J.; Hartnett, Andrew J.
2018-01-01
I am a junior in electrical engineering at Arizona State University, and this is my second internship at Johnson Space Center. I am an intern in the Command and Data Handling Branch of the Avionics Division (EV2); my previous internship was also in EV2. During that internship I was assigned to the Water Capture Device payload, where I designed a prototype circuit board for the payload's electronics system. For this internship, I have come back to the Water Capture Device project to continue the electronics design work I completed previously. The Water Capture Device is an experimental payload to test the functionality of two different phase separators aboard the International Space Station (ISS). A phase separator sits downstream of a condensing heat exchanger (CHX) and separates the water from the air for environmental control on the ISS. With changing CHX technology, new phase separators are required. The goal of the project is to develop a test bed for the two phase separators to determine the best solution.
Xue, Peng; Wu, Yafeng; Guo, Jinhong; Kang, Yuejun
2015-04-01
Circulating tumor cells (CTCs), which are shed from the primary tumor site and transported to distant organs, are considered the major cause of metastasis. Various techniques have been applied to CTC isolation and enumeration, but there is still great demand to improve the sensitivity of CTC capture, and it remains challenging to elute the cells efficiently from the device for further biomolecular and cellular analyses. In this study, we fabricate a dual-function chip integrating a herringbone structure and a micropost array to achieve CTC capture and elution through EpCAM-based immunoreaction. The Hep3B tumor cell line is selected as the CTC model for processing with this device. The results demonstrate that the capture limit for Hep3B cells reaches 10 cells per mL of sample volume with a capture efficiency of 80% on average. Moreover, the elution rate of the captured Hep3B cells reaches 69.4% on average for cell numbers ranging from 1 to 100. These results demonstrate that the device combines considerably high capture and elution rates, indicating its promising capability for cancer diagnosis and therapeutics.
Wide-field fundus imaging with trans-palpebral illumination.
Toslak, Devrim; Thapa, Damber; Chen, Yanjun; Erol, Muhammet Kazim; Paul Chan, R V; Yao, Xincheng
2017-01-28
In conventional fundus imaging devices, transpupillary illumination is used to illuminate the inside of the eye: the illumination light is directed into the posterior segment of the eye through the cornea and passes through the pupillary area. Because the pupillary area is shared between the illumination beam and the observation path, pupil dilation is typically necessary for wide-angle fundus examination, and the field of view is inherently limited. An alternative approach is to deliver light through the sclera; trans-scleral illumination can image a wider retinal area, but the required physical contact between the illumination probe and the sclera is a drawback of this method. We report here trans-palpebral illumination as a new method to deliver the light through the upper eyelid (palpebra). For this study, we used a 1.5 mm diameter fiber with a warm white LED light source. To illuminate the inside of the eye, the fiber illuminator was placed at the location corresponding to the pars plana region. A custom-designed optical system was attached to a digital camera for retinal imaging. The optical system contained a 90-diopter ophthalmic lens and a 25-diopter relay lens. The ophthalmic lens collected light coming from the posterior of the eye and formed an aerial image between the ophthalmic and relay lenses; the aerial image was captured by the camera through the relay lens. An adequate illumination level was obtained to capture wide-angle fundus images within ocular safety limits, defined by the ISO 15004-2:2007 standard. This novel trans-palpebral illumination approach enables wide-angle fundus photography without eyeball contact or pupil dilation.
Design of a haptic device with grasp and push-pull force feedback for a master-slave surgical robot.
Hu, Zhenkai; Yoon, Chae-Hyun; Park, Samuel Byeongjun; Jo, Yung-Ho
2016-07-01
We propose a portable haptic device providing grasp (kinesthetic) and push-pull (cutaneous) sensations for optical-motion-capture master interfaces. Although optical-motion-capture master interfaces for surgical robot systems can overcome the stiffness, friction, and coupling problems of mechanical master interfaces, it is difficult to add haptic feedback to an optical-motion-capture master interface without constraining the free motion of the operator's hands. We therefore used a Bowden-cable-driven mechanism to provide the grasp and push-pull sensations while retaining the free hand motion of the optical-motion-capture master interface. To evaluate the haptic device, we constructed a 2-DOF force-sensing/force-feedback system and compared the sensed force with the force reproduced by the haptic device. Finally, a needle insertion test was performed to evaluate the performance of the haptic interface in the master-slave system. The results demonstrate that both the grasp force feedback and the push-pull force feedback provided by the haptic interface closely matched the forces sensed by the slave robot. We successfully applied our haptic interface in the optical-motion-capture master-slave system, and the needle insertion test showed that our haptic feedback can provide more safety than visual observation alone. In summary, we developed a haptic device that produces both kinesthetic grasp force feedback and cutaneous push-pull force feedback. Our future research will include further objective performance evaluations of the optical-motion-capture master-slave robot system with our haptic interface in surgical scenarios.
Wavelet library for constrained devices
NASA Astrophysics Data System (ADS)
Ehlers, Johan Hendrik; Jassim, Sabah A.
2007-04-01
The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast wavelet transform (FWT) implementation and several wavelet filters that are more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones and personal digital assistants (PDAs), and can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often as a result of missing hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through onboard capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands, and present experimental results to substantiate these claims. Finally, since this library is intended for real use, we considered several differences between well-known embedded operating system platforms, such as missing common routines or functions and stack limitations. This makes HeatWave suitable for a range of applications and research projects.
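The kind of integer-only filtering that suits FPU-less devices can be illustrated with one lifting level of the LeGall 5/3 wavelet, which uses only additions and bit shifts and reconstructs the input exactly. This is a generic sketch, not HeatWave's actual implementation:

```python
def lifting_53_forward(x):
    """One level of the integer-to-integer LeGall 5/3 lifting transform.
    Input length must be even; returns (approximation, detail) lists."""
    s = list(x)
    n = len(s)
    d = [0] * (n // 2)
    a = [0] * (n // 2)
    # Predict step: detail = odd sample minus average of even neighbours.
    for i in range(n // 2):
        left = s[2 * i]
        right = s[2 * i + 2] if 2 * i + 2 < n else s[2 * i]
        d[i] = s[2 * i + 1] - ((left + right) >> 1)
    # Update step: approximation = even sample plus quarter of detail neighbours.
    for i in range(n // 2):
        dl = d[i - 1] if i > 0 else d[i]
        a[i] = s[2 * i] + ((dl + d[i] + 2) >> 2)
    return a, d

def lifting_53_inverse(a, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = 2 * len(a)
    s = [0] * n
    for i in range(len(a)):
        dl = d[i - 1] if i > 0 else d[i]
        s[2 * i] = a[i] - ((dl + d[i] + 2) >> 2)
    for i in range(len(d)):
        left = s[2 * i]
        right = s[2 * i + 2] if 2 * i + 2 < n else s[2 * i]
        s[2 * i + 1] = d[i] + ((left + right) >> 1)
    return s
```

Because every step is an integer operation undone in reverse order, reconstruction is bit-exact, which is exactly the property that makes lifting schemes attractive on constrained hardware.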
Time of flight imaging through scattering environments (Conference Presentation)
NASA Astrophysics Data System (ADS)
Le, Toan H.; Breitbach, Eric C.; Jackson, Jonathan A.; Velten, Andreas
2017-02-01
Light scattering is a primary obstacle to imaging in many environments. On small scales, in biomedical microscopy and diffuse tomography scenarios, scattering is caused by tissue; on larger scales, scattering from dust and fog challenges vision systems for self-driving cars and naval remote imaging systems. We are developing scale models of scattering environments and investigating methods for improved imaging, particularly using time-of-flight transient information. With the emergence of single-photon avalanche diode (SPAD) detectors and fast semiconductor lasers, illumination and capture on picosecond timescales are becoming possible in inexpensive, compact, and robust devices. This opens up opportunities for new computational imaging techniques that make use of photon time of flight. Time-of-flight or range information is used in remote imaging scenarios in gated viewing and in biomedical imaging in time-resolved diffuse tomography. In addition, spatial filtering is popular in biomedical scenarios with structured illumination and confocal microscopy. We present a combination of analytical, computational, and experimental models that allows us to develop and test imaging methods across scattering scenarios and scales. This framework will be used for proof-of-concept experiments to evaluate new computational imaging methods.
NASA Astrophysics Data System (ADS)
Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian
2017-04-01
Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured by the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral data and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the amount of CS-MUSI data is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
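The matched filter referred to here is the standard hyperspectral detector: whiten each pixel by the background covariance and project onto the target spectrum. A minimal sketch with a synthetic cube; the regularization constant and test data are arbitrary choices, not values from the study:

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Spectral matched filter for an (H, W, B) hyperspectral cube:
    each pixel is scored against a known target spectrum using the
    scene mean and covariance as the background model."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    d = target - mu
    w = np.linalg.solve(cov + 1e-6 * np.eye(B), d)  # whitened target direction
    scores = (Xc @ w) / (d @ w)                     # normalized so target -> ~1
    return scores.reshape(H, W)

# Implant a bright target spectrum in random background clutter.
rng = np.random.default_rng(0)
cube = rng.normal(size=(16, 16, 10))
target = np.full(10, 3.0)
cube[5, 5] = target
scores = matched_filter_scores(cube, target)
```

The same filter can be applied unchanged to multiplexed measurement cubes, which is what makes it a convenient basis for comparing conventional and compressive acquisition.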
Design and development of a smart aerial platform for surface hydrological measurements
NASA Astrophysics Data System (ADS)
Tauro, F.; Pagano, C.; Porfiri, M.; Grimaldi, S.
2013-12-01
Currently available experimental methodologies for surface hydrological monitoring rely on intrusive sensing technologies, which tend to provide local rather than distributed information on the flow physics. In this context, drawbacks deriving from the use of invasive instrumentation are partially alleviated by large-scale particle image velocimetry (LSPIV). LSPIV is based on cameras mounted on masts along river banks which capture images of artificial tracers or naturally occurring objects floating on the water surface. Images are then georeferenced, and the displacement of groups of floating tracers is statistically analyzed to reconstruct flow velocity maps at specific river cross-sections. In this work, we mitigate LSPIV spatial limitations and inaccuracies due to image calibration by designing and developing a smart platform which integrates a digital acquisition system and laser calibration units onboard a custom-built quadricopter. The quadricopter is designed to be lightweight, low cost compared to kits available on the market, highly customizable, and stable, to guarantee minimal vibrations during image acquisition. The onboard digital system includes an encased GoPro Hero 3 camera whose axis is constantly kept orthogonal to the water surface by means of an in-house developed gimbal. The gimbal is connected to the quadricopter through a shock-absorbing damping device which further reduces residual vibrations. Image calibration is performed through laser units mounted at known distances on the quadricopter landing apparatus. The vehicle can be remotely controlled by the open-source ArduPilot microcontroller. Calibration tests and field experiments are conducted in outdoor environments to assess the feasibility of using the smart platform for acquisition of high-quality images of natural streams. Captured images are processed by LSPIV algorithms, and average flow velocities are compared to independently acquired flow estimates.
Further, videos are presented where the smart platform captures the motion of environmentally friendly buoyant fluorescent particle tracers floating on the surface of water bodies. Such fluorescent particles are synthesized in-house, and their visibility and accuracy in tracing complex flows have been previously tested in laboratory and outdoor settings. Experimental results demonstrate the potential of the methodology for monitoring difficult-to-access and spatially extended environments. Improved accuracy in flow monitoring is accomplished by minimizing image orthorectification and introducing highly visible particle tracers. Future developments will aim at vehicle autonomy through machine learning procedures for unmanned environmental monitoring.
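The core LSPIV step, estimating tracer displacement between two interrogation windows from successive frames, can be sketched with FFT-based cross-correlation. This is a generic PIV illustration, not the authors' processing chain:

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Integer-pixel shift from frame_a to frame_b via FFT cross-correlation,
    the core operation behind LSPIV interrogation windows."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past N/2 correspond to negative (wrapped) shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# A tracer pattern shifted 3 px down and 2 px left is recovered exactly.
rng = np.random.default_rng(0)
tracers = rng.random((32, 32))
moved = np.roll(tracers, (3, -2), axis=(0, 1))
print(displacement(tracers, moved))  # -> (3, -2)
```

Dividing the recovered pixel shift by the frame interval and the georeferenced pixel size yields the surface velocity for that window.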
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
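The first stage described above, sampling noise levels from the high-frequency DCT coefficients of low-variation patches, can be sketched as follows (requires SciPy). The patch size, high-frequency mask, and selection fraction are illustrative assumptions; the sparse NLF recovery stage is omitted:

```python
import numpy as np
from scipy.fft import dctn  # SciPy provides the n-dimensional DCT

def nlf_samples(img, patch=8, keep_frac=0.2):
    """Sample (intensity, noise std) pairs from the high-frequency DCT
    coefficients of the lowest-variation patches of a single image."""
    H, W = img.shape
    stats = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            p = img[y:y + patch, x:x + patch].astype(float)
            c = dctn(p, norm="ortho")
            hf = c[patch // 2:, patch // 2:]  # bottom-right = high frequencies
            stats.append((float(p.var()), float(p.mean()), float(hf.std())))
    stats.sort()  # lowest-variation patches first: least scene texture
    n_keep = max(1, int(len(stats) * keep_frac))
    return [(mean, sigma) for _, mean, sigma in stats[:n_keep]]

# On pure Gaussian noise of sigma = 5, the sampled sigmas cluster near 5.
rng = np.random.default_rng(0)
noisy = rng.normal(loc=100.0, scale=5.0, size=(64, 64))
samples = nlf_samples(noisy)
```

For signal-dependent noise, plotting sigma against patch mean intensity gives the scattered, incomplete NLF samples that the paper's sparse recovery model then interpolates.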
A laboratory system for element specific hyperspectral X-ray imaging.
Jacques, Simon D M; Egan, Christopher K; Wilson, Matthew D; Veale, Matthew C; Seller, Paul; Cernik, Robert J
2013-02-21
X-ray tomography is a ubiquitous tool used, for example, in medical diagnosis, explosives detection, or to check the structural integrity of complex engineered components. Conventional tomographic images are formed by measuring many transmitted X-rays and later mathematically reconstructing the object; the structural and chemical information carried by scattered X-rays of different wavelengths is not utilised in any way. We show how a very simple, laboratory-based, high-energy X-ray system can capture these scattered X-rays to deliver 3D images with structural or chemical information in each voxel. This type of imaging can be used to separate and identify chemical species in bulk objects with no special sample preparation. We demonstrate the capability of hyperspectral imaging by examining an electronic device, where we can clearly distinguish the atomic composition of the circuit board components in both fluorescence and transmission geometries. We are not only able to obtain attenuation contrast but also to image chemical variations in the object, potentially opening up a very wide range of applications from security to medical diagnostics.
Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konya, Andras
2006-12-15
The purpose of the study was to compare two similar foreign body retrieval devices, the Texan™ (TX) and the Texan LONGhorn™ (TX-LG), in a swine model. Both devices feature a ≤30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly using both devices (mean ± SD, 88 ± 106 sec for TX vs 67 ± 42 sec for TX-LG) with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall. The better torque control and maneuverability of the TX-LG resulted in better performance in large anatomic spaces.
40 CFR 63.2292 - What definitions apply to this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... designed and maintained to capture all emissions for discharge through a control device. Work practice..., wheat straw, rice straw, and bagasse. Biofilter means an enclosed control system such as a tank or... collected by a capture device. Catalytic oxidizer means a control system that combusts or oxidizes, in the...
40 CFR 63.3981 - What definitions apply to this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
..., activators, accelerators). Add-on control means an air pollution control device, such as a thermal oxidizer or carbon adsorber, that reduces pollution in an air stream by destruction or removal before... directing those emissions into an add-on air pollution control device. Capture efficiency or capture system...
Rapid Isolation of Viable Circulating Tumor Cells from Patient Blood Samples
Hughes, Andrew D.; Mattison, Jeff; Powderly, John D.; Greene, Bryan T.; King, Michael R.
2012-01-01
Circulating tumor cells (CTC) are cells that disseminate from a primary tumor throughout the circulatory system and that can ultimately form secondary tumors at distant sites. CTC count can be used to follow disease progression based on the correlation between CTC concentration in blood and disease severity [1]. As a treatment tool, CTC could be studied in the laboratory to develop personalized therapies. To this end, CTC isolation must cause no cellular damage, and contamination by other cell types, particularly leukocytes, must be avoided as much as possible [2]. Many of the current techniques, including the sole FDA-approved device for CTC enumeration, destroy CTC as part of the isolation process (for more information see Ref. [2]). A microfluidic device to capture viable CTC is described, consisting of a surface functionalized with E-selectin glycoprotein in addition to antibodies against epithelial markers [3]. To enhance device performance a nanoparticle coating was applied consisting of halloysite nanotubes, an aluminosilicate nanoparticle harvested from clay [4]. The E-selectin molecules provide a means to capture fast moving CTC that are pumped through the device, lending an advantage over alternative microfluidic devices wherein longer processing times are necessary to provide target cells with sufficient time to interact with a surface. The antibodies to epithelial targets provide CTC-specificity to the device, as well as provide a readily adjustable parameter to tune isolation. Finally, the halloysite nanotube coating allows significantly enhanced isolation compared to other techniques by helping to capture fast moving cells, providing increased surface area for protein adsorption, and repelling contaminating leukocytes [3,4]. This device is produced by a straightforward technique using off-the-shelf materials, and has been successfully used to capture cancer cells from the blood of metastatic cancer patients.
Captured cells are maintained for up to 15 days in culture following isolation, and these samples typically consist of >50% viable primary cancer cells from each patient. This device has been used to capture viable CTC from both diluted whole blood and buffy coat samples. Ultimately, we present a technique with functionality in a clinical setting to develop personalized cancer therapies. PMID:22733259
Experimental investigations of pupil accommodation factors.
Lee, Eui Chul; Lee, Ji Woo; Park, Kang Ryoung
2011-08-17
PURPOSE. The contraction and dilation of the iris muscle that controls the amount of light entering the retina causes pupil accommodation. In this study, experiments were performed and two of the three factors that influence pupil accommodation were analyzed: lighting conditions and depth fixations. The psychological factor was not examined, because it could not be quantified. METHODS. A head-wearable, eyeglasses-based eye-capturing device was designed to measure pupil size. It included a near-infrared (NIR) camera and an NIR light-emitting diode. Twenty-four subjects watched two-dimensional (2D) and three-dimensional (3D) stereoscopic videos of the same content, and the changes in pupil size were measured using the eye-capturing device and image-processing methods. RESULTS. The pupil size changed with the intensity of the videos and the disparities between the left and right images of a 3D stereoscopic video. There was correlation between the pupil size and average intensity: the pupil diameter was estimated to contract from approximately 5.96 to 4.25 mm as the intensity varied from 0 to 255. Further, from the changes in depth fixation during pupil accommodation, it was confirmed that depth fixation also affected pupil size. CONCLUSIONS. It was confirmed that the lighting condition was an even more significant factor in pupil accommodation than depth fixation (significance ratio: approximately 3.2:1) when watching 3D stereoscopic video. Pupil accommodation was more affected by depth fixation in the real world than by binocular convergence in the 3D stereoscopic display.
Lin, Yii-Lih; Huang, Yen-Jun; Teerapanich, Pattamon; Leïchlé, Thierry
2016-01-01
Nanofluidic devices promise high reaction efficiency and fast kinetic responses due to the spatial constriction of transported biomolecules with confined molecular diffusion. However, parallel detection of multiple biomolecules, particularly proteins, in highly confined spaces remains challenging. This study integrates extended nanofluidics with an embedded protein microarray to achieve multiplexed real-time biosensing and kinetics monitoring. Implementation of the embedded standard-sized antibody microarray is attained by epoxy-silane surface modification and a room-temperature low-aspect-ratio bonding technique. Effective sample transport is achieved by electrokinetic pumping via electroosmotic flow. Through the nanoslit-based spatial confinement, the antigen-antibody binding reaction is enhanced to ∼100% efficiency and can be directly observed with fluorescence microscopy without intermediate washing steps. The image-based data provide numerous spatially distributed reaction kinetic curves and are collectively modeled using a simple one-dimensional convection-reaction model. This study represents an integrated nanofluidic solution for real-time multiplexed immunosensing and kinetics monitoring, from device fabrication, protein immobilization, device bonding, and sample transport to data analysis, at Péclet numbers less than 1. PMID:27375819
Error analysis for creating 3D face templates based on cylindrical quad-tree structure
NASA Astrophysics Data System (ADS)
Gutfeter, Weronika
2015-09-01
Development of new biometric algorithms parallels advances in sensing-device technology. Some limitations of current face recognition systems may be eliminated by integrating 3D sensors into these systems. Depth sensing devices can capture the spatial structure of the face in addition to its texture and color. Such data, however, are usually very voluminous and require a large amount of computing resources to process (face scans obtained with typical depth cameras contain more than 150,000 points per face). That is why defining efficient data structures for processing spatial images is crucial for the further development of 3D face recognition methods. The concept described in this work fulfills the aforementioned demands. A modification of the quad-tree structure was chosen because it can be easily transformed into lower-dimensional data structures and maintains spatial relations between data points. We are able to interpret the data stored in the tree as a pyramid of features, which allows us to analyze face images using a coarse-to-fine strategy often exploited in biometric recognition systems.
Muzaffar, Razi; Frye, Sarah A; McMunn, Anna; Ryan, Kelley; Lattanze, Ron; Osman, Medhat M
2017-12-01
A novel quality control and quality assurance device provides time-activity curves that can identify and characterize PET/CT radiotracer infiltration at the injection site during the uptake phase. The purpose of this study was to compare rates of infiltration detected by the device with rates detected by physicians. We also assessed the value of using the device to improve injection results in our center. Methods: 109 subjects consented to the study. All had passive device sensors applied to their skin near the injection site and mirrored on the contralateral arm during the entire uptake period. Nuclear medicine physicians reviewed standard images for the presence of dose infiltration. Sensor-generated time-activity curves were independently examined and then compared with the physician reports. Injection data captured by the software were analyzed, and the results were provided to the technologists. Improvement measures were implemented, and rates were remeasured. Results: Physician review of the initial 40 head-to-toe field-of-view images identified 15 cases (38%) of dose infiltration (9 minor, 5 moderate, and 1 significant). Sensor time-activity curves on these 40 cases independently identified 22 cases (55%) of dose infiltration (16 minor, 5 moderate, and 1 significant). After the time-activity curve results and the contributing factor analysis were shared with technologists, injection techniques were modified and an additional 69 cases were studied. Of these, physician review identified 17 cases (25%) of infiltration (13 minor, 3 moderate, and 1 significant), a 34% decline. Sensor time-activity curves identified 4 cases (6%) of infiltration (2 minor and 2 moderate), an 89% decline. Conclusion: The device provides valuable quality control information for each subject. Time-activity curves can further characterize visible infiltration. 
Even when the injection site was out of the field of view, the time-activity curves could still detect and characterize infiltration. Our initial experience showed that the quality assurance information obtained from the device helped reduce the rate and severity of infiltration. The device revealed site-specific contributing factors that helped nuclear medicine physicians and technologists customize their quality improvement efforts to these site-specific issues. Reducing infiltration can improve image quality and SUV quantification, as well as the ability to minimize variability in a site's PET/CT results. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
A brief history of 25 years (or more) of infrared imaging radiometers
NASA Astrophysics Data System (ADS)
Lyon, Bernard R., Jr.; Orlove, Gary L.
2003-04-01
Modern thermal imaging radiometers are infrared systems usually endowed with some means of making surface temperature measurements of objects, as well as providing an image. These devices have evolved considerably over the past few decades, and are continuing to do so at an accelerating rate. Changes are not confined merely to camera size and user interface, but also include critical parameters such as sensitivity, accuracy, dynamic range, spectral response, capture rates, storage media, and numerous other features, options, and accessories. Familiarity with this changing technology is much more than an academic topic. A misunderstanding or false assumption concerning system differences could lead to misinterpretation of data, inaccurate temperature measurements, or disappointing, ambiguous results. Marketing demands have had considerable influence on the design and operation of these systems. In the past, many thermographers were scientists, engineers, and researchers. Today, however, the majority of people using these instruments work in the industrial sector and are involved in highly technical skilled trades. This change of operating personnel has effectively changed the status of these devices from 'scientific instrument' to 'essential tool'. Manufacturers have recognized this trend and responded accordingly, as seen in their product designs. This paper explores the history of commercial infrared imaging systems and accessories. Emphasis is placed on, but not confined to, real-time systems with video output, capable of temperature measurements.
Image processing to optimize wave energy converters
NASA Astrophysics Data System (ADS)
Bailey, Kyle Marc-Anthony
The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently are they becoming a focal point in the renewable energy field. Over the past few years there has been a global drive to advance the efficiency of WEC. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing. This is achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applying it to simulated and real satellite images in which the frequency is known.
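The peak-picking and trigonometric-scaling steps can be sketched as follows, substituting a plain 2D FFT for the complex modulated lapped orthogonal transform filter bank used in the paper. The function name, the grid spacing `dx`, and the heading angle `theta` are illustrative assumptions.

```python
import numpy as np

def dominant_wave_frequency(image, dx=1.0, theta=0.0):
    """Estimate the dominant horizontal/vertical spatial frequencies from the
    peak of the 2-D spectrum, then project onto the WEC heading `theta`
    (radians). A plain FFT stands in for the lapped-transform subbands."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0], d=dx))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1], d=dx))
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    # Trigonometric scaling: frequency component along the device heading
    return fx[ix] * np.cos(theta) + fy[iy] * np.sin(theta)
```

On a synthetic wave field with a known spatial frequency, the spectral peak recovers that frequency up to the FFT bin resolution (sign ambiguity aside, since both conjugate peaks carry equal energy).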
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
Speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphic processing unit is reported. The integral-imaging-based method enables exact hologram capture of real, existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
Improving the image discontinuous problem by using color temperature mapping method
NASA Astrophysics Data System (ADS)
Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming
2011-09-01
This article focuses on image processing for the radial imaging capsule endoscope (RICE). RICE was used to capture images of a piglet's intestines in the experiments, but the captured images were blurred because RICE suffers from aberration problems at the image center, and low light uniformity further degrades image quality. Image processing can be used to solve these problems. Images captured at different times can be connected using the Pearson correlation coefficient algorithm, and the color temperature mapping method can then be used to resolve the discontinuity problem in the connection region.
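The Pearson-correlation connection step can be sketched as an overlap search: slide one frame's leading columns over the previous frame's trailing columns and keep the overlap that maximizes the correlation coefficient. The function names and the `max_shift` search range are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two equally sized image strips."""
    a = np.array(a, dtype=float).ravel()
    b = np.array(b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def best_overlap(prev, nxt, max_shift=20):
    """Find the column overlap of `nxt` onto `prev` that maximizes correlation."""
    scores = [(pearson_r(prev[:, -s:], nxt[:, :s]), s)
              for s in range(1, max_shift + 1)]
    return max(scores)[1]  # overlap width with the highest correlation
```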
Assessing the Risks Associated with MRI in Patients with a Pacemaker or Defibrillator.
Russo, Robert J; Costa, Heather S; Silva, Patricia D; Anderson, Jeffrey L; Arshad, Aysha; Biederman, Robert W W; Boyle, Noel G; Frabizzio, Jennifer V; Birgersdotter-Green, Ulrika; Higgins, Steven L; Lampert, Rachel; Machado, Christian E; Martin, Edward T; Rivard, Andrew L; Rubenstein, Jason C; Schaerf, Raymond H M; Schwartz, Jennifer D; Shah, Dipan J; Tomassoni, Gery F; Tominaga, Gail T; Tonkin, Allison E; Uretsky, Seth; Wolff, Steven D
2017-02-23
The presence of a cardiovascular implantable electronic device has long been a contraindication for the performance of magnetic resonance imaging (MRI). We established a prospective registry to determine the risks associated with MRI at a magnetic field strength of 1.5 tesla for patients who had a pacemaker or implantable cardioverter-defibrillator (ICD) that was "non-MRI-conditional" (i.e., not approved by the Food and Drug Administration for MRI scanning). Patients in the registry were referred for clinically indicated nonthoracic MRI at a field strength of 1.5 tesla. Devices were interrogated before and after MRI with the use of a standardized protocol and were appropriately reprogrammed before the scanning. The primary end points were death, generator or lead failure, induced arrhythmia, loss of capture, or electrical reset during the scanning. The secondary end points were changes in device settings. MRI was performed in 1000 cases in which patients had a pacemaker and in 500 cases in which patients had an ICD. No deaths, lead failures, losses of capture, or ventricular arrhythmias occurred during MRI. One ICD generator could not be interrogated after MRI and required immediate replacement; the device had not been appropriately programmed per protocol before the MRI. We observed six cases of self-terminating atrial fibrillation or flutter and six cases of partial electrical reset. Changes in lead impedance, pacing threshold, battery voltage, and P-wave and R-wave amplitude exceeded prespecified thresholds in a small number of cases. Repeat MRI was not associated with an increase in adverse events. In this study, device or lead failure did not occur in any patient with a non-MRI-conditional pacemaker or ICD who underwent clinically indicated nonthoracic MRI at 1.5 tesla, was appropriately screened, and had the device reprogrammed in accordance with the prespecified protocol. (Funded by St. Jude Medical and others; MagnaSafe ClinicalTrials.gov number, NCT00907361.)
A tone mapping operator based on neural and psychophysical models of visual perception
NASA Astrophysics Data System (ADS)
Cyriac, Praveen; Bertalmio, Marcelo; Kane, David; Vazquez-Corral, Javier
2015-03-01
High dynamic range imaging techniques involve capturing and storing real world radiance values that span many orders of magnitude. However, common display devices can usually reproduce intensity ranges only up to two to three orders of magnitude. Therefore, in order to display a high dynamic range image on a low dynamic range screen, the dynamic range of the image needs to be compressed without losing details or introducing artefacts, and this process is called tone mapping. A good tone mapping operator must be able to produce a low dynamic range image that matches as much as possible the perception of the real world scene. We propose a two stage tone mapping approach, in which the first stage is a global method for range compression based on a gamma curve that equalizes the lightness histogram the best, and the second stage performs local contrast enhancement and color induction using neural activity models for the visual cortex.
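The first stage — choosing the gamma whose output lightness histogram is closest to uniform — can be sketched as a small search over candidate gammas, scoring each by the squared deviation of its histogram from a flat density. The candidate grid, bin count, and function name are illustrative assumptions.

```python
import numpy as np

def best_gamma(lightness, gammas=np.linspace(0.1, 1.0, 19)):
    """Global range-compression sketch: pick the gamma whose compressed
    lightness histogram best approximates a uniform distribution."""
    lightness = np.clip(np.asarray(lightness, dtype=float), 0, None)
    lightness = lightness / lightness.max()  # normalize to [0, 1]

    def flatness(g):
        # density=True makes a perfectly uniform histogram equal 1.0 per bin
        hist, _ = np.histogram(lightness ** g, bins=32, range=(0, 1),
                               density=True)
        return ((hist - 1.0) ** 2).sum()

    return min(gammas, key=flatness)
```

For data distributed as u^4 with u uniform, the equalizing exponent is 1/4, so the search should land near gamma = 0.25.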
NASA Astrophysics Data System (ADS)
Nine, H. M. Zulker
Metallic corrosion is of growing concern to industrial engineers and scientists. Corrosion attacks metal surfaces and causes structural damage as well as direct and indirect economic losses. Multiple corrosion monitoring tools are available, although they are time-consuming and costly. Given the wide availability of image-capturing devices today, image-based corrosion monitoring is an attractive technique. By setting up stainless steel SS 304 and low carbon steel QD 1008 panels in distilled water, half-saturated sodium chloride, and saturated sodium chloride solutions, with subsequent RGB image analysis in Matlab, a simple and cost-effective corrosion measurement tool has been identified and investigated in this research. Additionally, the open circuit potential and electrochemical impedance spectroscopy results have been compared with the RGB analysis to validate the corrosion measurements. Finally, to understand the importance of ambiguity in crisis communication, the communication process between Union Carbide and the Indian Government regarding the Bhopal incident in 1984 was analyzed.
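A toy version of the RGB analysis can be sketched as a red-dominance threshold over the panel image: count the fraction of pixels whose red channel dominates green and blue as a rough proxy for rust coverage. The thresholds and the function name are illustrative assumptions; the thesis's Matlab analysis is not specified in the abstract.

```python
import numpy as np

def rust_fraction(rgb, r_min=100, rg_margin=30):
    """Fraction of pixels whose red channel dominates green and blue by a
    margin -- a rough proxy for rust coverage. Thresholds are illustrative."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (r > r_min) & (r > g + rg_margin) & (r > b + rg_margin)
    return mask.mean()
```

Tracking this fraction over exposure time would give a crude corrosion-progress curve to compare against the electrochemical measurements.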
Embedded Palmprint Recognition System Using OMAP 3530
Shen, Linlin; Wu, Shipei; Zheng, Songhao; Ji, Zhen
2012-01-01
We have proposed in this paper an embedded palmprint recognition system using the dual-core OMAP 3530 platform. An improved algorithm based on palm code was proposed first. In this method, a Gabor wavelet is first convolved with the palmprint image to produce a response image, where local binary patterns are then applied to code the relation among the magnitude of the wavelet response at the central pixel with that of its neighbors. The method is fully tested using the public PolyU palmprint database. While palm code achieves only about 89% accuracy, over 96% accuracy is achieved by the proposed G-LBP approach. The proposed algorithm was then deployed to the DSP processor of OMAP 3530 to work together with the ARM processor for feature extraction. When complicated algorithms run on the DSP processor, the ARM processor can focus on image capture, user interface, and peripheral control. Integrated with an image sensing module and central processing board, the designed device can achieve accurate and real time performance. PMID:22438721
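The two building blocks of the G-LBP idea — a Gabor filter response and an 8-neighbour local binary pattern over the response magnitude — can be sketched as below. All filter parameters (size, wavelength, orientation, sigma) and function names are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0):
    """Real part of a Gabor filter (hypothetical parameters): a Gaussian
    envelope modulating a cosine carrier oriented at angle `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def lbp_code(mag, r, c):
    """8-neighbour local binary pattern at (r, c): each neighbour whose
    response magnitude is >= the centre contributes one bit."""
    center = mag[r, c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    return sum(int(mag[r + dr, c + dc] >= center) << i
               for i, (dr, dc) in enumerate(offsets))
```

In the full method the palmprint image would be convolved with `gabor_kernel`, the magnitude taken, and `lbp_code` evaluated at every interior pixel to build the feature map.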
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, digital cameras, etc. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
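The sigmoidal-boosting stage can be sketched as a renormalized logistic curve that lifts the mid-tones of a short-exposure frame while pinning black and white. The gain and midpoint values, and the function name, are illustrative assumptions; the paper's actual curve parameters are not given in the abstract.

```python
import numpy as np

def sigmoid_boost(img, gain=10.0, midpoint=0.35):
    """Sigmoidal boosting sketch for a short-exposure frame with values in
    [0, 1]. The logistic output is renormalized so 0 -> 0 and 1 -> 1."""
    img = np.asarray(img, dtype=float)
    s = 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))
    lo = 1.0 / (1.0 + np.exp(gain * midpoint))          # value at img = 0
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))  # value at img = 1
    return (s - lo) / (hi - lo)
```

After boosting, the frame would be weighted against the longer exposures in the per-macroblock fusion pass.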
NASA Astrophysics Data System (ADS)
House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor
2017-03-01
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several magnitudes lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparison to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and to display the CT image overlaid on the optical image. RESULTS: Accuracy testing of the camera yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20 × 16 × 10 cm workspace, constantly maintaining proper marker orientation. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user friendly implementation of the interface along with the high framerate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
Open source OCR framework using mobile devices
NASA Astrophysics Data System (ADS)
Zhou, Steven Zhiying; Gilani, Syed Omer; Winkler, Stefan
2008-02-01
Mobile phones have evolved from passive one-to-one communication devices into powerful handheld computing devices. Today most new mobile phones are capable of capturing images, recording video, browsing the internet, and much more. Exciting new social applications are emerging on the mobile landscape, such as business card readers, sign detectors, and translators. These applications help people quickly gather information in digital format and interpret it without the need to carry laptops or tablet PCs. However, despite these advancements, very little open source software is available for mobile phones. For instance, there are currently many open source OCR engines for the desktop platform but, to our knowledge, none available on the mobile platform. Keeping this in perspective, we propose a complete text detection and recognition system with speech synthesis ability, using existing desktop technology. In this work we developed a complete OCR framework with subsystems from the open source desktop community. This includes the popular open source OCR engine Tesseract for text detection and recognition, and the Flite speech synthesis module for adding text-to-speech ability.
NASA Astrophysics Data System (ADS)
Everard, Colm D.; Kim, Moon S.; Lee, Hoonsoo; O'Donnell, Colm P.
2016-05-01
An imaging device to detect fecal contamination in fresh produce fields could allow the producer to avoid harvesting fecally contaminated produce. E. coli O157:H7 outbreaks have been associated with fecally contaminated leafy greens. In this study, in-field spectral profiles of bovine fecal matter, soil, and spinach leaves are compared. A common-aperture imager, designed with two identical monochromatic cameras, a beam splitter, and optical filters, was used to simultaneously capture two spectral images of leaves contaminated with both fecal matter and soil. The optical filters were 10 nm full-width-half-maximum bandpass filters, one at 690 nm and the second at 710 nm, mounted in front of the object lenses. New images were created using the ratio of these two spectral images on a pixel-by-pixel basis. Image analysis results showed that fecal matter contamination could be distinguished from soil and leaf in the ratio images. This technology has the potential to allow detection of fecal contamination in produce fields, which can be a source of foodborne illness. It has the added benefit of mitigating cross-contamination during harvesting and processing.
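The pixel-by-pixel ratio image can be sketched directly, with an optional threshold to flag contaminated pixels. The threshold value and function names are illustrative assumptions; the study reports only that fecal matter separates from soil and leaf in the 690/710 ratio, not the decision boundary.

```python
import numpy as np

def band_ratio(img_690, img_710, eps=1e-6):
    """Pixel-wise ratio of the 690 nm and 710 nm band images; `eps` guards
    against division by zero in dark pixels."""
    a = np.asarray(img_690, dtype=float)
    b = np.asarray(img_710, dtype=float)
    return a / (b + eps)

def contamination_mask(img_690, img_710, threshold=0.8):
    """Flag pixels whose 690/710 ratio falls below an illustrative threshold."""
    return band_ratio(img_690, img_710) < threshold
```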
Method for eliminating artifacts in CCD imagers
Turko, Bojan T.; Yates, George J.
1992-01-01
An electronic method for eliminating artifacts in a video camera (10) employing a charge coupled device (CCD) (12) as an image sensor. The method comprises the step of initializing the camera (10) prior to normal read out and includes a first dump cycle period (76) for transferring radiation generated charge into the horizontal register (28) while the decaying image on the phosphor (39) being imaged is being integrated in the photosites, and a second dump cycle period (78), occurring after the phosphor (39) image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers (32). Image charge is then transferred from the photosites (36) and (38) to the vertical registers (32) and read out in conventional fashion. The inventive method allows the video camera (10) to be used in environments having high ionizing radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers (28) and (32), and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites (36) and (37).
Artificial vision: needs, functioning, and testing of a retinal electronic prosthesis.
Chader, Gerald J; Weiland, James; Humayun, Mark S
2009-01-01
Hundreds of thousands around the world have poor vision or no vision at all due to inherited retinal degenerations (RDs) like retinitis pigmentosa (RP). Similarly, millions suffer from vision loss due to age-related macular degeneration (AMD). In both of these allied diseases, the primary target for pathology is the retinal photoreceptor cells that dysfunction and die. Secondary neurons though are relatively spared. To replace photoreceptor cell function, an electronic prosthetic device can be used such that retinal secondary neurons receive a signal that simulates an external visual image. The composite device has a miniature video camera mounted on the patient's eyeglasses, which captures images and passes them to a microprocessor that converts the data to an electronic signal. This signal, in turn, is transmitted to an array of electrodes placed on the retinal surface, which transmits the patterned signal to the remaining viable secondary neurons. These neurons (ganglion, bipolar cells, etc.) begin processing the signal and pass it down the optic nerve to the brain for final integration into a visual image. Many groups in different countries have different versions of the device, including brain implants and retinal implants, the latter having epiretinal or subretinal placement. The device furthest along in development is an epiretinal implant sponsored by Second Sight Medical Products (SSMP). Their first-generation device had 16 electrodes with human testing in a Phase 1 clinical trial beginning in 2002. The second-generation device has 60+ electrodes and is currently in Phase 2/3 clinical trial. Increased numbers of electrodes are planned for future versions of the device. Testing of the device's efficacy is a challenge since patients admitted into the trial have little or no vision. Thus, methods must be developed that accurately and reproducibly record small improvements in visual function after implantation. 
Standard tests such as visual acuity, visual field, electroretinography, or even contrast sensitivity may not adequately capture some aspects of improvement that relate to a better quality of life (QOL). Because of this, some tests are now relying more on "real-world functional capacity" that better assesses possible improvement in aspects of everyday living. Thus, a new battery of tests has been suggested that includes (1) standard psychophysical testing, (2) performance in tasks that are used in real-life situations such as object discrimination, mobility, etc., and (3) well-crafted questionnaires that assess the patient's own feelings as to the usefulness of the device. In the Phase 1 trial of the SSMP 16-electrode device, six subjects with severe RP were implanted, with testing ongoing since then. First, it was evident that even limited sight restoration is a slow learning process that takes months for improvement to become evident. However, light perception was restored in all six patients. Moreover, all subjects ultimately saw discrete phosphenes and could perform simple visual spatial and motion tasks. As mentioned above, a Phase 2/3 trial is now ongoing with a 60+ device. A 250+ device is on the drawing board, and one with over 1000 electrodes is being planned. Each has the possibility of significantly improving a patient's vision and QOL, being smaller and safer in design and lasting for the lifetime of the patient. From theoretical modeling, it is estimated that a device with approximately 1000 electrodes could give good functional vision, i.e., face recognition and reading ability. This could be a reality within 5-10 years from now. In summary, no treatments are currently available for severely affected patients with RP and dry AMD. An electrical prosthetic device appears to offer hope in replacing the function of degenerating or dead photoreceptor neurons. 
Devices with new, sophisticated designs and increasing numbers of electrodes could allow for long-term restoration of functional sight in patients with improvement in object recognition, mobility, independent living, and general QOL.
Fiber-optic fringe projection with crosstalk reduction by adaptive pattern masking
NASA Astrophysics Data System (ADS)
Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard
2017-02-01
To enable in-process inspection of industrial manufacturing processes, measuring devices need to fulfill time and space constraints, while also being robust to environmental conditions, such as high temperatures and electromagnetic fields. A new fringe projection profilometry system is being developed, which is capable of performing the inspection of filigree tool geometries, e.g. gearing elements with tip radii of 0.2 mm, inside forming machines of the sheet-bulk metal forming process. Compact gradient-index rod lenses with a diameter of 2 mm allow for a compact design of the sensor head, which is connected to a base unit via flexible high-resolution image fibers with a diameter of 1.7 mm. The base unit houses a flexible DMD based LED projector optimized for fiber coupling and a CMOS camera sensor. The system is capable of capturing up to 150 gray-scale patterns per second as well as high dynamic range images from multiple exposures. Owing to fiber crosstalk and light leakage in the image fiber, signal quality suffers especially when capturing 3-D data of technical surfaces with highly varying reflectance or surface angles. An algorithm is presented, which adaptively masks parts of the pattern to reduce these effects via multiple exposures. The masks for valid surface areas are automatically defined according to different parameters from an initial capture, such as intensity and surface gradient. In a second step, the masks are re-projected to projector coordinates using the mathematical model of the system. This approach is capable of reducing both inter-pixel crosstalk and inter-object reflections on concave objects while maintaining measurement durations of less than 5 s.
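The automatic mask-definition step — keeping only surface areas that are well exposed and lie on moderate gradients in the initial capture — can be sketched as a simple per-pixel test. All threshold values and the function name are illustrative assumptions; the system's calibrated parameters are not given in the abstract.

```python
import numpy as np

def valid_area_mask(capture, low=0.05, high=0.95, max_grad=0.2):
    """Sketch of adaptive mask definition from an initial capture normalized
    to [0, 1]: keep pixels that are neither underexposed nor saturated and
    whose local intensity gradient is moderate."""
    capture = np.asarray(capture, dtype=float)
    gy, gx = np.gradient(capture)
    grad = np.hypot(gx, gy)  # local gradient magnitude
    return (capture > low) & (capture < high) & (grad < max_grad)
```

In the full system such a mask would then be re-projected into projector coordinates so that only the valid areas are illuminated in the next exposure.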
NASA Astrophysics Data System (ADS)
Millien, Christophe; Jean-Baptiste, Meredith C.; Manite, Garçon; Levitz, David
2015-03-01
Cervical cancer is a leading cause of cancer death for women across the developing world, where much of the infrastructure required for effective cervical cancer screening is unavailable because of limited resources. One of the most common methods of screening for cervical cancer is visual inspection with acetic acid (VIA), in which the cervix is examined with the naked eye. Given the inherent challenges in analysis and documentation when characterizing cervical tissue with the naked eye, an optical solution is needed. To address this challenge, a smartphone was modified into a mobile colposcope (a device used to image the cervix externally) by adding a custom-fit light source and optics. The mobile smartphone colposcope was designed to augment VIA and integrate easily within the standard of care. It is controlled by an app, which stores the captured cervical images on a portal, enabling remote doctors to evaluate the images and the treatment chosen by the health worker. Images from patients undergoing cervical cancer screening by a nurse using VIA in the University Hospital of Mirebalais (HUM) GYN outpatient clinic in Haiti were captured on the mobile smartphone colposcope. These images were later analyzed by an experienced OB/GYN at HUM, who determined whether or not the patient should be treated with cryoablation; more complicated cases were also shared with a consulting doctor in the US. The opinions of the experienced OB/GYN doctors at HUM, as well as the experts from the US, were used to educate nurses and midwives performing mobile colposcopy. These results suggest that remote assessment offered by mobile colposcopy can improve the training of health workers performing VIA and ultimately affect the therapy administered to patients.
Experimental study of hydraulics and sediment capture efficiency in catchbasins.
Tang, Yangbo; Zhu, David Z; Rajaratnam, N; van Duin, Bert
2016-12-01
Catchbasins (also known as gully pots in the UK and Australia) receive surface runoff and drain stormwater into storm sewers. Recent interest in catchbasins has focused on improving their effectiveness in removing sediment from stormwater. An experimental study was conducted to examine the hydraulic features and sediment capture efficiency of catchbasins with and without a bottom sump. A sump basin was found to increase the sediment capture efficiency significantly. The effect of inlet control devices, which are commonly used to control the amount of flow into the downstream storm sewer system, was also studied. These devices increase the water depth in the catchbasin and thereby increase the sediment capture efficiency. Equations are developed for predicting the sediment capture efficiency of catchbasins.
Light and portable novel device for diabetic retinopathy screening.
Ting, Daniel S W; Tay-Kearney, Mei Ling; Kanagasingam, Yogesan
2012-01-01
To validate the use of an economical portable multipurpose ophthalmic imaging device, EyeScan (Ophthalmic Imaging System, Sacramento, CA, USA), for diabetic retinopathy screening. Evaluation of a diagnostic device. One hundred thirty-six patients (272 eyes) were recruited from the diabetic retinopathy screening clinic of Royal Perth Hospital, Western Australia, Australia. All patients underwent three-field (optic disc, macular and temporal view) mydriatic retinal digital still photography captured by EyeScan and FF450 plus (Carl Zeiss Meditec, North America) and were subsequently examined by a senior consultant ophthalmologist using slit-lamp biomicroscopy (reference standard). All retinal images were interpreted by a consultant ophthalmologist and a medical officer. The outcome measures were the sensitivity, specificity and kappa statistics of EyeScan and FF450 plus with reference to the slit-lamp examination findings of the senior consultant ophthalmologist. For detection of any grade of diabetic retinopathy, EyeScan had a sensitivity and specificity of 93 and 98%, respectively (ophthalmologist), and 92 and 95%, respectively (medical officer). In contrast, FF450 plus images had a sensitivity and specificity of 95 and 99%, respectively (ophthalmologist), and 92 and 96%, respectively (medical officer). The overall kappa statistics for diabetic retinopathy grading for EyeScan and FF450 plus were 0.93 and 0.95 for the ophthalmologist and 0.88 and 0.90 for the medical officer, respectively. Given that the EyeScan requires minimal training to use and has excellent diagnostic accuracy in screening for diabetic retinopathy, it could potentially be utilized by primary eye care providers to screen widely for diabetic retinopathy in the community. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
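The reported sensitivity, specificity, and kappa statistics can all be computed from a 2×2 agreement table; a generic sketch (the counts in the test of this snippet are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 table.

    tp/fn are diseased eyes called positive/negative by the grader;
    tn/fp are healthy eyes called negative/positive.
    """
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    p_observed = (tp + tn) / n
    # Chance agreement expected from the marginal totals.
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_expected = p_yes + p_no
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return sensitivity, specificity, kappa
```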
Comparison of normal and phase stepping shearographic NDE
NASA Astrophysics Data System (ADS)
Andhee, A.; Gryzagoridis, J.; Findeis, D.
2005-05-01
The paper presents results of non-destructive testing of composite main rotor helicopter blade calibration specimens using the laser-based optical NDE technique known as shearography. The tests were performed initially using the well-established near real-time non-destructive technique of shearography, with the specimens perturbed during testing for a few seconds using hot air from a domestic hair dryer. After modification of the shearing device used in the shearographic setup, phase stepping of one of the sheared images captured by the CCD camera was enabled, and identical tests were performed on the composite main rotor helicopter blade specimens. Considerable enhancement of the images depicting the defects on the specimens is noted, suggesting that phase stepping is a desirable enhancement to the traditional shearographic setup.
A low cost real-time motion tracking approach using webcam technology.
Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh
2015-02-05
Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.
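A minimal stand-in for the webcam tracking step: the paper uses an image processing algorithm in LabVIEW Vision Assistant, so this numpy sketch substitutes simple intensity thresholding and centroid extraction, with an illustrative threshold value:

```python
import numpy as np

def track_marker(frame, threshold=200):
    """Locate a bright marker in a grayscale frame.

    Thresholds the frame and returns the (x, y) centroid of the
    bright pixels in pixel coordinates, or None if no marker is
    visible. A real pipeline would also convert pixels to joint
    angles or limb trajectories for the feedback display.
    """
    ys, xs = np.nonzero(frame >= threshold)
    if ys.size == 0:
        return None  # marker not visible in this frame
    return xs.mean(), ys.mean()
```

Running this per frame at the webcam's capture rate yields the two-dimensional trajectory that would drive the visual feedback.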
The BlackBerry Project: Capturing the Content of Adolescents' Text Messaging
ERIC Educational Resources Information Center
Underwood, Marion K.; Rosen, Lisa H.; More, David; Ehrenreich, Samuel E.; Gentsch, Joanna K.
2012-01-01
This article presents an innovative method for capturing the content of adolescents' electronic communication on handheld devices: text messaging, e-mail, and instant messaging. In an ongoing longitudinal study, adolescents were provided with BlackBerry devices with service plans paid for by the investigators, and use of text messaging was…
40 CFR 63.4981 - What definitions apply to this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... subpart are defined in the CAA, in 40 CFR 63.2, and in this section as follows: Add-on control means an air pollution control device such as a thermal oxidizer or carbon adsorber that reduces pollution in... those emissions into an add-on air pollution control device. Capture efficiency or capture system...
40 CFR 63.4981 - What definitions apply to this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... subpart are defined in the CAA, in 40 CFR 63.2, and in this section as follows: Add-on control means an air pollution control device such as a thermal oxidizer or carbon adsorber that reduces pollution in... those emissions into an add-on air pollution control device. Capture efficiency or capture system...
40 CFR 63.4981 - What definitions apply to this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... subpart are defined in the CAA, in 40 CFR 63.2, and in this section as follows: Add-on control means an air pollution control device such as a thermal oxidizer or carbon adsorber that reduces pollution in... those emissions into an add-on air pollution control device. Capture efficiency or capture system...
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
Capsule Endoscopy in Patients with Implantable Electromedical Devices is Safe
Harris, Lucinda A.; Hansel, Stephanie L.; Rajan, Elizabeth; Srivathsan, Komandoor; Rea, Robert; Crowell, Michael D.; Fleischer, David E.; Pasha, Shabana F.; Gurudu, Suryakanth R.; Heigh, Russell I.; Shiff, Arthur D.; Post, Janice K.; Leighton, Jonathan A.
2013-01-01
Background and Study Aims. The presence of an implantable electromechanical cardiac device (IED) has long been considered a relative contraindication to the performance of video capsule endoscopy (CE). The primary aim of this study was to evaluate the safety of CE in patients with IEDs. A secondary purpose was to determine whether IEDs have any impact on images captured by CE. Patients and Methods. A retrospective chart review of all patients who had a capsule endoscopy at Mayo Clinic in Scottsdale, AZ, USA, or Rochester, MN, USA, (January 2002 to June 2010) was performed to identify CE studies done on patients with IEDs. One hundred and eighteen capsule studies performed in 108 patients with IEDs were identified and reviewed for demographic data, method of preparation, and study data. Results. The most common indications for CE were obscure gastrointestinal bleeding (77%), anemia (14%), abdominal pain (5%), celiac disease (2%), diarrhea (1%), and Crohn's disease (1%). Postprocedure assessments did not reveal any detectable alteration in the function of the IED. One patient with an ICD had a 25-minute loss of capsule imaging due to a recorder defect. Two patients with LVADs had interference with capsule image acquisition. Conclusions. CE did not interfere with IED function, including PM, ICD, and/or LVAD, and thus appears safe. Additionally, PM and ICD do not appear to interfere with image acquisition, but LVAD may interfere with capsule images and requires that capsule leads be positioned as far away as possible from the IED to assure reliable image acquisition. PMID:23710168
NASA Astrophysics Data System (ADS)
Brüstle, Stefan; Erdnüß, Bastian
2016-10-01
In recent years, operational costs of unmanned aircraft systems (UAS) have decreased massively. New sensors satisfying the weight and size restrictions of even small UAS cover many different spectral ranges and spatial resolutions. As a result, airborne imagery has become more and more available. Such imagery is used to address many different tasks in various fields of application. For many of those tasks, not only the content of the imagery itself is of interest, but also its spatial location. This requires the imagery to be properly georeferenced. Many UAS have an integrated GPS receiver together with some kind of INS device acquiring the sensor orientation to provide the georeference. However, both GPS and INS data can easily become unavailable for a period of time during a flight, e.g. due to sensor malfunction, transmission problems or jamming. Imagery gathered during such times lacks a georeference. Moreover, even in datasets not affected by such problems, GPS and INS inaccuracies, together with potentially poor knowledge of ground elevation, can render location information accuracy less than sufficient for a given task. To provide or improve the georeference of an affected image, an image-to-reference registration can be performed if a suitable reference is available, e.g. a georeferenced orthophoto covering the area of the image to be georeferenced. Registration, and thus georeferencing, is achieved by determining a transformation between the image to be referenced and the reference that maximizes the coincidence of relevant structures present in both. Many methods have been developed to accomplish this task. Regardless of their differences, they usually perform better the more similar the image and the reference are in appearance.
This contribution evaluates a selection of such methods, all differing in the type of structure they use to assess coincidence, with respect to their ability to tolerate dissimilarity in appearance. Similarity in appearance depends mainly on the following aspects: the similarity of abstraction levels (Is the reference e.g. an orthophoto or a topographical map?), the similarity of sensor types and spectral bands (Is the image e.g. a SAR image while the reference was passively sensed? Was e.g. a NIR sensor used to capture the image while a VIS sensor was used for the reference?), the similarity of resolutions (Is the ground sampling distance of the reference comparable to that of the image?), the similarity of capture parameters (Are e.g. the viewing angles comparable in the image and in the reference?), and the similarity of image content (Was there e.g. snow coverage present when the image was captured but not when the reference was captured?). The evaluation is done by determining the performance of each method on a set of image/reference pairs representing various degrees of dissimilarity with respect to each of the above-mentioned aspects of similarity.
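One of the simplest coincidence measures such evaluations consider is zero-mean normalized cross-correlation maximized over candidate translations; a brute-force sketch (real registration methods add feature detection, scale, and rotation, and this exhaustive search is only practical for small offsets):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def register_translation(image, reference):
    """Exhaustive-search translation registration against a reference.

    Slides the image over every feasible offset in the (larger)
    reference and returns the offset maximizing the NCC score.
    """
    ih, iw = image.shape
    rh, rw = reference.shape
    best, best_off = -2.0, (0, 0)
    for dy in range(rh - ih + 1):
        for dx in range(rw - iw + 1):
            score = ncc(image, reference[dy:dy + ih, dx:dx + iw])
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off, best
```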
O'Brien, Caroline C; Kolandaivelu, Kumaran; Brown, Jonathan; Lopes, Augusto C; Kunio, Mie; Kolachalama, Vijaya B; Edelman, Elazer R
2016-01-01
Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D 'clouds' of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessel, r² = 0.91, 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional contexts in defining the hemodynamic consequence of local deployment errors. Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interests in simulation-derived hemodynamic assessment as surrogate measures of biological risk, such fused modalities offer a new window into patient-specific implant environments.
Intelligent image capture of cartridge cases for firearms examiners
NASA Astrophysics Data System (ADS)
Jones, Brett C.; Guerci, Joseph R.
1997-02-01
The FBI's DRUGFIRE™ system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of the many human examiners' visual perception and the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' that assesses the quality of the image under test utilizing a custom-designed neural network.
2015-02-27
ISS042E290579 (02/27/2015) --- On Feb. 27, 2015, a series of CubeSats, small experimental satellites, were deployed via a special device mounted on the Japanese Experiment Module (JEM) Remote Manipulator System (JEMRMS). Deployed satellites included twelve Dove satellites, one TechEdSat-4, one GEARRSat, one LambdaSat, and one MicroMas. These satellites perform a variety of functions, from capturing new Earth imagery, to using microwave scanners to create 3D images of hurricanes, to developing new methods for returning science samples to Earth from space. The small satellites were deployed through the first week of March.
Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf
2016-01-01
Focused plenoptic capturing is one of the light field capturing techniques. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene within each microlens image and across microlens images. The capturing results in a significant amount of redundant information, and the captured image usually has a large resolution. A coding scheme that removes the redundancy before coding can be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is then employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and over 20 percent compared with a High Efficiency Video Coding block copying mode.
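Disparity-based interpolation from a sparse reference view can be sketched as a per-pixel horizontal shift; this toy model omits the inpainting of disoccluded pixels and the HEVC prediction stage described in the abstract:

```python
import numpy as np

def predict_view(sparse_view, disparity):
    """Predict a neighbouring microlens image from a sparse view.

    Each output pixel is fetched from the reference view shifted
    horizontally by its (integer-rounded) disparity; pixels whose
    source falls outside the view are left at zero, where a real
    codec would inpaint them.
    """
    h, w = sparse_view.shape
    predicted = np.zeros_like(sparse_view)
    for y in range(h):
        for x in range(w):
            xs = x + int(disparity[y, x])  # source column in the reference
            if 0 <= xs < w:
                predicted[y, x] = sparse_view[y, xs]
    return predicted
```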
NASA Astrophysics Data System (ADS)
Golnik, C.; Bemmerer, D.; Enghardt, W.; Fiedler, F.; Hueso-González, F.; Pausch, G.; Römer, K.; Rohling, H.; Schöne, S.; Wagner, L.; Kormoll, T.
2016-06-01
The finite range of a proton beam in tissue opens new vistas for the delivery of a highly conformal dose distribution in radiotherapy. However, the actual particle range, and therefore the accurate dose deposition, is sensitive to the tissue composition in the proton path. Range uncertainties, resulting from limited knowledge of this tissue composition or from positioning errors, are accounted for in the form of safety margins. Thus, the unverified particle range constrains the principal benefit of proton therapy. Detecting prompt γ-rays, a side product of proton-tissue interaction, aims at on-line and non-invasive monitoring of the particle range, and therefore at exploiting the potential of proton therapy. Compton imaging of the spatial prompt γ-ray emission is a promising measurement approach. Prompt γ-rays exhibit emission energies of several MeV; hence, common radioactive sources cannot provide the energy range a prompt γ-ray imaging device must be designed for. In this work a benchmark measurement setup for the production of a localized, monoenergetic 4.44 MeV γ-ray source is introduced. At the Tandetron accelerator at the HZDR, the proton-capture resonance reaction 15N(p,α γ4.439)12C is utilized. This reaction provides the same nuclear de-excitation (and γ-ray emission) that occurs as an intense prompt γ-ray line in proton therapy. The emission yield is quantitatively described. A two-stage Compton imaging device, dedicated to prompt γ-ray imaging, is tested at the setup as an example. Besides successful imaging tests, the detection efficiency of the prototype at 4.44 MeV is derived from the measured data. Combining this efficiency with the emission yield for prompt γ-rays, the number of valid Compton events induced by γ-rays in the energy region around 4.44 MeV is estimated for the prototype implemented in a therapeutic treatment scenario. As a consequence, the detection efficiency turns out to be a key parameter for prompt γ-ray Compton imaging, limiting the applicability of the prototype in its current realization.
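The closing estimate, combining the emission yield with the detection efficiency, reduces to a product; a trivial sketch, with every number in the test below a hypothetical placeholder rather than a value from the paper:

```python
def expected_compton_events(protons, gamma_yield, efficiency):
    """Estimate valid Compton events from the 4.44 MeV line.

    protons:     number of protons delivered
    gamma_yield: prompt 4.44 MeV gamma-rays emitted per proton
    efficiency:  fraction of emitted gammas yielding a valid
                 Compton event in the device
    """
    return protons * gamma_yield * efficiency
```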
Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A
2016-08-01
Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of the captured images. Along with image resolution and frame rate, the brightness of the image is an important parameter that influences image quality, which motivates the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light emitting diodes (LEDs) are normally used as sources, where modulated pulses are used to control LED brightness. In practice, instances of under- and over-illumination are very common in WCE: the former produces dark images, and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system that is based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. The captured images are segmented into four equal regions, and the brightness level of each region is calculated. An adaptive sigmoid function is then used to find the optimized brightness level, and accordingly a new duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules such as Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm works well in controlling the brightness level according to the environmental conditions, and as a result, good quality images are captured with an average 40% brightness level, which saves power consumption of the capsule.
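The adaptive brightness loop described above can be modeled as follows; the four-region segmentation and sigmoid mapping follow the abstract, while the gain, equal region weighting, and 8-bit input range are illustrative assumptions:

```python
import numpy as np

def next_duty_cycle(image, target=0.40, gain=8.0):
    """Adaptive illumination control for a capsule endoscope.

    Splits the frame into four equal regions, averages their
    brightness, and maps the error from a target level through a
    sigmoid to the LED duty cycle for the next frame. The 40% target
    matches the average level reported; the gain is illustrative.
    """
    h, w = image.shape
    regions = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
               image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
    level = np.mean([r.mean() for r in regions]) / 255.0
    error = target - level          # positive -> scene too dark
    return 1.0 / (1.0 + np.exp(-gain * error))  # duty cycle in (0, 1)
```

A dark frame thus drives the duty cycle toward 1 (more light) and a saturated frame toward 0, with smooth behavior near the target.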
Rare Cell Capture in Microfluidic Devices
Pratt, Erica D.; Huang, Chao; Hawkins, Benjamin G.; Gleghorn, Jason P.; Kirby, Brian J.
2010-01-01
This article reviews existing methods for the isolation, fractionation, or capture of rare cells in microfluidic devices. Rare cell capture devices face the challenge of maintaining the efficiency standard of traditional bulk separation methods such as flow cytometers and immunomagnetic separators while requiring very high purity of the target cell population, which is typically already at very low starting concentrations. Two major classifications of rare cell capture approaches are covered: (1) non-electrokinetic methods (e.g., immobilization via antibody or aptamer chemistry, size-based sorting, and sheath flow and streamline sorting) are discussed for applications using blood cells, cancer cells, and other mammalian cells, and (2) electrokinetic (primarily dielectrophoretic) methods using both electrode-based and insulative geometries are presented with a view towards pathogen detection, blood fractionation, and cancer cell isolation. The included methods were evaluated based on performance criteria including cell type modeled and used, number of steps/stages, cell viability, and enrichment, efficiency, and/or purity. Major areas for improvement are increasing viability and capture efficiency/purity of directly processed biological samples, as a majority of current studies only process spiked cell lines or pre-diluted/lysed samples. Despite these current challenges, multiple advances have been made in the development of devices for rare cell capture and the subsequent elucidation of new biological phenomena; this article serves to highlight this progress as well as the electrokinetic and non-electrokinetic methods that can potentially be combined to improve performance in future studies. PMID:21532971
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented by hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images with hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly save the cost of CIS. This method not only supports single image capture, but also bracketing to capture three images at a time. The proposed method was implemented by hardware description language and verified by a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time with a low cost and can correct the color of under-exposed images well.
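The ILUT construction by histogram matching of the two preview exposures can be modeled in software; a minimal sketch in which the bin count and the preview contents used for testing are illustrative, and the hardware pipelining is of course abstracted away:

```python
import numpy as np

def build_ilut(short_exp, long_exp, bins=256):
    """Build a look-up table mapping the tone distribution of a
    short-exposure preview onto that of a long-exposure preview.

    Classic histogram matching via the two cumulative distribution
    functions: each input level maps to the output level whose CDF
    value first reaches the input level's CDF value.
    """
    cdf_s = np.cumsum(np.bincount(short_exp.ravel(), minlength=bins))
    cdf_l = np.cumsum(np.bincount(long_exp.ravel(), minlength=bins))
    cdf_s = cdf_s / cdf_s[-1]
    cdf_l = cdf_l / cdf_l[-1]
    return np.searchsorted(cdf_l, cdf_s).clip(0, bins - 1).astype(np.uint8)

def correct(image, ilut):
    """Apply the LUT to an under-exposed capture; no frame buffer needed."""
    return ilut[image]
```

Because the LUT is fixed before the main capture arrives, correction is a pure per-pixel lookup, which is what lets the hardware run without frame memory.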
2012-11-08
S48-E-013 (15 Sept 1991) --- The Upper Atmosphere Research Satellite (UARS) in the payload bay of the earth-orbiting Discovery. UARS is scheduled for deploy on flight day three of the STS-48 mission. Data from UARS will enable scientists to study ozone depletion in the stratosphere, or upper atmosphere. This image was transmitted by the Electronic Still Camera (ESC), Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE.
An imaging colorimeter for noncontact tissue color mapping.
Balas, C
1997-06-01
There has been a considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling the independent capture of the color information for any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurements and mapping, free of the errors and limitations present in conventional colorimeters. This system was used for monitoring blood supply changes in psoriatic plaques that had undergone psoralen and ultraviolet-A radiation (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and the objective evaluation of treatment effectiveness.
Integrated telemedicine workstation for intercontinental grand rounds
NASA Astrophysics Data System (ADS)
Willis, Charles E.; Leckie, Robert G.; Brink, Linda; Goeringer, Fred
1995-04-01
The Telemedicine Spacebridge to Moscow was a series of intercontinental sessions sponsored jointly by NASA and the Moscow Academy of Medicine. To improve the quality of medical images presented, the MDIS Project developed a workstation for acquisition, storage, and interactive display of radiology and pathology images. The workstation was based on a Macintosh IIfx platform with a laser digitizer for radiographs and video capture capability for microscope images. Images were transmitted via the Russian Lyoutch satellite, which had only a single video channel available and no high-speed data channels. Two workstations were configured -- one for use at the Uniformed Services University of Health Sciences in Bethesda, MD, and the other for use at the Hospital of the Interior in Moscow, Russia. The two workstations were used many times during 16 sessions. As clinicians used the systems, we modified the original configuration to improve interactive use. This project demonstrated that numerous acquisition and output devices could be brought together in a single interactive workstation. The video images were satisfactory for remote consultation in a grand rounds format.
A programmable light engine for quantitative single molecule TIRF and HILO imaging.
van 't Hoff, Marcel; de Sars, Vincent; Oheim, Martin
2008-10-27
We report on a simple yet powerful implementation of objective-type total internal reflection fluorescence (TIRF) and highly inclined and laminated optical sheet (HILO, a type of dark-field) illumination. Instead of focusing the illuminating laser beam to a single stationary spot close to the edge of the microscope objective, we scan the focused spot in a circular orbit during the acquisition of a fluorescence image, thereby illuminating the sample from various directions. We measure parameters relevant for quantitative image analysis during fluorescence image acquisition by capturing an image of the excitation light distribution in an equivalent objective back focal plane (BFP). Operating at scan rates above 1 MHz, our programmable light engine allows directional averaging by spinning the spot in its circular orbit even for sub-millisecond exposure times. We show that restoring the symmetry of TIRF/HILO illumination reduces scattering and produces an evenly lit field of view that affords on-line analysis of evanescent-field excited fluorescence without pre-processing. Utilizing crossed acousto-optical deflectors, our device generates arbitrary intensity profiles in the BFP, permitting variable-angle, multi-color illumination, or rapid exchange of objective lenses.
High throughput imaging cytometer with acoustic focussing.
Zmijan, Robert; Jonnalagadda, Umesh S; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn; Glynne-Jones, Peter
2015-10-31
We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells, permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however, a longer device would remove this constraint.
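The advantage of the planar format over 1D cytometry is that throughput scales with the number of particles imaged per frame. A minimal sketch of the estimate, assuming a hypothetical sheet occupancy of 2,600 beads per frame (the per-frame count is our assumption; only the frame rate and total throughput are reported in the abstract):

```python
# Rough throughput estimate for a planar imaging flow cytometer.
# Assumption (hypothetical): each frame images ~2600 beads in the levitated sheet.
beads_per_frame = 2600   # assumed sheet occupancy per image, NOT from the paper
frame_rate_fps = 80      # frame rate reported in the abstract

throughput = beads_per_frame * frame_rate_fps  # beads per second
print(throughput)  # 208000, consistent with the reported 208 000 beads/s
```

Under this assumption the reported figure is reproduced exactly; a 1D system interrogating one particle at a time at the same rate would be orders of magnitude slower.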
Imaging intracellular protein dynamics by spinning disk confocal microscopy
Stehbens, Samantha; Pemble, Hayley; Murrow, Lindsay; Wittmann, Torsten
2012-01-01
The palette of fluorescent proteins has grown exponentially over the last decade, and as a result live imaging of cells expressing fluorescently tagged proteins is becoming more and more mainstream. Spinning disk confocal microscopy (SDC) is a high-speed optical sectioning technique, and a method of choice to observe and analyze intracellular fluorescent protein dynamics at high spatial and temporal resolution. In an SDC system, a rapidly rotating pinhole disk generates thousands of points of light that scan the specimen simultaneously, which allows direct capture of the confocal image with low-noise, scientific-grade cooled charge-coupled device (CCD) cameras, and can achieve frame rates of up to 1000 frames per second. In this chapter we describe important components of a state-of-the-art spinning disk system optimized for live cell microscopy, and provide a rationale for specific design choices. We also give guidelines on how other imaging techniques such as total internal reflection fluorescence (TIRF) microscopy or spatially controlled photoactivation can be coupled with SDC imaging, and provide a short protocol on how to generate cell lines stably expressing fluorescently tagged proteins by lentivirus-mediated transduction. PMID:22264541
ERIC Educational Resources Information Center
Witton, Gemma
2017-01-01
Lecture Capture technologies are becoming widespread in UK Higher Education with many institutions adopting a capture-all approach. Installations of capture devices in all teaching rooms and lecture theatres, scheduled recordings through integration with timetabling and automated distribution through virtual learning environments are swiftly…
NASA Astrophysics Data System (ADS)
Higashino, Satoru; Kobayashi, Shoei; Yamagami, Tamotsu
2007-06-01
High data transfer rates have been demanded of data storage devices along with increasing storage capacity. In order to increase the transfer rate, high-speed data processing techniques in read-channel devices are required. Generally, a parallel architecture is utilized for high-speed digital processing. We have developed a new architecture of Interpolated Timing Recovery (ITR) to achieve a high data transfer rate and a wide capture range in read-channel devices for information storage channels. It facilitates parallel implementation on large-scale integration (LSI) devices.
Joint denoising, demosaicing, and chromatic aberration correction for UHD video
NASA Astrophysics Data System (ADS)
Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank
2017-09-01
High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capture devices. To achieve further increases in resolution, numerous challenges must be addressed. Due to the reduced size of the pixel, the amount of light collected also decreases, leading to an increased noise level. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations. Even when high-quality lenses are used, some chromatic aberration artefacts remain. Furthermore, the noise level increases additionally at higher frame rates. To reduce the complexity and the price of the camera, one sensor captures all three colors by relying on a Color Filter Array. In order to obtain a full-resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method that jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. In order to reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.
Panoramic Epipolar Image Generation for Mobile Mapping System
NASA Astrophysics Data System (ADS)
Chen, T.; Yamamoto, K.; Chhatkuli, S.; Shimamura, H.
2012-07-01
The notable improvements in performance and the low cost of digital cameras and GPS/IMU devices have, over the last 20 years, gradually made MMSs (Mobile Mapping Systems) one of the most important tools for mapping highway and railway networks, generating and updating road navigation data, and constructing urban 3D models. Moreover, the demand for large-scale, street-level image database construction by internet giants such as Google and Microsoft has driven the further rapid development of this technology. As one of the most important sensors, omni-directional cameras are commonly utilized on many MMSs to collect panoramic images for 3D close-range photogrammetry and fusion with 3D laser point clouds, since these cameras can record much visual information about the real environment in a single image, with a field of view of 360° in the longitude direction and 180° in the latitude direction. This paper addresses the problem of panoramic epipolar image generation for 3D modelling and mapping by stereoscopic viewing. The panoramic images are captured with a Point Grey Ladybug3 mounted on top of a Mitsubishi MMS-X 220 at 2 m intervals along streets in an urban environment. Onboard GPS/IMU, a speedometer and post-sequence image analysis techniques such as bundle adjustment provide high-accuracy position and attitude data for the panoramic images and laser data; this makes it possible to construct the epipolar geometric relationship between any two adjacent panoramic images, from which the panoramic epipolar images can be generated. Three kinds of projection planes are considered for the epipolar images: sphere, cylinder and flat plane. Finally, we select the flat plane and use its effective parts (the middle parts on the two sides of the baseline) for epipolar image generation. The corresponding geometric relations and results are presented in this paper.
Xia, Yiqiu; Tang, Yi; Yu, Xu; Wan, Yuan; Chen, Yizhu; Lu, Huaguang; Zheng, Si-Yang
2016-01-01
Viral diseases are perpetual threats to human and animal health. Detection and characterization of viral pathogens require accurate, sensitive and rapid diagnostic assays. For field and clinical samples, the sample preparation procedures limit the ultimate performance and utility of the overall virus diagnostic protocols. Here, we present the development of a microfluidic device embedded with a porous silicon nanowire (pSiNW) forest for label-free, size-based, point-of-care virus capture in a continuous curved-flow design. The pSiNW forests with specific inter-wire spacing were synthesized in situ on both the bottom and sidewalls of the microchannels in a batch process. With the enhancement effect of Dean flow, we demonstrated that ~50% of H5N2 avian influenza viruses were physically trapped without device clogging. A unique feature of the device is that captured viruses can be released by inducing self-degradation of the pSiNWs in a physiological aqueous environment. About 60% of the captured viruses can be released within 24 hours for virus culture, subsequent molecular diagnosis, and other virus characterization and analyses. This device performs viable, unbiased and label-free virus isolation and release. It has great potential for virus discovery, virus isolation and culture, functional studies of virus pathogenicity and transmission, drug screening, and vaccine development. PMID:27918640
Feasibility of a novel one-stop ISET device to capture CTCs and its clinical application
Zheng, Liang; Zhi, Xuan; Cheng, Boran; Chen, Yuanyuan; Zhang, Chunxiao; Shi, Dongdong; Song, Haibin; Cai, Congli; Zhou, Pengfei; Xiong, Bin
2017-01-01
Introduction: Circulating tumor cells (CTCs) play a crucial role in cancer metastasis. In this study, we introduce a novel isolation-by-size-of-epithelial-tumor-cells (ISET) device with automated isolation and staining procedures, named one-stop ISET (osISET), and validate its feasibility for capturing CTCs from cancer patients. Moreover, we investigate the correlation between clinicopathologic features and CTCs in colorectal cancer (CRC) in order to explore its clinical application. Results: The capture efficiency ranged from 80.3% to 88% with tumor cells spiked into medium, and from 67% to 78.3% with tumor cells spiked into healthy donors' blood. In blood samples from 72 CRC patients, CTCs and clusters of circulating tumor cells (CTC-clusters) were detected with positive rates of 52.8% (38/72) and 18.1% (13/72), respectively. Moreover, the CTC positive rate was associated with lymphatic or venous invasion, tumor depth, lymph node metastasis and TNM stage in CRC patients (p < 0.01). Lymphocyte count and neutrophil-to-lymphocyte ratio (NLR) were significantly different between the CTC-positive and CTC-negative groups (p < 0.01). Materials and Methods: The capture efficiency of the device was tested by spiking cancer cells (MCF-7, A549, SW480, HeLa) into medium or blood samples from healthy donors. Blood samples from 72 CRC patients were analyzed with the osISET device. The clinicopathologic characteristics of the 72 CRC patients were collected, and their association with CTC positive rate or CTC count was analyzed. Conclusions: Our osISET device was feasible for capturing and identifying CTCs and CTC-clusters from cancer patients. In addition, our device holds potential for application in cancer management. PMID:27935872
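The positive rates quoted in the abstract are simple proportions of patients with at least one detected CTC (or CTC-cluster); a minimal sketch of the calculation, using only the counts given above:

```python
# CTC positive rate = patients with a positive detection / patients tested.
def positive_rate(positives: int, total: int) -> float:
    return positives / total

ctc_rate = positive_rate(38, 72)       # CTC-positive patients (38/72)
cluster_rate = positive_rate(13, 72)   # CTC-cluster-positive patients (13/72)
print(f"{ctc_rate:.1%}, {cluster_rate:.1%}")  # 52.8%, 18.1%
```

The printed values match the 52.8% and 18.1% rates reported in the abstract.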
Early melanoma diagnosis with mobile imaging.
Do, Thanh-Toan; Zhou, Yiren; Zheng, Haitian; Cheung, Ngai-Man; Koh, Dawn
2014-01-01
We investigate a mobile imaging system for early diagnosis of melanoma. Unlike previous work, we focus on smartphone-captured images and propose a detection system that runs entirely on the smartphone. Smartphone-captured images taken under loosely controlled conditions introduce new challenges for melanoma detection, while processing performed on the smartphone is subject to computation and memory constraints. To address these challenges, we propose to localize the skin lesion by combining fast skin detection with the fusion of two fast segmentation results. We propose new features to capture color variation and border irregularity, which are useful for smartphone-captured images. We also propose a new feature selection criterion to select the small set of good features used in the final lightweight system. Our evaluation confirms the effectiveness of the proposed algorithms and features. In addition, we present our system prototype, which computes the selected visual features from a user-captured skin lesion image and analyzes them to estimate the likelihood of malignancy, all on an off-the-shelf smartphone.
40 CFR 63.3100 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... compliance with the operating limits for emission capture systems and add-on control devices required by § 63... maintain a log detailing the operation and maintenance of the emission capture systems, add-on control... add-on control device performance tests have been completed, as specified in § 63.3160. (f) If your...
40 CFR 63.3100 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... compliance with the operating limits for emission capture systems and add-on control devices required by § 63... maintain a log detailing the operation and maintenance of the emission capture systems, add-on control... add-on control device performance tests have been completed, as specified in § 63.3160. (f) If your...
40 CFR 63.4981 - What definitions apply to this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... defined in the CAA, in 40 CFR 63.2, and in this section as follows: Add-on control means an air pollution control device such as a thermal oxidizer or carbon adsorber that reduces pollution in an air stream by... add-on air pollution control device. Capture efficiency or capture system efficiency means the portion...
The ion capturing effect of 5° SiOx alignment films in liquid crystal devices
NASA Astrophysics Data System (ADS)
Huang, Yi; Bos, Philip J.; Bhowmik, Achintya
2010-09-01
We show that SiOx deposited at 5° to the interior surface of a liquid crystal cell allows a surprisingly substantial reduction in the ion concentration of liquid crystal devices. We have investigated this effect and found that this type of film, owing to its surface morphology, captures ions from the liquid crystal material. Ion adsorption on the 5° SiOx film obeys the Langmuir isotherm. The experimental results shown allow the ion capturing capacity of these films to be estimated at more than 10 000/μm². These materials are useful for new types of very-low-power liquid crystal devices such as e-books.
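The Langmuir isotherm mentioned above relates the adsorbed ion density on the film to the bulk ion concentration, saturating at a maximum capacity. A generic sketch of the model, not the authors' fit; the equilibrium constant and concentration values below are illustrative assumptions, with only the saturation capacity's order of magnitude taken from the abstract:

```python
# Langmuir adsorption isotherm: adsorbed density n(c) = n_sat * K*c / (1 + K*c).
# Coverage rises linearly at low concentration and saturates at n_sat.
def langmuir_density(c: float, n_sat: float, k: float) -> float:
    """Adsorbed ion density for bulk concentration c, saturation density
    n_sat (ions/um^2) and equilibrium constant k (illustrative units)."""
    return n_sat * k * c / (1.0 + k * c)

n_sat = 10_000.0  # ions/um^2 -- the order of capacity reported in the abstract
k = 1e-3          # hypothetical equilibrium constant, NOT from the paper
print(langmuir_density(1e6, n_sat, k))  # close to n_sat at high concentration
```

The saturating form is what makes a finite "ion capturing capacity" a meaningful figure of merit for such films.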
Investigation of hyper-NA scanner emulation for photomask CDU performance
NASA Astrophysics Data System (ADS)
Poortinga, Eric; Scheruebl, Thomas; Conley, Will; Sundermann, Frank
2007-02-01
As the semiconductor industry moves toward immersion lithography using numerical apertures above 1.0 the quality of the photomask becomes even more crucial. Photomask specifications are driven by the critical dimension (CD) metrology within the wafer fab. Knowledge of the CD values at resist level provides a reliable mechanism for the prediction of device performance. Ultimately, tolerances of device electrical properties drive the wafer linewidth specifications of the lithography group. Staying within this budget is influenced mainly by the scanner settings, resist process, and photomask quality. Tightening of photomask specifications is one mechanism for meeting the wafer CD targets. The challenge lies in determining how photomask level metrology results influence wafer level imaging performance. Can it be inferred that photomask level CD performance is the direct contributor to wafer level CD performance? With respect to phase shift masks, criteria such as phase and transmission control are generally tightened with each technology node. Are there other photomask relevant influences that effect wafer CD performance? A comprehensive study is presented supporting the use of scanner emulation based photomask CD metrology to predict wafer level within chip CD uniformity (CDU). Using scanner emulation with the photomask can provide more accurate wafer level prediction because it inherently includes all contributors to image formation related to the 3D topography such as the physical CD, phase, transmission, sidewall angle, surface roughness, and other material properties. Emulated images from different photomask types were captured to provide CD values across chip. Emulated scanner image measurements were completed using an AIMS™ 45-193i with its hyper-NA, through-pellicle data acquisition capability including the Global CDU Map™ software option for AIMS™ tools. The through-pellicle data acquisition capability is an essential prerequisite for capturing final CDU data (after final clean and pellicle mounting) before the photomask ships or for re-qualification at the wafer fab. Data was also collected on these photomasks using a conventional CD-SEM metrology system with the pellicles removed. A comparison was then made to wafer prints demonstrating the benefit of using scanner emulation based photomask CD metrology.
Verification and compensation of respiratory motion using an ultrasound imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chuang, Ho-Chiao, E-mail: hchuang@mail.ntut.edu.tw; Hsu, Hsiao-Yu; Chiu, Wei-Hung
Purpose: The purpose of this study was to determine whether it is feasible to use ultrasound imaging as an aid for moving the treatment couch during diagnosis and treatment procedures associated with radiation therapy, in order to offset organ displacement caused by respiratory motion. A noninvasive ultrasound system was used to replace the C-arm device during diagnosis and treatment, with the aims of reducing the x-ray radiation dose to the human body while simultaneously being able to monitor organ displacements. Methods: This study used a proposed respiratory compensating system combined with an ultrasound imaging system to monitor the compensation of respiratory motion. The accuracy of the compensation was verified by fluoroscopy, which means that fluoroscopy could subsequently be replaced so as to reduce unnecessary radiation dose to patients. A respiratory simulation system was used to simulate the respiratory motion of the human abdomen, and a strain gauge (respiratory signal acquisition device) was used to capture the simulated respiratory signals. The target displacements could be detected by an ultrasound probe and used as a reference for adjusting the gain value of the respiratory signal used by the respiratory compensating system. This ensured that the amplitude of the respiratory compensation signal was a faithful representation of the target displacement. Results: The results show that performing respiratory compensation with the assistance of the ultrasound images reduced the compensation error of the respiratory compensating system to 0.81–2.92 mm, both for sine-wave input signals with amplitudes of 5, 10, and 15 mm, and for human respiratory signals; this represented compensation of the respiratory motion by up to 92.48%. In addition, the respiratory signals of 10 patients were captured in clinical trials, while their diaphragm displacements were observed simultaneously using ultrasound. Using the respiratory compensating system to offset the diaphragm displacement resulted in compensation rates of 60%–84.4%. Conclusions: This study has shown that a respiratory compensating system combined with noninvasive ultrasound can provide real-time compensation of the respiratory motion of patients.
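A compensation rate of this kind is naturally read as the fractional reduction of target displacement achieved by the moving couch; that definition is our assumption, as the abstract does not state it explicitly. A minimal sketch (the ~10.8 mm motion amplitude below is back-solved from the reported best case, not a value from the paper):

```python
# Respiratory compensation rate, ASSUMING it is defined as the fractional
# reduction of target displacement: rate = 1 - residual / uncompensated.
def compensation_rate(uncompensated_mm: float, residual_error_mm: float) -> float:
    return 1.0 - residual_error_mm / uncompensated_mm

# Best case from the abstract: 0.81 mm residual error. A motion amplitude of
# ~10.77 mm (hypothetical, back-solved) reproduces the reported 92.48%.
rate = compensation_rate(10.77, 0.81)
print(f"{rate:.2%}")  # 92.48%
```

Under the same definition, larger residual errors on the same motion amplitude give the lower clinical rates (60%–84.4%) quoted for the patient trials.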
NASA Astrophysics Data System (ADS)
Hassanzadeh, Iraj; Janabi-Sharifi, Farrokh
2005-12-01
In this paper, a new open architecture for visual servo control tasks is illustrated. A Puma 560 robotic manipulator is used to prove the concept. This design enables hybrid force/visual servo control in an unstructured environment in different modes. It can also be controlled over the Internet in teleoperation mode using a haptic device. Our proposed structure includes two major parts, hardware and software. In terms of hardware, it consists of a master (host) computer, a slave (target) computer, a Puma 560 manipulator, a CCD camera, a force sensor and a haptic device. Five DAQ cards interface the Puma 560 with the slave computer. An open-architecture package was developed using Matlab®, Simulink® and the xPC Target toolbox. This package has the Hardware-In-the-Loop (HIL) property, i.e., it enables one to readily implement different configurations of force, visual or hybrid control in real time. The implementation included the following stages. First of all, retrofitting of the Puma was carried out. Then a modular joint controller for the Puma 560 was realized using Simulink®. A force sensor driver and the force control implementation were written using S-function blocks of Simulink®. Visual images were captured through the Image Acquisition Toolbox of Matlab® and processed using the Image Processing Toolbox. A haptic device interface was also written in Simulink®. Thus, this setup can be readily reconfigured to accommodate any other robotic manipulator and/or other sensors without the trouble of the external issues relevant to control, interfacing and software, while providing flexibility in component modification.
Comparison and evaluation of datasets for off-angle iris recognition
NASA Astrophysics Data System (ADS)
Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut
2016-05-01
In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle iris images are not captured at the same time. Comparison of frontal and off-angle iris images shows not only differences in the gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of the gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. Therefore, in this work we developed an iris image acquisition platform using two cameras, where one camera captures the frontal iris image and the other captures the iris image from off-angle. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is lower than in the one-camera setup, by between 0.001 and 0.05. These results show that, in order to obtain accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging issues from each other.
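Iris matchers conventionally score a pair of binary iris codes by their fractional Hamming distance (the fraction of disagreeing bits, ideally restricted to bits both occlusion masks mark as valid). A generic sketch of that score, not the authors' implementation, with toy 8-bit codes for illustration:

```python
# Fractional Hamming distance between two binary iris codes, optionally
# restricted to the bit positions a mask marks as valid (1 = usable bit).
def hamming_distance(code_a, code_b, mask=None):
    if mask is None:
        mask = [1] * len(code_a)
    valid = [i for i, m in enumerate(mask) if m]
    disagreements = sum(code_a[i] != code_b[i] for i in valid)
    return disagreements / len(valid)

# Toy example: two 8-bit codes differing in 2 of 8 positions.
frontal = [1, 0, 1, 1, 0, 0, 1, 0]
off_angle = [1, 0, 0, 1, 0, 1, 1, 0]
print(hamming_distance(frontal, off_angle))  # 0.25
```

Lower scores mean closer matches, so the 0.001–0.05 reduction observed with the two-camera setup indicates that simultaneous capture removes part of the mismatch attributed to dilation and accommodation.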
Microfluidic-Based Enrichment and Retrieval of Circulating Tumor Cells for RT-PCR Analysis.
Gogoi, Priya; Sepehri, Saedeh; Chow, Will; Handique, Kalyan; Wang, Yixin
2017-01-01
Molecular analysis of circulating tumor cells (CTCs) is hindered by the low sensitivity and high background leukocyte levels of currently available CTC enrichment technologies. We have developed a novel device to enrich and retrieve CTCs from blood samples by using a microfluidic chip. The Celsee PREP100 device captures CTCs with high sensitivity and allows the captured CTCs to be retrieved for molecular analysis. It uses a microfluidic chip with approximately 56,320 capture chambers. Based on differences in cell size and deformability, each chamber ensures that smaller blood cells escape while larger CTCs of varying sizes are trapped and isolated in the chambers. In this report, we used the Celsee PREP100 to capture cancer cells spiked into normal donor blood samples. We were able to show that the device can capture as few as 10 cells with high reproducibility. The captured CTCs were retrieved from the microfluidic chip. The cell recovery rate of this back-flow procedure is 100%, and the level of remaining background leukocytes is very low (about 300-400 cells). RNA from the retrieved cells is extracted and converted to cDNA, and gene expression analysis of selected cancer markers can be carried out using RT-PCR assays. The sensitive and easy-to-use Celsee PREP100 system represents a promising technology for the capture and molecular characterization of CTCs.
Adams, André A.; Okagbare, Paul I.; Feng, Juan; Hupert, Matuesz L.; Patterson, Don; Göttert, Jost; McCarley, Robin L.; Nikitopoulos, Dimitris; Murphy, Michael C.; Soper, Steven A.
2008-01-01
A novel microfluidic device that can selectively and specifically isolate exceedingly small numbers of circulating tumor cells (CTCs) through a monoclonal antibody (mAB) mediated process by sampling large input volumes (≥1 mL) of whole blood directly in short time periods (<37 min) was demonstrated. The CTCs were concentrated into small volumes (190 nL), and the number of cells captured was read without labeling using an integrated conductivity sensor following release from the capture surface. The microfluidic device contained a series (51) of high-aspect ratio microchannels (35 μm width × 150 μm depth) that were replicated in poly(methyl methacrylate), PMMA, from a metal mold master. The microchannel walls were covalently decorated with mABs directed against breast cancer cells overexpressing the epithelial cell adhesion molecule (EpCAM). This microfluidic device could accept inputs of whole blood, and its CTC capture efficiency was made highly quantitative (>97%) by designing capture channels with the appropriate widths and heights. The isolated CTCs were readily released from the mAB capturing surface using trypsin. The released CTCs were then enumerated on-device using a novel, label-free solution conductivity route capable of detecting single tumor cells traveling through the detection electrodes. The conductivity readout provided near 100% detection efficiency and exquisite specificity for CTCs due to scaling factors and the nonoptimal electrical properties of potential interferences (erythrocytes or leukocytes). The simplicity in manufacturing the device and its ease of operation make it attractive for clinical applications requiring one-time use operation. PMID:18557614
High frame rate imaging systems developed in Northwest Institute of Nuclear Technology
NASA Astrophysics Data System (ADS)
Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli
2007-01-01
This paper presents high frame rate imaging systems developed at the Northwest Institute of Nuclear Technology in recent years. Three types of imaging systems are included. The first type utilizes the EG&G RETICON photodiode array (PDA) RA100A as the image sensor, which can work at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a flash-light event capture mode; a specific timing sequence is designed to satisfy this requirement. The camera image data can be transmitted to a remote area over coaxial or optical fiber cable and then stored. The second type utilizes the PHOTOBIT complementary metal-oxide-semiconductor (CMOS) PB-MV13 as the image sensor, which has a high resolution of 1280 (H) × 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps in full frame and 4000 fps in partial frame; the prototype scheme of the system is presented. The third type adopts charge-coupled devices (CCDs) as the imagers; MINTRON MTV-1881EX, DALSA CA-D1 and CA-D6 camera heads are used in the systems' development. A comparison of the features of the RA100A-, PB-MV13- and CA-D6-based systems is given at the end.
Burrell, Thomas; Fozard, Susan; Holroyd, Geoff H; French, Andrew P; Pound, Michael P; Bigley, Christopher J; James Taylor, C; Forde, Brian G
2017-01-01
Chemical genetics provides a powerful alternative to conventional genetics for understanding gene function. However, its application to plants has been limited by the lack of a technology that allows detailed phenotyping of whole-seedling development in the context of a high-throughput chemical screen. We have therefore sought to develop an automated micro-phenotyping platform that would allow both root and shoot development to be monitored under conditions where the phenotypic effects of large numbers of small molecules can be assessed. The 'Microphenotron' platform uses 96-well microtitre plates to deliver chemical treatments to seedlings of Arabidopsis thaliana L. and is based around four components: (a) the 'Phytostrip', a novel seedling growth device that enables chemical treatments to be combined with the automated capture of images of developing roots and shoots; (b) an illuminated robotic platform that uses a commercially available robotic manipulator to capture images of developing shoots and roots; (c) software to control the sequence of robotic movements and integrate these with the image capture process; (d) purpose-made image analysis software for automated extraction of quantitative phenotypic data. Imaging of each plate (representing 80 separate assays) takes 4 min and can easily be performed daily for time-course studies. As currently configured, the Microphenotron has a capacity of 54 microtitre plates in a growth-room footprint of 2.1 m², giving a potential throughput of up to 4320 chemical treatments in a typical 10-day experiment. The Microphenotron has been validated by using it to screen a collection of 800 natural compounds for qualitative effects on root development and to perform a quantitative analysis of the effects of a range of concentrations of nitrate and ammonium on seedling development.
The Microphenotron is an automated screening platform that, for the first time, is able to combine large numbers of individual chemical treatments with a detailed analysis of whole-seedling development, and particularly of root system development. The Microphenotron should provide a powerful new tool for chemical genetics and for wider chemical biology applications, including the development of natural and synthetic chemical products for improved agricultural sustainability.
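The capacity figures quoted above follow from simple multiplication; a minimal sketch (the function name is ours, the numbers are from the abstract):

```python
def platform_capacity(plates, assays_per_plate, minutes_per_plate):
    """Total assays per run and total imaging time for one daily sweep."""
    return plates * assays_per_plate, plates * minutes_per_plate

# 54 plates x 80 assays = 4320 treatments; 54 x 4 min = 216 min per daily sweep
assays, minutes = platform_capacity(54, 80, 4)
```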
Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.
Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi
2014-10-20
We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
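The back-projection step can be illustrated with a toy model. The sketch below uses one spatial and one angular dimension and integer focal depths: refocus() is the usual shift-and-add forward model, and back_project() is its adjoint, smearing each focal-plane image back along its ray directions. This is our own minimal reconstruction of the idea, not the authors' implementation.

```python
def refocus(light_field, z):
    """Forward model: shift-and-add refocusing of a discrete
    1D-spatial / 1D-angular light ray field at (integer) focal depth z."""
    n_x = len(light_field)       # spatial samples
    n_u = len(light_field[0])    # angular samples
    image = [0.0] * n_x
    for s in range(n_x):
        for u in range(n_u):
            x = s + (u - n_u // 2) * z   # ray reaching pixel s from angle u
            if 0 <= x < n_x:
                image[s] += light_field[x][u]
    return image

def back_project(focal_stack, depths, n_x, n_u):
    """Adjoint of refocus(): smear each focal-plane image back along
    its ray directions to estimate the light ray field."""
    lf = [[0.0] * n_u for _ in range(n_x)]
    for image, z in zip(focal_stack, depths):
        for x in range(n_x):
            for u in range(n_u):
                s = x - (u - n_u // 2) * z
                if 0 <= s < n_x:
                    lf[x][u] += image[s]
    return lf
```

A point emitter stays aligned at its own (position, angle) sample across all focal depths, so the back-projected estimate peaks there while out-of-focus energy is spread out.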
Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries
NASA Astrophysics Data System (ADS)
Koehl, M.; Delacourt, T.; Boutry, C.
2016-06-01
This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, and the diameter of traffic circles, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. An experimental device containing GoPro Hero4 cameras has been set up and used for tests in static and mobile acquisitions. In that way, various configurations have been tested using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses have been estimated. Reference measurements were also made using a 3D terrestrial laser scanner (Faro Focus 3D) to allow the accuracy assessment.
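The fisheye lenses mentioned above are commonly described by an equidistant-type projection, in which the image radius grows linearly with the angle off the optical axis (r = f·θ), usually refined with polynomial distortion terms during calibration. A minimal sketch of the basic model, with illustrative parameter values of our own choosing:

```python
import math

def equidistant_project(X, Y, Z, f, cx, cy):
    """Project a 3D point (camera frame, Z along the optical axis)
    with the equidistant fisheye model r = f * theta."""
    theta = math.atan2(math.hypot(X, Y), Z)  # angle off the optical axis
    phi = math.atan2(Y, X)                   # azimuth around the axis
    r = f * theta                            # equidistant mapping
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# A point on the optical axis lands exactly on the principal point (cx, cy)
u, v = equidistant_project(0.0, 0.0, 1.0, 400.0, 960.0, 540.0)
```

Calibration then amounts to estimating f, (cx, cy) and the distortion polynomial from images of a known target, which is what makes metrically accurate dense point clouds achievable with such cameras.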
Gandolla, Marta; Ferrante, Simona; Casellato, Claudia; Ferrigno, Giancarlo; Molteni, Franco; Martegani, Alberto; Frattini, Tiziano; Pedrocchi, Alessandra
2011-10-01
Functional Electrical Stimulation (FES) is a well-known clinical rehabilitation procedure; however, the neural mechanisms that underlie this treatment at the Central Nervous System (CNS) level are still not completely understood. Functional magnetic resonance imaging (fMRI) is a suitable tool to investigate the effects of rehabilitative treatments on brain plasticity. Moreover, monitoring the movement actually executed is needed to correctly interpret activation maps, especially in neurological patients, where the required motor tasks may be only partially accomplished. The proposed experimental set-up includes a 1.5 T fMRI scanner, a motion capture system to acquire kinematic data, and an electro-stimulation device. The introduction of metallic devices and of stimulation current into the MRI room could affect fMRI acquisitions to the point of preventing a reliable analysis of activation maps. The question of interest is whether the Blood Oxygenation Level Dependent (BOLD) signal, a marker of neural activity, can be detected within a given experimental condition and set-up. In this paper we assess temporal signal-to-noise ratio (SNR) as an image quality index. The BOLD signal change is about 1-2% as revealed by a 1.5 T scanner. This work demonstrates that, with this innovative set-up, a 1% BOLD signal change can be detected in the main cortical sensorimotor regions in at least 93% of the sub-volumes, and almost 100% of the sub-volumes are suitable for 2% signal change detection. The integrated experimental set-up will therefore allow fMRI maps of FES-induced movements to be acquired simultaneously with kinematic data, so as to investigate the contribution of FES-based rehabilitation treatments at the CNS level. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
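The temporal SNR used as the image quality index above is, per voxel, the mean of the time series divided by the standard deviation of its fluctuations over time; as a rough rule of thumb, a fractional signal change Δ becomes detectable when tSNR is on the order of 1/Δ (so around 100 for a 1% BOLD change). A minimal sketch of the computation, our own for illustration:

```python
import math

def temporal_snr(timeseries):
    """Temporal SNR of a single voxel's time series:
    mean over time divided by the (sample) standard deviation over time."""
    n = len(timeseries)
    mean = sum(timeseries) / n
    var = sum((v - mean) ** 2 for v in timeseries) / (n - 1)
    return mean / math.sqrt(var)
```

Applied to each sub-volume of the acquired images, such a map indicates where a 1% or 2% BOLD change rises above the temporal noise floor.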