3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry and surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to that of the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared-cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models of relatively large curvature, and the pressure results compare well with schlieren tests, analytical calculations, and numerical simulations.
Single exposure three-dimensional imaging of dusty plasma clusters.
Hartmann, Peter; Donkó, István; Donkó, Zoltán
2013-02-01
We have worked out the details of a single camera, single exposure method to perform three-dimensional imaging of a finite particle cluster. The procedure is based on the plenoptic imaging principle and utilizes a commercial Lytro light field still camera. We demonstrate the capabilities of our technique on a single layer particle cluster in a dusty plasma, where the camera is aligned and inclined at a small angle to the particle layer. The reconstruction of the third coordinate (depth) is found to be accurate and even shadowing particles can be identified.
Applications of digital image acquisition in anthropometry
NASA Technical Reports Server (NTRS)
Woolford, B.; Lewis, J. L.
1981-01-01
A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
A panning DLT procedure for three-dimensional videography.
Yu, B; Koh, T J; Hay, J G
1993-06-01
The direct linear transformation (DLT) method [Abdel-Aziz and Karara, APS Symposium on Photogrammetry. American Society of Photogrammetry, Falls Church, VA (1971)] is widely used in biomechanics to obtain three-dimensional space coordinates from film and video records. This method has some major shortcomings when used to analyze events which take place over large areas. To overcome these shortcomings, a three-dimensional data collection method based on the DLT method, and making use of panning cameras, was developed. Several small single control volumes were combined to construct a large total control volume. For each single control volume, a regression equation (calibration equation) is developed to express each of the 11 DLT parameters as a function of camera orientation, so that the DLT parameters can then be estimated from arbitrary camera orientations. Once the DLT parameters are known for at least two cameras, and the associated two-dimensional film or video coordinates of the event are obtained, the desired three-dimensional space coordinates can be computed. In a laboratory test, five single control volumes (in a total control volume of 24.40 × 2.44 × 2.44 m³) were used to test the effect of the position of the single control volume on the accuracy of the computed three-dimensional space coordinates. Linear and quadratic calibration equations were used to test the effect of the order of the equation on the accuracy of the computed three-dimensional space coordinates. For four of the five single control volumes tested, the mean resultant errors associated with the use of the linear calibration equation were significantly larger than those associated with the use of the quadratic calibration equation. The position of the single control volume had no significant effect on the mean resultant errors in computed three-dimensional coordinates when the quadratic calibration equation was used.
Under the same data collection conditions, the mean resultant errors in the computed three-dimensional coordinates associated with the panning and stationary DLT methods were 17 and 22 mm, respectively. The major advantages of the panning DLT method lie in the large image sizes obtained and in the ease with which the data can be collected. The method also has potential for use in a wide variety of contexts. The major shortcoming of the method is the large amount of digitizing necessary to calibrate the total control volume. Adaptations of the method to reduce the amount of digitizing required are being explored.
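The back-projection step of the DLT method described above can be sketched as follows. This is the standard 11-parameter DLT reconstruction (each camera's projection rearranged into two equations linear in the unknown point), not the authors' panning-specific code; the parameter ordering L1..L11 is the conventional one.

```python
import numpy as np

def dlt_reconstruct(dlt_params, image_coords):
    """Standard 11-parameter DLT back-projection: given each camera's
    parameters L1..L11 and the digitized 2D coordinates (u, v) of the
    same point, solve the stacked linear system for (X, Y, Z).

    dlt_params   : list of length-11 sequences, one per camera
    image_coords : list of (u, v) pairs, one per camera
    """
    A, b = [], []
    for L, (u, v) in zip(dlt_params, image_coords):
        # Rearranging u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1)
        # (and likewise for v) gives two equations linear in (X, Y, Z).
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.extend([u - L[3], v - L[7]])
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz
```

With two or more cameras the system is overdetermined, and the least-squares solve averages out digitizing noise across views.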
Single camera volumetric velocimetry in aortic sinus with a percutaneous valve
NASA Astrophysics Data System (ADS)
Clifford, Chris; Thurow, Brian; Midha, Prem; Okafor, Ikechukwu; Raghav, Vrishank; Yoganathan, Ajit
2016-11-01
Cardiac flows have long been understood to be highly three dimensional, yet traditional in vitro techniques used to capture these complexities are costly and cumbersome. Thus, two dimensional techniques are primarily used for heart valve flow diagnostics. The recent introduction of plenoptic camera technology allows for traditional cameras to capture both spatial and angular information from a light field through the addition of a microlens array in front of the image sensor. When combined with traditional particle image velocimetry (PIV) techniques, volumetric velocity data may be acquired with a single camera using off-the-shelf optics. Particle volume pairs are reconstructed from raw plenoptic images using a filtered refocusing scheme, followed by three-dimensional cross-correlation. This technique was applied to the sinus region (known for having highly three-dimensional flow structures) of an in vitro aortic model with a percutaneous valve. Phase-locked plenoptic PIV data was acquired at two cardiac outputs (2 and 5 L/min) and 7 phases of the cardiac cycle. The volumetric PIV data was compared to standard 2D-2C PIV. Flow features such as recirculation and stagnation were observed in the sinus region in both cases.
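The three-dimensional cross-correlation step used above can be illustrated with a minimal FFT-based sketch. This correlates two whole reconstructed volumes and returns the integer peak shift; in practice plenoptic PIV divides the volume into interrogation windows and adds sub-voxel peak fitting, neither of which is shown here.

```python
import numpy as np

def volume_displacement(vol_a, vol_b):
    """Estimate the bulk displacement between two reconstructed particle
    volumes by locating the peak of their 3D cross-correlation,
    computed via FFTs (circular correlation)."""
    fa = np.fft.fftn(vol_a)
    fb = np.fft.fftn(vol_b)
    corr = np.real(np.fft.ifftn(fa.conj() * fb))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))
```

The returned tuple is the shift of the second volume relative to the first along each axis.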
Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting
NASA Astrophysics Data System (ADS)
Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang
In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser’s Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, egg, cup, and strand of string, glass of water, bone, and hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in three plane mirrors depicted within the painting.
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments, and without major modifications to current cameras, remains uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require such an imaging system to have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we create two different focus depths for the horizontal and vertical features of the pattern, thereby encoding depth. This differential focus is then exploited in post-processing designed around the projected pattern and optical system, allowing us to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present details of the construction and calibration of this system and images produced by it, and discuss the link between projected pattern design and image processing algorithms.
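The depth-encoding idea can be sketched as follows. As a simplified stand-in for the paper's wavelet-coefficient ratios, this uses gradient-energy ratios between horizontal and vertical features, and assumes a monotonic ratio-versus-depth calibration curve measured beforehand with a target at known distances; the function names and calibration values are illustrative, not from the paper.

```python
import numpy as np

def sharpness_ratio(img):
    """Ratio of vertical-edge to horizontal-edge sharpness in a patch.
    With an astigmatic projector, horizontal and vertical pattern lines
    focus at different depths, so this ratio encodes local depth."""
    gx = np.diff(img, axis=1)  # responds to vertical features
    gy = np.diff(img, axis=0)  # responds to horizontal features
    return np.sum(gx**2) / np.sum(gy**2)

def depth_from_ratio(ratio, calib_ratios, calib_depths):
    """Look up depth on an assumed-monotonic calibration curve."""
    return np.interp(ratio, calib_ratios, calib_depths)
```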
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a second camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated using both synthetic and real experiments conducted on a vortex ring using both one and two plenoptic cameras.
Design of an open-ended plenoptic camera for three-dimensional imaging of dusty plasmas
NASA Astrophysics Data System (ADS)
Sanpei, Akio; Tokunaga, Kazuya; Hayashi, Yasuaki
2017-08-01
Herein, the design of a plenoptic imaging system for three-dimensional reconstruction of dusty plasmas using an integral photography technique is reported. This open-ended system is constructed with a multi-convex lens array and a typical reflex CMOS camera. We validated the design of the reconstruction system using known target particles, and applied the system to observations of fine particles floating in a horizontal, parallel-plate radio-frequency plasma. The system works well in the range of our dusty plasma experiment: we can identify the three-dimensional positions of dust particles from a single-exposure image obtained from one viewing port.
Plasma crystal dynamics measured with a three-dimensional plenoptic camera
NASA Astrophysics Data System (ADS)
Jambor, M.; Nosenko, V.; Zhdanov, S. K.; Thomas, H. M.
2016-03-01
Three-dimensional (3D) imaging of a single-layer plasma crystal was performed using a commercial plenoptic camera. To enhance the out-of-plane oscillations of particles in the crystal, the mode-coupling instability (MCI) was triggered in it by lowering the discharge power below a threshold. 3D coordinates of all particles in the crystal were extracted from the recorded videos. All three fundamental wave modes of the plasma crystal were calculated from these data. In the out-of-plane spectrum, only the MCI-induced hot spots (corresponding to the unstable hybrid mode) were resolved. The results are in agreement with theory and show that plenoptic cameras can be used to measure the 3D dynamics of plasma crystals.
Prism-based single-camera system for stereo display
NASA Astrophysics Data System (ADS)
Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa
2016-06-01
This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and, according to the principles of binocular vision, the relationship between binocular viewing and a dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision, and obtain the relative positions of prism, camera, and object that give the best stereo display. Finally, using NVIDIA active-shutter stereo glasses, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various viewing behaviors of the eyes. The stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
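Once the prism single-camera system has been reduced to an equivalent dual-camera rig, depth recovery follows classical binocular triangulation. A minimal sketch under that assumption (the baseline and pixel focal length here are illustrative calibration outputs, not values from the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Classical rectified-stereo triangulation: for an equivalent
    two-camera rig with baseline B (meters) and focal length f (pixels),
    depth follows from Z = f * B / d, where d is the disparity in pixels."""
    return f_px * baseline_m / disparity_px
```

Larger equivalent baselines (set by the prism geometry) yield larger disparities and hence better depth resolution, at the cost of a smaller shared field of view.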
Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.
2015-01-01
Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834
Single-camera three-dimensional tracking of natural particulate and zooplankton
NASA Astrophysics Data System (ADS)
Troutman, Valerie A.; Dabiri, John O.
2018-07-01
We develop and characterize an image processing algorithm to adapt single-camera defocusing digital particle image velocimetry (DDPIV) for three-dimensional (3D) particle tracking velocimetry (PTV) of natural particulates, such as those present in the ocean. The conventional DDPIV technique is extended to facilitate tracking of non-uniform, non-spherical particles within a volume depth an order of magnitude larger than current single-camera applications (i.e. 10 cm × 10 cm × 24 cm depth) by a dynamic template matching method. This 2D cross-correlation method does not rely on precise determination of the centroid of the tracked objects. To accommodate the broad range of particle number densities found in natural marine environments, the performance of the measurement technique at higher particle densities has been improved by utilizing the time-history of tracked objects to inform 3D reconstruction. The developed processing algorithms were analyzed using synthetically generated images of flow induced by Hill’s spherical vortex, and the capabilities of the measurement technique were demonstrated empirically through volumetric reconstructions of the 3D trajectories of particles and highly non-spherical, 5 mm zooplankton.
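The 2D template-matching step described above can be sketched with plain normalized cross-correlation. This is a minimal stand-in for the paper's dynamic template matching: it locates one rigid template in one frame, without the template updating or 3D time-history logic of the actual algorithm.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def match_template(image, template):
    """Locate `template` in `image` by normalized cross-correlation,
    returning the (row, col) of the best-matching top-left corner.
    NCC tolerates irregular shapes because no centroid is required."""
    windows = sliding_window_view(image, template.shape)
    t = template - template.mean()
    w = windows - windows.mean(axis=(-2, -1), keepdims=True)
    num = np.einsum('ijkl,kl->ij', w, t)
    den = np.sqrt(np.einsum('ijkl,ijkl->ij', w, w) * np.sum(t * t))
    ncc = num / np.where(den == 0, 1, den)
    return np.unravel_index(np.argmax(ncc), ncc.shape)
```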
NASA Astrophysics Data System (ADS)
Hoffmann, A.; Zimmermann, F.; Scharr, H.; Krömker, S.; Schulz, C.
2005-01-01
A laser-based technique for measuring instantaneous three-dimensional species concentration distributions in turbulent flows is presented. The laser beam from a single laser is formed into two crossed light sheets that illuminate the area of interest. The laser-induced fluorescence (LIF) signal emitted from excited species within both planes is detected with a single camera via a mirror arrangement. Image processing enables the reconstruction of the three-dimensional data set in close proximity to the cutting line of the two light sheets. Three-dimensional intensity gradients are computed and compared to the two-dimensional projections obtained from the two directly observed planes. Volume visualization by digital image processing gives unique insight into the three-dimensional structures within the turbulent processes. We apply this technique to measurements of toluene-LIF in a turbulent, non-reactive mixing process of toluene and air and to hydroxyl (OH) LIF in a turbulent methane-air flame upon excitation at 248 nm with a tunable KrF excimer laser.
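The three-dimensional intensity gradients computed near the cutting line of the two light sheets can be sketched with a finite-difference gradient over the reconstructed volume. `np.gradient` is used here as a generic stand-in; the paper's actual stencil and filtering are not specified in the abstract.

```python
import numpy as np

def gradient_magnitude(vol, spacing=(1.0, 1.0, 1.0)):
    """Magnitude of the 3D intensity gradient of a reconstructed LIF
    volume, via central differences (one-sided at the boundaries)."""
    g0, g1, g2 = np.gradient(vol, *spacing)
    return np.sqrt(g0**2 + g1**2 + g2**2)
```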
Parallel phase-sensitive three-dimensional imaging camera
Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.
2007-09-25
An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
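One common realization of such parallel phase-sensitive demodulation uses four reference phases (0°, 90°, 180°, 270°). The sketch below assumes that four-step scheme; the patent's actual modulator count and phase steps may differ.

```python
import numpy as np

def range_phase(v0, v90, v180, v270):
    """Four-step phase-shift demodulation (sketch). Each pixel's output is
    mixed with four reference copies delayed by 0/90/180/270 degrees; the
    offset-rejecting differences recover the round-trip phase (hence range)
    and the modulation amplitude (intensity)."""
    phase = np.arctan2(v90 - v270, v0 - v180)
    amplitude = 0.5 * np.hypot(v90 - v270, v0 - v180)
    return phase, amplitude
```

Because all four products are formed in parallel per pixel, a single light pulse suffices for a full range image, matching the single-pulse capability described above.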
Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr
2017-01-01
High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability exists at overall diameters larger than 4 mm. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for such a class of endoscopes. The design and construction of a single-lens, CMBF-aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it, are summarized.
3-D Velocimetry of Strombolian Explosions
NASA Astrophysics Data System (ADS)
Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.
2014-12-01
Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. Our objective was to evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera
NASA Technical Reports Server (NTRS)
Stanojev, B. J.; Houts, M.
2004-01-01
Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the system's nuclear-equivalent behavior. This paper discusses one key technique being evaluated for measuring such changes: using a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in real time. Initial system checkout results are presented, along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during test.
Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera
NASA Astrophysics Data System (ADS)
Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.
2004-01-01
We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.
NASA Astrophysics Data System (ADS)
Chi, Yuxi; Yu, Liping; Pan, Bing
2018-05-01
A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital single-lens reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue-light illumination and coupled bandpass-filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single-SLR-camera stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and is strongly robust against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
NASA Astrophysics Data System (ADS)
Johnson, Kyle; Thurow, Brian; Kim, Taehoon; Blois, Gianluca; Christensen, Kenneth
2016-11-01
Three-dimensional, three-component (3D-3C) measurements were made using a plenoptic camera of the flow around a roughness element immersed in a turbulent boundary layer. A refractive-index-matched approach allowed whole-field optical access from a single camera to a measurement volume that includes transparent solid geometries. In particular, this experiment measures the flow over a single hemispherical roughness element made of acrylic and immersed in a working fluid of sodium iodide solution. Our results demonstrate that plenoptic particle image velocimetry (PIV) is a viable technique for obtaining statistically significant volumetric velocity measurements even in a complex separated flow. The boundary-layer-thickness-to-roughness-height ratio of the flow was 4.97 and the Reynolds number (based on roughness height) was 4.57×10³. Our measurements reveal key flow features such as the spiraling legs of the shear layer, a recirculation region, and shed arch vortices. Proper orthogonal decomposition (POD) analysis was applied to the instantaneous velocity and vorticity data to extract these features. Supported by the National Science Foundation Grant No. 1235726.
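The snapshot POD used above to extract dominant flow features reduces, in its simplest form, to an SVD of the mean-subtracted snapshot matrix. A minimal sketch (variable names illustrative; normalization conventions vary across the POD literature):

```python
import numpy as np

def pod_modes(snapshots):
    """Snapshot POD via SVD. Rows of `snapshots` are flattened velocity
    (or vorticity) fields at successive times. Returns the spatial modes
    (rows of vt, ordered by energy) and the modal energies, here taken
    as squared singular values divided by the snapshot count."""
    fluct = snapshots - snapshots.mean(axis=0)
    u, s, vt = np.linalg.svd(fluct, full_matrices=False)
    return vt, s**2 / len(snapshots)
```

The leading modes capture the most energetic coherent structures (e.g. the shear-layer legs and arch vortices named above), while the trailing modes mostly carry noise.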
Split ring resonator based THz-driven electron streak camera featuring femtosecond resolution
Fabiańska, Justyna; Kassier, Günther; Feurer, Thomas
2014-01-01
Through combined three-dimensional electromagnetic and particle tracking simulations we demonstrate a THz driven electron streak camera featuring a temporal resolution on the order of a femtosecond. The ultrafast streaking field is generated in a resonant THz sub-wavelength antenna which is illuminated by an intense single-cycle THz pulse. Since electron bunches and THz pulses are generated with parts of the same laser system, synchronization between the two is inherently guaranteed. PMID:25010060
Plenoptic particle image velocimetry with multiple plenoptic cameras
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2018-07-01
Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single-camera configuration, which allows the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution through the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, including the volumetric calibration and four reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the ‘cigar’-like elongation associated with the single-camera system, while adding a third camera provides minimal further improvement. Reconstruction quality is further quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single- and two-camera configurations: the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and from 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally to a ring vortex and comparisons are drawn among the four reconstruction algorithms: MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability; filtered refocusing was able to produce the desired structure, albeit with more noise and variability; and integral refocusing struggled to produce a coherent vortex ring.
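The MART algorithm named above admits a compact sketch for a generic pixel-to-voxel weight matrix. This is the textbook multiplicative update, not the authors' camera-specific implementation; in real tomographic reconstruction the weights encode the calibrated ray geometry of each plenoptic sub-aperture.

```python
import numpy as np

def mart(W, I, n_voxels, n_iter=5, mu=1.0):
    """Multiplicative Algebraic Reconstruction Technique (MART), sketched.

    W : (n_pixels, n_voxels) weights coupling each voxel to each pixel
    I : (n_pixels,) recorded pixel intensities
    Voxel intensities start uniform and are corrected multiplicatively by
    the ratio of each measured pixel to its current forward projection,
    exponentiated by the relaxation factor mu times the coupling weight."""
    E = np.ones(n_voxels)
    for _ in range(n_iter):
        for i in range(len(I)):
            proj = W[i] @ E          # forward-project current estimate
            if proj > 0.0:
                E *= (I[i] / proj) ** (mu * W[i])
    return E
```

Because the update is multiplicative, a zero-intensity pixel zeroes every voxel on its ray, which is what gives MART its sharp suppression of ghost particles.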
Rectification of curved document images based on single view three-dimensional reconstruction.
Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang
2016-10-01
Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role in document digitization systems that use a camera for image capture. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption about the corresponding 3D shape, our framework is more flexible and practical since it requires only a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text lines, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of the proposed method, both qualitative and quantitative evaluations and comparisons with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements in both visual distortion removal and OCR accuracy.
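The baseline-fitting step described in the record above can be sketched in a few lines. This is a hedged illustration under simple assumptions (ordinary least squares through connected-component centroids, rejection of centroids far from the current fit); the paper's actual iterative refinement scheme and 3D regularization are more elaborate.

```python
# Sketch: fit a text-line baseline through character centroids, iteratively
# dropping outliers. Assumes at least two distinct x-coordinates.

def fit_baseline(points):
    """Fit y = a*x + b to (x, y) centroids by ordinary least squares."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def refine(points, n_iter=3, k=1.5):
    """Iteratively re-fit, dropping centroids far from the current line."""
    pts = list(points)
    for _ in range(n_iter):
        a, b = fit_baseline(pts)
        residuals = [abs(y - (a * x + b)) for x, y in pts]
        thresh = k * (sum(residuals) / len(residuals) + 1e-9)
        kept = [p for p, r in zip(pts, residuals) if r <= thresh]
        if len(kept) >= 2:
            pts = kept
    return fit_baseline(pts)
```

For example, centroids on the line y = 2x + 1 with one stray mark are refined back to the exact baseline once the outlier is rejected.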
NASA Astrophysics Data System (ADS)
Xu, Qian; Krivets, Vitaliy V.; Sewell, Everest G.; Jacobs, Jeffrey W.
2016-11-01
A vertical shock tube is used to perform experiments on the single-mode three-dimensional Richtmyer-Meshkov instability (RMI). The light gas (air) and the heavy gas (SF6) enter from the top and the bottom of the shock tube's driven section to form the interface. The initial perturbation is then generated by oscillating the gases vertically. Both gases are seeded with particles generated by vaporizing propylene glycol. An incident shock wave (M = 1.2) impacts the interface to create an impulsive acceleration. The seeded particles are illuminated by a dual-cavity 75 W Nd:YLF laser. Three high-speed CMOS cameras record time sequences of image pairs at a rate of 2 kHz. The initial perturbation used is a single, square-mode perturbation with either a single spike or a single bubble positioned at the center of the shock tube. The full time-dependent velocity field is obtained, allowing the determination of the circulation versus time. In addition, the evolution of the time-dependent amplitude is also determined. The results are compared with PIV measurements from previous two-dimensional single-mode experiments along with PLIF measurements from previous three-dimensional single-mode experiments.
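The circulation extracted from the measured velocity field in the record above is a discrete closed line integral. A minimal sketch (the grid layout and rectangle-rule quadrature are illustrative assumptions, not the authors' code):

```python
# Circulation Gamma = closed line integral of v . dl around a rectangular
# contour on a gridded 2D velocity field.

def circulation(u, v, i0, j0, i1, j1, dx=1.0, dy=1.0):
    """Circulation around the rectangle [i0,i1] x [j0,j1] of grid indices.

    u[j][i], v[j][i] are the x- and y-velocity components at row j, col i.
    Counter-clockwise contour: bottom (+x), right (+y), top (-x), left (-y).
    """
    g = 0.0
    for i in range(i0, i1):          # bottom edge, along +x
        g += u[j0][i] * dx
    for j in range(j0, j1):          # right edge, along +y
        g += v[j][i1] * dy
    for i in range(i1, i0, -1):      # top edge, along -x
        g -= u[j1][i] * dx
    for j in range(j1, j0, -1):      # left edge, along -y
        g -= v[j][i0] * dy
    return g

# Check against solid-body rotation u = -y, v = x (x = i, y = j, omega = 1);
# the expected circulation around an area A is 2*omega*A.
u = [[-float(j) for i in range(4)] for j in range(4)]
v = [[float(i) for i in range(4)] for j in range(4)]
```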
Plenoptic Imaging of a Three Dimensional Cold Atom Cloud
NASA Astrophysics Data System (ADS)
Lott, Gordon
2017-04-01
A plenoptic imaging system is capable of sampling the rays of light in a volume, both spatially and angularly, providing information about the three-dimensional (3D) volume being imaged. The extraction of the 3D structure of a cold atom cloud is demonstrated using a single plenoptic camera and a single image. The reconstruction is tested against a reference image, and the results are discussed along with the capabilities and limitations of the imaging system. This capability is useful when the 3D distribution of the atoms is desired, such as when determining the shape of an atom trap, particularly when there is limited optical access. Support from AFRL is gratefully acknowledged.
Non Contacting Evaluation of Strains and Cracking Using Optical and Infrared Imaging Techniques
1988-08-22
Compatible Zenith Z-386 microcomputer with plotter. II. 3-D Motion Measuring System: 1. Complete OPTOTRAK three-dimensional digitizing system. System includes ... acquisition unit with 16 single-ended analog input channels. 3. Data Analysis Package software (KINEPLOT). 4. Extra OPTOTRAK camera (max 224 per system).
NASA Technical Reports Server (NTRS)
Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)
2008-01-01
The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.
1992-03-01
construction were completed and data, from blueprints and physical measurements, was entered concurrently with the coding of routines for data retrieval. While ... desirable for that view to accurately reflect what a person (or camera) would see if they were to stand at the same point in the physical world. To ... physical dimensions. A parallel projection does not perform this scaling and is therefore not suitable for our application. B. GENERAL PERSPECTIVE
A system for extracting 3-dimensional measurements from a stereo pair of TV cameras
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Cunningham, R.
1976-01-01
Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point in the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operational and provides a three-dimensional measurement resolution of ± mm at distances of about 2 m.
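The geometric core of the stereo triangulation described above can be sketched as follows. Assumptions: both cameras are already calibrated and each matched image point has been converted to a 3D ray; the 3D point is taken as the midpoint of the shortest segment between the two rays, one common choice rather than necessarily the 1976 system's method.

```python
# Midpoint triangulation of two camera rays o1 + t1*d1 and o2 + t2*d2.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    """Midpoint of the closest approach of two skew (or intersecting) rays."""
    w0 = [p - q for p, q in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                 # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * v for o, v in zip(o1, d1)]
    p2 = [o + t2 * v for o, v in zip(o2, d2)]
    return [(x + y) / 2.0 for x, y in zip(p1, p2)]
```

For intersecting rays the midpoint coincides with the intersection; for slightly mismatched rays (calibration noise) it is the natural compromise point.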
Single-shot three-dimensional reconstruction based on structured light line pattern
NASA Astrophysics Data System (ADS)
Wang, ZhenZhou; Yang, YongMing
2018-07-01
Reconstruction of an object in a single shot is of great importance in many applications in which the object is moving or its shape is non-rigid and changes irregularly. In this paper, we propose a single-shot structured light 3D imaging technique that calculates the phase map from the distorted line pattern. This technique makes use of image processing techniques to segment and cluster the projected structured light line pattern from one single captured image. The coordinates of the clustered lines are extracted to form a low-resolution phase matrix, which is then transformed into a full-resolution phase map by spline interpolation. The 3D shape of the object is computed from the full-resolution phase map and the 2D camera coordinates. Experimental results show that the proposed method is able to reconstruct the three-dimensional shape of the object robustly from a single image.
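The interpolation step above (low-resolution phase matrix to full-resolution phase map) can be sketched in a simplified form. The paper uses spline interpolation; bilinear interpolation is used here as a simpler stand-in, and the function name is my own.

```python
# Upsample a low-resolution phase matrix to full resolution (bilinear
# stand-in for the paper's spline interpolation; assumes a 2x2 or larger grid).

def bilinear_upsample(phase, out_rows, out_cols):
    """Upsample a 2D list `phase` to (out_rows, out_cols) bilinearly."""
    in_rows, in_cols = len(phase), len(phase[0])
    out = []
    for r in range(out_rows):
        # map the output coordinate back into input index space
        fr = r * (in_rows - 1) / (out_rows - 1) if out_rows > 1 else 0.0
        r0 = min(int(fr), in_rows - 2)
        wr = fr - r0
        row = []
        for c in range(out_cols):
            fc = c * (in_cols - 1) / (out_cols - 1) if out_cols > 1 else 0.0
            c0 = min(int(fc), in_cols - 2)
            wc = fc - c0
            v = (phase[r0][c0] * (1 - wr) * (1 - wc)
                 + phase[r0][c0 + 1] * (1 - wr) * wc
                 + phase[r0 + 1][c0] * wr * (1 - wc)
                 + phase[r0 + 1][c0 + 1] * wr * wc)
            row.append(v)
        out.append(row)
    return out
```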
NASA Astrophysics Data System (ADS)
Kachach, Redouane; Cañas, José María
2016-05-01
Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a simple stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve the occlusion problem by using a combination of a two-dimensional proximity tracking algorithm and the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm. The last module classifies the identified shapes into five vehicle categories: motorcycle, car, van, bus, and truck, using three-dimensional templates and an algorithm based on histograms of oriented gradients and a support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on the GRAM-RTM dataset and on a real video dataset that is made publicly available as part of this work.
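The detection module above uses a Gaussian mixture background model. As a hedged, much-simplified illustration of the idea (a single Gaussian per pixel rather than a mixture; class name and thresholds are my own), a pixel is declared foreground when it deviates from the running mean by more than k standard deviations:

```python
# Simplified running-Gaussian background subtraction (single Gaussian per
# pixel; the actual system uses a Gaussian mixture model).

class RunningGaussianBackground:
    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=225.0):
        self.alpha, self.k = alpha, k
        self.mean = [[float(p) for p in row] for row in first_frame]
        self.var = [[init_var] * len(row) for row in first_frame]

    def apply(self, frame):
        """Return a binary foreground mask and update the model."""
        mask = []
        for j, row in enumerate(frame):
            mrow = []
            for i, p in enumerate(row):
                m, v = self.mean[j][i], self.var[j][i]
                d = p - m
                fg = d * d > self.k * self.k * v
                mrow.append(1 if fg else 0)
                if not fg:  # only background pixels update the model
                    self.mean[j][i] = m + self.alpha * d
                    self.var[j][i] = (1 - self.alpha) * v + self.alpha * d * d
            mask.append(mrow)
        return mask
```

A true mixture model keeps several such Gaussians per pixel and matches each new sample against them, which handles multi-modal backgrounds (waving trees, flickering lights) that a single Gaussian cannot.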
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)
1999-01-01
A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time, so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
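The final velocimetry step above is simple once the stereo matching is done: with tracer centroids matched into 3D positions at two times, velocity follows by finite differencing. An illustrative sketch (names are my own):

```python
# Per-particle velocity from matched 3D positions at times t and t + dt.

def velocities(positions_t0, positions_t1, dt):
    """Finite-difference velocity for each matched particle pair."""
    return [tuple((b - a) / dt for a, b in zip(p0, p1))
            for p0, p1 in zip(positions_t0, positions_t1)]
```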
Generating Stereoscopic Television Images With One Camera
NASA Technical Reports Server (NTRS)
Coan, Paul P.
1996-01-01
Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
3D digital image correlation using a single 3CCD colour camera and dichroic filter
NASA Astrophysics Data System (ADS)
Zhong, F. Q.; Shao, X. X.; Quan, C.
2018-04-01
In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and to eliminate their interference. A 3CCD colour camera is then used to capture the two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher because the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on rigid-body displacement tests. The experimental results demonstrate that the measurement accuracy of the proposed method is higher than that of previously reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.
Optical design and development of a snapshot light-field laryngoscope
NASA Astrophysics Data System (ADS)
Zhu, Shuaishuai; Jin, Peng; Liang, Rongguang; Gao, Liang
2018-02-01
The convergence of recent advances in optical fabrication and digital processing has yielded a new generation of imaging technology: light-field (LF) cameras, which bridge the realms of applied mathematics, optics, and high-performance computing. Herein, for the first time, we introduce the paradigm of LF imaging into laryngoscopy. The resultant probe can image the three-dimensional shape of the vocal folds within a single camera exposure. Furthermore, to improve the spatial resolution, we developed an image fusion algorithm, providing a simple solution to a long-standing problem in LF imaging.
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
The positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm for the single-camera method and 0.031 mm for the dual-camera method. It is concluded that the algorithm of the single-camera method needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for the application.
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the stability of the system calibration, the proposed methods are simulation-based. Experimental results are shown in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
Single-Camera Stereoscopy Setup to Visualize 3D Dusty Plasma Flows
NASA Astrophysics Data System (ADS)
Romero-Talamas, C. A.; Lemma, T.; Bates, E. M.; Birmingham, W. J.; Rivera, W. F.
2016-10-01
A setup to visualize and track individual particles in multi-layered dusty plasma flows is presented. The setup consists of a single camera with variable frame rate, and a pair of adjustable mirrors that project the same field of view from two different angles to the camera, allowing for three-dimensional tracking of particles. Flows are generated by inclining the plane in which the dust is levitated using a specially designed setup that allows for external motion control without compromising vacuum. Dust illumination is achieved with an optics arrangement that includes a Powell lens that creates a laser fan with adjustable thickness and with approximately constant intensity everywhere. Both the illumination and the stereoscopy setup allow for the camera to be placed at right angles with respect to the levitation plane, in preparation for magnetized dusty plasma experiments in which there will be no direct optical access to the levitation plane. Image data and analysis of unmagnetized dusty plasma flows acquired with this setup are presented.
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, as in the conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
Volumetric particle image velocimetry with a single plenoptic camera
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.
2015-11-01
A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high-resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel accuracy in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera with a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low-Reynolds-number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system.
Overall, single camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
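The cross-correlation step at the heart of the PIV processing described above can be sketched in 2D for readability (the plenoptic technique correlates reconstructed 3D particle volumes the same way, voxel for voxel). The direct-sum correlation and integer-shift search below are simplifying assumptions; production codes use FFT-based correlation and sub-pixel peak fitting.

```python
# Find the displacement between two interrogation windows as the shift that
# maximises their direct cross-correlation.

def cross_correlate_peak(a, b, max_shift):
    """Return the (dy, dx) shift of window b relative to a that maximises
    the cross-correlation sum; a and b are 2D lists of intensities."""
    rows, cols = len(a), len(a[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(rows):
                for x in range(cols):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < rows and 0 <= xx < cols:
                        s += a[y][x] * b[yy][xx]
            if best is None or s > best:
                best, best_shift = s, (dy, dx)
    return best_shift

# A 'particle' at (2, 2) in frame a moves to (3, 4) in frame b:
a = [[0.0] * 8 for _ in range(8)]; a[2][2] = 1.0
b = [[0.0] * 8 for _ in range(8)]; b[3][4] = 1.0
```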
Bae, Seo-Yoon; Kim, Dongwook; Shin, Dongbin; Mahmood, Javeed; Jeon, In-Yup; Jung, Sun-Min; Shin, Sun-Hee; Kim, Seok-Jin; Park, Noejung; Lah, Myoung Soo; Baek, Jong-Beom
2017-11-17
Solid-state reaction of organic molecules holds a considerable advantage over liquid-phase processes in the manufacturing industry. However, the research progress in exploring this benefit is largely staggering, which leaves few liquid-phase systems to work with. Here, we show a synthetic protocol for the formation of a three-dimensional porous organic network via solid-state explosion of organic single crystals. The explosive reaction is realized by the Bergman reaction (cycloaromatization) of three enediyne groups on 2,3,6,7,14,15-hexaethynyl-9,10-dihydro-9,10-[1,2]benzenoanthracene. The origin of the explosion is systematically studied using single-crystal X-ray diffraction and differential scanning calorimetry, along with high-speed camera and density functional theory calculations. The results suggest that the solid-state explosion is triggered by an abrupt change in lattice energy induced by release of primer molecules in the 2,3,6,7,14,15-hexaethynyl-9,10-dihydro-9,10-[1,2]benzenoanthracene crystal lattice.
Image system for three dimensional, 360°, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
Photogrammetry Applied to Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Cattafesta, L. N., III; Radeztsky, R. H.; Burner, A. W.
2000-01-01
In image-based measurements, quantitative image data must be mapped to three-dimensional object space. Analytical photogrammetric methods, which may be used to accomplish this task, are discussed from the viewpoint of experimental fluid dynamicists. The Direct Linear Transformation (DLT) for camera calibration, used in pressure sensitive paint, is summarized. An optimization method for camera calibration is developed that can be used to determine the camera calibration parameters, including those describing lens distortion, from a single image. Combined with the DLT method, this method allows a rapid and comprehensive in-situ camera calibration and therefore is particularly useful for quantitative flow visualization and other measurements such as model attitude and deformation in production wind tunnels. The paper also includes a brief description of typical photogrammetric applications to temperature- and pressure-sensitive paint measurements and model deformation measurements in wind tunnels.
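The DLT summarized above maps object-space coordinates to image coordinates with 11 parameters. A sketch of the forward model only (the calibration itself, i.e. solving for the parameters from known targets, is a linear least-squares problem omitted here; the parameter ordering follows the common convention, which may differ between texts):

```python
# Standard 11-parameter Direct Linear Transformation: object space (X, Y, Z)
# to image coordinates (u, v).

def dlt_project(L, X, Y, Z):
    """Apply DLT parameters L = [L1, ..., L11] to a 3D point."""
    denom = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / denom
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / denom
    return u, v
```

Each known calibration target contributes two linear equations in the eleven unknowns, so six or more well-distributed targets suffice to solve for `L` by least squares.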
3-dimensional telepresence system for a robotic environment
Anderson, Matthew O.; McKay, Mark D.
2000-01-01
A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three-dimensional viewing, and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones, each zone corresponding to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include panning, tilting, sliding, raising, or lowering of the cameras. Other user interface devices are provided to improve the three-dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone, and a remote actuator. The pair of visual display glasses is provided to facilitate three-dimensional viewing and hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus
2014-01-01
To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.
Computer aided photographic engineering
NASA Technical Reports Server (NTRS)
Hixson, Jeffrey A.; Rieckhoff, Tom
1988-01-01
High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.
NASA Astrophysics Data System (ADS)
Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin
2018-05-01
Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array; the lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a ray-tracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
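The backward-tracing strategy described above can be sketched in toy form (names, the planar-scene representation, and the nearest-opaque-hit rule are my own simplifications of the general idea, not the author's package):

```python
# Toy backward ray trace: rays from a sensor pixel are cast into the scene,
# each depth plane is intersected, and the nearest opaque hit sets the colour;
# the pixel value is the average over the sampled ray cone.

def intersect_plane(origin, direction, z_plane):
    """Point where the ray origin + t*direction crosses the plane z = z_plane."""
    t = (z_plane - origin[2]) / direction[2]
    return (origin[0] + t * direction[0], origin[1] + t * direction[1], z_plane)

def render_pixel(pixel_ray_dirs, origin, planes):
    """Average the colour seen along each sampled ray; `planes` is a list of
    (z, hit) pairs sorted near-to-far, where hit(x, y) returns a colour or None."""
    total, n = 0.0, 0
    for d in pixel_ray_dirs:
        colour = 0.0                        # background
        for z, hit in planes:
            p = intersect_plane(origin, d, z)
            c = hit(p[0], p[1])
            if c is not None:               # nearest opaque plane wins
                colour = c
                break
        total += colour
        n += 1
    return total / n
```

Averaging over many rays per pixel is what reproduces the pupil integration, so defocus blur and the per-lenslet pupil images of a plenoptic sensor emerge naturally from the same loop.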
Compton camera imaging and the cone transform: a brief overview
NASA Astrophysics Data System (ADS)
Terzioglu, Fatma; Kuchment, Peter; Kunyansky, Leonid
2018-05-01
While most applications of the Radon transform to imaging involve integration over smooth sub-manifolds of the ambient space, important situations have lately appeared in which the integration surfaces are conical. Three such applications are single-scatter optical tomography, Compton camera medical imaging, and homeland security. In spite of the similar surfaces of integration, the data and the inverse problems associated with these modalities differ significantly. In this article, we present a brief overview of the mathematics arising in Compton camera imaging. In particular, the emphasis is on the overdetermined data and flexible geometry of the detectors. For detailed results, as well as other approaches (e.g. smaller-dimensional data or restricted geometry of detectors), the reader is directed to the relevant publications. Only a brief description and some references are provided for single-scatter optical tomography. This work was supported in part by NSF DMS grants 1211463 (the first two authors), 1211521 and 141877 (the third author), as well as a College of Science of Texas A&M University grant.
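The cone transform at the center of this mathematics can be stated compactly; one common parameterization is the following (an illustration of the standard setup; weight factors and notation vary between papers, and this is not necessarily the authors' exact formulation):

```latex
% Cone transform: integral of f over the cone with vertex u on the
% detector, central axis beta (a unit vector), and half-angle psi.
\[
  \mathcal{C}f(u,\beta,\psi)
  = \int_{\{\omega \in S^{2} \,:\, \omega \cdot \beta = \cos\psi\}}
    \int_{0}^{\infty} f(u + t\omega)\, \mathrm{d}t \, \mathrm{d}\omega
\]
% In Compton imaging the half-angle is fixed by the measured photon
% energies through the Compton scattering relation:
\[
  \cos\psi = 1 - m_{e}c^{2}\left(\frac{1}{E'} - \frac{1}{E_{0}}\right),
\]
% where E_0 is the incident and E' the scattered photon energy.
```

Each detected event thus constrains the source distribution to a cone, and the inverse problem is to recover f from these conical integrals.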
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera, controlled by a camera operator, and reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the images. 3-D models are reconstructed by estimating depth from the complementary multiviewpoint images captured by the robotic cameras, which are arranged in a two-dimensional array. The models are converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfies the above requirements.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
Binocular stereo vision is an important and challenging topic in computer vision, with broad application prospects in many fields such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is conducted on binocular stereo camera calibration, image feature extraction, and stereo matching. In the calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
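The final step the abstract describes, recovering 3D object points from matched image points and calibrated cameras, is classically done by linear (DLT) triangulation; a minimal sketch with synthetic cameras (the matrices and numbers below are illustrative, not from the paper):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X such that
    x1 ~ P1 @ X and x2 ~ P2 @ X, where P1, P2 are 3x4 projection
    matrices and x1, x2 are matched pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # null-space vector of A
    return X[:3] / X[3]          # de-homogenize

# Two toy cameras: identical intrinsics, second camera shifted along x
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

X_true = np.array([0.2, -0.1, 4.0])
project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the DLT solution recovers the 3D point exactly; with real SURF/SGBM matches it would be the least-squares solution.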
The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results
NASA Astrophysics Data System (ADS)
Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.
2017-11-01
At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained from three image sources: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II), and a medium-format camera (Hasselblad-Hd4). In order to check how the results obtained from these sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the distortion determined in the self-calibration process, the flatness of the walls, and the discrepancies between the point clouds from the low-cost cameras and the reference data. The results presented below are the outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology, and the National Museum in Warsaw.
The application of holography as a real-time three-dimensional motion picture camera
NASA Technical Reports Server (NTRS)
Kurtz, R. L.
1973-01-01
A historical introduction to holography is presented, as well as a basic description of sideband holography for stationary objects. A brief theoretical development of both time-dependent and time-independent holography is also provided, along with an analytical and intuitive discussion of a unique holographic arrangement which allows the resolution of front surface detail from an object moving at high speeds. As an application of such a system, a real-time three-dimensional motion picture camera system is discussed and the results of a recent demonstration of the world's first true three-dimensional motion picture are given.
NASA Technical Reports Server (NTRS)
Chen, Fang-Jenq
1997-01-01
Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system were perfect, the transformation equations between the two-dimensional image and the three-dimensional object space would be linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of one part in 40,000 is achievable without tedious laboratory calibrations of the camera.
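The paper's full least-squares adjustment is not reproduced here, but the core difficulty, inverting a nonlinear lens-distortion model iteratively, can be sketched with a single-coefficient radial model (the model and numbers are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def distort(pts, k):
    """Radial distortion model (single coefficient, hypothetical
    simplification): x_d = x_u * (1 + k * r^2), with r the undistorted
    radius from the optical axis."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1 + k * r2)

def undistort(pts_d, k, iters=20):
    """Invert the distortion by fixed-point iteration: start from the
    distorted point and repeatedly divide by (1 + k r^2) evaluated at
    the current estimate of the undistorted point."""
    pts = pts_d.copy()
    for _ in range(iters):
        r2 = np.sum(pts**2, axis=1, keepdims=True)
        pts = pts_d / (1 + k * r2)
    return pts

pts_u = np.array([[0.3, 0.1], [-0.5, 0.4], [0.0, -0.7]])
k = -0.2                      # barrel distortion
pts_est = undistort(distort(pts_u, k), k)
```

The fixed-point iteration converges quickly for mild distortion; the paper's algorithm additionally adjusts the full transformation parameters by least squares.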
Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.
Quesada, Luis; León, Alejandro J
2012-10-01
Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and requires only a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, provided that it is opaque, evenly colored, sufficiently contrasted with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal imposes no additional constraints and therefore allows a market-wide implementation of applications that require the estimation of the three positional degrees of freedom of an object.
Application of a laser scanner to three dimensional visual sensing tasks
NASA Technical Reports Server (NTRS)
Ryan, Arthur M.
1992-01-01
The issues associated with using a laser scanner for visual sensing, and the methods developed by the author to address them, are described. A laser scanner is a device that controls the direction of a laser beam by deflecting it through a pair of orthogonal mirrors, the orientations of which are specified by a computer. If a calibrated laser scanner is combined with a calibrated camera, it is possible to perform three-dimensional sensing by directing the laser at objects within the field of view of the camera. Several issues associated with using a laser scanner for three-dimensional visual sensing must be addressed in order to use it effectively. First, methods are needed to calibrate the laser scanner. Second, methods are required to estimate three-dimensional points using a calibrated camera and laser scanner. Third, methods are required for locating the laser spot in a cluttered image. Fourth, mathematical models are necessary that predict the laser scanner's performance and provide structure for three-dimensional data points. The author developed several methods to address each of these issues and evaluated them to determine how and when they should be applied. The theoretical development, implementation, and results of their use in a dual-arm, eighteen-degree-of-freedom robotic system for space assembly are described.
Autocalibration of a projector-camera system.
Okatani, Takayuki; Deguchi, Koichiro
2005-12-01
This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
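Homographies of the kind estimated here are classically computed from point correspondences with the direct linear transform; a minimal sketch (a generic DLT from synthetic correspondences, not the paper's noniterative multi-image algorithm):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from four or
    more point correspondences via the direct linear transform (DLT):
    each correspondence contributes two rows to a homogeneous system
    whose null space is the flattened H."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]           # fix the overall scale

# Toy check: map points under a known homography and recover it
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
src = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
apply_h = lambda H, p: tuple((H @ [p[0], p[1], 1.0])[:2] / (H @ [p[0], p[1], 1.0])[2])
dst = [apply_h(H_true, p) for p in src]
H_est = homography_dlt(src, dst)
```

The paper's contribution is determining the screen-camera homography without known correspondences on the screen; the DLT above only illustrates the underlying algebraic object.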
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera that presents parallax content from different angles through a lenticular lens array, is proposed. Compared with previous implementations of AR techniques based on two-dimensional (2D) panel displays with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene exhibit realistic and obvious stereo performance.
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1990-01-01
Improved techniques are provided for alignment of two objects. The present invention is particularly suited for three-dimensional translation and three-dimensional rotational alignment of objects in outer space. A camera 18 is fixedly mounted to one object, such as a remote manipulator arm 10 of the spacecraft, while the planar reflective surface 30 is fixed to the other object, such as a grapple fixture 20. A monitor 50 displays in real-time images from the camera, such that the monitor displays both the reflected image of the camera and visible markings on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm 10 manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
NASA Astrophysics Data System (ADS)
Jones, Terry Jay; Humphreys, Roberta M.; Helton, L. Andrew; Gui, Changfeng; Huang, Xiang
2007-06-01
We use imaging polarimetry taken with the HST Advanced Camera for Surveys High Resolution Camera to explore the three-dimensional structure of the circumstellar dust distribution around the red supergiant VY Canis Majoris. The polarization vectors of the nebulosity surrounding VY CMa show a strong centrosymmetric pattern in all directions except directly east and range from 10% to 80% in fractional polarization. In regions that are optically thin, and therefore likely to have only single scattering, we use the fractional polarization and photometric color to locate the physical position of the dust along the line of sight. Most of the individual arclike features and clumps seen in the intensity image are also features in the fractional polarization map. These features must be distinct geometric objects. If they were just local density enhancements, the fractional polarization would not change so abruptly at the edge of the feature. The location of these features in the ejecta of VY CMa using polarimetry provides a determination of their three-dimensional geometry independent of, but in close agreement with, the results from our study of their kinematics (Paper I). Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu
2012-05-01
Scoliosis is defined as a spinal pathology characterized as a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies have performed biomechanical modeling and corrective-force measurements of scoliosis correction. These studies were able to predict the clinical outcome and measure the corrective forces acting on screws; however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. As a result, the outcomes of biomechanical modeling may not be fully realistic, and the corrective forces during the surgical correction procedure have been difficult to measure intra-operatively. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure from a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters: the included angle θ between the two cameras, the actual length of the rod in mm, and the location of points for curve fitting. An implant rod utilized in spine surgery was used to evaluate the accuracy of the method. The three-dimensional geometry of the rod was measured from an image obtained by a scanner and compared to the proposed two-camera method. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrates the possibility of intra-operatively measuring the three-dimensional geometry of a spinal rod. The proposed method could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of the three-dimensional implant rod geometry in vivo.
A new method to acquire 3-D images of a dental cast
NASA Astrophysics Data System (ADS)
Li, Zhongke; Yi, Yaxing; Zhu, Zhen; Li, Hua; Qin, Yongyuan
2006-01-01
This paper introduces our newly developed method to acquire three-dimensional images of a dental cast. A rotatable table, a laser-knife, a mirror, a CCD camera, and a personal computer make up the three-dimensional data-acquisition system. A dental cast is placed on the table and the mirror is installed beside it; a linear laser is projected onto the dental cast; the CCD camera is mounted above the cast, where it can image both the dental cast and its reflection in the mirror. While the table rotates, the camera records the shape of the laser streak projected on the dental cast and transmits the data to the computer. After the table has rotated one full circle, the computer processes the data and calculates the three-dimensional coordinates of the cast's surface. In the data-processing procedure, artificial neural networks are employed to calibrate the lens distortion and to map coordinates from the screen coordinate system to the world coordinate system. From the three-dimensional coordinates, the computer reconstructs the stereo image of the dental cast, which is essential for computer-aided diagnosis and treatment planning in orthodontics. In comparison with other systems in service, for example laser-beam three-dimensional scanning systems, this data-acquisition system is characterized by: a. speed, taking only 1 minute to scan a dental cast; b. compactness, with simple and compact machinery; c. no blind zone, as the mirror is introduced to reduce the blind zone.
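A rotating-table scanner of this kind ultimately maps each laser-streak profile into Cartesian coordinates by sweeping the laser plane around the turntable axis; a simplified sketch (idealized geometry, ignoring the lens-distortion calibration that the paper handles with neural networks):

```python
import numpy as np

def profile_to_xyz(radii, heights, table_angle_deg):
    """Convert one laser-streak profile, measured at a given turntable
    angle, into Cartesian points: the streak gives (radius, height)
    pairs in the laser plane, and rotating the table by theta sweeps
    that plane around the vertical axis (hypothetical simplified
    geometry)."""
    th = np.deg2rad(table_angle_deg)
    r = np.asarray(radii, float)
    z = np.asarray(heights, float)
    return np.stack([r * np.cos(th), r * np.sin(th), z], axis=1)

# Two points on the streak, with the table rotated to 90 degrees
pts = profile_to_xyz([10.0, 12.0], [0.0, 5.0], 90.0)
```

Accumulating these points over a full rotation of the table yields the surface point cloud from which the stereo image is reconstructed.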
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional linear imaging methods limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction overcomes the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the captured image; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extracted by comparing and isolating objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.
NASA Technical Reports Server (NTRS)
Lane, Marc; Hsieh, Cheng; Adams, Lloyd
1989-01-01
In undertaking the design of a 2000-mm focal length camera for the Mariner Mark II series of spacecraft, JPL sought novel materials with the requisite dimensional and thermal stability, outgassing and corrosion resistance, low mass, high stiffness, and moderate cost. Metal-matrix composites and Al-Li alloys have, in addition to excellent mechanical properties and low density, a suitably low coefficient of thermal expansion, high specific stiffness, and good electrical conductivity. The greatest single obstacle to application of these materials to camera structure design is noted to have been the lack of information regarding long-term dimensional stability.
Dimensional coordinate measurements: application in characterizing cervical spine motion
NASA Astrophysics Data System (ADS)
Zheng, Weilong; Li, Linan; Wang, Shibin; Wang, Zhiyong; Shi, Nianke; Xue, Yuan
2014-06-01
The cervical spine is a complicated part of the human body, and its movements take diverse forms. The movements of the vertebral segments are three-dimensional and are reflected in changes of the angle between two joints and in displacements in different directions. Under normal conditions, the cervical spine can flex, extend, laterally flex, and rotate. Since there is no relative motion between measuring marks fixed on one segment of a cervical vertebra, a vertebra with three marked points can be treated as a rigid body. A body's motion in space can be decomposed into a translational movement and a rotational movement around a base point. This study concerns the calculation of the three-dimensional coordinates of marked points attached to the cervical spine by an optical method; these measurements then allow the calculation of motion parameters for every spine segment. For this study, we chose a three-dimensional measurement method based on binocular stereo vision. The object with marked points is placed in front of the CCD cameras. In each shot, we obtain two parallax images taken by the different cameras, and according to the principle of binocular vision, three-dimensional measurements can be realized. The cameras are mounted in parallel. This paper describes the layout of the experimental system and the mathematical model used to obtain the coordinates.
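For the parallel-camera arrangement described, the depth of each marked point follows from its disparity by the standard rectified-stereo relation Z = fB/d; a minimal illustration (the numbers are invented for the example):

```python
import numpy as np

def depth_from_disparity(u_left, u_right, f_px, baseline):
    """Parallel (rectified) stereo: depth Z = f * B / d, where d is the
    horizontal disparity in pixels, f the focal length in pixels, and
    B the baseline between the two cameras."""
    d = np.asarray(u_left, float) - np.asarray(u_right, float)
    return f_px * baseline / d

# A marker imaged at u = 420 px (left) and u = 400 px (right),
# with f = 1000 px and a 0.12 m baseline
Z = depth_from_disparity(420, 400, f_px=1000.0, baseline=0.12)
```

The remaining coordinates follow from the pinhole model (X = Z u / f, Y = Z v / f in the left-camera frame), after which the per-segment rotation and translation can be fitted to the three marked points.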
3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform
NASA Astrophysics Data System (ADS)
Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul
2018-03-01
This paper describes an approach to realizing the three-dimensional surfaces of cylinder-based objects, presenting the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, with the aid of several digital image processing algorithms applied to the two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surface of the objects and is practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions are supported by an error analysis, in which the maximum percent error obtained is approximately 1.4% for the height and 4.0%, 4.79%, and 4.7% for the diameters at three specific heights of the objects.
Holographic motion picture camera with Doppler shift compensation
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1976-01-01
A holographic motion picture camera is reported for producing three dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster moving objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Feipeng; Shi Hongjian; Bai Pengxiang
In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of an object surface is described in detail. Through formula derivation and experiment, a linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method based on linear least-squares fitting, which is very simple in principle and calibration. Experiments are implemented to validate the availability and reliability of the calibration method.
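The reported linear dependence of the out-of-plane calibration coefficient on the y coordinate can be fitted exactly as described, by linear least squares; a minimal sketch with synthetic measurements (the numbers are invented for illustration):

```python
import numpy as np

# Suppose the out-of-plane calibration coefficient c has been measured
# at several y positions, and the linear model c(y) = a*y + b reported
# in the paper is recovered by linear least squares.
y = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
c = 0.002 * y + 1.5                      # noise-free synthetic measurements

A = np.stack([y, np.ones_like(y)], axis=1)   # design matrix [y, 1]
(a, b), *_ = np.linalg.lstsq(A, c, rcond=None)
```

With the fitted (a, b), the out-of-plane displacement at any image row follows from the local coefficient c(y) rather than a single global constant, which is the point of the unequal-height calibration.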
Three-dimensional particle tracking via tunable color-encoded multiplexing.
Duocastella, Martí; Theriault, Christian; Arnold, Craig B
2016-03-01
We present a novel 3D tracking approach capable of locating single particles with nanometric precision over wide axial ranges. Our method uses a fast acousto-optic liquid lens implemented in a bright field microscope to multiplex light based on color into different and selectable focal planes. By separating the red, green, and blue channels from an image captured with a color camera, information from up to three focal planes can be retrieved. Multiplane information from the particle diffraction rings enables precisely locating and tracking individual objects up to an axial range about 5 times larger than conventional single-plane approaches. We apply our method to the 3D visualization of the well-known coffee-stain phenomenon in evaporating water droplets.
Plenoptic Imaging for Three-Dimensional Particle Field Diagnostics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guildenbecher, Daniel Robert; Hall, Elise Munz
2017-06-01
Plenoptic imaging is a promising emerging technology for single-camera, 3D diagnostics of particle fields. In this work, recent developments towards quantitative measurements of particle size, positions, and velocities are discussed. First, the technique is proven viable with measurements of the particle field generated by the impact of a water drop on a thin film of water. Next, well-controlled experiments are used to verify diagnostic uncertainty. Finally, an example is presented of 3D plenoptic imaging of a laboratory scale, explosively generated fragment field.
Method and apparatus for coherent imaging of infrared energy
Hutchinson, D.P.
1998-05-12
A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.
NASA Astrophysics Data System (ADS)
La Riviere, P. J.; Pan, X.; Penney, B. C.
1998-06-01
Scintimammography, a nuclear-medicine imaging technique that relies on the preferential uptake of Tc-99m-sestamibi and other radionuclides in breast malignancies, has the potential to provide differentiation of mammographically suspicious lesions, as well as outright detection of malignancies in women with radiographically dense breasts. In this work we use the ideal-observer framework to quantify the detectability of a 1-cm lesion using three different imaging geometries: the planar technique that is the current clinical standard, conventional single-photon emission computed tomography (SPECT), in which the scintillation cameras rotate around the entire torso, and dedicated breast SPECT, in which the cameras rotate around the breast alone. We also introduce an adaptive smoothing technique for the processing of planar images and of sinograms that exploits Fourier transforms to achieve effective multidimensional smoothing at a reasonable computational cost. For the detection of a 1-cm lesion with a clinically typical 6:1 tumor-background ratio, we find ideal-observer signal-to-noise ratios (SNR) that suggest that the dedicated breast SPECT geometry is the most effective of the three, and that the adaptive, two-dimensional smoothing technique should enhance lesion detectability in the tomographic reconstructions.
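The paper's adaptive scheme is not reproduced here, but the underlying idea, smoothing by multiplying the image spectrum with a low-pass filter, can be sketched with a fixed Gaussian transfer function (a simplified stand-in, not the authors' adaptive algorithm):

```python
import numpy as np

def fourier_smooth(img, sigma):
    """Smooth a 2-D image by multiplying its FFT with the transfer
    function of a spatial Gaussian of standard deviation sigma
    (a fixed-bandwidth stand-in for the adaptive scheme described
    in the abstract)."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))
smooth = fourier_smooth(noisy, sigma=2.0)
```

The filter preserves the image mean (the DC component passes unchanged) while suppressing high-frequency noise, which is the property exploited when smoothing planar images and sinograms before reconstruction.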
GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.
Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward
2017-10-01
Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.
3-D endoscopic imaging using plenoptic camera.
Le, Hanh N D; Decker, Ryan; Opferman, Justin; Kim, Peter; Krieger, Axel; Kang, Jin U
2016-06-01
Three-dimensional endoscopic imaging using a plenoptic technique combined with an F-matching algorithm has been pursued in this study. Custom relay optics were designed to integrate a commercial surgical straight endoscope with a plenoptic camera.
Human iris three-dimensional imaging at micron resolution by a micro-plenoptic camera
Chen, Hao; Woodward, Maria A.; Burke, David T.; Jeganathan, V. Swetha E.; Demirci, Hakan; Sick, Volker
2017-01-01
A micro-plenoptic system was designed to capture the three-dimensional (3D) topography of the anterior iris surface by simple single-shot imaging. Within a depth-of-field of 2.4 mm, depth resolution of 10 µm can be achieved with accuracy (systematic errors) and precision (random errors) below 20%. We demonstrated the application of our micro-plenoptic imaging system on two healthy irides, an iris with naevi, and an iris with melanoma. The ridges and folds, with height differences of 10~80 µm, on the healthy irides can be effectively captured. The front surface on the iris naevi was flat, and the iris melanoma was 50 ± 10 µm higher than the surrounding iris. The micro-plenoptic imaging system has great potential to be utilized for iris disease diagnosis and continuing, simple monitoring. PMID:29082081
Three dimensional identification card and applications
NASA Astrophysics Data System (ADS)
Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao
2016-10-01
A three-dimensional identification card, with a three-dimensional personal image displayed and stored for personal identification, is expected to be the advanced successor of the present two-dimensional identification card [1]. In a three-dimensional ID card, three-dimensional optical techniques are used so that the personal image on the card is displayed in three dimensions and a three-dimensional personal face can be seen. The card also stores the three-dimensional face information in its embedded electronic chip; this information might be recorded using a two-channel camera and can be displayed on a computer as three-dimensional images for personal identification. The three-dimensional ID card might be one interesting direction for updating the present two-dimensional card, with potential wide use at airport customs, hotel entrances, schools, and universities, as well as for passports, online banking, and registration for online games.
Afanasyev, Pavel; Seer-Linnemayr, Charlotte; Ravelli, Raimond B G; Matadeen, Rishi; De Carlo, Sacha; Alewijnse, Bart; Portugal, Rodrigo V; Pannu, Navraj S; Schatz, Michael; van Heel, Marin
2017-09-01
Single-particle cryogenic electron microscopy (cryo-EM) can now yield near-atomic resolution structures of biological complexes. However, the reference-based alignment algorithms commonly used in cryo-EM suffer from reference bias, limiting their applicability (also known as the 'Einstein from random noise' problem). Low-dose cryo-EM therefore requires robust and objective approaches to reveal the structural information contained in the extremely noisy data, especially when dealing with small structures. A reference-free pipeline is presented for obtaining near-atomic resolution three-dimensional reconstructions from heterogeneous ('four-dimensional') cryo-EM data sets. The methodologies integrated in this pipeline include a posteriori camera correction, movie-based full-data-set contrast transfer function determination, movie-alignment algorithms, (Fourier-space) multivariate statistical data compression and unsupervised classification, 'random-startup' three-dimensional reconstructions, four-dimensional structural refinements and Fourier shell correlation criteria for evaluating anisotropic resolution. The procedures exclusively use information emerging from the data set itself, without external 'starting models'. Euler-angle assignments are performed by angular reconstitution rather than by the inherently slower projection-matching approaches. The comprehensive 'ABC-4D' pipeline is based on the two-dimensional reference-free 'alignment by classification' (ABC) approach, where similar images in similar orientations are grouped by unsupervised classification. Some fundamental differences between X-ray crystallography versus single-particle cryo-EM data collection and data processing are discussed. The structure of the giant haemoglobin from Lumbricus terrestris at a global resolution of ∼3.8 Å is presented as an example of the use of the ABC-4D procedure.
Fu, Xiaoming; Peng, Chun; Li, Zan; Liu, Shan; Tan, Minmin; Song, Jinlin
2017-01-01
To explore a new technique for reconstructing and measuring three-dimensional (3D) models of orthodontic plaster casts using multi-baseline digital close-range photogrammetry (MBDCRP) with a single-lens reflex camera. Thirty sets of orthodontic plaster casts that do not exhibit severe horizontal overlap (>2 mm) between any two teeth were recorded by a single-lens reflex camera, with 72 pictures taken in different directions. The 3D models of these casts were reconstructed and measured using the open source software MeshLab. These parameters, including mesio-distal crown diameter, arch width, and arch perimeter, were recorded six times on both the 3D digital models and the plaster casts by two examiners. Statistical analysis was carried out using the Bland-Altman method to measure agreement between the novel method and the traditional calliper method by calculating the differences between mean values. The average differences between the measurements of the photogrammetric 3D models and the plaster casts were 0.011–0.402 mm. The mean differences between measurements obtained from the photogrammetric 3D models and the dental casts were not significant (P>0.05), except for the lower arch perimeter, and all the differences were regarded as clinically acceptable (<0.5 mm). Measurements obtained by MBDCRP compared well with those obtained from plaster casts, indicating that MBDCRP is an alternative way to store and measure dental plaster casts without severe horizontal overlap between any two teeth.
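The Bland-Altman agreement analysis used above reduces to computing the mean of the paired differences (the bias) and its 95% limits of agreement (bias ± 1.96 standard deviations). A minimal Python sketch with hypothetical measurements, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias (mean of paired differences) and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical mesio-distal crown diameters (mm): photogrammetric model vs. calliper
model_mm = [8.10, 7.95, 6.42, 9.01, 7.33]
cast_mm  = [8.05, 8.00, 6.40, 8.95, 7.30]
bias, (low, high) = bland_altman(model_mm, cast_mm)
```

For these toy numbers the bias is 0.022 mm, comfortably inside the study's clinical acceptability threshold of 0.5 mm.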
Software Graphical User Interface For Analysis Of Images
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn
1992-01-01
CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.
Rapid prototyping 3D virtual world interfaces within a virtual factory environment
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
Development of robots and application to industrial processes
NASA Technical Reports Server (NTRS)
Palm, W. J.; Liscano, R.
1984-01-01
An algorithm is presented for using a robot system with a single camera to position a slender object in three-dimensional space for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a control-configured end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used. Variability in the grasped orientation and position of the pin can be accommodated with the sensor system. Performance tests show that the system is feasible. More work is needed to determine more precisely the effects of lighting levels and lighting direction.
The Proof of the ``Vortex Theory of Matter''
NASA Astrophysics Data System (ADS)
Moon, Russell
2009-11-01
According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed and then re-emitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.
von Diezmann, Alex; Shechtman, Yoav; Moerner, W. E.
2017-01-01
Single-molecule super-resolution fluorescence microscopy and single-particle tracking are two imaging modalities that illuminate the properties of cells and materials on spatial scales down to tens of nanometers, or with dynamical information about nanoscale particle motion in the millisecond range, respectively. These methods generally use wide-field microscopes and two-dimensional camera detectors to localize molecules to much higher precision than the diffraction limit. Given the limited total photons available from each single-molecule label, both modalities require careful mathematical analysis and image processing. Much more information can be obtained about the system under study by extending to three-dimensional (3D) single-molecule localization: without this capability, visualization of structures or motions extending in the axial direction can easily be missed or confused, compromising scientific understanding. A variety of methods for obtaining both 3D super-resolution images and 3D tracking information have been devised, each with their own strengths and weaknesses. These include imaging of multiple focal planes, point-spread-function engineering, and interferometric detection. These methods may be compared based on their ability to provide accurate and precise position information of single-molecule emitters with limited photons. To successfully apply and further develop these methods, it is essential to consider many practical concerns, including the effects of optical aberrations, field-dependence in the imaging system, fluorophore labeling density, and registration between different color channels. Selected examples of 3D super-resolution imaging and tracking are described for illustration from a variety of biological contexts and with a variety of methods, demonstrating the power of 3D localization for understanding complex systems. PMID:28151646
NASA Astrophysics Data System (ADS)
Liu, Zexi; Cohen, Fernand
2017-11-01
We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available, using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and the appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, turning a difficult two-dimensional (2-D) face recognition problem into a much simpler 3-D surface-matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under good illumination conditions, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.
Imaging chemical reactions - 3D velocity mapping
NASA Astrophysics Data System (ADS)
Chichinin, A. I.; Gericke, K.-H.; Kauczok, S.; Maul, C.
Visualising a collision between an atom or a molecule or a photodissociation (half-collision) of a molecule on a single particle and single quantum level is like watching the collision of billiard balls on a pool table: Molecular beams or monoenergetic photodissociation products provide the colliding reactants at controlled velocity before the reaction products velocity is imaged directly with an elaborate camera system, where one should keep in mind that velocity is, in general, a three-dimensional (3D) vectorial property which combines scattering angles and speed. If the processes under study have no cylindrical symmetry, then only this 3D product velocity vector contains the full information of the elementary process under study.
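Since the product velocity is a three-dimensional vectorial property combining scattering angles and speed, a measured (speed, polar angle, azimuth) triple maps to Cartesian velocity components in the usual spherical-coordinate way. A small Python sketch of that conversion (an illustrative helper, not the authors' mapping code):

```python
import math

def velocity_vector(speed, theta, phi):
    """Cartesian velocity components from speed and scattering angles:
    polar angle theta (measured from the z axis) and azimuth phi, in radians."""
    return (speed * math.sin(theta) * math.cos(phi),
            speed * math.sin(theta) * math.sin(phi),
            speed * math.cos(theta))

# A product flying along the detection (z) axis at 100 m/s
vx, vy, vz = velocity_vector(100.0, 0.0, 0.0)
```

Inverting this map recovers the scattering angles and speed from an imaged 3D velocity vector.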
The study of integration about measurable image and 4D production
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun
2008-12-01
In this paper, we create geospatial data for three-dimensional (3D) modelling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, we establish a three-dimensional landscape model combining a DEM and a DOM, based on digital photogrammetry applied to the aerial image data used to make the "4D" products (DOM: Digital Orthophoto Map; DEM: Digital Elevation Model; DLG: Digital Line Graphic; DRG: Digital Raster Graphic). For buildings and other artificial features of interest to users, three-dimensional reconstruction of the real features is achieved by digital close-range photogrammetry through the following steps: data collection with non-metric cameras, camera calibration, feature extraction, and image matching. Finally, we combine the three-dimensional background with locally measured real images of these large geographic data and realize the integration of measurable real images and the 4D products. The article discusses the whole workflow and technology, achieving three-dimensional reconstruction and the integration of the large-scale three-dimensional landscape with the metric building models.
Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa
2016-08-08
We developed multiocular 1/3-inch, 2.75-μm-pixel-size, 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field, with the on-chip beam-splitter horizontally dividing rays according to incident angle and the inner meta-micro-lens collecting the divided rays into pixels with small optical loss. Cross-talk between adjacent light-field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light-field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light-field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.
Volume three-dimensional flow measurements using wavelength multiplexing.
Moore, Andrew J; Smith, Jason; Lawson, Nicholas J
2005-10-01
Optically distinguishable seeding particles that emit light in a narrow bandwidth, and a combination of bandwidths, were prepared by encapsulating quantum dots. The three-dimensional components of the particles' displacement were measured within a volume of fluid with particle tracking velocimetry (PTV). Particles are multiplexed to different hue bands in the camera images, enabling an increased seeding density and (or) fewer cameras to be used, thereby increasing the measurement spatial resolution and (or) reducing optical access requirements. The technique is also applicable to two-phase flow measurements with PTV or particle image velocimetry, where each phase is uniquely seeded.
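The hue-multiplexing step described above amounts to assigning each detected particle's colour to the nearest emission band on the hue circle. A minimal Python illustration with hypothetical band positions (not the paper's actual quantum-dot spectra):

```python
import colorsys

def hue_band(rgb, bands_deg):
    """Assign an RGB colour to the nearest hue band (in degrees) on the hue circle."""
    h, _, _ = colorsys.rgb_to_hsv(*rgb)   # hue as a fraction in [0, 1)
    hue = h * 360.0
    # circular distance: the shorter way around the 360-degree hue wheel
    return min(bands_deg, key=lambda b: min(abs(hue - b), 360.0 - abs(hue - b)))

# Hypothetical emission bands for two quantum-dot species: red (~0 deg), green (~120 deg)
bands = [0, 120]
red_particle = hue_band((1.0, 0.1, 0.1), bands)    # -> 0
green_particle = hue_band((0.1, 1.0, 0.1), bands)  # -> 120
```

Once each particle is assigned a band, tracking can proceed independently per band, which is what permits the higher seeding density.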
Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S
2007-05-01
To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, three-dimensional (3D) Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institutional-based prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with that of the planned commercial system. With continued development of this technology, the three-dimensional system may become a novel viewing system in ophthalmic surgery, with improved ergonomics relative to traditional microscopic viewing.
Real time markerless motion tracking using linked kinematic chains
Luck, Jason P [Arvada, CO; Small, Daniel E [Albuquerque, NM
2007-08-14
A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed of the subject and tracked using three dimensional volumetric data collected by a multiple camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments and accommodates joint limits, velocity constraints, and collision constraints and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.
NASA Technical Reports Server (NTRS)
Luchini, Chris B.
1997-01-01
Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.
Multiview hyperspectral topography of tissue structural and functional characteristics
NASA Astrophysics Data System (ADS)
Zhang, Shiwu; Liu, Peng; Huang, Jiwei; Xu, Ronald
2012-12-01
Accurate and in vivo characterization of the structural, functional, and molecular characteristics of biological tissue will facilitate quantitative diagnosis, therapeutic guidance, and outcome assessment in many clinical applications, such as wound healing, cancer surgery, and organ transplantation. However, many clinical imaging systems have limitations and fail to provide noninvasive, real-time, and quantitative assessment of biological tissue in an operating room. To overcome these limitations, we developed and tested a multiview hyperspectral imaging system that integrates the multiview and hyperspectral imaging techniques in a single portable unit. Four plane mirrors were joined together as a multiview reflective mirror set with a rectangular cross-section, placed between a hyperspectral camera and the measured biological tissue. For a single image acquisition task, a hyperspectral data cube with five views was obtained. The five-view hyperspectral image consisted of a main objective image and four reflective images. Three-dimensional topography of the scene was achieved by correlating the matching pixels between the objective image and the reflective images. Three-dimensional mapping of tissue oxygenation was achieved using a hyperspectral oxygenation algorithm. The multiview hyperspectral imaging technique is currently under quantitative validation in a wound model, a tissue-simulating blood phantom, and an in vivo biological tissue model. The preliminary results have demonstrated the technical feasibility of using multiview hyperspectral imaging for three-dimensional topography of tissue functional properties.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1991-01-01
Methods for providing stereoscopic image presentation and stereoscopic configurations using stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as an image magnification factor, q, and intercamera distance, 2w. For converged cameras, q is selected such that Ve - qwl = 0 (that is, q = Ve/(wl)), where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point; for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations to produce a nonlinear relation between perceived and real depth, and erroneously produce or enhance perceived accelerations and decelerations in order to provide special effects for entertainment, training, or educational purposes.
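The two parameter selections above are simple closed-form expressions: q = e/w for parallel cameras, and q = Ve/(wl) from the condition Ve - qwl = 0 for converged cameras. A small Python sketch with hypothetical distances (the 65 mm interocular spacing and the other values are illustrative, not from the patent):

```python
def magnification_parallel(e, w):
    """Parallel cameras: image magnification factor q = e / w."""
    return e / w

def magnification_converged(V, e, w, l):
    """Converged cameras: q chosen so that V*e - q*w*l = 0, i.e. q = V*e / (w*l)."""
    return (V * e) / (w * l)

# Hypothetical setup: half interocular e = 32.5 mm, half intercamera w = 65 mm,
# camera distance V = 1000 mm, nodal-point-to-convergence distance l = 800 mm
q_par = magnification_parallel(32.5, 65.0)                   # 0.5
q_conv = magnification_converged(1000.0, 32.5, 65.0, 800.0)  # 0.625
```

Both expressions are dimensionless ratios of distances, as a magnification factor must be.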
Three-dimensional cinematography with control object of unknown shape.
Dapena, J; Harman, E A; Miller, J A
1982-01-01
A technique for the reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which encloses the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5m x 5m x 1.5m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.
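The per-coordinate root-mean-square errors reported above are computed by comparing reconstructed 3D points against reference positions, axis by axis. A minimal Python sketch with toy data (not the study's measurements):

```python
import math

def rms_errors(reconstructed, reference):
    """Per-axis root mean square error between reconstructed and reference 3D points."""
    n = len(reference)
    return tuple(
        math.sqrt(sum((p[k] - q[k]) ** 2 for p, q in zip(reconstructed, reference)) / n)
        for k in range(3)
    )

# Toy data: two reference points and their slightly offset reconstructions
ref = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
rec = [(0.1, 0.0, 0.0), (1.1, 1.0, 1.0)]
ex, ey, ez = rms_errors(rec, ref)   # ex ~ 0.1, ey = ez = 0.0
```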
Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim
2015-03-01
PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.
Li, Zan; Liu, Shan; Tan, Minmin; Song, Jinlin
2017-01-01
Objective: To explore a new technique for reconstructing and measuring three-dimensional (3D) models of orthodontic plaster casts using multi-baseline digital close-range photogrammetry (MBDCRP) with a single-lens reflex camera. Study design: Thirty sets of orthodontic plaster casts that did not exhibit severe horizontal overlap (>2 mm) between any two teeth were recorded by a single-lens reflex camera, with 72 pictures taken in different directions. The 3D models of these casts were reconstructed and measured using the open-source software MeshLab. Parameters including mesio-distal crown diameter, arch width, and arch perimeter were recorded six times on both the 3D digital models and the plaster casts by two examiners. Statistical analysis was carried out using the Bland–Altman method to measure agreement between the novel method and the traditional calliper method by calculating the differences between mean values. Results: The average differences between the measurements of the photogrammetric 3D models and the plaster casts were 0.011–0.402 mm. The mean differences between measurements obtained from the photogrammetric 3D models and the dental casts were not significant (P > 0.05), except for the lower arch perimeter, and all the differences were regarded as clinically acceptable (<0.5 mm). Conclusions: Measurements obtained by MBDCRP compared well with those obtained from plaster casts, indicating that MBDCRP is an alternative way to store and measure dental plaster casts without severe horizontal overlap between any two teeth. PMID:28640827
Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.
Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi
2014-10-20
We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
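The back-projection step can be illustrated in a toy setting with one spatial and one angular axis: each focal-stack image is smeared back along its shear direction in (x, u) space, and the smears reinforce at the true ray positions. This is a minimal sketch under integer-shear assumptions, not the authors' implementation.

```python
import numpy as np

def refocus(L, s):
    """Forward model: the image at shear s is the angular sum of the sheared
    light field. L[u, x] is a toy light field with one angular axis u and
    one spatial axis x."""
    U, X = L.shape
    I = np.zeros(X)
    for u in range(U):
        I += np.roll(L[u], int(round(s * (u - U // 2))))
    return I

def backproject(images, shears, U, X):
    """Smear each focal-stack image back along its shear in (x, u) space;
    contributions reinforce where rays actually passed."""
    L = np.zeros((U, X))
    for I, s in zip(images, shears):
        for u in range(U):
            L[u] += np.roll(I, -int(round(s * (u - U // 2))))
    return L / len(images)

# A single bright ray at (u=3, x=10); sweep five focal settings.
U, X = 5, 32
L_true = np.zeros((U, X))
L_true[3, 10] = 1.0
shears = [-2, -1, 0, 1, 2]
stack = [refocus(L_true, s) for s in shears]
L_est = backproject(stack, shears, U, X)
# The back-projected field peaks at the original ray position (3, 10).
```

The more focal planes are swept, the better the reinforced peak stands out over the back-projection smear, which is the intuition behind capturing many focal distances.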
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on selected sub-aperture images. A structure-from-motion (SfM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be implemented with only two light field captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming and laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
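The geometric core of the SfM step, estimating the two-view epipolar geometry from matched (e.g. SIFT) features, can be sketched with the classical eight-point algorithm. This is a generic illustration on synthetic correspondences, not the paper's pipeline; the camera poses and point cloud below are invented for the demo.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F from >= 8 matched image points,
    so that [x2, 1]^T F [x1, 1] = 0 for each correspondence."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)              # enforce the rank-2 constraint
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def proj(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Synthetic two-view geometry: camera 2 rotated 0.1 rad and shifted in x.
rng = np.random.default_rng(1)
Xw = rng.uniform([-1, -1, 3], [1, 1, 6], size=(12, 3))
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([R, np.array([[-1.0], [0.0], [0.0]])])
x1 = np.array([proj(P1, X) for X in Xw])
x2 = np.array([proj(P2, X) for X in Xw])
F = eight_point(x1, x2)
residuals = [abs(np.append(b, 1) @ F @ np.append(a, 1)) for a, b in zip(x1, x2)]
```

From the recovered epipolar geometry, SfM pipelines then decompose the relative pose and triangulate the matched features into the sparse point cloud the abstract mentions.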
A novel camera localization system for extending three-dimensional digital image correlation
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher
2018-03-01
The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters; that is, the position of the cameras relative to each other (i.e. separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large structures from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the camera positions in space for performing accurate 3D-DIC calibration and measurements.
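The idea of replacing a rigid-bar stereo calibration with IMU orientations and a radar-measured baseline can be sketched as follows. The Euler convention, the baseline direction, and all numbers are assumptions for illustration, not the authors' sensor-fusion scheme.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from intrinsic Z-Y-X (yaw-pitch-roll) Euler angles,
    as an IMU might report them."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def relative_extrinsics(R1, R2, baseline_dir, distance):
    """Pose of camera 2 in camera 1's frame from the two IMU orientations
    (world frame) and the radar-measured separation along baseline_dir."""
    R = R1.T @ R2                          # relative rotation
    t = R1.T @ (distance * baseline_dir)   # baseline expressed in cam-1 frame
    return R, t

# Hypothetical readings: two yawed cameras 2.5 m apart along the world x-axis.
R1 = rot_zyx(0.20, 0.00, 0.0)
R2 = rot_zyx(-0.10, 0.05, 0.0)
R_rel, t_rel = relative_extrinsics(R1, R2, np.array([1.0, 0.0, 0.0]), 2.5)
```

The pair (R_rel, t_rel) plays the role of the stereo extrinsics that a rigid bar plus target-based calibration would otherwise fix.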
Kim, Jong Bae; Brienza, David M
2006-01-01
A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models, analyzing the accuracy of dimensional measurements in a virtual environment and comparing dimensional measurements from 3-D models created with four camera/setting combinations. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.
Analysis of the Three-Dimensional Vector FAÇADE Model Created from Photogrammetric Data
NASA Astrophysics Data System (ADS)
Kamnev, I. S.; Seredovich, V. A.
2017-12-01
The results of an accuracy assessment for the creation of a three-dimensional vector model of a building façade are described. In the framework of the analysis, an analytical comparison of three-dimensional vector façade models created from photogrammetric and terrestrial laser scanning (TLS) data was carried out. The three-dimensional model built from TLS point clouds was taken as the reference. In the course of the experiment, the three-dimensional model under analysis was superimposed on the reference one, coordinates were measured and deviations between corresponding model points were determined. The accuracy of the three-dimensional model obtained from non-metric digital camera images was thereby estimated, and the façade surface areas with the maximum deviations were identified.
Three-dimensional particle tracking velocimetry using dynamic vision sensors
NASA Astrophysics Data System (ADS)
Borer, D.; Delbruck, T.; Rösgen, T.
2017-12-01
A fast flow-visualization method is presented based on tracking neutrally buoyant soap bubbles with a set of neuromorphic cameras. The "dynamic vision sensors" register only the changes in brightness with very low latency, capturing fast processes at a low data rate. The data consist of a stream of asynchronous events, each encoding the corresponding pixel position, the time instant of the event and the sign of the change in logarithmic intensity. The work uses three such synchronized cameras to perform 3D particle tracking in a medium-sized wind tunnel. The data analysis relies on Kalman filters to associate the asynchronous events with individual tracers and to reconstruct the three-dimensional path and velocity based on calibrated sensor information.
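The event-to-track association described above can be illustrated with a one-dimensional constant-velocity Kalman filter driven by asynchronous, irregularly timed position events. This is a generic textbook filter, not the authors' code; the motion model and noise levels are assumed.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1e-3, r=0.5):
    """One predict/update cycle of a constant-velocity Kalman filter for a
    single tracer. z is the event position, dt the (irregular) time since
    the tracer's previous event; state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])         # white-acceleration noise
    H = np.array([[1.0, 0.0]])                  # events measure position only
    x = F @ x                                   # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + r
    K = P @ H.T / S                             # Kalman gain
    x = x + (K * y).ravel()                     # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a tracer moving at velocity 2 from irregularly spaced, exact events.
rng = np.random.default_rng(0)
x, P = np.array([3.0, 0.0]), np.eye(2) * 10.0
t = 0.0
for _ in range(200):
    dt = rng.uniform(0.002, 0.008)
    t += dt
    x, P = kalman_step(x, P, np.array([3.0 + 2.0 * t]), dt)
# x[1] now approximates the true velocity of 2.
```

Because each event carries its own timestamp, the variable dt in the prediction step is what lets a single filter absorb the sensor's asynchronous stream.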
New Approach for Environmental Monitoring and Plant Observation Using a Light-Field Camera
NASA Astrophysics Data System (ADS)
Schima, Robert; Mollenhauer, Hannes; Grenzdörffer, Görres; Merbach, Ines; Lausch, Angela; Dietrich, Peter; Bumberger, Jan
2015-04-01
The aim of gaining a better understanding of ecosystems and the processes in nature accentuates the need for observing exactly these processes with a higher temporal and spatial resolution. In the field of environmental monitoring, an inexpensive and field-applicable imaging technique to derive three-dimensional information about plants and vegetation would represent a decisive contribution to the understanding of the interactions and dynamics of ecosystems. This is particularly true for the monitoring of plant growth and the frequently mentioned lack of morphological information about the plants, e.g. plant height, vegetation canopy, leaf position or leaf arrangement. Therefore, an innovative and inexpensive light-field (plenoptic) camera, the Lytro LF, and a stereo vision system based on two industrial cameras were tested and evaluated as possible measurement tools for the given monitoring purpose. In this instance, the use of a light-field camera offers the promising opportunity of providing three-dimensional information from one single shot, without any additional requirements during the field measurements, which represents a substantial methodological improvement in the area of environmental research and monitoring. Since the Lytro LF was designed as a daily-life consumer camera, it supports neither depth or distance estimation nor external triggering by default. Therefore, different technical modifications and a calibration routine had to be worked out during the preliminary study. As a result, the light-field camera used proved suitable as a depth and distance measurement tool with a measuring range of approximately one meter. Consequently, this confirms the assumption that a light-field camera holds the potential of being a promising measurement tool for environmental monitoring purposes, especially with regard to the low methodological effort required in the field.
Within the framework of the Global Change Experimental Facility project, funded by the Helmholtz Centre for Environmental Research, and its large-scale field experiments investigating the influence of climate change on different forms of land use, both techniques were installed and evaluated in a long-term experiment on a pilot-scale maize field in late 2014. On this basis, it was possible to show the growth of the plants as a function of time, in good agreement with measurements carried out by hand on a weekly basis. In addition, the experiment has shown that the light-field vision approach is applicable to the monitoring of crop growth under field conditions, although it is limited to close-range applications. Since this work was intended as a proof of concept, further research is recommended, especially with respect to the automation and evaluation of data processing. Altogether, this study is addressed to researchers as elementary groundwork to improve the use of the introduced light-field imaging technique for the monitoring of plant growth dynamics and the three-dimensional modeling of plants under field conditions.
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three-dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three-dimensional texture of various regions of the scene. A method is presented in which scanned, laser-projected lines of structured light, viewed by a stereoscopically located single video camera, result in an image in which the three-dimensional characteristics of the scene are represented by the discontinuity of the projected lines. This image is conducive to processing with simple regional operators to classify regions as pathway or background. The design of some operators and application methods, and a demonstration on sample images, are presented. This method provides a rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher-speed and more reliable robotic or autonomous navigation in unstructured environments.
On the single-photon-counting (SPC) modes of imaging using an XFEL source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui
2015-12-14
In this study, the requirements to achieve high detection efficiency (above 50%) and gigahertz (GHz) frame rate for the proposed 42-keV X-ray free-electron laser (XFEL) at Los Alamos are summarized. Direct detection scenarios using C (diamond), Si, Ge and GaAs semiconductor sensors are analyzed. Single-photon counting (SPC) mode and weak SPC mode using Si can potentially meet the efficiency and frame rate requirements and be useful to both photoelectric absorption and Compton physics as the photon energy increases. Multilayer three-dimensional (3D) detector architecture, as a possible means to realize SPC modes, is compared with the widely used two-dimensional (2D) hybrid planar electrode structure and 3D deeply entrenched electrode architecture. Demonstration of thin film cameras less than 100-μm thick with onboard thin ASICs could be an initial step to realize multilayer 3D detectors and SPC modes for XFELs.
Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.
Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro
2016-01-01
At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10^6. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
Eccentricity error identification and compensation for high-accuracy 3D optical measurement
He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z
2016-01-01
The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement system. The identification and compensation of the circular target systematic eccentricity error caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require taking care of the geometric parameters of the measurement system regarding target and camera. Therefore, the proposed approach is very flexible in practical applications, and in particular, it is also applicable in the case of only one image with a single target available. The experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation. PMID:26900265
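The eccentricity error itself is easy to reproduce numerically: under perspective projection a tilted circle images as an ellipse whose centre does not coincide with the image of the circle's centre. The following sketch uses invented geometry and a simple boundary-centroid proxy for the detected ellipse centre; it is unrelated to the paper's correction method.

```python
import numpy as np

def projected_centroid_offset(radius, tilt, distance, f=1.0):
    """Project a circle of given radius, tilted about the x-axis and centred
    on the optical axis at the given distance, through a pinhole of focal
    length f; return the distance between the centroid of the projected
    boundary points and the image of the true circle centre (the origin)."""
    th = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
    X = radius * np.cos(th)
    Y = radius * np.sin(th) * np.cos(tilt)
    Z = distance + radius * np.sin(th) * np.sin(tilt)
    u, v = f * X / Z, f * Y / Z        # perspective projection
    return np.hypot(u.mean(), v.mean())

# A fronto-parallel circle shows no offset; a tilted one does, and the
# offset shrinks as the target moves away (roughly with 1/distance^2).
```

This systematic offset is exactly what the concentric-circles target lets the paper identify and compensate, since both circles share a true centre but exhibit different eccentricity errors.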
NASA Astrophysics Data System (ADS)
Blain, Pascal; Michel, Fabrice; Piron, Pierre; Renotte, Yvon; Habraken, Serge
2013-08-01
Noncontact optical measurement methods are essential tools in many industrial and research domains. A family of new noncontact optical measurement methods has been developed, based on the polarization-state splitting technique and on monochromatic light projection as a way to overcome ambient lighting for in-situ measurement. Recent work on a birefringent element, a Savart plate, has allowed a more flexible and robust interferometer to be built. This interferometer is a multipurpose metrological device. On the one hand, the interferometer can be set in front of a charge-coupled device (CCD) camera; this optical measurement system is called a shearography interferometer and allows one to measure microdisplacements between two states of the studied object under coherent lighting. On the other hand, by producing and shifting multiple sinusoidal Young's interference patterns with this interferometer, and using a CCD camera, it is possible to build a three-dimensional structured light profilometer.
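The structured-light profilometer mentioned at the end relies on phase-shifting fringe analysis; the standard four-step formula recovers the wrapped phase of the projected sinusoid from four captures shifted by π/2. This is a generic sketch of that formula, not tied to this paper's Savart-plate setup.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase of a projected sinusoid I_k = A + B*cos(phi + k*pi/2),
    recovered from four pi/2-shifted captures."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic fringes: background A, modulation B, phase ramp phi.
phi = np.linspace(-3.0, 3.0, 200)
A, B = 2.0, 1.0
I0, I1, I2, I3 = (A + B * np.cos(phi + k * np.pi / 2) for k in range(4))
recovered = four_step_phase(I0, I1, I2, I3)
```

Because the background A and modulation B cancel in the differences, the formula is insensitive to uniform ambient light, which is the same motivation given for the monochromatic-projection approach above.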
Prototype of a single probe Compton camera for laparoscopic surgery
NASA Astrophysics Data System (ADS)
Koyama, A.; Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Sakuma, I.
2017-02-01
Image-guided surgery (IGS) is performed using a real-time surgery navigation system with three-dimensional (3D) position tracking of surgical tools. IGS is fast becoming an important technology for high-precision laparoscopic surgeries, in which the field of view is limited. In particular, recent developments in intraoperative imaging using radioactive biomarkers may enable advanced IGS for supporting malignant tumor removal surgery. In this light, we develop a novel intraoperative probe with a Compton camera and a position tracking system for performing real-time radiation-guided surgery. A prototype probe consisting of Ce:Gd3Al2Ga3O12 (GAGG) crystals and silicon photomultipliers was fabricated, and its reconstruction algorithm was optimized to enable real-time position tracking. The results demonstrated the visualization capability of the radiation source, with an angular resolution measure (ARM) of ~22.1°, and the effectiveness of the proposed system.
Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam
2017-10-01
A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated from the object's original three-dimensional information, with the lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully, and the experimental results confirm that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.
Study on the initial value for the exterior orientation of the mobile vision system
NASA Astrophysics Data System (ADS)
Yu, Zhi-jing; Li, Shi-liang
2011-10-01
A single-camera mobile vision coordinate measurement system obtains three-dimensional coordinates on the measurement site using a single camera body and a notebook computer. Obtaining accurate approximate values of the exterior orientation for the subsequent calculation is very important in the measurement process. This is a typical space resection problem, and it has been widely studied. Single-photo space resection methods fall mainly into two categories: methods based on the coangularity constraint, represented by pose estimation algorithms built on the camera coangularity constraint and by the cone angle method; and the direct linear transformation (DLT). One common drawback of both approaches is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation, relatively strict demands are placed on the distribution and number of control points: the control points must not all lie in the same plane, and at least six non-coplanar control points are required, which limits the method's usefulness. The initial value directly influences the convergence and the convergence speed of the subsequent calculation. In this paper, the nonlinear collinearity equations, including distortion terms, are linearized using a Taylor series expansion to calculate the initial value of the camera's exterior orientation. Finally, the initial value obtained is shown through experiments to be better.
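For comparison, the DLT baseline the abstract refers to can be sketched in a few lines: with at least six non-coplanar control points, the 3x4 projection matrix (and hence an initial exterior orientation) follows from a homogeneous least-squares problem. Lens distortion is ignored here, which is exactly the drawback the abstract points out; the camera and control points below are synthetic.

```python
import numpy as np

def dlt_resection(Xw, uv):
    """Direct linear transformation: recover the 3x4 projection matrix from
    n >= 6 non-coplanar control points Xw (n, 3) and their images uv (n, 2)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(Xw, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 4)

def proj(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Synthetic camera: 800 px focal length, slight rotation, 5 m standoff.
rng = np.random.default_rng(2)
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
P = K @ np.hstack([R, np.array([[0.1], [-0.2], [5.0]])])
Xw = rng.uniform(-1, 1, size=(8, 3))          # non-coplanar control points
uv = np.array([proj(P, X) for X in Xw])
P_hat = dlt_resection(Xw, uv)
uv_hat = np.array([proj(P_hat, X) for X in Xw])
```

The estimated matrix reproduces the control-point images; decomposing it yields the approximate exterior orientation used to seed the iterative, distortion-aware refinement.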
NASA Astrophysics Data System (ADS)
Arciniegas, Javier R.; González, Andrés. L.; Quintero, L. A.; Contreras, Carlos R.; Meneses, Jaime E.
2014-05-01
Three-dimensional shape measurement is a subject that consistently attracts high scientific interest and provides information for medical, industrial and investigative applications, among others. In this paper, we propose a three-dimensional (3D) reconstruction system for the surface inspection of non-metallic pipes for hydrocarbon transport. The system is formed by a CCD camera, a video projector and a laptop, and it is based on the fringe projection technique. System functionality is demonstrated by evaluating the quality of the three-dimensional reconstructions obtained, which reveal failures and defects on the surface of the object under study.
NASA Astrophysics Data System (ADS)
Servin, Manuel; Padilla, Moises; Garnica, Guillermo; Gonzalez, Adonai
2016-12-01
In this work we review and combine two recently published techniques for three-dimensional (3D) fringe projection profilometry and phase unwrapping: co-phased profilometry and two-step temporal phase unwrapping. By combining these two methods we obtain a more accurate, higher signal-to-noise 3D profilometer for discontinuous industrial objects. In single-camera, single-projector (standard) profilometry, the camera and the projector must form an angle between them. The phase sensitivity of the profilometer depends on this angle, so it cannot be avoided. This angle produces regions with self-occluding shadows and glare on the solid as viewed from the camera's perspective, making demodulation of the fringe pattern impossible there; in other words, the phase data are undefined in those shadow regions. As published recently, this limitation can be solved by using several co-phased fringe projectors and a single camera. These co-phased projectors are positioned at different directions towards the object, and as a consequence most shadows are compensated. In addition, most industrial objects are highly discontinuous, which precludes the use of spatial phase unwrappers. One way to avoid spatial unwrapping is to decrease the phase sensitivity to a point where the demodulated phase is bounded within one wavelength, so the need for phase unwrapping disappears. By doing this, however, the recovered non-wrapped phase contains too much harmonic distortion and noise. Using our recently proposed two-step temporal phase-unwrapping technique, the high-sensitivity phase is unwrapped using the low-frequency one as an initial gross estimation. This two-step unwrapping technique handles the 3D object discontinuities while keeping the accuracy of the high-frequency profilometry data. In scientific research, new art is derived as a logical and consistent result of previous efforts in the same direction.
Here we present a new 3D profilometer combining these two recently published methods: co-phased profilometry and two-step temporal phase unwrapping. By doing this, we obtain a new and more powerful 3D profilometry technique which overcomes the two main limitations of previous fringe-projection profilometers, namely high phase-sensitivity digitalization of discontinuous objects and minimization of the solid's self-generated shadows. This new 3D profilometer is demonstrated by an experiment digitizing a discontinuous 3D industrial solid, in which its advantages with respect to previous art are clearly shown.
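The two-step temporal unwrapping rule described above can be written in one line: the low-sensitivity phase, scaled by the frequency ratio, selects the 2π fringe order of the wrapped high-sensitivity phase. A minimal sketch with synthetic phases follows; the notation is ours, not the authors'.

```python
import numpy as np

def two_step_unwrap(phi_hi_wrapped, phi_lo, scale):
    """Unwrap a high-sensitivity wrapped phase using a non-wrapped
    low-sensitivity phase as the gross estimate; scale is the ratio of the
    two fringe frequencies. Works as long as the scaled low-frequency phase
    error stays below pi."""
    k = np.round((scale * phi_lo - phi_hi_wrapped) / (2.0 * np.pi))
    return phi_hi_wrapped + 2.0 * np.pi * k

# Synthetic test: a ramp covering ~10 fringes, with a slightly noisy
# low-sensitivity phase (frequency ratio 20).
phi_true = np.linspace(0.0, 60.0, 501)
wrapped = (phi_true + np.pi) % (2.0 * np.pi) - np.pi
rng = np.random.default_rng(3)
phi_lo = phi_true / 20.0 + rng.normal(0.0, 0.005, phi_true.size)
recovered = two_step_unwrap(wrapped, phi_lo, 20.0)
```

Note that the noise of the low-frequency estimate only has to be small enough to pick the correct integer fringe order; the final accuracy is that of the high-sensitivity phase, which is the point of the two-step scheme.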
Seer-Linnemayr, Charlotte; Ravelli, Raimond B. G.; Matadeen, Rishi; De Carlo, Sacha; Alewijnse, Bart; Portugal, Rodrigo V.; Pannu, Navraj S.; Schatz, Michael; van Heel, Marin
2017-01-01
Single-particle cryogenic electron microscopy (cryo-EM) can now yield near-atomic resolution structures of biological complexes. However, the reference-based alignment algorithms commonly used in cryo-EM suffer from reference bias, limiting their applicability (also known as the ‘Einstein from random noise’ problem). Low-dose cryo-EM therefore requires robust and objective approaches to reveal the structural information contained in the extremely noisy data, especially when dealing with small structures. A reference-free pipeline is presented for obtaining near-atomic resolution three-dimensional reconstructions from heterogeneous (‘four-dimensional’) cryo-EM data sets. The methodologies integrated in this pipeline include a posteriori camera correction, movie-based full-data-set contrast transfer function determination, movie-alignment algorithms, (Fourier-space) multivariate statistical data compression and unsupervised classification, ‘random-startup’ three-dimensional reconstructions, four-dimensional structural refinements and Fourier shell correlation criteria for evaluating anisotropic resolution. The procedures exclusively use information emerging from the data set itself, without external ‘starting models’. Euler-angle assignments are performed by angular reconstitution rather than by the inherently slower projection-matching approaches. The comprehensive ‘ABC-4D’ pipeline is based on the two-dimensional reference-free ‘alignment by classification’ (ABC) approach, where similar images in similar orientations are grouped by unsupervised classification. Some fundamental differences between X-ray crystallography versus single-particle cryo-EM data collection and data processing are discussed. The structure of the giant haemoglobin from Lumbricus terrestris at a global resolution of ∼3.8 Å is presented as an example of the use of the ABC-4D procedure. PMID:28989723
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan
2018-03-01
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50 μɛ of strain error, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on avoiding these types of errors, especially for high-accuracy strain measurements, and large rigid body motions should be avoided in both two-dimensional DIC and stereo-DIC.
NASA Technical Reports Server (NTRS)
Revilock, Duane M., Jr.; Thesken, John C.; Schmidt, Timothy E.
2007-01-01
Ambient-temperature hydrostatic pressurization tests were conducted on a composite overwrapped pressure vessel (COPV) to understand the fiber stresses in COPV components. Two three-dimensional digital image correlation systems with high-speed cameras were used in the evaluation to provide full-field displacement and strain data for each pressurization test. Key findings are discussed, including how the principal strains provided better insight into system behavior than traditional gauges; a high localized strain that was measured where gauges were not present; and the challenges of measuring curved surfaces through the 1.25-in.-thick layered polycarbonate panel that protected the cameras.
Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras
1990-04-01
poor resolution and a very limited working volume [Wan90]. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each... [Nor88] Northern Digital. Trade literature on Optotrak - Northern Digital's Three Dimensional Optical Motion Tracking and Analysis System. Northern Digital
Method and apparatus for coherent imaging of infrared energy
Hutchinson, Donald P.
1998-01-01
A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting.
Single-pixel camera with one graphene photodetector.
Li, Gongxin; Wang, Wenxue; Wang, Yuechao; Yang, Wenguang; Liu, Lianqing
2016-01-11
Consumer cameras in the megapixel range are ubiquitous, but their improvement is hindered by the poor performance and high cost of traditional photodetectors. Graphene, a two-dimensional micro-/nano-material, has recently exhibited exceptional properties as the sensing element of a photodetector compared with traditional materials. However, it is difficult to fabricate a large-scale array of graphene photodetectors to replace the traditional photodetector array. To take full advantage of the unique characteristics of the graphene photodetector, in this study we integrated a graphene photodetector into a single-pixel camera based on compressive sensing. To begin with, we introduced a method called laser scribing for fabricating the graphene. It produces graphene components in arbitrary patterns more quickly than traditional methods and without photoresist contamination. Next, we proposed a system for calibrating the optoelectrical properties of micro-/nano-photodetectors based on a digital micromirror device (DMD), which changes the light intensity by controlling the number of individual micromirrors positioned at +12°. The calibration sensitivity is driven by the sum of all micromirrors of the DMD and can be as high as 10^-5 A/W. Finally, the single-pixel camera integrated with one graphene photodetector was used to recover a static image, demonstrating the feasibility of a single-pixel imaging system with a graphene photodetector. A high-resolution image can be recovered with the camera at a sampling rate much lower than the Nyquist rate. The study is the first recorded demonstration of a macroscopic camera with a graphene photodetector. The camera has the potential for high-speed and high-resolution imaging at far lower cost than traditional megapixel cameras.
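The compressive-sensing principle behind such a single-pixel camera — each DMD pattern yields one scalar measurement, and a sparse scene is recovered from fewer measurements than pixels — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 64-pixel scene is invented, Gaussian patterns stand in for the binary DMD masks, and recovery uses orthogonal matching pursuit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 scene, flattened to n = 64 unknowns, with 3 bright pixels.
n = 64
x_true = np.zeros(n)
x_true[[10, 27, 44]] = 1.0           # sparse scene, as compressive sensing assumes

# Each measurement pattern gives one scalar reading: y_i = Phi_i . x
# (Gaussian patterns here for illustration; a DMD would use binary masks.)
m = 32                               # m < n: sampling below the Nyquist rate
Phi = rng.standard_normal((m, n))
y = Phi @ x_true

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, refit each time."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

x_hat = omp(Phi, y, k=3)
```

With m = 32 measurements for n = 64 pixels, the three-spike scene is recovered from below-Nyquist sampling, which is the effect the abstract describes.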
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang
2013-01-01
In the assembly of miniature devices, the position and orientation of the parts to be assembled must be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from only one direction using a visual method, because of visual occlusion or because the features of the parts are distributed three-dimensionally. An automatic assembly system for precise miniature devices is introduced. In the modular assembly system, two machine vision systems were employed for measurement of the three-dimensionally distributed assembly errors. High-resolution CCD cameras and precision stages with high positioning repeatability were integrated to realize high-precision measurement over a large workspace. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational and translational stages. A set of templates was designed for calibration of the vision systems and evaluation of the system's measurement accuracy.
Design and performance of a large area neutron sensitive Anger camera
Visscher, Theodore; Montcalm, Christopher A.; Donahue, Jr., Cornelius; ...
2015-05-21
We describe the design and performance of a 157 mm × 157 mm two-dimensional neutron detector. The detector uses the Anger principle to determine the position of neutrons. We have verified FWHM resolution of < 1.2 mm with distortion < 0.5 mm on over 50 installed Anger cameras. The performance of the detector is limited by the light yield of the scintillator, and it is estimated that the resolution of the current detector could be doubled with a brighter scintillator. Data collected from small (< 1 mm³) single-crystal reference samples at the single-crystal instrument TOPAZ provide results with low R_w(F) values.
Multifunctional, three-dimensional tomography for analysis of electrohydrodynamic jetting
NASA Astrophysics Data System (ADS)
Nguyen, Xuan Hung; Gim, Yeonghyeon; Ko, Han Seo
2015-05-01
A three-dimensional optical tomography technique was developed to reconstruct three-dimensional objects using a set of two-dimensional shadowgraphic images and normal gray images. Using three high-speed cameras positioned at offset angles of 45° from each other, the number, size, and location of electrohydrodynamic jets with respect to the nozzle position were analyzed using shadowgraphic tomography employing the multiplicative algebraic reconstruction technique (MART). Additionally, a flow field inside a cone-shaped liquid (Taylor cone) induced under an electric field was observed using the simultaneous multiplicative algebraic reconstruction technique (SMART), a tomographic method for reconstructing the light intensities of particles, combined with three-dimensional cross-correlation. Various velocity fields of circulating flows inside the cone-shaped liquid caused by various physico-chemical properties of the liquid were also investigated.
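The MART update at the core of such reconstructions multiplies each voxel by a power of the ratio between a measured projection and the reprojection of the current estimate. A minimal sketch on an invented 3-voxel, 3-ray system (the weight matrix, object, and relaxation parameter are illustrative, not from the paper):

```python
import numpy as np

# Toy MART reconstruction. W[i, j] is the (assumed) weight of voxel j
# along ray i; p holds the measured line integrals.
W = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
f_true = np.array([2.0, 1.0, 3.0])
p = W @ f_true                     # "measured" projections

f = np.ones_like(f_true)           # strictly positive initial guess
mu = 1.0                           # relaxation parameter
for _ in range(500):               # sweeps over all rays
    for i in range(len(p)):
        ray = W[i] @ f             # reprojection of the current estimate
        if ray > 0:
            # multiplicative correction keeps the estimate non-negative
            f *= (p[i] / ray) ** (mu * W[i])
```

The multiplicative form enforces non-negativity automatically, one reason MART is favored for reconstructing light intensities.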
Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs
NASA Astrophysics Data System (ADS)
Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.
2016-06-01
Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated using images acquired from a system camera mounted in an underwater housing and from popular GoPro cameras. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
Agostini, Denis; Marie, Pierre-Yves; Ben-Haim, Simona; Rouzet, François; Songy, Bernard; Giordano, Alessandro; Gimelli, Alessia; Hyafil, Fabien; Sciagrà, Roberto; Bucerius, Jan; Verberne, Hein J; Slart, Riemer H J A; Lindner, Oliver; Übleis, Christopher; Hacker, Marcus
2016-12-01
The trade-off between resolution and count sensitivity dominates the performance of standard gamma cameras and dictates the need for relatively high doses of radioactivity of the radiopharmaceuticals used in order to limit image acquisition duration. The introduction of cadmium-zinc-telluride (CZT)-based cameras may overcome some of the limitations of conventional gamma cameras. CZT cameras used for the evaluation of myocardial perfusion have been shown to have a higher count sensitivity compared with conventional single photon emission computed tomography (SPECT) techniques. CZT image quality is further improved by the development of a dedicated three-dimensional iterative reconstruction algorithm, based on maximum likelihood expectation maximization (MLEM), which corrects for the loss in spatial resolution due to the line response function of the collimator. All these innovations significantly reduce imaging time and result in lower patient radiation exposure compared with standard SPECT. To guide current and possible future users of the CZT technique for myocardial perfusion imaging, the Cardiovascular Committee of the European Association of Nuclear Medicine, starting from the experience of its members, has decided to examine the current literature regarding procedures and clinical data on CZT cameras. The committee hereby aims 1) to identify the main acquisition protocols; 2) to evaluate the diagnostic and prognostic value of CZT-derived myocardial perfusion; and finally 3) to determine the impact of CZT on radiation exposure.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael
2015-01-01
Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
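The stereo-triangulation step that recovers a 3-D surface point from corresponding rays in two calibrated views can be sketched with the standard linear (direct linear transform) method. The camera matrices and point below are invented for illustration; a real system would obtain them from calibration of the camera and laser projector:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen by two projective
    cameras P1, P2 (3x4 matrices); uv are normalized image coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector = homogeneous 3-D point
    return X[:3] / X[3]

# Hypothetical setup: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
uv1 = P1 @ np.append(X_true, 1.0); uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ np.append(X_true, 1.0); uv2 = uv2[:2] / uv2[2]
X = triangulate(P1, P2, uv1, uv2)
```

With noisy correspondences the linear estimate is usually refined by minimizing reprojection error, but the homogeneous SVD solution is the common starting point.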
Grubsky, Victor; Romanoov, Volodymyr; Shoemaker, Keith; Patton, Edward Matthew; Jannson, Tomasz
2016-02-02
A Compton tomography system comprises an x-ray source configured to produce a planar x-ray beam. The beam irradiates a slice of an object to be imaged, producing Compton-scattered x-rays. The Compton-scattered x-rays are imaged by an x-ray camera. Translation of the object with respect to the source and camera or vice versa allows three-dimensional object imaging.
NASA Astrophysics Data System (ADS)
Baghaei, H.; Wong, Wai-Hoi; Uribe, J.; Li, Hongdi; Wang, Yu; Liu, Yaqiang; Xing, Tao; Ramirez, R.; Xie, Shuping; Kim, Soonseok
2004-10-01
We compared two fully three-dimensional (3-D) image reconstruction algorithms and two 3-D rebinning algorithms followed by reconstruction with a two-dimensional (2-D) filtered-backprojection algorithm for 3-D positron emission tomography (PET) imaging. The two 3-D image reconstruction algorithms were the ordered-subsets expectation-maximization (3D-OSEM) and 3-D reprojection (3DRP) algorithms. The two rebinning algorithms were Fourier rebinning (FORE) and single-slice rebinning (SSRB). The 3-D projection data used for this work were acquired with a high-resolution PET scanner (MDAPET) with an intrinsic transaxial resolution of 2.8 mm. The scanner has 14 detector rings covering an axial field-of-view of 38.5 mm. We scanned three phantoms: 1) a uniform cylindrical phantom with an inner diameter of 21.5 cm; 2) a uniform 11.5-cm cylindrical phantom with four embedded small hot lesions with diameters of 3, 4, 5, and 6 mm; and 3) the 3-D Hoffman brain phantom with three embedded small hot lesion phantoms with diameters of 3, 5, and 8.6 mm in a warm background. Lesions were placed at different radial and axial distances. We evaluated the different reconstruction methods for the MDAPET camera by comparing the noise level of images, contrast recovery, and hot lesion detection, and visually compared images. We found that overall the 3D-OSEM algorithm, especially when images were post-filtered with the Metz filter, produced the best results in terms of contrast-noise tradeoff, detection of hot spots, and reproduction of brain phantom structures. Even though the MDAPET camera has a relatively small maximum axial acceptance (±5°), images produced with the 3DRP algorithm had slightly better contrast recovery and reproduced the structures of the brain phantom slightly better than the faster 2-D rebinning methods.
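The expectation-maximization update underlying 3D-OSEM (OSEM applies the same multiplicative step over ordered subsets of the projections to accelerate convergence) can be sketched on a toy system. The system matrix and activity values below are invented for illustration, not from the MDAPET scanner:

```python
import numpy as np

# Toy MLEM: A maps 3 voxels to 4 detector bins (invented weights).
A = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.6],
              [0.0, 0.0, 0.2]])
x_true = np.array([10.0, 5.0, 20.0])
y = A @ x_true                      # noiseless "measured" counts

sens = A.sum(axis=0)                # sensitivity (normalization) image
x = np.ones(3)                      # positive initial estimate
for _ in range(5000):
    proj = A @ x                    # forward projection
    x *= (A.T @ (y / proj)) / sens  # multiplicative MLEM update
```

Each iteration forward-projects the estimate, back-projects the measured/estimated ratio, and normalizes; positivity of the image is preserved automatically.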
Quantitative analysis of three-dimensional biological cells using interferometric microscopy
NASA Astrophysics Data System (ADS)
Shaked, Natan T.; Wax, Adam
2011-06-01
Live biological cells are three-dimensional microscopic objects that constantly adjust their sizes, shapes and other biophysical features. Wide-field digital interferometry (WFDI) is a holographic technique that is able to record the complex wavefront of the light which has interacted with in-vitro cells in a single camera exposure, where no exogenous contrast agents are required. However, simple quasi-three-dimensional holographic visualization of the cell phase profiles need not be the end of the process. Quantitative analysis should permit extraction of numerical parameters which are useful for cytology or medical diagnosis. Using a transmission-mode setup, the phase profile represents the multiplication between the integral refractive index and the thickness of the sample. These coupled variables may not be distinct when acquiring the phase profiles of dynamic cells. Many morphological parameters which are useful for cell biologists are based on the cell thickness profile rather than on its phase profile. We first overview methods to decouple the cell thickness and its refractive index using the WFDI-based phase profile. Then, we present a whole-cell-imaging approach which is able to extract useful numerical parameters on the cells even in cases where decoupling of cell thickness and refractive index is not possible or desired.
Salmingo, Remel Alingalan; Tadano, Shigeru; Abe, Yuichiro; Ito, Manabu
2016-05-12
Treatment for severe scoliosis is usually attained when the scoliotic spine is deformed and fixed by implant rods. Investigation of the intraoperative changes of implant rod shape in three dimensions is necessary to understand the biomechanics of scoliosis correction, establish consensus on the treatment, and achieve the optimal outcome. The objective of this study was to measure the intraoperative three-dimensional geometry and deformation of implant rods during scoliosis corrective surgery. A pair of images was obtained intraoperatively by the dual camera system before and after rotation of the rods during scoliosis surgery. The three-dimensional implant rod geometry was measured directly by the surgeon before implantation and using a CT scanner after surgery. The images of the rods were reconstructed in three dimensions using quintic polynomial functions. The implant rod deformation was evaluated using the angle between the two three-dimensional tangent vectors measured at the ends of the implant rod. The implant rods at the concave side were significantly deformed during surgery. The highest rod deformation was found after the rotation of the rods. The implant curvature was regained after the surgical treatment. Careful intraoperative rod maneuvering is important to achieve a safe clinical outcome because the intraoperative forces can be higher than the postoperative forces. Continuous scoliosis correction was observed, as indicated by the recovery of the implant rod curvature after surgery.
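The deformation metric described — the angle between tangent vectors at the two ends of the reconstructed rod curve — can be sketched as follows. The sampled curve here is an invented circular arc used only to check the computation:

```python
import numpy as np

def end_tangent_angle(points):
    """Angle in degrees between the tangent vectors at the two ends of a
    3-D curve sampled as an (n, 3) array of points."""
    t0 = points[1] - points[0]       # forward difference at one end
    t1 = points[-1] - points[-2]     # forward difference at the other end
    c = np.dot(t0, t1) / (np.linalg.norm(t0) * np.linalg.norm(t1))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Circular arc spanning 60 degrees: the end tangents should differ by
# 60 degrees, minus a small discretization error.
theta = np.linspace(0.0, np.pi / 3.0, 200)
arc = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
angle = end_tangent_angle(arc)
```

In the study the curve comes from the quintic polynomial fit of the rod, so the tangents could equally be taken analytically from the fitted polynomial rather than by finite differences.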
Three-dimensional displays and stereo vision
Westheimer, Gerald
2011-01-01
Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023
Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.
1999-01-01
This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10^3 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10^5 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
Volumetric three-component velocimetry measurements of the turbulent flow around a Rushton turbine
NASA Astrophysics Data System (ADS)
Sharp, Kendra V.; Hill, David; Troolin, Daniel; Walters, Geoffrey; Lai, Wing
2010-01-01
Volumetric three-component velocimetry measurements have been taken of the flow field near a Rushton turbine in a stirred tank reactor. This particular flow field is highly unsteady and three-dimensional, and is characterized by a strong radial jet, large tank-scale ring vortices, and small-scale blade tip vortices. The experimental technique uses a single camera head with three apertures to obtain approximately 15,000 three-dimensional vectors in a cubic volume. These velocity data offer the most comprehensive view to date of this flow field, especially since they are acquired at three Reynolds numbers (15,000, 107,000, and 137,000). Mean velocity fields and turbulent kinetic energy quantities are calculated. The volumetric nature of the data enables tip vortex identification, vortex trajectory analysis, and calculation of vortex strength. Three identification methods for the vortices are compared based on: the calculation of circumferential vorticity; the calculation of local pressure minima via an eigenvalue approach; and the calculation of swirling strength again via an eigenvalue approach. The use of two-dimensional data and three-dimensional data is compared for vortex identification; a `swirl strength' criterion is less sensitive to completeness of the velocity gradient tensor and overall provides clearer identification of the tip vortices. The principal components of the strain rate tensor are also calculated for one Reynolds number case as these measures of stretching and compression have recently been associated with tip vortex characterization. Vortex trajectories and strength compare favorably with those in the literature. No clear dependence of trajectory on Reynolds number is deduced. The visualization of tip vortices up to 140° past blade passage in the highest Reynolds number case is notable and has not previously been shown.
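The swirling-strength criterion compared above identifies a vortex where the velocity gradient tensor has a complex-conjugate eigenvalue pair; the imaginary part λ_ci measures the local swirl rate. A minimal sketch on an invented solid-body-rotation gradient tensor:

```python
import numpy as np

def swirling_strength(grad_u):
    """Swirling strength lambda_ci: magnitude of the imaginary part of
    the complex eigenvalue pair of the velocity gradient tensor."""
    eig = np.linalg.eigvals(grad_u)
    return np.max(np.abs(eig.imag))

# Solid-body rotation about z at angular rate omega: the gradient tensor
# has the complex pair +/- i*omega, so lambda_ci = omega.
omega = 2.5
G = np.array([[0.0, -omega, 0.0],
              [omega, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
lam = swirling_strength(G)
```

Unlike vorticity, λ_ci vanishes in pure shear (all eigenvalues real), which is consistent with the abstract's finding that the swirl-strength criterion gives cleaner tip-vortex identification.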
Use of three-dimensional computer graphic animation to illustrate cleft lip and palate surgery.
Cutting, C; Oliker, A; Haring, J; Dayan, J; Smith, D
2002-01-01
Three-dimensional (3D) computer animation is not commonly used to illustrate surgical techniques. This article describes the surgery-specific processes that were required to produce animations to teach cleft lip and palate surgery. Three-dimensional models were created using CT scans of two Chinese children with unrepaired clefts (one unilateral and one bilateral). We programmed several custom software tools, including an incision tool, a forceps tool, and a fat tool. Three-dimensional animation was found to be particularly useful for illustrating surgical concepts. Positioning the virtual "camera" made it possible to view the anatomy from angles that are impossible to obtain with a real camera. Transparency allows the underlying anatomy to be seen during surgical repair while maintaining a view of the overlaying tissue relationships. Finally, the representation of motion allows modeling of anatomical mechanics that cannot be done with static illustrations. The animations presented in this article can be viewed on-line at http://www.smiletrain.org/programs/virtual_surgery2.htm. Sophisticated surgical procedures are clarified with the use of 3D animation software and customized software tools. The next step in the development of this technology is the creation of interactive simulators that recreate the experience of surgery in a safe, digital environment. Copyright 2003 Wiley-Liss, Inc.
Light field imaging and application analysis in THz
NASA Astrophysics Data System (ADS)
Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin
2018-01-01
The light field encodes both the direction and the location of light rays. Light field imaging can capture the whole light field in a single exposure. The four-dimensional light field function model with the two-plane parameterization, proposed by Levoy, is adopted for the light field. Acquisition of the light field is based on microlens arrays, camera arrays, or masks. We process the light field data to synthesize light field images. The processing techniques for light field data include refocused rendering, synthetic aperture, and microscopic imaging. Introducing light field imaging into the THz regime, the efficiency of 3D imaging is higher than that of conventional THz 3D imaging technology. The advantages compared with visible light field imaging include large depth of field, wide dynamic range, and true three-dimensional reconstruction. It has broad application prospects.
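Under the two-plane parameterization L(u, v, s, t), digital refocusing reduces to shifting each sub-aperture image in proportion to (1 - 1/α) and averaging (shift-and-add). A minimal sketch with an invented 3×3-view light field and integer-pixel shifts:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4-D light field L[u, v, s, t]:
    each sub-aperture image (fixed u, v) is shifted in proportion to
    (1 - 1/alpha) and all views are averaged."""
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# A point on the refocus plane (alpha = 1) stays sharp: all sub-aperture
# images agree, so averaging does not blur it.
lf = np.zeros((3, 3, 8, 8))
lf[:, :, 4, 4] = 1.0
img = refocus(lf, alpha=1.0)
```

Other values of α refocus to different depths; a real implementation would interpolate the fractional shifts rather than round them to whole pixels.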
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as architectural and industrial design, aeronautics, scientific research, entertainment, media advertisement, and military areas. However, most technologies provide the 3D display in front of screens that are parallel with the walls, which decreases the sense of immersion. To obtain correct multi-view stereo ground images, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display 3D models in a computer system. We can use virtual cameras to simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the position of the observer's eyes in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective-projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective-projection virtual cameras and orthogonal-projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near-clip-plane parameter setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second. To validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
Geometrical distortion calibration of the stereo camera for the BepiColombo mission to Mercury
NASA Astrophysics Data System (ADS)
Simioni, Emanuele; Da Deppo, Vania; Re, Cristina; Naletto, Giampiero; Martellato, Elena; Borrelli, Donato; Dami, Michele; Aroldi, Gianluca; Ficai Veltroni, Iacopo; Cremonese, Gabriele
2016-07-01
The ESA-JAXA mission BepiColombo, to be launched in 2018, is devoted to the observation of Mercury, the innermost planet of the Solar System. SIMBIOSYS is its remote sensing suite, which consists of three instruments: the High Resolution Imaging Channel (HRIC), the Visible and Infrared Hyperspectral Imager (VIHI), and the Stereo Imaging Channel (STC). The latter will provide the global three-dimensional reconstruction of the Mercury surface, and it represents the first push-frame stereo camera on board a space satellite. Based on a new telescope design, STC combines the advantages of a compact single-detector camera with the convenience of a double-direction acquisition system; this solution minimizes mass and volume while performing push-frame image acquisition. The shared camera sensor is divided into six portions: four are covered with suitable filters; the other two, one looking forward and one backwards with respect to the nadir direction, are covered with a panchromatic filter, supplying stereo image pairs of the planet surface. The main STC scientific requirements are to reconstruct the Mercury surface in 3D with a vertical accuracy better than 80 m and to perform global imaging with a grid size of 65 m along-track at the periherm. The scope of this work is to present the on-ground geometric calibration pipeline for this original instrument. The selected STC off-axis configuration required the development of a new distortion map model. Additional considerations are connected to the detector, a Si-PIN hybrid CMOS, which is characterized by high fixed-pattern noise. This had a great impact on the pre-calibration phases, necessitating an uncommon approach to the definition of the spot centroids in the distortion calibration process. This work presents the results obtained during the calibration of STC concerning the distortion analysis at three different temperatures. These results are then used to define the corresponding distortion model of the camera.
NASA Astrophysics Data System (ADS)
Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.
2016-12-01
A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques have been employed simultaneously using an RGB camera and a color-encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speed rates. The potential of the proposed methodology has been demonstrated in the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.
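The fringe-projection half of such a system recovers out-of-plane shape from the phase of a projected fringe pattern. A minimal sketch of four-step phase shifting on an invented 1-D intensity signal (a real setup would then unwrap the phase and convert it to height via calibration; the abstract's color-encoding scheme is not modeled here):

```python
import numpy as np

# Four phase-shifted fringe images: I_k = A + B*cos(theta + k*pi/2),
# where theta carries the carrier phase plus the surface-induced phase.
x = np.linspace(0.0, 4 * np.pi, 256)        # carrier phase along one row
phi_true = 0.8 * np.sin(x / 6.0)            # hypothetical surface phase
I = [1.0 + 0.5 * np.cos(x + phi_true + k * np.pi / 2) for k in range(4)]

# Pixel-wise wrapped phase from the four images:
# I[3]-I[1] = 2B*sin(theta), I[0]-I[2] = 2B*cos(theta).
phi = np.arctan2(I[3] - I[1], I[0] - I[2])
```

The arctangent recovers θ = x + φ modulo 2π independently of the unknown background A and modulation B, which is what makes the four-step scheme robust to illumination variations.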
Tomographic PIV behind a prosthetic heart valve
NASA Astrophysics Data System (ADS)
Hasler, D.; Landolt, A.; Obrist, D.
2016-05-01
The instantaneous three-dimensional velocity field past a bioprosthetic heart valve was measured using tomographic particle image velocimetry. Two digital cameras were used together with a mirror setup to record PIV images from four different angles. Measurements were conducted in a transparent silicone phantom with a simplified geometry of the aortic root. The refraction indices of the silicone phantom and the working fluid were matched to minimize optical distortion from the flow field to the cameras. The silicone phantom of the aorta was integrated in a flow loop driven by a piston pump. Measurements were conducted for steady and pulsatile flow conditions. Results of the instantaneous, ensemble and phase-averaged flow field are presented. The three-dimensional velocity field reveals a flow topology, which can be related to features of the aortic valve prosthesis.
Conical Refraction Bottle Beams for Entrapment of Absorbing Droplets.
Esseling, Michael; Alpmann, Christina; Schnelle, Jens; Meissner, Robert; Denz, Cornelia
2018-03-22
Conical refraction (CR) optical bottle beams for photophoretic trapping of airborne absorbing droplets are introduced and experimentally demonstrated. CR describes the circular split-up of unpolarised light propagating along an optical axis in a biaxial crystal. The diverging and converging cones lend themselves to the construction of optical bottle beams with flexible entry points. The interaction of single inkjet droplets with an open or partly open bottle beam is shown using high-speed video microscopy in a dual-view configuration. Perpendicular image planes are visualized on a single camera chip to characterize the full three-dimensional movement dynamics of droplets. We demonstrate how a partly opened optical bottle transversely confines liquid objects. Furthermore, we observe and analyse transverse oscillations of absorbing droplets as they hit the inner walls, and simultaneously measure both transverse and axial velocity components.
Earth orbital teleoperator visual system evaluation program
NASA Technical Reports Server (NTRS)
Frederick, P. N.; Shields, N. L., Jr.; Kirkpatrick, M., III
1977-01-01
Visual system parameters and stereoptic television component geometries were evaluated for optimum viewing. The accuracy of operator range estimation using a Fresnel stereo television system with a three dimensional cursor was examined. An operator's ability to align three dimensional targets using vidicon tube and solid state television cameras as part of a Fresnel stereoptic system was evaluated. An operator's ability to discriminate between varied color samples viewed with a color television system was determined.
Hult, Johan; Richter, Mattias; Nygren, Jenny; Aldén, Marcus; Hultqvist, Anders; Christensen, Magnus; Johansson, Bengt
2002-08-20
High-repetition-rate laser-induced fluorescence measurements of fuel and OH concentrations in internal combustion engines are demonstrated. Series of as many as eight fluorescence images, with a temporal resolution ranging from 10 μs to 1 ms, are acquired within one engine cycle. A multiple-laser system in combination with a multiple-CCD camera is used for cycle-resolved imaging in spark-ignition, direct-injection stratified-charge, and homogeneous-charge compression-ignition engines. The recorded data reveal unique information on cycle-to-cycle variations in fuel transport and combustion. Moreover, the imaging system in combination with a scanning mirror is used to perform instantaneous three-dimensional fuel-concentration measurements.
NASA Technical Reports Server (NTRS)
Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)
1992-01-01
Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human-body and mechanical-system kinematic analyses using the Ariel Performance Analysis System (APAS). The APAS supports both the analysis of analog signals (e.g., force-plate data collection) and the digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to quantify the accuracy impact due to a single-axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D'Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
Sinclair, Jonathan; Fewtrell, David; Taylor, Paul John; Bottoms, Lindsay; Atkins, Stephen; Hobbs, Sarah Jane
2014-01-01
Achieving a high ball velocity is important during soccer shooting, as it gives the goalkeeper less time to react, thus improving a player's chance of scoring. This study aimed to identify important technical aspects of kicking linked to the generation of ball velocity using regression analyses. Maximal instep kicks were obtained from 22 academy-level soccer players using a 10-camera motion capture system sampling at 500 Hz. Three-dimensional kinematics of the lower extremity segments were obtained. Regression analysis was used to identify the kinematic parameters associated with the development of ball velocity. A single biomechanical parameter, knee extension velocity of the kicking limb at ball contact (adjusted R² = 0.39, p ≤ 0.01), was obtained as a significant predictor of ball velocity. This study suggests that sagittal plane knee extension velocity is the strongest contributor to ball velocity and potentially overall kicking performance. It is conceivable therefore that players may benefit from exposure to coaching and strength techniques geared towards the improvement of knee extension angular velocity as highlighted in this study.
NASA Astrophysics Data System (ADS)
Fukuda, Takahito; Shinomura, Masato; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Matoba, Osamu
2017-04-01
We constructed a parallel-phase-shifting digital holographic microscopy (PPSDHM) system using an inverted magnification optical system, and succeeded in three-dimensional (3D) motion-picture imaging of the 3D displacement of a microscopic object. In the PPSDHM system, the inverted and afocal magnification optical system consisted of a microscope objective (16.56 mm focal length and 0.25 numerical aperture) and a convex lens (300 mm focal length and 82 mm aperture diameter). A polarization-imaging camera was used to record multiple phase-shifted holograms with a single-shot exposure. We recorded an alum crystal sinking in an aqueous solution of alum with the constructed PPSDHM system at 60 frames/s for about 20 s and reconstructed high-quality 3D motion-picture images of the crystal. We then calculated the displacement of the crystal from the in-plane displacements at the focus plane and the magnification of the optical system, and obtained the 3D trajectory of the crystal from these values.
Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao
2017-09-01
Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact, their mapping relationship is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image. It is based on the multiview feature embedding (MFE) and the locality-sensitive autoencoders (LSAEs). On the one hand, we first depict the manifold regularized sparse low-rank approximation for MFE and then the input image is characterized by a fused feature descriptor. On the other hand, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of our proposed method.
Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
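The classical linear core of multi-spectral photometric stereo can be sketched briefly. The following is an illustrative Lambertian example, not the paper's CNN pipeline: assuming one light source per RGB channel with known directions and a uniform camera response, the per-pixel intensities satisfy I = ρ L n, so albedo-scaled normals follow from a linear solve; the light matrix and test image here are assumptions.

```python
import numpy as np

def normals_from_rgb(image, light_dirs):
    """image: HxWx3 channel intensities; light_dirs: 3x3, one unit light
    direction per row. Solves I = rho * L @ n per pixel (Lambertian)."""
    h, w, _ = image.shape
    g = np.linalg.solve(light_dirs, image.reshape(-1, 3).T)  # 3 x (H*W)
    rho = np.linalg.norm(g, axis=0)                          # per-pixel albedo
    n = g / np.maximum(rho, 1e-12)                           # unit normals
    return n.T.reshape(h, w, 3), rho.reshape(h, w)

# Synthetic check: a flat, unit-albedo surface facing the camera,
# lit by three hypothetical axis-aligned lights.
L = np.eye(3)
n_true = np.array([0.0, 0.0, 1.0])
img = np.tile(L @ n_true, (4, 4, 1))
n_est, rho = normals_from_rgb(img, L)
```

In the paper's setting this solve is ill-posed because the channels mix illumination, albedo, and camera response, which is exactly why an initial normal estimate (here, from a CNN) is needed before optimization.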
Low-cost, high-performance and efficiency computational photometer design
NASA Astrophysics Data System (ADS)
Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly
2014-05-01
Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance, high-efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic, including volcanic plumes, ice formation, and Arctic marine life.
2004-02-10
This is a three-dimensional stereo anaglyph of an image taken by the front navigation camera onboard NASA Mars Exploration Rover Spirit, showing an interesting patch of rippled soil. 3D glasses are necessary to view this image.
Real time three dimensional sensing system
Gordon, S.J.
1996-12-31
The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.
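The final step of the structured-light scheme described in this patent abstract, recovering a 3D coordinate once a pixel has been associated with a known light plane, reduces to intersecting the camera ray through that pixel with the plane. The sketch below illustrates only that geometric step; the camera pose, ray, and plane parameters are illustrative assumptions, not values from the patent.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_n, plane_d):
    """Intersect a ray origin + t*direction with the plane
    plane_n . X = plane_d; returns the 3-D intersection point."""
    t = (plane_d - plane_n @ origin) / (plane_n @ direction)
    return origin + t * direction

# Example: camera at the origin looking down +Z; a selected pixel's
# back-projected ray meets a hypothetical light plane z = 2.
cam = np.zeros(3)
ray = np.array([0.1, 0.0, 1.0])            # ray through the selected pixel
plane_normal = np.array([0.0, 0.0, 1.0])
point = ray_plane_intersection(cam, ray, plane_normal, 2.0)
```

The epi-polar search in the second image serves to disambiguate which light plane a given stripe pixel belongs to; after that association, the triangulation above is a closed-form computation.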
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
Optical fringe-reflection deflectometry with bundle adjustment
NASA Astrophysics Data System (ADS)
Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng
2018-06-01
Liquid crystal display (LCD) screens are located outside of a camera's field of view in fringe-reflection deflectometry. Therefore, fringes that are displayed on LCD screens are obtained through specular reflection by a fixed camera. Thus, the pose calibration between the camera and LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of fringes. Considering the relation between their pose, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be operated in the camera frame to measure three-dimensional (3D) coordinates of the specular surface. In the final optimization, constraint-bundle adjustment is operated to refine simultaneously the camera intrinsic parameters, including distortion coefficients, estimated geometrical pose between the LCD screen and camera, and 3D coordinates of the specular surface, with the help of the absolute phase collinear constraint. Simulation and experiment results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and the constraint-bundle adjustment can enhance the 3D coordinate measurement accuracy in fringe-reflection deflectometry.
Two-dimensional fruit ripeness estimation using thermal imaging
NASA Astrophysics Data System (ADS)
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2013-06-01
Some green fruits do not change their color from green to yellow when ripe. As a result, ripeness estimation via color and fluorescent analytical approaches cannot be applied. In this article, we propose and show for the first time how a thermal imaging camera can be used to two-dimensionally classify fruits into different ripeness levels. Our key idea relies on the fact that mature fruits have higher heat capacity than immature ones and therefore the change in surface temperature over time is slower. Our experimental proof of concept using a thermal imaging camera shows a promising result in non-destructively identifying three different ripeness levels of mangoes (Mangifera indica L.).
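The stated principle, riper fruit changes surface temperature more slowly, suggests a simple per-pixel classifier. The sketch below is an illustration of that idea only, not the authors' algorithm; the cooling-rate thresholds and frame values are assumed numbers.

```python
import numpy as np

def ripeness_map(frame_t0, frame_t1, dt, thresholds=(0.5, 1.0)):
    """Classify each pixel into three ripeness levels from the absolute
    temperature change rate |dT/dt| (K/s) between two thermal frames.
    Slow change (high heat capacity) -> level 0 (ripe); fast -> level 2."""
    rate = np.abs(frame_t1 - frame_t0) / dt
    return np.digitize(rate, thresholds)

# Two hypothetical 2x2 thermal frames one second apart (degrees C).
t0 = np.array([[30.0, 30.0], [30.0, 30.0]])
t1 = np.array([[29.9, 29.2], [28.0, 30.0]])
levels = ripeness_map(t0, t1, dt=1.0)
```

In practice the thresholds would have to be calibrated per fruit variety against a destructive ground-truth measurement.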
Three-dimensional modeling of tea-shoots using images and models.
Wang, Jian; Zeng, Xianyin; Liu, Jianbing
2011-01-01
In this paper, a method for three-dimensional modeling of tea shoots with images and calculation models is introduced. The process is as follows: the tea shoots are photographed with a camera; color-space conversion is conducted; the tea shoots in the images are segmented using an improved algorithm based on color and regional growth; and the edges of the tea shoots are extracted with edge detection. After that, using the segmented tea-shoot images, the three-dimensional coordinates of the tea shoots are computed and the feature parameters extracted, matching and calculation are conducted against the model database, and finally the three-dimensional modeling of the tea shoots is completed. According to the experimental results, this method avoids a large amount of calculation, has better visual effects, and performs better in recovering the three-dimensional information of the tea shoots, thereby providing a new method for monitoring the growth of tea shoots and for their non-destructive testing.
Analysis of electrohydrodynamic jetting using multifunctional and three-dimensional tomography
NASA Astrophysics Data System (ADS)
Ko, Han Seo; Nguyen, Xuan Hung; Lee, Soo-Hong; Kim, Young Hyun
2013-11-01
A three-dimensional optical tomography technique was developed to reconstruct three-dimensional flow fields from a set of two-dimensional shadowgraphic images and normal gray-scale images. From three high-speed cameras, positioned at an offset angle of 45° relative to one another, the number, size, and location of electrohydrodynamic jets with respect to the nozzle position were analyzed using shadowgraphic tomography employing a multiplicative algebraic reconstruction technique (MART). Additionally, a flow field inside the cone-shaped liquid (Taylor cone) induced under an electric field was also observed using a simultaneous multiplicative algebraic reconstruction technique (SMART) for reconstructing intensities of particle light, combined with a three-dimensional cross correlation. Various velocity fields of a circulating flow inside the cone-shaped liquid due to different physico-chemical properties of the liquid and applied voltages were also investigated. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. S-2011-0023457).
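The MART step named above has a compact multiplicative update: each voxel is scaled so that a ray's computed projection matches its measurement. The following is a minimal sketch of that iteration on a tiny two-voxel, two-ray system with assumed weights, far smaller than a real shadowgraphic reconstruction.

```python
import numpy as np

def mart(W, p, n_iter=50, relax=1.0):
    """Multiplicative algebraic reconstruction technique.
    W: (rays x voxels) weight matrix; p: measured projections (positive).
    Each voxel j is updated by (p_i / proj_i) ** (relax * W[i, j])."""
    f = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            proj = W[i] @ f
            if proj > 0:
                f *= (p[i] / proj) ** (relax * W[i])
    return f

# Synthetic check: two rays through two voxels with known intensities.
W = np.array([[1.0, 0.0], [1.0, 1.0]])
f_true = np.array([2.0, 3.0])
p = W @ f_true                 # consistent, noise-free projections
f_rec = mart(W, p)
```

The multiplicative form keeps voxel values non-negative automatically, which is one reason MART-family methods are popular for intensity reconstructions such as tomographic shadowgraphy and tomographic PIV.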
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experiment results confirm that the proposed method is accurate and reliable for projector calibration.
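One standard building block behind the "inverse camera" view is that, for a planar calibration board, camera pixels and projector pixels are related by a plane-induced homography. The direct linear transform (DLT) estimate below is a generic sketch of that step with synthetic points, not the paper's full calibration procedure.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (up to scale) with dst ~ H @ src from >= 4 point
    correspondences; src, dst are Nx2 arrays of image coordinates."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))   # null vector = h
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: project five board points through a known homography.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
H_true = np.array([[1.0, 0.2, 3.0], [0.1, 0.9, -1.0], [0.001, 0.0, 1.0]])
p = (H_true @ np.column_stack([src, np.ones(len(src))]).T).T
dst = p[:, :2] / p[:, 2:]
H_est = homography_dlt(src, dst)
```

With such per-pose homographies mapping speckle correspondences between the two image planes, the projector's intrinsics can then be recovered by the same plane-based algorithms used for cameras.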
Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang
2016-01-01
Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312
2004-02-02
This is a three-dimensional stereo anaglyph of an image taken by the front hazard-identification camera onboard NASA Mars Exploration Rover Opportunity, showing the rover arm in its extended position. 3D glasses are necessary to view this image.
Dynamics and interactions of particles in a thermophoretic trap
NASA Astrophysics Data System (ADS)
Foster, Benjamin; Fung, Frankie; Fieweger, Connor; Usatyuk, Mykhaylo; Gaj, Anita; DeSalvo, B. J.; Chin, Cheng
2017-08-01
We investigate dynamics and interactions of particles levitated and trapped by the thermophoretic force in a vacuum cell. Our analysis is based on footage taken by orthogonal cameras that are able to capture the three dimensional trajectories of the particles. In contrast to spherical particles, which remain stationary at the center of the cell, here we report new qualitative features of the motion of particles with non-spherical geometry. Singly levitated particles exhibit steady spinning around their body axis and rotation around the symmetry axis of the cell. When two levitated particles approach each other, repulsive or attractive interactions between the particles are observed. Our levitation system offers a wonderful platform to study interaction between particles in a microgravity environment.
Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor
NASA Astrophysics Data System (ADS)
Dragone, A.; Kenney, C.; Lozinskaya, A.; Tolbanov, O.; Tyazhev, A.; Zarubin, A.; Wang, Zhehui
2016-11-01
A multilayer stacked X-ray camera concept is described. This type of technology is called `4H' X-ray cameras, where 4H stands for high-Z (Z>30) sensor, high-resolution (less than 300 micron pixel pitch), high-speed (above 100 MHz), and high-energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modification of the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.
Beam characterization by wavefront sensor
Neal, Daniel R.; Alford, W. J.; Gruetzner, James K.
1999-01-01
An apparatus and method for characterizing an energy beam (such as a laser) with a two-dimensional wavefront sensor, such as a Shack-Hartmann lenslet array. The sensor measures wavefront slope and irradiance of the beam at a single point on the beam and calculates a space-beamwidth product. A detector array such as a charge coupled device camera is preferably employed.
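One ingredient of the characterization described above is a beam width computed from the measured irradiance profile; a common choice is the ISO-style second-moment (D4σ) width. The sketch below computes it for a sampled 1-D Gaussian profile; the grid, the Gaussian test beam, and the D4σ convention are illustrative assumptions, not details from the patent.

```python
import numpy as np

def second_moment_width(x, irradiance):
    """D4sigma width: 4 * sqrt(variance) of the irradiance profile,
    treating the profile as a (non-negative) density over x."""
    dx = x[1] - x[0]
    total = irradiance.sum() * dx
    mean = (x * irradiance).sum() * dx / total
    var = ((x - mean) ** 2 * irradiance).sum() * dx / total
    return 4.0 * np.sqrt(var)

# Gaussian test beam: exp(-2 x^2 / w0^2) has sigma = w0 / 2,
# so the D4sigma width should equal 2 * w0.
x = np.linspace(-5, 5, 2001)
w0 = 1.0
profile = np.exp(-2 * x ** 2 / w0 ** 2)
width = second_moment_width(x, profile)
```

A space-beamwidth product then combines such a spatial width with the corresponding divergence width obtained from the wavefront-slope measurements.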
Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model.
Li, Jing; Zhang, Fangbing; Wei, Lisong; Yang, Tao; Lu, Zhaoyang
2017-10-16
Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost.
The suitability of lightfield camera depth maps for coordinate measurement applications
NASA Astrophysics Data System (ADS)
Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael
2015-12-01
Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two Lytro first-generation cameras were evaluated. In addition, a calibration process has been created, for the Lytro cameras, to deliver three-dimensional output depth maps represented in SI units (metre). The novel results show depth accuracy and repeatability of +10.0 mm to -20.0 mm, and 0.5 mm respectively. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.
Parallel-Processing Software for Correlating Stereo Images
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric
2007-01-01
A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
Making 3D movies of Northern Lights
NASA Astrophysics Data System (ADS)
Hivon, Eric; Mouette, Jean; Legault, Thierry
2017-10-01
We describe the steps necessary to create three-dimensional (3D) movies of Northern Lights or Aurorae Borealis out of real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceglio, N.M.; George, E.V.; Brooks, K.M.
The first successful demonstration of high resolution, tomographic imaging of a laboratory plasma using coded imaging techniques is reported. Zone plate coded imaging (ZPCI) has been used to image the x-ray emission from laser-compressed DT-filled microballoons. The zone plate camera viewed an x-ray spectral window extending from below 2 keV to above 6 keV. It exhibited a resolution of approximately 8 μm, a magnification factor of approximately 13, and subtended a radiation collection solid angle at the target of approximately 10^-2 sr. X-ray images using ZPCI were compared with those taken using a grazing incidence reflection x-ray microscope. The agreement was excellent. In addition, the zone plate camera produced tomographic images. The nominal tomographic resolution was approximately 75 μm. This allowed three-dimensional viewing of target emission from a single shot in planar "slices". In addition to its tomographic capability, the great advantage of the coded imaging technique lies in its applicability to hard (greater than 10 keV) x-ray and charged particle imaging. Experiments involving coded imaging of the suprathermal x-ray and high-energy alpha particle emission from laser-compressed microballoon targets are discussed.
Fukuda, Shinichi; Beheregaray, Simone; Hoshi, Sujin; Yamanari, Masahiro; Lim, Yiheng; Hiraoka, Takahiro; Yasuno, Yoshiaki; Oshika, Tetsuro
2013-12-01
To evaluate the ability of parameters measured by three-dimensional (3D) corneal and anterior segment optical coherence tomography (CAS-OCT) and a rotating Scheimpflug camera combined with a Placido topography system (Scheimpflug camera with topography) to discriminate between normal eyes and forme fruste keratoconus. Forty-eight eyes of 48 patients with keratoconus, 25 eyes of 25 patients with forme fruste keratoconus and 128 eyes of 128 normal subjects were evaluated. Anterior and posterior keratometric parameters (steep K, flat K, average K), elevation, topographic parameters, regular and irregular astigmatism (spherical, asymmetry, regular and higher-order astigmatism) and five pachymetric parameters (minimum, minimum-median, inferior-superior, inferotemporal-superonasal, vertical thinnest location of the cornea) were measured using 3D CAS-OCT and a Scheimpflug camera with topography. The area under the receiver operating curve (AUROC) was calculated to assess the discrimination ability. Compatibility and repeatability of both devices were evaluated. Posterior surface elevation showed higher AUROC values in discrimination analysis of forme fruste keratoconus using both devices. Both instruments showed significant linear correlations (p<0.05, Pearson's correlation coefficient) and good repeatability (ICCs: 0.885-0.999) for normal and forme fruste keratoconus. Posterior elevation was the best discrimination parameter for forme fruste keratoconus. Both instruments presented good correlation and repeatability for this condition.
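The discrimination measure used throughout this study, the area under the receiver operating curve (AUROC), reduces to a pairwise rank comparison between the two groups. A minimal pure-Python sketch, using fabricated posterior-elevation values rather than the study's data:

```python
def auroc(scores_pos, scores_neg):
    """AUROC = P(positive score > negative score); ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# fabricated posterior elevation values (microns), not the study's data
keratoconus = [29.0, 35.0, 31.0, 40.0]   # forme fruste keratoconus eyes
normal = [12.0, 18.0, 30.0, 15.0]        # normal eyes
print(auroc(keratoconus, normal))  # → 0.9375
```

An AUROC of 1.0 would indicate perfect separation of forme fruste keratoconus from normal eyes; 0.5 indicates no discrimination.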
Plenoptic background oriented schlieren imaging
NASA Astrophysics Data System (ADS)
Klemkowsky, Jenna N.; Fahringer, Timothy W.; Clifford, Christopher J.; Bathel, Brett F.; Thurow, Brian S.
2017-09-01
The combination of the background oriented schlieren (BOS) technique with the unique imaging capabilities of a plenoptic camera, termed plenoptic BOS, is introduced as a new addition to the family of schlieren techniques. Compared to conventional single camera BOS, plenoptic BOS is capable of sampling multiple lines-of-sight simultaneously. Displacements from each line-of-sight are collectively used to build a four-dimensional displacement field, which is a vector function structured similarly to the original light field captured in a raw plenoptic image. The displacement field is used to render focused BOS images, which qualitatively are narrow depth of field slices of the density gradient field. Unlike focused schlieren methods that require manually changing the focal plane during data collection, plenoptic BOS synthetically changes the focal plane position during post-processing, such that all focal planes are captured in a single snapshot. Through two different experiments, this work demonstrates that plenoptic BOS is capable of isolating narrow depth of field features, qualitatively inferring depth, and quantitatively estimating the location of disturbances in 3D space. Such results motivate future work to transition this single-camera technique towards quantitative reconstructions of 3D density fields.
Particle displacement tracking applied to air flows
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory-buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640x480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 sec. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
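The displacement-tracking step of a PDT-style system can be sketched as a nearest-neighbour match between particle centroids in two exposures. This is an illustrative reconstruction, not the original 80386 implementation; the positions, frame interval and pixel scale below are invented:

```python
import math

def track(frame1, frame2, dt, m_per_px):
    """Return (dx, dy, speed) per particle via nearest-neighbour matching
    between centroid lists from two successive exposures."""
    vectors = []
    for (x1, y1) in frame1:
        # nearest candidate centroid in the second exposure
        x2, y2 = min(frame2, key=lambda p: math.hypot(p[0] - x1, p[1] - y1))
        dx, dy = x2 - x1, y2 - y1
        speed = math.hypot(dx, dy) * m_per_px / dt   # pixels -> m/s
        vectors.append((dx, dy, speed))
    return vectors

f1 = [(100.0, 100.0), (200.0, 150.0)]           # centroids, exposure 1 (px)
f2 = [(103.0, 104.0), (205.0, 150.0)]           # centroids, exposure 2 (px)
vecs = track(f1, f2, dt=1e-4, m_per_px=1e-4)    # invented scales
print(vecs[0])
```

The first particle moves (3, 4) pixels, giving a 5-pixel displacement and hence 5 m/s at the assumed scales.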
Image intensifier-based volume tomographic angiography imaging system: system evaluation
NASA Astrophysics Data System (ADS)
Ning, Ruola; Wang, Xiaohui; Shen, Jianjun; Conover, David L.
1995-05-01
An image intensifier-based rotational volume tomographic angiography imaging system has been constructed. The system consists of an x-ray tube and an image intensifier that are separately mounted on a gantry. This system uses an image intensifier coupled to a TV camera as a two-dimensional detector so that a set of two-dimensional projections can be acquired for a direct three-dimensional (3D) reconstruction. This system has been evaluated with two phantoms: a vascular phantom and a monkey head cadaver. One hundred eighty projections of each phantom were acquired with the system. A set of three-dimensional images was directly reconstructed from the projection data. The experimental results indicate that good image quality can be obtained with this system.
Integrated calibration of multiview phase-measuring profilometry
NASA Astrophysics Data System (ADS)
Lee, Yeong Beum; Kim, Min H.
2017-11-01
Phase-measuring profilometry (PMP) measures per-pixel height information of a surface with high accuracy. Height information captured by a camera in PMP relies on its screen coordinates. Therefore, a PMP measurement from a view cannot be integrated directly to other measurements from different views due to the intrinsic difference of the screen coordinates. In order to integrate multiple PMP scans, an auxiliary calibration of each camera's intrinsic and extrinsic properties is required, in addition to principal PMP calibration. This is cumbersome and often requires physical constraints in the system setup, and multiview PMP is consequently rarely practiced. In this work, we present a novel multiview PMP method that yields three-dimensional global coordinates directly so that three-dimensional measurements can be integrated easily. Our PMP calibration parameterizes intrinsic and extrinsic properties of the configuration of both a camera and a projector simultaneously. It also does not require any geometric constraints on the setup. In addition, we propose a novel calibration target that can remain static without requiring any mechanical operation while conducting multiview calibrations, whereas existing calibration methods require manually changing the target's position and orientation. Our results validate the accuracy of measurements and demonstrate the advantages on our multiview PMP.
Validating two-dimensional leadership models on three-dimensionally structured fish schools
Nagy, Máté; Holbrook, Robert I.; Biro, Dora; Burt de Perera, Theresa
2017-01-01
Identifying leader–follower interactions is crucial for understanding how a group decides where or when to move, and how this information is transferred between members. Although many animal groups have a three-dimensional structure, previous studies investigating leader–follower interactions have often ignored vertical information. This raises the question of whether commonly used two-dimensional leader–follower analyses can be used justifiably on groups that interact in three dimensions. To address this, we quantified the individual movements of banded tetra fish (Astyanax mexicanus) within shoals by computing the three-dimensional trajectories of all individuals using a stereo-camera technique. We used these data firstly to identify and compare leader–follower interactions in two and three dimensions, and secondly to analyse leadership with respect to an individual's spatial position in three dimensions. We show that for 95% of all pairwise interactions leadership identified through two-dimensional analysis matches that identified through three-dimensional analysis, and we reveal that fish attend to the same shoalmates for vertical information as they do for horizontal information. Our results therefore highlight that three-dimensional analyses are not always required to identify leader–follower relationships in species that move freely in three dimensions. We discuss our results in terms of the importance of taking species' sensory capacities into account when studying interaction networks within groups. PMID:28280582
Spacecraft hazard avoidance utilizing structured light
NASA Technical Reports Server (NTRS)
Liebe, Carl Christian; Padgett, Curtis; Chapsky, Jacob; Wilson, Daniel; Brown, Kenneth; Jerebets, Sergei; Goldberg, Hannah; Schroeder, Jeffrey
2006-01-01
At JPL, a <5 kg free-flying micro-inspector spacecraft is being designed for host-vehicle inspection. The spacecraft includes a hazard avoidance sensor to navigate relative to the vehicle being inspected. Structured light was selected for hazard avoidance because of its low mass and cost. Structured light is a method of remotely sensing the three-dimensional structure of nearby objects using a laser, a grating, and a single regular APS camera. The laser beam is split into 400 different beams by a grating to form a regularly spaced grid of laser beams that are projected into the field of view of an APS camera. The laser source and the APS camera are separated, forming the base of a triangle. The distances to all beam intersections with the host are calculated based on triangulation.
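The triangulation at the end of the abstract can be illustrated for a single beam: with the laser and camera separated by a known baseline, the two ray angles fix the intersection point. A hedged sketch with invented geometry (the actual 400-beam grid and flight parameters are not reproduced here):

```python
import math

def triangulate(baseline, alpha, beta):
    """Laser at the origin, camera at (baseline, 0); alpha and beta are the
    ray angles measured from the baseline. Intersect y = x*tan(alpha) with
    y = (baseline - x)*tan(beta); return (x, depth) of the laser spot."""
    ta, tb = math.tan(alpha), math.tan(beta)
    x = baseline * tb / (ta + tb)
    depth = x * ta                      # perpendicular distance from baseline
    return x, depth

# symmetric 60-degree rays over a 20 cm baseline (illustrative values)
x, z = triangulate(baseline=0.2, alpha=math.radians(60), beta=math.radians(60))
print(round(x, 3), round(z, 4))
```

With symmetric angles the spot sits midway along the baseline; a narrower baseline or shallower angles degrade the depth resolution, which is why the laser and camera are deliberately separated.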
Gwynne, Craig R; Curran, Sarah A
2014-12-01
Clinical assessment of lower limb kinematics during dynamic tasks may identify individuals who demonstrate abnormal movement patterns that may contribute to the etiology or exacerbation of knee conditions such as patellofemoral joint (PFJt) pain. The purpose of this study was to determine the reliability, validity and associated measurement error of a clinically appropriate two-dimensional (2-D) procedure for quantifying frontal plane knee alignment during single limb squats. Nine female and nine male recreationally active subjects with no history of PFJt pain had frontal plane limb alignment assessed using three-dimensional (3-D) motion analysis and digital video cameras (2-D analysis) while performing single limb squats. The association between 2-D and 3-D measures was quantified using Pearson's product correlation coefficients. Intraclass correlation coefficients (ICCs) were determined for within- and between-session reliability of 2-D data, and the standard error of measurement (SEM) was used to establish measurement error. Frontal plane limb alignment assessed with 2-D analysis demonstrated good correlation with 3-D methods (r = 0.64 to 0.78, p < 0.001). Within-session (0.86) and between-session ICCs (0.74) demonstrated good reliability for 2-D measures, and SEM scores ranged from 2° to 4°. 2-D measures have good consistency and may provide a valid measure of lower limb alignment when compared to existing 3-D methods. Assessment of lower limb kinematics using 2-D methods may be an accurate and clinically useful alternative to 3-D motion analysis when identifying individuals who demonstrate abnormal movement patterns associated with PFJt pain. Level of evidence: 2b.
Mentiplay, Benjamin F; Hasanki, Ksaniel; Perraton, Luke G; Pua, Yong-Hao; Charlton, Paula C; Clark, Ross A
2018-03-01
The Microsoft Xbox One Kinect™ (Kinect V2) contains a depth camera that can be used to manually identify anatomical landmark positions in three dimensions, independent of the standard skeletal tracking, and therefore has potential for low-cost, time-efficient three-dimensional movement analysis (3DMA). This study examined the inter-session reliability and concurrent validity of the Kinect V2 for the assessment of coronal and sagittal plane kinematics of the trunk, hip and knee during single leg squats (SLS) and drop vertical jumps (DVJ). Thirty young, healthy participants (age = 23 ± 5 years, male/female = 15/15) performed a SLS and DVJ protocol that was recorded concurrently by the Kinect V2 and 3DMA during two sessions, one week apart. The Kinect V2 demonstrated good to excellent reliability for all SLS and DVJ variables (ICC ≥ 0.73). Concurrent validity ranged from poor to excellent (ICC = 0.02 to 0.98) during the SLS task, although trunk, hip and knee flexion and two-dimensional measures of knee abduction and frontal plane projection angle all demonstrated good to excellent validity (ICC ≥ 0.80). Concurrent validity for the DVJ task was typically worse, with only two variables exceeding ICC = 0.75 (trunk and hip flexion). These findings indicate that the Kinect V2 may have potential for large-scale screening for ACL injury risk; however, future prospective research is required.
RANKING TEM CAMERAS BY THEIR RESPONSE TO ELECTRON SHOT NOISE
Grob, Patricia; Bean, Derek; Typke, Dieter; Li, Xueming; Nogales, Eva; Glaeser, Robert M.
2013-01-01
We demonstrate two ways in which the Fourier transforms of images that consist solely of randomly distributed electrons (shot noise) can be used to compare the relative performance of different electronic cameras. The principle is to determine how closely the Fourier transform of a given image does, or does not, approach that of an image produced by an ideal camera, i.e. one for which single-electron events are modeled as Kronecker delta functions located at the same pixels where the electrons were incident on the camera. Experimentally, the average width of the single-electron response is characterized by fitting a single Lorentzian function to the azimuthally averaged amplitude of the Fourier transform. The reciprocal of the spatial frequency at which the Lorentzian function falls to a value of 0.5 provides an estimate of the number of pixels at which the corresponding line-spread function falls to a value of 1/e. In addition, the excess noise due to stochastic variations in the magnitude of the response of the camera (for single-electron events) is characterized by the amount to which the appropriately normalized power spectrum does, or does not, exceed the total number of electrons in the image. These simple measurements provide an easy way to evaluate the relative performance of different cameras. To illustrate this point we present data for three different types of scintillator-coupled camera plus a silicon-pixel (direct detection) camera. PMID:23747527
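The Lorentzian-fit metric described above can be sketched as follows: fit L(f) = 1/(1 + (f/g)^2) to the azimuthally averaged Fourier amplitude and report the reciprocal of the half-amplitude frequency, which for this functional form is simply 1/g. A grid-search fit on synthetic, noise-free data (not measurements from the cameras tested in the paper):

```python
def fit_lorentzian(freqs, amps, g_candidates):
    """Return the Lorentzian width g minimising the squared fit error of
    L(f) = 1 / (1 + (f/g)**2) against the measured amplitudes."""
    def sse(g):
        return sum((a - 1.0 / (1.0 + (f / g) ** 2)) ** 2
                   for f, a in zip(freqs, amps))
    return min(g_candidates, key=sse)

freqs = [0.05 * i for i in range(1, 11)]          # cycles/pixel (synthetic)
true_g = 0.25
amps = [1.0 / (1.0 + (f / true_g) ** 2) for f in freqs]
g = fit_lorentzian(freqs, amps, [0.05 * i for i in range(1, 11)])
# L(f) falls to 0.5 exactly at f = g, so the line-spread width estimate is 1/g
print(g, 1.0 / g)
```

A camera whose single-electron response approached an ideal Kronecker delta would give a large g (flat spectrum) and hence a spread of few pixels; a blurrier scintillator-coupled camera gives a small g and a wide spread.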
Urbanová, Petra; Hejna, Petr; Jurda, Mikoláš
2015-05-01
Three-dimensional surface technologies, particularly close-range photogrammetry and optical surface scanning, have recently advanced into affordable, flexible and accurate techniques. Forensic postmortem investigation as performed on a daily basis, however, has not yet fully benefited from their potential. In the present paper, we tested two approaches to 3D external body documentation - digital camera-based photogrammetry combined with commercial Agisoft PhotoScan(®) software, and stereophotogrammetry-based Vectra H1(®), a portable handheld surface scanner. In order to conduct the study, three human subjects were selected: a living person, a 25-year-old female, and two forensic cases admitted for postmortem examination at the Department of Forensic Medicine, Hradec Králové, Czech Republic (both 63-year-old males), one dead due to traumatic, self-inflicted injuries (suicide by hanging), the other diagnosed with heart failure. All three cases were photographed in a 360° manner with a Nikon 7000 digital camera and simultaneously documented with the handheld scanner. In addition to recording the pre-autopsy phase of the forensic cases, both techniques were employed in various stages of autopsy. The sets of collected digital images (approximately 100 per case) were further processed to generate point clouds and 3D meshes. Final 3D models (a pair per individual) were evaluated for numbers of points and polygons, then assessed visually and compared quantitatively using an ICP alignment algorithm and a cloud point comparison technique based on closest point-to-point distances. Both techniques proved to be easy to handle and equally laborious. While collecting the images at autopsy took around 20 min, the post-processing was much more time-demanding and required up to 10 h of computation time. Moreover, for full-body scanning the post-processing of the handheld scanner required rather time-consuming manual image alignment.
In all instances the applied approaches produced high-resolution, photorealistic, real-sized or easy-to-calibrate 3D surface models. Both methods equally failed when the scanned body surface was covered with body hair or reflective moist areas. Still, it can be concluded that single-camera close-range photogrammetry and optical surface scanning using the Vectra H1 scanner represent relatively low-cost solutions which were shown to be beneficial for postmortem body documentation in forensic pathology.
3D SAPIV particle field reconstruction method based on adaptive threshold.
Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi
2018-03-01
Particle image velocimetry (PIV) is an essential flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can provide a full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured with large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross-correlations between images captured by the cameras and images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct the 3D particle fields.
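The threshold-selection step can be sketched as: evaluate the camera-image/reprojection correlation coefficient at a few candidate thresholds, fit a cubic through the samples, and take the argmax of the fitted curve. The correlation values below are invented, and the exact fitting procedure in the paper may differ:

```python
def cubic_through(pts):
    """Cubic coefficients [c3, c2, c1, c0] through four (x, y) points,
    via Gaussian elimination on the Vandermonde system."""
    a = [[x ** 3, x ** 2, x, 1.0, y] for x, y in pts]
    n = 4
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(a[r][i]))
        a[i], a[piv] = a[piv], a[i]                  # partial pivoting
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n + 1):
                a[r][c] -= f * a[i][c]
    coef = [0.0] * n
    for i in reversed(range(n)):                      # back substitution
        coef[i] = (a[i][n] - sum(a[i][c] * coef[c]
                                 for c in range(i + 1, n))) / a[i][i]
    return coef

def optimal_threshold(samples, lo, hi, steps=1000):
    """Argmax of the cubic fitted through (threshold, correlation) samples."""
    c3, c2, c1, c0 = cubic_through(samples)
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(xs, key=lambda x: ((c3 * x + c2) * x + c1) * x + c0)

# (threshold, correlation coefficient) samples -- fabricated
samples = [(0.1, 0.55), (0.3, 0.80), (0.5, 0.78), (0.7, 0.40)]
print(round(optimal_threshold(samples, 0.1, 0.7), 2))
```

In practice the correlation at each candidate threshold would come from reprojecting the thresholded particle field back through each camera model and correlating against the raw images.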
An overview of the stereo correlation and triangulation formulations used in DICe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, Daniel Z.
This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three dimensional motion in space given the image coordinates and camera calibration parameters.
Automated Wing Twist And Bending Measurements Under Aerodynamic Load
NASA Technical Reports Server (NTRS)
Burner, A. W.; Martinson, S. D.
1996-01-01
An automated system to measure the change in wing twist and bending under aerodynamic load in a wind tunnel is described. The basic instrumentation consists of a single CCD video camera and a frame grabber interfaced to a computer. The technique is based upon a single view photogrammetric determination of two dimensional coordinates of wing targets with a fixed (and known) third dimensional coordinate, namely the spanwise location. The measurement technique has been used successfully at the National Transonic Facility, the Transonic Dynamics Tunnel, and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The advantages and limitations (including targeting) of the technique are discussed. A major consideration in the development was that use of the technique must not appreciably reduce wind tunnel productivity.
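The single-view idea rests on a simple geometric fact: a pinhole ray through an image point has one unknown scale, and fixing the known spanwise coordinate resolves it, yielding the remaining two coordinates. A minimal sketch with an idealized pinhole model and invented numbers (not the facility's calibration):

```python
def point_from_known_span(u, v, f_px, span_y):
    """Camera at the origin, optical axis along z; the ray through image
    point (u, v) is t*(u/f, v/f, 1). The known spanwise coordinate span_y
    fixes the scale t, giving the deflection x and depth z."""
    t = span_y * f_px / v
    return (t * u / f_px, span_y, t)

# illustrative target: image point (120, 240) px, focal length 1200 px,
# known spanwise location 0.5 m
x, y, z = point_from_known_span(u=120.0, v=240.0, f_px=1200.0, span_y=0.5)
print(x, y, z)  # → 0.25 0.5 2.5
```

Wing twist and bending then follow from comparing the recovered target positions under wind-off and wind-on conditions.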
Videogrammetric Model Deformation Measurement System User's Manual
NASA Technical Reports Server (NTRS)
Dismond, Harriett R.
2002-01-01
The purpose of this manual is to provide the user of the NASA VMD system, running the MDef software, Version 1.10, all information required to operate the system. The NASA Videogrammetric Model Deformation system consists of an automated videogrammetric technique used to measure the change in wing twist and bending under aerodynamic load in a wind tunnel. The basic instrumentation consists of a single CCD video camera and a frame grabber interfaced to a computer. The technique is based upon a single view photogrammetric determination of two-dimensional coordinates of wing targets with fixed (and known) third dimensional coordinate, namely the span-wise location. The major consideration in the development of the measurement system was that productivity must not be appreciably reduced.
Single shot laser speckle based 3D acquisition system for medical applications
NASA Astrophysics Data System (ADS)
Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young
2018-06-01
The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts require a series of images/frames, as in laser line profiling or structured light scanning. Movement of the patients during the scanning process often leads to inaccurate measurements due to sequential image acquisition. Single-shot structured-light techniques are robust to motion, but their prevalent challenges are low point density and algorithmic complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern, using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that establish stereo correspondence by digital correlation of projected speckle patterns, the proposed system employs the KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers and a sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors
NASA Astrophysics Data System (ADS)
Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.
2015-12-01
Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they constitute an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source tools and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method to generate three-dimensional models. Much research has been conducted to identify suitable software and algorithms to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effects on the quality of the final model. The purpose of this paper is the deliberation and introduction of an appropriate combination of a sensor and software to provide a complete model with the highest accuracy.
To do this, different software packages used in previous studies were compared and the most popular ones in each category were selected (Arc 3D, Visual SfM, Sure, Agisoft). Also, four small objects with distinct geometric properties and special complexities were chosen, and their accurate models, serving as reliable ground truth, were created using an ATOS Compact Scan 2M 3D scanner. Images were taken using a Fujifilm Real 3D stereo camera, an Apple iPhone 5 and a Nikon D3200 professional camera, and three-dimensional models of the objects were obtained using each of the software packages. Finally, a comprehensive comparison of the detailed results on the data set showed that the best combination of software and sensor for generating three-dimensional models is directly related to the object shape as well as the expected accuracy of the final model. Generally, better quantitative and qualitative results were obtained using the Nikon D3200 professional camera, while the Fujifilm Real 3D stereo camera and the Apple iPhone 5 were second and third, respectively, in this comparison. On the other hand, the three software packages Visual SfM, Sure and Agisoft competed closely to achieve the most accurate and complete model of the objects, and the best software differed according to the geometric properties of the object.
Measurement of impinging butane flame using combined optical system with digital speckle tomography
NASA Astrophysics Data System (ADS)
Ko, Han Seo; Ahn, Seong Soo; Kim, Hyun Jung
2011-11-01
Three-dimensional density distributions of an impinging and eccentric flame were measured experimentally using a combined optical system with digital speckle tomography. In addition, a three-dimensional temperature distribution of the flame was reconstructed from the ideal gas equation based on the reconstructed density data. The flame was formed by the ignition of premixed butane/air from air holes and impinged upward against a plate located at a distance of 24 mm from the burner nozzle. In order to verify the reconstruction process for the experimental measurements, numerically synthesized phantoms of impinging and eccentric flames were derived and reconstructed using a developed three-dimensional multiplicative algebraic reconstruction technique (MART). A new scanning technique was developed for the accurate analysis of the speckle displacements necessary for investigating the wall jet regions of the impinging flame, in which a sharp variation of the flow direction and pressure gradient occurs. The temperatures reconstructed by digital speckle tomography were applied as the boundary condition for numerical analysis of the flame-impinged plate. Then, the numerically calculated temperature distribution of the upper side of the flame-impinged plate was compared to temperature data taken by an infrared camera. The absolute average uncertainty between the numerical and infrared camera data was 3.7%.
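The density-to-temperature step mentioned above follows directly from the ideal gas law, T = p/(ρR). A one-function sketch assuming ambient pressure and the specific gas constant of air (both values illustrative, not taken from the experiment):

```python
def temperature_from_density(rho, p=101325.0, R=287.05):
    """Ideal-gas temperature [K] from reconstructed density [kg/m^3],
    assuming ambient pressure p [Pa] and gas constant R [J/(kg K)]."""
    return p / (rho * R)

# a reconstructed density of 1.2 kg/m^3 corresponds to roughly room temperature
print(round(temperature_from_density(1.2), 1))
```

In the hot flame core the reconstructed density drops well below 1 kg/m^3, which this relation maps to correspondingly high temperatures.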
Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho
2016-03-11
This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional defection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.
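The sub-pixel targeting that drives the accuracy claim can be illustrated with the standard parabolic refinement of a correlation peak (a simpler stand-in for the authors' combined correlation and least-squares image matching):

```python
def subpixel_peak(corr):
    """Sub-pixel location of the maximum of a 1D correlation profile:
    fit a parabola through the best integer sample and its neighbours."""
    i = max(range(1, len(corr) - 1), key=lambda k: corr[k])
    a, b, c = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (a - c) / (a - 2.0 * b + c)

# synthetic profile with a symmetric peak between samples 2 and 3,
# so the true maximum lies at 2.5
corr = [0.1, 0.4, 0.9, 0.9, 0.4, 0.1]
print(subpixel_peak(corr))
```

Applied along both image axes for each target, such refinement is what lets inexpensive cameras resolve displacements well below one pixel.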
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out on applying coherent-optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, virtual lighting, and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present methods for processing digital holograms for Internet transmission, together with results.
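As a hedged illustration of the phase-shift interferometry mentioned above, the standard four-step algorithm recovers the object phase from four interferograms taken with reference-phase shifts of 0, pi/2, pi, and 3*pi/2. The shift sequence is an assumption for this sketch; the article does not specify one.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped object phase from four interferograms with
    reference shifts of 0, pi/2, pi and 3*pi/2.

    With i_k = a + b*cos(phi + k*pi/2):
      i4 - i2 = 2*b*sin(phi),  i1 - i3 = 2*b*cos(phi),
    so phi = atan2(i4 - i2, i1 - i3), independent of background a and
    modulation b.
    """
    return np.arctan2(i4 - i2, i1 - i3)
```

The recovered phase is wrapped into (-pi, pi]; an unwrapping step would follow in a full hologram-reconstruction pipeline.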
Beam characterization by wavefront sensor
Neal, D.R.; Alford, W.J.; Gruetzner, J.K.
1999-08-10
An apparatus and method are disclosed for characterizing an energy beam (such as a laser) with a two-dimensional wavefront sensor, such as a Shack-Hartmann lenslet array. The sensor measures wavefront slope and irradiance of the beam at a single point on the beam and calculates a space-beamwidth product. A detector array such as a charge coupled device camera is preferably employed. 21 figs.
Feasibility and accuracy assessment of light field (plenoptic) PIV flow-measurement technique
NASA Astrophysics Data System (ADS)
Shekhar, Chandra; Ogawa, Syo; Kawaguchi, Tatsuya
A light field camera, when implemented in a PIV measurement, can measure all three velocity components of a flow field inside a three-dimensional volume. Because only one camera is used, the measurement procedure is greatly simplified, and flows with limited visual access can also be measured. Owing to these advantages, light field cameras and their use in PIV measurements are actively studied. The overall procedure for obtaining an instantaneous flow field consists of imaging a seeded flow at two closely separated time instants, reconstructing the two volumetric particle distributions using algorithms such as MART, and then obtaining the flow velocity through cross-correlation. In this study, we examined the effects of various configuration parameters of a light field camera on the in-plane and depth resolutions, obtained near-optimal parameters for a given case, and then used them to simulate a PIV measurement scenario in order to assess the reconstruction accuracy.
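The cross-correlation step of the PIV procedure described above can be sketched for one interrogation window. This is a 2D illustration for brevity (the light-field case correlates reconstructed volumes), and returns the integer-pixel displacement only:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of window B relative to window A,
    via FFT-based circular cross-correlation of zero-mean windows."""
    fa = np.fft.rfft2(win_a - win_a.mean())
    fb = np.fft.rfft2(win_b - win_b.mean())
    corr = np.fft.irfft2(fa.conj() * fb, s=win_a.shape)
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = win_a.shape
    # wrap indices so shifts beyond half the window come out negative
    return (iy - n if iy > n // 2 else iy,
            ix - m if ix > m // 2 else ix)
```

A production PIV pass would add sub-pixel peak fitting and window overlap; this sketch shows only the correlation core.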
GPS/Optical/Inertial Integration for 3D Navigation Using Multi-Copter Platforms
NASA Technical Reports Server (NTRS)
Dill, Evan T.; Young, Steven D.; Uijt De Haag, Maarten
2017-01-01
In concert with the continued advancement of a UAS traffic management system (UTM), the proposed uses of autonomous unmanned aerial systems (UAS) have become more prevalent in both the public and private sectors. To facilitate this anticipated growth, a reliable three-dimensional (3D) positioning, navigation, and mapping (PNM) capability will be required to enable operation of these platforms in challenging environments where global navigation satellite systems (GNSS) may not be available continuously, especially when the platform's mission requires maneuvering through different and difficult environments such as outdoor open-sky, outdoor under foliage, outdoor urban, and indoor, and may include transitions between these environments. There may not be a single method that solves the PNM problem for all environments. The research presented in this paper is a subset of a broader research effort described in [1]. It focuses on combining data from dissimilar sensor technologies to create an integrated navigation and mapping method that can enable reliable operation in both outdoor and structured indoor environments. The integrated navigation and mapping design utilizes a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), a monocular digital camera, and three short-to-medium-range laser scanners. This paper specifically describes the techniques necessary to effectively integrate the monocular camera data within the established mechanization. To evaluate the developed algorithms, a hexacopter was built, equipped with the discussed sensors, and both hand-carried and flown through representative environments. This paper highlights the effect that the monocular camera has on the reliability, accuracy, and availability of the aforementioned sensor integration scheme.
Cheng, Victor S; Bai, Jinfen; Chen, Yazhu
2009-11-01
As the needs for various kinds of body-surface information are wide-ranging, we developed an integrated imaging-sensor system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured-light binocular profilometer. To eliminate the discomfort caused by intense light from the LCD projector shining directly into the subject's eyes, we developed a gray-encoding strategy based on an optimal fringe-projection layout. A self-heated checkerboard was employed to calibrate the different types of cameras. We then calibrated the structured light emitted by the LCD projector, based on the stereo-vision principle and a least-squares quadric surface-fitting algorithm. The precise 3D surface can then be fused with the undistorted thermal and color images. To enhance medical applications, a region of interest (ROI) in the temperature or color image representing a surface area of clinical interest can be located at the corresponding position in the other images through coordinate-system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is significantly useful in certain disease diagnoses.
Three dimensional modelling for the target asteroid of HAYABUSA
NASA Astrophysics Data System (ADS)
Demura, H.; Kobayashi, S.; Asada, N.; Hashimoto, T.; Saito, J.
Hayabusa is the first sample-return mission of Japan. It was launched on May 9, 2003, and will arrive at the target asteroid 25143 Itokawa in June 2005. The spacecraft has three optical navigation cameras: two wide-angle cameras and a telescopic one. The telescope with a filter wheel is named AMICA (Asteroid Multiband Imaging CAmera). We are going to model the shape of the target asteroid with this telescope (expected resolution: 1 m/pixel at a distance of 10 km; field of view: 5.7 square degrees; MPP-type CCD with 1024 x 1000 pixels). Because the size of Hayabusa is about 1 x 1 x 1 m, our goal is shape modeling with a precision of about 1 m, on the basis of a camera system that scans via the rotation of the asteroid. This image-based modeling requires sequential images via AMICA and a history of the distance between the asteroid and Hayabusa provided by a laser range finder. We established a system of hierarchically recursive search with sub-pixel matching of ground control points, which are picked up with the SUSAN operator. The matched dataset is restored under the restriction of epipolar geometry, and the obtained group of three-dimensional points is converted to a polygon model with Delaunay triangulation. The current status of our development of the shape modeling is presented.
Arain, Nabeel A; Cadeddu, Jeffrey A; Best, Sara L; Roshek, Thomas; Chang, Victoria; Hogg, Deborah C; Bergs, Richard; Fernandez, Raul; Webb, Erin M; Scott, Daniel J
2012-04-01
This study aimed to evaluate the surgeon performance and workload of a next-generation magnetically anchored camera compared with laparoscopic and flexible endoscopic imaging systems for laparoscopic and single-site laparoscopy (SSL) settings. The cameras included a 5-mm 30° laparoscope (LAP), a magnetically anchored (MAGS) camera, and a flexible endoscope (ENDO). The three camera systems were evaluated using standardized optical characteristic tests. Each system was used in random order for visualization during performance of a standardized suturing task by four surgeons. Each participant performed three to five consecutive repetitions as a surgeon and also served as a camera driver for other surgeons. Ex vivo testing was conducted in a laparoscopic multiport and SSL layout using a box trainer. In vivo testing was performed only in the multiport configuration and used a previously validated live porcine Nissen model. Optical testing showed superior resolution for MAGS at 5 and 10 cm compared with LAP or ENDO. The field of view ranged from 39 to 99°. The depth of focus was almost three times greater for MAGS (6-270 mm) than for LAP (2-88 mm) or ENDO (1-93 mm). Both ex vivo and in vivo multiport combined surgeon performance was significantly better for LAP than for ENDO, but no significant differences were detected for MAGS. For multiport testing, workload ratings were significantly less ex vivo for LAP and MAGS than for ENDO and less in vivo for LAP than for MAGS or ENDO. For ex vivo SSL, no significant performance differences were detected, but camera drivers rated the workload significantly less for MAGS than for LAP or ENDO. The data suggest that the improved imaging element of the next-generation MAGS camera has optical and performance characteristics that meet or exceed those of the LAP or ENDO systems and that the MAGS camera may be especially useful for SSL. Further refinements of the MAGS camera are encouraged.
Seafloor Topographic Analysis in Staged Ocean Resource Exploration
NASA Astrophysics Data System (ADS)
Ikeda, M.; Okawa, M.; Osawa, K.; Kadoshima, K.; Asakawa, E.; Sumi, T.
2017-12-01
J-MARES (Research and Development Partnership for Next Generation Technology of Marine Resources Survey, JAPAN) has been designing a low-expense, high-efficiency exploration system for seafloor hydrothermal massive sulfide deposits under the "Cross-ministerial Strategic Innovation Promotion Program (SIP)" granted by the Cabinet Office, Government of Japan, since 2014. We designed a method to focus the mineral-deposit prospective area in multiple stages (regional survey, semi-detailed survey, and detailed survey) using topographic features of some well-known seafloor massive sulfide deposits, extracted by topographic analysis of data acquired by bathymetric survey. We applied this procedure to an area of interest of more than 100 km x 100 km over the Okinawa Trough, including some known seafloor massive sulfide deposits. In addition, we created a three-dimensional model of the seafloor topography by the SfM (Structure from Motion) technique, using multiple images of chimneys distributed around a well-known seafloor massive sulfide deposit, taken with a Hi-Vision camera mounted on an ROV during the detailed survey. Topographic features of the chimneys were extracted by measuring the created three-dimensional model. As a result, it was possible to estimate the shape of seafloor sulfides such as chimneys to be mined from the three-dimensional model created from ROV camera images. In this presentation, we discuss focusing the mineral-deposit prospective area in multiple stages by seafloor topographic analysis, and the three-dimensional model of seafloor topography created from seafloor images taken with an ROV.
Single-shot, volumetrically illuminated, three-dimensional, tomographic laser-induced-fluorescence imaging in a gaseous free jet
Halls, Benjamin R.
2016-04-28
Single-shot tomographic imaging of the three-dimensional concentration field is demonstrated in a turbulent gaseous free jet in co-flow.
Eliminating Bias In Acousto-Optical Spectrum Analysis
NASA Technical Reports Server (NTRS)
Ansari, Homayoon; Lesh, James R.
1992-01-01
Scheme for digital processing of video signals in acousto-optical spectrum analyzer provides real-time correction for signal-dependent spectral bias. Spectrum analyzer described in "Two-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18092), related apparatus described in "Three-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18122). Essence of correction is to average over digitized outputs of pixels in each CCD row and to subtract this from the digitized output of each pixel in row. Signal processed electro-optically with reference-function signals to form two-dimensional spectral image in CCD camera.
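The correction described above, averaging the digitized outputs of the pixels in each CCD row and subtracting that average from every pixel in the row, is a one-liner; a sketch:

```python
import numpy as np

def remove_row_bias(frame):
    """Subtract each CCD row's mean from every pixel in that row,
    suppressing signal-dependent bias that is constant along a row."""
    return frame - frame.mean(axis=1, keepdims=True)
```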
Two-dimensional real-time imaging system for subtraction angiography using an iodine filter
NASA Astrophysics Data System (ADS)
Umetani, Keiji; Ueda, Ken; Takeda, Tohoru; Anno, Izumi; Itai, Yuji; Akisada, Masayoshi; Nakajima, Teiichi
1992-01-01
A new type of subtraction imaging system was developed using an iodine filter and a single-energy broad bandwidth monochromatized x ray. The x-ray images of coronary arteries made after intravenous injection of a contrast agent are enhanced by an energy-subtraction technique. Filter chopping of the x-ray beam switches energies rapidly, so that a nearly simultaneous pair of filtered and nonfiltered images can be made. By using a high-speed video camera, a pair of two 512 × 512 pixel images can be obtained within 9 ms. Three hundred eighty-four images (raw data) are stored in a 144-Mbyte frame memory. After phantom studies, in vivo subtracted images of coronary arteries in dogs were obtained at a rate of 15 images/s.
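A hedged sketch of the energy-subtraction idea: differencing the log-intensities of the near-simultaneous filtered / non-filtered pair cancels structures that attenuate both beams equally and retains the iodine-filled vessels, whose attenuation differs across the iodine K-edge. The attenuation factors used in the example are made-up illustration values, not data from the study.

```python
import numpy as np

def energy_subtract(filtered, nonfiltered, eps=1e-6):
    """Logarithmic subtraction of a filtered / non-filtered image pair.

    Multiplicative attenuation common to both images (anatomy) cancels
    in the log domain; only energy-dependent attenuation (iodine)
    survives. eps guards against log(0).
    """
    return np.log(nonfiltered + eps) - np.log(filtered + eps)
```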
Yoon, Se Jin; Noh, Si Cheol; Choi, Heung Ho
2007-01-01
Infrared diagnostic devices based on infrared cameras provide two-dimensional images and patient-oriented results that are easily understood by the person being examined; however, they have disadvantages such as large size, high price, and inconvenient maintenance. This study therefore proposes a small body-heat diagnostic device based on a single infrared sensor, implementing an infrared detection system together with an algorithm that renders thermography from the acquired point-source temperature data. The developed system had a temperature resolution of 0.1 degree and a reproducibility of +/-0.1 degree. The accuracy was 90.39% at an error bound of +/-0 degree and 99.98% at an error bound of +/-0.1 degree. To evaluate the proposed algorithm and system, the results were compared with infrared images from the camera method. Thermal images of clinical significance were obtained from a patient with a lesion to verify clinical applicability.
Pound, Michael P.; French, Andrew P.; Murchie, Erik H.; Pridmore, Tony P.
2014-01-01
Increased adoption of the systems approach to biological research has focused attention on the use of quantitative models of biological objects. This includes a need for realistic three-dimensional (3D) representations of plant shoots for quantification and modeling. Previous limitations in single-view or multiple-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level-set method, optimizing the model based on image information, curvature constraints, and the position of neighboring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed and, as such, is applicable to a wide variety of plant species and topologies and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on data sets of wheat (Triticum aestivum) and rice (Oryza sativa) plants as well as a unique virtual data set that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modeling applications in a format that can be imported in the majority of 3D graphics and software packages. PMID:25332504
Camera calibration for multidirectional flame chemiluminescence tomography
NASA Astrophysics Data System (ADS)
Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun
2017-04-01
Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. For high accuracy reconstructions, it requires that extrinsic parameters (the positions and orientations) and intrinsic parameters (especially the image distances) of cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method was presented for FCT, and a 3-D calibration pattern was designed to solve the parameters. The precision of the method was evaluated by reprojections of feature points to cameras with the calibration results. The maximum root mean square error of the feature points' position is 1.42 pixels and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the camera's calibration results.
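Evaluating calibration precision by reprojection, as described above, reduces to projecting the feature points with the calibrated parameters and taking the RMS pixel residual. The sketch below assumes an undistorted pinhole model with a single focal length, a simplification of the paper's model (which additionally calibrates the image distance):

```python
import numpy as np

def reprojection_rmse(points_3d, pixels, R, t, f, c):
    """Root-mean-square reprojection error over a set of 3-D feature
    points for given extrinsics (R, t) and intrinsics (f, c).

    points_3d : (N, 3) world coordinates
    pixels    : (N, 2) observed image coordinates
    """
    cam = points_3d @ R.T + t                 # world -> camera frame
    proj = f * cam[:, :2] / cam[:, 2:3] + c   # pinhole projection
    return np.sqrt(np.mean(np.sum((proj - pixels) ** 2, axis=1)))
```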
An electrically tunable plenoptic camera using a liquid crystal microlens array.
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng
2015-05-01
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.
OpenCV and TYZX : video surveillance for tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Jim; Spencer, Andrew; Chu, Eric
2008-08-01
As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real-time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.
Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.
2017-01-01
Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was not a significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888
The role of three-dimensional high-definition laparoscopic surgery for gynaecology.
Usta, Taner A; Gundogdu, Elif C
2015-08-01
This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. Headache, eye fatigue and nausea, reported with first-generation systems, are no more frequent than with two-dimensional (2D) LVSs. Negative aspects that need improvement are the higher cost of the system, the obligation to wear glasses, and the large, heavy camera probe of some devices. The loss of depth perception in 2D LVSs, and the adverse events associated with it, can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin and the absence of the side-effects surgeons reported with first-generation systems, 3D LVSs seem to be a strong competitor to classical laparoscopic imaging systems. Thanks to technological advancements, using lighter and smaller cameras and glasses-free monitors is in the near future.
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron-resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
Three dimensional measurement with an electrically tunable focused plenoptic camera
NASA Astrophysics Data System (ADS)
Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2017-03-01
A liquid crystal microlens array (LCMLA) with an arrayed microhole pattern electrode based on nematic liquid crystal materials using a fabrication method including traditional UV-photolithography and wet etching is presented. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental outcome shows that the focal length of the LCMLA can be tuned easily by only changing the root mean square value of the voltage signal applied. The developed LCMLA is further integrated with a main lens and an imaging sensor to construct a LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters such as three dimensional (3D) depth, positioning, and motion expression are given. The depth resolution is discussed in detail. Experiments are carried out to obtain the static and dynamic 3D information of objects chosen.
Automated flight path planning for virtual endoscopy.
Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S
1998-05-01
In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.
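The core idea, iteratively correcting a path toward the medial axis, can be sketched in 2D by using the distance-from-wall map as a potential: gradient ascent on that map pushes each path point toward the centerline. This is a simplified stand-in for the paper's method, which operates on 3D volumes:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def center_path(path, lumen, n_iter=20, step=0.5):
    """Push a flight path toward the medial axis of a 2-D lumen mask.

    path  : (N, 2) array of (row, col) points inside the lumen
    lumen : boolean mask, True inside the structure
    Each iteration moves every point a small step up the gradient of
    the distance-from-wall map, i.e. toward the centerline.
    """
    dist = distance_transform_edt(lumen)     # distance to nearest wall
    gy, gx = np.gradient(dist)
    path = path.astype(float).copy()
    for _ in range(n_iter):
        iy = np.clip(path[:, 0].round().astype(int), 0, lumen.shape[0] - 1)
        ix = np.clip(path[:, 1].round().astype(int), 0, lumen.shape[1] - 1)
        path[:, 0] += step * gy[iy, ix]
        path[:, 1] += step * gx[iy, ix]
    return path
```

A full implementation would also smooth the path and resample it to even spacing, as the abstract's stable-viewing-direction requirement implies.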
Dust as a versatile matter for high-temperature plasma diagnostic.
Wang, Zhehui; Ticos, Catalin M
2008-10-01
Dust varies in size from a few nanometers to a fraction of a millimeter. Dust also offers essentially unlimited choices in material composition and structure. The potential of dust for high-temperature plasma diagnostics remains largely untapped. The principles of dust spectroscopy to measure internal magnetic field, microparticle tracer velocimetry to measure plasma flow, and dust photometry to measure heat flux are described. The two main components of the different dust diagnostics are a dust injector and a dust imaging system. The dust injector delivers a certain number of dust grains into a plasma. The imaging system collects and selectively detects photons resulting from dust-plasma interaction. A single dust grain gives the local plasma quantity; a collection of dust grains together reveals either two-dimensional (using one or two imaging cameras) or three-dimensional (using two or more imaging cameras) structures of the measured quantity. A generic conceptual design suitable for all three types of dust diagnostics is presented.
Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas
2009-06-23
The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.
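The similarity-scoring step can be illustrated with plain normalised correlation between two equal-length pattern vectors. This toy stands in for the paper's (unspecified) combination of algorithms, and the stripe vectors are invented:

```python
# Hypothetical similarity score between two scanned pattern samples,
# here Pearson-style normalised correlation of intensity vectors.
import math

def similarity(a, b):
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma)**2 for x in a) * sum((y - mb)**2 for y in b))
    return num / den if den else 0.0

stripes   = [0, 1, 0, 1, 1, 0, 1, 0]
same_cat  = [0, 1, 0, 1, 0, 0, 1, 0]   # near-identical pattern
other_cat = [1, 0, 1, 0, 0, 1, 0, 1]   # inverted pattern
print(similarity(stripes, same_cat), similarity(stripes, other_cat))
```

Ranking a database of candidates by such a score, as the abstract suggests, reduces the matching task to inspecting only the top few hits.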
Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N. Samba; Gopalaswamy, Arjun M.; Karanth, K. Ullas
2009-01-01
The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin. PMID:19324633
Maurice, Matthew J; Kaouk, Jihad H
2017-12-01
To assess the feasibility of radical perineal cystoprostatectomy using the latest-generation, purpose-built single-port robotic surgical system. In two male cadavers, the da Vinci® SP1098 Surgical System (Intuitive Surgical, Sunnyvale, CA, USA) was used to perform radical perineal cystoprostatectomy and bilateral extended pelvic lymph node dissection (ePLND). New features in this model include enhanced high-definition three-dimensional optics, improved instrument manoeuvrability, and a real-time instrument tracking and guidance system. The surgery was accomplished through a 3-cm perineal incision via a novel robotic single-port system, which accommodates three double-jointed articulating robotic instruments, an articulating camera, and an accessory laparoscopic instrument. The primary outcomes were technical feasibility, intraoperative complications, and total robotic operative time. The cases were completed successfully without conversion. There were no accidental punctures or lacerations. The robotic operative times were 197 and 202 min. In this preclinical model, robotic radical perineal cystoprostatectomy and ePLND were feasible using the SP1098 robotic platform. Further investigation is needed to assess the feasibility of urinary diversion using this novel approach and new technology. © 2017 The Authors. BJU International © 2017 BJU International. Published by John Wiley & Sons Ltd.
Three-dimensional deformation of orthodontic brackets
Melenka, Garrett W; Nobes, David S; Major, Paul W
2013-01-01
Braces are used by orthodontists to correct the misalignment of teeth in the mouth. Archwire rotation is a particular procedure used to correct tooth inclination. Wire rotation can result in deformation to the orthodontic brackets, and an orthodontic torque simulator has been designed to examine this wire–bracket interaction. An optical technique has been employed to measure the deformation due to size and geometric constraints of the orthodontic brackets. Images of orthodontic brackets are collected using a stereo microscope and two charge-coupled device cameras, and deformation of orthodontic brackets is measured using a three-dimensional digital image correlation technique. The three-dimensional deformation of orthodontic brackets will be evaluated. The repeatability of the three-dimensional digital image correlation measurement method was evaluated by performing 30 archwire rotation tests using the same bracket and archwire. Finally, five Damon 3MX and five In-Ovation R self-ligating brackets will be compared using this technique to demonstrate the effect of archwire rotation on bracket design. PMID:23762201
Three-dimensional deformation of orthodontic brackets.
Melenka, Garrett W; Nobes, David S; Major, Paul W; Carey, Jason P
2013-01-01
Braces are used by orthodontists to correct the misalignment of teeth in the mouth. Archwire rotation is a particular procedure used to correct tooth inclination. Wire rotation can result in deformation to the orthodontic brackets, and an orthodontic torque simulator has been designed to examine this wire-bracket interaction. An optical technique has been employed to measure the deformation due to size and geometric constraints of the orthodontic brackets. Images of orthodontic brackets are collected using a stereo microscope and two charge-coupled device cameras, and deformation of orthodontic brackets is measured using a three-dimensional digital image correlation technique. The three-dimensional deformation of orthodontic brackets will be evaluated. The repeatability of the three-dimensional digital image correlation measurement method was evaluated by performing 30 archwire rotation tests using the same bracket and archwire. Finally, five Damon 3MX and five In-Ovation R self-ligating brackets will be compared using this technique to demonstrate the effect of archwire rotation on bracket design.
Producing a Linear Laser System for 3D Modeling of Small Objects
NASA Astrophysics Data System (ADS)
Amini, A. Sh.; Mozaffar, M. H.
2012-07-01
Today, three-dimensional modeling of objects is used in many applications, such as documentation of ancient heritage, quality control, reverse engineering, and animation. Accordingly, there are a variety of methods for producing three-dimensional models. In this paper, a 3D modeling system is developed based on photogrammetry, using image processing and laser-line extraction from images. In this method the laser beam profile is projected onto the body of the object; by acquiring video images and extracting the laser line from the frames, three-dimensional coordinates of the object can be obtained. First, the hardware, including the cameras and laser system, was designed and implemented. The system was then calibrated. Finally, the system software for three-dimensional data extraction was implemented. The system was tested by modeling a number of objects. The results showed that the system offers benefits such as low cost, appropriate speed, and acceptable accuracy in 3D modeling of objects.
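The two core steps the abstract outlines, extracting the laser line from each frame and triangulating depth from its displacement, can be sketched in a few lines. The geometry (baseline, focal length) and the intensity values below are invented, not the system's calibration:

```python
# Toy laser-line scanner: per-row intensity peak, then triangulation.
def laser_column(row):
    """Column index of the intensity peak in one image row."""
    return max(range(len(row)), key=lambda c: row[c])

def depth(col, ref_col=0, baseline_mm=100.0, focal_px=500.0):
    """Toy triangulation: depth inversely proportional to the line's shift."""
    shift = col - ref_col
    return baseline_mm * focal_px / shift if shift else float("inf")

frame = [
    [0, 3, 250, 2, 0],   # laser peak at column 2
    [0, 2, 4, 251, 1],   # laser peak at column 3
]
cols = [laser_column(r) for r in frame]
print(cols, [round(depth(c)) for c in cols])   # [2, 3] [25000, 16667]
```

A real system would fit the peak to sub-pixel precision and use the calibrated camera-laser geometry, but the inverse relation between line shift and depth is the heart of the method.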
Initialization and Simulation of Three-Dimensional Aircraft Wake Vortices
NASA Technical Reports Server (NTRS)
Ash, Robert L.; Zheng, Z. C.
1997-01-01
This paper studies the effects of axial velocity profiles on vortex decay, in order to properly initialize and simulate three-dimensional wake vortex flow. Analytical relationships are obtained based on a single vortex model and computational simulations are performed for a rather practical vortex wake, which show that the single vortex analytical relations can still be applicable at certain streamwise sections of three-dimensional wake vortices.
Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya
Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z > 30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free-electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.
Light field geometry of a Standard Plenoptic Camera.
Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljevic, Vladan; Fernández, Juan Carlos Jácome
2014-11-03
The Standard Plenoptic Camera (SPC) is an innovation in photography, allowing for acquiring two-dimensional images focused at different depths, from a single exposure. Contrary to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of virtual lenses which correspond to an equivalent camera array are derived. On the basis of paraxial approximation, a ray tracing model employing linear equations has been developed and implemented using Matlab. The optics simulation tool Zemax is utilized for validation purposes. By designing a realistic SPC, experiments demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the simulation in Zemax, whereas baseline estimations indicate no significant difference. Applying the proposed methodology will enable an alternative to the traditional depth map acquisition by disparity analysis.
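The paraxial linear ray model the paper builds on can be reproduced with 2×2 ray-transfer matrices: free-space propagation and a thin lens, composed by matrix multiplication. The focal length and spacings below are invented for illustration, not the SPC design from the paper:

```python
# Paraxial ray-transfer (ABCD) matrices: a ray is (height h, angle u),
# and each optical element is a 2x2 matrix acting on that vector.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def free_space(d):
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

# An object at 2f of an f = 50 mm lens images at 2f on the other side.
f, s = 50.0, 100.0
system = matmul(free_space(s), matmul(thin_lens(f), free_space(s)))
h, u = 10.0, -0.1                        # ray leaving an object point
h2 = system[0][0] * h + system[0][1] * u
print(h2)                                # -10.0: inverted, unit magnification
```

Because the B element of the composed matrix is zero at an image plane, the output height is independent of the launch angle, which is exactly the property a linear refocusing model exploits.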
Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor
Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya; ...
2016-11-29
Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z > 30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free-electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.
Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)
2001-01-01
In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.
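Calibrating a new image against an existing surface model amounts to finding the camera parameters that minimise reprojection error. A one-parameter toy version (pinhole projection, invented 3-D points, brute-force search over focal length) conveys the idea; the authors' method estimates the full calibration, not just focal length:

```python
# Toy calibration: recover the focal length that best reprojects
# known 3-D points onto their observed image positions.
def project(X, f):
    x, y, z = X
    return (f * x / z, f * y / z)

points_3d = [(1.0, 0.5, 4.0), (-0.5, 1.0, 5.0), (0.2, -0.8, 3.0)]
f_true = 800.0
observed = [project(X, f_true) for X in points_3d]   # synthetic observations

def error(f):
    return sum((u - u0)**2 + (v - v0)**2
               for X, (u0, v0) in zip(points_3d, observed)
               for u, v in [project(X, f)])

f_hat = min((700 + i for i in range(201)), key=error)   # brute-force 1-D search
print(f_hat)   # 800
```

With six or more parameters (pose plus intrinsics), the same objective is minimised with gradient-based or sampling methods rather than a grid, but the reprojection-error criterion is unchanged.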
Three-dimensional and multienergy gamma-ray simultaneous imaging by using a Si/CdTe Compton camera.
Suzuki, Yoshiyuki; Yamaguchi, Mitsutaka; Odaka, Hirokazu; Shimada, Hirofumi; Yoshida, Yukari; Torikai, Kota; Satoh, Takahiro; Arakawa, Kazuo; Kawachi, Naoki; Watanabe, Shigeki; Takeda, Shin'ichiro; Ishikawa, Shin-nosuke; Aono, Hiroyuki; Watanabe, Shin; Takahashi, Tadayuki; Nakano, Takashi
2013-06-01
To develop a silicon (Si) and cadmium telluride (CdTe) imaging Compton camera for biomedical applications on the basis of technologies used for astrophysical observation, and to test its capacity to perform three-dimensional (3D) imaging. All animal experiments were performed in accordance with the Animal Care and Experimentation Committee (Gunma University, Maebashi, Japan). Fluorine 18 fluorodeoxyglucose (FDG), iodine 131 ((131)I) methylnorcholestenol, and gallium 67 ((67)Ga) citrate, separately compacted into microtubes, were inserted subcutaneously into a Wistar rat, and the distribution of the radioisotope compounds was determined with 3D imaging by using the Compton camera after the rat was sacrificed (ex vivo model). In a separate experiment, indium 111 ((111)In) chloride and (131)I-methylnorcholestenol were injected into a rat intravenously, and copper 64 ((64)Cu) chloride was administered into the stomach orally just before imaging. The isotope distributions were determined with 3D imaging after sacrifice by means of the list-mode expectation-maximization maximum-likelihood method. The Si/CdTe Compton camera demonstrated its 3D multinuclear imaging capability by clearly separating the distributions of FDG, (131)I-methylnorcholestenol, and (67)Ga-citrate in a test-tube-implanted ex vivo model. In the more physiologic model with tail vein injection prior to sacrifice, the distributions of (131)I-methylnorcholestenol and (64)Cu-chloride were demonstrated with 3D imaging, and the difference in distribution of the two isotopes was successfully imaged, although the accumulation of (111)In-chloride was difficult to visualize because of blurring in the low-energy region. The Si/CdTe Compton camera clearly and simultaneously resolved the distribution of multiple isotopes in 3D imaging in the ex vivo model.
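The expectation-maximization maximum-likelihood (MLEM) reconstruction used above rests on a multiplicative update, x_j ← x_j · Σ_i A_ij y_i/(Ax)_i / Σ_i A_ij. A two-pixel, two-detector toy (system matrix and activities invented) shows the iteration recovering the true source distribution from noiseless counts:

```python
# Toy MLEM reconstruction with a 2x2 system matrix.
A = [[0.9, 0.1],
     [0.2, 0.8]]                 # A[i][j]: detector i sensitivity to pixel j
x_true = [4.0, 1.0]              # true activities
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]  # counts

x = [1.0, 1.0]                   # flat initial estimate
for _ in range(200):
    fwd = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    for j in range(2):
        s = sum(A[i][j] for i in range(2))          # sensitivity of pixel j
        x[j] *= sum(A[i][j] * y[i] / fwd[i] for i in range(2)) / s
print([round(v, 2) for v in x])  # converges towards the true [4.0, 1.0]
```

List-mode MLEM, as in the paper, applies the same update event by event over Compton-cone response functions instead of a dense matrix, which is what makes it practical for sparse Compton data.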
Three-dimensional illumination procedure for photodynamic therapy of dermatology
NASA Astrophysics Data System (ADS)
Hu, Xiao-ming; Zhang, Feng-juan; Dong, Fei; Zhou, Ya
2014-09-01
Light dosimetry is an important parameter that affects the efficacy of photodynamic therapy (PDT). However, the irregular morphologies of lesions complicate lesion segmentation and light irradiance adjustment. Therefore, this study developed an illumination demo system comprising a camera, a digital projector, and a computing unit to solve these problems. A three-dimensional model of a lesion was reconstructed using the developed system. Hierarchical segmentation was achieved with the superpixel algorithm. The expected light dosimetry on the targeted lesion was achieved with the proposed illumination procedure. Accurate control and optimization of light delivery can improve the efficacy of PDT.
Real-time stereo generation for surgical vision during minimal invasive robotic surgery
NASA Astrophysics Data System (ADS)
Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod
2016-03-01
This paper proposes a framework for 3D surgical vision in minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of in-vivo live surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection of the two interlaced images gives a smooth, strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.
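The interlacing step for a polarised stereo display can be sketched directly: alternate rows are taken from the left and right views (a common passive-stereo layout; the details of the paper's projector rig are assumed, and the 4×4 "images" are placeholders):

```python
# Row-interlace two rectified views for a passive polarised 3-D display.
def interlace(left, right):
    """Even rows from the left image, odd rows from the right image."""
    return [left[r] if r % 2 == 0 else right[r] for r in range(len(left))]

left  = [["L"] * 4 for _ in range(4)]
right = [["R"] * 4 for _ in range(4)]
frame = interlace(left, right)
print(["".join(row) for row in frame])   # ['LLLL', 'RRRR', 'LLLL', 'RRRR']
```

With matched polarising filters over alternate rows, each eye then receives only its own view, which is what produces the strain-free stereo effect described.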
Markerless 3D motion capture for animal locomotion studies
Sellers, William Irvin; Hirasaki, Eishi
2014-01-01
ABSTRACT Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869
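At the core of multi-camera photogrammetric reconstruction is stereo triangulation; for a rectified pair it reduces to depth = focal length × baseline / disparity. The focal length, baseline, and disparities below are invented for illustration:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
def depth_from_disparity(d_px, f_px=1200.0, baseline_m=0.5):
    """Toy rectified-stereo depth; d_px is the pixel disparity."""
    return f_px * baseline_m / d_px

for d in (60.0, 30.0, 15.0):
    print(round(depth_from_disparity(d), 1))   # 10.0, 20.0, 40.0 metres
```

The inverse relation explains why calibration from camera separation matters so much in the field: depth error grows quadratically with distance as disparity shrinks.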
2018-01-01
Frequent checks on livestock's body growth can help reduce problems related to cow infertility and other welfare implications, and aid in recognizing health anomalies. In the last ten years, optical methods have been proposed to extract information on various parameters while avoiding direct contact with the animals' bodies, which generally causes stress. This research aims to evaluate a new monitoring system suitable for frequently checking calves' and cows' growth through a three-dimensional analysis of portions of their bodies. The innovative system is based on multiple acquisitions from a low-cost structured-light depth camera (Microsoft Kinect™ v1). The metrological performance of the instrument is proved through an uncertainty analysis and a proper calibration procedure. The paper reports application of the depth camera for extraction of different body parameters. Expanded uncertainty ranging between 3 and 15 mm is reported in the case of ten repeated measurements. Coefficients of determination R² > 0.84 and deviations lower than 6% from manual measurements were in general detected in the case of head size, hips distance, withers-to-tail length, chest girth, hips height, and withers height. Conversely, lower performances were recognized in the case of animal depth (R² = 0.74) and back slope (R² = 0.12). PMID:29495290
Uncertainty Propagation Methods for High-Dimensional Complex Systems
NASA Astrophysics Data System (ADS)
Mukherjee, Arpan
Researchers are developing ever smaller aircraft called Micro Aerial Vehicles (MAVs). The Space Robotics Group has joined the field by developing a dragonfly-inspired MAV. This thesis presents two contributions to this project. The first is the development of a dynamical model of the internal MAV components to be used for tuning design parameters and as a future plant model. This model is derived using the Lagrangian method and differs from others because it accounts for the internal dynamics of the system. The second contribution of this thesis is an estimation algorithm that can be used to determine prototype performance and verify the dynamical model from the first part. Based on the Gauss-Newton Batch Estimator, this algorithm uses a single camera and known points of interest on the wing to estimate the wing kinematic angles. Unlike other single-camera methods, this method is probabilistically based rather than being geometric.
NASA Technical Reports Server (NTRS)
Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.
2008-01-01
The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. The performance of the NIC in laboratory and accelerator tests is presented.
A combined stereo-photogrammetry and underwater-video system to study group composition of dolphins
NASA Astrophysics Data System (ADS)
Bräger, S.; Chong, A.; Dawson, S.; Slooten, E.; Würsig, B.
1999-11-01
One reason for the paucity of knowledge of dolphin social structure is the difficulty of measuring individual dolphins. In Hector's dolphins, Cephalorhynchus hectori, total body length is a function of age, and sex can be determined by individual colouration pattern. We developed a novel system combining stereo-photogrammetry and underwater video to record dolphin group composition. The system consists of two downward-looking single-lens-reflex (SLR) cameras and a Hi8 video camera in an underwater housing mounted on a small boat. Bow-riding Hector's dolphins were photographed and video-taped at close range in coastal waters around the South Island of New Zealand. Three-dimensional, stereoscopic measurements of the distance between the blowhole and the anterior margin of the dorsal fin (BH-DF) were calibrated by a suspended frame with reference points. Growth functions derived from measurements of 53 dead Hector's dolphins (29 female : 24 male) provided the necessary reference data. For the analysis, the measurements were synchronised with corresponding underwater video of the genital area. A total of 27 successful measurements (8 with corresponding sex) were obtained, showing that this new system is potentially useful for cetacean studies.
A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.
Leung, Brian; Chau, Tom
2010-01-01
The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.
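The two reported figures follow directly from the switch's confusion counts. A short sketch with hypothetical counts chosen to land on the paper's 82% sensitivity and 80% specificity:

```python
# Sensitivity and specificity from confusion-matrix counts
# (tp/fn/tn/fp values are illustrative, not the study's raw data).
def sensitivity(tp, fn):
    """Fraction of true tongue protrusions the switch detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-protrusion intervals correctly rejected."""
    return tn / (tn + fp)

print(round(sensitivity(82, 18), 2), round(specificity(80, 20), 2))
```

Reporting both matters for an access switch: high sensitivity keeps the activity responsive, while high specificity prevents spurious activations between intended gestures.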
Opportunity Landing Spot Panorama (3-D Model)
NASA Technical Reports Server (NTRS)
2004-01-01
The rocky outcrop traversed by the Mars Exploration Rover Opportunity is visible in this three-dimensional model of the rover's landing site. Opportunity has acquired close-up images along the way, and scientists are using the rover's instruments to closely examine portions of interest. The white fragments that look crumpled near the center of the image are portions of the airbags. Distant scenery is displayed on a spherical backdrop or 'billboard' for context. Artifacts near the top rim of the crater are a result of the transition between the three-dimensional model and the billboard. Portions of the terrain model lacking sufficient data appear as blank spaces or gaps, colored reddish-brown for better viewing. This image was generated using special software from NASA's Ames Research Center and a mosaic of images taken by the rover's panoramic camera.
"Immersive Education" Submerges Students in Online Worlds Made for Learning
ERIC Educational Resources Information Center
Foster, Andrea L.
2007-01-01
Immersive Education is a multimillion-dollar project devoted to building virtual-reality software exclusively for education within commercial and nonprofit fantasy spaces like Second Life. The project combines interactive three-dimensional graphics, Web cameras, Internet-based telephony, and other digital media. Some critics have complained that…
Computational Video for Collaborative Applications
2003-03-01
“Plenoptic Modeling: An Image-Based Rendering System.” SIGGRAPH 95, 39-46. [18] McMillan, L. An Image-Based Approach to Three-Dimensional Computer... Plenoptic modeling and rendering from image sequences taken by hand-held camera. Proc. DAGM 99, pages 94–101. [8] Y. Horry, K. Anjyo, and K. Arai
USDA-ARS?s Scientific Manuscript database
Significant advancements in photogrammetric Structure-from-Motion (SfM) software, coupled with improvements in the quality and resolution of smartphone cameras, have made it possible to create ultra-fine resolution three-dimensional models of physical objects using an ordinary smartphone. Here we pre...
NASA Astrophysics Data System (ADS)
Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake
2003-07-01
4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats.
The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
The use of global image characteristics for neural network pattern recognitions
NASA Astrophysics Data System (ADS)
Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.
2017-04-01
A recognition system is considered in which information is conveyed by images of symbols generated by a television camera. Coefficients of a two-dimensional Fourier transform, generated in a special way, are used as object descriptors. A single-layer neural network trained on reference images is used to solve the classification task. Fast learning of the neural network, with a single calculation of the coefficients per neuron, is applied.
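The pipeline the abstract outlines can be sketched with low-order Fourier magnitudes as translation-insensitive descriptors, followed by a nearest-reference classifier standing in for the single-layer network. The symbol signals and the choice of descriptor order are invented:

```python
# Fourier-magnitude descriptors + nearest-reference classification.
import cmath

def dft_mag(signal, k_max=3):
    """Magnitudes of low-order DFT coefficients (shift-invariant)."""
    n = len(signal)
    return [abs(sum(s * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, s in enumerate(signal))) for k in range(1, k_max)]

refs = {"bar":  [1, 1, 1, 1, 0, 0, 0, 0],
        "dots": [1, 0, 1, 0, 1, 0, 1, 0]}
ref_desc = {name: dft_mag(sig) for name, sig in refs.items()}

def classify(signal):
    d = dft_mag(signal)
    return min(ref_desc, key=lambda n: sum((a - b)**2
                                           for a, b in zip(d, ref_desc[n])))

shifted_bar = [0, 0, 1, 1, 1, 1, 0, 0]   # same symbol, translated
print(classify(shifted_bar))              # 'bar'
```

Using magnitudes discards phase, which is what makes the descriptors insensitive to where the symbol sits in the frame; a trained single-layer network would replace the nearest-neighbour rule with learned weights.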
Single-Photon Detectors for Time-of-Flight Range Imaging
NASA Astrophysics Data System (ADS)
Stoppa, David; Simoni, Andrea
We live in a three-dimensional (3D) world, and thanks to the stereoscopic vision provided by our two eyes, in combination with the powerful neural network of the brain, we are able to perceive the distance of objects. Nevertheless, despite the huge market volume of digital cameras, solid-state image sensors can capture only a two-dimensional (2D) projection of the scene under observation, losing a variable of paramount importance, i.e., the scene depth. On the contrary, 3D vision tools could offer amazing possibilities of improvement in many areas thanks to the increased accuracy and reliability of the models representing the environment. Among the great variety of distance-measuring techniques and detection systems available, this chapter treats only the emerging niche of solid-state, scannerless systems based on the TOF principle and using a detector with SPAD-based pixels. The chapter is organized into three main parts. First, TOF systems and measuring techniques are described. In the second part, the most meaningful sensor architectures for scannerless TOF distance measurement are analyzed, focusing on the circuit building blocks required by time-resolved image sensors. Finally, a performance summary is provided and a perspective view of near-future developments of SPAD-TOF sensors is given.
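The direct TOF principle itself reduces to one line: half the round-trip time multiplied by the speed of light. A minimal sketch with an invented nanosecond-scale timestamp:

```python
# Direct time-of-flight ranging: distance = c * t_round / 2.
C = 299_792_458.0                      # speed of light, m/s

def tof_distance(t_round_s):
    """Distance for a measured round-trip time in seconds."""
    return C * t_round_s / 2.0

print(round(tof_distance(10e-9), 3))   # ~1.499 m for a 10 ns round trip
```

The arithmetic also explains the sensor challenge the chapter addresses: resolving millimetres requires timing photon arrivals to a few picoseconds, which is precisely what SPAD-based pixels with time-to-digital converters provide.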
Reconstructing the flight kinematics of swarming and mating in wild mosquitoes
Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.
2012-01-01
We describe a novel tracking system for reconstructing three-dimensional tracks of individual mosquitoes in wild swarms and present the results of validating the system by filming swarms and mating events of the malaria mosquito Anopheles gambiae in Mali. The tracking system is designed to address noisy, low frame-rate (25 frames per second) video streams from a stereo camera system. Because flying A. gambiae move at 1–4 m s−1, they appear as faded streaks in the images or sometimes do not appear at all. We provide an adaptive algorithm to search for missing streaks and a likelihood function that uses streak endpoints to extract velocity information. A modified multi-hypothesis tracker probabilistically addresses occlusions and a particle filter estimates the trajectories. The output of the tracking algorithm is a set of track segments with an average length of 0.6–1 s. The segments are verified and combined under human supervision to create individual tracks up to the duration of the video (90 s). We evaluate tracking performance using an established metric for multi-target tracking and validate the accuracy using independent stereo measurements of a single swarm. Three-dimensional reconstructions of A. gambiae swarming and mating events are presented. PMID:22628212
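The particle-filter stage of a tracker like this can be illustrated with a minimal 1D constant-velocity example; a toy sketch under assumed noise levels, not the authors' multi-hypothesis system:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(measurements, n=2000, q=0.05, r=0.2):
    """Bootstrap particle filter for a 1D constant-velocity target;
    each particle carries (position, velocity)."""
    parts = rng.normal([measurements[0], 0.0], [0.5, 0.5], size=(n, 2))
    track = []
    for z in measurements:
        parts[:, 0] += parts[:, 1]                       # predict: x += v
        parts += rng.normal(0, q, parts.shape)           # process noise
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2)  # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n, n, p=w)                      # resample by weight
        parts = parts[idx]
        track.append(parts[:, 0].mean())                 # posterior mean estimate
    return np.array(track)

truth = 0.1 * np.arange(40)                              # target moving 0.1 units/frame
zs = truth + rng.normal(0, 0.2, truth.size)              # noisy observations
est = particle_filter(zs)
print(np.abs(est - truth).mean())                        # small tracking error
```

The paper's filter additionally handles missing streaks and occlusions; this sketch only shows the predict-weight-resample loop that underlies the trajectory estimate.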
Vertical dynamic deflection measurement in concrete beams with the Microsoft Kinect.
Qi, Xiaojuan; Lichti, Derek; El-Badry, Mamdouh; Chow, Jacky; Ang, Kathleen
2014-02-19
The Microsoft Kinect is arguably the most popular RGB-D camera currently on the market, partially due to its low cost. It offers many advantages for the measurement of dynamic phenomena since it can directly measure three-dimensional coordinates of objects at video frame rate using a single sensor. This paper presents the results of an investigation into the development of a Microsoft Kinect-based system for measuring the deflection of reinforced concrete beams subjected to cyclic loads. New segmentation methods for object extraction from the Kinect's depth imagery and vertical displacement reconstruction algorithms have been developed and implemented to reconstruct the time-dependent displacement of concrete beams tested in laboratory conditions. The results demonstrate that the amplitude and frequency of the vertical displacements can be reconstructed with submillimetre and milliHz-level precision and accuracy, respectively.
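Recovering the amplitude and frequency of a cyclic deflection from a reconstructed displacement time series can be sketched with a simple spectral peak estimate (the function name and frame rate are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def dominant_motion(displacement, fs):
    """Amplitude and frequency of the strongest sinusoidal component
    of a displacement time series, via the single-sided spectrum."""
    n = displacement.size
    spec = np.fft.rfft(displacement - displacement.mean())
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmax(np.abs(spec))
    amplitude = 2.0 * np.abs(spec[k]) / n   # single-sided amplitude
    return amplitude, freqs[k]

fs = 30.0                                   # Kinect-like frame rate, Hz
t = np.arange(0, 10, 1 / fs)
x = 0.8 * np.sin(2 * np.pi * 2.0 * t)       # simulated 0.8 mm deflection at 2 Hz
amp, f = dominant_motion(x, fs)
print(round(amp, 2), round(f, 2))           # ≈ 0.8 mm, ≈ 2.0 Hz
```

The frequency resolution here is 1/(record length) = 0.1 Hz; milliHz-level precision as reported in the paper requires longer records or interpolated peak estimation.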
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera
NASA Astrophysics Data System (ADS)
Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.
2017-12-01
From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture images of winter precipitation particles from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, MASC results so far have usually been presented as monthly or seasonal statistics, with particle sizes given as histograms; no previous study has used the MASC to examine a single storm, and none has used it to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of the new technique for PSD computation based on MASC images. Overall, this is the first study of the MASC-based PSD. We present PSD MASC experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the single-camera PSDs measured by the MASC is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over the three cameras) with that of the collocated 2D Video Disdrometer, and observe good agreement between the two sets of results.
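The generic PSD computation described above, binning particle sizes and normalizing by bin width and sampled volume, can be sketched as follows (the diameters, bins, and sample volume are hypothetical, and this omits the MASC-specific image processing):

```python
import numpy as np

def particle_size_distribution(diameters_mm, bin_edges_mm, sample_volume_m3):
    """N(D) in m^-3 mm^-1: counts per size bin, normalized by bin
    width and sampled volume (generic PSD definition, not the paper's
    exact MASC processing chain)."""
    counts, edges = np.histogram(diameters_mm, bins=bin_edges_mm)
    widths = np.diff(edges)
    return counts / (widths * sample_volume_m3)

# hypothetical maximum dimensions from one camera, in mm
d = np.array([0.6, 0.9, 1.1, 1.4, 1.5, 2.2, 2.8, 3.5])
edges = np.arange(0.5, 4.5, 1.0)            # 1 mm size bins
psd = particle_size_distribution(d, edges, sample_volume_m3=0.01)
print(psd)                                   # concentration per size bin
```

A per-camera PSD computed this way for each of the three MASC views, then averaged, corresponds to the "mean PSD" the abstract compares against the 2D Video Disdrometer.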
History of Hubble Space Telescope (HST)
1995-01-01
These eerie, dark, pillar-like structures are actually columns of cool interstellar hydrogen gas and dust that are also incubators for new stars. The pillars protrude from the interior wall of a dark molecular cloud like stalagmites from the floor of a cavern. They are part of the Eagle Nebula (also called M16), a nearby star-forming region 7,000 light-years away, in the constellation Serpens. The ultraviolet light from hot, massive, newborn stars is responsible for illuminating the convoluted surfaces of the columns and the ghostly streamers of gas boiling away from their surfaces, producing the dramatic visual effects that highlight the three-dimensional nature of the clouds. This image was taken on April 1, 1995 with the Hubble Space Telescope Wide Field Planetary Camera 2. The color image is constructed from three separate images taken in the light of emission from different types of atoms. Red shows emissions from singly-ionized sulfur atoms, green shows emissions from hydrogen, and blue shows light emitted by doubly-ionized oxygen atoms.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
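The image-differencing step described above (isolating the laser spot by removing pixels common to the pre-illumination frame) might be sketched like this; the array sizes and threshold are illustrative choices, not values from the patent:

```python
import numpy as np

def find_laser_spot(before, after, threshold=50):
    """Locate the laser spot as the centroid of pixels that brightened
    between the pre- and post-illumination frames."""
    diff = after.astype(int) - before.astype(int)
    ys, xs = np.nonzero(diff > threshold)     # pixels unique to the laser frame
    if ys.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())  # (column, row) centroid

before = np.full((100, 100), 30, dtype=np.uint8)     # background frame
after = before.copy()
after[40:43, 60:63] = 220                            # bright 3x3 laser spot
print(find_laser_spot(before, after))                # → (61.0, 41.0)
```

The disparity between this centroid and the system's reference point is what feeds the ranging analysis in the invention.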
Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish
2018-01-01
Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
The TolTEC Camera for the LMT Telescope
NASA Astrophysics Data System (ADS)
Bryan, Sean
2018-01-01
TolTEC is a new camera being built for the 50-meter Large Millimeter-wave Telescope (LMT) on Sierra Negra in Puebla, Mexico. The instrument will discover and characterize distant galaxies by detecting the thermal emission of dust heated by starlight. The polarimetric capabilities of the camera will measure magnetic fields in star-forming regions in the Milky Way. The optical design of the camera uses mirrors, lenses, and dichroics to simultaneously couple a 4 arcminute diameter field of view onto three single-band focal planes at 150, 220, and 280 GHz. The 7000 polarization-selective detectors are single-band horn-coupled LEKID detectors fabricated at NIST. A rotating half wave plate operates at ambient temperature to modulate the polarized signal. In addition to the galactic and extragalactic surveys already planned, TolTEC installed at the LMT will provide open observing time to the community.
A three-dimensional radiation image display on a real space image created via photogrammetry
NASA Astrophysics Data System (ADS)
Sato, Y.; Ozawa, S.; Tanifuji, Y.; Torii, T.
2018-03-01
The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the occurrence of a large tsunami caused by the Great East Japan Earthquake of March 11, 2011. The radiation distribution measurements inside the FDNPS buildings are indispensable to execute decommissioning tasks in the reactor buildings. We have developed a three-dimensional (3D) image reconstruction method for radioactive substances using a compact Compton camera. Moreover, we succeeded in visually recognizing the position of radioactive substances in real space by the integration of 3D radiation images and the 3D photo-model created using photogrammetry.
Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target
NASA Astrophysics Data System (ADS)
Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.
2016-06-01
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. Therefore, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated based on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.
Wing Twist Measurements at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Burner, Alpheus W.; Wahls, Richard A.; Goad, William K.
1996-01-01
A technique for measuring wing twist currently in use at the National Transonic Facility is described. The technique is based upon a single camera photogrammetric determination of two dimensional coordinates with a fixed (and known) third dimensional coordinate. The wing twist is found from a conformal transformation between wind-on and wind-off 2-D coordinates in the plane of rotation. The advantages and limitations of the technique as well as the rationale for selection of this particular technique are discussed. Examples are presented to illustrate run-to-run and test-to-test repeatability of the technique in air mode. Examples of wing twist in cryogenic nitrogen mode are also presented.
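The wind-off to wind-on conformal (similarity) transformation can be fitted by linear least squares in complex coordinates, with the twist angle read off the rotation part. This is a sketch of the general reduction under simulated data, not the NTF's actual implementation:

```python
import numpy as np

def twist_angle(wind_off, wind_on):
    """Rotation angle (deg) of the conformal transform z' = a*z + b
    best fitting wind-off -> wind-on 2D target coordinates; the
    complex coefficient a encodes rotation (angle) and scale (|a|)."""
    z0 = wind_off[:, 0] + 1j * wind_off[:, 1]
    z1 = wind_on[:, 0] + 1j * wind_on[:, 1]
    A = np.column_stack([z0, np.ones_like(z0)])
    (a, b), *_ = np.linalg.lstsq(A, z1, rcond=None)
    return np.degrees(np.angle(a))

# hypothetical wing targets in the plane of rotation
targets = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.2], [0.0, 0.2]])
theta = np.radians(1.5)                        # simulate 1.5 deg of twist
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rotated = targets @ R.T + [0.01, -0.02]        # twist plus a small translation
print(round(twist_angle(targets, rotated), 3)) # → 1.5
```

Because the fit is linear in (a, b), it is robust to the rigid-body translation between wind-on and wind-off images and isolates the rotation of interest.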
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
Observation of the wing deformation and the CFD study of cicada
NASA Astrophysics Data System (ADS)
Dai, Hu; Mohd Adam Das, Shahrizan; Luo, Haoxiang
2011-11-01
We studied the wing properties and kinematics of cicadas when the 13-year species emerged in amazingly large numbers in middle Tennessee during May 2011. Using a high-speed camera, we recorded the wing motion of the insect and then reconstructed the three-dimensional wing kinematics using video digitization software. Like many other insects, the cicada deforms its wings asymmetrically between the downstroke and upstroke half-cycles, and this particular deformation pattern benefits the production of lift and propulsive forces. Both two-dimensional and three-dimensional CFD studies are carried out based on the reconstructed wing motion. The implication of the study for the role of aerodynamic forces in wing deformation will be discussed. This work is sponsored by the NSF.
A low-cost and portable realization on fringe projection three-dimensional measurement
NASA Astrophysics Data System (ADS)
Xiao, Suzhi; Tao, Wei; Zhao, Hui
2015-12-01
Fringe projection three-dimensional measurement is applied in a wide range of industrial applications. Traditional fringe projection systems have the disadvantages of high expense, large size, and complicated calibration requirements. In this paper we introduce a low-cost and portable realization of three-dimensional measurement with a Pico projector. It has the advantages of low cost, compact physical size, and flexible configuration. For the proposed fringe projection system, there are no parallelism or perpendicularity restrictions on the relative alignment of the camera and projector during installation. Moreover, a plane-based calibration method is adopted that avoids critical requirements on the calibration system, such as an additional gauge block or a precise linear z stage. The error sources present in the proposed system are also discussed. The experimental results demonstrate the feasibility of the proposed low-cost and portable fringe projection system.
3D fluorescence anisotropy imaging using selective plane illumination microscopy.
Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico
2015-08-24
Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein.
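The per-pixel quantity such an experiment maps is the standard steady-state anisotropy; a minimal computation (with an assumed G-factor of 1, not a value from the paper) looks like:

```python
def anisotropy(i_par, i_perp, g=1.0):
    """Steady-state fluorescence anisotropy
    r = (I_par - G*I_perp) / (I_par + 2*G*I_perp),
    where G corrects for polarization bias in the detection path."""
    return (i_par - g * i_perp) / (i_par + 2 * g * i_perp)

# hypothetical parallel/perpendicular intensities for one pixel
print(anisotropy(300.0, 100.0))   # → 0.4
```

Applied pixel-by-pixel to the two polarization channels of each light-sheet image, this yields the 3D anisotropy maps the paper describes.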
Barrier Coverage for 3D Camera Sensor Networks
Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao
2017-01-01
Barrier coverage, an important research area with respect to camera sensor networks, consists of a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage such as local face-view barrier coverage and full-view barrier coverage typically assume that each intruder is considered as a point. However, the crucial feature (e.g., size) of the intruder should be taken into account in the real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder’s face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage with more practical considerations is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks. PMID:28771167
Navigating surgical fluorescence cameras using near-infrared optical tracking.
van Oosterom, Matthias; den Houting, David; van de Velde, Cornelis; van Leeuwen, Fijs
2018-05-01
Fluorescence guidance facilitates real-time intraoperative visualization of the tissue of interest. However, due to attenuation, the application of fluorescence guidance is restricted to superficial lesions. To overcome this shortcoming, we have previously applied three-dimensional surgical navigation to position the fluorescence camera in reach of the superficial fluorescent signal. Unfortunately, in open surgery, the near-infrared (NIR) optical tracking system (OTS) used for navigation also induced interference during NIR fluorescence imaging. To support future implementation of navigated fluorescence cameras, different aspects of this interference were characterized and solutions were sought. Two commercial fluorescence cameras for open surgery were studied in (surgical) phantom and human tissue setups using two different NIR OTSs and one light-emitting diode setup simulating an OTS. Following the outcome of these measurements, the OTS settings were optimized. Measurements indicated the OTS interference was caused by: (1) spectral overlap between the OTS light and the camera, (2) OTS light intensity, (3) OTS duty cycle, (4) OTS frequency, (5) fluorescence camera frequency, and (6) fluorescence camera sensitivity. By optimizing points 2 to 4, navigation of fluorescence cameras during open surgery could be facilitated. Optimization of OTS and camera compatibility can be used to support navigated fluorescence guidance concepts. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces in real time a two-dimensional cross-correlation of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
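A digital analogue of this optical correlator computes the 2D cross-correlation via FFTs and reads range off the disparity peak; the focal length and baseline below are hypothetical values for illustration:

```python
import numpy as np

def disparity_by_correlation(left, right):
    """Peak location of the circular 2D cross-correlation between the
    stereo images, computed via FFT; returns signed (dx, dy) shifts."""
    corr = np.fft.ifft2(np.fft.fft2(left) * np.conj(np.fft.fft2(right))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n_y, n_x = corr.shape
    if dx > n_x // 2: dx -= n_x               # wrap to signed shifts
    if dy > n_y // 2: dy -= n_y
    return int(dx), int(dy)

right = np.zeros((64, 64)); right[30:34, 20:24] = 1.0   # target in right view
left = np.roll(right, 5, axis=1)                        # 5-pixel disparity
print(disparity_by_correlation(left, right))            # → (5, 0)

# range from disparity: Z = f * B / d (pinhole stereo geometry)
f, B = 800.0, 0.1                                       # focal length (px), baseline (m)
print(f * B / disparity_by_correlation(left, right)[0]) # → 16.0
```

The optical processor in the patent produces the same correlation surface with light valves instead of FFTs; in both cases the correlation peak position encodes target range.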
A Real-Time Optical 3D Tracker for Head-Mounted Display Systems
1990-03-01
OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors; each position sensor has a dedicated processor board. The tracker is intended to enhance the usefulness of head-mounted display systems. [Nor88] Northern Digital, trade literature on OPTOTRAK, Northern Digital's three-dimensional tracker.
Settling dynamics of asymmetric rigid fibers
E.J. Tozzi; C Tim Scott; David Vahey; D.J. Klingenberg
2011-01-01
The three-dimensional motion of asymmetric rigid fibers settling under gravity in a quiescent fluid was experimentally measured using a pair of cameras located on a movable platform. The particle motion typically consisted of an initial transient after which the particle approached a steady rate of rotation about an axis parallel to the acceleration of gravity, with...
Advances in Heavy Ion Beam Probe Technology and Operation on MST
NASA Astrophysics Data System (ADS)
Demers, D. R.; Connor, K. A.; Schoch, P. M.; Radke, R. J.; Anderson, J. K.; Craig, D.; den Hartog, D. J.
2003-10-01
A technique to map the magnetic field of a plasma via spectral imaging is being developed with the Heavy Ion Beam Probe on the Madison Symmetric Torus. The technique will utilize two-dimensional images of the ion beam in the plasma, acquired by two CCD cameras, to generate a three-dimensional reconstruction of the beam trajectory. This trajectory, and the known beam ion mass, energy and charge-state, will be used to determine the magnetic field of the plasma. A suitable emission line has not yet been observed since radiation from the MST plasma is both broadband and intense. An effort to raise the emission intensity from the ion beam by increasing beam focus and current has been undertaken. Simulations of the accelerator ion optics and beam characteristics led to a technique, confirmed by experiment, that achieves a narrower beam and marked increase in ion current near the plasma surface. The improvements arising from these simulations will be discussed. Realization of the magnetic field mapping technique is contingent upon accurate reconstruction of the beam trajectory from the camera images. Simulations of two camera CCD images, including the interior of MST, its various landmarks and beam trajectories have been developed. These simulations accept user input such as camera locations, resolution via pixellization and noise. The quality of the images simulated with these and other variables will help guide the selection of viewing port pairs, image size and camera specifications. The results of these simulations will be presented.
Integrated multi sensors and camera video sequence application for performance monitoring in archery
NASA Astrophysics Data System (ADS)
Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali
2018-03-01
This paper describes the development of comprehensive archery performance monitoring software that integrates three camera views and five body sensors. The five body sensors evaluate biomechanics-related variables: flexor and extensor muscle activity, heart rate, postural sway, and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application, which enables the user to view all the data in a single user interface. The body sensors' data are displayed numerically and graphically in real time. The information transmitted by the body sensors is processed by an embedded algorithm that automatically summarizes the athlete's biomechanical performance and displays it in the application interface. This performance is later compared with the pre-computed psycho-fitness performance derived from data pre-filled into the application. All the data (camera views, body sensors, and performance computations) are recorded for further analysis by a sports scientist. The developed application serves as a powerful tool for assisting the coach and athletes in observing and identifying any incorrect technique employed during training, giving room for correction and re-evaluation to improve overall performance in the sport of archery.
Slaker, Brent A.; Mohamed, Khaled M.
Slaker, Brent A; Mohamed, Khaled M
2017-01-01
Understanding coal mine rib behavior is important for inferring pillar loading conditions as well as ensuring the safety of miners who are regularly exposed to ribs. Due to the variability in the geometry of underground openings and ground behavior, point measurements often fail to capture the true movement of mine workings. Photogrammetry is a potentially fast, cheap, and precise supplemental measurement tool in comparison to extensometers, tape measures, or laser range meters, but its application in underground coal has been limited. The practical use of photogrammetry was tested at the Safety Research Coal Mine, National Institute for Occupational Safety and Health (NIOSH). A commercially available, digital single-lens reflex (DSLR) camera was used to perform the photogrammetric surveys for the experiment. Several experiments were performed using different lighting conditions, distances to subject, camera settings, and photograph overlaps, with results summarized as follows: the lighting method was found to be insignificant if the scene was appropriately illuminated. It was found that the distance to the subject has a minimal impact on result accuracy, and that camera settings have a significant impact on the photogrammetric quality of images. An increasing photograph resolution was preferable when measuring plane orientations; otherwise a high point cloud density would likely be excessive. Focal ratio (F-stop) changes affect the depth of field and image quality in situations where multiple angles are necessary to survey cleat orientations. Photograph overlap is very important to proper three-dimensional reconstruction, and at least 60% overlap between photograph pairs is ideal to avoid unnecessary post-processing. The suggestions and guidelines proposed are designed to increase the quality of photogrammetry inputs and outputs as well as minimize processing time, and serve as a starting point for an underground coal photogrammetry study. PMID:28663826
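The 60% overlap guideline above translates into a simple spacing rule between camera stations under a pinhole approximation. The lens, sensor, and distance values below are illustrative, not from the study:

```python
# Sketch: camera station spacing for a target photo overlap (hypothetical values).
# Under a pinhole approximation, the footprint along the walking direction is
# distance * sensor_width / focal_length; stations may be at most
# footprint * (1 - overlap) apart to keep the requested overlap.

def station_spacing(distance_m, focal_mm, sensor_mm, overlap=0.6):
    """Return ground footprint and max spacing between camera stations (metres)."""
    footprint = distance_m * sensor_mm / focal_mm   # pinhole projection
    return footprint, footprint * (1.0 - overlap)

# Example: DSLR with a 24 mm lens, 36 mm full-frame sensor, 3 m from the rib.
footprint, spacing = station_spacing(3.0, 24.0, 36.0, overlap=0.6)
```

With these assumed numbers the footprint is 4.5 m, so stations at most 1.8 m apart preserve 60% overlap.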
Full color natural light holographic camera.
Kim, Myung K
2013-04-22
Full-color, three-dimensional images of objects under incoherent illumination are obtained by a digital holography technique. Based on self-interference of two beam-split copies of the object's optical field with differential curvatures, the apparatus consists of a beam-splitter, a few mirrors and lenses, a piezo-actuator, and a color camera. No lasers or other special illuminations are used for recording or reconstruction. Color holographic images of daylight-illuminated outdoor scenes and a halogen lamp-illuminated toy figure are obtained. From a recorded hologram, images can be calculated, or numerically focused, at any distances for viewing.
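The "numerically focused at any distance" step is typically done by propagating the recovered complex field. A minimal sketch using the angular spectrum method, a standard propagation technique and not necessarily the paper's exact implementation:

```python
import numpy as np

# Sketch of numerical refocusing by the angular spectrum method: the complex
# field is multiplied by a free-space transfer function in the Fourier domain.
# Field values and parameters here are synthetic, for illustration only.

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a square complex field by distance z (units match wavelength, dx)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)              # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagating forward then backward should recover the original field.
rng = np.random.default_rng(0)
f0 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
f1 = angular_spectrum_propagate(f0, 0.5e-6, 5e-6, 1e-3)
f2 = angular_spectrum_propagate(f1, 0.5e-6, 5e-6, -1e-3)
err = np.max(np.abs(f2 - f0))
```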
Electronic recording of holograms with applications to holographic displays
NASA Technical Reports Server (NTRS)
Claspy, P. C.; Merat, F. L.
1979-01-01
The paper describes an electronic heterodyne recording which uses electrooptic modulation to introduce a sinusoidal phase shift between the object and reference wave. The resulting temporally modulated holographic interference pattern is scanned by a commercial image dissector camera, and the rejection of the self-interference terms is accomplished by heterodyne detection at the camera output. The electrical signal representing this processed hologram can then be used to modify the properties of a liquid crystal light valve or a similar device. Such display devices transform the displayed interference pattern into a phase modulated wave front rendering a three-dimensional image.
Oh, Hyun Jun; Yang, Il-Hyung
2016-01-01
Objectives: To propose a novel method for determining the three-dimensional (3D) root apex position of maxillary teeth using a two-dimensional (2D) panoramic radiograph image and a 3D virtual maxillary cast model. Methods: The subjects were 10 adult orthodontic patients treated with non-extraction. The multiple camera matrices were used to define transformative relationships between tooth images of the 2D panoramic radiographs and the 3D virtual maxillary cast models. After construction of the root apex-specific projective (RASP) models, overdetermined equations were used to calculate the 3D root apex position with a direct linear transformation algorithm and the known 2D co-ordinates of the root apex in the panoramic radiograph. For verification of the estimated 3D root apex position, the RASP and 3D-CT models were superimposed using a best-fit method. Then, the values of estimation error (EE; mean, standard deviation, minimum error and maximum error) between the two models were calculated. Results: The intraclass correlation coefficient values exhibited good reliability for the landmark identification. The mean EE of all root apices of maxillary teeth was 1.88 mm. The EE values, in descending order, were as follows: canine, 2.30 mm; first premolar, 1.93 mm; second premolar, 1.91 mm; first molar, 1.83 mm; second molar, 1.82 mm; lateral incisor, 1.80 mm; and central incisor, 1.53 mm. Conclusions: Camera calibration technology allows reliable determination of the 3D root apex position of maxillary teeth without the need for 3D-CT scan or tooth templates. PMID:26317151
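The overdetermined direct linear transformation step can be illustrated in its classic multi-camera form: each 2D observation contributes two linear equations in the homogeneous 3D point, solved by SVD. The projection matrices and point below are synthetic, not from the study:

```python
import numpy as np

def dlt_triangulate(points_2d, cameras):
    """Recover a 3D point from its 2D projections in several cameras.
    points_2d: list of (u, v); cameras: list of 3x4 projection matrices."""
    rows = []
    for (u, v), P in zip(points_2d, cameras):
        rows.append(u * P[2] - P[0])     # two linear constraints per view
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, vt = np.linalg.svd(A)          # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]                  # dehomogenise

# Two synthetic cameras viewing a known point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, -0.2, 4.0])
proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_est = dlt_triangulate([proj(P1, X_true), proj(P2, X_true)], [P1, P2])
```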
UWB Tracking System Design with TDOA Algorithm
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan
2006-01-01
This presentation discusses an ultra-wideband (UWB) tracking system design effort using the TDOA (Time Difference of Arrival) tracking algorithm. UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least-squares method is chosen to solve the non-linear TDOA equations. Matlab simulations in both two-dimensional and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low-noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by the prototype development project Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center to aid in surveillance around the International Space Station (ISS).
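The TDOA equations have the residual structure r_i - r_0 = measured range difference. As a simplified illustration (the paper uses a two-stage weighted least-squares solver; this sketch uses plain Gauss-Newton, and the anchor positions and initial guess are hypothetical):

```python
import numpy as np

def tdoa_locate(anchors, tdoa, x0, iters=20):
    """Gauss-Newton solve of TDOA equations.
    anchors: (n,3) receiver positions; tdoa: range differences r_i - r_0."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = np.linalg.norm(anchors - x, axis=1)
        res = (r[1:] - r[0]) - tdoa                       # residuals
        J = (x - anchors[1:]) / r[1:, None] - (x - anchors[0]) / r[0]
        dx, *_ = np.linalg.lstsq(J, -res, rcond=None)     # Gauss-Newton step
        x = x + dx
    return x

# Synthetic check: four receivers, noiseless range differences.
anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10.0]])
target = np.array([3.0, 4.0, 5.0])
r = np.linalg.norm(anchors - target, axis=1)
est = tdoa_locate(anchors, r[1:] - r[0], x0=[5.0, 5.0, 5.0])
```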
Simultaneous one-dimensional fluorescence lifetime measurements of OH and CO in premixed flames
NASA Astrophysics Data System (ADS)
Jonsson, Malin; Ehn, Andreas; Christensen, Moah; Aldén, Marcus; Bood, Joakim
2014-04-01
A method for simultaneous measurements of fluorescence lifetimes of two species along a line is described. The experimental setup is based on picosecond laser pulses from two tunable optical parametric generator/optical parametric amplifier systems together with a streak camera. With an appropriate optical time delay between the two laser pulses, whose wavelengths are tuned to excite two different species, laser-induced fluorescence can be detected with both temporal and spatial resolution by the streak camera. Hence, our method enables one-dimensional imaging of fluorescence lifetimes of two species in the same streak camera recording. The concept is demonstrated for fluorescence lifetime measurements of CO and OH in a laminar methane/air flame on a Bunsen-type burner. Measurements were taken in flames with four different equivalence ratios, namely ϕ = 0.9, 1.0, 1.15, and 1.25. The measured one-dimensional lifetime profiles generally agree well with lifetimes calculated from quenching cross sections found in the literature and quencher concentrations predicted by the GRI 3.0 mechanism. For OH, there is a systematic deviation of approximately 30% between calculated and measured lifetimes, mainly due to the adiabatic assumption regarding the flame and uncertainty in the H2O quenching cross section. This emphasizes the strength of measuring quenching rates rather than relying on models. The measurement concept may be useful for single-shot measurements of fluorescence lifetimes of several species pairs of vital importance in combustion processes, allowing fluorescence signals to be corrected for quenching and ultimately yielding quantitative concentration profiles.
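Extracting a lifetime from a streak-camera decay trace can be sketched as a log-linear fit, assuming a clean single-exponential decay and ignoring the instrument response function (a simplification of the actual analysis; the trace below is synthetic):

```python
import numpy as np

# Minimal sketch: fluorescence lifetime from a decay trace via log-linear fit.
# For I(t) = I0 * exp(-t / tau), log(I) is linear in t with slope -1/tau.

t = np.linspace(0, 10e-9, 200)                  # time axis, seconds
tau_true = 2.0e-9                               # assumed lifetime
signal = 1000.0 * np.exp(-t / tau_true)         # noiseless synthetic decay
slope, intercept = np.polyfit(t, np.log(signal), 1)
tau_est = -1.0 / slope
```

In practice the trace is noisy and convolved with the streak camera's response, so a weighted or deconvolving fit would replace the plain polyfit.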
Superficial vessel reconstruction with a multiview camera system
Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan
2016-01-01
We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide deformation methods to compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three monochrome complementary metal-oxide-semiconductor (CMOS) cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering in which the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation in virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are ∼1 mm. PMID:26759814
Automated Reconstruction of Three-Dimensional Fish Motion, Forces, and Torques
Voesenek, Cees J.; Pieters, Remco P. M.; van Leeuwen, Johan L.
2016-01-01
Fish can move freely through the water column and make complex three-dimensional motions to explore their environment, escape or feed. Nevertheless, the majority of swimming studies is currently limited to two-dimensional analyses. Accurate experimental quantification of changes in body shape, position and orientation (swimming kinematics) in three dimensions is therefore essential to advance biomechanical research of fish swimming. Here, we present a validated method that automatically tracks a swimming fish in three dimensions from multi-camera high-speed video. We use an optimisation procedure to fit a parameterised, morphology-based fish model to each set of video images. This results in a time sequence of position, orientation and body curvature. We post-process this data to derive additional kinematic parameters (e.g. velocities, accelerations) and propose an inverse-dynamics method to compute the resultant forces and torques during swimming. The presented method for quantifying 3D fish motion paves the way for future analyses of swimming biomechanics. PMID:26752597
NASA Astrophysics Data System (ADS)
Zhan, You-Bang; Zhang, Qun-Yong; Wang, Yu-Wu; Ma, Peng-Cheng
2010-01-01
We propose a scheme to teleport an unknown single-qubit state by using a high-dimensional entangled state as the quantum channel. As a special case, a scheme for teleportation of an unknown single-qubit state via three-dimensional entangled state is investigated in detail. Also, this scheme can be directly generalized to an unknown f-dimensional state by using a d-dimensional entangled state (d > f) as the quantum channel.
A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.
Qian, Shuo; Sheng, Yang
2011-11-01
Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. With two planar mirrors aligned at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
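Two mirrors meeting at angle θ produce roughly 360°/θ views of the subject (51.4° ≈ 360°/7, matching the seven views above), and each virtual camera is the real camera reflected across a mirror plane. A minimal sketch of that reflection, with illustrative coordinates:

```python
import numpy as np

# Sketch: a planar mirror turns one real camera into a virtual camera whose
# optical centre is the reflection of the real centre across the mirror plane.
# Positions and the plane below are hypothetical, not the paper's geometry.

def reflect_point(p, plane_normal, plane_point):
    """Reflect point p across the plane through plane_point with given normal."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    d = np.dot(p - plane_point, n)       # signed distance to the plane
    return p - 2.0 * d * n

camera = np.array([0.0, 0.0, -1.0])
virtual = reflect_point(camera, plane_normal=[0, 0, 1], plane_point=[0, 0, 0])
```

Calibrating the real camera plus the mirror planes then fixes every virtual view, which is what makes single-exposure multi-view capture possible.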
An augmented-reality edge enhancement application for Google Glass.
Hwang, Alex D; Peli, Eli
2014-08-01
Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass that overlays enhanced edge information over the wearer's real-world view, providing contrast-improved central vision to Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected by image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle differ by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancement was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancement with a diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why improvements were observed only with the diffuser filter condition (simulating low vision). With the benefit of see-through augmented-reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
Li, Xiangping; Lan, Tzu-Hsiang; Tien, Chung-Hao; Gu, Min
2012-01-01
The interplay between light polarization and matter is the basis of many fundamental physical processes and applications. However, the electromagnetic wave nature of light in free space sets a fundamental limit on the three-dimensional polarization orientation of a light beam. Although a high numerical aperture objective can be used to bend the wavefront of a radially polarized beam to generate the longitudinal polarization state in the focal volume, the arbitrary three-dimensional polarization orientation of a beam has not been achieved yet. Here we present a novel technique for generating arbitrary three-dimensional polarization orientation by a single optically configured vectorial beam. As a consequence, by applying this technique to gold nanorods, orientation-unlimited polarization encryption with ultra-security is demonstrated. These results represent a new landmark of the orientation-unlimited three-dimensional polarization control of the light-matter interaction.
NASA Astrophysics Data System (ADS)
Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.
2012-01-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
Three-dimensional facial recognition using passive long-wavelength infrared polarimetric imaging.
Yuffa, Alex J; Gurton, Kristan P; Videen, Gorden
2014-12-20
We use a polarimetric camera to record the Stokes parameters and the degree of linear polarization of long-wavelength infrared radiation emitted by human faces. These Stokes images are combined with Fresnel relations to extract the surface normal at each pixel. Integrating over these surface normals yields a three-dimensional facial image. One major difficulty of this technique is that the normal vectors determined from the polarizations are not unique. We overcome this problem by introducing an additional boundary condition on the subject. The major sources of error in producing inversions are noise in the images caused by scattering of the background signal and the ambiguity in determining the surface normals from the Fresnel coefficients.
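The degree of linear polarization used above follows from the first three Stokes parameters. A common reduction from four polarizer-angle intensity images (0°, 45°, 90°, 135°), which is one standard acquisition scheme and not necessarily the paper's exact one:

```python
import numpy as np

# Sketch: Stokes parameters S0, S1, S2 and degree of linear polarization (DoLP)
# from four polarizer-angle intensities, as used in many polarimetric cameras.

def stokes_dolp(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # +45 vs -45 preference
    dolp = np.sqrt(s1**2 + s2**2) / s0
    return s0, s1, s2, dolp

# Fully linearly polarized light at 0 degrees: I0=1, I90=0, I45=I135=0.5.
s0, s1, s2, dolp = stokes_dolp(1.0, 0.5, 0.0, 0.5)
```

Applied per pixel to long-wave IR emission, the DoLP and emission angle relation (via the Fresnel relations) is what yields the per-pixel surface normal candidates.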
NASA Technical Reports Server (NTRS)
Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)
1989-01-01
Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in the visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis and fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.
Handy Microscopic Close-Range Videogrammetry
NASA Astrophysics Data System (ADS)
Esmaeili, F.; Ebadi, H.
2017-09-01
The modeling of small-scale objects is used in different applications such as medicine, industry, and cultural heritage. The capability of modeling small-scale objects using imaging with the help of hand USB digital microscopes and use of videogrammetry techniques has been implemented and evaluated in this paper. Use of this equipment and convergent imaging of the environment for modeling, provides an appropriate set of images for generation of three-dimensional models. The results of the measurements made with the help of a microscope micrometer calibration ruler have demonstrated that self-calibration of a hand camera-microscope set can help obtain a three-dimensional detail extraction precision of about 0.1 millimeters on small-scale environments.
Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.
Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L
2015-06-01
Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.
A sniffer-camera for imaging of ethanol vaporization from wine: the effect of wine glass shape.
Arakawa, Takahiro; Iitani, Kenta; Wang, Xin; Kajiro, Takumi; Toma, Koji; Yano, Kazuyoshi; Mitsubayashi, Kohji
2015-04-21
A two-dimensional imaging system (Sniffer-camera) for visualizing the concentration distribution of ethanol vapor emitting from wine in a wine glass has been developed. This system provides image information of ethanol vapor concentration using chemiluminescence (CL) from an enzyme-immobilized mesh. This system measures ethanol vapor concentration as CL intensities from luminol reactions induced by alcohol oxidase and a horseradish peroxidase (HRP)-luminol-hydrogen peroxide system. Conversion of ethanol distribution and concentration to two-dimensional CL was conducted using an enzyme-immobilized mesh containing an alcohol oxidase, horseradish peroxidase, and luminol solution. The temporal changes in CL were detected using an electron multiplier (EM)-CCD camera and analyzed. We selected three types of glasses (a wine glass, a cocktail glass, and a straight glass) to determine the differences in ethanol emission caused by the shape of the glass. The emission measurements of ethanol vapor from wine in each glass were successfully visualized, with pixel intensity reflecting ethanol concentration. Of note, a characteristic ring shape attributed to high alcohol concentration appeared near the rim of the wine glass containing 13 °C wine. Thus, the alcohol concentration in the center of the wine glass was comparatively lower. The Sniffer-camera was demonstrated to be sufficiently useful for non-destructive ethanol measurement in the assessment of food characteristics.
Implicit multiplane 3D camera calibration matrices for stereo image processing
NASA Astrophysics Data System (ADS)
McKee, James W.; Burgett, Sherrie J.
1997-12-01
By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin.
This requires recording and analyzing high-speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
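The multiplication-only mapping idea can be sketched in a single-plane form: fit a matrix taking homogeneous image coordinates to coordinates on one calibration plane by least squares. This is a simplified affine illustration with synthetic correspondences; the paper chains four to seven parallel planes and adds distortion compensation:

```python
import numpy as np

# Sketch: learn a 3x3 plane-to-plane mapping from point correspondences so that
# later image-to-plane mapping is a single matrix multiplication.

rng = np.random.default_rng(1)
H_true = np.array([[1.2, 0.1, 5.0],      # synthetic "true" mapping (affine:
                   [0.0, 0.9, -2.0],     # last row [0,0,1], so the model is
                   [0.0, 0.0, 1.0]])     # exactly linear in homogeneous coords)
pts = np.column_stack([rng.uniform(0, 100, 20),
                       rng.uniform(0, 100, 20),
                       np.ones(20)])     # homogeneous image points
mapped = pts @ H_true.T                  # corresponding calibration-plane points

# Least-squares fit of the mapping matrix from the correspondences.
X, *_ = np.linalg.lstsq(pts, mapped, rcond=None)
H_est = X.T
```

Stacking one such fitted matrix per calibration plane, with interpolation between planes, gives the implicit multiplane model described above without ever solving for focal length or pose.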
NASA Astrophysics Data System (ADS)
Takiguchi, Yu; Toyoda, Haruyoshi
2017-11-01
We report here an algorithm for calculating a hologram to be employed in a high-access speed microscope for observing sensory-driven synaptic activity across all inputs to single living neurons in an intact cerebral cortex. The system is based on holographic multi-beam generation using a two-dimensional phase-only spatial light modulator to excite multiple locations in three dimensions with a single hologram. The hologram was calculated with a three-dimensional weighted iterative Fourier transform method using the Ewald sphere restriction to increase the calculation speed. Our algorithm achieved good uniformity of three dimensionally generated excitation spots; the standard deviation of the spot intensities was reduced by a factor of two compared with a conventional algorithm.
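The weighted iterative Fourier-transform idea can be sketched in a 2D simplification (the paper's method is 3D with the Ewald-sphere restriction): iterate between the SLM plane, where only phase is kept, and the focal plane, where target spots are enforced with weights boosted for weak spots:

```python
import numpy as np

# 2D sketch of weighted Gerchberg-Saxton hologram design for multi-spot
# generation. Grid size, spot positions, and iteration count are illustrative.

n = 64
targets = [(16, 16), (16, 48), (48, 32)]        # desired spot locations
weights = np.ones(len(targets))
phase = np.random.default_rng(2).uniform(0, 2 * np.pi, (n, n))

for _ in range(30):
    far = np.fft.fft2(np.exp(1j * phase))       # SLM plane -> focal plane
    amps = np.array([abs(far[r, c]) for r, c in targets])
    weights *= amps.mean() / amps               # boost currently weak spots
    target_field = np.zeros((n, n), complex)
    for w, (r, c) in zip(weights, targets):
        target_field[r, c] = w * np.exp(1j * np.angle(far[r, c]))
    phase = np.angle(np.fft.ifft2(target_field))  # phase-only constraint

far = np.fft.fft2(np.exp(1j * phase))
spot = np.array([abs(far[r, c]) for r, c in targets])
uniformity = spot.std() / spot.mean()           # small value = even spots
```

The weight update is what drives the spot-intensity standard deviation down, the quantity the abstract reports improving by a factor of two.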
NASA Technical Reports Server (NTRS)
Bisagni, Chiara; Davila, Carlos G.; Rose, Cheryl A.; Zalameda, Joseph N.
2014-01-01
The durability and damage tolerance of postbuckled composite structures are not yet completely understood, and remain difficult to predict due to the nonlinearity of the geometric response and its interaction with local damage modes. A research effort was conducted to investigate experimentally the quasi-static and fatigue damage progression in a single-stringer compression (SSC) specimen. Three specimens were manufactured with a hat-stiffener, and an initial defect was introduced with a Teflon film embedded between one flange of the stringer and the skin. One of the specimens was tested under quasi-static compressive loading, while the remaining two specimens were tested by cycling in postbuckling. The tests were performed at the NASA Langley Research Center under controlled conditions and with instrumentation that allows a precise evaluation of the postbuckling response and of the damage modes. Three-dimensional digital image correlation VIC-3D systems were used to provide full field displacements and strains on the skin and the stringer. Passive thermal monitoring was conducted during the fatigue tests using an infrared camera that showed the location of the delamination front while the specimen was being cycled. The live information from the thermography was used to stop the fatigue tests at critical stages of the damage evolution to allow detailed ultrasonic scans.
NASA Astrophysics Data System (ADS)
Schroeder, Walter; Schulze, Wolfram; Wetter, Thomas; Chen, Chi-Hsien
2008-08-01
Three-dimensional (3D) body surface reconstruction is an important field in health care. A popular method for this purpose is laser scanning. However, using Photometric Stereo (PS) to record lumbar lordosis and the surface contour of the back poses a viable alternative due to its lower cost and higher flexibility compared to laser techniques and other methods of three-dimensional body surface reconstruction. In this work, we extended the traditional PS method and propose a new method for obtaining surface and volume data of a moving object. Traditional Photometric Stereo uses at least three images of a static object taken under different light sources to obtain 3D information about the object. Instead of using normal light, the light sources in the proposed method consist of the three colors of the RGB color model: red, green and blue. A series of pictures taken with a video camera can then be separated into the different color channels. Each set of three images can then be used to calculate the surface normals as in traditional PS. This method removes the requirement, shared by almost all other body surface reconstruction methods, that the imaged object be kept still. By placing two cameras on opposite sides of a moving object and lighting the object with the colored light, the time-varying surface (4D) data can easily be calculated. The obtained information can be used in many medical fields such as rehabilitation, diabetes screening or orthopedics.
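The per-pixel core of photometric stereo is a small linear solve: with three known light directions (standing in for the red, green and blue channels of the colored illumination), intensities I = L n yield the surface normal. The light directions and patch below are hypothetical:

```python
import numpy as np

# Minimal Lambertian photometric-stereo sketch: solve L @ g = I per pixel,
# then normalise g to get the unit surface normal (|g| carries the albedo).

L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])           # rows: assumed light directions

def normal_from_intensities(I):
    """I: three intensities under the three lights; returns the unit normal."""
    g = np.linalg.solve(L, np.asarray(I, float))
    return g / np.linalg.norm(g)

n_true = np.array([0.0, 0.0, 1.0])        # flat patch facing the camera
I = L @ n_true                            # Lambertian shading, albedo 1
n_est = normal_from_intensities(I)
```

Because the three lights are captured in one video frame via the color channels, this solve can run frame by frame, which is what permits the moving subject.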
Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter
NASA Astrophysics Data System (ADS)
Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun
2018-04-01
Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with a camera is the massive amount of data the camera generates. To reduce the data collected, a camera with an electronic rolling shutter (ERS) is applied to measure the frequency of a one-dimensional vibration whose frequency is much higher than the frame rate of the camera. Every row in an image captured by the ERS camera records the vibrating displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam and an air compressor, to verify the validity of the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are discussed at the end.
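The rolling-shutter trick works because each row is exposed slightly later than the one above, so row-wise displacements sample the vibration at the line rate, far above the frame rate. A synthetic sketch with an assumed 20 kHz line rate (the real line rate depends on the sensor):

```python
import numpy as np

# Sketch: recover a vibration frequency above the frame rate from per-row
# displacements of a rolling-shutter image. Line rate and amplitude are assumed.

line_rate = 20000.0                           # rows per second (assumed)
rows = 1080
t = np.arange(rows) / line_rate               # exposure time of each row
f_true = 850.0                                # vibration frequency, Hz
displacement = 3.0 * np.sin(2 * np.pi * f_true * t)   # per-row displacement, px

spectrum = np.abs(np.fft.rfft(displacement))
freqs = np.fft.rfftfreq(rows, d=1.0 / line_rate)
f_est = freqs[np.argmax(spectrum[1:]) + 1]    # dominant bin, skipping DC
```

In the paper's setting the per-row displacement itself comes from the sliding-window image analysis; here it is synthesized directly to isolate the frequency-recovery step.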
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on a simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.
Wang, Q; Yang, Y; Fei, Q; Li, D; Li, J J; Meng, H; Su, N; Fan, Z H; Wang, B Q
2017-06-06
Objective: To build three-dimensional finite element models of a modified posterior cervical single open-door laminoplasty with short-segment lateral mass screw fusion. Methods: The C(2)-C(7) segmental data were obtained from computed tomography (CT) scans of a male patient with cervical spondylotic myelopathy and spinal stenosis. Three-dimensional finite element models of a modified cervical single open-door laminoplasty (before and after surgery) were constructed with a combination of the software packages MIMICS, Geomagic and ABAQUS. The models were composed of bony vertebrae, articulating facets, intervertebral discs and associated ligaments. Moments of 1.5 Nm were applied to the preoperative model in different directions (flexion, extension, lateral bending and axial rotation) to calculate intersegmental ranges of motion. The results were compared with previous studies to verify the validity of the models. Results: The three-dimensional finite element models of the modified cervical single open-door laminoplasty had 102 258 elements (preoperative model) and 161 892 elements (postoperative model), respectively, including the six bony vertebrae C(2-7), the five intervertebral discs C(2-3)-C(6-7), the main ligaments and the lateral mass screws. The intersegmental responses of the preoperative model under 1.5 Nm moments in different directions were similar to previously published data. Conclusion: Three-dimensional finite element models of the modified cervical single open-door laminoplasty were successfully established with good biological fidelity and can be used for further study.
Xin, Zhaowei; Wei, Dong; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2018-02-19
Light-field imaging is a crucial and straightforward way of measuring and analyzing the surrounding light field. In this paper, a dual-polarized light-field imaging micro-system based on a twisted nematic liquid-crystal microlens array (TN-LCMLA) for direct three-dimensional (3D) observation is fabricated and demonstrated. The prototype camera was constructed by integrating a TN-LCMLA with a common CMOS sensor array. By switching the working state of the TN-LCMLA, two orthogonally polarized light-field images can be remapped through the imaging sensors. The imaging micro-system, in conjunction with the electro-optical microstructure, can be used to perform polarization and light-field imaging simultaneously. Compared with conventional plenoptic cameras using a liquid-crystal microlens array, polarization-independent light-field images with high image quality can be obtained in any selected polarization state. We experimentally demonstrate characteristics including a relatively wide operation range in the manipulation of incident beams and multiple imaging modes, such as conventional two-dimensional imaging, light-field imaging, and polarization imaging. Considering the salient features of the TN-LCMLA, such as very low power consumption, the multiple imaging modes mentioned, and simple, low-cost manufacturing, the imaging micro-system integrated with this kind of electrically driven liquid-crystal microstructure has the potential to directly observe a 3D object in typical scattering media.
Utilizing Light-field Imaging Technology in Neurosurgery.
Chen, Brian R; Buchanan, Ian A; Kellis, Spencer; Kramer, Daniel; Ohiorhenuan, Ifije; Blumenfeld, Zack; Grisafe Ii, Dominic J; Barbaro, Michael F; Gogia, Angad S; Lu, James Y; Chen, Beverly B; Lee, Brian
2018-04-10
Traditional still cameras can only focus on a single plane for each image while rendering everything outside of that plane out of focus. However, new light-field imaging technology makes it possible to adjust the focus plane after an image has already been captured. This technology allows the viewer to interactively explore an image with objects and anatomy at varying depths and clearly focus on any feature of interest by selecting that location during post-capture viewing. These images with adjustable focus can serve as valuable educational tools for neurosurgical residents. We explore the utility of light-field cameras and review their strengths and limitations compared to other conventional types of imaging. The strength of light-field images is the adjustable focus, as opposed to the fixed-focus of traditional photography and video. A light-field image also is interactive by nature, as it requires the viewer to select the plane of focus and helps with visualizing the three-dimensional anatomy of an image. Limitations include the relatively low resolution of light-field images compared to traditional photography and video. Although light-field imaging is still in its infancy, there are several potential uses for the technology to complement traditional still photography and videography in neurosurgical education.
Volume estimation using food specific shape templates in mobile image-based dietary assessment
NASA Astrophysics Data System (ADS)
Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.
2011-03-01
As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of the foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through segmentation and classification of the food images, our system chooses a particular template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.
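The final template-sizing step relies on the fact that volume scales with the cube of the linear image scale. A minimal sketch, with illustrative names of our own choosing rather than the system's actual API:

```python
def scaled_template_volume(template_volume_cm3, template_width_px, observed_width_px):
    """Size a food shape template from a single image: if the segmented food
    is s times wider than the reference template, its volume scales by s**3.
    All names here are illustrative, not the system's actual API."""
    s = observed_width_px / template_width_px
    return template_volume_cm3 * s**3
```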
Three-dimensional reconstruction of single-cell chromosome structure using recurrence plots.
Hirata, Yoshito; Oda, Arisa; Ohta, Kunihiro; Aihara, Kazuyuki
2016-10-11
Single-cell analysis of the three-dimensional (3D) chromosome structure can reveal cell-to-cell variability in genome activities. Here, we propose to apply recurrence plots, a mathematical method of nonlinear time series analysis, to reconstruct the 3D chromosome structure of a single cell based on information of chromosomal contacts from genome-wide chromosome conformation capture (Hi-C) data. This recurrence plot-based reconstruction (RPR) method enables rapid reconstruction of a unique structure in single cells, even from incomplete Hi-C information.
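For illustration only: the snippet below is not the authors' recurrence-plot (RPR) method, but the simpler classical multidimensional-scaling baseline that shares the underlying idea of converting pairwise contact frequencies into distances and embedding them in 3D (the inverse-contact distance model is an assumption):

```python
import numpy as np

def classical_mds_3d(contacts):
    """Embed a contact matrix into 3D coordinates. This is NOT the RPR
    method of the paper; it is the classical-MDS baseline built on the
    shared idea that stronger contact implies shorter distance.
    contacts: symmetric (n, n) array of positive contact strengths."""
    D = 1.0 / np.asarray(contacts, dtype=float)   # inverse-contact distance model
    np.fill_diagonal(D, 0.0)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:3]                 # three largest eigenvalues
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For exact Euclidean distances the embedding reproduces the original pairwise geometry up to a rigid rotation.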
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the differences between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-PIV camera pixel ratio (LTPR), particle seeding density and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W
2016-09-01
Camera-based systems for dairy cattle have been studied intensively in recent years. In contrast to this study, previous work presented single-camera systems with a limited range of applications, mostly using 2D cameras. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting and segmenting images and for transforming the 3D data into a joint coordinate system have already been implemented. This study introduces the application of two-dimensional wavelet transforms as a method for object recognition and surface analysis. The method is explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error on Kinect-recorded depth maps from different camera positions. The images' high-frequency parts, reconstructed from wavelet decompositions using the Haar and the biorthogonal 1.5 wavelet, were statistically analyzed with regard to the effects of image fore- or background and of cows' or persons' surfaces. Furthermore, binary classifiers based on the local high frequencies were implemented to decide whether a pixel belongs to the image foreground and whether it is located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of Area Under the receiver operating characteristic Curve (AUC). Classification by species showed maximal AUC values of 0.69.
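The high-frequency classification idea can be sketched with a single level of the 2D Haar transform, computed directly on 2x2 blocks; the threshold classifier below is a toy stand-in for the study's trained classifiers:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform on non-overlapping 2x2 blocks.
    img: (H, W) array with even H and W. Returns the approximation band
    plus the three detail (high-frequency) subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a + b - c - d) / 4.0   # vertical detail
    hl = (a - b + c - d) / 4.0   # horizontal detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def foreground_mask(depth_map, threshold):
    """Toy block classifier: mark a 2x2 block as foreground when its local
    high-frequency energy exceeds a threshold."""
    _, lh, hl, hh = haar2d(depth_map)
    energy = lh**2 + hl**2 + hh**2
    return energy > threshold
```

Smooth regions (uniform background, flat surfaces) carry no detail energy and are rejected, while depth discontinuities light up the mask.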
NASA Astrophysics Data System (ADS)
Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.
2015-12-01
Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the number of frames per second. The data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to a personal computer.
Multiple film plane diagnostic for shocked lattice measurements (invited)
NASA Astrophysics Data System (ADS)
Kalantar, Daniel H.; Bringa, E.; Caturla, M.; Colvin, J.; Lorenz, K. T.; Kumar, M.; Stölken, J.; Allen, A. M.; Rosolankova, K.; Wark, J. S.; Meyers, M. A.; Schneider, M.; Boehly, T. R.
2003-03-01
Laser-based shock experiments have been conducted in thin Si and Cu crystals at pressures above the Hugoniot elastic limit. In these experiments, static film and x-ray streak cameras recorded x rays diffracted from lattice planes both parallel and perpendicular to the shock direction. These data showed uniaxial compression of Si(100) along the shock direction and three-dimensional compression of Cu(100). In the case of the Si diffraction, there was a multiple wave structure observed, which may be due to a one-dimensional phase transition or a time variation in the shock pressure. A new film-based detector has been developed for these in situ dynamic diffraction experiments. This large-angle detector consists of three film cassettes that are positioned to record x rays diffracted from a shocked crystal anywhere within a full π steradian. It records x rays that are diffracted from multiple lattice planes both parallel and at oblique angles with respect to the shock direction. It is a time-integrating measurement, but time-resolved data may be recorded using a short duration laser pulse to create the diffraction source x rays. This new instrument has been fielded at the OMEGA and Janus lasers to study single-crystal materials shock compressed by direct laser irradiation. In these experiments, a multiple wave structure was observed on many different lattice planes in Si. These data provide information on the structure under compression.
Gutierrez-Heredia, Luis; Benzoni, Francesca; Murphy, Emma; Reynaud, Emmanuel G
2016-01-01
Coral reefs host nearly 25% of all marine species and provide food sources for half a billion people worldwide, yet only a very small percentage have been surveyed. Advances in technology and processing, along with affordable underwater cameras and Internet availability, give us the possibility to provide tools and software to survey entire coral reefs. Holistic ecological analyses of corals require not only the community view (10s to 100s of meters), but also single-colony analysis as well as corallite identification. As corals are three-dimensional, classical approaches to determining percent cover and structural complexity across spatial scales are inefficient, time-consuming and limited to experts. Here we propose an end-to-end approach to estimate these parameters using low-cost equipment (GoPro, Canon) and freeware (123D Catch, Meshmixer and Netfabb), allowing every community to participate in surveys and monitoring of their coral ecosystem. We demonstrate our approach on 9 species of underwater colonies of varying size and morphology. 3D models of underwater colonies, fresh samples and bleached skeletons with high-quality texture mapping and detailed topographic morphology were produced, and surface area and volume measurements (parameters widely used for ecological and coral health studies) were calculated and analysed. Moreover, we integrated the collected sample models with micro-photogrammetry models of individual corallites to aid identification and colony- and polyp-scale analysis.
Techniques of noninvasive optical tomographic imaging
NASA Astrophysics Data System (ADS)
Rosen, Joseph; Abookasis, David; Gokhler, Mark
2006-01-01
Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In one method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained at the output of a multi-channel optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer, each image is Fourier transformed jointly with an image of the speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the other method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer whose thickness, or index of refraction, varies along the layer. The technique is based on the synthesis of a multiple-peak spatial degree of coherence. This degree of coherence enables us to scan different sample points at different altitudes simultaneously, and thus decreases the acquisition time. The same multiple-peak degree of coherence is also used for imaging through the absorbing layer. All our experiments are performed with a quasi-monochromatic light source; therefore, problems of dispersion and inhomogeneous absorption are avoided.
The Real And Its Holographic Double
NASA Astrophysics Data System (ADS)
Rabinovitch, Gerard
1980-06-01
The attempt to produce animated, three-dimensional image representations has haunted the Occident since the 4th century: the camera ottica and the zograscope answered the camera obscura as its prolongation and amplification. With the invention of photography (Niépce, Daguerre) the same phenomenon occurred: the appearance of stereophotography (principles stated by Wheatstone and applied by Brewster, then Duboscq). The same phenomenon happened again with cinematography: stereocinematography (screening, embossing, anaglyphic and polaroid-glasses processes, etc.) haunts researchers. Finally holography which, although depending on a space produced by a major technological jump, follows this quest for the reproduction of the effect of stereoscopic vision.
Development of compact Compton camera for 3D image reconstruction of radioactive contamination
NASA Astrophysics Data System (ADS)
Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.
2017-11-01
The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. Its gamma-ray sensor employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images of two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.
Multi-camera volumetric PIV for the study of jumping fish
NASA Astrophysics Data System (ADS)
Mendelson, Leah; Techet, Alexandra H.
2018-01-01
Archer fish accurately jump multiple body lengths for aerial prey from directly below the free surface. Multiple fins provide combinations of propulsion and stabilization, enabling prey capture success. Volumetric flow field measurements are crucial to characterizing multi-propulsor interactions during this highly three-dimensional maneuver; however, the fish's behavior also drives unique experimental constraints. Measurements must be obtained in close proximity to the water's surface and in regions of the flow field which are partially-occluded by the fish body. Aerial jump trajectories must also be known to assess performance. This article describes experiment setup and processing modifications to the three-dimensional synthetic aperture particle image velocimetry (SAPIV) technique to address these challenges and facilitate experimental measurements on live jumping fish. The performance of traditional SAPIV algorithms in partially-occluded regions is characterized, and an improved non-iterative reconstruction routine for SAPIV around bodies is introduced. This reconstruction procedure is combined with three-dimensional imaging on both sides of the free surface to reveal the fish's three-dimensional wake, including a series of propulsive vortex rings generated by the tail. In addition, wake measurements from the anal and dorsal fins indicate their stabilizing and thrust-producing contributions as the archer fish jumps.
Multi-sensor fusion over the World Trade Center disaster site
NASA Astrophysics Data System (ADS)
Rodarmel, Craig; Scott, Lawrence; Simerlink, Deborah A.; Walker, Jeffrey
2002-09-01
The immense size and scope of the rescue and clean-up of the World Trade Center site created a need for data that would provide a total overview of the disaster area. To fulfill this need, the New York State Office for Technology (NYSOFT) contracted with EarthData International to collect airborne remote sensing data over Ground Zero with an airborne light detection and ranging (LIDAR) sensor, a high-resolution digital camera, and a thermal camera. The LIDAR data provided a three-dimensional elevation model of the ground surface that was used for volumetric calculations and also in the orthorectification of the digital images. The digital camera provided high-resolution imagery over the site to aid the rescuers in placement of equipment and other assets. In addition, the digital imagery was used to georeference the thermal imagery and also provided the visual background for the thermal data. The thermal camera aided in the location and tracking of underground fires. The combination of data from these three sensors provided the emergency crews with a timely, accurate overview containing a wealth of information about the rapidly changing disaster site. Because of the dynamic nature of the site, the data was acquired on a daily basis, processed, and turned over to NYSOFT within twelve hours of the collection. During processing, the three datasets were combined and georeferenced to allow them to be inserted into the client's geographic information systems.
NASA Astrophysics Data System (ADS)
Koskelo, Elise Anne C.; Flynn, Eric B.
2017-02-01
Inspection of and around joints, beams, and other three-dimensional structures is integral to practical nondestructive evaluation of large structures. Non-contact, scanning laser ultrasound techniques offer an automated means of physically accessing these regions. However, to realize the benefits of laser-scanning techniques, simultaneous inspection of multiple surfaces at different orientations to the scanner must neither significantly degrade the signal level nor diminish the ability to distinguish defects from healthy geometric features. In this study, we evaluated the implementation of acoustic wavenumber spectroscopy for inspecting metal joints and crossbeams from interior angles. With this technique, we used a single-tone, steady-state ultrasonic excitation to excite the joints via a single transducer attached to one surface. We then measured the full-field velocity responses using a scanning laser Doppler vibrometer and produced maps of local wavenumber estimates. With the high signal level associated with steady-state excitation, scans could be performed at surface orientations of up to 45 degrees. We applied camera perspective projection transformations to remove the distortion in the scans due to a known projection angle, leading to a significant improvement in the local wavenumber estimates. Projection leads to asymmetric distortion of the wavenumber in one direction, making it possible to estimate the view angle even when neither it nor the nominal wavenumber is known. Since plate thinning produces a purely symmetric increase in wavenumber, it is also possible to independently estimate the degree of hidden corrosion. With a two-surface joint, using the wavenumber estimate maps, we were able to automatically calculate the orthographic projection component of each angled surface in the scan area.
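The view-angle argument can be sketched numerically: foreshortening by cos(theta) shrinks apparent wavelengths only along the tilt direction, so comparing the apparent wavenumbers along and across that direction yields both the angle and the true wavenumber (our notation, assuming an isotropic nominal wavenumber):

```python
import numpy as np

def view_angle_and_wavenumber(k_apparent_tilt, k_apparent_perp):
    """Recover the view angle (degrees) and the true wavenumber from local
    wavenumber estimates along the tilted image direction and perpendicular
    to it. Foreshortening by cos(theta) acts only along the tilt direction,
    so k_tilt = k / cos(theta) while k_perp = k."""
    theta = np.degrees(np.arccos(k_apparent_perp / k_apparent_tilt))
    return theta, k_apparent_perp
```

A symmetric increase in both apparent wavenumbers leaves the recovered angle unchanged, which is why plate thinning can be estimated independently of projection.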
Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S
2014-05-01
We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation-time scheme, at a moderate computational overhead.
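A per-cell sketch of the regularized collision step on the D2Q9 lattice, in the spirit of the scheme studied here (the non-equilibrium part is rebuilt from its second-order moment before relaxation; this is our own minimal illustration, not the authors' code):

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and speed of sound cs^2 = 1/3.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
CS2 = 1.0 / 3.0

def feq(rho, u):
    """Second-order equilibrium populations."""
    cu = C @ u
    usq = u @ u
    return W * rho * (1 + cu / CS2 + cu**2 / (2 * CS2**2) - usq / (2 * CS2))

def regularized_collision(f, tau):
    """One BGK collision with regularization: the non-equilibrium part of f
    is replaced by its projection onto the second-order Hermite mode,
    rebuilt from the moment Pi, before being relaxed."""
    rho = f.sum()
    u = (f @ C) / rho
    fe = feq(rho, u)
    fneq = f - fe
    Pi = np.einsum('i,ia,ib->ab', fneq, C, C)            # non-equilibrium stress
    Q = np.einsum('ia,ib->iab', C, C) - CS2 * np.eye(2)  # second-order Hermite tensor
    freg = W / (2 * CS2**2) * np.einsum('iab,ab->i', Q, Pi)
    return fe + (1 - 1 / tau) * freg
```

By construction the regularized part carries zero mass and momentum, so the collision conserves both, while discarding the higher-order non-equilibrium content that destabilizes the standard scheme.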
Solo surgery--early results of robot-assisted three-dimensional laparoscopic hysterectomy.
Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus
2014-08-01
We report our initial experience with laparoscopic hysterectomy performed by a solo surgeon using a robotic camera system with three-dimensional visualisation. This novel device (Einstein Vision®, B. Braun, Aesculap AG, Tuttlingen, Germany) (EV) was used for laparoscopic supracervical hysterectomy (LASH) performed by one surgeon. Demographic data and clinical and surgical parameters were evaluated. Our first 22 cases, performed between June and November 2012, were compared with a cohort of 22 age-matched controls who underwent two-dimensional LASH performed by the same surgeon with a second surgeon assisting. Compared to standard two-dimensional laparoscopic hysterectomy, there were no significant differences regarding duration of surgery, hospital stay, blood loss or incidence of complications. The number of trocars used was significantly higher in the control group (p < .0001). All hysterectomies in the treatment group were performed without the assistance of a second physician. Robot-assisted solo-surgery laparoscopic hysterectomy is a feasible and safe procedure. Duration of surgery, hospital stay, blood loss, and complication rates are comparable to those of conventional laparoscopic hysterectomy.
NASA Astrophysics Data System (ADS)
Wang, Binbin; Socolofsky, Scott A.
2015-10-01
Development, testing, and application of a deep-sea, high-speed, stereoscopic imaging system are presented. The new system is designed for field-ready deployment, focusing on measurement of the characteristics of natural seep bubbles and droplets with high-speed and high-resolution image capture. The stereo view configuration allows precise evaluation of the physical scale of the moving particles in image pairs. Two laboratory validation experiments (a continuous bubble chain and an airstone bubble plume) were carried out to test the calibration procedure, performance of image processing and bubble matching algorithms, three-dimensional viewing, and estimation of bubble size distribution and volumetric flow rate. The results showed that the stereo view was able to improve the individual bubble size measurement over the single-camera view by up to 90% in the two validation cases, with the single-camera being biased toward overestimation of the flow rate. We also present the first application of this imaging system in a study of natural gas seeps in the Gulf of Mexico. The high-speed images reveal the rigidity of the transparent bubble interface, indicating the presence of clathrate hydrate skins on the natural gas bubbles near the source (lowest measurement 1.3 m above the vent). We estimated the dominant bubble size at the seep site Sleeping Dragon in Mississippi Canyon block 118 to be in the range of 2-4 mm and the volumetric flow rate to be 0.2-0.3 L/min during our measurements from 17 to 21 July 2014.
A remote camera operation system using a marker attached cap
NASA Astrophysics Data System (ADS)
Kawai, Hironori; Hama, Hiromitsu
2005-12-01
In this paper, we propose a convenient system to control a remote camera according to the eye-gazing direction of the operator, which is approximately obtained by calculating the face direction by means of image processing. The operator puts a marker-attached cap on his head, and the system takes an image of the operator from above with a single video camera. Three markers are set up on the cap; three is the minimum number needed to calculate the tilt angle of the head. Using more markers makes the system more robust to occlusion and tolerates a wider moving range of the head, provided that no three markers lie on a single three-dimensional straight line. To compensate for changes in marker color due to illumination conditions, the threshold for marker extraction is decided adaptively using a k-means clustering method. The system was implemented with MATLAB on a personal computer, and real-time operation was realized. The experimental results confirmed the robustness of the system, and the tilt and pan angles of the head could be calculated with sufficient accuracy for use.
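The adaptive threshold step can be sketched with a two-cluster 1-D k-means on pixel intensities, taking the midpoint between the cluster centres as the marker-extraction threshold. This is one plausible reading of the abstract, not the authors' exact algorithm, and the synthetic intensities are invented:

```python
import numpy as np

def kmeans_threshold(pixels, iters=20):
    """Two-cluster (marker vs background) 1-D k-means; returns the midpoint
    of the two cluster centres as an adaptive extraction threshold."""
    c0, c1 = pixels.min(), pixels.max()   # initial centres at the extremes
    for _ in range(iters):
        assign = np.abs(pixels - c0) > np.abs(pixels - c1)  # True -> cluster 1
        c0 = pixels[~assign].mean()
        c1 = pixels[assign].mean()
    return 0.5 * (c0 + c1)

# Synthetic frame: dim background around 40, bright markers around 220,
# with illumination jitter on both
rng = np.random.default_rng(0)
background = rng.normal(40, 10, 5000)
markers = rng.normal(220, 15, 200)
thresh = kmeans_threshold(np.concatenate([background, markers]))
```

Because the threshold tracks the current frame's intensity clusters, it adapts automatically as illumination shifts both populations up or down.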
Three-dimensional nanometre localization of nanoparticles to enhance super-resolution microscopy
NASA Astrophysics Data System (ADS)
Bon, Pierre; Bourg, Nicolas; Lécart, Sandrine; Monneret, Serge; Fort, Emmanuel; Wenger, Jérôme; Lévêque-Fort, Sandrine
2015-07-01
Meeting the nanometre resolution promised by super-resolution microscopy techniques (pointillist: PALM, STORM; scanning: STED) requires stabilizing sample drift in real time during the whole acquisition process. Metal nanoparticles are excellent probes for tracking lateral drift as they provide crisp and photostable information. However, achieving nanometre axial super-localization is still a major challenge, as diffraction imposes large depths of field. Here we demonstrate fast, full three-dimensional nanometre super-localization of gold nanoparticles through simultaneous intensity and phase imaging with a wavefront-sensing camera based on quadriwave lateral shearing interferometry. We show how to combine the intensity and phase information to provide the key to the third, axial dimension. Even in the presence of large three-dimensional fluctuations of several microns, we demonstrate unprecedented localization accuracies down to 0.7 nm laterally and 2.7 nm axially at 50 frames per second. We demonstrate that nanoscale stabilization greatly enhances the image quality and resolution in direct stochastic optical reconstruction microscopy imaging.
Three dimensional shape measurement of wear particle by iterative volume intersection
NASA Astrophysics Data System (ADS)
Wu, Hongkun; Li, Ruowei; Liu, Shilong; Rahman, Md Arifur; Liu, Sanchi; Kwok, Ngaiming; Peng, Zhongxiao
2018-04-01
The morphology of a wear particle is a fundamental indicator from which wear-oriented machine health can be assessed. Previous research proved that thorough measurement of the particle shape allows more reliable explanation of the wear mechanism that occurred. However, most current particle measurement techniques focus on extraction of the two-dimensional (2-D) morphology, while other critical particle features, including volume and thickness, are not available. As a result, a three-dimensional (3-D) shape measurement method is developed to enable a more comprehensive particle feature description. The developed method is implemented in three steps: (1) particle profiles in multiple views are captured via a camera mounted above a micro fluid channel; (2) a preliminary reconstruction is accomplished by the shape-from-silhouette approach with the collected particle contours; (3) an iterative re-projection process follows to obtain the final 3-D measurement by minimizing the difference between the original and the re-projected contours. Results from real data are presented, demonstrating the feasibility of the proposed method.
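Step (2), shape-from-silhouette, can be sketched as orthographic voxel carving: a voxel survives only if it projects inside every captured contour. A minimal numpy sketch with idealized silhouettes of a spherical particle (the grid size and the `carve` helper are illustrative, not the authors' implementation):

```python
import numpy as np

def carve(volume_shape, silhouettes):
    """Orthographic shape-from-silhouette: keep a voxel only if its projection
    falls inside every silhouette. `silhouettes` maps a projection axis
    (0, 1 or 2) to a boolean 2-D mask over the remaining two axes."""
    vol = np.ones(volume_shape, dtype=bool)
    for axis, mask in silhouettes.items():
        # Broadcast the 2-D mask along the projection axis and intersect
        vol &= np.expand_dims(mask, axis)
    return vol

# A 16^3 grid and a spherical particle of radius 5 voxels at the centre
n, r = 16, 5
zz, yy, xx = np.mgrid[:n, :n, :n] - (n - 1) / 2
true_particle = zz**2 + yy**2 + xx**2 <= r**2

# Ideal silhouettes from three orthographic views (projections of the truth)
sils = {ax: true_particle.any(axis=ax) for ax in range(3)}
hull = carve((n, n, n), sils)
```

The iterative re-projection of step (3) would then compare silhouettes of `hull` against the originals and refine the volume; since the visual hull always contains the true particle, refinement can only carve further.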
3-D flow and scour near a submerged wing dike: ADCP measurements on the Missouri River
Jamieson, E.C.; Rennie, C.D.; Jacobson, R.B.; Townsend, R.D.
2011-01-01
Detailed mapping of bathymetry and three-dimensional water velocities using a boat-mounted single-beam sonar and acoustic Doppler current profiler (ADCP) was carried out in the vicinity of two submerged wing dikes located in the Lower Missouri River near Columbia, Missouri. During high spring flows the wing dikes become submerged, creating a unique combination of vertical flow separation and overtopping (plunging) flow conditions, causing large-scale three-dimensional turbulent flow structures to form. On three different days and for a range of discharges, sampling transects at 5 and 20 m spacing were completed, covering the area adjacent to and upstream and downstream from two different wing dikes. The objectives of this research are to evaluate whether an ADCP can identify and measure large-scale flow features such as recirculating flow and vortex shedding that develop in the vicinity of a submerged wing dike; and whether or not moving-boat (single-transect) data are sufficient for resolving complex three-dimensional flow fields. Results indicate that spatial averaging from multiple nearby single transects may be more representative of an inherently complex (temporally and spatially variable) three-dimensional flow field than repeated single transects. Results also indicate a correspondence between the location of calculated vortex cores (resolved from the interpolated three-dimensional flow field) and the nearby scour holes, providing new insight into the connections between vertically oriented coherent structures and local scour, with the unique perspective of flow and morphology in a large river.
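The spatial-averaging idea, compositing scattered single-transect samples onto a fixed grid so that nearby passes reinforce each other, can be sketched with a simple bin average (coordinates, velocities, and grid spacing here are synthetic, not the Missouri River data):

```python
import numpy as np

def bin_average(x, y, v, x_edges, y_edges):
    """Mean of samples v falling in each cell of a rectangular grid."""
    sums, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges], weights=v)
    counts, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    with np.errstate(invalid="ignore"):
        return sums / counts          # NaN marks cells with no samples

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 2000)                      # streamwise position, m
y = rng.uniform(0, 20, 2000)                       # cross-stream position, m
v = 1.5 + 0.01 * x + rng.normal(0.0, 0.3, 2000)    # noisy velocity, m/s
grid = bin_average(x, y, v, np.linspace(0, 100, 11), np.linspace(0, 20, 5))
```

Averaging many noisy samples per cell suppresses the turbulent (temporal) fluctuations that a single moving-boat transect cannot separate from spatial structure.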
Volume Segmentation and Ghost Particles
NASA Astrophysics Data System (ADS)
Ziskin, Isaac; Adrian, Ronald
2011-11-01
Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.
Three-dimensional device characterization by high-speed cinematography
NASA Astrophysics Data System (ADS)
Maier, Claus; Hofer, Eberhard P.
2001-10-01
Testing of micro-electro-mechanical systems (MEMS) for optimization purposes or reliability checks can be supported by device visualization whenever optical access is available. The difficulty in such an investigation is the short time duration of dynamical phenomena in micro devices. This paper presents a test setup to visualize movements within MEMS in real time and in two perpendicular directions. A three-dimensional view is achieved by combining a commercial high-speed camera system, which can take up to 8 images of the same process with a minimum interframe time of 10 ns, for the first direction, with a second visualization system for the perpendicular direction, consisting of a highly sensitive CCD camera working with multiple-exposure LED illumination. Well synchronized, this provides 3-D information, which is treated by digital image processing to correct image distortions and to detect object contours. Symmetric and asymmetric binary collisions of micro drops are chosen as test experiments, featuring coalescence and surface rupture. Another application shown here is the investigation of sprays produced by an atomizer. The second direction of view is a prerequisite for this measurement, to select an intended plane of focus.
NASA Astrophysics Data System (ADS)
Unger, Jakob; Lagarto, Joao; Phipps, Jennifer; Ma, Dinglong; Bec, Julien; Sorger, Jonathan; Farwell, Gregory; Bold, Richard; Marcu, Laura
2017-02-01
Multi-Spectral Time-Resolved Fluorescence Spectroscopy (ms-TRFS) can provide label-free real-time feedback on tissue composition and pathology during surgical procedures by resolving the fluorescence decay dynamics of the tissue. Recently, an ms-TRFS system has been developed in our group, allowing for either point-spectroscopy fluorescence lifetime measurements or dynamic raster tissue scanning by merging a 450 nm aiming beam with the pulsed fluorescence excitation light in a single fiber. In order to facilitate an augmented real-time display of fluorescence decay parameters, the lifetime values are back-projected onto the white light video. The goal of this study is to develop a 3D real-time surface reconstruction aiming for a comprehensive visualization of the decay parameters and providing enhanced navigation for the surgeon. Using a stereo camera setup, we combine image feature matching and aiming beam stereo segmentation to establish a 3D surface model of the decay parameters. After camera calibration, texture-related features are extracted for both camera images and matched, providing a rough estimation of the surface. During the raster scanning, the rough estimation is successively refined in real time by tracking the aiming beam positions using an advanced segmentation algorithm. The method is evaluated on excised breast tissue specimens, showing high precision and running in real time at approximately 20 frames per second. The proposed method shows promising potential for intraoperative navigation, e.g. tumor margin assessment. Furthermore, it provides the basis for registering the fluorescence lifetime maps to the tissue surface, adapting to possible tissue deformations.
Gully measurement strategies in a pixel using python
NASA Astrophysics Data System (ADS)
Wells, Robert; Momm, Henrique; Bennett, Sean; Dabney, Seth
2015-04-01
Gullies are often the single largest sediment sources within the landscape; however, measurement and process description of these channels present challenges that have limited complete understanding. A strategy currently being employed in the field and laboratory to measure the topography of gullies utilizes inexpensive, off-the-shelf cameras and software. Photogrammetry may be entering an enlightened period, as users have numerous choices (cameras, lenses, and software) and many are utilizing the technology to define their surroundings; however, the key for those seeking answers will be what happens once topography is represented as a three-dimensional digital surface model. Perhaps the model can be compared with another model to visualize change, either in topography or in vegetation cover, or both. With these models of our landscape, prediction technology should be rejuvenated and/or reinvented. Over the past several decades, researchers have endeavored to capture the erosion process and transfer these observations through oral and written word. Several have hypothesized a fundamental system for gully expression in the landscape; however, this understanding has not transferred well into our prediction technology. Unlike many materials, soils oftentimes do not behave in a predictable fashion. Which soil physical properties lend themselves to erosion process description? In most cases, several disciplines are required to visualize the erosion process and its impact on our landscape. With a small camera, the landscape becomes more accessible, and this accessibility will lead to a deeper understanding and development of uncompromised erosion theory. Why? Conservation of our soil resources is inherently linked to a complete understanding of soil wasting.
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.
Biocompatible Near-Infrared Three-Dimensional Tracking System.
Decker, Ryan S; Shademan, Azad; Opfermann, Justin D; Leonard, Simon; Kim, Peter C W; Krieger, Axel
2017-03-01
A fundamental challenge in soft-tissue surgery is that target tissue moves and deforms, becomes occluded by blood or other tissue, and is difficult to differentiate from surrounding tissue. We developed small biocompatible near-infrared fluorescent (NIRF) markers with a novel fused plenoptic and NIR camera tracking system, enabling three-dimensional tracking of tools and target tissue while overcoming blood and tissue occlusion in the uncontrolled, rapidly changing surgical environment. In this work, we present the tracking system and marker design and compare tracking accuracies to standard optical tracking methods using robotic experiments. At speeds of 1 mm/s, we observe tracking accuracies of 1.61 mm, degrading only to 1.71 mm when the markers are covered in blood and tissue.
Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1998-10-01
The problem of three-dimensional visualization of a human lens in vivo has been solved by a technique of volume rendering a transformed series of 60 rotated Scheimpflug (a dual slit reflected light microscope) digital images. The data set was obtained by rotating the Scheimpflug camera about the optic axis of the lens in 3 degree increments. The transformed set of optical sections were first aligned to correct for small eye movements, and then rendered into a volume reconstruction with volume rendering computer graphics techniques. To help visualize the distribution of lens opacities (cataracts) in the living, human lens the intensity of light scattering was pseudocolor coded and the cataract opacities were displayed as a movie.
A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System
Park, Wookeun; Ro, Kyongkwan; Kim, Suin; Bae, Joonbum
2017-01-01
In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the location of the sensors. Algorithms to decouple the signals from soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms are verified by comparison with a camera-based motion capture system. PMID:28241414
Two-dimensional numerical simulation of flow around three-stranded rope
NASA Astrophysics Data System (ADS)
Wang, Xinxin; Wan, Rong; Huang, Liuyi; Zhao, Fenfang; Sun, Peng
2016-08-01
Three-stranded rope is widely used in fishing gear and mooring systems. Results of numerical simulation are presented for flow around a three-stranded rope in uniform flow. The simulation was carried out to study the hydrodynamic characteristics of the pressure and velocity fields of steady incompressible laminar and turbulent wakes behind a three-stranded rope. A three-cylinder configuration and a single circular cylinder configuration are used to model the three-stranded rope in the two-dimensional simulation. The governing equations, the Navier-Stokes equations, are solved using a two-dimensional finite volume method. The turbulent flow is simulated using the standard κ-ɛ model and the Shear-Stress Transport κ-ω (SST) model. The drag of the three-cylinder model and the single-cylinder model is calculated for different Reynolds numbers using the control volume analysis method. The pressure coefficient is also calculated for the turbulent model and the laminar model based on the control surface method. From the comparison of the drag coefficients and pressures of the single-cylinder and three-cylinder models, it is found that the drag coefficients of the three-cylinder model are generally 1.3-1.5 times those of the single circular cylinder for different Reynolds numbers. Comparing the numerical results with water tank test data, the results of the three-cylinder model are closer to the experimental results than those of the single-cylinder model.
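The reported ratio follows directly from the drag-coefficient definition used in such control-volume analyses. A minimal sketch with invented forces (not the paper's data) showing a three-cylinder model drawing about 1.4 times the single-cylinder drag:

```python
def drag_coefficient(force_per_length, rho, u, d):
    """C_d = F' / (0.5 * rho * U^2 * D) for a cylinder of diameter D,
    with F' the drag force per unit length (N/m)."""
    return force_per_length / (0.5 * rho * u**2 * d)

# Illustrative values: water, 0.5 m/s free stream, 20 mm rope diameter
rho, u, d = 1000.0, 0.5, 0.02
cd_single = drag_coefficient(3.0, rho, u, d)        # hypothetical single cylinder
cd_three = drag_coefficient(4.2, rho, u, d)         # hypothetical three-cylinder model
```

Since the same reference diameter and velocity appear in both denominators, the coefficient ratio equals the force ratio, which is how the 1.3-1.5 range above should be read.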
Measurement of the timing behaviour of off-the-shelf cameras
NASA Astrophysics Data System (ADS)
Schatz, Volker
2017-04-01
This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity at O(10^-3) to O(10^-2) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
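The principle can be modelled directly: a short pulse fired at a swept delay overlaps the exposure window, the normalised summed intensity traces out that overlap, and trigger delay and exposure time are read off at the half-maximum crossings. A numpy sketch with invented timing values (not the paper's measurements):

```python
import numpy as np

def overlap(t, pulse, delay, exposure):
    """Fraction of a rectangular pulse [t, t+pulse] falling inside the
    exposure window [delay, delay+exposure]."""
    lo = np.maximum(t, delay)
    hi = np.minimum(t + pulse, delay + exposure)
    return np.clip(hi - lo, 0.0, None) / pulse

t = np.linspace(0, 2000, 4001)                 # pulse delay sweep, microseconds
resp = overlap(t, pulse=5.0, delay=300.0, exposure=1000.0)

# Trigger delay and exposure recovered from the half-maximum crossings
rising = t[np.argmax(resp >= 0.5)]
falling = t[len(resp) - 1 - np.argmax(resp[::-1] >= 0.5)]
delay_est, exposure_est = rising, falling - rising
```

The rising-edge estimate is biased by half the pulse width, which is why the sub-microsecond accuracy quoted above requires a short, well-characterised pulse.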
Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system
NASA Astrophysics Data System (ADS)
Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo
2010-02-01
A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only can offer free view navigation both horizontally and vertically but also can employ three reference views, namely left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element using stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh is classified into foreground and background groups by disparity values and then affine transformed. By experiments, we confirm that the proposed method synthesizes a high-quality image and is suitable for 3-D video systems.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
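The key idea, aligning in radiant-power space where exposure differences cancel under a linear response, can be sketched as follows. For brevity this sketch swaps the paper's feature-descriptor matching for simple phase correlation, which is likewise insensitive to a global intensity scale; the scene, exposures, and shifts are invented:

```python
import numpy as np

def to_radiance(img, exposure_s):
    """Linear-response approximation: radiant power = pixel value / exposure."""
    return img / exposure_s

def phase_correlate(a, b):
    """Integer-pixel translation that aligns b to a, via the normalised
    cross-power spectrum (peak location of the inverse FFT)."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts
    shift = [i if i <= s // 2 else i - s for i, s in zip(idx, corr.shape)]
    return tuple(shift)

rng = np.random.default_rng(2)
scene = rng.uniform(0, 1, (64, 64))
short = np.roll(scene, (3, -5), axis=(0, 1)) * 0.01   # 10 ms exposure, shifted view
long_ = scene * 0.1                                    # 100 ms exposure
shift = phase_correlate(to_radiance(long_, 0.1), to_radiance(short, 0.01))
```

Rolling the short-exposure frame by the recovered shift brings it into registration with the long-exposure frame, after which the radiance estimates can be fused into an HDR image.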
Tokas, Theodoros; Gözen, Ali Serdar; Avgeris, Margaritis; Tschada, Alexandra; Fiedler, Marcel; Klein, Jan; Rassweiler, Jens
2017-10-01
Posture, vision, and instrumentation limitations are the main predicaments of conventional laparoscopy. To combine the ETHOS surgical chair, the three-dimensional laparoscope, and the Radius Surgical System manipulators, and compare the system with conventional laparoscopy and da Vinci in terms of task completion times and discomfort. Fifteen trainees performed the three main laparoscopic suturing tasks of the Heilbronn training program (IV: simulation of dorsal venous complex suturing; V: circular suturing of a tubular structure; and VI: urethrovesical anastomosis) in a pelvi trainer. The tasks were performed conventionally, utilizing the three devices, and robotically. Task completion times were recorded and surgeon discomfort was evaluated using questionnaires. Task completion times were compared using the nonparametric Wilcoxon signed rank test and ergonomic scores were compared using the Pearson chi-square test. The use of the full laparoscopic set (ETHOS chair, three-dimensional laparoscopic camera, Radius Surgical System needle holders) resulted in a significant improvement in the completion times of the three tested tasks compared with conventional laparoscopy (p<0.001), and times were similar to da Vinci surgery. After completing Tasks IV, V, and VI conventionally, 12 (80%), 13 (86.7%), and 13 (86.7%) of the 15 trainees, respectively, reported heavy total discomfort. The full laparoscopic system nullified heavy discomfort for Tasks IV and V and minimized it (6.7%) for the most demanding Task VI. Especially for Task VI, all trainees benefited from using the system in terms of task completion times and discomfort. The limited robotic experience of the trainees and the subjectivity of the questionnaire are potential limitations. The ergonomic laparoscopic system offers significantly better task completion times and ergonomics than conventional laparoscopy. Furthermore, it demonstrates results comparable to robotic surgery.
The study was conducted in a pelvi trainer and no patients were recruited. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Vision sensing techniques in aeronautics and astronautics
NASA Technical Reports Server (NTRS)
Hall, E. L.
1988-01-01
The close relationship between sensing and other tasks in orbital space, and the integral role of vision sensing in practical aerospace applications, are illustrated. Typical space mission-vision tasks encompass the docking of space vehicles, the detection of unexpected objects, the diagnosis of spacecraft damage, and the inspection of critical spacecraft components. Attention is presently given to image functions, the 'windowing' of a view, the number of cameras required for inspection tasks, the choice of incoherent or coherent (laser) illumination, three-dimensional-to-two-dimensional model-matching, edge- and region-segmentation techniques, and motion analysis for tracking.
NASA Technical Reports Server (NTRS)
Badler, N. I.
1985-01-01
Human motion analysis is the task of converting actual human movements into computer readable data. Such movement information may be obtained though active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing de-couples the position measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.
NASA Astrophysics Data System (ADS)
Metzger, Philip T.; Lane, John E.; Carilli, Robert A.; Long, Jason M.; Shawn, Kathy L.
2010-07-01
A method combining photogrammetry with ballistic analysis is demonstrated to identify flying debris in a rocket launch environment. Debris traveling near the STS-124 Space Shuttle was captured on cameras viewing the launch pad within the first few seconds after launch. One particular piece of debris caught the attention of investigators studying the release of flame trench fire bricks because its high trajectory could indicate a flight risk to the Space Shuttle. Digitized images from two pad perimeter high-speed 16-mm film cameras were processed using photogrammetry software based on a multi-parameter optimization technique. Reference points in the image were found from 3D CAD models of the launch pad and from surveyed points on the pad. The three-dimensional reference points were matched to the equivalent two-dimensional camera projections by optimizing the camera model parameters using a gradient search optimization technique. Using this method of solving the triangulation problem, the xyz position of the object's path relative to the reference point coordinate system was found for every set of synchronized images. This trajectory was then compared to a predicted trajectory while performing regression analysis on the ballistic coefficient and other parameters. This identified, with a high degree of confidence, the object's material density and thus its probable origin within the launch pad environment. Future extensions of this methodology may make it possible to diagnose the underlying causes of debris-releasing events in near-real time, thus improving flight safety.
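Once the camera models are optimised, recovering the xyz position from a synchronized image pair is a standard linear triangulation. A minimal numpy sketch with two hypothetical pinhole cameras (the matrices and the debris point are invented, and the real analysis optimised full camera models rather than assuming them known):

```python
import numpy as np

def project(P, X):
    """Pinhole projection of world point X by a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point from two synchronized views:
    stack the rays' constraint rows and take the SVD null vector."""
    rows = []
    for P, (u, v) in ((P1, u1), (P2, u2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]
    return X[:3] / X[3]

# Two hypothetical pad-perimeter cameras with a 50 m baseline
P1 = np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
P2 = np.hstack([np.eye(3), [[-50.0], [0.0], [0.0]]])
X_true = np.array([10.0, 4.0, 120.0])      # invented debris position, metres
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

Repeating this for every synchronized frame pair yields the debris trajectory, which can then be regressed against a ballistic model as described above.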
Does a 3D Image Improve Laparoscopic Motor Skills?
Folaranmi, Semiu Eniola; Partridge, Roland W; Brennan, Paul M; Hennessey, Iain A M
2016-08-01
To quantitatively determine whether a three-dimensional (3D) image improves laparoscopic performance compared with a two-dimensional (2D) image. This is a prospective study with two groups of participants: novices (5) and experts (5). Individuals within each group undertook a validated laparoscopic task on a box simulator, alternating between 2D and a 3D laparoscopic image until they had repeated the task five times with each imaging modality. A dedicated motion capture camera was used to determine the time taken to complete the task (seconds) and instrument distance traveled (meters). Among the experts, the mean time taken to perform the task on the 3D image was significantly quicker than on the 2D image, 40.2 seconds versus 51.2 seconds, P < .0001. Among the novices, the mean task time again was significantly quicker on the 3D image, 56.4 seconds versus 82.7 seconds, P < .0001. There was no significant difference in the mean time it took a novice to perform the task using a 3D camera compared with an expert on a 2D camera, 56.4 seconds versus 51.3 seconds, P = .3341. The use of a 3D image confers a significant performance advantage over a 2D camera in quantitatively measured laparoscopic skills for both experts and novices. The use of a 3D image appears to improve a novice's performance to the extent that it is not statistically different from an expert using a 2D image.
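The "instrument distance traveled" metric is, in essence, the summed frame-to-frame displacement of the tracked instrument tip. A small numpy sketch (the circular test path is invented, not the study's motion-capture data):

```python
import numpy as np

def path_length(points):
    """Total distance traveled: sum of Euclidean distances between
    consecutive tracked positions (N x 3 array, metres)."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

# A unit circle traced in 1000 samples should measure close to 2*pi metres
t = np.linspace(0, 2 * np.pi, 1000)
circle = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
length = path_length(circle)
```

Shorter path lengths at equal task completion indicate more economical instrument motion, which is how such metrics separate expert from novice performance.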
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, Aaron M.; DeVore, Matthew S.; Stich, Dominik G.; ...
2018-04-19
Single-molecule fluorescence resonance energy transfer (smFRET) remains a widely utilized and powerful tool for quantifying heterogeneous interactions and conformational dynamics of biomolecules. However, traditional smFRET experiments either are limited to short observation times (typically less than 1 ms) in the case of “burst” confocal measurements or require surface immobilization, which usually has a temporal resolution limited by the camera framing rate. We developed a smFRET 3D tracking microscope that is capable of observing single particles for extended periods of time with high temporal resolution. The confocal tracking microscope utilizes closed-loop feedback to follow the particle in solution by recentering it within two overlapping tetrahedral detection elements, corresponding to donor and acceptor channels. We demonstrated the microscope’s multicolor tracking capability via random walk simulations and experimental tracking of 200 nm fluorescent beads in water with a range of apparent smFRET efficiency values, 0.45-0.69. We also demonstrated the microscope’s capability to track and quantify double-stranded DNA undergoing intramolecular smFRET in a viscous glycerol solution. In future experiments, the smFRET 3D tracking system will be used to study protein conformational dynamics while diffusing in solution and native biological environments with high temporal resolution.
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
NASA Astrophysics Data System (ADS)
Xie, Xingwang; Han, Xinjie; Long, Huabao; Dai, Wanwan; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Wang, Haiwei; Xie, Changsheng
2018-02-01
In this paper, a new liquid-crystal microlens array (LCMLA) with patterned ring-electrode arrays (PREAs) is investigated, which can acquire multiple-mode two-dimensional images with better electrical tunability than common liquid-crystal devices. This type of LCMLA overcomes several notable disadvantages of conventional liquid-crystal microlens arrays, which are switched and adjusted electrically through relatively complex mechanisms. The LCMLA developed by us has two electrode layers: the top layer consists of PREAs with different feature diameters but a common center for each cell, and the bottom layer is a plate electrode. When the two electrodes are driven independently by variable AC voltage signals, a gradient electric field distribution is obtained, which reorients the liquid-crystal molecules along the shaped field and thereby produces the desired refractive-index distribution. Experiments were carried out to validate the required performance. As shown, the focal length of the LCMLA can be adjusted continuously by varying the applied voltage signal. As designed, the LCMLA can be integrated with an image sensor to build a camera with the desired performance. The test results indicate that a camera based on the LCMLA can acquire distinct multiple-mode two-dimensional images at relatively low driving voltages.
Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System
2016-01-01
This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question that we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data captured from the same view as the tracking video. The proposed kernel searches for the shape in this low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and contour and to avoid background clutter. In the experiments, we use a walking human as an example to validate that our method tracks human position and describes human contour accurately and robustly. PMID:27379165
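The core of any mean shift tracker — repeatedly moving a window to the kernel-weighted centroid of nearby samples — can be sketched as follows. This is a generic fixed-bandwidth Gaussian-kernel version, not the adaptive shape kernel the paper constructs:

```python
import numpy as np

def mean_shift(points, start, bandwidth, iters=50):
    """Shift `start` toward the Gaussian-kernel-weighted centroid of
    `points` until convergence -- the basic fixed-kernel mean shift step."""
    pts = np.asarray(points, dtype=float)
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d2 = ((pts - x) ** 2).sum(axis=1)          # squared distances
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
        x_new = (pts * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-8:
            return x_new
        x = x_new
    return x
```

The paper's contribution is to replace the fixed circular kernel above with a shape learned on a low-dimensional manifold, so the weights follow the object contour rather than a disc.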
Three-dimensional single-mode nonlinear ablative Rayleigh-Taylor instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, R.; Aluie, H.; Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14627
The nonlinear evolution of the single-mode ablative Rayleigh-Taylor instability is studied in three dimensions. As the mode wavelength approaches the cutoff of the linear spectrum (short-wavelength modes), it is found that the three-dimensional (3D) terminal bubble velocity greatly exceeds both the two-dimensional (2D) value and the classical 3D bubble velocity. Unlike in 2D, the 3D short-wavelength bubble velocity does not saturate. The growing 3D bubble acceleration is driven by the unbounded accumulation of vorticity inside the bubble. The vorticity is transferred by mass ablation from the Rayleigh-Taylor spikes to the ablated plasma filling the bubble volume.
Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images
NASA Technical Reports Server (NTRS)
Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.
2011-01-01
A stereo correlation method on the object domain is proposed to generate accurate and dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain, with the predefined surface normal of a post, rather than in the image domain. The squared error of the back-projected images on the local terrain is minimized with respect to the post elevation. This one-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
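The one-dimensional minimization over the post elevation can be solved with any bracketing method; a golden-section search is a simple, derivative-free choice. This is a generic sketch — the quadratic cost below is a stand-in for the squared back-projection error, not ASP's actual objective:

```python
def golden_section_min(f, lo, hi, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal scalar
    function f on [lo, hi] -- the kind of 1-D solve used per post."""
    invphi = (5 ** 0.5 - 1) / 2          # 1/phi ~= 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                       # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                             # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

Each iteration reuses one of the two interior function evaluations, so the bracket shrinks by a factor of about 0.618 per cost-function call.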
Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera
NASA Astrophysics Data System (ADS)
Trichopoulos, Georgios C.; Sertel, Kubilay
2015-07-01
We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.
3D Medical Collaboration Technology to Enhance Emergency Healthcare
Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.
2009-01-01
Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951
Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras
NASA Astrophysics Data System (ADS)
Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.
2017-02-01
Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game, there is a large audience, some of whom move behind the flying shuttlecock; this constitutes a kind of background noise and makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with the two high-speed cameras.
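Once position and velocity are measured, the landing point can be predicted from a ballistic model. The sketch below neglects drag — a strong simplification, since a real shuttlecock is drag-dominated — so it only illustrates the prediction step, not the robot's actual model:

```python
import math

def landing_point(p, v, g=9.81):
    """Predict where a projectile at position p (m) with velocity v (m/s)
    crosses the ground plane z = 0. Drag is neglected (a simplification;
    a real shuttlecock decelerates strongly in flight)."""
    px, py, pz = p
    vx, vy, vz = v
    # Solve pz + vz*t - 0.5*g*t^2 = 0 for the positive root t.
    t = (vz + math.sqrt(vz * vz + 2.0 * g * pz)) / g
    return (px + vx * t, py + vy * t, t)
```

The function returns the horizontal impact point and the time of flight, which is what the robot needs to decide whether it can reach the shuttlecock in time.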
Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera
NASA Astrophysics Data System (ADS)
Cruz Perez, Carlos; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor
2015-09-01
Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.
Midhun Mohan; Carlos Alberto Silva; Carine Klauberg; Prahlad Jat; Glenn Catts; Adrian Cardil; Andrew Thomas Hudak; Mahendra Dia
2017-01-01
Advances in Unmanned Aerial Vehicle (UAV) technology and data processing capabilities have made it feasible to obtain high-resolution imagery and three-dimensional (3D) data which can be used for forest monitoring and assessing tree attributes. This study evaluates the applicability of low-cost consumer-grade cameras attached to UAVs and structure-from-motion (SfM)...
Space trajectory calculation based on G-sensor
NASA Astrophysics Data System (ADS)
Xu, Biya; Zhan, Yinwei; Shao, Yang
2017-08-01
At present, most research in the field of human body posture recognition uses cameras or portable acceleration sensors to collect data, without making full use of the mobile phones around us. In this paper, the G-sensor built into a mobile phone is used to collect data. After processing the data with a moving-average filter and acceleration integration, the three-dimensional spatial coordinates of joint points can be obtained accurately.
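The processing chain named above — smooth the accelerometer samples, then integrate twice to get position — can be sketched as follows. This is a minimal illustration; a real implementation would also need gravity removal and drift correction, which double integration amplifies:

```python
def moving_average(xs, k):
    """Simple k-point moving-average filter (edges truncated)."""
    half = k // 2
    return [sum(xs[max(0, i - half): i + half + 1]) /
            len(xs[max(0, i - half): i + half + 1])
            for i in range(len(xs))]

def integrate(samples, dt):
    """Trapezoidal cumulative integration of uniformly sampled data."""
    out = [0.0]
    for a, b in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

def accel_to_position(acc, dt):
    """Acceleration -> velocity -> position along one axis."""
    vel = integrate(acc, dt)
    return integrate(vel, dt)
```

For a constant acceleration a over time t this reproduces the analytic result s = a·t²/2, which makes it easy to sanity-check before feeding in real G-sensor data.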
A fuzzy structural matching scheme for space robotics vision
NASA Technical Reports Server (NTRS)
Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka
1994-01-01
In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the following low level matching process. Three dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.
Clustering method for counting passengers getting in a bus with single camera
NASA Astrophysics Data System (ADS)
Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying
2010-03-01
Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in highly crowded situations at the entrance of a transit bus. The unique characteristics of the proposed system are as follows. First, a novel feature-point-tracking and online-clustering-based passenger counting framework performs much better than background-modeling and foreground-blob-tracking methods. Second, a simple and highly accurate clustering algorithm is developed that projects the high-dimensional feature-point trajectories into a 2D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment were captured from a real transit bus in Shanghai, China. The results show that the system can process two 320×240 video sequences simultaneously at a frame rate of 25 fps, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves accuracy rates of up to 96.5%.
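The clustering idea — reduce each feature-point trajectory to its (appearance, disappearance) time pair, then merge trajectories with similar signatures into one person — can be sketched with a greedy online merge. The threshold and data layout here are hypothetical stand-ins for the paper's algorithm:

```python
def count_passengers(trajectories, gap=1.0):
    """Each trajectory is a time-ordered list of (t, x, y) observations.
    Reduce it to its (appearance, disappearance) signature, then greedily
    merge signatures within `gap` seconds of an existing cluster centroid.
    Returns the number of clusters, i.e. the passenger count."""
    sigs = [(tr[0][0], tr[-1][0]) for tr in trajectories]
    clusters = []  # each cluster: [appear_centroid, disappear_centroid, n]
    for a, d in sigs:
        for c in clusters:
            ca, cd, n = c
            if abs(a - ca) <= gap and abs(d - cd) <= gap:
                c[0] = (ca * n + a) / (n + 1)   # update running centroids
                c[1] = (cd * n + d) / (n + 1)
                c[2] = n + 1
                break
        else:
            clusters.append([a, d, 1])
    return len(clusters)
```

The appeal of the 2D (appear, disappear) projection is that feature points riding on the same person enter and leave the camera's view together, so they collapse onto one cluster regardless of where on the body they were tracked.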
Agreement and reading time for differently-priced devices for the digital capture of X-ray films.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-03-01
We assessed the reliability of three digital capture devices: a film digitizer (which cost US $15,000), a flat-bed scanner (US $1800) and a digital camera (US $450). Reliability was measured as the agreement between six observers when reading images acquired from a single device and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading-time between devices: the mean reading-time for the film digitizer was 93 s, it was 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.
High dynamic range CMOS (HDRC) imagers for safety systems
NASA Astrophysics Data System (ADS)
Strobel, Markus; Döttling, Dietmar
2013-04-01
The first part of this paper describes the high dynamic range CMOS (HDRC®) imager, a special type of CMOS image sensor with logarithmic response. The key property of high dynamic range (HDR) image acquisition is detailed through the mathematical definition and measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters are discussed, including the pixel design for the global shutter readout. The second part outlines the applications and requirements of cameras for industrial safety. Equipped with HDRC global shutter sensors, SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring, enabling new and more flexible solutions compared with existing safety guards.
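The logarithmic response behind the sensor's dynamic range can be modeled as an idealized OECF of the form code = a + b·log10(L): every decade of luminance maps to a fixed code increment, so many decades fit in a small output range. The constants below are purely illustrative, not measured HDRC parameters:

```python
import math

def oecf_log(luminance, a=20.0, b=40.0):
    """Idealized logarithmic opto-electronic conversion function:
    output code = a + b*log10(L). Constants a, b are illustrative only."""
    return a + b * math.log10(luminance)
```

With b = 40 codes per decade, six decades of scene luminance (a 10^6:1 ratio) compress into just 240 code values — the essence of why a logarithmic pixel achieves HDR without multiple exposures.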
NASA Astrophysics Data System (ADS)
Liu, Z. X.; Xu, X. Q.; Gao, X.; Xia, T. Y.; Joseph, I.; Meyer, W. H.; Liu, S. C.; Xu, G. S.; Shao, L. M.; Ding, S. Y.; Li, G. Q.; Li, J. G.
2014-09-01
Experimental measurements of edge localized modes (ELMs) observed on the EAST experiment are compared to linear and nonlinear theoretical simulations of peeling-ballooning modes using the BOUT++ code. Simulations predict that the dominant toroidal mode number of the ELM instability becomes larger for lower current, which is consistent with the mode structure captured with visible light using an optical CCD camera. The poloidal mode number of the simulated pressure perturbation shows good agreement with the filamentary structure observed by the camera. The nonlinear simulation is also consistent with the experimentally measured energy loss during an ELM crash and with the radial speed of ELM effluxes measured using a gas puffing imaging diagnostic.
Experimental study of 3-D structure and evolution of foam
NASA Astrophysics Data System (ADS)
Thoroddsen, S. T.; Tan, E.; Bauer, J. M.
1998-11-01
Liquid foam coarsens due to diffusion of gas between adjacent foam cells. This evolution process is slow, but it leads to rapid topological changes during localized rearrangements of Plateau borders or the disappearance of small cells. We are developing a new imaging technique to reconstruct the three-dimensional topology of real soap foam contained in a small glass container. The technique uses three video cameras equipped with lenses having a narrow depth of field. These cameras are moved with respect to the container, in effect obtaining numerous slices through the foam. Preliminary experimental results showing typical rearrangement events will also be presented. These events involve, for example, the disappearance of triangular or rectangular cell faces.
Three-dimensional charge transport in organic semiconductor single crystals.
He, Tao; Zhang, Xiying; Jia, Jiong; Li, Yexin; Tao, Xutang
2012-04-24
Three-dimensional charge transport anisotropy in organic semiconductor single crystals - both plates and rods (above and below, respectively, in the figure) - is measured in well-performing organic field-effect transistors for the first time. The results provide an excellent model for molecular design and device preparation that leads to good performance. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Turbine endwall single cylinder program
NASA Technical Reports Server (NTRS)
Langston, L. S.
1982-01-01
Detailed measurements of the flow field in front of a large-scale single cylinder mounted in a wind tunnel are discussed. A better understanding is sought of the three-dimensional separation occurring in front of the cylinder on the endwall, and of the vortex system that is formed. A data base with which to check analytical and numerical computer models of three-dimensional flows is also anticipated.
SU-C-207A-03: Development of Proton CT Imaging System Using Thick Scintillator and CCD Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, S; Uesaka, M; Nishio, T
2016-06-15
Purpose: In the treatment planning of proton therapy, the Water Equivalent Length (WEL), which is the parameter used to calculate dose and proton range, is derived from the X-ray CT (xCT) image by xCT-WEL conversion. However, an error of about a few percent in the accuracy of proton range calculation through this conversion has been reported. The purpose of this study is to construct a proton CT (pCT) imaging system for evaluation of this error. Methods: The pCT imaging system was constructed with a thick scintillator and a cooled CCD camera, which acquires the two-dimensional image of the integrated value of the scintillation light toward the beam direction. The pCT image is reconstructed by the FBP method using a correction between the light intensity and the residual range of the proton beam. An experiment to demonstrate this system was performed with a 70-MeV proton beam provided by the NIRS cyclotron. The pCT images of several objects reconstructed from the experimental data were evaluated quantitatively. Results: Three-dimensional pCT images of several objects were reconstructed experimentally. A fine structure of approximately 1 mm was clearly observed. The position resolution of the pCT image was almost the same as that of the xCT image, and the error of the pCT pixel value was up to 4%. The deterioration of image quality was caused mainly by the effect of multiple Coulomb scattering. Conclusion: We designed and constructed a pCT imaging system using a thick scintillator and a CCD camera, and evaluated it in an experiment using a 70-MeV proton beam. Three-dimensional pCT images of several objects were acquired by the system. This work was supported by JST SENTAN Grant Number 13A1101 and JSPS KAKENHI Grant Number 15H04912.
Plyku, Donika; Loeb, David M.; Prideaux, Andrew R.; Baechler, Sébastien; Wahl, Richard L.; Sgouros, George
2015-01-01
Purpose: Dosimetric accuracy depends directly upon the accuracy of the activity measurements in tumors and organs. The authors present the methods and results of a retrospective tumor dosimetry analysis in 14 patients with a total of 28 tumors treated with high activities of 153Sm-ethylenediaminetetramethylenephosphonate (153Sm-EDTMP) for therapy of metastatic osteosarcoma using planar images, and compare the results with three-dimensional dosimetry. Materials and Methods: Analysis of phantom data provided a complete set of parameters for the dosimetric calculations, including buildup factor, attenuation coefficient, and camera dead-time compensation. The latter was obtained using a previously developed methodology that accounts for the relative motion of the camera and patient during whole-body (WB) imaging. Tumor activity values calculated from the anterior and posterior views of WB planar images of patients treated with 153Sm-EDTMP for pediatric osteosarcoma were compared with the geometric mean value. The mean activities were integrated over time and tumor-absorbed doses were calculated using the software package OLINDA/EXM. Results: The authors found it necessary to employ the dead-time correction algorithm to prevent measured tumor activity half-lives from exceeding the physical decay half-life of 153Sm; such long measured half-lives are unquestionably in error. Tumor-absorbed doses varied between 0.0022 and 0.27 cGy/MBq with an average of 0.065 cGy/MBq; however, a comparison with absorbed dose values derived from a three-dimensional analysis for the same tumors showed no correlation, and the ratio of the three-dimensional absorbed dose value to the planar absorbed dose value was 2.19. From the anterior and posterior activity comparisons, the clinical uncertainty of activity and dose calculations from WB planar images, with the present methodology, is hypothesized to be about 70%.
Conclusion: The dosimetric results from clinical patient data indicate that absolute planar dosimetry is unreliable and dosimetry using three-dimensional imaging is preferable, particularly for tumors, except perhaps for the most sophisticated planar methods. The relative activity and patient kinetics derived from planar imaging show a greater level of reliability than the dosimetry. PMID:26560193
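The anterior/posterior geometric-mean (conjugate-view) activity estimate underlying planar dosimetry can be sketched as follows. The attenuation coefficient, thickness, and calibration values in the test are illustrative, and the buildup and dead-time corrections that the study shows are essential are omitted here for brevity:

```python
import math

def conjugate_view_activity(counts_ant, counts_post, mu, thickness, cal_factor):
    """Conjugate-view activity estimate from anterior and posterior counts.
    mu: effective linear attenuation coefficient (1/cm);
    thickness: patient thickness along the view axis (cm);
    cal_factor: camera sensitivity (counts/s per MBq in air).
    Buildup and source-thickness corrections are omitted."""
    transmission = math.exp(-mu * thickness)
    return math.sqrt(counts_ant * counts_post / transmission) / cal_factor
```

The geometric mean is used because, for a point source at depth d, the anterior count falls as e^(-mu·d) and the posterior count as e^(-mu·(t-d)); their product depends only on the total thickness t, so the estimate is independent of the unknown source depth.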
Dey, B.; Ratcliff, B.; Va’vra, J.
2017-02-16
In this article, we explore the angular resolution limits attainable in small FDIRC designs taking advantage of the new highly pixelated detectors that are now available. Since the basic FDIRC design concept attains its particle separation performance mostly in the angular domain as measured by two-dimensional pixels, this paper relies primarily on a pixel-based analysis, with additional chromatic corrections using the time domain, requiring single photon timing resolution at a level of only 100–200 ps. This approach differs from other modern DIRC design concepts such as the TOP or TORCH detectors, whose separation performances rely more strongly on time-dependent analyses. In conclusion, we find excellent single photon resolution with a geometry where individual bars are coupled to a single plate, which is coupled in turn to a cylindrical lens focusing camera.
Effects of camera location on the reconstruction of 3D flare trajectory with two cameras
NASA Astrophysics Data System (ADS)
Özsaraç, Seçkin; Yeşilkaya, Muhammed
2015-05-01
Flares are used as valuable electronic warfare assets in the battle against infrared-guided missiles. The trajectory of the flare is one of the most important factors determining the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, determination of corresponding pixels between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate, by simulation, the effects of camera placement on flare trajectory estimation performance. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image-plane coordinates of the flare on both cameras are computed using the field-of-view (FOV) values. To increase the fidelity of the simulation, we use two sources of error. The first models the uncertainties in the determination of the camera view vectors, i.e., the orientations of the cameras are measured with noise. The second models the imperfections in determining the corresponding flare pixels between the two cameras. Finally, the 3D position of the flare is estimated by triangulation from the corresponding pixel indices, the view vectors, and the FOVs of the cameras. All of these steps are repeated for different relative camera placements so that the optimum estimation-error performance is found for the given aircraft and flare trajectories.
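The triangulation step described above, recovering a 3D point from two camera centers and view rays, can be sketched with the classic midpoint method: find the shortest segment between the two (possibly skew) rays and take its midpoint. This is a generic illustration, not the authors' exact implementation, and the camera positions and flare location below are made up:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + t1*d1 and c2 + t2*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = c2 - c1
    d1d2 = d1 @ d2
    denom = 1.0 - d1d2**2            # zero only if the rays are parallel
    t1 = (b @ d1 - (b @ d2) * d1d2) / denom
    t2 = ((b @ d1) * d1d2 - b @ d2) / denom
    p1 = c1 + t1 * d1                # closest point on ray 1
    p2 = c2 + t2 * d2                # closest point on ray 2
    return 0.5 * (p1 + p2)

# Two hypothetical cameras 1 km apart observing a flare at (0, 0, 1000) m
flare = np.array([0.0, 0.0, 1000.0])
c1, c2 = np.array([-500.0, 0.0, 0.0]), np.array([500.0, 0.0, 0.0])
est = triangulate_midpoint(c1, flare - c1, c2, flare - c2)  # recovers the flare position
```

With noisy view vectors, as in the paper's simulation, the midpoint no longer coincides with the true point, and the residual depends on the camera geometry.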
Cooperative single-photon subradiant states in a three-dimensional atomic array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jen, H.H., E-mail: sappyjen@gmail.com
2016-11-15
We propose complete superradiant and subradiant states that can be manipulated and prepared in a three-dimensional atomic array. These subradiant states can be realized by absorbing a single photon and imprinting spatially-dependent phases on the atomic system. We find that the collective decay rates and the associated cooperative Lamb shifts depend strongly on the imprinted phases, and that subradiant states of long lifetime can be found for various lattice spacings and atom numbers. We also investigate both optically thin and optically thick atomic arrays, which can serve for systematic studies of super- and sub-radiance. Our proposal offers an alternative scheme for quantum memory of light in a three-dimensional array of two-level atoms, which is applicable and potentially advantageous in quantum information processing. - Highlights: • Cooperative single-photon subradiant states in a three-dimensional atomic array. • Subradiant state manipulation via spatially-increasing phase imprinting. • Quantum storage of light in the subradiant state in two-level atoms.
Vemuri, Anant S; Wu, Jungle Chi-Hsiang; Liu, Kai-Che; Wu, Hurng-Sheng
2012-12-01
Surgical procedures have undergone considerable advancement during the last few decades. More recently, the intraoperative availability of some imaging methods has added a new dimension to minimally invasive techniques. Augmented reality in surgery has been a topic of intense interest and research. Augmented reality involves the use of computer vision algorithms on video from endoscopic cameras, or from cameras mounted in the operating room, to provide the surgeon with additional information that he or she otherwise would have to recognize intuitively. One such technique combines a virtual preoperative model of the patient with the endoscope camera, using natural or artificial landmarks, to provide an augmented reality view in the operating room. The authors' approach is to provide this with the fewest possible changes to the operating room. A software architecture is presented to provide interactive adjustment in the registration of a three-dimensional (3D) model and endoscope video. Augmented reality was used to perform 12 surgeries, including adrenalectomy, ureteropelvic junction obstruction, retrocaval ureter, and pancreatic procedures. The general feedback from the surgeons has been very positive, not only in terms of deciding the insertion points but also in knowing the least change in anatomy. The approach involves providing a deformable 3D model architecture and its application to the operating room. A 3D model with a deformable structure is needed to show the shape change of soft tissue during the surgery. The software architecture to provide interactive adjustment in the registration of the 3D model and endoscope video, with adjustability of every 3D model, is presented.
A three dimensional point cloud registration method based on rotation matrix eigenvalue
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Xiang; Fei, Zixuan; Gao, Xiaofei; Jin, Rui
2017-09-01
In traditional optical three-dimensional measurement, an object usually must be measured from multiple angles because of occlusions, and point cloud registration methods are then used to obtain the complete three-dimensional shape of the object. Point cloud registration based on a turntable essentially requires calculating the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. In the traditional method, this transformation matrix is calculated by fitting the rotation center and the rotation-axis normal of the turntable, which is limited by the measurement field of view. The exact feature points used for fitting the rotation center and the rotation-axis normal are distributed over an arc of less than about 120 degrees, resulting in low fitting accuracy. In this paper, we propose a better method, based on the principle that the eigenvalues of the rotation matrix are invariant in the turntable coordinate system, and on the coordinate transformation matrix of corresponding coordinate points. First, we control the rotation angle of a calibration plate with the turntable and calibrate the coordinate transformation matrix of the corresponding coordinate points using the least-squares method. We then use eigendecomposition to calculate the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. Compared with the traditional method, this approach has higher accuracy and better robustness, and it is not affected by the camera field of view. With this method, the coincidence error of corresponding points on the calibration plate after registration is less than 0.1 mm.
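The key eigenvalue property behind this kind of method is that a 3×3 rotation matrix always has the eigenvalue 1, and the corresponding eigenvector is the rotation axis, which is invariant under the turntable rotation no matter which frame the matrix is expressed in. A minimal sketch (not the authors' calibration pipeline; the tilted axis below is hypothetical):

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix for angle theta about a unit axis (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rotation_axis(R):
    """The eigenvector of R with eigenvalue 1 is the rotation axis."""
    w, v = np.linalg.eig(R)
    k = np.argmin(np.abs(w - 1.0))     # pick the eigenvalue closest to 1
    axis = np.real(v[:, k])
    return axis / np.linalg.norm(axis)

# Hypothetical tilted turntable axis, rotating 30 degrees per step
true_axis = np.array([0.2, 0.3, 0.9])
true_axis /= np.linalg.norm(true_axis)
R = rodrigues(true_axis, np.deg2rad(30.0))
axis = rotation_axis(R)                # recovers the axis up to sign
```

In a real calibration, R would be estimated from corresponding points on the calibration plate before and after a known turntable rotation, rather than constructed directly.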
Three-dimensional microscopic deformation measurements on cellular solids.
Genovese, K
2016-07-01
The increasing interest in small-scale problems demands novel experimental protocols providing dense sets of 3D deformation data of complex-shaped microstructures. Obtaining such information is particularly significant for the study of natural and engineered cellular solids, for which experimental data collected at the macro scale and describing the global mechanical response provide only limited information on their function/structure relationship. Cellular solids, in fact, owe their superior mechanical performance to a unique arrangement of the bulk material properties (i.e. anisotropy and heterogeneity) and cell structural features (i.e. pore shape, size and distribution) at the micro- and nano-scales. To address the need for full-field experimental data down to the cell level, this paper proposes a single-camera stereo-Digital Image Correlation (DIC) system that uses a wedge prism in series with a telecentric lens to perform surface shape and deformation measurements on microstructures in three dimensions. Although the system possesses a limited measurement volume (FOV ~2.8×4.3 mm², error-free DOF ~1 mm), large surface areas of cellular samples can be accurately covered by employing a sequential image-capturing scheme followed by an optimization-based mosaicing procedure. The basic principles of the proposed method, together with the results of the benchmarking of its metrological performance and an error analysis, are reported and discussed in detail. Finally, the potential utility of this method is illustrated with micro-resolution three-dimensional measurements on a 3D-printed honeycomb and on a block sample of a Luffa sponge under compression. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hu, Qing; Jin, Dafei; Xiao, Jun; ...
2017-09-05
Two-dimensional molecular aggregate (2DMA), a thin sheet of strongly interacting dipole molecules self-assembled at close distance on an ordered lattice, is a fascinating fluorescent material. It is distinctively different from conventional (single or colloidal) dye molecules and quantum dots. Here we verify that when a 2DMA is placed at a nanometric distance from a metallic substrate, the strong and coherent interaction between the dipoles inside the 2DMA dominates its fluorescent decay at a picosecond timescale. Our streak-camera lifetime measurement and interacting lattice-dipole calculation reveal that the metal-mediated dipole-dipole interaction shortens the fluorescent lifetime to about one-half and increases the energy dissipation rate by 10 times that expected from the noninteracting single-dipole picture. In conclusion, our finding can enrich our understanding of nanoscale energy transfer in molecular excitonic systems and may designate a unique direction for developing fast and efficient optoelectronic devices.
Synchronous infrared imaging methods to characterize thermal properties of materials
NASA Astrophysics Data System (ADS)
Ouyang, Zhong
1999-11-01
A fundamental thermal property of a material is its thermal conductivity. The current state of the art for measurement of thermal conductivity is inadequate, especially in the case of composite materials. This dissertation addresses the need for a rapid and accurate measurement of thermal conductivity that can provide values for three orthogonal directions in a single measurement. The theoretical approach is based on three-dimensional thermal wave propagation and scattering treatments developed earlier at Wayne State University. The experimental approach makes use of a state-of-the-art focal-plane-array infrared camera, which is used to follow the time and spatial progression of a planar heat pulse on both surfaces of the slab. The method has been used to determine the thermal diffusivity of six pure elemental single-crystal materials (Cu, Ti, Bi, Al, Ag, Pb). The results are in good agreement (better than 1%) with the diffusivities calculated from handbook values. The diffusivities of some alloys and a unidirectional graphite-fiber-reinforced-polymer composite were also determined by this method. As a byproduct of one of the experimental approaches, measuring the IR radiation from the heated surface, direct evidence is obtained for the presence of a thermal wave "echo". The theory and confirming measurements in this dissertation represent the first clear confirmation of this phenomenon. A second experimental method studied in this dissertation, which may also be used to characterize the thermal properties of materials, is lock-in thermal wave imaging. In this technique, pioneered earlier at Wayne State University, a periodic heat source is applied to the surface of the material, and synchronous, phase-sensitive detection of the IR radiation from that surface is used to determine the effects of thermal wave propagation to subsurface features, and the effects of thermal waves reflected from those features on the observed IR radiation from the surface.
The rationale for re-visiting this technique is the availability of the focal-plane-array IR camera, with its "snapshot" capability, its high spatial resolution, and its high pixel rate. A lock-in imaging method is developed for use with this camera, which can be used at frequencies that considerably exceed the maximum frame rate, with illustrative applications to characterize the thermal properties of printed circuits and electronic packages.
NASA Astrophysics Data System (ADS)
Hong, Jun
2006-02-01
A three-dimensional supramolecular compound, [Zn(INO)2(DMF)]·DMF (1) (INO = isonicotinic acid N-oxide), has been prepared in DMF solution at room temperature and characterized by elemental analysis, TG and single-crystal X-ray diffraction. The three-dimensional supramolecular open framework of 1 contains rectangular channels with dimensions of 9.02×10.15 Å, assembled from one-dimensional helical chains via hydrogen-bonding and π-π stacking interactions. Furthermore, compound 1 shows blue photoluminescence at room temperature.
Whitcomb, Mary Beth; Doval, John; Peters, Jason
2011-01-01
Ultrasonography has gained increased utility for diagnosing pelvic fractures in horses; however, internal pelvic contours can be difficult to appreciate from external palpable landmarks. We developed three-dimensional (3D) simulations of the pelvic ultrasonographic examination to assist with translation of pelvic contours into two-dimensional (2D) images. Contiguous 1 mm transverse computed tomography (CT) images were acquired through an equine femur and hemipelvis using a single-slice helical scanner. 3D surface models were created using a DICOM reader and imported into a 3D modeling and animation program. The bone models were combined with a purchased 3D horse model, and the skin was made translucent to visualize pelvic surface contours. 3D models of ultrasound transducers were made from reference photos, and a thin sector shape was created to depict the ultrasound beam. Ultrasonographic examinations were simulated by moving transducers on the skin surface and rectally to produce images of pelvic structures. Camera angles were manipulated to best illustrate the transducer-beam-bone interface. Fractures were created in multiple configurations. Animations were exported as QuickTime movie files for use in presentations coupled with corresponding ultrasound videoclips. 3D models provide a link between ultrasonographic technique and image generation by depicting the interaction of the transducer, ultrasound beam, and structure of interest. The horse model was important to facilitate understanding of the location of pelvic structures relative to the skin surface. While CT acquisition time was brief, manipulation within the 3D software program was time intensive. Results were worthwhile from an instructional standpoint based on user feedback. © 2011 Veterinary Radiology & Ultrasound.
Analysis of Performance of Stereoscopic-Vision Software
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert
2007-01-01
A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error, but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
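The quoted disparity error can be translated into a down-range error through the standard stereo model Z = fB/d, which by error propagation gives σ_Z ≈ (Z²/fB)·σ_d. A quick sketch with illustrative parameters (only the σ_d = 0.32 pixel figure comes from the analysis above; the focal length and baseline are assumed, not those of the JPL system):

```python
# Down-range error from stereo disparity error: Z = f*B/d implies
# sigma_Z ~= (Z**2 / (f * B)) * sigma_d, so range error grows quadratically.
f_px = 1000.0        # focal length in pixels (assumed)
B = 0.3              # stereo baseline in meters (assumed)
sigma_d = 0.32       # disparity std-dev in pixels (from the analysis)

for Z in (2.0, 5.0, 10.0):                 # target range in meters
    sigma_z = Z**2 / (f_px * B) * sigma_d  # down-range error in meters
    print(f"Z = {Z:4.1f} m -> sigma_Z = {sigma_z * 100:.1f} cm")
```

The quadratic growth with range is why near-field terrain is reconstructed far more accurately than distant terrain, independent of calibration quality.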
Three-Dimensional Imaging by Self-Reference Single-Channel Digital Incoherent Holography
Rosen, Joseph; Kelner, Roy
2016-01-01
Digital holography offers a reliable and fast method to image a three-dimensional scene from a single perspective. This article reviews recent developments of self-reference single-channel incoherent hologram recorders. Hologram recorders in which both interfering beams, commonly referred to as the signal and the reference beams, originate from the same observed objects are considered as self-reference systems. Moreover, the hologram recorders reviewed herein are configured in a setup of a single channel interferometer. This unique configuration is achieved through the use of one or more spatial light modulators. PMID:28757811
Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R
2018-05-21
Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth-camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth-camera-based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth-camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth-camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth-camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth-camera-based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Renaud, Olivier; Heintzmann, Rainer; Sáez-Cirión, Asier; Schnelle, Thomas; Mueller, Torsten; Shorte, Spencer
2007-02-01
Three dimensional imaging provides high-content information from living intact biology, and can serve as a visual screening cue. In the case of single cell imaging the current state of the art uses so-called "axial through-stacking". However, three-dimensional axial through-stacking requires that the object (i.e. a living cell) be adherently stabilized on an optically transparent surface, usually glass; evidently precluding use of cells in suspension. Aiming to overcome this limitation we present here the utility of dielectric field trapping of single cells in three-dimensional electrode cages. Our approach allows gentle and precise spatial orientation and vectored rotation of living, non-adherent cells in fluid suspension. Using various modes of widefield, and confocal microscope imaging we show how so-called "microrotation" can provide a unique and powerful method for multiple point-of-view (three-dimensional) interrogation of intact living biological micro-objects (e.g. single-cells, cell aggregates, and embryos). Further, we show how visual screening by micro-rotation imaging can be combined with micro-fluidic sorting, allowing selection of rare phenotype targets from small populations of cells in suspension, and subsequent one-step single cell cloning (with high-viability). Our methodology combining high-content 3D visual screening with one-step single cell cloning, will impact diverse paradigms, for example cytological and cytogenetic analysis on haematopoietic stem cells, blood cells including lymphocytes, and cancer cells.
Single chip camera active pixel sensor
NASA Technical Reports Server (NTRS)
Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)
2003-01-01
A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.
Dual-color three-dimensional STED microscopy with a single high-repetition-rate laser
Han, Kyu Young; Ha, Taekjip
2016-01-01
We describe a dual-color three-dimensional stimulated emission depletion (3D-STED) microscopy employing a single laser source with a repetition rate of 80 MHz. Multiple excitation pulses synchronized with a STED pulse were generated by a photonic crystal fiber and the desired wavelengths were selected by an acousto-optic tunable filter with high spectral purity. Selective excitation at different wavelengths permits simultaneous imaging of two fluorescent markers at a nanoscale resolution in three dimensions. PMID:26030581
Resonance fluorescence based two- and three-dimensional atom localization
NASA Astrophysics Data System (ADS)
Wahab, Abdul; Rahmatullah; Qamar, Sajid
2016-06-01
Two- and three-dimensional atom localization in a two-level atom-field system via resonance fluorescence is suggested. For the two-dimensional localization, the atom interacts with two orthogonal standing-wave fields, whereas for the three-dimensional atom localization, the atom interacts with three orthogonal standing-wave fields. The effect of the detuning and phase shifts associated with the corresponding standing-wave fields is investigated. A precision enhancement in position measurement of the single atom can be noticed via the control of the detuning and phase shifts.
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
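Recovering depth between discrete focal planes can be illustrated generically by interpolating a focus metric measured at neighboring planes. The sketch below is an illustration only: the cited method's estimator is based on analyzing the multi-focal images themselves, not on this simple parabolic fit, and all numbers are synthetic:

```python
import numpy as np

def subplane_depth(z_planes, sharpness):
    """Depth of the peak of a focus metric sampled at three focal planes,
    found by fitting the unique parabola through the three samples."""
    z1, z2, z3 = z_planes
    s1, s2, s3 = sharpness
    denom = (z1 - z2) * (z1 - z3) * (z2 - z3)
    a = (z3 * (s2 - s1) + z2 * (s1 - s3) + z1 * (s3 - s2)) / denom
    b = (z3**2 * (s1 - s2) + z2**2 * (s3 - s1) + z1**2 * (s2 - s3)) / denom
    return -b / (2.0 * a)          # vertex of the fitted parabola

# Synthetic focus metric peaked at 1.7 um, sampled at three focal planes
peak = 1.7
z = np.array([0.0, 1.5, 3.0])       # focal-plane depths in micrometers
s = -(z - peak) ** 2 + 5.0          # quadratic focus metric (synthetic)
depth = subplane_depth(z, s)        # recovers the 1.7 um peak
```

In practice the focus metric is noisy and only approximately quadratic near its peak, so the attainable depth resolution is set by the photon flux, as the minimum-variance-bound analysis in the paper quantifies.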
A Three-Line Stereo Camera Concept for Planetary Exploration
NASA Technical Reports Server (NTRS)
Sandau, Rainer; Hilbert, Stefan; Venus, Holger; Walter, Ingo; Fang, Wai-Chi; Alkalai, Leon
1997-01-01
This paper presents a low-weight stereo camera concept for planetary exploration. The camera uses three CCD lines within the image plane of one single objective. Some of the main features of the camera include: focal length 90 mm, FOV 18.5 deg, IFOV 78 µrad, convergence angles ±10 deg, radiometric dynamics 14 bit, weight 2 kg, and power consumption 12.5 W. From an orbit altitude of 250 km, the ground pixel size is 20 m x 20 m and the swath width is 82 km. The CCD line data are buffered in the camera's internal mass memory of 1 Gbit. After radiometric correction and application-dependent preprocessing, the data are compressed and ready for downlink. Owing to the aggressive application of advanced microelectronics and innovative optics, the low mass and power budgets of 2 kg and 12.5 W are achieved while still maintaining high performance. The design of the proposed light-weight camera is also general purpose enough to be applicable to other planetary missions, such as the exploration of Mars, Mercury, and the Moon. Moreover, it is an example of excellent international collaboration on advanced technology concepts developed at DLR, Germany, and NASA's Jet Propulsion Laboratory, USA.
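The quoted ground coverage follows directly from the stated optics and orbit altitude: swath ≈ 2h·tan(FOV/2) and ground pixel ≈ h·IFOV. A quick consistency check:

```python
import math

# Ground coverage from orbit altitude and angular optics parameters
h = 250e3                       # orbit altitude in meters
fov = math.radians(18.5)        # full field of view
ifov = 78e-6                    # instantaneous field of view in radians

swath_km = 2 * h * math.tan(fov / 2) / 1e3   # flat-ground approximation
gsd_m = h * ifov                             # ground sample distance
print(f"swath = {swath_km:.1f} km, ground pixel = {gsd_m:.1f} m")
# ~81 km swath and 19.5 m pixel, consistent with the quoted 82 km and 20 m
```

The small remaining discrepancy comes from rounding in the quoted figures and from neglecting planetary curvature in the flat-ground approximation.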
Multiview hyperspectral topography of tissue structural and functional characteristics
NASA Astrophysics Data System (ADS)
Liu, Peng; Huang, Jiwei; Zhang, Shiwu; Xu, Ronald X.
2016-01-01
Accurate and in vivo characterization of the structural, functional, and molecular characteristics of biological tissue will facilitate quantitative diagnosis, therapeutic guidance, and outcome assessment in many clinical applications, such as wound healing, cancer surgery, and organ transplantation. We introduced and tested a multiview hyperspectral imaging technique for noninvasive topographic imaging of cutaneous wound oxygenation. The technique integrated a multiview module and a hyperspectral module in a single portable unit. Four plane mirrors were joined to form a multiview reflective mirror set with a rectangular cross section. The mirror set was placed between a hyperspectral camera and the target biological tissue. For a single image acquisition task, a hyperspectral data cube with five views was obtained. The five-view hyperspectral image consisted of a main objective image and four reflective images. Three-dimensional (3-D) topography of the scene was achieved by correlating the matching pixels between the objective image and the reflective images. 3-D mapping of tissue oxygenation was achieved using a hyperspectral oxygenation algorithm. The multiview hyperspectral imaging technique was validated in a wound model, a tissue-simulating blood phantom, and in vivo biological tissue. The experimental results demonstrated the technical feasibility of using multiview hyperspectral imaging for 3-D topography of tissue functional properties.
Wiring up pre-characterized single-photon emitters by laser lithography
NASA Astrophysics Data System (ADS)
Shi, Q.; Sontheimer, B.; Nikolay, N.; Schell, A. W.; Fischer, J.; Naber, A.; Benson, O.; Wegener, M.
2016-08-01
Future quantum optical chips will likely be hybrid in nature and include many single-photon emitters, waveguides, filters, as well as single-photon detectors. Here, we introduce a scalable optical localization-selection-lithography procedure for wiring up a large number of single-photon emitters via polymeric photonic wire bonds in three dimensions. First, we localize and characterize nitrogen vacancies in nanodiamonds inside a solid photoresist exhibiting low background fluorescence. Next, without intermediate steps and using the same optical instrument, we perform aligned three-dimensional laser lithography. As a proof of concept, we design, fabricate, and characterize three-dimensional functional waveguide elements on an optical chip. Each element consists of one single-photon emitter centered in a crossed-arc waveguide configuration, allowing for integrated optical excitation and efficient background suppression at the same time.
Experimental and Computational Investigation of Triple-rotating Blades in a Mower Deck
NASA Astrophysics Data System (ADS)
Chon, Woochong; Amano, Ryoichi S.
Experimental and computational studies were performed on a 1.27 m wide, three-spindle lawn mower deck with a side-discharge arrangement. Laser Doppler velocimetry was used to measure the air velocity at 12 different sections under the mower deck. High-speed video camera tests provided valuable visual evidence of airflow and grass-discharge patterns. Strain gages were attached at several predetermined locations on the mower blades to measure strain. In the computational fluid dynamics work, two different trials were attempted. First, two-dimensional blade shapes at several arbitrary radial sections were selected for flow computations around the blade model. Finally, a three-dimensional full-deck model was developed and compared with the experimental results.
Diefenbach, Angela K.; Crider, Juliet G.; Schilling, Steve P.; Dzurisin, Daniel
2012-01-01
We describe a low-cost application of digital photogrammetry using commercially available photogrammetric software and oblique photographs taken with an off-the-shelf digital camera to create sequential digital elevation models (DEMs) of a lava dome that grew during the 2004–2008 eruption of Mount St. Helens (MSH) volcano. Renewed activity at MSH provided an opportunity to devise and test this method, because it could be validated against other observations of this well-monitored volcano. The datasets consist of oblique aerial photographs (snapshots) taken from a helicopter using a digital single-lens reflex camera. Twelve sets of overlapping digital images of the dome taken during 2004–2007 were used to produce DEMs and to calculate lava dome volumes and extrusion rates. Analyses of the digital images were carried out using photogrammetric software to produce three-dimensional coordinates of points identified in multiple photos. The evolving morphology of the dome was modeled by comparing successive DEMs. Results were validated by comparison to volume measurements derived from traditional vertical photogrammetric surveys by the US Geological Survey Cascades Volcano Observatory. Our technique was significantly less expensive and required less time than traditional vertical photogrammetric techniques; yet, it consistently yielded volume estimates within 5% of the traditional method. This technique provides an inexpensive, rapid assessment tool for tracking lava dome growth or other topographic changes at restless volcanoes.
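The dome-volume step in the workflow above, differencing successive DEMs, reduces to integrating the elevation change over the grid-cell area. A minimal sketch with a synthetic paraboloid dome standing in for the real data (actual DEMs would first need co-registration onto a common raster):

```python
import numpy as np

# Volume change between two gridded DEMs sharing the same raster:
# sum the elevation difference over every cell and multiply by cell area.
cell = 5.0                                   # grid spacing in meters
x, y = np.meshgrid(np.arange(-100, 100, cell), np.arange(-100, 100, cell))

dem_before = np.zeros_like(x)                # flat pre-eruption surface
r2 = x**2 + y**2
dem_after = np.maximum(0.0, 30.0 * (1.0 - r2 / 80.0**2))  # paraboloid dome

dv = np.sum(dem_after - dem_before) * cell**2   # extruded volume in m^3
v_true = np.pi * 80.0**2 * 30.0 / 2.0           # analytic paraboloid volume
```

Dividing such volume differences by the time between photo surveys gives the extrusion rates reported in the study; the grid-sum estimate converges to the analytic volume as the cell size shrinks.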
Three-dimensional single-mode nonlinear ablative Rayleigh-Taylor instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, R.; Betti, R.; Sanz, J.
The nonlinear evolution of the single-mode ablative Rayleigh-Taylor instability is studied in three dimensions. As the mode wavelength approaches the cutoff of the linear spectrum (short-wavelength modes), it is found that the three-dimensional (3D) terminal bubble velocity greatly exceeds both the two-dimensional (2D) value and the classical 3D bubble velocity. Unlike in 2D, the 3D short-wavelength bubble velocity does not saturate. The growing 3D bubble acceleration is driven by the unbounded accumulation of vorticity inside the bubble, vorticity that is transferred by mass ablation from the Rayleigh-Taylor spikes to the ablated plasma filling the bubble volume.
Turchini, John; Buckland, Michael E; Gill, Anthony J; Battye, Shane
2018-05-30
Context: Three-dimensional (3D) photogrammetry is a method of image-based modeling in which data points in digital images, taken from offset viewpoints, are analyzed to generate a 3D model. This modeling technique has been widely used in the context of geomorphology and artificial imagery, but has yet to be used within the realm of anatomic pathology. Objective: To describe the application of a 3D photogrammetry system capable of producing high-quality 3D digital models and its uses in routine surgical pathology practice as well as medical education. Design: We modeled specimens received in the 2 participating laboratories. The capture and photogrammetry process was automated using user control software, a digital single-lens reflex camera, and a digital turntable, to generate a 3D model with the output in a PDF file. Results: The entity demonstrated in each specimen was well demarcated and easily identified. Adjacent normal tissue could also be easily distinguished. Colors were preserved. The concave shapes of any cystic structures and normal convex rounded structures were discernable. Surgically important regions were identifiable. Conclusions: Macroscopic 3D modeling of specimens can be achieved through Structure-From-Motion photogrammetry technology and can be applied quickly and easily in routine laboratory practice. There are numerous advantages to the use of 3D photogrammetry in pathology, including improved clinicopathologic correlation for the surgeon and enhanced medical education, revolutionizing the digital pathology museum with virtual reality environments and 3D-printed specimen models.
Visual fatigue modeling for stereoscopic video shot based on camera motion
NASA Astrophysics Data System (ADS)
Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing
2014-11-01
As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and the comfortable viewing zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the camera and background are static; relative motion must be considered for other camera conditions, which determine different factor coefficients and weights. A novel visual fatigue prediction model is presented: the visual fatigue degree is predicted using the multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and a total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
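The regression step can be sketched as an ordinary least-squares fit of factor weights to subjective fatigue scores. The factor values and ratings below are invented for illustration and do not come from the paper:

```python
import numpy as np

# Invented per-shot factor scores: spatial structure, motion scale,
# comfort-zone violation (columns), for four stereoscopic shots (rows).
X = np.array([[0.2, 0.1, 0.0],
              [0.5, 0.4, 0.2],
              [0.8, 0.7, 0.6],
              [0.3, 0.9, 0.4]])
y = np.array([1.0, 2.1, 3.8, 2.6])  # invented subjective fatigue scores

# Fit weights plus an intercept by least squares.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predicted fatigue scores for the same shots.
pred = A @ w
print(pred.round(2))
```

With four shots and four parameters this tiny full-rank system is fit exactly; a real model would be fit on many more shots and validated against held-out subjective ratings.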
Cryptic iridescence in a fossil weevil generated by single diamond photonic crystals
McNamara, Maria E.; Saranathan, Vinod; Locatelli, Emma R.; Noh, Heeso; Briggs, Derek E. G.; Orr, Patrick J.; Cao, Hui
2014-01-01
Nature's most spectacular colours originate in integumentary tissue architectures that scatter light via nanoscale modulations of the refractive index. The most intricate biophotonic nanostructures are three-dimensional crystals with opal, single diamond or single gyroid lattices. Despite intense interest in their optical and structural properties, the evolution of such nanostructures is poorly understood, due in part to a lack of data from the fossil record. Here, we report preservation of single diamond (Fd-3m) three-dimensional photonic crystals in scales of a 735 000 year old specimen of the brown Nearctic weevil Hypera diversipunctata from Gold Run, Canada, and in extant conspecifics. The preserved red to green structural colours exhibit near-field brilliancy yet are inconspicuous from afar; they most likely had cryptic functions in substrate matching. The discovery of pristine fossil examples indicates that the fossil record is likely to yield further data on the evolution of three-dimensional photonic nanostructures and their biological functions. PMID:25185581
NASA Astrophysics Data System (ADS)
Ulibarrena, Manuel; Carretero, Luis; Acebal, Pablo; Madrigal, Roque; Blaya, Salvador; Fimia, Antonio
2004-09-01
Holographic techniques have been used to manufacture multiple-band one-dimensional, two-dimensional, and three-dimensional photonic crystals with different configurations, by multiplexing reflection and transmission setups on a single layer of holographic material. The recording material used for storage is an ultrafine-grain silver halide emulsion with an average grain size around 20 nm. The result is a set of photonic crystals whose one-dimensional, two-dimensional, and three-dimensional index modulation structures consist of silver halide particles embedded in the gelatin layer of the emulsion. The fabricated photonic crystals were characterised by measuring their transmission band structures, and the results were compared with theoretical calculations.
Ultrasound to video registration using a bi-plane transrectal probe with photoacoustic markers
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Kang, Hyun Jae; Zhang, Haichong K.; Taylor, Russell H.; Boctor, Emad M.
2016-03-01
Modern surgical scenarios typically provide surgeons with additional information through fusion of video and other imaging modalities. To provide this information, the tools and devices used in surgery must be registered together with interventional guidance equipment and surgical navigation systems. In this work, we focus explicitly on registering ultrasound with a stereo camera system using photoacoustic markers. Previous work has shown that photoacoustic markers can be used in this registration task to achieve target registration errors lower than those of currently available systems. Photoacoustic markers are defined as a set of non-collinear laser spots projected onto some surface. They can be simultaneously visualized by a stereo camera system and an ultrasound transducer because of the photoacoustic effect. In more recent work, the three-dimensional ultrasound volume was replaced by images from a single ultrasound image pose from a convex array transducer. The feasibility of this approach was demonstrated, but the accuracy was lacking due to the physical limitations of the convex array transducer. In this work, we propose the use of a bi-plane transrectal ultrasound transducer. The main advantage of this type of transducer is that the ultrasound elements are no longer restricted to a single plane. While this development would be limited to prostate applications, liver and kidney applications are also feasible if a suitable transducer is built. This work is demonstrated in two experiments, one without photoacoustic sources and one with. The resulting target registration errors for these experiments were 1.07 mm ± 0.35 mm and 1.27 mm ± 0.47 mm, respectively, both of which are better than currently available navigation systems.
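A point-set registration between the two coordinate frames, and its target registration error, can be sketched with the standard least-squares rigid alignment (the Kabsch algorithm). This is a generic illustration, not the authors' pipeline; the marker coordinates, rotation, and translation below are invented and noiseless:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Invented "photoacoustic marker" positions seen in both frames (mm).
markers_us = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                       [0.0, 8.0, 0.0], [0.0, 0.0, 6.0]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([5.0, 2.0, 1.0])
markers_cam = markers_us @ R_true.T + t_true

R, t = rigid_register(markers_us, markers_cam)

# Target registration error at a point NOT used in the fit.
target_us = np.array([3.0, 3.0, 2.0])
target_cam = R_true @ target_us + t_true
tre = np.linalg.norm((R @ target_us + t) - target_cam)
print(round(tre, 6))  # noiseless markers -> essentially zero error
```

In practice marker localization noise in both the ultrasound and camera frames makes the TRE nonzero, which is what the reported millimetre-level figures measure.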
Laser Measurement Of Convective-Heat-Transfer Coefficient
NASA Technical Reports Server (NTRS)
Porro, A. Robert; Hingst, Warren R.; Chriss, Randall M.; Seablom, Kirk D.; Keith, Theo G., Jr.
1994-01-01
The coefficient of convective heat transfer at a spot on the surface of a wind-tunnel model is computed from measurements acquired by a developmental laser-induced-heat-flux technique. The technique enables non-intrusive measurement of convective-heat-transfer coefficients at many points across the surfaces of models in complicated, three-dimensional, high-speed flows. The measurement spot is scanned across the surface of the model. The apparatus includes an argon-ion laser, an attenuator/beam splitter, an electronic shutter, an infrared camera, and supporting subsystems.
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In this system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measurement accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured into a calibration target; the concentric circles are used to determine the true projected centres of the circles. A calibration point generation procedure is then carried out with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after the application of the RAC method, so the real camera is represented by a hybrid of the pinhole model and the MLPNN. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach yields a highly accurate model of the structured light vision sensor.
Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks
NASA Astrophysics Data System (ADS)
Stoffer, P. W.; Phillips, E.; Messina, P.
2003-12-01
Anaglyphic stereo viewing technology emerged in the mid-1800s. Anaglyphs use offset images in contrasting colors (typically red and cyan) that, when viewed through color filters, produce a three-dimensional (3-D) image. Modern anaglyph imaging has become increasingly easy and relatively inexpensive with digital cameras, scanners, color printing, and common image-manipulation software. Perhaps the primary drawbacks of anaglyph images are visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques because they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red and cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited to most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black-and-white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras on a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software, we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
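The channel mixing behind a red/cyan anaglyph is a one-liner on image arrays: red from the left view, green and blue (together, cyan) from the right. The tiny synthetic frames below stand in for a co-registered stereo pair:

```python
import numpy as np

# Invented stand-ins for co-registered left/right RGB images (H x W x 3, uint8).
left = np.zeros((2, 2, 3), dtype=np.uint8)
right = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200    # left view supplies the red channel
right[..., 1] = 150   # right view supplies green...
right[..., 2] = 100   # ...and blue (together: cyan)

# Red/cyan anaglyph: red channel from the left eye, green+blue from the right.
anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]
anaglyph[..., 1:] = right[..., 1:]
print(anaglyph[0, 0])  # -> [200 150 100]
```

Real pairs need horizontal alignment (and often slight desaturation of strong reds) first, which is where the image-manipulation software mentioned above comes in.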
Effect of compression rate on ice VI crystal growth using dDAC
NASA Astrophysics Data System (ADS)
Lee, Yun-Hee; Kim, Yong-Jae; Lee, Sooheyong; Cho, Yong Chan; Lee, Geun Woo; Frontier in Extreme Physics Team
It is well known that static and dynamic pressurization give different results in many respects. Understanding crystal growth under these different pressure conditions is one of the crucial issues for the formation of materials in the Earth and planets. To probe crystal growth across pressure conditions, the compression rate must be controllable from static to dynamic pressurization. Here, we use a dynamic diamond anvil cell (dDAC) technique to study the effect of compression rate on ice VI crystal growth. Using a dDAC with a high-speed camera, we monitored the growth of a single ice VI crystal. A rounded ice crystal with a rough surface was selected at the phase boundary of water and ice VI, and its repetitive growth and melting were carried out by dynamic operation of the pressure cell. The roughened crystal showed an interesting growth transition with compression rate, from three-dimensional to two-dimensional growth, as well as a faceting process. We will discuss possible mechanisms of this compression-rate-driven growth change in terms of the diffusion of water. This research was supported by the Converging Research Center Program through the Ministry of Science, ICT and Future Planning, Korea (NRF-2014M1A7A1A01030128).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proudnikov, D.; Kirillov, E.; Chumakov, K.
2000-01-01
This paper describes the use of a new technology of hybridization with a micro-array of immobilized oligonucleotides for detection and quantification of neurovirulent mutants in Oral Poliovirus Vaccine (OPV). We used a micro-array consisting of three-dimensional gel elements containing all possible hexamers (a total of 4096 probes). Hybridization of fluorescently labelled viral cDNA samples with such microchips resulted in a pattern of spots that was registered and quantified by a computer-linked CCD camera, so that the sequence of the original cDNA could be deduced. The method could reliably identify single point mutations, since each of them affected the fluorescence intensity of 12 micro-array elements. Micro-array hybridization of DNA mixtures with varying contents of point mutants demonstrated that the method can detect as little as 10% of revertants in a population of vaccine virus. This new technology should be useful for quality control of live viral vaccines, as well as for other applications requiring identification and quantification of point mutations.
Dai, Meiling; Yang, Fujun; He, Xiaoyuan
2012-04-20
A simple but effective fringe projection profilometry method is proposed to measure 3D shape using a single snapshot of a color sinusoidal fringe pattern. A color fringe pattern encoding a sinusoidal fringe (as the red component) and a uniform-intensity pattern (as the blue component) is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components, and the red channel is divided by the blue channel to cancel the spatially varying reflected intensity. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used to correct the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
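A minimal sketch of the normalization and arcsine decoding on synthetic data, using a simplified image model (no DC offset, no projector nonlinearity, phase within the arcsine's principal range); all values are invented:

```python
import numpy as np

# Synthetic example: surface reflectance r modulates both channels.
x = np.linspace(0, np.pi / 3, 5)
phase_true = x                      # phase encodes the surface height
r = 0.5 + 0.1 * np.cos(x)           # spatially varying reflectance
red = r * np.sin(phase_true)        # sinusoidal fringe component
blue = r * 1.0                      # uniform-intensity component

# Dividing red by blue cancels the reflectance; arcsine recovers the phase.
ratio = np.clip(red / blue, -1.0, 1.0)
phase = np.arcsin(ratio)
print(np.allclose(phase, phase_true))
```

The binarized blue channel and phase unwrapping mentioned in the abstract handle the cases this toy model sidesteps: discontinuities and phases outside [-π/2, π/2].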
Applications Of Digital Image Acquisition In Anthropometry
NASA Astrophysics Data System (ADS)
Woolford, Barbara; Lewis, James L.
1981-10-01
Anthropometric data on reach and mobility have traditionally been collected by time-consuming and relatively inaccurate manual methods. Three-dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.
NASA Astrophysics Data System (ADS)
Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo
2015-04-01
Large-scale particle image velocimetry (LSPIV) is a technique, mostly used in rivers, for measuring two-dimensional velocities from high-resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located at a sediment trap close to the alluvial fan apex, one looking upstream and the other looking downward, more perpendicular to the flow. The third camera is in the next reach upstream of the sediment trap, at closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge after one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually thanks to the orthorectified images. During the monitoring program (running since 2011) we recorded three debris flow events at the sediment trap (each with very different surge dynamics). The camera in the gully came into operation in 2014 and recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.
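The core of image velocimetry can be sketched as cross-correlation between successive orthorectified frames: the correlation peak gives the dominant pixel displacement, which a ground sampling distance and frame interval convert to velocity. This is a toy single-patch version, not the Fudaa-LSPIV implementation, and the tracer pattern, 5 cm/px scale, and 10 fps rate are invented:

```python
import numpy as np

def lspiv_shift(f0, f1):
    """Dominant integer pixel displacement between two frames, found at the
    peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(f1) * np.conj(np.fft.fft2(f0))).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    iy, ix = int(iy), int(ix)
    h, w = corr.shape
    # Map wrap-around indices to signed shifts.
    return (iy if iy <= h // 2 else iy - h,
            ix if ix <= w // 2 else ix - w)

# Toy tracer pattern moved 1 px down and 2 px right between frames.
f0 = np.zeros((16, 16))
f0[4, 5] = 1.0
f0[9, 2] = 1.0
f1 = np.roll(f0, (1, 2), axis=(0, 1))

dy, dx = lspiv_shift(f0, f1)
gsd, dt = 0.05, 0.1   # invented: 5 cm per pixel, 10 frames per second
print((dy, dx), (dy * gsd / dt, dx * gsd / dt))  # shift (1, 2) -> 0.5, 1.0 m/s
```

A real LSPIV run repeats this over a grid of interrogation windows and adds subpixel peak fitting and outlier filtering.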
Single-leg squats can predict leg alignment in dancers performing ballet movements in "turnout".
Hopper, Luke S; Sato, Nahoko; Weidemann, Andries L
2016-01-01
The physical assessments used in dance injury surveillance programs are often adapted from the sports and exercise domain. Bespoke physical assessments may be required for dance, particularly when ballet movements involve "turning out" or external rotation of the legs beyond that typically used in sports. This study evaluated the ability of the traditional single-leg squat to predict the leg alignment of dancers performing ballet movements with turnout. Three-dimensional kinematic data of dancers performing the single-leg squat and five ballet movements were recorded and analyzed. Reduction of the three-dimensional data into a one-dimensional variable incorporating the ankle, knee, and hip joint center positions provided the strongest predictive model between the single-leg squat and the ballet movements. The single-leg squat can predict leg alignment in dancers performing ballet movements, even in "turned out" postures. Clinicians should pay careful attention to observational positioning and rating criteria when assessing dancers performing the single-leg squat.
A novel optical investigation technique for railroad track inspection and assessment
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Beale, Christopher H.; Niezrecki, Christopher
2017-04-01
Track failures due to cross-tie degradation or loss of ballast support may result in problems ranging from simple service interruptions to derailments. Structural Health Monitoring (SHM) of railway track is important for safety and to reduce downtime and maintenance costs. Current track inspection technologies are insufficient for assessing track health, so novel, cost-effective alternatives are needed. Advancements achieved in recent years in camera technology, optical sensors, and image-processing algorithms have made machine vision, Structure from Motion (SfM), and three-dimensional (3D) Digital Image Correlation (DIC) extremely appealing techniques for extracting structural deformations and geometry profiles. Optically based, non-contact measurement techniques may therefore be used to assess surface defects, rail and tie deflection profiles, and ballast condition. In this study, two camera-based measurement systems are proposed for crosstie-ballast condition assessment and track examination. The first consists of four pairs of cameras installed on the underside of a rail car to detect the induced deformation and displacement along the whole length of the track's cross ties using 3D DIC measurement techniques. The second consists of another set of cameras using SfM techniques to obtain a 3D rendering of the infrastructure from a series of two-dimensional (2D) images, to evaluate the state of the track qualitatively. The feasibility of the proposed optical systems is evaluated through extensive laboratory tests, demonstrating their ability to measure parameters of interest (e.g. a crosstie's full-field displacement, vertical deflection, shape, etc.) for assessment and SHM of railroad track.
Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong
2015-02-01
The purpose of this study was to establish a depth-control method for enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser, with the additive-pulse layer, n, set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, or 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities, with the additive-pulse layer and single-step size set to the corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step, and used to determine the minimum-difference values of the additive-pulse layer and the single-step size. When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error reached a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.
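The n-to-d calibration and the step scheduling it enables can be sketched as a least-squares fit. The (n, d) pairs below are invented, chosen only to mimic a roughly linear depth-per-layer relation; they are not the study's measurements:

```python
import numpy as np

# Invented calibration pairs: ablation depth d (um) after n additive-pulse layers.
n = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
d = np.array([45.0, 91.0, 134.0, 181.0, 224.0, 270.0])

# Fit the quantitative relation d(n) = a*n + b by least squares.
a, b = np.polyfit(n, d, 1)

# Depth removed per focal-plane step if we step after every n0 = 5 layers,
# and the number of steps needed for a 450 um deep cavity.
n0 = 5
step_depth = a * n0 + b
n_steps = round(450.0 / step_depth)
print(round(step_depth, 1), n_steps)
```

With these invented numbers the fit gives roughly 45 μm per 5-layer step, so ten focal-plane steps reach the 450 μm target depth.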
An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble
Liu, Yuan; Yuan, Baohong
2013-01-01
As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size (a few microns in diameter) and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera was developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation at much lower cost than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both the lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677
Spatial EPR entanglement in atomic vapor quantum memory
NASA Astrophysics Data System (ADS)
Parniak, Michal; Dabrowski, Michal; Wasilewski, Wojciech
Spatially structured quantum states of light are starting to play a key role in modern quantum science with the rapid development of single-photon-sensitive cameras. In particular, the spatial degree of freedom holds promise for enhancing continuous-variable quantum memories. Here we present the first demonstration of spatial entanglement between an atomic spin-wave and a photon measured with an I-sCMOS camera. The system is realized in a warm atomic vapor quantum memory based on rubidium atoms immersed in an inert buffer gas. In the experiment we create and characterize a 12-dimensional entangled state exhibiting quantum correlations between a photon and an atomic ensemble in the position and momentum bases. This state allows us to demonstrate the Einstein-Podolsky-Rosen paradox in its original version, with an unprecedented delay time of 6 μs between the generation of entanglement and the detection of the atomic state.
NASA Astrophysics Data System (ADS)
Reinhart, Anna Merle; Spindeldreier, Claudia Katharina; Jakubek, Jan; Martišíková, Mária
2017-06-01
Carbon ion beam radiotherapy enables a very localised dose deposition. However, even small changes in the patient geometry or positioning errors can significantly distort the dose distribution. A live, non-invasive monitoring system of the beam delivery within the patient is therefore highly desirable, and could improve patient treatment. We present a novel three-dimensional method for imaging the beam in the irradiated object, exploiting the measured tracks of single secondary ions emerging under irradiation. The secondary particle tracks are detected with a TimePix stack, a set of parallel pixelated semiconductor detectors. We developed a three-dimensional reconstruction algorithm based on maximum likelihood expectation maximization. We demonstrate the applicability of the new method in the irradiation of a cylindrical PMMA phantom of human head size with a carbon ion pencil beam of 226 MeV u⁻¹. The beam image in the phantom is reconstructed from a set of nine discrete detector positions between -80° and 50° from the beam axis. Furthermore, we demonstrate the potential to visualize inhomogeneities by irradiating a PMMA phantom with an air gap as well as bone and adipose tissue surrogate inserts. We successfully reconstructed a three-dimensional image of the treatment beam in the phantom from single secondary ion tracks. The beam image corresponds well to the beam direction and energy. In addition, cylindrical inhomogeneities with a diameter of 2.85 cm and density differences down to 0.3 g cm⁻³ relative to the surrounding material are clearly visualized. This novel three-dimensional method to image a therapeutic carbon ion beam in the irradiated object does not interfere with the treatment and requires knowledge only of single secondary ion tracks. Even with detectors with only a small angular coverage, the three-dimensional reconstruction of the fragmentation points presented in this work was found to be feasible.
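The abstract names maximum likelihood expectation maximization as its reconstruction algorithm. A generic MLEM iteration, on a deliberately tiny invented system matrix with noiseless data (not the authors' detector geometry), looks like:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM for the Poisson model y ~ Poisson(A @ x): multiplicative updates
    that keep x nonnegative and increase the likelihood each iteration."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)          # sensitivity image (back-projection of ones)
    for _ in range(n_iter):
        proj = A @ x              # forward projection of current estimate
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens  # back-project the measurement ratio
    return x

# Toy system: 4 "detector lines of response" viewing 3 voxels (invented).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 0.5, 1.0])
y = A @ x_true                    # noiseless measurements

x = mlem(A, y)
print(x.round(3))
```

In the paper's setting x would be the 3D fragmentation-point density and each row of A a measured secondary-ion track; the sketch only shows the update rule itself.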
Reinhart, Anna Merle; Spindeldreier, Claudia Katharina; Jakubek, Jan; Martišíková, Mária
2017-06-21
Carbon ion beam radiotherapy enables a very localised dose deposition. However, even small changes in the patient geometry or positioning errors can significantly distort the dose distribution. A live, non-invasive monitoring system of the beam delivery within the patient is therefore highly desirable, and could improve patient treatment. We present a novel three-dimensional method for imaging the beam in the irradiated object, exploiting the measured tracks of single secondary ions emerging under irradiation. The secondary particle tracks are detected with a TimePix stack, a set of parallel pixelated semiconductor detectors. We developed a three-dimensional reconstruction algorithm based on maximum likelihood expectation maximization. We demonstrate the applicability of the new method in the irradiation of a cylindrical PMMA phantom of human head size with a carbon ion pencil beam of 226 MeV u-1. The beam image in the phantom is reconstructed from a set of nine discrete detector positions between -80° and 50° from the beam axis. Furthermore, we demonstrate the potential to visualize inhomogeneities by irradiating a PMMA phantom with an air gap as well as bone and adipose tissue surrogate inserts. We successfully reconstructed a three-dimensional image of the treatment beam in the phantom from single secondary ion tracks. The beam image corresponds well to the beam direction and energy. In addition, cylindrical inhomogeneities with a diameter of 2.85 cm and density differences down to 0.3 g cm-3 to the surrounding material are clearly visualized. This novel three-dimensional method to image a therapeutic carbon ion beam in the irradiated object does not interfere with the treatment and requires knowledge only of single secondary ion tracks. Even with detectors with only a small angular coverage, the three-dimensional reconstruction of the fragmentation points presented in this work was found to be feasible.
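The maximum likelihood expectation maximization (MLEM) algorithm mentioned above has a compact multiplicative update. As a hedged illustration only (the paper's system matrix encodes secondary-ion track geometry; the matrix and data below are random stand-ins), a minimal sketch:

```python
import numpy as np

# Minimal MLEM sketch for an emission-type reconstruction problem.
# A maps image voxels to detector measurements; here it is random.
rng = np.random.default_rng(0)
A = rng.random((40, 10))          # hypothetical system matrix
x_true = rng.random(10) + 0.5     # hypothetical source distribution
y = A @ x_true                    # noiseless measurements

x = np.ones(10)                   # flat initial estimate
sens = A.T @ np.ones(40)          # sensitivity image (column sums of A)
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative MLEM update

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The update preserves non-negativity automatically, which is one reason MLEM is popular for emission imaging.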
NASA Astrophysics Data System (ADS)
Diefenbach, A. K.; Crider, J. G.; Schilling, S. P.; Dzurisin, D.
2007-12-01
We describe a low-cost application of digital photogrammetry using commercial-grade software, an off-the-shelf digital camera, a laptop computer and oblique photographs to reconstruct volcanic dome morphology during the ongoing eruption at Mount St. Helens, Washington. Renewed activity at Mount St. Helens provides a rare opportunity to devise and test new methods for better understanding and predicting volcanic events, because the new method can be validated against other observations on this well-instrumented volcano. Uncalibrated, oblique aerial photographs (snapshots) taken from a helicopter are the raw data. Twelve sets of overlapping digital images of the dome taken during 2004-2007 were used to produce digital elevation models (DEMs) from which dome height, eruption volume and extrusion rate can be derived. Analyses of the digital images were carried out using PhotoModeler software, which produces three-dimensional coordinates of points identified in multiple photos. The steps involved include: (1) calibrating the digital camera using this software package, (2) establishing control points derived from existing DEMs, (3) identifying tie points located in each photo of any given model date, and (4) identifying points in pairs of photos to build a three-dimensional model of the evolving dome at each photo date. Text files of three-dimensional points encompassing the dome at each date were imported into ArcGIS, and three-dimensional models (triangulated irregular networks, or TINs) were generated. TINs were then converted to 2 m raster DEMs. The evolving morphology of the growing dome was modeled by comparison of successive DEMs. The volume of extruded lava visible in each DEM was calculated using the 1986 pre-eruption crater floor topography as a basal surface. Results were validated by comparison with volume measurements derived from traditional aerophotogrammetric surveys run by the USGS Cascades Volcano Observatory. Our new "quick and cheap" technique yields estimates of eruptive volume consistently within 5% of the volumes estimated with traditional surveys. The end result of this project is a new technique that provides an inexpensive, rapid assessment tool for tracking lava dome growth or other topographic changes at restless volcanoes.
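The DEM-based volume estimate described above reduces to summing per-cell thickness over the raster. A minimal sketch with a synthetic dome (the grid, basal surface and dome shape are made up; only the 2 m cell size follows the abstract):

```python
import numpy as np

# Sketch: extruded volume from a DEM and a basal surface by
# cell-by-cell differencing of two elevation grids.
cell = 2.0                                   # raster cell size, m
base = np.zeros((50, 50))                    # pre-eruption basal surface
yy, xx = np.mgrid[0:50, 0:50]
dome = 30.0 - 0.05 * ((xx - 25)**2 + (yy - 25)**2)
dem = base + np.maximum(dome, 0.0)           # synthetic dome on the floor

thickness = np.clip(dem - base, 0.0, None)   # ignore cells below the base
volume = thickness.sum() * cell * cell       # m^3
```

Successive DEMs differenced the same way give the extrusion rate when divided by the time between surveys.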
2007-09-01
the projective camera matrix (P), a 3x4 matrix that represents both the intrinsic and extrinsic parameters of a camera. It is used to... K contains the intrinsic parameters of the camera and [R | t] represents the extrinsic parameters of the camera. By definition, the extrinsic ... extrinsic parameters are known, then the camera is said to be calibrated. If only the intrinsic parameters are known, then the projective camera can
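The decomposition discussed in the fragment above, P = K[R | t], can be sketched numerically; the intrinsics and extrinsics below are arbitrary illustrative values, not from any real calibration:

```python
import numpy as np

# Build a projective camera matrix P = K [R | t] and project a point.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])        # intrinsic parameters
R = np.eye(3)                          # extrinsic rotation
t = np.array([[0.0], [0.0], [5.0]])    # extrinsic translation
P = K @ np.hstack([R, t])              # 3x4 projective camera matrix

X = np.array([1.0, 0.5, 5.0, 1.0])     # homogeneous world point
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]        # pixel coordinates after dehomogenising
```

With both K and [R | t] known, the camera is calibrated in the sense used above and P is fully determined.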
NASA Astrophysics Data System (ADS)
Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart
2016-04-01
Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement on a microscopic level has not been possible so far. This would allow better individualized treatment planning and improve the design of manufactured devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a large number of different anatomic shapes and the corresponding effect of different devices, would require a fast and reliable method at low cost with high-throughput assessment. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. On-line flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences. Averaging across a sequence of single, double or triple shots of flashed images enables reconstruction of the real-time corpuscular flow through the vessel system before and after device placement. This approach could enable 3D insight into microscopic flow within blood vessels and aneurysms at submillimeter resolution. We present an approach that allows real-time assessment of 3D particle flow by high-speed light field image analysis, including a solution that addresses the high computational load of the image processing. The imaging set-up accomplishes fast and reliable PIV analysis in transparent 3D models of brain aneurysms at low cost. High-throughput microscopic flow assessment of different shapes of brain aneurysms may therefore become possible, as required for patient-specific device designs.
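The core PIV step, estimating displacement from the cross-correlation peak between two interrogation windows, can be sketched as follows (synthetic images with a known shift; production PIV adds windowing, sub-pixel peak fitting and outlier rejection):

```python
import numpy as np

# Minimal PIV-style displacement estimate: cross-correlate two
# interrogation windows and locate the correlation peak.
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
dx, dy = 3, 5                                   # known particle shift
frame_b = np.roll(np.roll(frame_a, dy, axis=0), dx, axis=1)

# FFT-based circular cross-correlation of the mean-removed windows
fa = np.fft.fft2(frame_a - frame_a.mean())
fb = np.fft.fft2(frame_b - frame_b.mean())
corr = np.real(np.fft.ifft2(np.conj(fa) * fb))
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift_y, shift_x = peak[0], peak[1]             # recovered displacement
```

Repeating this over a grid of interrogation windows yields the velocity field once shifts are divided by the inter-frame time.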
Holocinematographic velocimeter for measuring time-dependent, three-dimensional flows
NASA Technical Reports Server (NTRS)
Beeler, George B.; Weinstein, Leonard M.
1987-01-01
Two simultaneous, orthogonal-axis holographic movies are made of tracer particles in a low-speed water tunnel to determine the time-dependent, three-dimensional velocity field. This instrument is called a Holocinematographic Velocimeter (HCV). The holographic movies are reduced to the velocity field with an automatic data reduction system. This permits the reduction of large numbers of holograms (time steps) in a reasonable amount of time. The current version of the HCV, built for proof-of-concept tests, uses low frame-rate holographic cameras and a prototype of a new type of water tunnel. This water tunnel is a unique low-disturbance facility which has minimal wall effects on the flow. This paper presents the first flow field examined by the HCV, the two-dimensional von Karman vortex street downstream of an unswept circular cylinder. Key factors in the HCV are flow speed, spatial and temporal resolution required, measurement volume, film transport speed, and laser pulse length. The interactions between these factors are discussed.
Target-Tracking Camera for a Metrology System
NASA Technical Reports Server (NTRS)
Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David
2009-01-01
An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrology system is used to determine the varying relative positions of radiating elements of an airborne synthetic-aperture radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
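For an idealised one-dimensional PSD, the spot position follows directly from the two end currents, which is why no pixel readout or centroiding computation is needed. A sketch with made-up currents:

```python
# Spot position on an ideal linear one-dimensional PSD from its two
# end-contact photocurrents (all values are illustrative).
def psd_position(i1, i2, length):
    """Return spot position relative to the PSD centre,
    in the same units as length; positive toward contact 2."""
    return 0.5 * length * (i2 - i1) / (i1 + i2)

# A spot drawing currents 3 mA and 5 mA on a 10 mm PSD:
pos = psd_position(i1=3.0, i2=5.0, length=10.0)
```

In the analog camera this ratio can be formed directly by analog circuitry, which is what permits update rates of hundreds of hertz.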
NASA Astrophysics Data System (ADS)
Kajiwara, K.; Shobu, T.; Toyokawa, H.; Sato, M.
2014-04-01
A technique for three-dimensional visualization of grain boundaries was developed at BL28B2 at SPring-8. The technique uses white X-ray microbeam diffraction and a rotating slit. Three-dimensional images of small silicon single crystals filled in a plastic tube were successfully obtained using this technique for demonstration purposes. The images were consistent with those obtained by X-ray computed tomography.
Applying UV cameras for SO2 detection to distant or optically thick volcanic plumes
Kern, Christoph; Werner, Cynthia; Elias, Tamar; Sutton, A. Jeff; Lübcke, Peter
2013-01-01
Ultraviolet (UV) camera systems represent an exciting new technology for measuring two dimensional sulfur dioxide (SO2) distributions in volcanic plumes. The high frame rate of the cameras allows the retrieval of SO2 emission rates at time scales of 1 Hz or higher, thus allowing the investigation of high-frequency signals and making integrated and comparative studies with other high-data-rate volcano monitoring techniques possible. One drawback of the technique, however, is the limited spectral information recorded by the imaging systems. Here, a framework for simulating the sensitivity of UV cameras to various SO2 distributions is introduced. Both the wavelength-dependent transmittance of the optical imaging system and the radiative transfer in the atmosphere are modeled. The framework is then applied to study the behavior of different optical setups and used to simulate the response of these instruments to volcanic plumes containing varying SO2 and aerosol abundances located at various distances from the sensor. Results show that UV radiative transfer in and around distant and/or optically thick plumes typically leads to a lower sensitivity to SO2 than expected when assuming a standard Beer–Lambert absorption model. Furthermore, camera response is often non-linear in SO2 and dependent on distance to the plume and plume aerosol optical thickness and single scatter albedo. The model results are compared with camera measurements made at Kilauea Volcano (Hawaii) and a method for integrating moderate resolution differential optical absorption spectroscopy data with UV imagery to retrieve improved SO2 column densities is discussed.
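The standard Beer-Lambert retrieval, which the simulations show breaking down for distant or optically thick plumes, can be sketched as follows (the cross section and intensities are illustrative, not instrument values):

```python
import numpy as np

# Idealised Beer-Lambert SO2 column density retrieval:
#   I = I0 * exp(-sigma * N)  =>  N = ln(I0 / I) / sigma
sigma = 1.0e-19          # effective absorption cross section, cm^2
I0 = 1000.0              # plume-free (background) intensity
I = 700.0                # intensity measured through the plume
N = np.log(I0 / I) / sigma   # column density, molecules cm^-2
```

The abstract's point is that multiple scattering in and around the plume makes the true camera response deviate from this single-exponential model, so N retrieved this way underestimates the real column for distant or optically thick plumes.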
Shin, Jong Won; Jeong, Ah Rim; Jeoung, Sungeun; Moon, Hoi Ri; Komatsumaru, Yuki; Hayami, Shinya; Moon, Dohyun; Min, Kil Sik
2018-04-24
We report a three-dimensional Fe(II) porous coordination polymer that exhibits a spin crossover temperature change following CO2 sorption (though not N2 sorption). Furthermore, single crystals of the desolvated polymer with CO2 molecules at three different temperatures were characterised by X-ray crystallography.
NASA Astrophysics Data System (ADS)
Dubey, Vishesh; Singh, Veena; Ahmad, Azeem; Singh, Gyanendra; Mehta, Dalip Singh
2016-03-01
We report white light phase shifting interferometry in conjunction with color fringe analysis for the detection of contaminants in water, such as Escherichia coli (E. coli), Campylobacter coli and Bacillus cereus. The experimental setup is based on a common-path interferometer using a Mirau interferometric objective lens. White light interferograms are recorded using a 3-chip color CCD camera based on prism technology. The 3-chip color camera has less color crosstalk and better spatial resolution than a single-chip CCD camera. A piezo-electric transducer (PZT) phase shifter is fixed to the Mirau objective, and both are attached to a conventional microscope. Five phase-shifted white light interferograms are recorded by the 3-chip color CCD camera, and each phase-shifted interferogram is decomposed into its red, green and blue constituent colors, thus yielding three sets of five phase-shifted interferograms for three different colors from a single set of white light interferograms. This makes the system less time consuming and less sensitive to the surrounding environment. First, 3D phase maps of the bacteria are reconstructed for the red, green and blue wavelengths from these interferograms using MATLAB; from these phase maps we determine the refractive index (RI) of the bacteria. Experimental results of 3D shape measurement and RI at multiple wavelengths will be presented. These results might find applications for detection of contaminants in water without using any chemical processing or fluorescent dyes.
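One common five-step phase retrieval is the Hariharan algorithm, which assumes pi/2 phase steps; the abstract does not specify which five-step formula is used, so this is a representative sketch per colour channel with a synthetic fringe:

```python
import numpy as np

# Five-step (Hariharan) phase retrieval from pi/2-shifted interferograms:
#   phi = atan2( 2*(I2 - I4), 2*I3 - I1 - I5 )
phi_true = 1.2                         # unknown phase to recover
A, B = 5.0, 3.0                        # background and fringe modulation
steps = np.array([-2, -1, 0, 1, 2]) * np.pi / 2
I = A + B * np.cos(phi_true + steps)   # the five recorded intensities I1..I5

phi = np.arctan2(2.0 * (I[1] - I[3]), 2.0 * I[2] - I[0] - I[4])
```

Applied pixel-by-pixel to each colour channel, this yields the wrapped phase map from which height and refractive index are derived after unwrapping.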
NASA Technical Reports Server (NTRS)
Rhodes, M. D.; Selberg, B. P.
1982-01-01
An investigation was performed to compare closely coupled dual-wing and swept-forward/swept-rearward wing aircraft with corresponding single-wing 'baseline' designs, to judge the advantages offered by aircraft designed with multiple wing systems. The optimum geometry used in the multiple-wing designs was determined in an analytic study which investigated the two- and three-dimensional aerodynamic behavior of a wide range of multiple-wing configurations in order to find the wing geometry that created the minimum cruise drag. This analysis used a multi-element inviscid vortex panel program coupled to a momentum-integral boundary layer analysis program to account for the aerodynamic coupling between the wings and to provide the two-dimensional aerodynamic data, which were then used as input for a three-dimensional vortex lattice program that calculated the three-dimensional aerodynamic data. The low drag of the multiple-wing configurations is due to a combination of two-dimensional drag reductions, tailoring of the three-dimensional drag for the swept-forward/swept-rearward design, and the structural advantages of the two wings, whose structural connections permitted higher aspect ratios.
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process by which a NAO robot positions and moves to a target red ball through its camera system is analyzed and improved using a dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and experimented with. Since the existing error of the current NAO robot is not a single variable, the experiments were divided into two parts to obtain more accurate single camera ranging data: forward ranging and backward ranging. Moreover, two USB cameras were used in our experiments, with the Hough circle method used to identify the ball and the HSV color space model used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of the error in ball tracking from 0.68 to 0.20.
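For reference, the standard dual-camera ranging relation for parallel cameras is z = f*b/d; the abstract does not give the robot's calibration, so the numbers below are purely illustrative:

```python
# Stereo depth from similar triangles for two parallel cameras:
#   z = f * b / d
# f: focal length in pixels, b: baseline between cameras in metres,
# d: horizontal disparity of the target between the two images in pixels.
def stereo_depth(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

z = stereo_depth(f_px=700.0, baseline_m=0.12, disparity_px=42.0)  # metres
```

Because depth error grows roughly with z^2/(f*b), a wider baseline or a longer focal length reduces the ranging variance, which is consistent with the improvement reported over single-camera ranging.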
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a three-dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a two-dimensional (2D) depth map, allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment demonstrated the effectiveness of this assistive technology.
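The SAD correspondence search described above can be sketched for a single pixel as follows (toy images with a known shift; the real system scans every pixel and uses calibrated, rectified cameras):

```python
import numpy as np

# Sum of Absolute Differences (SAD) block matching for one pixel:
# slide a block from the left image along the same row of the right
# image and keep the disparity with the smallest SAD.
rng = np.random.default_rng(2)
left = rng.random((20, 60))
true_d = 7
right = np.roll(left, -true_d, axis=1)    # scene shifted left by 7 px

y, x, half = 10, 30, 4                    # pixel of interest, block half-size
block = left[y - half:y + half + 1, x - half:x + half + 1]
sads = []
for d in range(0, 16):                    # candidate disparities
    cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
    sads.append(np.abs(block - cand).sum())
best_d = int(np.argmin(sads))             # disparity at the SAD minimum
```

The resulting per-pixel disparities form the disparity image from which depth is recovered by geometric projection.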
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
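One plausible form of the "simple analytic geometry" mentioned above (the paper's exact procedure may differ) is to back-project the touched pixel through the calibrated camera and intersect the ray with the ground plane; all camera numbers below are made up for illustration:

```python
import numpy as np

# Back-project a clicked pixel through a calibrated pinhole camera and
# intersect the viewing ray with the ground plane z = 0 (world frame:
# z up; the camera sits 0.3 m above the floor looking along +y).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
cam_pos = np.array([0.0, 0.0, 0.3])
R = np.array([[1.0, 0.0, 0.0],     # world-to-camera rotation:
              [0.0, 0.0, -1.0],    # image y points down (-z world),
              [0.0, 1.0, 0.0]])    # optical axis along +y world

def pixel_to_ground(u, v):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam            # rotate ray into the world frame
    s = -cam_pos[2] / ray_world[2]       # scale to reach the plane z = 0
    return cam_pos + s * ray_world

p = pixel_to_ground(320.0, 340.0)        # a pixel below the image centre
```

This closed-form intersection avoids any iterative reconstruction, which matches the abstract's claim of estimating 3D coordinates without a complicated algorithm.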
Quasi-three-dimensional particle imaging with digital holography.
Kemppinen, Osku; Heinson, Yuli; Berg, Matthew
2017-05-01
In this work, approximate three-dimensional structures of microparticles are generated with digital holography using an automated focus method. This is done by stacking a collection of silhouette-like images of a particle reconstructed from a single in-line hologram. The method enables estimation of the particle size in the longitudinal and transverse dimensions. Using the discrete dipole approximation, the method is tested computationally by simulating holograms for a variety of particles and attempting to reconstruct the known three-dimensional structure. It is found that poor longitudinal resolution strongly perturbs the reconstructed structure, yet the method does provide an approximate sense for the structure's longitudinal dimension. The method is then applied to laboratory measurements of holograms of single microparticles and their scattering patterns.
Two-dimensional and three-dimensional evaluation of the deformation relief
NASA Astrophysics Data System (ADS)
Alfyorova, E. A.; Lychagin, D. V.
2017-12-01
This work presents experimental results concerning the morphology of face-centered cubic single crystal surfaces after compression deformation. Our aim is to identify the mechanism forming a quasiperiodic profile in single crystals of different crystallographic orientations and to quantitatively describe the deformation structures. A set of modern methods, such as optical and confocal microscopy, is applied to determine the surface morphology parameters. The results show that octahedral slip is an integral part of the formation of the quasiperiodic surface profile, starting from the initial strain. The similarity of the surface profile formation process at different scale levels is demonstrated. The size of the consistent deformation regions is found: 45 µm for slip lines ([001]-single crystal) and 30 µm for mesobands ([110]-single crystal). The possibility of using two- and three-dimensional roughness parameters to describe the deformation structures is shown.
Image-based path planning for automated virtual colonoscopy navigation
NASA Astrophysics Data System (ADS)
Hong, Wei
2008-03-01
Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening that reconstructs three-dimensional models of the colon using computed tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during fly-through navigation. Moreover, because of the efficiency of our path planning and rendering algorithms, our VC fly-through navigation system can still guarantee 30 FPS.
The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars
NASA Astrophysics Data System (ADS)
Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.
2014-04-01
The HRSC Experiment: Imagery is the major source of our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; and it significantly supports: (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding, as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery especially characterizes landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between the highest-resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-01-01
Objective: At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI control of a robotic quadcopter in three-dimensional physical space using noninvasive scalp EEG in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operation of a real-world device has on subjects' control, with comparison to a two-dimensional virtual cursor task. Approach: Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a three-dimensional physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance: Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in three-dimensional physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems to accomplish complex control in three-dimensional physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive brain-computer interface control in a physical environment using telepresence robotics. PMID:23735712
NASA Astrophysics Data System (ADS)
Selmke, Markus; Selmke, Sarah
2018-04-01
We describe a three-dimensional (3D) rainbow demonstration experiment. Its key idea is to convey a particular aspect of the natural phenomenon, namely, the origin of the perceived rainbow being multiple individual glints from within a rainshower. Raindrops in this demonstration are represented by acrylic spheres arranged on pillars within a cubic volume. Defocused imaging with a camera or the eye reveals a mosaic rainbow (segment) when viewed and illuminated in the appropriate fashion.
Multi-camera real-time three-dimensional tracking of multiple flying animals
Straw, Andrew D.; Branson, Kristin; Neumann, Titus R.; Dickinson, Michael H.
2011-01-01
Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data. The additional capability of tracking in real time—with minimal latency—opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behaviour. Here, we describe a system capable of tracking the three-dimensional position and body orientation of animals such as flies and birds. The system operates with less than 40 ms latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the extended Kalman filter and the nearest neighbour standard filter data association algorithm. In one implementation, an 11-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behaviour of freely flying animals. If combined with other techniques, such as ‘virtual reality’-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals. PMID:20630879
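The tracker above builds on the Kalman filter predict/update cycle. A minimal constant-velocity filter in one dimension is sketched below (the paper uses an extended Kalman filter with a full 3D state plus data association across cameras; these matrices and noise values are illustrative):

```python
import numpy as np

# Constant-velocity Kalman filter tracking a 1D position at 60 Hz.
dt = 1.0 / 60.0                            # frame interval, 60 fps
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])                 # observe position only
Q = 1e-4 * np.eye(2)                       # process noise covariance
R = np.array([[1e-2]])                     # measurement noise covariance

x = np.array([0.0, 0.0])                   # state: [position, velocity]
P = np.eye(2)                              # state covariance
for k in range(1, 121):                    # target moving at 0.5 units/s
    z = 0.5 * k * dt                       # (noiseless) position measurement
    x = F @ x                              # --- predict ---
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # --- update ---
    K_gain = P @ H.T @ np.linalg.inv(S)
    x = x + K_gain @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K_gain @ H) @ P
```

In the multi-target setting, each tracked animal carries its own state and covariance, and the nearest-neighbour data association step decides which detection updates which filter.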
NASA Astrophysics Data System (ADS)
Yamamoto, Naoyuki; Saito, Tsubasa; Ogawa, Satoru; Ishimaru, Ichiro
2016-05-01
We developed a palm-sized (optical unit: 73 mm × 102 mm × 66 mm), lightweight (total weight with electrical controller: 1.7 kg) middle-infrared (wavelength range: 8-14 μm) 2-dimensional spectroscopic imager for UAVs (unmanned aerial vehicles) such as drones. We successfully demonstrated flights with the developed hyperspectral camera mounted on a multi-copter (drone) on 15 Sep. 2015 in Kagawa prefecture, Japan. We had previously proposed 2-dimensional imaging-type Fourier spectroscopy as a near-common-path temporal phase-shift interferometer. We install a variable phase shifter on the optical Fourier transform plane of an infinity-corrected imaging optical system. The variable phase shifter is configured with a movable mirror and a fixed mirror. The movable mirror is actuated by an impact-drive piezoelectric device (stroke: 4.5 mm, resolution: 0.01 μm, maker: Technohands Co., Ltd., type: XDT50-45, price: around 1,000 USD). We realized wavefront-division, near-common-path interferometry with strong robustness against mechanical vibrations, so the palm-sized Fourier spectrometer could be built without any anti-vibration system. We were also able to utilize a small and low-cost middle-infrared camera, an uncooled VOx microbolometer array (pixel array: 336 × 256, pixel pitch: 17 μm, frame rate: 60 Hz, maker: FLIR, type: Quark 336, price: around 5,000 USD). This apparatus can be operated by a single-board computer (Raspberry Pi). Thus, the total cost was less than 10,000 USD. We joined the KAMOME-PJ (Kanagawa Advanced MOdule for Material Evaluation Project) with DRONE FACTORY Corp., KUUSATSU Corp. and Fuji Imvac Inc., and we successfully obtained middle-infrared spectroscopic imaging with a multi-copter drone.
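The Fourier-spectroscopy principle underlying the instrument, recovering a spectrum by Fourier-transforming the interferogram recorded as the movable mirror scans, can be sketched with a single synthetic wavenumber (all numbers are arbitrary illustrative units):

```python
import numpy as np

# Toy Fourier-transform spectroscopy: an interferogram sampled over
# optical path difference (OPD) is Fourier-transformed to a spectrum.
n = 512
opd = np.arange(n) * 0.01               # OPD samples (arbitrary units)
k0 = 12.5                               # synthetic wavenumber, cycles/unit
interferogram = 1.0 + np.cos(2.0 * np.pi * k0 * opd)

spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
freqs = np.fft.rfftfreq(n, d=0.01)      # wavenumber axis of the spectrum
peak_k = freqs[np.argmax(spectrum)]     # recovered wavenumber
```

In the imaging instrument this transform is applied per pixel, turning the stack of phase-shifted frames into a hyperspectral data cube.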
Three-dimensional reconstruction of the fast-start swimming kinematics of densely schooling fish
Paley, Derek A.
2012-01-01
Information transmission via non-verbal cues such as a fright response can be quantified in a fish school by reconstructing individual fish motion in three dimensions. In this paper, we describe an automated tracking framework to reconstruct the full-body trajectories of densely schooling fish using two-dimensional silhouettes in multiple cameras. We model the shape of each fish as a series of elliptical cross sections along a flexible midline. We estimate the size of each ellipse using an iterated extended Kalman filter. The shape model is used in a model-based tracking framework in which simulated annealing is applied at each step to estimate the midline. Results are presented for eight fish with occlusions. The tracking system is currently being used to investigate fast-start behaviour of schooling fish in response to looming stimuli. PMID:21642367
3D thermography in non-destructive testing of composite structures
NASA Astrophysics Data System (ADS)
Hellstein, Piotr; Szwedo, Mariusz
2016-12-01
The combination of 3D scanners and infrared cameras has led to the introduction of 3D thermography. Such analysis produces results in the form of three-dimensional thermograms, in which temperatures are mapped onto a 3D model reconstruction of the inspected object. Work in the field of 3D thermography has so far focused on its utility in passive thermography inspections. The authors propose a new real-time 3D temperature mapping method which, for the first time, can be applied to active thermography analyses. All steps required to utilise 3D thermography are discussed, starting from acquisition of three-dimensional and infrared data, going through image processing and scene reconstruction, and finishing with thermal projection and ray-tracing visualisation techniques. The application of the developed method was tested during diagnosis of several industrial composite structures: boats, planes and wind turbine blades.
An Interactive Augmented Reality Implementation of Hijaiyah Alphabet for Children Education
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Akbar, F.; Syahputra, M. F.; Budiman, M. A.; Hizriadi, A.
2018-03-01
The Hijaiyah alphabet comprises the letters used in the Qur'an. An attractive and engaging way of learning the Hijaiyah alphabet is necessary for children. One alternative is to develop the learning process into a mobile application using augmented reality technology. Augmented reality combines two-dimensional or three-dimensional virtual objects with the real three-dimensional environment and projects them in real time. The application aims to foster children's interest in learning the Hijaiyah alphabet. It uses a smartphone and a printed marker as its media, and was built using Unity, the augmented reality library Vuforia, and Blender as the 3D object modeling software. The output of this research is a learning application for the Hijaiyah letters using augmented reality. To use it, the user first places a marker that has been registered and printed; the smartphone camera then tracks the marker. If the marker is invalid, the user repeats the tracking process. If the marker is valid and identified, the Hijaiyah letters are projected onto the marker in three-dimensional form. The user can then learn and understand the shape and pronunciation of each Hijaiyah letter by touching the virtual button on the marker.
Robust reconstruction of time-resolved diffraction from ultrafast streak cameras
Badali, Daniel S.; Dwayne Miller, R. J.
2017-01-01
In conjunction with ultrafast diffraction, streak cameras offer an unprecedented opportunity for recording an entire molecular movie with a single probe pulse. This is an attractive alternative to conventional pump-probe experiments and opens the door to studying irreversible dynamics. However, due to the “smearing” of the diffraction pattern across the detector, the streaking technique has thus far been limited to simple mono-crystalline samples and extreme care has been taken to avoid overlapping diffraction spots. In this article, this limitation is addressed by developing a general theory of streaking of time-dependent diffraction patterns. Understanding the underlying physics of this process leads to the development of an algorithm based on Bayesian analysis to reconstruct the time evolution of the two-dimensional diffraction pattern from a single streaked image. It is demonstrated that this approach works on diffraction peaks that overlap when streaked, which not only removes the necessity of carefully choosing the streaking direction but also extends the streaking technique to be able to study polycrystalline samples and materials with complex crystalline structures. Furthermore, it is shown that the conventional analysis of streaked diffraction can lead to erroneous interpretations of the data. PMID:28653022
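The "smearing" the authors describe can be illustrated with a toy forward model: the streaked image is the sum of the instantaneous diffraction patterns, each shifted across the detector in proportion to its arrival time. This sketch is not the paper's Bayesian reconstruction (which inverts such a model); it assumes a constant streak velocity along one detector axis and an integer shift per frame:

```python
import numpy as np

def streak(frames, shift_per_frame):
    """Toy forward model of a streaked diffraction image.

    frames          : (T, H, W) time-resolved diffraction patterns
    shift_per_frame : integer detector shift (pixels) between frames

    The streak camera sweeps the pattern across the detector, so the
    recorded image is the sum of each instantaneous pattern displaced
    along the column axis by its arrival time; overlapping diffraction
    spots in this sum are what make the inverse problem hard.
    """
    T, H, W = frames.shape
    out = np.zeros((H, W + (T - 1) * shift_per_frame))
    for t, frame in enumerate(frames):
        s = t * shift_per_frame
        out[:, s:s + W] += frame
    return out
```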
Video-Camera-Based Position-Measuring System
NASA Technical Reports Server (NTRS)
Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert
2005-01-01
A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying.
For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets also can be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of corresponding image-processing filters and targets, the vision-based position-measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and varying background imagery.
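As an illustration of how a single calibrated camera can recover all three coordinates of a target of known size, the standard pinhole (similar-triangles) relations are sketched below. The focal length in pixels, principal point, and target width are assumed calibration inputs; this is the textbook geometry, not NASA's actual image-processing implementation:

```python
def target_position(u, v, w_pixels, f_pixels, target_width_m, cx, cy):
    """Estimate a target's 3D position from one image (pinhole model).

    u, v           : pixel coordinates of the target center
    w_pixels       : apparent target width in pixels
    f_pixels       : focal length expressed in pixels (from calibration)
    target_width_m : known physical width of the printed target
    cx, cy         : principal point (image center) in pixels

    Depth follows from similar triangles: the target's apparent size
    shrinks inversely with distance; x and y then follow by back-
    projecting the centroid through the pinhole.
    """
    z = f_pixels * target_width_m / w_pixels
    x = (u - cx) * z / f_pixels
    y = (v - cy) * z / f_pixels
    return x, y, z
```

For example, a 10 cm target imaged 100 pixels wide by a 1000-pixel focal length lens lies 1 m from the camera.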
Six-degrees-of-freedom sensing based on pictures taken by single camera.
Zhongke, Li; Yong, Wang; Yongyuan, Qin; Peijun, Lu
2005-02-01
Two six-degrees-of-freedom sensing methods are presented. In the first method, three laser beams are employed to establish a Cartesian frame on a rigid body, and a screen is adopted to form diffuse spots. In the second method, two superimposed grid screens and two laser beams are used. A CCD camera is used to take photographs in both methods. Both approaches provide a simple and error-free method of continuously recording the positions and attitudes of a rigid body in motion.
SSM/OOM - SSM WITH OOM MANIPULATION CODE
NASA Technical Reports Server (NTRS)
Goza, S. P.
1994-01-01
Creating, animating, and recording solid-shaded and wireframe three-dimensional geometric models can be of great assistance in the research and design phases of product development, in project planning, and in engineering analyses. SSM and OOM are application programs which together allow for interactive construction and manipulation of three-dimensional models of real-world objects as simple as boxes or as complex as Space Station Freedom. The output of SSM, in the form of binary files defining geometric three dimensional models, is used as input to OOM. Animation in OOM is done using 3D models from SSM as well as cameras and light sources. The animated results of OOM can be output to videotape recorders, film recorders, color printers and disk files. SSM and OOM are also available separately as MSC-21914 and MSC-22263, respectively. The Solid Surface Modeler (SSM) is an interactive graphics software application for solid-shaded and wireframe three-dimensional geometric modeling. The program has a versatile user interface that, in many cases, allows mouse input for intuitive operation or keyboard input when accuracy is critical. SSM can be used as a stand-alone model generation and display program and offers high-fidelity still image rendering. Models created in SSM can also be loaded into the Object Orientation Manipulator for animation or engineering simulation. The Object Orientation Manipulator (OOM) is an application program for creating, rendering, and recording three-dimensional computer-generated still and animated images. This is done using geometrically defined 3D models, cameras, and light sources, referred to collectively as animation elements. OOM does not provide the tools necessary to construct 3D models; instead, it imports binary format model files generated by the Solid Surface Modeler (SSM). Model files stored in other formats must be converted to the SSM binary format before they can be used in OOM. 
SSM is available as MSC-21914 or as part of the SSM/OOM bundle, COS-10047. Among OOM's features are collision detection (with visual and audio feedback), the capability to define and manipulate hierarchical relationships between animation elements, stereographic display, and ray-traced rendering. OOM uses Euler angle transformations for calculating the results of translation and rotation operations. OOM and SSM are written in C-language for implementation on SGI IRIS 4D series workstations running the IRIX operating system. A minimum of 8Mb of RAM is recommended for each program. The standard distribution medium for this program package is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. These versions of OOM and SSM were released in 1993.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenselet-based light field cameras is the limited resolution. This limitation comes from the structure where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field on a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images compared to the original sub-aperture images and the case without depth refinement.
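The conversion of a raw lenslet image into sub-aperture images, and the resolution trade-off it embodies, can be sketched as a simple array rearrangement. This is an idealized model assuming the microlens grid is axis-aligned with the sensor and covers exactly n_u × n_v pixels per lenslet; real Lytro decoding must also handle hexagonal packing, rotation, and demosaicing:

```python
import numpy as np

def subaperture_images(raw, n_micro_y, n_micro_x, n_u, n_v):
    """Rearrange an idealized lenslet image into sub-aperture views.

    raw : (n_micro_y * n_v, n_micro_x * n_u) sensor image, where each
          microlens covers an n_v x n_u block of pixels.

    Returns an (n_v, n_u, n_micro_y, n_micro_x) array: each [v, u]
    slice is one sub-aperture image whose spatial resolution is only
    the number of microlenses, which is the trade-off in the abstract.
    """
    lf = raw.reshape(n_micro_y, n_v, n_micro_x, n_u)
    # Reorder axes to (v, u, micro_y, micro_x).
    return lf.transpose(1, 3, 0, 2)
```

The super-resolution step in the paper then fuses these mutually shifted low-resolution views, with the shifts (i.e., depth) refined alternately.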
Simulation-based camera navigation training in laparoscopy-a randomized trial.
Nilsson, Cecilia; Sorensen, Jette Led; Konge, Lars; Westen, Mikkel; Stadeager, Morten; Ottesen, Bent; Bjerrum, Flemming
2017-05-01
Inexperienced operating assistants are often tasked with the important role of handling camera navigation during laparoscopic surgery. Incorrect handling can lead to poor visualization, increased operating time, and frustration for the operating surgeon, all of which can compromise patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. A randomized, single-center superiority trial with three groups: the first group practiced simulation-based camera navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera navigation skills during a laparoscopic cholecystectomy. The secondary outcome was technical skills after training, using a previously developed model for testing camera navigational skills. The exploratory outcome measured participants' motivation toward the task as an operating assistant. Thirty-six participants were randomized. No significant difference was found in the primary outcome between the three groups (p = 0.279). The secondary outcome showed no significant difference between the intervention groups, with total times of 167 s (95% CI, 118-217) and 194 s (95% CI, 152-236) for the camera group and the procedure group, respectively (p = 0.369). Both intervention groups were significantly faster than the control group, 307 s (95% CI, 202-412), p = 0.018 and p = 0.045, respectively. On the exploratory outcome, the control group scored higher on two dimensions: interest/enjoyment (p = 0.030) and perceived choice (p = 0.033). Simulation-based training improves the technical skills required for camera navigation, regardless of whether camera navigation or the procedure itself is practiced.
Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher interest/enjoyment and perceived choice than the camera group.
Forget, Benoît-Claude; Ramaz, François; Atlan, Michaël; Selb, Juliette; Boccara, Albert-Claude
2003-03-01
We report new results on acousto-optical tomography in phantom tissues using a frequency chirp modulation and a CCD camera. This technique allows quick recording of three-dimensional images of the optical contrast with a two-dimensional scan of the ultrasound source in a plane perpendicular to the ultrasonic path. The entire optical contrast along the ultrasonic path is concurrently obtained from the capture of a film sequence at a rate of 200 Hz. This technique reduces the acquisition time, and it enhances the axial resolution and thus the contrast, which are usually poor owing to the large volume of interaction of the ultrasound perturbation.
Photogrammetry on glaciers: Old and new knowledge
NASA Astrophysics Data System (ADS)
Pfeffer, W. T.; Welty, E.; O'Neel, S.
2014-12-01
In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in the image-processing power of computers, and very innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery, but have no efficient means to extract the data they desire from them. In many cases these are single-image time series where the processing limitation lies in the paucity of methods to obtain 3-dimensional object-space information from measurements in the 2-dimensional image space; in other cases camera pairs have been operated but no automated means is in hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object-space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results, and sometimes buried in decades-old pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers.
Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, some further effort to share knowledge and methods would be a great benefit for the community. We consider some of the main problems to be solved, and aspects of how optimal knowledge sharing might be accomplished.
Cooperative single-photon subradiant states in a three-dimensional atomic array
NASA Astrophysics Data System (ADS)
Jen, H. H.
2016-11-01
We propose complete superradiant and subradiant states that can be manipulated and prepared in a three-dimensional atomic array. These subradiant states can be realized by absorbing a single photon and imprinting spatially-dependent phases on the atomic system. We find that the collective decay rates and associated cooperative Lamb shifts depend strongly on the phases we manage to imprint, and a subradiant state of long lifetime can be found for various lattice spacings and atom numbers. We also investigate both optically thin and thick atomic arrays, which can serve for systematic studies of super- and sub-radiance. Our proposal offers an alternative scheme for quantum memory of light in a three-dimensional array of two-level atoms, which is applicable and potentially advantageous in quantum information processing.
To catch a comet: Technical overview of CAN DO G-324
NASA Technical Reports Server (NTRS)
Obrien, T. J. (Editor)
1986-01-01
The primary objective of the C. E. Williams Middle School Get Away Special CAN DO is the photographing of Comet Halley. The project will involve middle school students, grades 6 through 8, in the study and interpretation of astronomical photographs and techniques. G-324 is contained in a 5 cubic foot GAS canister with an opening door and pyrex window for photography. It will be pressurized with one atmosphere of dry nitrogen. Three 35mm still cameras with 250-exposure film backs and different focal length lenses will be fired by a combination of an automatic timer and an active comet detector. A lightweight 35mm movie camera will shoot single exposures at about 1/2 minute intervals to give an overlapping skymap of the mission. The fifth camera is a solid state television camera specially constructed for detection of the comet by microprocessor.
Phase Diagram of a Three-Dimensional Antiferromagnet with Random Magnetic Anisotropy
Perez, Felio A.; Borisov, Pavel; Johnson, Trent A.; ...
2015-03-04
Three-dimensional (3D) antiferromagnets with random magnetic anisotropy (RMA) that were experimentally studied to date have competing two-dimensional and three-dimensional exchange interactions which can obscure the authentic effects of RMA. The magnetic phase diagram of FexNi1-xF2 epitaxial thin films with true random single-ion anisotropy was deduced from magnetometry and neutron scattering measurements and analyzed using mean field theory. Regions with uniaxial, oblique and easy plane anisotropies were identified. A RMA-induced glass region was discovered where a Griffiths-like breakdown of long-range spin order occurs.
The 3-D image recognition based on fuzzy neural network technology
NASA Technical Reports Server (NTRS)
Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei
1993-01-01
A three-dimensional stereoscopic image recognition system based on fuzzy neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Two CCD color camera images are fed to the preprocessing part, where several operations including RGB-HSV transformation are performed. A multi-layer perceptron is used for line detection in the feature extraction part. A fuzzy matching technique is then introduced in the matching part. The system is realized on a Sun SPARCstation with a special image-input hardware system. An experimental result on bottle images is also presented.
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.
1987-01-01
An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple-processor/parallel-processing configuration. The system currently interfaces to cameras but also has the capability to use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year, and more realistic demonstrations now being planned are discussed.
Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras
Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin
2016-01-01
The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in the fast measurement of large-scale objects or high-speed moving objects. The innovative line-scan technology opens up new possibilities owing to its ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731
Using computer graphics to design Space Station Freedom viewing
NASA Technical Reports Server (NTRS)
Goldsberry, B. S.; Lippert, B. O.; Mckee, S. D.; Lewis, J. L., Jr.; Mount, F. E.
1989-01-01
An important aspect of planning for Space Station Freedom at the United States National Aeronautics and Space Administration (NASA) is the placement of the viewing windows and cameras for optimum crewmember use. Researchers and analysts are evaluating the placement options using a three-dimensional graphics program called PLAID. This program, developed at the NASA Johnson Space Center (JSC), is being used to determine the extent to which the viewing requirements for assembly and operations are being met. A variety of window placement options in specific modules are assessed for accessibility. In addition, window and camera placements are analyzed to ensure that viewing areas are not obstructed by the truss assemblies, externally-mounted payloads, or any other station element. Other factors being examined include anthropometric design considerations, workstation interfaces, structural issues, and mechanical elements.
NASA Astrophysics Data System (ADS)
Tian, Suyun; Zhu, Guannan; Tang, Yanping; Xie, Xiaohua; Wang, Qian; Ma, Yufei; Ding, Guqiao; Xie, Xiaoming
2018-03-01
Various graphene-based Si nanocomposites have been reported to improve the performance of active materials in Li-ion batteries. However, these candidates still yield severe capacity fading due to the electrical disconnection and fractures caused by the huge volume changes over extended cycles. Therefore, we have designed a novel three-dimensional cross-linked graphene and single-wall carbon nanotube structure to encapsulate the Si nanoparticles. The synthesized three-dimensional structure is attributed to the excellent self-assembly of carbon nanotubes with graphene oxide as well as a thermal treatment process at 900 °C. This special structure provides sufficient void spaces for the volume expansion of Si nanoparticles and channels for the diffusion of ions and electrons. In addition, the cross-linking of the graphene and single-wall carbon nanotubes also strengthens the stability of the structure. As a result, the volume expansion of the Si nanoparticles is restrained. The specific capacity remains at 1450 mAh g-1 after 100 cycles at 200 mA g-1. This well-defined three-dimensional structure facilitates superior capacity and cycling stability in comparison with bare Si and a mechanically mixed composite electrode of graphene, single-wall carbon nanotubes and silicon nanoparticles.
Camera pose estimation for augmented reality in a small indoor dynamic scene
NASA Astrophysics Data System (ADS)
Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad
2017-09-01
Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment, by adding constraints on 3-D points and poses in the optimization process, on the other. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
Non-iterative volumetric particle reconstruction near moving bodies
NASA Astrophysics Data System (ADS)
Mendelson, Leah; Techet, Alexandra
2017-11-01
When multi-camera 3D PIV experiments are performed around a moving body, the body often obscures visibility of regions of interest in the flow field in a subset of cameras. We evaluate the performance of non-iterative particle reconstruction algorithms used for synthetic aperture PIV (SAPIV) in these partially-occluded regions. We show that when partial occlusions are present, the quality and availability of 3D tracer particle information depends on the number of cameras and reconstruction procedure used. Based on these findings, we introduce an improved non-iterative reconstruction routine for SAPIV around bodies. The reconstruction procedure combines binary masks, already required for reconstruction of the body's 3D visual hull, and a minimum line-of-sight algorithm. This approach accounts for partial occlusions without performing separate processing for each possible subset of cameras. We combine this reconstruction procedure with three-dimensional imaging on both sides of the free surface to reveal multi-fin wake interactions generated by a jumping archer fish. Sufficient particle reconstruction in near-body regions is crucial to resolving the wake structures of upstream fins (i.e., dorsal and anal fins) before and during interactions with the caudal tail.
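A minimum line-of-sight combination of the kind described can be sketched as follows: on each refocused plane, a voxel keeps the minimum intensity over only those cameras whose view of it is not blocked by the body's visual hull, so a particle must appear in every unoccluded view to survive. The array shapes and masking convention here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def min_los_refocus(images, masks):
    """Combine back-projected camera views on one refocused plane.

    images : (n_cam, H, W) intensities already re-projected onto the plane
    masks  : (n_cam, H, W) boolean, True where the camera's line of sight
             to the plane is NOT blocked by the body's visual hull

    Occluded cameras are excluded (set to +inf so they never win the
    minimum); pixels seen by no camera are set to zero.
    """
    usable = np.where(masks, images, np.inf)
    seen = masks.any(axis=0)
    return np.where(seen, usable.min(axis=0), 0.0)
```

Because the masks come from the body reconstruction already required for the visual hull, no separate processing per camera subset is needed, which is the efficiency argument made in the abstract.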
Babcock, Hazen P
2018-01-29
This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.
NASA Astrophysics Data System (ADS)
Lien, Chi-Hsiang; Lin, Chun-Yu; Chen, Shean-Jen; Chien, Fan-Ching
2017-02-01
A three-dimensional (3D) single fluorescent particle tracking strategy based on temporal focusing multiphoton excitation microscopy (TFMPEM) combined with astigmatism imaging is proposed for delivering nanoscale axial information, revealing the 3D trajectories of single fluorospheres within the axially-resolved multiphoton excitation volume without z-axis scanning. Its dynamical capability is demonstrated by measuring the diffusion coefficient of fluorospheres in glycerol solutions, with position standard deviations of 14 nm and 21 nm in the lateral and axial directions, respectively, at a frame rate of 100 Hz. Moreover, the optical trapping force under TFMPEM is minimized, avoiding the interference with tracking measurements that arises in spatial-focusing multiphoton excitation approaches. The presented 3D single particle tracking strategy thus overcomes the time-resolution limitation of multiphoton imaging through the fast frame rate of TFMPEM, and provides three-dimensional locations of multiple particles using the astigmatism method.
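The astigmatism method's depth readout can be illustrated with a minimal calibration lookup: a cylindrical lens makes the spot's x and y widths vary oppositely with defocus, so a measured (wx, wy) pair is matched against a through-focus calibration curve. This nearest-sample sketch (real pipelines interpolate and fit the curves) uses a distance metric in square-root width space, one common choice; the calibration arrays are hypothetical inputs measured by stepping a bead through focus:

```python
import numpy as np

def z_from_astigmatism(wx, wy, calib_z, calib_wx, calib_wy):
    """Estimate axial position from astigmatic PSF widths.

    wx, wy             : fitted Gaussian widths of the spot along x and y
    calib_z            : (N,) axial positions of the calibration scan
    calib_wx, calib_wy : (N,) widths measured at each calibration z

    Returns the calibration z whose (wx, wy) pair is closest to the
    measurement, comparing square roots of the widths.
    """
    d = (np.sqrt(wx) - np.sqrt(calib_wx)) ** 2 \
        + (np.sqrt(wy) - np.sqrt(calib_wy)) ** 2
    return calib_z[np.argmin(d)]
```

At focus wx == wy; above and below focus the inequality flips sign, which is what gives the method an unambiguous z over the calibrated range.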
NASA Astrophysics Data System (ADS)
Wang, Ying; Liu, Qi; Wang, Jun; Wang, Qiong-Hua
2018-03-01
We present an optical encryption method of multiple three-dimensional objects based on multiple interferences and single-pixel digital holography. By modifying the Mach–Zehnder interferometer, the interference of the multiple objects beams and the one reference beam is used to simultaneously encrypt multiple objects into a ciphertext. During decryption, each three-dimensional object can be decrypted independently without having to decrypt other objects. Since the single-pixel digital holography based on compressive sensing theory is introduced, the encrypted data of this method is effectively reduced. In addition, recording fewer encrypted data can greatly reduce the bandwidth of network transmission. Moreover, the compressive sensing essentially serves as a secret key that makes an intruder attack invalid, which means that the system is more secure than the conventional encryption method. Simulation results demonstrate the feasibility of the proposed method and show that the system has good security performance. Project supported by the National Natural Science Foundation of China (Grant Nos. 61405130 and 61320106015).
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site] Figure 1
This image, acquired by the Mars Exploration Rover Spirit's panoramic camera on the 53rd martian day, or sol, of the rover's mission, struck science and engineering teams as not only scientifically interesting but remarkably beautiful. The large, shadowed rock in the foreground is nicknamed 'Sandia' for a mountain range in New Mexico. An imposing rock, 'Sandia' is about 33 centimeters high (1 foot) and about 1.7 meters (5.5 feet) long. Figure 1 is a lightened version of the more artistic image above. The combination of the rover's high-resolution cameras with software tools used by scientists allows the minute details on martian targets to be visualized. When lightened, this image reveals much about the pictured rocks, which the science team believes are ejected material, or ejecta, from the nearby crater called 'Bonneville.' Scientists believe 'Sandia' is a basaltic rock that landed on its side after being ejected from the crater. The vertical lines on the side of the rock facing the camera are known by geologists as 'flow banding' and typically run horizontally, indicating that 'Sandia' is on its side. What look like small holes on the two visible sides of the rock are called vesicles; they were probably once gas bubbles within the lava. The lighting not only makes for an artistic image, it helps scientists get a virtual three-dimensional feel for target rocks. Observations taken at different times of day, as shadows move and surface texture details on target rocks are revealed, are entered into modeling software that turns a two-dimensional image into a three-dimensional research tool. Many smaller rocks can be seen in the background of the image. Some rocks are completely exposed, while others are only peeking out of the surface.
Scientists believe that two processes might be at work here: accretion, which occurs when winds deposit material that slowly buries many of the rocks; and deflation, which occurs when surface material is removed by wind, exposing more and more of the rocks.
NASA Technical Reports Server (NTRS)
2004-01-01
This is a three-dimensional stereo anaglyph of an image taken by the front navigation camera onboard the Mars Exploration Rover Spirit, showing an interesting patch of rippled soil. Spirit took this image on sol 37 (Feb. 9, 2004) after completing the longest drive ever made by a rover on another planet - 21.2 meters (69.6 feet). On sol 38 scientists plan to investigate this interesting location with the microscopic imager and Moessbauer spectrometer on Spirit's instrument deployment device.
Generation of animation sequences of three dimensional models
NASA Technical Reports Server (NTRS)
Poi, Sharon (Inventor); Bell, Brad N. (Inventor)
1990-01-01
The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
NASA Astrophysics Data System (ADS)
Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing
2017-01-01
This paper outlines a low-cost, user-friendly photogrammetric technique that uses nonmetric cameras to obtain digital sequence images of an excavation site, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a number of global control points at the excavation site, to reconstruct high-precision measured three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratios, high inclination, unstable altitudes, and significant ground elevation changes affecting image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method achieves the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes but yields lower accuracy when reconstructing small scenes at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for accurate location of cultural relics, archaeological excavations, investigation, and site protection planning. The proposed method has comprehensive application value.
Evaluation of sequential images for photogrammetric point determination
NASA Astrophysics Data System (ADS)
Kowalczyk, M.
2011-12-01
Close-range photogrammetry encounters many problems in reconstructing the three-dimensional shape of objects. The relative orientation parameters of the photographs usually play a key role in solving this problem. Automating the process is difficult because of the complexity of the recorded scene and the configuration of camera positions, which often makes it impossible to join the photographs into one set automatically. The use of a camcorder is widely proposed in the literature to support the creation of 3D models. Its main advantages are the large number of recorded images and camera positions: the exterior orientation changes only slightly between two neighboring frames. These features of a film sequence allow models to be created with basic algorithms that work faster and more robustly than with separately taken photographs. The first part of this paper presents the results of experiments determining the interior orientation parameters of several sets of frames showing a three-dimensional test field. This section describes the calibration repeatability of film frames taken with a camcorder, which is important for assessing the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied to correct the images. Afterwards, a short film of the same test field was taken to determine a group of check points, as a control on the camera's suitability for measurement tasks. Finally, some results are presented from experiments comparing the determination of recorded object points in 3D space. In conventional digital photogrammetry, where separate photographs are used, the first levels of the image pyramids are connected using feature-based matching. This complicated process creates many contingencies that can produce false detections of image similarity.
With a digital film camera, authors avoid this risky step and go straight to area-based matching, exploiting the high degree of similarity between two corresponding film frames. A first approximation for establishing connections between photographs comes from the whole-image distance. This image-distance method can work with more than just the two dimensions of the translation vector: scale and angles are also used to improve image matching. The operation creates more similar-looking frames in which corresponding characteristic points lie close to each other, so the procedure searching for point pairs works faster and more accurately, because the analyzed areas can be reduced. Another proposed solution, based on an image created by accumulating differences between particular frames, gives rougher results but works much faster than standard matching.
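The area-based matching step described above can be sketched with zero-mean normalized cross-correlation. The exhaustive search below is a minimal illustration (function names are ours, not the paper's); a real implementation would restrict the search window using the translation estimated between neighboring frames.

```python
import numpy as np

def ncc(patch, template):
    # Zero-mean normalized cross-correlation between two equal-size patches.
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(frame, template):
    # Exhaustive search for the best-matching window in the frame.
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            score = ncc(frame[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

Reducing the analyzed area around the predicted position turns the double loop into a small neighborhood search, which is what makes video-rate matching tractable.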
Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher
2016-01-01
Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluating injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, and these values improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.
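A corrective factor of the kind applied above is, in the simplest case, a linear mapping fitted by least squares against the reference system. The sketch below is an assumption about its form (the abstract does not specify it), not the authors' actual procedure.

```python
import numpy as np

def fit_correction(kinect_deg, reference_deg):
    # Least-squares linear correction: reference ~= slope * kinect + offset.
    slope, offset = np.polyfit(kinect_deg, reference_deg, 1)
    return slope, offset

def apply_correction(kinect_deg, slope, offset):
    # Map raw Kinect angles onto the reference system's scale.
    return slope * np.asarray(kinect_deg) + offset
```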
Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott
2015-01-01
The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants age 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
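ICC values like those reported above are computed from a subjects-by-raters score matrix via an ANOVA decomposition. Below is a minimal sketch of the two-way random, single-measures form ICC(2,1); the abstract does not state which ICC form was used, so this choice is an assumption.

```python
import numpy as np

def icc2_1(data):
    # ICC(2,1): two-way random effects, absolute agreement, single measures.
    # data: (n subjects) x (k raters) array of scores.
    x = np.asarray(data, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row = x.mean(axis=1)   # subject means
    col = x.mean(axis=0)   # rater means
    msr = k * ((row - grand) ** 2).sum() / (n - 1)   # between-subjects
    msc = n * ((col - grand) ** 2).sum() / (k - 1)   # between-raters
    sse = ((x - row[:, None] - col[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```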
CMOS detectors: lessons learned during the STC stereo channel preflight calibration
NASA Astrophysics Data System (ADS)
Simioni, E.; De Sio, A.; Da Deppo, V.; Naletto, G.; Cremonese, G.
2017-09-01
The Stereo Camera (STC), mounted on-board the BepiColombo spacecraft, will acquire the entire surface of Mercury in push-frame stereo mode. STC will provide the images for the global three-dimensional reconstruction of the surface of the innermost planet of the Solar System. The launch of BepiColombo is foreseen in 2018. STC has an innovative optical system configuration, which allows good optical performance with a mass and volume reduction of a factor of two with respect to the classical stereo camera approach. In such a telescope, two different optical paths, inclined at ±20° with respect to the nadir direction, are merged together into a unique off-axis path and focused on a single detector. The focal plane is equipped with a 2k x 2k hybrid Si-PIN detector, based on CMOS technology, combining low read-out noise, high radiation hardness, compactness, lack of parasitic light, capability of snapshot image acquisition, short exposure times (less than 1 ms) and small pixel size (10 μm). During the preflight calibration campaign of STC, some detector spurious effects were noticed. Analyzing the images taken during the calibration phase, two different signals affecting the background level were measured. These signals can reduce the detector dynamic range by up to a factor of four, and they are not due to dark current, stray light or similar effects. In this work we describe all the features of these unwanted effects, and the calibration procedures we developed to analyze them.
Comparison of Cyberware PX and PS 3D human head scanners
NASA Astrophysics Data System (ADS)
Carson, Jeremy; Corner, Brian D.; Crockett, Eric; Li, Peng; Paquette, Steven
2008-02-01
A common limitation of laser line three-dimensional (3D) scanners is the inability to scan objects with surfaces that are either parallel to the laser line or that self-occlude. Filling in missing areas adds some unwanted inaccuracy to the 3D model. Capturing the human head with a Cyberware PS Head Scanner is an example of obtaining a model where the incomplete areas are difficult to fill accurately. The PS scanner uses a single vertical laser line to illuminate the head and is unable to capture data at the top of the head, where the line of sight is tangent to the surface, and under the chin, an area occluded by the chin when the subject looks straight forward. The Cyberware PX Scanner was developed to obtain this missing 3D head data. The PX scanner uses two cameras offset at different angles to provide a more detailed head scan that captures surfaces missed by the PS scanner. The PX scanner cameras also use new technology to obtain color maps that are of higher resolution than those of the PS scanner. The two scanners were compared in terms of the amount of surface captured (surface area and volume) and the quality of head measurements when compared to direct measurements obtained through standard anthropometry methods. Relative to the PS scanner, the PX head scans were more complete and provided the full set of head measurements, but actual measurement values, when available from both scanners, were about the same.
Modeling job sites in real time to improve safety during equipment operation
NASA Astrophysics Data System (ADS)
Caldas, Carlos H.; Haas, Carl T.; Liapi, Katherine A.; Teizer, Jochen
2006-03-01
Real-time three-dimensional (3D) modeling of work zones has received increasing interest as a way to make equipment operation faster, safer and more precise. In addition, hazardous job site environments such as those on construction sites call for new devices that can rapidly and actively model static and dynamic objects. Flash LADAR (Laser Detection and Ranging) cameras are one of the recent technology developments that allow rapid spatial data acquisition of scenes. Algorithms that can process and interpret the output of such enabling technologies into three-dimensional models have the potential to significantly improve work processes. One particularly important application is modeling the location and path of objects in the trajectory of heavy construction equipment navigation. Detecting and mapping people, materials and equipment into a three-dimensional computer model allows their location and path to be analyzed, and access to hazardous areas to be limited or restricted. This paper presents experiments and results of a real-time three-dimensional modeling technique to detect static and moving objects within the field of view of a high-frame-update-rate laser range scanning device. Applications related to heavy equipment operations on transportation and construction job sites are specified.
Uncertainty quantification in volumetric Particle Image Velocimetry
NASA Astrophysics Data System (ADS)
Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos
2016-11-01
Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of the reconstructed three-dimensional particle location, which in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position, which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty, which is estimated as an extension of the 2D PIV uncertainty framework. Finally, the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases, the variation of estimated uncertainty with the elemental volumetric PIV error sources is also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
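Assuming independent error sources, the final propagation step combining the particle-position and cross-correlation contributions can be sketched as a root-sum-square per velocity component. This is a simplification of the paper's propagation equation, shown only to illustrate the idea.

```python
import numpy as np

def total_displacement_uncertainty(sigma_position, sigma_correlation):
    # Root-sum-square combination of two independent uncertainty
    # contributions: triangulated-particle-position uncertainty and
    # 3D cross-correlation uncertainty (per displacement component).
    sp = np.asarray(sigma_position, dtype=float)
    sc = np.asarray(sigma_correlation, dtype=float)
    return np.sqrt(sp ** 2 + sc ** 2)
```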
Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy
NASA Astrophysics Data System (ADS)
Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente
2017-02-01
We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single-hologram acquisition and using a fast-convergence algorithm for image processing. Altogether, MISHELF (initials coming from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel-domain diffraction patterns in a single camera snapshot obtained by illuminating the sample with three coherent lights at once. Previous implementations have proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range), phase-retrieved (twin-image elimination) quantitative phase imaging of dynamic events (video-rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed instrument is an alternative that improves some capabilities of existing lensless microscopes.
1993-08-01
...measure the inherent fracture toughness of a material. A thorough understanding of the test specimen behavior is a prerequisite to the application of measured material properties in structural applications. Three-dimensional dynamic analyses are performed for three different specimen configurations...
Three-camera stereo vision for intelligent transportation systems
NASA Astrophysics Data System (ADS)
Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.
1997-02-01
A major obstacle in the application of stereo vision to intelligent transportation systems is high computational cost. In this paper, a PC-based three-camera stereo vision system constructed with off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms that approach real-time performance. We present an edge-based, subpixel stereo algorithm which is adapted to permit accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be directly applied to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.
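The distance measurement underlying such a system follows the standard pinhole stereo relation Z = fB/d. The sketch below illustrates that relation only; it is not the authors' edge-based subpixel algorithm.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    # Pinhole stereo: depth Z = f * B / d, with disparity d and focal
    # length f in pixels, baseline B in metres. Larger disparity means
    # a closer object; zero disparity corresponds to infinite depth.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note how the subpixel accuracy of the matching step matters: a half-pixel disparity error at small disparities changes the estimated depth substantially.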
Depth-tunable three-dimensional display with interactive light field control
NASA Astrophysics Data System (ADS)
Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan
2016-07-01
A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of the multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel camera array, the 3D light field is built, and the light field structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A least-squares statistical analysis is carried out to extract the depth information inherent in the light field structure, and the accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that smooth motion parallax can be guaranteed. Experimental results show that the system is a convenient and effective way to adjust the 3D scene performance of the 3D display.
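The least-squares extraction of depth from the light-field structure can be illustrated as follows: for a parallel camera array, a scene point's image x-coordinate varies linearly with camera index, and the fitted slope is its disparity, which encodes relative depth. This is our own minimal formulation of the idea, not the paper's exact analysis.

```python
import numpy as np

def disparity_slope(camera_index, feature_x):
    # Fit feature_x ~= slope * camera_index + intercept by least squares;
    # the slope is the per-camera disparity of the tracked point.
    A = np.vstack([camera_index, np.ones(len(camera_index))]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, feature_x, rcond=None)
    return slope
```

Scaling all fitted slopes by a common factor before re-rendering is one way to realize the "depth-tunable" disparity adjustment described above.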
Scene analysis for effective visual search in rough three-dimensional-modeling scenes
NASA Astrophysics Data System (ADS)
Wang, Qi; Hu, Xiaopeng
2016-11-01
Visual search is a fundamental technology in the computer vision community. It is difficult to find an object in complex scenes when similar distracters exist in the background. We propose a target search method for rough three-dimensional-modeling scenes based on a vision salience theory and a camera imaging model. We give the definition of the salience of objects (or features) and explain how salience measurements of objects are calculated. We also present one type of search path that guides to the target through salient objects. Along the search path, as each previous object is localized, the search region of each subsequent object decreases; this region is calculated through the imaging model and an optimization method. The experimental results indicate that the proposed method is capable of resolving the ambiguities resulting from distracters that share similar visual features with the target, leading to an improvement in search speed of over 50%.
Integral image rendering procedure for aberration correction and size measurement.
Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion
2014-05-20
The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and then the location of each point in three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
NASA Astrophysics Data System (ADS)
Thomas, Edward; Williams, Jeremiah; Silver, Jennifer
2004-11-01
Over the past five years, the Auburn Plasma Sciences Laboratory (PSL) has applied two-dimensional particle image velocimetry (2D-PIV) techniques [E. Thomas, Phys. Plasmas, 6, 2672 (1999)] to make measurements of particle transport in dusty plasmas. Although important information was obtained from these earlier studies, the complex behavior of the charged microparticles clearly indicated that three-dimensional velocity information is needed. The PSL has recently acquired and installed a stereoscopic PIV (stereo-PIV) diagnostic tool for dusty plasma investigations [E. Thomas et al., Phys. Plasmas, L37 (2004)]. It employs a synchronized dual-laser, dual-camera system for measuring particle transport in three dimensions. Results will be presented on the initial application of stereo-PIV to dusty plasma studies. Additional results will be presented on the use of stereo-PIV for measuring the controlled interaction of two dust clouds.
In memoriam: Fumio Okano, innovator of 3D display
NASA Astrophysics Data System (ADS)
Arai, Jun
2014-06-01
Dr. Fumio Okano, a well-known pioneer and innovator of three-dimensional (3D) displays, passed away on 26 November 2013 in Kanagawa, Japan, at the age of 61. Okano joined Japan Broadcasting Corporation (NHK) in Tokyo in 1978. In 1981, he began researching high-definition television (HDTV) cameras, HDTV systems, ultrahigh-definition television systems, and 3D televisions at NHK Science and Technology Research Laboratories. His publications have been frequently cited by other researchers. Okano served eight years as chair of the annual SPIE conference on Three- Dimensional Imaging, Visualization, and Display and another four years as co-chair. Okano's leadership in this field will be greatly missed and he will be remembered for his enduring contributions and innovations in the field of 3D displays. This paper is a summary of the career of Fumio Okano, as well as a tribute to that career and its lasting legacy.
NASA Astrophysics Data System (ADS)
Gao, Xinya; Wang, Yonghong; Li, Junrui; Dan, Xizuo; Wu, Sijin; Yang, Lianxiang
2017-06-01
It is difficult to measure absolute three-dimensional deformation using traditional digital speckle pattern interferometry (DSPI) when the boundary condition of the object being tested is not exactly given. In practical applications, the boundary condition cannot always be specifically provided, limiting the use of DSPI in real-world applications. To tackle this problem, a DSPI system that integrates the spatial carrier method and a color camera has been established. Four phase maps are obtained simultaneously by spatial carrier color-digital speckle pattern interferometry using four speckle interferometers with different illumination directions. One out-of-plane and two in-plane absolute deformations can be acquired simultaneously, without knowing the boundary conditions, using an absolute deformation extraction algorithm based on the four phase maps. Finally, the system is validated experimentally by measuring the deformation of a flat aluminum plate with a groove.
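Recovering three deformation components from four phase maps amounts to solving an overdetermined linear system in the interferometers' sensitivity vectors. The sketch below assumes the standard relation phi_i = (2*pi/lambda) * s_i . d; the geometry and variable names are ours, not taken from the paper.

```python
import numpy as np

def displacement_from_phases(phases, sensitivity, wavelength):
    # phases: (4,) measured phases [rad] from the four interferometers.
    # sensitivity: (4, 3) sensitivity vectors set by the illumination
    # directions (assumed known from the optical geometry).
    # Solves phi_i = (2*pi/wavelength) * s_i . d for d = (u, v, w)
    # by least squares (4 equations, 3 unknowns).
    A = (2.0 * np.pi / wavelength) * np.asarray(sensitivity, dtype=float)
    d, *_ = np.linalg.lstsq(A, np.asarray(phases, dtype=float), rcond=None)
    return d
```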
Holographic digital microscopy in on-line process control
NASA Astrophysics Data System (ADS)
Osanlou, Ardeshir
2011-09-01
This article investigates the feasibility of real-time three-dimensional imaging of microscopic objects within various emulsions while being produced in specialized production vessels. The study is particularly relevant to on-line process monitoring and control in chemical, pharmaceutical, food, cleaning, and personal hygiene industries. Such processes are often dynamic and the materials cannot be measured once removed from the production vessel. The technique reported here is applicable to three-dimensional characterization analyses on stirred fluids in small reaction vessels. Relatively expensive pulsed lasers have been avoided through the careful control of the speed of the moving fluid in relation to the speed of the camera exposure and the wavelength of the continuous wave laser used. The ultimate aim of the project is to introduce a fully robust and compact digital holographic microscope as a process control tool in a full size specialized production vessel.
Three dimensional interactive display
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
2005-01-01
A three-dimensional (3-D) interactive display and method of forming the same, includes a transparent capaciflector (TC) camera formed on a transparent shield layer on the screen surface. A first dielectric layer is formed on the shield layer. A first wire layer is formed on the first dielectric layer. A second dielectric layer is formed on the first wire layer. A second wire layer is formed on the second dielectric layer. Wires on the first wire layer and second wire layer are grouped into groups of parallel wires with a turnaround at one end of each group and a sensor pad at the opposite end. An operational amplifier is connected to each of the sensor pads and the shield pad biases the pads and receives a signal from connected sensor pads in response to intrusion of a probe. The signal is proportional to probe location with respect to the monitor screen.
Three-Dimensional Conformation of Folded Polymers in Single Crystals
NASA Astrophysics Data System (ADS)
Hong, You-lee; Yuan, Shichen; Li, Zhen; Ke, Yutian; Nozaki, Koji; Miyoshi, Toshikazu
2015-10-01
The chain-folding mechanism and structure of semicrystalline polymers have long been controversial. Solid-state NMR was applied to determine the chain trajectory of 13C-CH3-labeled isotactic poly(1-butene) (iPB1) in form III chiral single crystals blended with nonlabeled iPB1 crystallized in dilute solutions under low supercooling. An advanced 13C-13C double-quantum NMR technique probing the spatial proximity pattern of labeled 13C nuclei revealed that the chains adopt a three-dimensional (3D) conformation in single crystals. The determined results indicate a two-step crystallization process of (i) cluster formation via self-folding in the precrystallization stage and (ii) deposition of the nanoclusters as a building block at the growth front in single crystals.
Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.
Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P
2016-01-01
Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, accompanied by decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frame rate: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After estimating the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error of the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID: 27513846
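The accuracy check described above — reconstructing a rigid bar carrying two markers at a known separation and averaging the distance error over many poses — can be sketched as follows. The marker coordinates below are made-up illustrations, not data from the study.

```python
import math

# Average absolute error between reconstructed inter-marker distances and the
# known length of a rigid calibration bar (as in the bar check described above).

def mean_distance_error(pairs, known_length):
    """pairs: list of (p1, p2) 3D marker positions per pose; returns mean |d - L|."""
    errors = []
    for p1, p2 in pairs:
        d = math.dist(p1, p2)          # Euclidean distance between the markers
        errors.append(abs(d - known_length))
    return sum(errors) / len(errors)

# Hypothetical reconstructions of a 250 mm bar at two poses (units: mm).
pairs = [((0, 0, 0), (250.4, 0, 0)),
         ((100, 50, 20), (100, 50, 269.3))]
err = mean_distance_error(pairs, 250.0)
print(f"mean error: {err:.2f} mm")  # mean error: 0.55 mm
```

In practice this statistic is computed over many bar positions spanning the whole working volume, so that it reflects the calibration quality everywhere the swimmer will be tracked.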
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second-year renewal application, includes the following three primary tasks. Task 1, on two-dimensional neutron transport, is divided into (a) the single-medium searchlight problem (SLP) and (b) the two-adjacent-half-space SLP. Task 2, on three-dimensional neutron transport, covers (a) a point source in arbitrary geometry, (b) the single-medium SLP, and (c) the two-adjacent-half-space SLP. Task 3, on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP form a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, extensions of the benchmarks to multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
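The verification role these benchmarks play — comparing a production code's output against an analytical standard point by point — reduces to a relative-error check like the one sketched below. The flux values are invented placeholders, not benchmark data from the grant.

```python
# Verify a transport code against an analytical benchmark: a benchmark point
# passes if the code's relative error stays within a stated tolerance.

def passes_benchmark(computed, reference, rel_tol=1e-4):
    """True if |computed - reference| / |reference| <= rel_tol."""
    return abs(computed - reference) <= rel_tol * abs(reference)

# Invented scalar-flux values at three spatial points (benchmark vs. code).
reference = [1.23456, 0.87654, 0.54321]
computed  = [1.23460, 0.87650, 0.54500]
results = [passes_benchmark(c, r) for c, r in zip(computed, reference)]
print(results)  # the third point exceeds the tolerance
```

The choice of tolerance depends on how many digits the analytical benchmark itself is converged to; a code should not be held to more digits than the standard provides.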