Image intensifier-based volume tomographic angiography imaging system: system evaluation
NASA Astrophysics Data System (ADS)
Ning, Ruola; Wang, Xiaohui; Shen, Jianjun; Conover, David L.
1995-05-01
An image intensifier-based rotational volume tomographic angiography imaging system has been constructed. The system consists of an x-ray tube and an image intensifier that are separately mounted on a gantry. This system uses an image intensifier coupled to a TV camera as a two-dimensional detector so that a set of two-dimensional projections can be acquired for a direct three-dimensional (3D) reconstruction. This system has been evaluated with two phantoms: a vascular phantom and a monkey head cadaver. One hundred eighty projections of each phantom were acquired with the system. A set of three-dimensional images was directly reconstructed from the projection data. The experimental results indicate that good image quality can be obtained with this system.
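For illustration, the sketch below shows the simplest form of reconstruction from a set of projections: an unfiltered 2D parallel-beam backprojection in NumPy/SciPy. It is only a stand-in for the direct 3D cone-beam reconstruction described above, and the random `sinogram` and array sizes are placeholders.

```python
# Minimal unfiltered 2D parallel-beam backprojection (illustration only; the
# system above performs a direct 3D cone-beam reconstruction).
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """sinogram: (n_angles, n_detectors) array of 1D projections."""
    n_det = sinogram.shape[1]
    recon = np.zeros((n_det, n_det))
    for proj, angle in zip(sinogram, angles_deg):
        # Smear each 1D projection across the plane, rotate it back to its
        # acquisition angle, and accumulate.
        recon += rotate(np.tile(proj, (n_det, 1)), angle,
                        reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

# 180 projections over 180 degrees, as in the phantom experiments above.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = np.random.rand(180, 128)           # placeholder projection data
slice_estimate = backproject(sinogram, angles)
```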
Morris, Michael D.; Treado, Patrick J.
1991-01-01
An imaging system for providing spectrographically resolved images. The system incorporates a one-dimensional spatial encoding mask which enables an image to be projected onto a two-dimensional image detector after spectral dispersion of the image. The dimension of the image which is lost due to spectral dispersion on the two-dimensional detector is recovered through employing a reverse transform based on presenting a multiplicity of different spatial encoding patterns to the image. The system is especially adapted for detecting Raman scattering of monochromatic light transmitted through or reflected from physical samples. Preferably, spatial encoding is achieved through the use of a Hadamard mask which selectively transmits or blocks portions of the image from the sample being evaluated.
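As a sketch of the encoding/decoding idea, the example below applies Hadamard mask patterns along one spatial axis of a scene and recovers that axis with the reverse transform. A Sylvester Hadamard matrix (entries +/-1) keeps the algebra short; a real instrument uses 0/1 mask patterns derived from it, and the 64 x 64 `scene` is a placeholder.

```python
# Hadamard-encoded acquisition along one spatial axis, and its reverse transform.
import numpy as np
from scipy.linalg import hadamard

n = 64
scene = np.random.rand(n, n)          # hypothetical spectrally dispersed scene
H = hadamard(n)                        # n x n, entries +/-1, H @ H = n * I

# Each measurement weights the scene by one mask pattern (one column of H).
measurements = scene @ H

# The spatial dimension lost to dispersion is recovered by the reverse transform.
recovered = measurements @ H / n
assert np.allclose(recovered, scene)
```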
Phase-sensitive two-dimensional neutron shearing interferometer and Hartmann sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kevin
2015-12-08
A neutron imaging system detects both the phase shift and absorption of neutrons passing through an object. The neutron imaging system is based on either of two different neutron wavefront sensor techniques: 2-D shearing interferometry and Hartmann wavefront sensing. Both approaches measure an entire two-dimensional neutron complex field, including its amplitude and phase. Each measures the full-field, two-dimensional phase gradients and, concomitantly, the two-dimensional amplitude mapping, requiring only a single measurement.
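Both wavefront-sensing approaches end with the same generic step: integrating a measured two-dimensional gradient field back into a phase map. The sketch below shows one common Fourier-domain least-squares integration; it is an illustrative stand-in, not the patented reconstruction, and the synthetic phase screen is a placeholder.

```python
# Recover a 2D phase map from measured x/y phase gradients (Fourier-domain
# least-squares integration); a generic stand-in for the reconstruction step.
import numpy as np

def integrate_gradients(gx, gy):
    ny, nx = gx.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    denom = KX**2 + KY**2
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    Phi = (-1j * KX * Gx - 1j * KY * Gy) / denom
    Phi[0, 0] = 0.0                        # the mean phase is unobservable
    return np.real(np.fft.ifft2(Phi))

# Usage with synthetic gradients of a smooth phase screen:
y, x = np.mgrid[0:128, 0:128] / 128.0
phase = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
gx = np.gradient(phase, axis=1)
gy = np.gradient(phase, axis=0)
phase_estimate = integrate_gradients(gx, gy)
```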
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)
1999-01-01
A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
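The centroid-determination step can be sketched with standard image-processing tools, as below; the thresholded random `frame` is a placeholder, and overlap decomposition and stereo matching are omitted.

```python
# Threshold a camera frame, label connected particle images, and compute
# their intensity-weighted centroids.
import numpy as np
from scipy import ndimage

frame = np.random.rand(480, 640)                 # placeholder camera frame
binary = frame > 0.995                           # bright tracer particles
labels, n_particles = ndimage.label(binary)
centroids = ndimage.center_of_mass(frame, labels,
                                   np.arange(1, n_particles + 1))
# 'centroids' is a list of (row, col) image coordinates, one per particle,
# ready to be tracked in 2D and then stereo-matched into 3D positions.
```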
Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform
Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.
2011-01-01
A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using a custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinilhaq,; Widita, Rena
2014-09-30
Optical Coherence Tomography is often used in medical image acquisition for diagnosing retinal changes because it is easy to use and low in price. Unfortunately, this type of examination produces only a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into three-dimensional images to display the volumetric macula accurately. The system is built with three main stages: data acquisition, data extraction and 3-dimensional reconstruction. At the data acquisition step, Optical Coherence Tomography produced six *.jpg images of each patient, which were further extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of the normal macula. The reconstruction system which has been designed produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
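The gridding step can be sketched as follows, with SciPy's `griddata` standing in for the kriging interpolation performed in SURFER9; the scattered `(x, y, thickness)` samples are synthetic placeholders.

```python
# Interpolate scattered retinal-thickness samples onto a regular grid
# (a simple stand-in for kriging).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0, 480, size=(600, 2))                    # sample positions
thickness = 250 + 40 * np.sin(xy[:, 0] / 80.0) * np.cos(xy[:, 1] / 80.0)

grid_x, grid_y = np.mgrid[0:481, 0:481]
macula_surface = griddata(xy, thickness, (grid_x, grid_y), method='cubic')
# 'macula_surface' approximates the 481 x 481 x h macular map described above;
# cells outside the convex hull of the samples remain NaN.
```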
Hsieh, K S; Lin, C C; Liu, W S; Chen, F L
1996-01-01
Two-dimensional echocardiography has long been a standard diagnostic modality for congenital heart disease. Further attempts at three-dimensional reconstruction using two-dimensional echocardiographic images to visualize the stereotypic structure of cardiac lesions have been successful only recently. So far only very few studies have been done to display the three-dimensional anatomy of the heart through two-dimensional image acquisition because of the complex procedures involved. This study introduced a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in the Echo Laboratory here from about 3000 Echo examinations completed. Each image was acquired on-line with a specially designed high-resolution image grabber with EKG and respiratory gating technique. Off-line image processing using a window-architectured interactive software package includes construction of 2-D echocardiographic pixels to 3-D "voxels" with conversion from an orthogonal to a rotatory axial system, interpolation, extraction of the region of interest, segmentation, shading and, finally, 3D rendering. The three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" was recorded on video tape with a video interface. The potential application of 3D display of reconstructions from 2D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3D display was able to improve the diagnostic ability of echocardiography, and a clear-cut display of the various congenital cardiac defects and valvular stenosis could be demonstrated. Reinforcement of current techniques will expand future application of 3D display of conventional 2D images.
A system for extracting 3-dimensional measurements from a stereo pair of TV cameras
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Cunningham, R.
1976-01-01
Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point on the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operating and provides a three-dimensional measurement resolution of + or - mm at distances of about 2 m.
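The core geometric operation, recovering a 3-D point from a matched pair of image points and the two calibrated camera models, can be sketched with the standard linear (DLT) triangulation below; the projection matrices and pixel coordinates are hypothetical.

```python
# Triangulate a 3D point from a matched pair of image points, given the two
# calibrated 3x4 camera projection matrices.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: matched image coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)            # least-squares homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]                    # inhomogeneous 3D coordinates

# Usage with hypothetical calibrated cameras and one matched point pair:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
point = triangulate(P1, P2, uv1=(0.10, 0.05), uv2=(0.00, 0.05))
```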
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
Data processing from lobster eye type optics
NASA Astrophysics Data System (ADS)
Nentvich, Ondrej; Stehlikova, Veronika; Urban, Martin; Hudec, Rene; Sieger, Ladislav
2017-05-01
Wolter I optics are commonly used for imaging in the X-ray spectrum. This system uses two reflections; at higher energies it is not very efficient, but it has very good optical resolution. Another type of optics, the Lobster Eye, also uses two reflections to focus rays, in either Schmidt's or Angel's arrangement. It is also possible to use Lobster Eye optics as two independent one-dimensional optics. This paper describes the advantages of one-dimensional and two-dimensional Lobster Eye optics in Schmidt's arrangement and their data processing - determining the number of sources in a wide field of view. Two-dimensional (2D) optics are suitable for detecting the number of point X-ray sources and their magnitudes, but long exposure times are necessary because a 2D system has much lower transmissivity, due to the double reflection, compared to one-dimensional (1D) optics. Not only for this reason, two 1D optics are better suited for sources of lower magnitude. In this case, additional image processing is necessary to achieve a 2D image. This article describes an approach to this image reconstruction and the advantages of two 1D optics without significant losses of transmissivity.
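A minimal sketch of the two-1D-optics processing step is shown below: peaks found independently in the horizontal and vertical profiles are crossed into candidate 2D source positions. The Poisson `profile_x`/`profile_y` data and injected sources are placeholders, and a real pipeline must also reject false crossings when several sources share a row or column.

```python
# Cross the peaks of two orthogonal 1D profiles into candidate 2D positions.
import numpy as np
from scipy.signal import find_peaks

profile_x = np.random.poisson(5.0, 512).astype(float)   # placeholder 1D data
profile_y = np.random.poisson(5.0, 512).astype(float)
profile_x[[100, 300]] += 200                             # two injected sources
profile_y[[50, 400]] += 200

peaks_x, _ = find_peaks(profile_x, height=100)
peaks_y, _ = find_peaks(profile_y, height=100)
candidates = [(x, y) for x in peaks_x for y in peaks_y]
# 'candidates' lists every (x, y) crossing; the magnitudes measured in the two
# profiles can then be compared to keep only consistent source positions.
```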
He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian
2015-02-01
Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images have important significance for image reading and diagnosis. As a part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not involve three-dimensional post-processing operations on the series images. This paper analyzed the technical features of three-dimensional post-processing operations on the volume data, and then designed and implemented a web service system for three-dimensional post-processing operations of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. As a result, it was shown that the proposed system could support three-dimensional post-processing services of medical images for multiple clients at the same moment, which met the demand of accessing three-dimensional post-processing operations on the volume data on the web.
Three-dimensional imaging of the craniofacial complex.
Nguyen, Can X.; Nissanov, Jonathan; Öztürk, Cengizhan; Nuveen, Michiel J.; Tuncay, Orhan C.
2000-02-01
Orthodontic treatment requires the rearrangement of craniofacial complex elements in three planes of space, but oddly the diagnosis is done with two-dimensional images. Here we report on a three-dimensional (3D) imaging system that employs the stereoimaging method of structured light to capture the facial image. The images can be subsequently integrated with 3D cephalometric tracings derived from lateral and PA films (www.clinorthodres.com/cor-c-070). The accuracy of the reconstruction obtained with this inexpensive system is about 400 µm.
Recognition of Equations Using a Two-Dimensional Stochastic Context-Free Grammar
NASA Astrophysics Data System (ADS)
Chou, Philip A.
1989-11-01
We propose using two-dimensional stochastic context-free grammars for image recognition, in a manner analogous to using hidden Markov models for speech recognition. The value of the approach is demonstrated in a system that recognizes printed, noisy equations. The system uses a two-dimensional probabilistic version of the Cocke-Younger-Kasami parsing algorithm to find the most likely parse of the observed image, and then traverses the corresponding parse tree in accordance with translation formats associated with each production rule, to produce eqn | troff commands for the imaged equation. In addition, it uses two-dimensional versions of the Inside/Outside and Baum re-estimation algorithms for learning the parameters of the grammar from a training set of examples. Parsing the image of a simple noisy equation currently takes about one second of cpu time on an Alliant FX/80.
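For reference, the sketch below is the ordinary one-dimensional probabilistic CYK (Viterbi) parser that the two-dimensional version generalizes; the toy grammar is illustrative and is not the equation grammar used in the recognizer.

```python
# Probabilistic CYK (Viterbi) parsing of a toy grammar in Chomsky normal form.
import math

# Binary rules (lhs, right1, right2) -> prob, lexical rules (lhs, word) -> prob
binary = {('S', 'A', 'B'): 0.9, ('S', 'B', 'A'): 0.1}
lexical = {('A', 'a'): 1.0, ('B', 'b'): 1.0}

def cyk(words):
    n = len(words)
    # best[i][j] maps a nonterminal to the log-prob of its best parse of words[i:j]
    best = [[dict() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for (lhs, word), p in lexical.items():
            if word == w:
                best[i][i + 1][lhs] = math.log(p)
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lhs, r1, r2), p in binary.items():
                    if r1 in best[i][k] and r2 in best[k][j]:
                        score = math.log(p) + best[i][k][r1] + best[k][j][r2]
                        if score > best[i][j].get(lhs, -math.inf):
                            best[i][j][lhs] = score
    return best[0][n].get('S')

print(math.exp(cyk(['a', 'b'])))   # 0.9: probability of the most likely parse
```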
A novel method to acquire 3D data from serial 2D images of a dental cast
NASA Astrophysics Data System (ADS)
Yi, Yaxing; Li, Zhongke; Chen, Qi; Shao, Jun; Li, Xinshe; Liu, Zhiqin
2007-05-01
This paper introduced a newly developed method to acquire three-dimensional data from serial two-dimensional images of a dental cast. The system consists of a computer and a data acquisition device. The data acquisition device is used to take serial pictures of a dental cast; an artificial neural network translates the two-dimensional pictures into three-dimensional data; then a three-dimensional image can be reconstructed by the computer. Three-dimensional data acquisition of dental casts is the foundation of computer-aided diagnosis and treatment planning in orthodontics.
3D Imaging with Structured Illumination for Advanced Security Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.
2015-09-01
Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and a three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.
NASA Astrophysics Data System (ADS)
Petrochenko, Andrey; Konyakhin, Igor
2017-06-01
With the development of robotics, a variety of systems for three-dimensional reconstruction and mapping from image sets received from optical sensors have become increasingly popular. The main objective of technical and robot vision is the detection, tracking and classification of objects in the space in which these systems and robots operate [15,16,18]. Two-dimensional images sometimes do not contain sufficient information to address these problems: the construction of a map of the surrounding area for a route; object identification, tracking of relative position and movement; and selection of objects and their attributes to complement the knowledge base. Three-dimensional reconstruction of the surrounding space allows information to be obtained on the relative positions of objects, their shape and their surface texture. Systems that learn on the basis of three-dimensional reconstruction can compare two-dimensional images with the three-dimensional model, which allows volumetric objects to be recognized in flat images. The problem of the relative orientation of industrial robots able to build three-dimensional scenes of monitored surfaces is becoming increasingly relevant.
Two-dimensional vacuum ultraviolet images in different MHD events on the EAST tokamak
NASA Astrophysics Data System (ADS)
Zhijun, WANG; Xiang, GAO; Tingfeng, MING; Yumin, WANG; Fan, ZHOU; Feifei, LONG; Qing, ZHUANG; EAST Team
2018-02-01
A high-speed vacuum ultraviolet (VUV) imaging telescope system has been developed to measure the edge plasma emission (including the pedestal region) in the Experimental Advanced Superconducting Tokamak (EAST). The key optics of the high-speed VUV imaging system consists of three parts: an inverse Schwarzschild-type telescope, a micro-channel plate (MCP) and a visible imaging high-speed camera. The VUV imaging system has been operated routinely in the 2016 EAST experiment campaign. The dynamics of the two-dimensional (2D) images of magnetohydrodynamic (MHD) instabilities, such as edge localized modes (ELMs), tearing-like modes and disruptions, have been observed using this system. The related VUV images are presented in this paper, and it indicates the VUV imaging system is a potential tool which can be applied successfully in various plasma conditions.
NASA Astrophysics Data System (ADS)
Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.
1995-04-01
We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
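The reconstruction step can be sketched as follows: each pixel of a tracked 2D frame is mapped through the 4 x 4 pose reported by the position sensor and dropped into the nearest voxel. The frame, pose, voxel size and volume dimensions are placeholders, and real systems also compound overlapping frames and interpolate gaps.

```python
# Place a tracked 2D ultrasound frame into a 3D reconstruction volume.
import numpy as np

def insert_frame(volume, frame, pose, voxel_mm=0.5):
    """volume: 3D array; frame: 2D image; pose: 4x4 image-plane-to-world (mm)."""
    ny, nx = frame.shape
    u, v = np.meshgrid(np.arange(nx), np.arange(ny))
    # Homogeneous in-plane pixel coordinates (image z = 0).
    pix = np.stack([u.ravel(), v.ravel(), np.zeros(u.size), np.ones(u.size)])
    world = pose @ pix                                   # 4 x N world coords
    idx = np.round(world[:3] / voxel_mm).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    # Nearest-voxel placement; a real system compounds overlapping frames.
    volume[idx[0, inside], idx[1, inside], idx[2, inside]] = frame.ravel()[inside]
    return volume

volume = np.zeros((128, 128, 128))
frame = np.random.rand(100, 120)                     # placeholder PD frame
pose = np.eye(4); pose[:3, 3] = [10.0, 10.0, 30.0]   # placeholder sensor pose
volume = insert_frame(volume, frame, pose)
```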
Multimodal imaging system for dental caries detection
NASA Astrophysics Data System (ADS)
Liang, Rongguang; Wong, Victor; Marcus, Michael; Burns, Peter; McLaughlin, Paul
2007-02-01
Dental caries is a disease in which minerals of the tooth are dissolved by surrounding bacterial plaques. A caries process present for some time may result in a caries lesion. However, if it is detected early enough, the dentist and dental professionals can implement measures to reverse and control caries. Several optical, nonionizing methods have been investigated and used to detect dental caries in its early stages. However, there is no single method that can detect the caries process with both high sensitivity and high specificity. In this paper, we present a multimodal imaging system that combines visible reflectance, fluorescence, and Optical Coherence Tomography (OCT) imaging. This imaging system is designed to obtain one or more two-dimensional images of the tooth (reflectance and fluorescence images) and a three-dimensional OCT image providing depth and size information of the caries. The combination of two- and three-dimensional images of the tooth has the potential for highly sensitive and specific detection of dental caries.
NASA Astrophysics Data System (ADS)
Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab
2017-01-01
Two types of scores extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprint for personal recognition systems are merged, introducing a local image descriptor for 2-D palmprint-based recognition systems, named bank of binarized statistical image features (B-BSIF). The main idea of B-BSIF is that the extracted histograms from the binarized statistical image features (BSIF) code images (the results of applying the different BSIF descriptor size with the length 12) are concatenated into one to produce a large feature vector. 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied for reconstructing illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from SQI images, Gabor wavelets are defined and used. Indeed, the dimensionality reduction methods have shown their ability in biometrics systems. Given this, a principal component analysis (PCA)+linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. Then, a comparison was made between the proposed algorithm and other existing methods in the literature. Results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of B-BSIF descriptor with the score of the SQI+Gabor wavelets+PCA+LDA method, yielding an equal error rate of 0.00% and a recognition rate of rank-1=100.00%.
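A rough sketch of the B-BSIF idea is given below: a bank of filters produces a binary code image, the code histograms for several filter sizes are concatenated into one long vector, and two palmprints are compared by a distance between their vectors. Random filters stand in for the learned BSIF (ICA) filters, and plain cosine distance stands in for the cosine Mahalanobis matching applied after PCA+LDA, so this is illustrative only.

```python
# BSIF-like binary coding, histogram concatenation, and a simple match score.
import numpy as np
from scipy.signal import fftconvolve

def bsif_like_histogram(image, filter_size, n_bits=12, seed=0):
    rng = np.random.default_rng(seed)
    filters = rng.standard_normal((n_bits, filter_size, filter_size))
    code = np.zeros(image.shape, dtype=np.int32)
    for b, f in enumerate(filters):
        response = fftconvolve(image, f, mode='same')
        code = code | ((response > 0).astype(np.int32) << b)  # one bit per filter
    hist, _ = np.histogram(code, bins=2**n_bits, range=(0, 2**n_bits))
    return hist / hist.sum()

def b_bsif(image, sizes=(3, 5, 7, 9, 11)):
    # Concatenate the histograms from several descriptor sizes into one vector.
    return np.concatenate([bsif_like_histogram(image, s) for s in sizes])

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

palm_a = np.random.rand(128, 128)        # placeholder 2-D palmprint images
palm_b = np.random.rand(128, 128)
score = cosine_distance(b_bsif(palm_a), b_bsif(palm_b))
```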
Study of optical design of three-dimensional digital ophthalmoscopes.
Fang, Yi-Chin; Yen, Chih-Ta; Chu, Chin-Hsien
2015-10-01
This study primarily involves using optical zoom structures to design a three-dimensional (3D) human-eye optical sensory system with infrared and visible light. According to experimental data on two-dimensional (2D) and 3D images, human-eye recognition of 3D images is substantially higher (approximately 13.182%) than that of 2D images. Thus, 3D images are more effective than 2D images when they are used at work or in high-recognition devices. In the optical system design, infrared and visible light wavebands were incorporated as light sources to perform simulations. The results can be used to facilitate the design of optical systems suitable for 3D digital ophthalmoscopes.
OPTICAL PROCESSING OF INFORMATION: Multistage optoelectronic two-dimensional image switches
NASA Astrophysics Data System (ADS)
Fedorov, V. B.
1994-06-01
The implementation principles and the feasibility of construction of high-throughput multistage optoelectronic switches, capable of transmitting data in the form of two-dimensional images along interconnected pairs of optical channels, are considered. Different ways of realising compact switches are proposed. They are based on the use of polarisation-sensitive elements, arrays of modulators of the plane of polarisation of light, arrays of objectives, and free-space optics. Optical systems of such switches can theoretically ensure that the resolution and optical losses in two-dimensional image transmission are limited only by diffraction. Estimates are obtained of the main maximum-performance parameters of the proposed optoelectronic image switches.
Three-dimensional imaging technology offers promise in medicine.
Karako, Kenji; Wu, Qiong; Gao, Jianjun
2014-04-01
Medical imaging plays an increasingly important role in the diagnosis and treatment of disease. Currently, medical equipment mainly has two-dimensional (2D) imaging systems. Although this conventional imaging largely satisfies clinical requirements, it cannot depict pathologic changes in 3 dimensions. The development of three-dimensional (3D) imaging technology has encouraged advances in medical imaging. Three-dimensional imaging technology offers doctors much more information on a pathology than 2D imaging, thus significantly improving diagnostic capability and the quality of treatment. Moreover, the combination of 3D imaging with augmented reality significantly improves surgical navigation process. The advantages of 3D imaging technology have made it an important component of technological progress in the field of medical imaging.
Kinoshita, Hidefumi; Nakagawa, Ken; Usui, Yukio; Iwamura, Masatsugu; Ito, Akihiro; Miyajima, Akira; Hoshi, Akio; Arai, Yoichi; Baba, Shiro; Matsuda, Tadashi
2015-08-01
Three-dimensional (3D) imaging systems have been introduced worldwide for surgical instrumentation. A difficulty of laparoscopic surgery involves converting two-dimensional (2D) images into 3D images and depth perception rearrangement. 3D imaging may remove the need for depth perception rearrangement and therefore have clinical benefits. We conducted a multicenter, open-label, randomized trial to compare the surgical outcome of 3D-high-definition (HD) resolution and 2D-HD imaging in laparoscopic radical prostatectomy (LRP), in order to determine whether an LRP under HD resolution 3D imaging is superior to that under HD resolution 2D imaging in perioperative outcome, feasibility, and fatigue. One-hundred twenty-two patients were randomly assigned to a 2D or 3D group. The primary outcome was time to perform vesicourethral anastomosis (VUA), which is technically demanding and may include a number of technical difficulties considered in laparoscopic surgeries. VUA time was not significantly shorter in the 3D group (26.7 min, mean) compared with the 2D group (30.1 min, mean) (p = 0.11, Student's t test). However, experienced surgeons and 3D-HD imaging were independent predictors for shorter VUA times (p = 0.000, p = 0.014, multivariate logistic regression analysis). Total pneumoperitoneum time was not different. No conversion case from 3D to 2D or LRP to open RP was observed. Fatigue was evaluated by a simulation sickness questionnaire and critical flicker frequency. Results were not different between the two groups. Subjective feasibility and satisfaction scores were significantly higher in the 3D group. Using a 3D imaging system in LRP may have only limited advantages in decreasing operation times over 2D imaging systems. However, the 3D system increased surgical feasibility and decreased surgeons' effort levels without inducing significant fatigue.
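The primary-outcome comparison can be sketched as an unpaired Student's t test on the VUA times of the two arms, as below; the group sizes, means and spreads are simulated placeholders, not the trial data.

```python
# Unpaired Student's t test on simulated vesicourethral anastomosis times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
vua_2d = rng.normal(loc=30.1, scale=10.0, size=61)   # minutes, hypothetical
vua_3d = rng.normal(loc=26.7, scale=10.0, size=61)

t_stat, p_value = stats.ttest_ind(vua_3d, vua_2d)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```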
Li, Changqing; Zhao, Hongzhi; Anderson, Bonnie; Jiang, Huabei
2006-03-01
We describe a compact diffuse optical tomography system specifically designed for breast imaging. The system consists of 64 silicon photodiode detectors, 64 excitation points, and 10 diode lasers in the near-infrared region, allowing multispectral, three-dimensional optical imaging of breast tissue. We also detail the system performance and optimization through a calibration procedure. The system is evaluated using tissue-like phantom experiments and an in vivo clinic experiment. Quantitative two-dimensional (2D) and three-dimensional (3D) images of absorption and reduced scattering coefficients are obtained from these experiments. The ten-wavelength spectra of the extracted reduced scattering coefficient enable quantitative morphological images to be reconstructed with this system. From the in vivo clinic experiment, functional images including deoxyhemoglobin, oxyhemoglobin, and water concentration are recovered and tumors are detected with correct size and position compared with the mammography.
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected onto respective liquid crystal light valves. The images on the liquid crystal light valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
Lateral resolution testing of a novel developed confocal microscopic imaging system
NASA Astrophysics Data System (ADS)
Zhang, Xin; Zhang, Yunhai; Chang, Jian; Huang, Wei; Xue, Xiaojun; Xiao, Yun
2015-10-01
Laser scanning confocal microscopes have been widely used in biology, medicine and materials science owing to their advantages of high resolution and tomographic imaging. Based on a set of confirmatory experiments and system design, a novel confocal microscopic imaging system has been developed. The system is composed of a conventional fluorescence microscope and a confocal scanning unit. In the scanning unit, a laser beam coupling module provides four different wavelengths (405 nm, 488 nm, 561 nm and 638 nm) which can excite a variety of dyes. The system works in a spot-to-spot scanning mode with a two-dimensional galvanometer. A 50-micron pinhole is used to guarantee that stray light is blocked and only the fluorescence signal from the focal point is received. A three-channel spectral splitter is used to perform fluorescence imaging at three different working wavelengths simultaneously. A rat kidney tissue slice was imaged using the developed confocal microscopic imaging system. Nuclei labeled by DAPI and the kidney spherule curved pipe labeled by Alexa Fluor 488 could be imaged clearly and separately, realizing the distinction between the different components of mouse kidney tissue. Three-dimensional tomographic imaging of mouse kidney tissue was reconstructed from several two-dimensional images obtained at different depths. Finally, the lateral resolution of the confocal microscopic imaging system was tested quantitatively. The experimental result shows that the system can achieve a lateral resolution better than 230 nm.
Microwave Imaging in Large Helical Device
NASA Astrophysics Data System (ADS)
Yoshinaga, T.; Nagayama, Y.; Tsuchiya, H.; Kuwahara, D.; Tsuji-Iio, S.; Akaki, K.; Mase, A.; Kogi, Y.; Yamaguchi, S.; Shi, Z. B.; Hojo, H.
2011-02-01
Microwave imaging reflectometry (MIR) system and electron cyclotron emission imaging (ECEI) system are under development for the simultaneous reconstruction of the electron density and temperature fluctuation structures in the Large Helical Device (LHD). The MIR observes three-dimensional structure of disturbed cutoff surfaces by using the two-dimensionally distributed horn-antenna mixer array (HMA) of 5 × 7 channels in combination with the simultaneous projection of microwaves with four different frequency components (60.410, 61.808, 63.008 and 64.610 GHz). The ECEI is designed to observe two-dimensional structure of electron temperature by detecting second-harmonic ECE at 97-107 GHz with the one-dimensional HMA (7 channels) in the common optics with MIR system. Both the MIR and the ECEI are realized by the HMA and the band-pass filter (BPF) arrays, which are fabricated by micro-strip-line technique at low-cost.
A novel and compact spectral imaging system based on two curved prisms
NASA Astrophysics Data System (ADS)
Nie, Yunfeng; Bin, Xiangli; Zhou, Jinsong; Li, Yang
2013-09-01
As a novel detection approach which simultaneously acquires two-dimensional visual picture and one-dimensional spectral information, spectral imaging offers promising applications on biomedical imaging, conservation and identification of artworks, surveillance of food safety, and so forth. A novel moderate-resolution spectral imaging system consisting of merely two optical elements is illustrated in this paper. It can realize the function of a relay imaging system as well as a 10nm spectral resolution spectroscopy. Compared to conventional prismatic imaging spectrometers, this design is compact and concise with only two special curved prisms by utilizing two reflective surfaces. In contrast to spectral imagers based on diffractive grating, the usage of compound-prism possesses characteristics of higher energy utilization and wider free spectral range. The seidel aberration theory and dispersive principle of this special prism are analyzed at first. According to the results, the optical system of this design is simulated, and the performance evaluation including spot diagram, MTF and distortion, is presented. In the end, considering the difficulty and particularity of manufacture and alignment, an available method for fabrication and measurement is proposed.
A new Schwarzschild optical system for two-dimensional EUV imaging of MRX plasmas
NASA Astrophysics Data System (ADS)
Bolgert, P.; Bitter, M.; Efthimion, P.; Hill, K. W.; Ji, H.; Myers, C. E.; Yamada, M.; Yoo, J.; Zweben, S.
2013-10-01
This poster describes the design and construction of a new Schwarzschild optical system for two-dimensional EUV imaging of plasmas. This optical system consists of two concentric spherical mirrors with radii R1 and R2, and is designed to operate with certain angles of incidence θ1 and θ2. The special feature of this system resides in the fact that all the rays passing through the system are tangential to a third concentric circle; it assures that the condition for Bragg reflection is simultaneously fulfilled at each point on the two reflecting surfaces if the spherical mirrors are replaced by spherical multi-layer structures. A prototype of this imaging system will be implemented in the Magnetic Reconnection Experiment (MRX) at PPPL to obtain two-dimensional EUV images of the plasma in the energy range from 18 to 62 eV; the relative intensity of the emitted radiation in this energy range was determined from survey measurements with a photodiode. It is thought that the radiation at these energies is due to Bremsstrahlung and line emission caused by suprathermal electrons. This research is supported by DoE Contract Number DE-AC02-09CH11466 and by the Center for Magnetic Self-Organization (CMSO).
A cometary ion mass spectrometer
NASA Technical Reports Server (NTRS)
Shelley, E. G.; Simpson, D. A.
1984-01-01
The development of flight suitable analyzer units for that part of the GIOTTO Ion Mass Spectrometer (IMS) experiment designated the High Energy Range Spectrometer (HERS) is discussed. Topics covered include: design of the total ion-optical system for the HERS analyzer; the preparation of the design of analyzing magnet; the evaluation of microchannel plate detectors and associated two-dimensional anode arrays; and the fabrication and evaluation of two flight-suitable units of the complete ion-optical analyzer system including two-dimensional imaging detectors and associated image encoding electronics.
The AIS-5000 parallel processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, L.A.; Wilson, S.S.
1988-05-01
The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared with two-dimensional mesh-connected systems. The design of the processing elements and their interconnections as well as the software used to program the system allow a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.
Stereoscopic medical imaging collaboration system
NASA Astrophysics Data System (ADS)
Okuyama, Fumio; Hirano, Takenori; Nakabayasi, Yuusuke; Minoura, Hirohito; Tsuruoka, Shinji
2007-02-01
The computerization of clinical records and the adoption of multimedia have improved medical services in medical facilities. It is very important for patients to receive comprehensible informed consent. Therefore, the doctor should plainly explain the purpose and content of diagnoses and treatments to the patient. We propose and design a Telemedicine Imaging Collaboration System which presents three-dimensional medical images, such as X-ray CT and MRI, as stereoscopic images by using a virtual common information space and by operating the images from a remote location. This system is composed of two personal computers, two 15-inch stereoscopic parallax-barrier-type LCD displays (LL-151D, Sharp), one 1 Gbps router and 1000base LAN cables. The software is composed of a DICOM-format data transfer program, an image operation program, a communication program between the two personal computers and a real-time rendering program. Two identical images of 512×768 pixels are displayed on the two stereoscopic LCD displays, and both images can be expanded and reduced by mouse operation. This system can offer a comprehensible three-dimensional image of the diseased part. Therefore, the doctor and the patient can easily understand it, depending on their needs.
Song, Hajun; Hwang, Sejin; An, Hongsung; Song, Ho-Jin; Song, Jong-In
2017-08-21
We propose and demonstrate a continuous-wave vector THz imaging system utilizing a photonic generation of two-tone THz signals and self-mixing detection. The proposed system measures amplitude and phase information simultaneously without the local oscillator reference or phase rotation scheme that is required for heterodyne or homodyne detection. In addition, 2π phase ambiguity that occurs when the sample is thicker than the wavelength of THz radiation can be avoided. In this work, THz signal having two frequency components was generated with a uni-traveling-carrier photodiode and electro-optic modulator on the emitter side and detected with a Schottky barrier diode detector used as a self-mixer on the receiver side. The proposed THz vector imaging system exhibited a 50-dB signal to noise ratio and 0.012-rad phase fluctuation with 100-μs integration time at 325-GHz. With the system, we demonstrate two-dimensional THz phase contrast imaging. Considering the recent use of two-dimensional arrays of Schottky barrier diodes as a THz image sensor, the proposed system is greatly advantageous for realizing a real-time THz vector imaging system due to its simple receiver configuration.
Liu, Jonathan T. C.; Mandella, Michael J.; Ra, Hyejun; Wong, Larry K.; Solgaard, Olav; Kino, Gordon S.; Piyawattanametha, Wibool; Contag, Christopher H.; Wang, Thomas D.
2007-01-01
The first, to our knowledge, miniature dual-axes confocal microscope has been developed, with an outer diameter of 10 mm, for subsurface imaging of biological tissues with 5–7 μm resolution. Depth-resolved en face images are obtained at 30 frames per second, with a field of view of 800 × 100 μm, by employing a two-dimensional scanning microelectromechanical systems mirror. Reflectance and fluorescence images are obtained with a laser source at 785 nm, demonstrating the ability to perform real-time optical biopsy. PMID:17215937
Experimental image alignment system
NASA Technical Reports Server (NTRS)
Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.
1980-01-01
A microcomputer-based instrument for image alignment with respect to a reference image is described which uses the DEFT sensor (Direct Electronic Fourier Transform) for image sensing and preprocessing. The instrument alignment algorithm which uses the two-dimensional Fourier transform as input is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms which use image intensity data as input and is suitable for a microcomputer-based instrument since the two-dimensional Fourier transform is provided by the DEFT sensor.
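A software analogue of the alignment step is phase correlation, which estimates the translational offset between the test and reference images directly from their two-dimensional Fourier transforms (supplied in hardware by the DEFT sensor). The sketch below uses NumPy FFTs and synthetic images; it illustrates the principle rather than the instrument's steering algorithm.

```python
# Estimate the (row, col) shift between two images by phase correlation.
import numpy as np

def phase_correlation_shift(reference, test):
    R = np.fft.fft2(reference)
    T = np.fft.fft2(test)
    cross_power = T * np.conj(R)
    cross_power /= np.abs(cross_power) + 1e-12       # keep only the phase
    corr = np.real(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

reference = np.random.rand(256, 256)
test = np.roll(reference, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(reference, test))      # [5, -3]
```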
Xin, Zhaowei; Wei, Dong; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2018-02-19
Light-field imaging is a crucial and straightforward way of measuring and analyzing surrounding light worlds. In this paper, a dual-polarized light-field imaging micro-system based on a twisted nematic liquid-crystal microlens array (TN-LCMLA) for direct three-dimensional (3D) observation is fabricated and demonstrated. The prototyped camera has been constructed by integrating a TN-LCMLA with a common CMOS sensor array. By switching the working state of the TN-LCMLA, two orthogonally polarized light-field images can be remapped through the functioned imaging sensors. The imaging micro-system in conjunction with the electric-optical microstructure can be used to perform polarization and light-field imaging simultaneously. Compared with conventional plenoptic cameras using a liquid-crystal microlens array, polarization-independent light-field images with high image quality can be obtained in any selected polarization state. We experimentally demonstrate characteristics including a relatively wide operation range in the manipulation of incident beams and multiple imaging modes, such as conventional two-dimensional imaging, light-field imaging, and polarization imaging. Considering the obvious features of the TN-LCMLA, such as very low power consumption, the multiple imaging modes mentioned, and simple, low-cost manufacturing, the imaging micro-system integrated with this kind of electrically driven liquid-crystal microstructure presents the potential capability of directly observing a 3D object in typical scattering media.
NASA Astrophysics Data System (ADS)
Xia, Wenze; Ma, Yayun; Han, Shaokun; Wang, Yulin; Liu, Fei; Zhai, Yu
2018-06-01
One of the most important goals of research on three-dimensional nonscanning laser imaging systems is the improvement of the illumination system. In this paper, a new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array is proposed. This array is obtained using a fiber array connected to a laser array with each unit laser having independent control circuits. This system uses a point-to-point imaging process, which is realized using the exact corresponding optical relationship between the point-light-source array and a linear-mode avalanche photodiode array detector. The complete working process of this system is explained in detail, and the mathematical model of this system containing four equations is established. A simulated contrast experiment and two real contrast experiments which use the simplified setup without a laser array are performed. The final results demonstrate that unlike a conventional three-dimensional nonscanning laser imaging system, the proposed system meets all the requirements of an eligible illumination system. Finally, the imaging performance of this system is analyzed under defocusing situations, and analytical results show that the system has good defocusing robustness and can be easily adjusted in real applications.
Computer-assisted surgical planning and automation of laser delivery systems
NASA Astrophysics Data System (ADS)
Zamorano, Lucia J.; Dujovny, Manuel; Dong, Ada; Kadi, A. Majeed
1991-05-01
This paper describes a 'real time' surgical treatment planning interactive workstation, utilizing multimodality imaging (computer tomography, magnetic resonance imaging, digital angiography) that has been developed to provide the neurosurgeon with two-dimensional multiplanar and three-dimensional 'display' of a patient's lesion.
Two dimensional recursive digital filters for near real time image processing
NASA Technical Reports Server (NTRS)
Olson, D.; Sherrod, E.
1980-01-01
A program was designed to demonstrate the feasibility of using two dimensional recursive digital filters for subjective image processing applications that require rapid turnaround. The concept of using a dedicated minicomputer as the processor for this application was demonstrated. The minicomputer used was the HP1000 series E with an RTE 2 disc operating system and 32K words of memory. A Grinnel 256 x 512 x 8 bit display system was used to display the images. Sample images were provided by NASA Goddard on an 800 BPI, 9 track tape. Four 512 x 512 images representing 4 spectral regions of the same scene were provided. These images were filtered with enhancement filters developed during this effort.
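As a generic illustration of the filter class, the sketch below applies a separable first-order recursive (IIR) low-pass along the rows and then the columns of an image; it is not one of the enhancement filters developed in the effort, and the 512 x 512 input is a placeholder.

```python
# Separable two-dimensional recursive (IIR) low-pass filter.
import numpy as np
from scipy.signal import lfilter

def recursive_lowpass_2d(image, alpha=0.25):
    """alpha in (0, 1]; smaller values give stronger smoothing."""
    # y[n] = alpha * x[n] + (1 - alpha) * y[n-1]
    b, a = [alpha], [1.0, -(1.0 - alpha)]
    rows = lfilter(b, a, image, axis=1)     # recurse along each row
    return lfilter(b, a, rows, axis=0)      # then along each column

image = np.random.rand(512, 512)            # e.g. one 512 x 512 spectral band
smoothed = recursive_lowpass_2d(image)
```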
Images multiplexing by code division technique
NASA Astrophysics Data System (ADS)
Kuo, Chung J.; Rigas, Harriett
Spread spectrum systems (SSS) or code division multiple access systems (CDMAS) have been studied for a long time, but most of the attention has focused on transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with the corresponding m-sequences to obtain encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the two-dimensional image is treated as a long one-dimensional vector and the m-sequence is employed to obtain the results. Secondly, a two-dimensional quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256 x 256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.
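A minimal sketch of the multiplexing scheme is shown below: each pixel value is spread by that image's +/-1 m-sequence, the spread streams are summed, and each image is recovered by correlating with its own code. The LFSR taps, code length, cyclic-shift offset and 8 x 8 test images are illustrative choices, and element-wise spreading is used here in place of the convolution described above.

```python
# Code-division multiplexing of two images with +/-1 m-sequences.
import numpy as np

def m_sequence(taps=(7, 6), length=127):
    state = [1] * max(taps)                     # 7-bit register, non-zero seed
    seq = []
    for _ in range(length):
        seq.append(state[-1])
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]
    return 2 * np.array(seq) - 1                # map {0, 1} -> {-1, +1}

code1 = m_sequence()
code2 = np.roll(code1, 37)                      # a different cyclic shift
L = code1.size

img1 = np.random.rand(8, 8).ravel()
img2 = np.random.rand(8, 8).ravel()

# Spread and superimpose: one length-L chip block per pixel.
superimposed = np.outer(img1, code1) + np.outer(img2, code2)

# Recover each image by correlating every chip block with its own code.
rec1 = superimposed @ code1 / L                 # ~ img1
rec2 = superimposed @ code2 / L                 # ~ img2
print(np.max(np.abs(rec1 - img1)))              # cross-talk is about 1/L
```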
An improved three-dimensional non-scanning laser imaging system based on digital micromirror device
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Lei, Jieyu; Zhai, Yu; Timofeev, Alexander N.
2018-01-01
Nowadays, there are two main methods for realizing three-dimensional non-scanning laser imaging detection: detection based on an APD and detection based on a streak tube. However, the APD-based method has some disadvantages, such as a small number of pixels, a large pixel interval and complex supporting circuits. The streak-tube-based method has disadvantages such as large volume, poor reliability and high cost. In order to resolve these issues, this paper proposes an improved three-dimensional non-scanning laser imaging system based on a Digital Micromirror Device. In this imaging system, accurate control of the laser beams and a compact imaging structure are realized by several quarter-wave plates and a polarizing beam splitter. Remapping fiber optics is used to sample the image plane of the receiving optical lens and transform the image into a line light source, which realizes the non-scanning imaging principle. The Digital Micromirror Device is used to convert laser pulses from the temporal domain to the spatial domain. A CCD with high sensitivity is used to detect the final reflected laser pulses. In this paper, we also present an algorithm used to simulate this improved laser imaging system. Finally, a simulated imaging experiment demonstrates that this improved laser imaging system can realize three-dimensional non-scanning laser imaging detection.
NASA Technical Reports Server (NTRS)
Bathel, Brett F.; Danehy, Paul M.; Johansen, Craig T.; Ashcraft, Scott W.; Novak, Luke A.
2013-01-01
Numerical predictions of the Mars Science Laboratory reaction control system jets interacting with a Mach 10 hypersonic flow are compared to experimental nitric oxide planar laser-induced fluorescence data. The steady Reynolds Averaged Navier Stokes equations using the Baldwin-Barth one-equation turbulence model were solved using the OVERFLOW code. The experimental fluorescence data used for comparison consists of qualitative two-dimensional visualization images, qualitative reconstructed three-dimensional flow structures, and quantitative two-dimensional distributions of streamwise velocity. Through modeling of the fluorescence signal equation, computational flow images were produced and directly compared to the qualitative fluorescence data.
Walton, Katherine D; Kolterud, Asa
2014-09-04
Most morphogenetic processes in the fetal intestine have been inferred from thin sections of fixed tissues, providing snapshots of changes over developmental stages. Three-dimensional information from thin serial sections can be challenging to interpret because of the difficulty of reconstructing serial sections perfectly and maintaining proper orientation of the tissue over serial sections. Recent findings by Grosse et al., 2011 highlight the importance of three-dimensional information in understanding morphogenesis of the developing villi of the intestine(1). Three-dimensional reconstruction of singly labeled intestinal cells demonstrated that the majority of the intestinal epithelial cells contact both the apical and basal surfaces. Furthermore, three-dimensional reconstruction of the actin cytoskeleton at the apical surface of the epithelium demonstrated that the intestinal lumen is continuous and that secondary lumens are an artifact of sectioning. Those two points, along with the demonstration of interkinetic nuclear migration in the intestinal epithelium, defined the developing intestinal epithelium as a pseudostratified epithelium and not stratified as previously thought(1). The ability to observe the epithelium three-dimensionally was seminal to demonstrating this point and redefining epithelial morphogenesis in the fetal intestine. With the evolution of multi-photon imaging technology and three-dimensional reconstruction software, the ability to visualize intact, developing organs is rapidly improving. Two-photon excitation allows less damaging penetration deeper into tissues with high resolution. Two-photon imaging and 3D reconstruction of whole fetal mouse intestines in Walton et al., 2012 helped to define the pattern of villus outgrowth(2). Here we describe a whole organ culture system that allows ex vivo development of villi and extensions of that culture system to allow the intestines to be three-dimensionally imaged during their development.
Real time three dimensional sensing system
Gordon, S.J.
1996-12-31
The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.
Real time three dimensional sensing system
Gordon, Steven J.
1996-01-01
The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane.
NASA Astrophysics Data System (ADS)
Duling, Irl N.
2016-05-01
Terahertz energy, with its ability to penetrate clothing and non-conductive materials, has held much promise in the area of security scanning. Millimeter wave systems (300 GHz and below) have been widely deployed. These systems have used full two-dimensional surface imaging, and have resulted in privacy concerns. Pulsed terahertz imaging, can detect the presence of unwanted objects without the need for two-dimensional photographic imaging. With high-speed waveform acquisition it is possible to create handheld tools that can be used to locate anomalies under clothing or headgear looking exclusively at either single point waveforms or cross-sectional images which do not pose a privacy concern. Identification of the anomaly to classify it as a potential threat or a benign object is also possible.
Distributed Two-Dimensional Fourier Transforms on DSPs with an Application for Phase Retrieval
NASA Technical Reports Server (NTRS)
Smith, Jeffrey Scott
2006-01-01
Many applications of two-dimensional Fourier Transforms require fixed timing as defined by system specifications. One example is image-based wavefront sensing. The image-based approach has many benefits, yet it is a computationally intensive solution for adaptive optic correction, where optical adjustments are made in real-time to correct for external (atmospheric turbulence) and internal (stability) aberrations, which cause image degradation. For phase retrieval, a type of image-based wavefront sensing, numerous two-dimensional Fast Fourier Transforms (FFTs) are used. To meet the required real-time specifications, a distributed system is needed, and thus, the 2-D FFT necessitates an all-to-all communication among the computational nodes. The 1-D floating point FFT is very efficient on a digital signal processor (DSP). For this study, several architectures are presented and analyzed which address the all-to-all communication with DSPs. Emphasis of this research is on a 64-node cluster of Analog Devices TigerSharc TS-101 DSPs.
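The decomposition that forces the all-to-all communication can be sketched in a few lines: 1D FFTs along the rows, a transpose (the step that becomes the inter-node exchange when rows are distributed across DSPs), then 1D FFTs along the new rows. The NumPy version below is a single-node illustration of the data flow, not the TigerSharc implementation.

```python
# 2D FFT built from row-wise 1D FFTs and a transpose.
import numpy as np

def fft2_by_rows(x):
    step1 = np.fft.fft(x, axis=1)        # each node transforms its rows
    step2 = step1.T                      # all-to-all: redistribute columns
    step3 = np.fft.fft(step2, axis=1)    # transform the former columns
    return step3.T                       # restore the original orientation

image = np.random.rand(256, 256) + 0j
assert np.allclose(fft2_by_rows(image), np.fft.fft2(image))
```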
NASA Astrophysics Data System (ADS)
Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos
2016-11-01
We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with a dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model that corrects the CLSM measurement bias error caused by two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
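As a point of reference for the correlation-based measurement discussed above, the sketch below shows plain phase correlation for a single image pair (normalized cross-power spectrum, inverse FFT, peak search). The paper's ensemble averaging and dynamic optimal filter are not reproduced; this is only the underlying building block.

```python
# Minimal sketch of standard phase correlation for one image pair (the paper's
# ensemble phase correlation with a dynamic optimal filter builds on this idea).
import numpy as np

def phase_correlation(a, b, eps=1e-12):
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + eps                        # keep only the phase of the cross-power spectrum
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices so shifts larger than half the image size come out negative
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

a = np.random.rand(64, 64)
b = np.roll(a, (3, -5), axis=(0, 1))
print(phase_correlation(b, a))                  # expected (3, -5) for this synthetic pair
```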
NASA Astrophysics Data System (ADS)
Hirayama, Ryuji; Shiraki, Atsushi; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2017-07-01
We designed and developed a control circuit for a three-dimensional (3-D) light-emitting diode (LED) array to be used in volumetric displays exhibiting full-color dynamic 3-D images. The circuit was implemented on a field-programmable gate array; therefore, pulse-width modulation, which requires high-speed processing, could be operated in real time. We experimentally evaluated the developed system by measuring the luminance of an LED with varying input and confirmed that the system works appropriately. In addition, we demonstrated that the volumetric display exhibits different full-color dynamic two-dimensional images in two orthogonal directions. Each of the exhibited images could be obtained only from the prescribed viewpoint. Such directional characteristics of the system are beneficial for applications, including digital signage, security systems, art, and amusement.
Arakawa, Takahiro; Sato, Toshiyuki; Iitani, Kenta; Toma, Koji; Mitsubayashi, Kohji
2017-04-18
Various volatile organic compounds can be found in human transpiration, breath and body odor. In this paper, a novel two-dimensional fluorometric imaging system, known as a "sniffer-cam", for ethanol vapor released from human breath and palm skin was constructed and validated. This imaging system measures ethanol vapor concentrations as intensities of fluorescence through an enzymatic reaction induced by alcohol dehydrogenase (ADH). The imaging system consisted of multiple ultraviolet light-emitting diode (UV-LED) excitation sheets, an ADH enzyme-immobilized mesh substrate and a highly sensitive CCD camera. This imaging system uses ADH for recognition of ethanol vapor. It measures ethanol vapor by measuring the fluorescence of nicotinamide adenine dinucleotide (NADH), which is produced by an enzymatic reaction on the mesh. This NADH fluorometric imaging system achieved two-dimensional real-time imaging of the ethanol vapor distribution (0.5-200 ppm). The system showed rapid and accurate responses and a visible measurement, which could lead to real-time analysis of metabolic function in the near future.
An advanced scanning method for space-borne hyper-spectral imaging system
NASA Astrophysics Data System (ADS)
Wang, Yue-ming; Lang, Jun-Wei; Wang, Jian-Yu; Jiang, Zi-Qing
2011-08-01
Space-borne hyper-spectral imagery is an important means for the studies and applications of earth science. High cost efficiency can be achieved through optimized system design. In this paper, an advanced scanning method is proposed, which helps to implement an imaging system with both high temporal and high spatial resolution. Revisit frequency and effective working time of space-borne hyper-spectral imagers can be greatly improved by adopting a two-axis scanning system if spatial resolution and radiometric accuracy are not harshly demanded. In order to avoid the quality degradation caused by image rotation, an idea of two-axis rotation has been presented based on the analysis and simulation of the two-dimensional scanning motion path and features. Further improvement of the imagers' detection ability under the conditions of small solar altitude angle and low surface reflectance can be realized by Ground Motion Compensation on the pitch axis. The structure and control performance are also described. An intelligent integration technology of two-dimensional scanning and image motion compensation is elaborated in this paper. With this technology, sun-synchronous hyper-spectral imagers are able to revisit hot spots quickly, acquiring hyper-spectral images with both high spatial and high temporal resolution, which enables rapid response to emergencies. The result has reference value for developing operational space-borne hyper-spectral imagers.
Compact microwave imaging system to measure spatial distribution of plasma density
NASA Astrophysics Data System (ADS)
Ito, H.; Oba, R.; Yugami, N.; Nishida, Y.
2004-10-01
We have developed an advanced microwave interferometric system operating in the K band (18-27 GHz) that uses a fan-shaped microwave beam and a heterodyne detection system for measuring the spatial distribution of the plasma density. In order to make a simple, low-cost, and compact microwave interferometer with better spatial resolution, a microwave scattering technique based on a microstrip antenna array is employed. Experimental results show that the imaging system with the microstrip antenna array can have finer spatial resolution than one with the diode antenna array and can reconstruct a good spatially resolved image of finite-size dielectric phantoms placed between the horn antenna and the microstrip antenna array. The precise two-dimensional electron density distribution of a cylindrical plasma produced by electron cyclotron resonance has been observed. As a result, the present imaging system is more suitable for a two- or three-dimensional display of objects or stationary plasmas, and it is possible to realize a compact microwave imaging system.
Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera
NASA Astrophysics Data System (ADS)
Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.
2004-01-01
We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.
Phase correction system for automatic focusing of synthetic aperture radar
Eichel, Paul H.; Ghiglia, Dennis C.; Jakowatz, Jr., Charles V.
1990-01-01
A phase gradient autofocus system for use in synthetic aperture imaging accurately compensates for arbitrary phase errors in each imaged frame by locating highlighted areas and determining the phase disturbance or image spread associated with each of these highlighted areas. An estimate of the image spread for each highlighted area is determined in a line, in the case of one-dimensional processing, or in a sector, in the case of two-dimensional processing. The phase error is determined using phase gradient processing. The phase error is then removed from the uncorrected image, and the process is performed iteratively to substantially eliminate phase errors which can degrade the image.
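For readers unfamiliar with the loop sketched in this abstract, the following is a deliberately simplified, single-iteration phase gradient autofocus sketch for a complex image (one range line per row). It follows the textbook formulation, omits the windowing step, and is not the patented processing chain.

```python
# Highly simplified sketch of one phase-gradient autofocus (PGA) iteration for a
# complex image g(range, azimuth); windowing is omitted for brevity.
import numpy as np

def pga_iteration(img):
    # 1. circularly shift the brightest scatterer of every range line to the center
    centered = np.array([np.roll(row, img.shape[1] // 2 - np.argmax(np.abs(row)))
                         for row in img])
    # 2. go to the azimuth (phase-history) domain
    G = np.fft.fftshift(np.fft.fft(centered, axis=1), axes=1)
    # 3. phase-gradient estimate, summed over all range lines
    num = np.sum(np.imag(np.conj(G[:, :-1]) * G[:, 1:]), axis=0)
    den = np.sum(np.abs(G[:, :-1]) ** 2, axis=0)
    grad = num / np.maximum(den, 1e-12)
    # 4. integrate the gradient to a phase-error estimate and remove it
    phi = np.concatenate(([0.0], np.cumsum(grad)))
    phi -= np.linspace(phi[0], phi[-1], phi.size)        # discard the linear (shift) component
    G_corr = G * np.exp(-1j * phi)[None, :]
    return np.fft.ifft(np.fft.ifftshift(G_corr, axes=1), axis=1)
```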
Kim, Min-Gab; Kim, Jin-Yong
2018-05-01
In this paper, we introduce a method to overcome the limitation of thickness measurement of a micro-patterned thin film. A spectroscopic imaging reflectometer system that consists of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of a two-dimensional thin-film thickness, prior to the analysis of spectral reflectance profiles from each pixel of the multispectral images, image restoration based on an iterative deconvolution algorithm was applied to compensate for image degradation caused by blurring.
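The abstract does not name the iterative deconvolution algorithm used for the restoration step, so the sketch below shows Richardson-Lucy purely as one common choice; the blur kernel `psf` of the imaging optics is an assumed input.

```python
# Sketch of one common iterative deconvolution (Richardson-Lucy); the entry does
# not name its algorithm, so this is only an illustrative choice. `psf` (the
# blur kernel of the imaging optics) is an assumed input.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    blurred = blurred.astype(float)
    estimate = np.full(blurred.shape, blurred.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```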
Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu
2014-04-23
How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and the vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two-dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.
2014-01-01
Background How it is possible to “faithfully” represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. Results The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway’s optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. Conclusions 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and the vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two-dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line. PMID:24755246
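As a small numerical illustration of the vanishing-point behaviour both records discuss, the pinhole-projection sketch below shows points receding along parallel 3-D lines projecting toward a single image point; the focal length and line directions are arbitrary values chosen for the example, not parameters from the paper.

```python
# Tiny pinhole-camera sketch illustrating the vanishing-point behaviour the two
# records discuss: points receding along parallel 3-D lines project toward a
# single image point (f is an assumed focal length, not a value from the paper).
import numpy as np

f = 1.0                                            # focal length of the pinhole model

def project(X):
    """Perspective projection of a 3-D point (x, y, z), z > 0, onto the image plane."""
    x, y, z = X
    return np.array([f * x / z, f * y / z])

direction = np.array([1.0, 0.5, 2.0])              # common direction of the parallel lines
offsets = [np.array([0, 0, 1.0]), np.array([3, -2, 1.0]), np.array([-1, 4, 1.0])]
for o in offsets:
    far_point = project(o + 1e6 * direction)       # point far along each line
    print(np.round(far_point, 4))                  # all approach f*(dx/dz, dy/dz) = (0.5, 0.25)
```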
Biwasaka, Hitoshi; Saigusa, Kiyoshi; Aoki, Yasuhiro
2005-03-01
In this study, the applicability of holography in the 3-dimensional recording of forensic objects such as skulls and mandibulae, and the accuracy of the reconstructed 3-D images, were examined. The virtual holographic image, which records the 3-dimensional data of the original object, is visually observed on the other side of the holographic plate, and reproduces the 3-dimensional shape of the object well. Another type of holographic image, the real image, is focused on a frosted glass screen, and cross-sectional images of the object can be observed. When measuring the distances between anatomical reference points using image-processing software, the average deviations in the holographic images as compared to the actual objects were less than 0.1 mm. Therefore, holography could be useful as a 3-dimensional recording method for forensic objects. Two superimposition systems using holographic images were examined. In the 2D-3D system, the transparent virtual holographic image of an object is directly superimposed onto the digitized photograph of the same object on the LCD monitor. On the other hand, in the video system, the holographic image captured by the CCD camera is superimposed onto the digitized photographic image using a personal computer. We found that the discrepancy between the outlines of the superimposed holographic and photographic dental images using the video system was smaller than that using the 2D-3D system. Holography seemed to perform comparably to the computer graphic system; however, a fusion with the digital technique would expand the utility of holography in superimposition.
Laser electro-optic system for rapid three-dimensional /3-D/ topographic mapping of surfaces
NASA Technical Reports Server (NTRS)
Altschuler, M. D.; Altschuler, B. R.; Taboada, J.
1981-01-01
It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.
Portable Fluorescence Imaging System for Hypersonic Flow Facilities
NASA Technical Reports Server (NTRS)
Wilkes, J. A.; Alderfer, D. W.; Jones, S. B.; Danehy, P. M.
2003-01-01
A portable fluorescence imaging system has been developed for use in NASA Langley's hypersonic wind tunnels. The system has been applied to a small-scale free jet flow. Two-dimensional images were taken of the flow out of a nozzle into a low-pressure test section using the portable planar laser-induced fluorescence system. Images were taken from the center of the jet at various test section pressures, showing the formation of a barrel shock at low pressures, transitioning to a turbulent jet at high pressures. A spanwise scan through the jet at constant pressure reveals the three-dimensional structure of the flow. Future capabilities of the system for making measurements in large-scale hypersonic wind tunnel facilities are discussed.
SERODS optical data storage with parallel signal transfer
Vo-Dinh, Tuan
2003-09-02
Surface-enhanced Raman optical data storage (SERODS) systems having increased reading and writing speeds, that is, increased data transfer rates, are disclosed. In the various SERODS read and write systems, the surface-enhanced Raman scattering (SERS) data is written and read using a two-dimensional process called parallel signal transfer (PST). The various embodiments utilize laser light beam excitation of the SERODS medium, optical filtering, beam imaging, and two-dimensional light detection. Two- and three-dimensional SERODS media are utilized. The SERODS write systems employ either a different laser or a different level of laser power.
SERODS optical data storage with parallel signal transfer
Vo-Dinh, Tuan
2003-06-24
Surface-enhanced Raman optical data storage (SERODS) systems having increased reading and writing speeds, that is, increased data transfer rates, are disclosed. In the various SERODS read and write systems, the surface-enhanced Raman scattering (SERS) data is written and read using a two-dimensional process called parallel signal transfer (PST). The various embodiments utilize laser light beam excitation of the SERODS medium, optical filtering, beam imaging, and two-dimensional light detection. Two- and three-dimensional SERODS media are utilized. The SERODS write systems employ either a different laser or a different level of laser power.
Paganin, David M; Beltran, Mario A; Petersen, Timothy C
2018-03-01
We obtain exact polynomial solutions for two-dimensional coherent complex scalar fields propagating through arbitrary aberrated shift-invariant linear imaging systems. These solutions are used to model nodal-line dynamics of coherent fields output by such systems.
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of the projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing tailored to the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present our information regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
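A rough illustration of the differential-focus cue described above, assuming PyWavelets is available: the ratio of horizontal to vertical wavelet detail energy in an image patch changes as one orientation defocuses faster than the other. The calibration that maps such a ratio to physical distance, which is the substance of the paper, is not reproduced here.

```python
# Sketch of the kind of differential-focus cue the abstract describes: the ratio
# of horizontal to vertical wavelet detail energy in an image patch. PyWavelets
# is assumed; the mapping from this ratio to physical depth (the calibration
# step of the paper) is not reproduced here.
import numpy as np
import pywt

def h_over_v_detail_ratio(patch, wavelet="db2", eps=1e-12):
    _, (cH, cV, _) = pywt.dwt2(patch.astype(float), wavelet)
    return float(np.sum(cH ** 2) / (np.sum(cV ** 2) + eps))
```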
NASA Astrophysics Data System (ADS)
Onuma, Takashi; Otani, Yukitoshi
2014-03-01
A two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz is proposed. A polarization image sensor is developed as the core device of the system. It is composed of a pixelated polarizer array made from photonic crystal and a parallel read-out circuit with a multi-channel analog-to-digital converter specialized for two-dimensional polarization detection. By applying a phase-shifting algorithm with circularly polarized incident light, the birefringence phase difference and azimuthal angle can be measured. The performance of the system is demonstrated experimentally by measuring an actual birefringence distribution and a polarization device such as a Babinet-Soleil compensator.
High-Resolution Gamma-Ray Imaging Measurements Using Externally Segmented Germanium Detectors
NASA Technical Reports Server (NTRS)
Callas, J.; Mahoney, W.; Skelton, R.; Varnell, L.; Wheaton, W.
1994-01-01
Fully two-dimensional gamma-ray imaging with simultaneous high-resolution spectroscopy has been demonstrated using an externally segmented germanium sensor. The system employs a single high-purity coaxial detector with its outer electrode segmented into 5 distinct charge collection regions and a lead coded aperture with a uniformly redundant array (URA) pattern. A series of one-dimensional responses was collected around 511 keV while the system was rotated in steps through 180 degrees. A non-negative, linear least-squares algorithm was then employed to reconstruct a 2-dimensional image. Corrections for multiple scattering in the detector, and the finite distance of source and detector are made in the reconstruction process.
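The reconstruction step named in this abstract, non-negative linear least squares over stacked 1-D responses, can be sketched as below; the system matrix A (one row per detector segment and rotation step, one column per source pixel) is assumed to have been computed beforehand from the mask pattern and geometry.

```python
# Sketch of the reconstruction step only: non-negative least squares applied to
# stacked 1-D responses. The system matrix A is an assumed, precomputed input.
import numpy as np
from scipy.optimize import nnls

def reconstruct(A, measurements, image_shape):
    """Solve min ||A x - y||_2 with x >= 0 and reshape x into the source image."""
    x, residual_norm = nnls(A, measurements)
    return x.reshape(image_shape), residual_norm
```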
Multidimensionally encoded magnetic resonance imaging.
Lin, Fa-Hsuan
2013-07-01
Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled.
NASA Technical Reports Server (NTRS)
Jackson, Deborah J. (Inventor)
1998-01-01
An analog optical encryption system based on phase scrambling of two-dimensional optical images and holographic transformation for achieving large encryption keys and high encryption speed. An enciphering interface uses a spatial light modulator for converting a digital data stream into a two dimensional optical image. The optical image is further transformed into a hologram with a random phase distribution. The hologram is converted into digital form for transmission over a shared information channel. A respective deciphering interface at a receiver reverses the encrypting process by using a phase conjugate reconstruction of the phase scrambled hologram.
Target recognition for ladar range image using slice image
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Wang, Liang
2015-12-01
A shape descriptor and a complete shape-based recognition system using slice images as the geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by a three-dimensional Hough transform and the corresponding mathematical transformation. The system consists of two processes, model library construction and recognition. In the model library construction process, a series of range images is obtained after the model object is sampled at preset attitude angles. Then, all the range images are converted into slice images. The number of slice images is reduced by clustering analysis and finding a representation to reduce the size of the model library. In the recognition process, the slice image of the scene is compared with the slice images in the model library, and the recognition results depend on this comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and a comparison between the slice image representation method and the moment invariants representation method is performed. The experimental results show that, both in noise-free conditions and in the presence of ladar noise, the system has a high recognition rate and a low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.
Three-Dimensional Optical Coherence Tomography
NASA Technical Reports Server (NTRS)
Gutin, Mikhail; Wang, Xu-Ming; Gutin, Olga
2009-01-01
Three-dimensional (3D) optical coherence tomography (OCT) is an advanced method of noninvasive infrared imaging of tissues in depth. Heretofore, commercial OCT systems for 3D imaging have been designed principally for external ophthalmological examination. As explained below, such systems have been based on a one-dimensional OCT principle, and in the operation of such a system, 3D imaging is accomplished partly by means of a combination of electronic scanning along the optical (Z) axis and mechanical scanning along the two axes (X and Y) orthogonal to the optical axis. In 3D OCT, 3D imaging involves a form of electronic scanning (without mechanical scanning) along all three axes. Consequently, the need for mechanical adjustment is minimal and the mechanism used to position the OCT probe can be correspondingly more compact. A 3D OCT system also includes a probe of improved design and utilizes advanced signal- processing techniques. Improvements in performance over prior OCT systems include finer resolution, greater speed, and greater depth of field.
Hirakawa, Takeshi; Matsunaga, Sachihiro
2016-01-01
In plants, chromatin dynamics spatiotemporally change in response to various environmental stimuli. However, little is known about chromatin dynamics in the nuclei of plants. Here, we introduce a three-dimensional, live-cell imaging method that can monitor chromatin dynamics in nuclei via a chromatin tagging system that can visualize specific genomic loci in living plant cells. The chromatin tagging system is based on a bacterial operator/repressor system in which the repressor is fused to fluorescent proteins. A recent refinement of promoters for the system solved the problem of gene silencing and abnormal pairing frequencies between operators. Using this system, we can detect the spatiotemporal dynamics of two homologous loci as two fluorescent signals within a nucleus and monitor the distance between homologous loci. These live-cell imaging methods will provide new insights into genome organization, development processes, and subnuclear responses to environmental stimuli in plants.
Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images
NASA Astrophysics Data System (ADS)
Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka
2006-03-01
We have developed a novel system that provides total support for assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty in perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease by 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system that measures the three-dimensional shape and spread of resorption. It has the following functions: (1) measures the depth of resorption by virtually simulating probing in the 3-D CT images, which does not suffer obstruction by teeth on the inter-proximal sides and allows much smaller measurement intervals than the conventional examination; (2) visualizes the disposition of the depth by movies and graphs; (3) produces a quantitative index and an intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) calculates the volume of resorption as another severity index in the inter-radicular region and the region outside it. Experimental results in two cases of 3-D dental CT images and a comparison of the results with the clinical examination results and experts' measurements of the corresponding patients confirmed that the proposed system gives satisfying results, including 0.1 to 0.6 mm of resorption measurement (probing) error and fairly intuitive presentation of measurement and calculation results.
Method and apparatus for coherent imaging of infrared energy
Hutchinson, Donald P.
1998-01-01
A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting.
Method and apparatus for coherent imaging of infrared energy
Hutchinson, D.P.
1998-05-12
A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.
Cell culture imaging using microimpedance tomography.
Linderholm, Pontus; Marescot, Laurent; Loke, Meng Heng; Renaud, Philippe
2008-01-01
We present a novel, inexpensive, and fast microimpedance tomography system for two-dimensional imaging of cell and tissue cultures. The system is based on four-electrode measurements using 16 planar microelectrodes (5 μm × 4 mm) integrated into a culture chamber. An Agilent 4294A impedance analyzer combined with a front-end amplifier is used for the impedance measurements. Two-dimensional images are obtained using a reconstruction algorithm. This system is capable of accurately resolving the shape and position of a human hair, yielding vertical cross sections of the object. Human epithelial stem cells (YF 29) are also grown directly on the device surface. Tissue growth can be followed over several days. A rapid resistivity decrease caused by permeabilized cell membranes is also monitored, suggesting that this technique can be used in electroporation studies.
Flor-Henry, Michel; McCabe, Tulene C; de Bruxelles, Guy L; Roberts, Michael R
2004-01-01
Background All living organisms emit spontaneous low-level bioluminescence, which can be increased in response to stress. Methods for imaging this ultra-weak luminescence have previously been limited by the sensitivity of the detection systems used. Results We developed a novel configuration of a cooled charge-coupled device (CCD) for 2-dimensional imaging of light emission from biological material. In this study, we imaged photon emission from plant leaves. The equipment allowed short integration times for image acquisition, providing high resolution spatial and temporal information on bioluminescence. We were able to carry out time course imaging of both delayed chlorophyll fluorescence from whole leaves, and of low level wound-induced luminescence that we showed to be localised to sites of tissue damage. We found that wound-induced luminescence was chlorophyll-dependent and was enhanced at higher temperatures. Conclusions The data gathered on plant bioluminescence illustrate that the equipment described here represents an improvement in 2-dimensional luminescence imaging technology. Using this system, we identify chlorophyll as the origin of wound-induced luminescence from leaves. PMID:15550176
Endoscopes with latest technology and concept.
Gotoh
2003-09-01
Endoscopic imaging systems that perform as the "eye" of the operator during endoscopic surgical procedures have developed rapidly due to various technological developments. In addition, since the most recent turn of the century, robotic surgery has increased its scope through the utilization of systems such as Intuitive Surgical's da Vinci System. To optimize the imaging required for precise robotic surgery, a unique endoscope has been developed, consisting of both a two-dimensional (2D) image optical system for wider observation of the entire surgical field, and a three-dimensional (3D) image optical system for observation of the more precise details at the operative site. Additionally, a "near infrared radiation" endoscopic system is under development to detect the sentinel lymph node more readily. Such progress in the area of endoscopic imaging is expected to enhance the surgical procedure from both the patient's and the surgeon's point of view.
NASA Astrophysics Data System (ADS)
Oku, H.; Ogawa, N.; Ishikawa, M.; Hashimoto, K.
2005-03-01
In this article, a micro-organism tracking system using a high-speed vision system is reported. This system two-dimensionally tracks a freely swimming micro-organism within the field of an optical microscope by moving a chamber of target micro-organisms based on high-speed visual feedback. The system we developed could track a paramecium using various imaging techniques, including bright-field illumination, dark-field illumination, and differential interference contrast, at magnifications of 5 times and 20 times. A maximum tracking duration of 300 s was demonstrated. Also, the system could track an object with a velocity of up to 35,000 μm/s (175 diameters/s), which is significantly faster than swimming micro-organisms.
Two-dimensional DFA scaling analysis applied to encrypted images
NASA Astrophysics Data System (ADS)
Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.
2015-01-01
The technique of detrended fluctuation analysis (DFA) has been widely used to unveil scaling properties of many different signals. In this paper, we determine scaling properties in the encrypted images by means of a two-dimensional DFA approach. To carry out the image encryption, we use an enhanced cryptosystem based on a rule-90 cellular automaton and we compare the results obtained with its unmodified version and the encryption system AES. The numerical results show that the encrypted images present a persistent behavior which is close to that of the 1/f-noise. These results point to the possibility that the DFA scaling exponent can be used to measure the quality of the encrypted image content.
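For concreteness, a compact sketch of two-dimensional DFA in its usual formulation (cumulative-sum surface, plane detrending in s-by-s windows, log-log fit of the fluctuation function) follows; details such as window overlap and the detrending order may differ from the paper's exact implementation.

```python
# Compact sketch of two-dimensional DFA following the usual formulation
# (cumulative-sum surface, plane detrending in s-by-s windows, log-log fit);
# implementation details may differ from the paper's version.
import numpy as np

def dfa2d_exponent(img, scales=(8, 16, 32, 64)):
    x = img.astype(float) - img.mean()
    Y = np.cumsum(np.cumsum(x, axis=0), axis=1)            # 2-D profile (integrated surface)
    F = []
    for s in scales:
        residues = []
        for i in range(0, Y.shape[0] - s + 1, s):
            for j in range(0, Y.shape[1] - s + 1, s):
                w = Y[i:i + s, j:j + s]
                u, v = np.meshgrid(np.arange(s), np.arange(s), indexing="ij")
                A = np.column_stack([u.ravel(), v.ravel(), np.ones(s * s)])
                coef, *_ = np.linalg.lstsq(A, w.ravel(), rcond=None)   # fit a detrending plane
                residues.append(np.mean((w.ravel() - A @ coef) ** 2))
        F.append(np.sqrt(np.mean(residues)))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]    # slope = scaling exponent
    return alpha
```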
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has been described of a semi-automatic segmentation process of fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux. This basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
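Since the original application is written in Matlab, the following is only an equivalent sketch of the core step, stacking binary slice masks into a volume and extracting a renderable surface, using scikit-image's marching cubes as an assumed stand-in for the surface-covering operation.

```python
# Equivalent sketch (the original application is in Matlab): stack binary slice
# masks into a volume and extract a renderable surface mesh with marching cubes.
# scikit-image is assumed for the surface-extraction step.
import numpy as np
from skimage import measure

def slices_to_mesh(binary_slices):
    """binary_slices: iterable of equally sized 2-D boolean masks, one per z-slice."""
    volume = np.stack([s.astype(float) for s in binary_slices], axis=0)
    verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)
    return verts, faces, normals           # pass to any 3-D viewer for rendering
```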
Amin Nili, Vahid; Mansouri, Ehsan; Kavehvash, Zahra; Fakharzadeh, Mohammad; Shabany, Mahdi; Khavasi, Amin
2018-01-01
In this paper, a closed-form two-dimensional reconstruction technique for hybrid frequency and mechanical scanning millimeter-wave (MMW) imaging systems is proposed. Although frequency scanning has been commercially implemented in many imaging systems as a low-cost real-time solution, its results have been reconstructed numerically or have been reported as the captured raw data with no clear details. Furthermore, this paper proposes a new framework to utilize the captured data of different frequencies for three-dimensional (3D) reconstruction based on novel closed-form relations. The hybrid frequency and mechanical scanning structure, together with the proposed reconstruction method, yields a low-cost MMW imaging system with satisfying performance. The extracted reconstruction formulations are validated through numerical simulations, which show image quality comparable with conventional MMW imaging systems, i.e., switched-array (SA) and phased-array (PA) structures. Extensive simulations are also performed in the presence of additive noise, demonstrating the acceptable robustness of the system against system noise compared to SA and comparable performance with PA. Finally, 3D reconstruction of the simulated data shows a depth resolution of better than 10 cm with minimum degradation of lateral resolution in the 10 GHz frequency bandwidth.
Tafreshi, Azadeh Kamali; Top, Can Barış; Gençer, Nevzat Güneri
2017-06-21
Harmonic motion microwave Doppler imaging (HMMDI) is a novel imaging modality for imaging the coupled electrical and mechanical properties of body tissues. In this paper, we used two experimental systems with different receiver configurations to obtain HMMDI images from tissue-mimicking phantoms at multiple vibration frequencies between 15 Hz and 35 Hz. In the first system, we used a spectrum analyzer to obtain the Doppler data in the frequency domain, while in the second one, we used a homodyne receiver that was designed to acquire time-domain data. The developed phantoms mimicked the elastic and dielectric properties of breast fat tissue, and included a 14 mm × 9 mm cylindrical inclusion representing the tumor. A focused ultrasound probe was mechanically scanned in two lateral dimensions to obtain two-dimensional HMMDI images of the phantoms. The inclusions were resolved inside the fat phantom using both experimental setups. The image resolution increased with increasing vibration frequency. The designed receiver showed higher sensitivity than the spectrum analyzer measurements. The results also showed that time-domain data acquisition should be used to fully exploit the potential of the HMMDI method.
NASA Astrophysics Data System (ADS)
Kamali Tafreshi, Azadeh; Barış Top, Can; Güneri Gençer, Nevzat
2017-06-01
Harmonic motion microwave Doppler imaging (HMMDI) is a novel imaging modality for imaging the coupled electrical and mechanical properties of body tissues. In this paper, we used two experimental systems with different receiver configurations to obtain HMMDI images from tissue-mimicking phantoms at multiple vibration frequencies between 15 Hz and 35 Hz. In the first system, we used a spectrum analyzer to obtain the Doppler data in the frequency domain, while in the second one, we used a homodyne receiver that was designed to acquire time-domain data. The developed phantoms mimicked the elastic and dielectric properties of breast fat tissue, and included a 14 mm × 9 mm cylindrical inclusion representing the tumor. A focused ultrasound probe was mechanically scanned in two lateral dimensions to obtain two-dimensional HMMDI images of the phantoms. The inclusions were resolved inside the fat phantom using both experimental setups. The image resolution increased with increasing vibration frequency. The designed receiver showed higher sensitivity than the spectrum analyzer measurements. The results also showed that time-domain data acquisition should be used to fully exploit the potential of the HMMDI method.
Efficient processing of two-dimensional arrays with C or C++
Donato, David I.
2017-07-20
Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
Three-Dimensional Anatomic Evaluation of the Anterior Cruciate Ligament for Planning Reconstruction
Hoshino, Yuichi; Kim, Donghwi; Fu, Freddie H.
2012-01-01
Anatomic study related to the anterior cruciate ligament (ACL) reconstruction surgery has been developed in accordance with the progress of imaging technology. Advances in imaging techniques, especially the move from two-dimensional (2D) to three-dimensional (3D) image analysis, substantially contribute to anatomic understanding and its application to advanced ACL reconstruction surgery. This paper introduces previous research about image analysis of the ACL anatomy and its application to ACL reconstruction surgery. Crucial bony landmarks for the accurate placement of the ACL graft can be identified by 3D imaging technique. Additionally, 3D-CT analysis of the ACL insertion site anatomy provides better and more consistent evaluation than conventional “clock-face” reference and roentgenologic quadrant method. Since the human anatomy has a complex three-dimensional structure, further anatomic research using three-dimensional imaging analysis and its clinical application by navigation system or other technologies is warranted for the improvement of the ACL reconstruction. PMID:22567310
NASA Technical Reports Server (NTRS)
Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)
2012-01-01
Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.
Three-dimensional surface imaging system for assessing human obesity
NASA Astrophysics Data System (ADS)
Xu, Bugao; Yu, Wurong; Yao, Ming; Pepper, M. Reese; Freeland-Graves, Jeanne H.
2009-10-01
The increasing prevalence of obesity suggests a need to develop a convenient, reliable, and economical tool for assessment of this condition. Three-dimensional (3-D) body surface imaging has emerged as an exciting technology for the estimation of body composition. We present a new 3-D body imaging system, which is designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology is used to satisfy the requirement for a simple hardware setup and fast image acquisition. The portability of the system is created via a two-stand configuration, and the accuracy of body volume measurements is improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3-D body imaging. Body measurement functions dedicated to body composition assessment also are developed. The overall performance of the system is evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
NASA Astrophysics Data System (ADS)
Robbins, Woodrow E.
1988-01-01
The present conference discusses topics in novel technologies and techniques of three-dimensional imaging, human factors-related issues in three-dimensional display system design, three-dimensional imaging applications, and image processing for remote sensing. Attention is given to a 19-inch parallactiscope, a chromostereoscopic CRT-based display, the 'SpaceGraph' true three-dimensional peripheral, advantages of three-dimensional displays, holographic stereograms generated with a liquid crystal spatial light modulator, algorithms and display techniques for four-dimensional Cartesian graphics, an image processing system for automatic retina diagnosis, the automatic frequency control of a pulsed CO2 laser, and a three-dimensional display of magnetic resonance imaging of the spine.
Dust as a versatile matter for high-temperature plasma diagnostic.
Wang, Zhehui; Ticos, Catalin M
2008-10-01
Dust varies from a few nanometers to a fraction of a millimeter in size. Dust also offers essentially unlimited choices in material composition and structure. The potential of dust for high-temperature plasma diagnostics remains largely unfulfilled. The principles of dust spectroscopy to measure internal magnetic field, microparticle tracer velocimetry to measure plasma flow, and dust photometry to measure heat flux are described. Two main components of the different dust diagnostics are a dust injector and a dust imaging system. The dust injector delivers a certain number of dust grains into a plasma. The imaging system collects and selectively detects certain photons resulting from dust-plasma interaction. One piece of dust gives the local plasma quantity; a collection of dust grains together reveals either two-dimensional (using only one or two imaging cameras) or three-dimensional (using two or more imaging cameras) structures of the measured quantity. A generic conceptual design suitable for all three types of dust diagnostics is presented.
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, based on video sequence images, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then a Multi-Layer Perceptron classifier was used. Compared to the whole face, the results of simulation are in favor of the facial parts in terms of memory capacity and recognition (99.41% for the eyes part, 98.16% for the nose part and 97.25% for the whole face).
Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.
2017-01-01
Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was not a significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888
NASA Technical Reports Server (NTRS)
Chen, Fang-Jenq
1997-01-01
Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of one part in 40,000 is achievable without tedious laboratory calibrations of the camera.
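The adjustment described above can be illustrated, under a deliberately simplified camera model (a planar homography plus a single radial distortion coefficient), with SciPy's iterative least-squares solver; this is only the fitting idea, not the paper's full transformation model.

```python
# Sketch of iterative least-squares adjustment for a deliberately simplified
# camera model: a planar homography plus one radial distortion coefficient k1.
# This is not the paper's full transformation model, only the same fitting idea.
import numpy as np
from scipy.optimize import least_squares

def apply_model(params, obj_xy):
    h = np.append(params[:8], 1.0).reshape(3, 3)          # homography (8 dof)
    k1, cx, cy = params[8:]                                 # radial distortion about (cx, cy)
    p = np.column_stack([obj_xy, np.ones(len(obj_xy))]) @ h.T
    p = p[:, :2] / p[:, 2:3]                                # ideal (undistorted) image points
    r2 = (p[:, 0] - cx) ** 2 + (p[:, 1] - cy) ** 2
    return p + k1 * r2[:, None] * (p - [cx, cy])            # add radial distortion

def calibrate(obj_xy, img_xy):
    """Fit the model to matched object-plane / image-point pairs."""
    def residuals(params):
        return (apply_model(params, obj_xy) - img_xy).ravel()
    x0 = np.concatenate([np.eye(3).ravel()[:8],
                         [0.0, img_xy[:, 0].mean(), img_xy[:, 1].mean()]])
    return least_squares(residuals, x0).x
```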
[3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].
Kneist, W; Huber, T; Paschold, M; Lang, H
2016-06-01
The use of three-dimensional imaging in laparoscopy is a growing issue and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown differing results concerning the benefit of 3D imaging. There are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in a randomised order. No significant differences between the two imaging systems were shown for students or medical professionals. Participants who preferred three-dimensional imaging showed significantly better results in 2D as well as in 3D imaging. First results on three-dimensional imaging on box trainers showed different results. Some studies resulted in an advantage of 3D imaging for laparoscopic novices. This study did not confirm the superiority of 3D imaging over conventional 2D imaging in a VRL simulator. In the present study on 3D imaging on a VRL simulator there was no significant advantage for 3D imaging compared to conventional 2D imaging.
Kim, Jonghyun; Moon, Seokil; Jeong, Youngmo; Jang, Changwon; Kim, Youngmin; Lee, Byoungho
2018-06-01
Here, we present dual-dimensional microscopy that captures both two-dimensional (2-D) and light-field images of an in-vivo sample simultaneously, synthesizes an upsampled light-field image in real time, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at the native object plane. The whole process from capturing to displaying is done in real time with the parallel computation algorithm, which enables the observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans.
NASA Astrophysics Data System (ADS)
Bykov, A. A.; Kutuza, I. B.; Zinin, P. V.; Machikhin, A. S.; Troyan, I. A.; Bulatov, K. M.; Batshev, V. I.; Mantrova, Y. V.; Gaponov, M. I.; Prakapenka, V. B.; Sharma, S. K.
2018-01-01
Recently it has been shown that it is possible to measure the two-dimensional distribution of the surface temperature of microscopic specimens. The main component of the system is a tandem imaging acousto-optical tunable filter synchronized with a video camera. In this report, we demonstrate that combining a laser heating system with a tandem imaging acousto-optical tunable filter allows measurement of the temperature distribution of platinum plates under laser heating, as well as visualization of the infrared laser beam that is widely used for laser heating in diamond anvil cells.
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor); Bearman, Gregory H. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTISs") employing a single lens are provided. The CTISs may be either transmissive or reflective, and the single lens is either configured to transmit and receive uncollimated light (in transmissive systems), or is configured to reflect and receive uncollimated light (in reflective systems). An exemplary transmissive CTIS includes a focal plane array detector, a single lens configured to transmit and receive uncollimated light, a two-dimensional grating, and a field stop aperture. An exemplary reflective CTIS includes a focal plane array detector, a single mirror configured to reflect and receive uncollimated light, a two-dimensional grating, and a field stop aperture.
NASA Astrophysics Data System (ADS)
Tsuji, Hidenobu; Imaki, Masaharu; Kotake, Nobuki; Hirai, Akihito; Nakaji, Masaharu; Kameyama, Shumpei
2017-03-01
We demonstrate a range imaging pulsed laser sensor with two-dimensional scanning of a transmitted beam and a scanless receiver using a high-aspect avalanche photodiode (APD) array for the eye-safe wavelength. The system achieves a high frame rate and long-range imaging with a relatively simple sensor configuration. We developed a high-aspect APD array for the wavelength of 1.5 μm, a receiver integrated circuit, and a range and intensity detector. By combining these devices, we realized 160×120 pixels range imaging with a frame rate of 8 Hz at a distance of about 50 m.
Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O
2009-01-01
We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
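A minimal sketch of the kind of correlation-sorting preprocessing described: the 1-D S-EMG signal is segmented into the rows of a 2-D matrix and the rows are reordered so that adjacent rows are similar, which is what makes still-image codecs effective on the result. The greedy ordering below is illustrative, not the authors' exact procedure.

# Sketch: arrange a 1-D EMG signal into a 2-D image and sort rows by correlation.
import numpy as np

def to_image(signal, row_len):
    n_rows = len(signal) // row_len
    return np.reshape(signal[:n_rows * row_len], (n_rows, row_len))

def correlation_sort(img):
    rows = list(range(img.shape[0]))
    order = [rows.pop(0)]                      # start from the first segment
    while rows:
        last = img[order[-1]]
        # pick the remaining row most correlated with the last placed row
        corr = [np.corrcoef(last, img[r])[0, 1] for r in rows]
        order.append(rows.pop(int(np.argmax(corr))))
    return img[order], order                   # 'order' is needed to undo the sort

# The sorted image can then be fed to an off-the-shelf codec such as JPEG2000.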
Image formation analysis and high resolution image reconstruction for plenoptic imaging systems.
Shroff, Sapna A; Berkner, Kathrin
2013-04-01
Plenoptic imaging systems are often used for applications like refocusing, multimodal imaging, and multiview imaging. However, their resolution is limited to the number of lenslets. In this paper we investigate paraxial, incoherent, plenoptic image formation, and develop a method to recover some of the resolution for the case of a two-dimensional (2D) in-focus object. This enables the recovery of a conventional-resolution, 2D image from the data captured in a plenoptic system. We show simulation results for a plenoptic system with a known response and Gaussian sensor noise.
Teng, Dongdong; Xiong, Yi; Liu, Lilin; Wang, Biao
2015-03-09
Existing multiview three-dimensional (3D) display technologies encounter a discontinuous motion-parallax problem, due to the limited number of stereo-images presented to the corresponding sub-viewing zones (SVZs). This paper proposes a novel multiview 3D display system that obtains continuous motion parallax by using a group of planar aligned OLED microdisplays. By blocking partial light-rays with baffles inserted between adjacent OLED microdisplays, a transitional stereo-image assembled from two spatially complementary segments of adjacent stereo-images is presented to a complementary fusing zone (CFZ) located between two adjacent SVZs. For a moving observation point, the spatial ratio of the two complementary segments evolves gradually, resulting in continuously changing transitional stereo-images and thus overcoming the problem of discontinuous motion parallax. The proposed display system employs a projection-type architecture, retaining full display resolution while keeping a thin optical structure, which offers great potential for portable or mobile 3D display applications. Experimentally, a prototype display system is demonstrated with 9 OLED microdisplays.
Real-time two-dimensional temperature imaging using ultrasound.
Liu, Dalong; Ebbini, Emad S
2009-01-01
We present a system for real-time 2D imaging of temperature change in tissue media using pulse-echo ultrasound. The frontend of the system is a SonixRP ultrasound scanner with a research interface giving us the capability of controlling the beam sequence and accessing radio frequency (RF) data in real-time. The beamformed RF data is streamed to the backend of the system, where the data is processed using a two-dimensional temperature estimation algorithm running on the graphics processing unit (GPU). The estimated temperature is displayed in real-time, providing feedback that can be used for real-time control of the heating source. We have verified our system with an elastography tissue-mimicking phantom and in vitro porcine heart tissue; excellent repeatability and sensitivity were demonstrated.
Three-dimensional head anthropometric analysis
NASA Astrophysics Data System (ADS)
Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James
2003-05-01
Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment and treatment of facial abnormalities in craniofacial care, but they are subject to errors because of perspective, projection, and the lack of metric and 3-dimensional information. One can find in the literature a variety of methods to generate 3-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging and even CT; however, each of these methods has inherent limitations, and as such no system is in common clinical use. In this paper we will focus on the development of indirect 3-dimensional landmark location and measurement of facial soft-tissue with light-based techniques. We will statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We will also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (or polygonal mesh), including linear distances currently used in anthropometry. The measurements will be tested against a validated 3-dimensional digitizer (MicroScribe 3DX).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, W.; Yun, G. S.; Nam, Y.
2010-10-15
Recently, two-dimensional microwave imaging diagnostics such as the electron cyclotron emission imaging (ECEI) system and microwave imaging reflectometry (MIR) have been developed to study magnetohydrodynamics instabilities and turbulence in magnetically confined plasmas. These imaging systems utilize large optics to collect passive emission or reflected radiation. The design of this optics can be classified into two different types: reflective or refractive optical systems. For instance, an ECEI/MIR system on the TEXTOR tokamak [Park et al., Rev. Sci. Instrum. 75, 3787 (2004)] employed the reflective optics which consisted of two large mirrors, while the TEXTOR ECEI upgrade [B. Tobias et al., Rev. Sci. Instrum. 80, 093502 (2009)] and systems on DIII-D, ASDEX-U, and KSTAR adopted refractive systems. Each system has advantages and disadvantages in the standing wave problem and optical aberrations. In this paper, a comparative study between the two optical systems has been performed in order to design a MIR system for KSTAR.
NASA Astrophysics Data System (ADS)
Teng, Dongdong; Liu, Lilin; Zhang, Yueli; Pang, Zhiyong; Wang, Biao
2014-09-01
Through the creative use of a shiftable cylindrical lens, a wide-view-angle holographic display system is developed for displaying medical objects in real three-dimensional (3D) space based on a time-multiplexing method. The two-dimensional (2D) source images for all computer-generated holograms (CGHs) needed by the display system are simply one group of computerized tomography (CT) or magnetic resonance imaging (MRI) slices from the scanning device. Complicated 3D reconstruction on the computer is not necessary. A pelvis is taken as the target medical object to demonstrate this method, and the obtained horizontal viewing angle reaches 28°.
A synchrotron radiation microtomography system for the analysis of trabecular bone samples.
Salomé, M; Peyrin, F; Cloetens, P; Odet, C; Laval-Jeantet, A M; Baruchel, J; Spanne, P
1999-10-01
X-ray computed microtomography is particularly well suited for studying trabecular bone architecture, which requires three-dimensional (3-D) images with high spatial resolution. For this purpose, we describe a three-dimensional computed microtomography (microCT) system using synchrotron radiation, developed at ESRF. Since synchrotron radiation provides a monochromatic and high photon flux x-ray beam, it allows high-resolution and high signal-to-noise-ratio imaging. The principle of the system is based on truly three-dimensional parallel tomographic acquisition. It uses a two-dimensional (2-D) CCD-based detector to record 2-D radiographs of the transmitted beam through the sample under different angles of view. The 3-D tomographic reconstruction, performed by an exact 3-D filtered backprojection algorithm, yields 3-D images with cubic voxels. The spatial resolution of the detector was experimentally measured. For the application to bone investigation, the voxel size was set to 6.65 microm, and the experimental spatial resolution was found to be 11 microm. The reconstructed linear attenuation coefficient was calibrated from hydroxyapatite phantoms. Image processing tools are being developed to extract structural parameters quantifying trabecular bone architecture from the 3-D microCT images. First results on human trabecular bone samples are presented.
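A minimal sketch of parallel-beam filtered back-projection for a single 2-D slice, of the kind that truly 3-D acquisition generalizes; scikit-image's radon/iradon stand in for the instrument's acquisition and for the authors' exact 3-D algorithm.

# Sketch: reconstruct one 2-D slice from parallel-beam projections by filtered
# back-projection (a fully 3-D reconstruction stacks such slices or uses a 3-D variant).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

slice_2d = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 400, endpoint=False)
sinogram = radon(slice_2d, theta=angles)                   # simulated projections
reconstruction = iradon(sinogram, theta=angles, filter_name='ramp')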
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction approaches are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
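To illustrate the FBP-versus-iterative trade-off mentioned above, the sketch below runs a few passes of an off-the-shelf algebraic method (SART); it is only a stand-in for the self-developed iterative algorithm wrapped by the Jini service.

# Sketch: iterative (SART) reconstruction from a sinogram; additional passes
# typically improve image quality at the cost of reconstruction time.
from skimage.transform import iradon_sart

def iterative_reconstruct(sinogram, angles, n_iter=3):
    # sinogram: (n_detector_bins, n_angles) array, angles in degrees
    estimate = None
    for _ in range(n_iter):
        estimate = iradon_sart(sinogram, theta=angles, image=estimate)
    return estimate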
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
NASA Astrophysics Data System (ADS)
Yamauchi, Toyohiko; Kakuno, Yumi; Goto, Kentaro; Fukami, Tadashi; Sugiyama, Norikazu; Iwai, Hidenao; Mizuguchi, Yoshinori; Yamashita, Yutaka
2014-03-01
There is an increasing need for non-invasive imaging techniques in the field of stem cell research. Label-free techniques are the best choice for assessment of stem cells because the cells remain intact after imaging and can be used for further studies such as differentiation induction. To develop a high-resolution label-free imaging system, we have been working on a low-coherence quantitative phase microscope (LC-QPM). LC-QPM is a Linnik-type interference microscope equipped with nanometer-resolution optical-path-length control and capable of obtaining three-dimensional volumetric images. The lateral and vertical resolutions of our system are respectively 0.5 and 0.93 μm and this performance allows capturing sub-cellular morphological features of live cells without labeling. Utilizing LC-QPM, we reported on three-dimensional imaging of membrane fluctuations, dynamics of filopodia, and motions of intracellular organelles. In this presentation, we report three-dimensional morphological imaging of human induced pluripotent stem cells (hiPS cells). Two groups of monolayer hiPS cell cultures were prepared so that one group was cultured in a suitable culture medium that kept the cells undifferentiated, and the other group was cultured in a medium supplemented with retinoic acid, which forces the stem cells to differentiate. The volumetric images of the 2 groups show distinctive differences, especially in surface roughness. We believe that our LC-QPM system will prove useful in assessing many other stem cell conditions.
Compton imaging tomography for nondestructive evaluation of spacecraft thermal protection systems
NASA Astrophysics Data System (ADS)
Romanov, Volodymyr; Burke, Eric; Grubsky, Victor
2017-02-01
Here we present new results of in situ nondestructive evaluation (NDE) of spacecraft thermal protection system materials obtained with a POC-developed NDE tool based on a novel Compton Imaging Tomography (CIT) technique recently pioneered and patented by Physical Optics Corporation (POC). In general, CIT provides high-resolution three-dimensional Compton-scattered X-ray imaging of the internal structure of evaluated objects, using a set of acquired two-dimensional Compton-scattered X-ray images of consecutive cross sections of these objects. Unlike conventional computed tomography, CIT requires only one-sided access to objects, has no limitation on the dimensions and geometry of the objects, and can be applied to large multilayer non-uniform objects with complicated geometries. Also, CIT does not require any contact with the objects being imaged during its application.
DAVIS: A direct algorithm for velocity-map imaging system
NASA Astrophysics Data System (ADS)
Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.
2018-05-01
In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
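A minimal sketch of the angular-expansion step described above: at a fixed radius the intensity is expanded in Legendre polynomials of cos(theta), and the low-order coefficients carry the angle-correlated information. numpy's legfit is used for illustration; the full algorithm in the paper also models the 2-D projection of the 3-D distribution.

# Sketch: expand the angular distribution of a velocity-map image in Legendre
# polynomials on a ring of fixed radius about the image centre.
import numpy as np
from numpy.polynomial import legendre

def angular_legendre_coeffs(image, center, radius, max_order=4, n_theta=360):
    cy, cx = center
    theta = np.linspace(0.0, np.pi, n_theta)       # half ring is enough for
    ys = cy + radius * np.cos(theta)               # cylindrically symmetric images
    xs = cx + radius * np.sin(theta)
    values = image[ys.astype(int), xs.astype(int)]
    # Fit I(theta) = sum_n c_n P_n(cos theta) and return the coefficients c_n.
    return legendre.legfit(np.cos(theta), values, deg=max_order)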
Lobster eye X-ray optics: Data processing from two 1D modules
NASA Astrophysics Data System (ADS)
Nentvich, O.; Urban, M.; Stehlikova, V.; Sieger, L.; Hudec, R.
2017-07-01
X-ray imaging is usually done with Wolter I telescopes. They are suitable for imaging a small part of the sky, not for all-sky monitoring. This monitoring could be done with Lobster eye optics, which can theoretically have a field of view of up to 360 deg. An all-sky monitoring system enables quick identification of a source and its direction. This paper describes the possibility of using two independent one-dimensional Lobster Eye modules for this purpose instead of Wolter I optics, and their post-processing into a 2D image. This arrangement allows scanning with less energy loss compared to Wolter I or two-dimensional Lobster Eye optics. It is most suitable especially for very weak sources.
Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam
2017-10-01
A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.
NASA Astrophysics Data System (ADS)
Aleshin, I. M.; Alpatov, V. V.; Vasil'ev, A. E.; Burguchev, S. S.; Kholodkov, K. I.; Budnikov, P. A.; Molodtsov, D. A.; Koryagin, V. N.; Perederin, F. V.
2014-07-01
A service is described that makes possible the effective construction of a three-dimensional ionospheric model based on the data of ground receivers of signals from global navigation satellite positioning systems (GNSS). The obtained image has a high resolution, mainly because data from the IPG GNSS network of the Federal Service for Hydrometeorology and Environmental Monitoring (Rosgidromet) are used. A specially developed format and its implementation in the form of SQL structures are used to collect, transmit, and store data. The method of high-altitude radio tomography is used to construct the three-dimensional model. The operation of all system components (from registration point organization to the procedure for constructing the electron density three-dimensional distribution and publication of the total electron content map on the Internet) has been described in detail. The three-dimensional image of the ionosphere, obtained automatically, is compared with the ionosonde measurements, calculated using the two-dimensional low-altitude tomography method and averaged by the ionospheric model.
Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun
2015-01-01
Background and Objective To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. A plaster model of a subject's upper and lower dentition was made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint with a detection target with intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Comparisons between the coordinate values and the actual values of the 30 intersections on the detection target were then analyzed using paired t-tests. Results The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate values and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test results showed that the differences in the coordinate values of the 30 cross points were not statistically significant (all P > 0.05). Conclusions Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording accuracy of approximately ± 0.1 mm, and is therefore suitable for clinical application. Certainly, further research is necessary to confirm the clinical applications of the method. PMID:26375800
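A minimal sketch of the accuracy check reported above, assuming paired arrays of measured and reference coordinates for the 30 intersection points; the array names and the use of scipy's paired t-test are illustrative.

# Sketch: paired t-test per axis between measured and reference coordinates.
import numpy as np
from scipy.stats import ttest_rel

def axis_accuracy(measured, reference):
    # measured, reference: (30, 3) arrays of XYZ coordinates in millimetres
    results = {}
    for i, axis in enumerate("XYZ"):
        diff = measured[:, i] - reference[:, i]
        t_stat, p_value = ttest_rel(measured[:, i], reference[:, i])
        results[axis] = (diff.mean(), diff.std(ddof=1), p_value)
    return results  # mean error, standard deviation, and p-value per axis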
Remote assessment of diabetic foot ulcers using a novel wound imaging system.
Bowling, Frank L; King, Laurie; Paterson, James A; Hu, Jingyi; Lipsky, Benjamin A; Matthews, David R; Boulton, Andrew J M
2011-01-01
Telemedicine allows experts to assess patients in remote locations, enabling quality, convenient, cost-effective care. To help assess foot wounds remotely, we investigated the reliability of a novel optical imaging system employing a three-dimensional camera and a disposable optical marker. We first examined inter- and intraoperator measurement variability (correlation coefficient) of five clinicians examining three different wounds. Then, to assess the system's ability to identify key clinically relevant features, we had two clinicians evaluate 20 different wounds at two centers, recording observations on a standardized form. Three other clinicians recorded their observations using only the corresponding three-dimensional images. Using the in-person assessment as the criterion standard, we assessed concordance of the remote with in-person assessments. Measurement variation of area was 3.3% for intraoperator and 11.9% for interoperator; the difference in clinician opinion about wound boundary location was significant. Overall agreement for remote vs. in-person assessments was good, but was lowest on the subjective clinical assessments, e.g., value of debridement to improve healing. Limitations of imaging included an inability to show certain characteristics, e.g., moistness or exudation. Clinicians gave positive feedback on visual fidelity. This pilot study showed that a clinician viewing only the three-dimensional images could accurately measure and assess a diabetic foot wound remotely. © 2010 by the Wound Healing Society.
Human iris three-dimensional imaging at micron resolution by a micro-plenoptic camera
Chen, Hao; Woodward, Maria A.; Burke, David T.; Jeganathan, V. Swetha E.; Demirci, Hakan; Sick, Volker
2017-01-01
A micro-plenoptic system was designed to capture the three-dimensional (3D) topography of the anterior iris surface by simple single-shot imaging. Within a depth-of-field of 2.4 mm, depth resolution of 10 µm can be achieved with accuracy (systematic errors) and precision (random errors) below 20%. We demonstrated the application of our micro-plenoptic imaging system on two healthy irides, an iris with naevi, and an iris with melanoma. The ridges and folds, with height differences of 10~80 µm, on the healthy irides can be effectively captured. The front surface on the iris naevi was flat, and the iris melanoma was 50 ± 10 µm higher than the surrounding iris. The micro-plenoptic imaging system has great potential to be utilized for iris disease diagnosis and continuing, simple monitoring. PMID:29082081
Yi, Faliu; Jeoung, Yousun; Moon, Inkyu
2017-05-20
In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
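A minimal sketch of double random phase encoding applied to one elemental image; keeping only the phase of the encrypted field corresponds to the sparse-phase idea described above. Key handling and the integral-imaging pickup and authentication steps are omitted.

# Sketch: double random phase encoding (DRPE) of a 2-D elemental image.
import numpy as np

def drpe_encrypt(img, rng):
    phase1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane key
    phase2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane key
    field = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)
    return field, (phase1, phase2)

# rng = np.random.default_rng(0)
# field, keys = drpe_encrypt(elemental_image, rng)
# sparse_phase = np.angle(field)   # keep only partial phase information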
Automated Root Tracking with "Root System Analyzer"
NASA Astrophysics Data System (ADS)
Schnepf, Andrea; Jin, Meina; Ockert, Charlotte; Bol, Roland; Leitner, Daniel
2015-04-01
Crucial factors for plant development are water and nutrient availability in soils. Thus, root architecture is a main aspect of plant productivity and needs to be accurately considered when describing root processes. Images of root architecture contain a huge amount of information, and image analysis helps to recover parameters describing certain root architectural and morphological traits. The majority of imaging systems for root systems are designed for two-dimensional images, such as RootReader2, GiA Roots, SmartRoot, EZ-Rhizo, and Growscreen, but most of them are semi-automated and involve mouse-clicks in each root by the user. "Root System Analyzer" is a new, fully automated approach for recovering root architectural parameters from two-dimensional images of root systems. Individual roots can still be corrected manually in a user interface if required. The algorithm starts with a sequence of segmented two-dimensional images showing the dynamic development of a root system. For each image, morphological operators are used for skeletonization. Based on this, a graph representation of the root system is created. A dynamic root architecture model helps to determine which edges of the graph belong to an individual root. The algorithm elongates each root at the root tip and simulates growth confined within the already existing graph representation. The increment of root elongation is calculated assuming constant growth. For each root, the algorithm finds all possible paths and elongates the root in the direction of the optimal path. In this way, each edge of the graph is assigned to one or more coherent roots. Image sequences of root systems are handled in such a way that the previous image is used as a starting point for the current image. The algorithm is implemented in a set of Matlab m-files. Output of Root System Analyzer is a data structure that includes for each root an identification number, the branching order, the time of emergence, the parent identification number, the distance between branching point to the parent root base, the root length, the root radius and the nodes that belong to each individual root path. This information is relevant for the analysis of dynamic root system development as well as the parameterisation of root architecture models. Here, we show results of Root System Analyzer applied to analyse the root systems of wheat plants grown in rhizotrons. Different treatments with respect to soil moisture and apatite concentrations were used to test the effects of those conditions on root system development. Photographs of the root systems were taken at high spatial and temporal resolution and root systems are automatically tracked.
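A minimal sketch of the skeletonization step the algorithm starts from, shown here with scikit-image; the graph construction and the dynamic root-tracking model built on top of it are the substance of Root System Analyzer itself (implemented in Matlab) and are not reproduced here.

# Sketch: skeletonize a segmented (binary) root image and flag branching pixels
# as a starting point for building a graph representation of the root system.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def root_skeleton(binary_image):
    skeleton = skeletonize(binary_image > 0)
    # count skeleton neighbours of each pixel (3x3 sum minus the pixel itself)
    neighbours = convolve(skeleton.astype(int), np.ones((3, 3), dtype=int),
                          mode='constant') - skeleton
    branch_points = skeleton & (neighbours > 2)   # candidates for branching nodes
    return skeleton, branch_points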
Quasi-two-dimensional complex plasma containing spherical particles and their binary agglomerates.
Chaudhuri, M; Semenov, I; Nosenko, V; Thomas, H M
2016-05-01
A unique type of quasi-two-dimensional complex plasma system was observed which consisted of monodisperse microspheres and their binary agglomerations (dimers). The particles and their dimers levitated in a plasma sheath at slightly different heights and formed two distinct sublayers. The system did not crystallize and may be characterized as a disordered solid. The dimers were identified based on their characteristic appearance in defocused images, i.e., rotating interference fringe patterns. The in-plane and interplane particle separations exhibit nonmonotonic dependence on the discharge pressure.
Le, Tuan-Anh; Zhang, Xingming; Hoshiar, Ali Kafash; Yoon, Jungwon
2017-09-07
Magnetic nanoparticles (MNPs) are effective drug carriers. By using electromagnetic actuated systems, MNPs can be controlled noninvasively in a vascular network for targeted drug delivery (TDD). Although drugs can reach their target location through capturing schemes of MNPs by permanent magnets, drugs delivered to non-target regions can affect healthy tissues and cause undesirable side effects. Real-time monitoring of MNPs can improve the targeting efficiency of TDD systems. In this paper, a two-dimensional (2D) real-time monitoring scheme has been developed for an MNP guidance system. Resovist particles 45 to 65 nm in diameter (5 nm core) can be monitored in real-time (update rate = 2 Hz) in 2D. The proposed 2D monitoring system allows dynamic tracking of MNPs during TDD and renders magnetic particle imaging-based navigation more feasible.
The 3-D image recognition based on fuzzy neural network technology
NASA Technical Reports Server (NTRS)
Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei
1993-01-01
A three-dimensional stereoscopic image recognition system based on fuzzy neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Images from two CCD color cameras are fed to the preprocessing part, where several operations including RGB-HSV transformation are done. A multi-layer perceptron is used for line detection in the feature extraction part. Then a fuzzy matching technique is introduced in the matching part. The system is realized on a SUN SPARC station and a special image input hardware system. An experimental result on bottle images is also presented.
Three-dimensional ghost imaging lidar via sparsity constraint
NASA Astrophysics Data System (ADS)
Gong, Wenlin; Zhao, Chengqiang; Yu, Hong; Chen, Mingliang; Xu, Wendong; Han, Shensheng
2016-05-01
Three-dimensional (3D) remote imaging attracts increasing attention for capturing a target's characteristics. Although great progress in 3D remote imaging has been made with methods such as scanning imaging lidar and pulsed floodlight-illumination imaging lidar, either the detection range or the application mode is limited by present methods. Ghost imaging via sparsity constraint (GISC) enables the reconstruction of a two-dimensional N-pixel image from far fewer than N measurements. Using the GISC technique together with the depth information of targets captured by time-resolved measurements, we report a 3D GISC lidar system and experimentally show that a 3D scene at about 1.0 km range can be stably reconstructed with global measurements even below the Nyquist limit. Compared with existing 3D optical imaging methods, 3D GISC has the capability of both high efficiency in information extraction and high sensitivity in detection. This approach can be generalized to non-visible wavebands and applied to other 3D imaging areas.
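For orientation, a minimal sketch of conventional correlation ghost imaging is given below; GISC replaces this ensemble correlation with a sparsity-constrained reconstruction so that far fewer measurements are needed, and that solver (and the time-resolved depth ranging) is not reproduced here.

# Sketch: conventional ghost imaging by correlating speckle patterns with the
# single-pixel (bucket) signal; GISC would instead solve a sparse recovery problem.
import numpy as np

def correlation_ghost_image(patterns, bucket):
    # patterns: (M, H, W) illumination speckle patterns; bucket: (M,) detector values
    b = bucket - bucket.mean()
    p = patterns - patterns.mean(axis=0)
    return np.tensordot(b, p, axes=1) / len(bucket)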
Low-cost Volumetric Ultrasound by Augmentation of 2D Systems: Design and Prototype.
Herickhoff, Carl D; Morgan, Matthew R; Broder, Joshua S; Dahl, Jeremy J
2018-01-01
Conventional two-dimensional (2D) ultrasound imaging is a powerful diagnostic tool in the hands of an experienced user, yet 2D ultrasound remains clinically underutilized and inherently incomplete, with output being very operator dependent. Volumetric ultrasound systems can more fully capture a three-dimensional (3D) region of interest, but current 3D systems require specialized transducers, are prohibitively expensive for many clinical departments, and do not register image orientation with respect to the patient; these systems are designed to provide improved workflow rather than operator independence. This work investigates whether it is possible to add volumetric 3D imaging capability to existing 2D ultrasound systems at minimal cost, providing a practical means of reducing operator dependence in ultrasound. In this paper, we present a low-cost method to make 2D ultrasound systems capable of quality volumetric image acquisition: we present the general system design and image acquisition method, including the use of a probe-mounted orientation sensor, a simple probe fixture prototype, and an offline volume reconstruction technique. We demonstrate initial results of the method, implemented using a Verasonics Vantage research scanner.
Computer-assisted techniques to evaluate fringe patterns
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1992-01-01
Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. Availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one dimensional and two dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
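A minimal sketch of the 2-D Fourier-transform (carrier-fringe) analysis referred to above: transform the fringe pattern, isolate one carrier sideband, shift it to the origin, and take the angle of the inverse transform to obtain the wrapped phase. The window size and carrier offset below are illustrative.

# Sketch: 2-D Fourier-transform method for fringe-pattern phase extraction.
import numpy as np

def fringe_phase(fringes, carrier_col_shift):
    F = np.fft.fftshift(np.fft.fft2(fringes))
    rows, cols = F.shape
    mask = np.zeros_like(F)
    c = cols // 2 + carrier_col_shift          # +1 sideband (carrier along columns)
    half = cols // 8
    mask[:, c - half:c + half] = 1.0
    sideband = F * mask
    sideband = np.roll(sideband, -carrier_col_shift, axis=1)   # remove the carrier
    phase = np.angle(np.fft.ifft2(np.fft.ifftshift(sideband)))
    return phase   # wrapped phase; unwrapping is a separate step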
Athermally photoreduced graphene oxides for three-dimensional holographic images
Li, Xiangping; Ren, Haoran; Chen, Xi; Liu, Juan; Li, Qin; Li, Chengmingyue; Xue, Gaolei; Jia, Jia; Cao, Liangcai; Sahu, Amit; Hu, Bin; Wang, Yongtian; Jin, Guofan; Gu, Min
2015-01-01
The emerging graphene-based material, an atomic layer of aromatic carbon atoms with exceptional electronic and optical properties, has offered unprecedented prospects for developing flat two-dimensional displaying systems. Here, we show that reduced graphene oxide enabled write-once holograms for wide-angle and full-colour three-dimensional images. This is achieved through the discovery of subwavelength-scale multilevel optical index modulation of athermally reduced graphene oxides by a single femtosecond pulsed beam. This new feature allows for static three-dimensional holographic images with a wide viewing angle up to 52 degrees. In addition, the spectrally flat optical index modulation in reduced graphene oxides enables wavelength-multiplexed holograms for full-colour images. The large and polarization-insensitive phase modulation over π in reduced graphene oxide composites makes it possible to restore the vectorial wavefronts of polarization-discernible images through the vectorial diffraction of a reconstruction beam. Therefore, our technique can be leveraged to achieve compact and versatile holographic components for controlling light. PMID:25901676
NASA Technical Reports Server (NTRS)
Meyn, Larry A.; Bennett, Mark S.
1993-01-01
A description is presented of two enhancements for a two-camera, video imaging system that increase the accuracy and efficiency of the system when applied to the determination of three-dimensional locations of points along a continuous line. These enhancements increase the utility of the system when extracting quantitative data from surface and off-body flow visualizations. The first enhancement utilizes epipolar geometry to resolve the stereo "correspondence" problem. This is the problem of determining, unambiguously, corresponding points in the stereo images of objects that do not have visible reference points. The second enhancement is a method to automatically identify and trace the core of a vortex in a digital image. This is accomplished by means of an adaptive template matching algorithm. The system was used to determine the trajectory of a vortex generated by the Leading-Edge eXtension (LEX) of a full-scale F/A-18 aircraft tested in the NASA Ames 80- by 120-Foot Wind Tunnel. The system accuracy for resolving the vortex trajectories is estimated to be +/-2 inches over a distance of 60 feet. Stereo images of some of the vortex trajectories are presented. The system was also used to determine the point where the LEX vortex "bursts". The vortex burst point locations are compared with those measured in small-scale tests and in flight and found to be in good agreement.
NASA Technical Reports Server (NTRS)
Muller, Richard E. (Inventor); Mouroulis, Pantazis Z. (Inventor); Maker, Paul D. (Inventor); Wilson, Daniel W. (Inventor)
2003-01-01
The optical system of this invention is a unique type of imaging spectrometer, i.e. an instrument that can determine the spectra of all points in a two-dimensional scene. The general type of imaging spectrometer under which this invention falls has been termed a computed-tomography imaging spectrometer (CTIS). CTISs have the ability to perform spectral imaging of scenes containing rapidly moving objects or evolving features, hereafter referred to as transient scenes. This invention, a reflective CTIS with a unique two-dimensional reflective grating, can operate in any wavelength band from the ultraviolet through the long-wave infrared. Although this spectrometer is especially useful for rapidly occurring events, it is also useful for the investigation of some slow-moving phenomena, as in the life sciences.
Two-Dimensional Hermite Filters Simplify the Description of High-Order Statistics of Natural Images.
Hu, Qin; Victor, Jonathan D
2016-09-01
Natural image statistics play a crucial role in shaping biological visual systems, understanding their function and design principles, and designing effective computer-vision algorithms. High-order statistics are critical for conveying local features, but they are challenging to study, largely because their number and variety are large. Here, via the use of two-dimensional Hermite (TDH) functions, we identify a covert symmetry in high-order statistics of natural images that simplifies this task. This emerges from the structure of TDH functions, which are an orthogonal set of functions organized into a hierarchy of ranks. Specifically, we find that the shape (skewness and kurtosis) of the distribution of filter coefficients depends only on the projection of the function onto a 1-dimensional subspace specific to each rank. The characterization of natural image statistics provided by TDH filter coefficients reflects both their phase and amplitude structure, and we suggest an intuitive interpretation for the special subspace within each rank.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located and then the location of that point in a three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
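Once corresponding points have been registered, the ranging step itself is the standard parallel-camera triangulation sketched below; the focal length (in pixels) and baseline are assumed known from the camera positions and geometry.

# Sketch: depth from disparity for a parallel (rectified) stereo camera pair.
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    disparity = x_left - x_right                 # pixel offset of the matched point
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity     # range along the optical axis, metres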
Efficient local representations for three-dimensional palmprint recognition
NASA Astrophysics Data System (ADS)
Yang, Bing; Wang, Xiaohua; Yao, Jinliang; Yang, Xin; Zhu, Wenhua
2013-10-01
Palmprints have been broadly used for personal authentication because they are highly accurate and incur low cost. Most previous works have focused on two-dimensional (2-D) palmprint recognition in the past decade. Unfortunately, 2-D palmprint recognition systems lose the shape information when capturing palmprint images. Moreover, such 2-D palmprint images can be easily forged or affected by noise. Hence, three-dimensional (3-D) palmprint recognition has been regarded as a promising way to further improve the performance of palmprint recognition systems. We have developed a simple, but efficient method for 3-D palmprint recognition by using local features. We first utilize shape index representation to describe the geometry of local regions in 3-D palmprint data. Then, we extract local binary pattern and Gabor wavelet features from the shape index image. The two types of complementary features are finally fused at a score level for further improvements. The experimental results on the Hong Kong Polytechnic 3-D palmprint database, which contains 8000 samples from 400 palms, illustrate the effectiveness of the proposed method.
Method and apparatus for two-dimensional absolute optical encoding
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2004-01-01
This invention presents a two-dimensional absolute optical encoder and a method for determining position of an object in accordance with information from the encoder. The encoder of the present invention comprises a scale having a pattern being predetermined to indicate an absolute location on the scale, means for illuminating the scale, means for forming an image of the pattern; and detector means for outputting signals derived from the portion of the image of the pattern which lies within a field of view of the detector means, the field of view defining an image reference coordinate system, and analyzing means, receiving the signals from the detector means, for determining the absolute location of the object. There are two types of scale patterns presented in this invention: grid type and starfield type.
Vrooijink, Gustaaf J.; Abayazid, Momen; Patil, Sachin; Alterovitz, Ron; Misra, Sarthak
2015-01-01
Needle insertion is commonly performed in minimally invasive medical procedures such as biopsy and radiation cancer treatment. During such procedures, accurate needle tip placement is critical for correct diagnosis or successful treatment. Accurate placement of the needle tip inside tissue is challenging, especially when the target moves and anatomical obstacles must be avoided. We develop a needle steering system capable of autonomously and accurately guiding a steerable needle using two-dimensional (2D) ultrasound images. The needle is steered to a moving target while avoiding moving obstacles in a three-dimensional (3D) non-static environment. Using a 2D ultrasound imaging device, our system accurately tracks the needle tip motion in 3D space in order to estimate the tip pose. The needle tip pose is used by a rapidly exploring random tree-based motion planner to compute a feasible needle path to the target. The motion planner is sufficiently fast such that replanning can be performed repeatedly in a closed-loop manner. This enables the system to correct for perturbations in needle motion, and movement in obstacle and target locations. Our needle steering experiments in a soft-tissue phantom achieves maximum targeting errors of 0.86 ± 0.35 mm (without obstacles) and 2.16 ± 0.88 mm (with a moving obstacle). PMID:26279600
Blind restoration method of three-dimensional microscope image based on RL algorithm
NASA Astrophysics Data System (ADS)
Yao, Jin-li; Tian, Si; Wang, Xiang-rong; Wang, Jing-li
2013-08-01
Thin specimens of biological tissue appear three-dimensional and transparent under a microscope. Optical slice images can be captured by moving the focal plane to different locations in the specimen. The captured images have low resolution due to the influence of out-of-focus information coming from the planes adjacent to the focal plane. Traditional methods can remove the blur in the images to a certain degree, but they need to know the point spread function (PSF) of the imaging system accurately, and the accuracy of the PSF greatly influences the restoration result. In fact, it is difficult to obtain the accurate PSF of the imaging system. In order to restore the original appearance of the specimen when the imaging system parameters are unknown or there is noise and spherical aberration in the system, a blind restoration method for three-dimensional microscopy based on the R-L algorithm is proposed in this paper. On the basis of an exhaustive study of the two-dimensional R-L algorithm, and using the theory of microscopy imaging and a wavelet-transform denoising pretreatment, we extend the R-L algorithm to three-dimensional space. It is a nonlinear restoration method with a maximum-entropy constraint. The method does not need to know the PSF of the microscopy imaging system precisely to recover the blurred image. The image and PSF converge to the optimum solutions through many alternating iterations and corrections. The Matlab simulations and experimental results show that the extended algorithm is better in terms of visual quality, peak signal-to-noise ratio, and improvement in signal-to-noise ratio when compared with the PML algorithm, and the proposed algorithm can suppress noise, restore more details of the target, and increase image resolution.
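A minimal sketch of the plain (non-blind) Richardson-Lucy iteration that the proposed blind 3-D method builds on by alternating image and PSF updates and adding wavelet denoising and a maximum-entropy constraint; only the 2-D non-blind core is shown.

# Sketch: non-blind Richardson-Lucy deconvolution with a known PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=20, eps=1e-12):
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + eps)        # compare data with the re-blurred estimate
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate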
A neural network approach for image reconstruction in electron magnetic resonance tomography.
Durairaj, D Christopher; Krishna, Murali C; Murugesan, Ramachandran
2007-10-01
An object-oriented, artificial neural network (ANN) based, application system for reconstruction of two-dimensional spatial images in electron magnetic resonance (EMR) tomography is presented. The standard back propagation algorithm is utilized to train a three-layer sigmoidal feed-forward, supervised, ANN to perform the image reconstruction. The network learns the relationship between the 'ideal' images that are reconstructed using filtered back projection (FBP) technique and the corresponding projection data (sinograms). The input layer of the network is provided with a training set that contains projection data from various phantoms as well as in vivo objects, acquired from an EMR imager. Twenty five different network configurations are investigated to test the ability of the generalization of the network. The trained ANN then reconstructs two-dimensional temporal spatial images that present the distribution of free radicals in biological systems. Image reconstruction by the trained neural network shows better time complexity than the conventional iterative reconstruction algorithms such as multiplicative algebraic reconstruction technique (MART). The network is further explored for image reconstruction from 'noisy' EMR data and the results show better performance than the FBP method. The network is also tested for its ability to reconstruct from limited-angle EMR data set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hua, Xin; Marshall, Matthew J.; Xiong, Yijia
2015-05-01
A vacuum compatible microfluidic reactor, SALVI (System for Analysis at the Liquid Vacuum Interface) was employed for in situ chemical imaging of live biofilms using time-of-flight secondary ion mass spectrometry (ToF-SIMS). Depth profiling by sputtering materials in sequential layers resulted in live biofilm spatial chemical mapping. 2D images were reconstructed to report the first 3D images of hydrated biofilm elucidating spatial and chemical heterogeneity. 2D image principal component analysis (PCA) was conducted among biofilms at different locations in the microchannel. Our approach directly visualized spatial and chemical heterogeneity within the living biofilm by dynamic liquid ToF-SIMS.
In-situ Manipulation and Imaging of Switchable Two-dimensional Electron Gas at Oxide Heterointerfaces
Eom, Chang-Beom
2016-11-30
AFRL-AFOSR-JP-TR-2017-0016; Grant FA2386-15-1-4046. The recent discovery of a two-dimensional electron gas (2DEG) at the interface between the insulating perovskite oxides SrTiO3 and LaAlO3
Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A
2015-09-21
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
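For a discretized (finite-dimensional) stand-in for the imaging operator, the measurement/null decomposition described above can be sketched with an ordinary SVD; the continuous-to-continuous operators treated in the paper require the authors' analytical framework instead, so the system matrix H below is purely illustrative.

# Sketch: split a discretized object f into measurement and null components using
# the SVD of a finite-dimensional system matrix H of shape (n_meas, n_voxels).
import numpy as np

def measurement_null_split(H, f, tol=1e-10):
    U, s, Vt = np.linalg.svd(H, full_matrices=True)
    rank = int(np.sum(s > tol * s.max()))
    V_meas = Vt[:rank].T                 # basis of the measurement space
    f_meas = V_meas @ (V_meas.T @ f)     # component visible to the system
    f_null = f - f_meas                  # component invisible to the system
    return f_meas, f_null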
Bao, Wei; Borys, Nicholas J.; Ko, Changhyun; ...
2015-08-13
The ideal building blocks for atomically thin, flexible optoelectronic and catalytic devices are two-dimensional monolayer transition metal dichalcogenide semiconductors. Although challenging for two-dimensional systems, sub-diffraction optical microscopy provides a nanoscale material understanding that is vital for optimizing their optoelectronic properties. We use the ‘Campanile’ nano-optical probe to spectroscopically image exciton recombination within monolayer MoS2 with sub-wavelength resolution (60 nm), at the length scale relevant to many critical optoelectronic processes. Moreover, synthetic monolayer MoS2 is found to be composed of two distinct optoelectronic regions: an interior, locally ordered but mesoscopically heterogeneous two-dimensional quantum well and an unexpected ~300-nm wide, energetically disordered edge region. Further, grain boundaries are imaged with sufficient resolution to quantify local exciton-quenching phenomena, and complementary nano-Auger microscopy reveals that the optically defective grain boundary and edge regions are sulfur deficient. In conclusion, the nanoscale structure–property relationships established here are critical for the interpretation of edge- and boundary-related phenomena and the development of next-generation two-dimensional optoelectronic devices.
Three dimensional identification card and applications
NASA Astrophysics Data System (ADS)
Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao
2016-10-01
The three-dimensional identification card, with a three-dimensional personal image displayed and stored for personal identification, is expected to be the advanced successor of the present two-dimensional identification card [1]. A three-dimensional identification card means that three-dimensional optical techniques are used: the personal image on the ID card is displayed in three dimensions, so the holder's face can be seen as a three-dimensional image. The card also stores the three-dimensional face information in its embedded electronic chip, which might be recorded with a two-channel camera, and this information can be displayed on a computer as three-dimensional images for personal identification. The three-dimensional ID card might be one interesting direction for updating the present two-dimensional card. It might be widely used at airport customs and at the entrances of hotels, schools and universities, as well as for on-line banking, registration of on-line games, etc.
Implementation of webcam-based hyperspectral imaging system
NASA Astrophysics Data System (ADS)
Balooch, Ali; Nazeri, Majid; Abbasi, Hamed
2018-02-01
In the present work, a hyperspectral imaging system (imaging spectrometer) using a commercial webcam has been designed and developed. This system was able to capture two-dimensional spectra (in emission, transmission and reflection modes) directly from the scene at the desired wavelengths. Imaging of the object is done directly by linear sweep (pushbroom method). To do so, the spectrometer is equipped with a suitable collecting lens and a linear travel stage. A 1920 x 1080 pixel CMOS webcam was used as a detector. The spectrometer has been calibrated by the reference spectral lines of standard lamps. The spectral resolution of this system was about 2 nm and its spatial resolution was about 1 mm for a 10 cm long object. The hardware solution is based on data acquisition working on the USB platform and controlled by a LabVIEW program. In this system, the initial output was a three-dimensional matrix in which two dimensions of the matrix were related to the spatial information of the object and the third dimension was the spectrum of any point of the object. Finally, the images in different wavelengths were created by reforming the data of the matrix. The free spectral range (FSR) of the system was 400 to 1100 nm. The system was successfully tested for some applications, such as plasma diagnosis as well as applications in food and agriculture sciences.
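As a rough illustration of the pushbroom bookkeeping described above, the sketch below assembles a hyperspectral cube from per-position line frames and extracts a single-band image; the frame size, number of scan steps and linear wavelength calibration over 400-1100 nm are assumptions, and the real system's LabVIEW acquisition is not reproduced:

```python
# A hedged sketch of assembling and querying a pushbroom data cube: each frame is assumed
# to hold one spatial line dispersed across wavelength, and stepping the linear stage
# stacks the frames into an (x, y, lambda) cube. All sizes and the calibration are assumptions.
import numpy as np

n_steps, n_line_pixels, n_bands = 100, 1080, 700
frames = [np.random.rand(n_line_pixels, n_bands) for _ in range(n_steps)]  # placeholder frames

cube = np.stack(frames, axis=0)                     # (scan position, spatial line, wavelength)
wavelengths = np.linspace(400.0, 1100.0, n_bands)   # assumed linear calibration over the stated range

def band_image(cube, wavelengths, target_nm):
    """Return the 2D spatial image at the band closest to the requested wavelength."""
    idx = int(np.argmin(np.abs(wavelengths - target_nm)))
    return cube[:, :, idx]

print(band_image(cube, wavelengths, 650.0).shape)   # (100, 1080): one monochromatic image
```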
NASA Astrophysics Data System (ADS)
Lee, Chang-Kun; Moon, Seokil; Lee, Byounghyo; Jeong, Youngmo; Lee, Byoungho
2016-10-01
A head-mounted compressive three-dimensional (3D) display system is proposed by combining a polarization beam splitter (PBS), a fast-switching polarization rotator and a micro display with high pixel density. According to the polarization state of the image, controlled by the polarization rotator, the optical path of the image in the PBS can be divided into transmitted and reflected components. Since the optical paths of the two images are spatially separated, it is possible to focus them independently at different depth positions. Transmitted p-polarized and reflected s-polarized images can be focused by a convex lens and a mirror, respectively. When the focal lengths of the convex lens and mirror are properly determined, the two image planes can be located at the intended positions. The geometrical relationship is easily modified by replacing the components. The fast switching of polarization realizes the real-time operation of multi-focal image planes with a single display panel. Since the device characteristics of a single panel are preserved, high image quality, reliability and uniformity can be retained. For generating 3D images, layer images for compressive light field display between the two image planes are calculated. Since a display panel with high pixel density is adopted, high-quality 3D images are reconstructed. In addition, image degradation by diffraction between physically stacked display panels can be mitigated. A simple optical configuration of the proposed system is implemented and the feasibility of the proposed method is verified through experiments.
3D X-Ray Luggage-Screening System
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth
2006-01-01
A three-dimensional (3D) x-ray luggage-screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries. The device would be applicable to any security checkpoint application where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, causing the inspector to reduce concentration and vigilance. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would be able to perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths. Hence, the inspector could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively (see figure). X-ray illumination may be provided by a single source or by two sources. The position of the conveyor would be detected to provide a means of matching the appropriate left- and right-eye images of an item under inspection. The appropriate right- and left-eye images of an item would be displayed simultaneously to the right and left eyes, respectively, of the human inspector, using commercially available stereo display screens. The human operator could adjust viewing parameters for maximum viewing comfort. The stereographic images thus generated would differ from true stereoscopic images by small distortions that are characteristic of radiographic images in general, but these distortions would not diminish the value of the images for identifying distinct objects at different depths.
Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera
NASA Astrophysics Data System (ADS)
Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund
2016-03-01
We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware, is faster than electronic focusing and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney and thyroid specimens, each several centimeters in size, with millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used for simulating a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens with medium properties defined on each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.
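The modulation transfer function mentioned among the quality metrics can be computed from a simulated point spread function in a few lines. The sketch below uses a placeholder Gaussian PSF rather than the authors' lens model; it only illustrates the PSF-to-MTF step (normalized magnitude of the Fourier transform):

```python
# A minimal sketch, not the authors' MATLAB toolbox: given a 2D point spread function
# (here a placeholder Gaussian), the MTF is the normalized magnitude of its Fourier transform.
import numpy as np

def gaussian_psf(n=128, sigma=2.0):
    x = np.arange(n) - n / 2
    X, Y = np.meshgrid(x, x)
    psf = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return psf / psf.sum()

psf = gaussian_psf()
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))   # optical transfer function
mtf = np.abs(otf) / np.abs(otf).max()                       # modulation transfer function

# 1D MTF profile along one spatial-frequency axis, a common lens quality metric.
profile = mtf[mtf.shape[0] // 2, mtf.shape[1] // 2:]
print(profile[:5])
```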
Development and testing of 2-dimensional photon counter
NASA Technical Reports Server (NTRS)
1981-01-01
The development of a commercially available two-dimensional photon counter into an operational system for speckle imaging of astronomical objects is described. The system includes digital recording for field observations. The counter has a bialkali photocathode with a field size of 18 by 18 mm over which it resolves about 100 by 100 pixels. The system records photon positions as 16 bit words at rates up to 14,400 per second. Field tests at observatories verifying the operation of the system are described.
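A minimal sketch of what list-mode photon data of this kind enable: the recorded (x, y) event positions can be binned after the fact into an image of any desired integration time. The event list below is synthetic; the 100 x 100 pixel format and 14,400 events per second follow the abstract:

```python
# A hedged sketch of turning a photon event list into a 2D image by binning.
# The event coordinates here are synthetic placeholders, not decoded detector data.
import numpy as np

rng = np.random.default_rng(1)
n_events = 14_400                       # one second of data at the stated maximum rate
x = rng.integers(0, 100, n_events)      # pixel coordinates as decoded from the 16-bit words
y = rng.integers(0, 100, n_events)

image, _, _ = np.histogram2d(x, y, bins=(100, 100), range=[[0, 100], [0, 100]])
print(image.sum(), image.shape)         # all events accounted for in a 100 x 100 frame
```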
[Development of a system for ultrasonic three-dimensional reconstruction of fetus].
Baba, K
1989-04-01
We have developed a system for ultrasonic three-dimensional (3-D) fetus reconstruction using computers. Either a real-time linear array probe or a convex array probe of an ultrasonic scanner was mounted on a position sensor arm of a manual compound scanner in order to detect the position of the probe. A microcomputer was used to convert the position information into an image that could be recorded on video tape. This image was superimposed on the ultrasonic tomographic image simultaneously with a superimposer and recorded on a video tape. Fetuses in utero were scanned in seven cases. More than forty ultrasonic section images on the video tape were fed into a minicomputer. The shape of the fetus was displayed three-dimensionally by means of computer graphics. The computer-generated display produced a 3-D image of the fetus and showed the usefulness and accuracy of this system. Since it took only a few seconds for data collection by ultrasonic inspection, fetal movement did not adversely affect the results. Data input took about ten minutes for 40 slices, and 3-D reconstruction and display took about two minutes. The system made it possible to observe and record the 3-D image of the fetus in utero non-invasively and therefore is expected to make it much easier to obtain a 3-D picture of the fetus in utero.
Display system for imaging scientific telemetric information
NASA Technical Reports Server (NTRS)
Zabiyakin, G. I.; Rykovanov, S. N.
1979-01-01
A system for imaging scientific telemetric information, based on the M-6000 minicomputer and the SIGD graphic display, is described. Two-dimensional graphic display of telemetric information, and interaction with the computer in the analysis and processing of the telemetric parameters displayed on the screen, are provided. The running parameter information output method is presented. User capabilities in the analysis and processing of telemetric information imaged on the display screen and the user language are discussed and illustrated.
Fujisaki, K; Yokota, H; Nakatsuchi, H; Yamagata, Y; Nishikawa, T; Udagawa, T; Makinouchi, A
2010-01-01
A three-dimensional (3D) internal structure observation system based on serial sectioning was developed from an ultrasonic elliptical vibration cutting device and an optical microscope combined with a high-precision positioning device. For bearing steel samples, the cutting device created mirrored surfaces suitable for optical metallography, even for long cutting distances during serial sectioning of these ferrous materials. Serial sectioning progressed automatically by means of numerical control. The system was used to observe inclusions in steel materials on a scale of several tens of micrometers. Three specimens containing inclusions were prepared from bearing steels. These inclusions could be detected as two-dimensional (2D) sectional images with a resolution better than 1 μm. A 3D model of each inclusion was reconstructed from the 2D serial images. The microscopic 3D models had sharp edges and complicated surfaces.
Performance modeling of terahertz (THz) and millimeter waves (mmW) pupil plane imaging
NASA Astrophysics Data System (ADS)
Mohammadian, Nafiseh; Furxhi, Orges; Zhang, Lei; Offermans, Peter; Ghazi, Galia; Driggers, Ronald
2018-05-01
Terahertz- (THz) and millimeter-wave sensors are becoming more important in industrial, security, medical, and defense applications. A major problem in these sensing areas is the resolution, sensitivity, and visual acuity of the imaging systems. There are different fundamental parameters in designing a system that have significant effects on the imaging performance. The performance of THz systems can be discussed in terms of two characteristics: sensitivity and spatial resolution. New approaches for design and manufacturing of THz imagers are a vital basis for developing future applications. Photonics solutions have been at the technological forefront in THz band applications. A single scan antenna does not provide reasonable resolution, sensitivity, and speed. An effective approach to imaging is placing a high-performance antenna in a two-dimensional antenna array to achieve higher radiation efficiency and higher resolution in the imaging systems. Here, we present the performance modeling of a pupil plane imaging system to find the resolution and sensitivity efficiency of the imaging system.
Dimensionality and noise in energy selective x-ray imaging
Alvarez, Robert E.
2013-01-01
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three-dimensional processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimensional processing for adipose tissue is a factor of two, and with the contrast agent as the third material for two or three dimensions it is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems. PMID:24320442
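The Cramér-Rao machinery behind these results can be sketched numerically. For a linearized model with sensitivity matrix A and measurement noise covariance C, the Fisher information is F = AᵀC⁻¹A and the CRLB on the coefficient covariance is F⁻¹; restricting A to fewer basis functions and comparing the bounds reproduces the qualitative conclusion that adding a dimension never decreases the variance of the shared components. The matrices below are arbitrary placeholders, not the paper's spectra or attenuation coefficients:

```python
# A minimal numerical sketch of the dimensionality/noise trade-off via the CRLB.
import numpy as np

rng = np.random.default_rng(2)
n_meas = 5                                   # number of energy bins / measurements (assumed)
A3 = rng.standard_normal((n_meas, 3))        # sensitivities for a 3-basis decomposition (placeholder)
A2 = A3[:, :2]                               # the same system restricted to 2 basis functions
C = np.diag(rng.uniform(0.5, 1.5, n_meas))   # measurement noise covariance (placeholder)

def crlb(A, C):
    F = A.T @ np.linalg.inv(C) @ A           # Fisher information
    return np.linalg.inv(F)                  # lower bound on the estimator covariance

var2 = np.diag(crlb(A2, C))
var3 = np.diag(crlb(A3, C))[:2]
print("2-basis variances:", var2)
print("3-basis variances (same components):", var3)
print("increase factors:", var3 / var2)      # >= 1: adding a dimension never reduces variance
```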
SU-C-207A-03: Development of Proton CT Imaging System Using Thick Scintillator and CCD Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, S; Uesaka, M; Nishio, T
2016-06-15
Purpose: In the treatment planning of proton therapy, Water Equivalent Length (WEL), which is the parameter for the calculation of dose and the range of protons, is derived from the X-ray CT (xCT) image and an xCT-WEL conversion. However, an error of a few percent in the accuracy of proton range calculation through this conversion has been reported. The purpose of this study is to construct a proton CT (pCT) imaging system for an evaluation of the error. Methods: The pCT imaging system was constructed with a thick scintillator and a cooled CCD camera, which acquires the two-dimensional image of the integrated value of the scintillation light along the beam direction. The pCT image is reconstructed by the FBP method using a correction between the light intensity and the residual range of the proton beam. An experiment for the demonstration of this system was performed with a 70-MeV proton beam provided by the NIRS cyclotron. The pCT image of several objects reconstructed from the experimental data was evaluated quantitatively. Results: Three-dimensional pCT images of several objects were reconstructed experimentally. A fine structure of approximately 1 mm was clearly observed. The position resolution of the pCT image was almost the same as that of the xCT image. The error of the proton CT pixel value was up to 4%. The deterioration of image quality was caused mainly by the effect of multiple Coulomb scattering. Conclusion: We designed and constructed the pCT imaging system using a thick scintillator and a CCD camera. The system was evaluated experimentally using a 70-MeV proton beam. Three-dimensional pCT images of several objects were acquired by the system. This work was supported by JST SENTAN Grant Number 13A1101 and JSPS KAKENHI Grant Number 15H04912.
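The FBP step named in the abstract is the standard filtered back-projection; a generic, hedged illustration on a synthetic phantom with scikit-image (not the authors' WEL-corrected pCT chain) is:

```python
# A hedged illustration of filtered back-projection on a synthetic phantom.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.25, anti_aliasing=True)   # small test object
angles = np.linspace(0.0, 180.0, 90, endpoint=False)

sinogram = radon(phantom, theta=angles)          # forward projections, one per angle
reconstruction = iradon(sinogram, theta=angles)  # FBP with the default ramp filter

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print("RMS reconstruction error:", error)
```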
Volegov, P. L.; Danly, C. R.; Merrill, F. E.; ...
2015-11-24
The neutron imaging system at the National Ignition Facility is an important diagnostic tool for measuring the two-dimensional size and shape of the source of neutrons produced in the burning deuterium-tritium plasma during the stagnation phase of inertial confinement fusion implosions. Few two-dimensional projections of neutron images are available to reconstruct the three-dimensional neutron source. In our paper, we present a technique that has been developed for the 3D reconstruction of neutron and x-ray sources from a minimal number of 2D projections. Here, we present the detailed algorithms used for this characterization and the results of reconstructed sources from experimental data collected at Omega.
A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.
Noh, Dong K; Lee, Nam G; You, Joshua H
2014-01-01
This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R2 = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).
Transparent 3D display for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Hong, Jisoo
2012-11-01
Two types of transparent three-dimensional display systems applicable to augmented reality are demonstrated. One of them is a head-mounted-display-type implementation which applies a concave floating lens to virtual-mode integral imaging. Such a configuration has the advantage that the three-dimensional image can be displayed at a sufficiently far distance, resolving the accommodation conflict with the real-world scene. Incorporating a convex half mirror, which is partially transparent, instead of the concave floating lens makes it possible to implement the transparent three-dimensional display system. The other type is the projection-type implementation, which is more appropriate for general use than the head-mounted-display-type implementation. Its imaging principle is based on the well-known reflection-type integral imaging. We realize transparency by imposing partial transparency on the concave mirror array which is used as the screen of reflection-type integral imaging. Two configurations, relying on incoherent and coherent light sources, are both possible. For the incoherent configuration, we introduce a concave half mirror array, whereas the coherent one adopts a holographic optical element which replicates the functionality of the lenslet array. Though the projection-type implementation is in principle more beneficial than the head-mounted-display type, the present state of spatial light modulator technology still does not provide satisfactory visual quality for the displayed three-dimensional image. Hence we expect that the head-mounted-display-type and projection-type implementations will come to market in sequence.
Reddy, Gaddum Duemani; Kelleher, Keith; Fink, Rudy; Saggau, Peter
2009-01-01
The dynamic ability of neuronal dendrites to shape and integrate synaptic responses is the hallmark of information processing in the brain. Effectively studying this phenomenon requires concurrent measurements at multiple sites on live neurons. Significant progress has been made by optical imaging systems which combine confocal and multiphoton microscopy with inertia-free laser scanning. However, all systems developed to date restrict fast imaging to two dimensions. This severely limits the extent to which neurons can be studied, since they represent complex three-dimensional (3D) structures. Here we present a novel imaging system that utilizes a unique arrangement of acousto-optic deflectors to steer a focused ultra-fast laser beam to arbitrary locations in 3D space without moving the objective lens. As we demonstrate, this highly versatile random-access multiphoton microscope supports functional imaging of complex 3D cellular structures such as neuronal dendrites or neural populations at acquisition rates on the order of tens of kilohertz. PMID:18432198
Lin, Kao-Han; Young, Sun-Yi; Hsu, Ming-Chuan; Chan, Hsu; Chen, Yung-Yaw; Lin, Win-Li
2008-01-01
In this study, we developed a focused ultrasound (FUS) thermal therapy system with ultrasound image guidance and thermocouple temperature measurement feedback. Hydraulic position devices and computer-controlled servo motors were used to move the FUS transducer to the desired location, with the actual movement measured by a linear scale. The entire system integrated automatic position devices, FUS transducer, power amplifier, ultrasound image system, and thermocouple temperature measurement into a graphical user interface. For the treatment procedure, a thermocouple was implanted into a targeted treatment region in a tissue-mimicking phantom under ultrasound image guidance, and then the acoustic interference pattern formed by the imaging ultrasound beam and the low-power FUS beam was employed as image guidance to move the FUS transducer so that its focal zone coincided with the thermocouple tip. The thermocouple temperature rise was used to determine the sonication duration for a suitable thermal lesion when the high power was turned on, and ultrasound imaging was used to capture the thermal lesion formation. For multiple lesion formation, the FUS transducer was moved under the acoustic interference guidance to a new location and then sonicated with the same power level and duration. This system was evaluated and the results showed that it could perform two-dimensional motion control to deliver a two-dimensional thermal therapy with a small localization error of 0.5 mm. Through the user interface, the FUS transducer could be moved to heat the target region with the guidance of the ultrasound image and the acoustic interference pattern. The preliminary phantom experimental results demonstrated that the system could achieve the desired treatment plan satisfactorily.
Granero, Luis; Zalevsky, Zeev; Micó, Vicente
2011-04-01
We present a new implementation capable of producing two-dimensional (2D) superresolution (SR) imaging in a single exposure by aperture synthesis in digital lensless Fourier holography when using angular multiplexing provided by a vertical cavity surface-emitting laser source array. The system performs the recording in a single CCD snapshot of a multiplexed hologram coming from the incoherent addition of multiple subholograms, where each contains information about a different 2D spatial frequency band of the object's spectrum. Thus, a set of nonoverlapping bandpass images of the input object can be recovered by Fourier transformation (FT) of the multiplexed hologram. The SR is obtained by coherent addition of the information contained in each bandpass image while generating an enlarged synthetic aperture. Experimental results demonstrate improvement in resolution and image quality.
A two-dimensional intensified photodiode array for imaging spectroscopy
NASA Technical Reports Server (NTRS)
Tennyson, P. D.; Dymond, K.; Moos, H. W.; Feldman, P. D.; Mackey, E. F.
1986-01-01
The Johns Hopkins University is currently developing an instrument to fly aboard NASA's Space Shuttle as a Spartan payload in the late 1980s. This Spartan free flyer will obtain spatially resolved spectra of faint extended emission line objects in the wavelength range 750-1150 A at about 2-A resolution. The use of two-dimensional photon counting detectors will give simultaneous coverage of the 400 A spectral range and the 9 arc-minute spatial resolution along the spectrometer slit. The progress towards the flight detector is reported here with preliminary results from a laboratory breadboard detector, and a comparison with the one-dimensional detector developed for the Hopkins Ultraviolet Telescope. A hardware digital centroiding algorithm has been successfully implemented. The system is ultimately capable of 15-micron resolution in two dimensions at the image plane and can handle continuous counting rates of up to 8000 counts/s.
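The centroiding idea can be shown in software (the flight system does it in hardware): the charge cloud from a single photon event spans a few pixels, and its sub-pixel position is the intensity-weighted centroid of the event window. The 3 x 3 window below is synthetic:

```python
# A minimal software analogue of digital event centroiding; the event window is synthetic.
import numpy as np

def event_centroid(window):
    """Intensity-weighted centroid of a small 2D event window."""
    total = window.sum()
    ys, xs = np.indices(window.shape)
    return (ys * window).sum() / total, (xs * window).sum() / total

event = np.array([[0, 1, 0],
                  [2, 8, 3],
                  [0, 2, 1]], dtype=float)
print(event_centroid(event))   # sub-pixel (row, column) position of the photon event
```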
Two-dimensional real-time imaging system for subtraction angiography using an iodine filter
NASA Astrophysics Data System (ADS)
Umetani, Keiji; Ueda, Ken; Takeda, Tohoru; Anno, Izumi; Itai, Yuji; Akisada, Masayoshi; Nakajima, Teiichi
1992-01-01
A new type of subtraction imaging system was developed using an iodine filter and a single-energy broad bandwidth monochromatized x ray. The x-ray images of coronary arteries made after intravenous injection of a contrast agent are enhanced by an energy-subtraction technique. Filter chopping of the x-ray beam switches energies rapidly, so that a nearly simultaneous pair of filtered and nonfiltered images can be made. By using a high-speed video camera, a pair of two 512 × 512 pixel images can be obtained within 9 ms. Three hundred eighty-four images (raw data) are stored in a 144-Mbyte frame memory. After phantom studies, in vivo subtracted images of coronary arteries in dogs were obtained at a rate of 15 images/s.
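A purely illustrative sketch of the subtraction step follows: two nearly simultaneous frames, one with and one without the iodine filter, differ in their sensitivity to iodine, so a logarithmic subtraction cancels the common background while retaining the vessel signal. The attenuation coefficients used below are arbitrary stand-ins, not measured values for this system:

```python
# A hedged, purely illustrative sketch of log subtraction of filtered and non-filtered frames.
import numpy as np

rng = np.random.default_rng(3)
background = rng.uniform(0.6, 1.0, (512, 512))            # transmission without contrast agent
vessel = np.zeros((512, 512)); vessel[250:262, :] = 1.0   # mask of an iodinated vessel

I_unfiltered = background * np.exp(-1.2 * vessel)   # frame assumed more sensitive to iodine
I_filtered   = background * np.exp(-0.4 * vessel)   # frame assumed less sensitive to iodine

subtracted = np.log(I_filtered) - np.log(I_unfiltered)    # common background cancels exactly
print(subtracted[255, 0], subtracted[0, 0])                # ~0.8 inside the vessel, 0 outside
```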
NASA Astrophysics Data System (ADS)
Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.
2016-12-01
A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied for the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques have been employed simultaneously using an RGB camera and a color encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speeds. The potential of the proposed methodology has been employed for the analysis of large displacements during contact experiments in a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology for dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.
von Diezmann, Alex; Shechtman, Yoav; Moerner, W. E.
2017-01-01
Single-molecule super-resolution fluorescence microscopy and single-particle tracking are two imaging modalities that illuminate the properties of cells and materials on spatial scales down to tens of nanometers, or with dynamical information about nanoscale particle motion in the millisecond range, respectively. These methods generally use wide-field microscopes and two-dimensional camera detectors to localize molecules to much higher precision than the diffraction limit. Given the limited total photons available from each single-molecule label, both modalities require careful mathematical analysis and image processing. Much more information can be obtained about the system under study by extending to three-dimensional (3D) single-molecule localization: without this capability, visualization of structures or motions extending in the axial direction can easily be missed or confused, compromising scientific understanding. A variety of methods for obtaining both 3D super-resolution images and 3D tracking information have been devised, each with their own strengths and weaknesses. These include imaging of multiple focal planes, point-spread-function engineering, and interferometric detection. These methods may be compared based on their ability to provide accurate and precise position information of single-molecule emitters with limited photons. To successfully apply and further develop these methods, it is essential to consider many practical concerns, including the effects of optical aberrations, field-dependence in the imaging system, fluorophore labeling density, and registration between different color channels. Selected examples of 3D super-resolution imaging and tracking are described for illustration from a variety of biological contexts and with a variety of methods, demonstrating the power of 3D localization for understanding complex systems. PMID:28151646
NASA Astrophysics Data System (ADS)
Sun, Changchun; Chen, Zhongtang; Xu, Qicheng
2017-12-01
An original three-dimensional (3D) smooth continuous chaotic system and its mirror-image system with eight common parameters are constructed and a pair of symmetric chaotic attractors can be generated simultaneously. Basic dynamical behaviors of two 3D chaotic systems are investigated respectively. A double-scroll chaotic attractor by connecting the pair of mutual mirror-image attractors is generated via a novel planar switching control approach. Chaos can also be controlled to a fixed point, a periodic orbit and a divergent orbit respectively by switching between two chaotic systems. Finally, an equivalent 3D chaotic system by combining two 3D chaotic systems with a switching law is designed by utilizing a sign function. Two circuit diagrams for realizing the double-scroll attractor are depicted by employing an improved module-based design approach.
Pagoulatos, N; Edwards, W S; Haynor, D R; Kim, Y
1999-12-01
The use of stereotactic systems has been one of the main approaches for image-based guidance of the surgical tool within the brain. The main limitation of stereotactic systems is that they are based on preoperative images that might become outdated and invalid during the course of surgery. Ultrasound (US) is considered the most practical and cost-effective intraoperative imaging modality, but US images inherently have a low signal-to-noise ratio. Integrating intraoperative US with stereotactic systems has recently been attempted. In this paper, we present a new system for interactively registering two-dimensional US and three-dimensional magnetic resonance (MR) images. This registration is based on tracking the US probe with a dc magnetic position sensor. We have performed an extensive analysis of the errors of our system by using a custom-built phantom. The registration error between the MR and the position sensor space was found to have a mean value of 1.78 mm and a standard deviation of 0.18 mm. The registration error between US and MR space was dependent on the distance of the target point from the US probe face. For a 3.5-MHz phased one-dimensional array transducer and a depth of 6 cm, the mean value of the registration error was 2.00 mm and the standard deviation was 0.75 mm. The registered MR images were reconstructed using either zeroth-order or first-order interpolation. The ease of use and the interactive nature of our system (approximately 6.5 frames/s for 344 x 310 images and first-order interpolation on a Pentium II 450 MHz) demonstrates its potential to be used in the operating room.
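Tracked-probe registration of this kind boils down to composing homogeneous transforms: US pixel to sensor (probe calibration), sensor to transmitter/world (tracker reading), world to MR (registration). The sketch below uses made-up pure translations simply to show the chain; the real transforms include rotations and scaling:

```python
# A minimal sketch (with made-up matrices) of the coordinate chain used by tracked-probe systems.
import numpy as np

def translation(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

T_image_to_sensor = translation(1.0, 2.0, 0.0)    # US probe calibration (assumed, fixed)
T_sensor_to_world = translation(10.0, -5.0, 3.0)  # reported by the DC magnetic tracker (assumed)
T_world_to_mr     = translation(-2.0, 0.0, 1.5)   # MR-to-tracker registration (assumed)

p_us = np.array([12.3, 45.6, 0.0, 1.0])           # a pixel, scaled to mm, in the US plane (z = 0)
p_mr = T_world_to_mr @ T_sensor_to_world @ T_image_to_sensor @ p_us
print(p_mr[:3])                                    # the corresponding MR-space coordinate
```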
Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system for viewing angle and viewing zone improvement. In the tracking integral imaging system, the pickup angles in each elemental lens in the lens array are determined by the positions of the viewers, which means the elemental images can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light emitting diodes which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the multiple viewers' positions and the elemental images. We analyzed the relationship and the conditions for the multiple viewers, and verified them by the implementation of a two-viewer tracking integral imaging system.
1975-09-30
systems a linear model results in an object f being mapped into an image g by a point spread function matrix H. Thus, with noise n, g = Hf + n (1). The simplest ... linear models for imaging systems are given by space-invariant point spread functions (SIPSF), in which case H is block circulant. If the linear model is ... Ij, ..., k-IM1 is a set of two-dimensional indices, each distinct and prior to k. Modeling Procedure: To derive the linear predictor (block LP of figure
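The quoted model can be made concrete with a tiny example: a circulant H built from a 1D shift-invariant point spread function, applied to an object f with additive noise, as in Eq. (1). The 2D block-circulant case is the same construction applied block-wise:

```python
# A small numerical illustration of the linear imaging model g = Hf + n with circulant H.
import numpy as np

rng = np.random.default_rng(4)
n = 8
psf = np.array([0.25, 0.5, 0.25])                 # a toy shift-invariant point spread function
H = np.zeros((n, n))
for i in range(n):
    for k, w in enumerate(psf):
        H[i, (i + k - 1) % n] = w                 # circulant structure (wrap-around blur)

f = rng.random(n)                                 # object
noise = 0.01 * rng.standard_normal(n)
g = H @ f + noise                                 # the recorded image
print(np.round(g, 3))
```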
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu
2017-05-01
In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system. We also introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key concept of the proposed algorithm is to evaluate the possibility that an acoustic source exists within a search region by using the geometric distance between each sensor element of the acoustic detector and the corresponding search region, denoted by a grid. We derive the mathematical equation for the magnitude of this existence possibility, which can be used to implement the proposed algorithm. We derive the mathematical equations of the proposed algorithm for both the one-dimensional and the two-dimensional sensing array cases. k-Wave simulation data are used to compare the image quality of the proposed algorithm with that of the conventional algorithm, in which the FFT is necessarily used. From the k-Wave MATLAB simulation results, we demonstrate the effectiveness of the proposed reconstruction algorithm.
NASA Astrophysics Data System (ADS)
Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus
2014-03-01
Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in calculations of effective doses from these examinations. In the study the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images, and 3. using the mean kV and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.
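The three bookkeeping schemes being compared can be sketched as follows; the organ-dose conversion itself is done by PCXMC and is only represented here by a placeholder coefficient, so the numbers are illustrative, not dosimetry:

```python
# A hedged sketch of the dose bookkeeping: per-projection DAP_i * c(kV_i, angle_i) versus
# the simplified schemes using mean kV and an evenly divided total DAP. The conversion
# coefficient below is a placeholder, not a PCXMC result.
import numpy as np

rng = np.random.default_rng(5)
n_proj = 200
kv = rng.uniform(70, 90, n_proj)                 # per-projection tube voltages (illustrative)
dap = rng.uniform(0.8, 1.2, n_proj)              # per-projection DAP values (illustrative)
angles = np.linspace(0, 200, n_proj)

def conversion(kv, angle):
    # Placeholder for the Monte Carlo DAP-to-effective-dose conversion coefficient.
    return 1e-4 * (1 + 0.01 * (kv - 80)) * (1 + 0.1 * np.cos(np.radians(angle)))

dose_full = np.sum(dap * conversion(kv, angles))                          # method 1
dose_mean = np.sum((dap.sum() / n_proj) * conversion(kv.mean(), angles))  # method 2
subset = angles[::10]                                                     # method 3: fewer angles
dose_subset = np.sum((dap.sum() / len(subset)) * conversion(kv.mean(), subset))

print(dose_full, dose_mean, dose_subset)
```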
Suenaga, Hideyuki; Hoang Tran, Huy; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Mori, Yoshiyuki; Takato, Tsuyoshi
2013-01-01
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye. PMID:23703710
Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel
2014-10-01
An automated diagnosis system that uses the complex continuous wavelet transform (CWT) to process digital retina images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of the phase angles found in each one-dimensional signal at each level of CWT decomposition is relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments, and the results from the SVM show that the proposed approach gives better results than those obtained by other methods based on the correct classification rate, sensitivity and specificity.
Witcomb, Luci A; Czupryna, Julie; Francis, Kevin P; Frankel, Gad; Taylor, Peter W
2017-08-15
In contrast to two-dimensional bioluminescence imaging, three-dimensional diffuse light imaging tomography with integrated micro-computed tomography (DLIT-μCT) has the potential to realise spatial variations in infection patterns when imaging experimental animals dosed with derivatives of virulent bacteria carrying bioluminescent reporter genes such as the lux operon from the bacterium Photorhabdus luminescens. The method provides an opportunity to precisely localise the bacterial infection sites within the animal and enables the generation of four-dimensional movies of the infection cycle. Here, we describe the use of the PerkinElmer IVIS SpectrumCT in vivo imaging system to investigate progression of lethal systemic infection in neonatal rats following colonisation of the gastrointestinal tract with the neonatal pathogen Escherichia coli K1. We confirm previous observations that these bacteria stably colonize the colon and small intestine following feeding of the infectious dose from a micropipette; invading bacteria migrate across the gut epithelium into the blood circulation and establish foci of infection in major organs, including the brain. DLIT-μCT revealed novel multiple sites of colonisation within the alimentary canal, including the tongue, oesophagus and stomach, with penetration of the non-keratinised oesophageal epithelial surface, providing strong evidence of a further major site for bacterial dissemination. We highlight technical issues associated with imaging of infections in newborn rat pups and show that the whole-body and organ bioburden correlates with disease severity. Copyright © 2017 Elsevier Inc. All rights reserved.
An automatic panoramic image reconstruction scheme from dental computed tomography images
Papakosta, Thekla K; Savva, Antonis D; Economopoulos, Theodore L; Gröhndal, H G
2017-01-01
Objectives: Panoramic images of the jaws are extensively used for dental examinations and/or surgical planning because they provide a general overview of the patient's maxillary and mandibular regions. Panoramic images are two-dimensional projections of three-dimensional (3D) objects. Therefore, it should be possible to reconstruct them from 3D radiographic representations of the jaws, produced by CBCT scanning, obviating the need for additional exposure to X-rays, should there be a need of panoramic views. The aim of this article is to present an automated method for reconstructing panoramic dental images from CBCT data. Methods: The proposed methodology consists of a series of sequential processing stages for detecting a fitting dental arch which is used for projecting the 3D information of the CBCT data to the two-dimensional plane of the panoramic image. The detection is based on a template polynomial which is constructed from a training data set. Results: A total of 42 CBCT data sets of real clinical pre-operative and post-operative representations from 21 patients were used. Eight data sets were used for training the system and the rest for testing. Conclusions: The proposed methodology was successfully applied to CBCT data sets, producing corresponding panoramic images, suitable for examining pre-operatively and post-operatively the patients' maxillary and mandibular regions. PMID:28112548
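A hedged sketch of the central geometric step, fitting a polynomial dental arch in an axial slice and unrolling the volume along it, is shown below; the arch points and CBCT volume are synthetic, and the real method derives the arch from a trained template polynomial rather than from hand-picked points:

```python
# A hedged sketch of arch fitting and panoramic sampling; the data are synthetic placeholders.
import numpy as np

volume = np.random.rand(64, 256, 256)              # placeholder CBCT volume (z, y, x)
x_pts = np.linspace(40, 215, 30)                   # assumed detected arch points (x)
y_pts = 60 + 0.004 * (x_pts - 128) ** 2            # a parabola-like arch (y)

coeffs = np.polyfit(x_pts, y_pts, deg=4)           # template-style polynomial fit
arch_x = np.linspace(x_pts.min(), x_pts.max(), 512)
arch_y = np.polyval(coeffs, arch_x)

# Sample the volume along the arch for every z to build a simple panoramic projection.
panorama = volume[:, np.round(arch_y).astype(int), np.round(arch_x).astype(int)]
print(panorama.shape)                              # (64, 512): height x unrolled arch length
```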
Scolaro, Loretta; Lorenser, Dirk; Madore, Wendy-Julie; Kirk, Rodney W.; Kramer, Anne S.; Yeoh, George C.; Godbout, Nicolas; Sampson, David D.; Boudoux, Caroline; McLaughlin, Robert A.
2015-01-01
Molecular imaging using optical techniques provides insight into disease at the cellular level. In this paper, we report on a novel dual-modality probe capable of performing molecular imaging by combining simultaneous three-dimensional optical coherence tomography (OCT) and two-dimensional fluorescence imaging in a hypodermic needle. The probe, referred to as a molecular imaging (MI) needle, may be inserted tens of millimeters into tissue. The MI needle utilizes double-clad fiber to carry both imaging modalities, and is interfaced to a 1310-nm OCT system and a fluorescence imaging subsystem using an asymmetrical double-clad fiber coupler customized to achieve high fluorescence collection efficiency. We present, to the best of our knowledge, the first dual-modality OCT and fluorescence needle probe with sufficient sensitivity to image fluorescently labeled antibodies. Such probes enable high-resolution molecular imaging deep within tissue. PMID:26137379
Two-dimensional signal processing with application to image restoration
NASA Technical Reports Server (NTRS)
Assefi, T.
1974-01-01
A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
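A toy version of this pipeline, assuming a first-order (AR(1)) signal model with known statistics for the scanned line, is sketched below with a scalar Kalman filter; the model parameters are illustrative assumptions rather than values from the report:

```python
# A hedged toy example: scan the image into a 1D sequence, model the signal as AR(1) with
# known statistics, and estimate the clean value at each sample with a scalar Kalman filter.
import numpy as np

rng = np.random.default_rng(6)
n = 500
a, q, r = 0.95, 0.05, 0.5          # AR(1) coefficient, process noise var, measurement noise var (assumed)

signal = np.zeros(n)
for k in range(1, n):
    signal[k] = a * signal[k - 1] + np.sqrt(q) * rng.standard_normal()
measured = signal + np.sqrt(r) * rng.standard_normal(n)   # the noisy scanner output

estimate = np.zeros(n)
x_hat, p = 0.0, 1.0
for k in range(n):
    x_pred, p_pred = a * x_hat, a * a * p + q              # predict
    gain = p_pred / (p_pred + r)                           # Kalman gain
    x_hat = x_pred + gain * (measured[k] - x_pred)         # update with the scanned measurement
    p = (1 - gain) * p_pred
    estimate[k] = x_hat

print("raw MSE:", np.mean((measured - signal) ** 2))
print("filtered MSE:", np.mean((estimate - signal) ** 2))
```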
NASA Astrophysics Data System (ADS)
Sheen, David M.; Fernandes, Justin L.; Tedeschi, Jonathan R.; McMakin, Douglas L.; Jones, A. Mark; Lechelt, Wayne M.; Severtsen, Ronald H.
2013-05-01
Active millimeter-wave imaging is currently being used for personnel screening at airports and other high-security facilities. The cylindrical imaging techniques used in the deployed systems are based on licensed technology developed at the Pacific Northwest National Laboratory. The cylindrical and a related planar imaging technique form three-dimensional images by scanning a diverging-beam, swept-frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images of the person being screened. The resolution, clothing penetration, and image illumination quality obtained with these techniques can be significantly enhanced through the selection of the aperture size, antenna beamwidth, center frequency, and bandwidth. The lateral resolution can be improved by increasing the center frequency, or it can be increased with a larger antenna beamwidth. The wide beamwidth approach can significantly improve illumination quality relative to a higher frequency system. Additionally, a wide antenna beamwidth allows for operation at a lower center frequency, resulting in less scattering and attenuation from the clothing. The depth resolution of the system can be improved by increasing the bandwidth. Utilization of extremely wide bandwidths of up to 30 GHz can result in depth resolution as fine as 5 mm. This wider bandwidth operation may allow for improved detection techniques based on high range resolution. In this paper, the results of an extensive imaging study that explored the advantages of using extremely wide beamwidth and bandwidth are presented, primarily for the 10-40 GHz frequency band.
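The resolution scalings invoked here can be checked with a couple of lines of arithmetic: range (depth) resolution is roughly c/(2B), and lateral resolution scales like λR/D for standoff R and aperture D. The aperture, standoff and centre frequency below are assumed values, not the system's parameters; the 30 GHz bandwidth reproduces the quoted 5 mm depth resolution:

```python
# A small worked check of the resolution scalings (illustrative numbers only).
c = 3.0e8                      # speed of light, m/s

B = 30e9                       # 30 GHz bandwidth
depth_res = c / (2 * B)        # -> 0.005 m = 5 mm, consistent with the abstract

f_center = 25e9                # an assumed centre frequency within the 10-40 GHz band
wavelength = c / f_center
D, R = 0.7, 0.5                # assumed aperture size and standoff distance, m
lateral_res = wavelength * R / D

print(f"depth resolution: {depth_res*1e3:.1f} mm, lateral resolution: {lateral_res*1e3:.1f} mm")
```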
Nakata, Norio; Suzuki, Naoki; Hattori, Asaki; Hirai, Naoya; Miyamoto, Yukio; Fukuda, Kunihiko
2012-01-01
Although widely used as a pointing device on personal computers (PCs), the mouse was originally designed for control of two-dimensional (2D) cursor movement and is not suited to complex three-dimensional (3D) image manipulation. Augmented reality (AR) is a field of computer science that involves combining the physical world and an interactive 3D virtual world; it represents a new 3D user interface (UI) paradigm. A system for 3D and four-dimensional (4D) image manipulation has been developed that uses optical tracking AR integrated with a smartphone remote control. The smartphone is placed in a hard case (jacket) with a 2D printed fiducial marker for AR on the back. It is connected to a conventional PC with an embedded Web camera by means of WiFi. The touch screen UI of the smartphone is then used as a remote control for 3D and 4D image manipulation. Using this system, the radiologist can easily manipulate 3D and 4D images from computed tomography and magnetic resonance imaging in an AR environment with high-quality image resolution. Pilot assessment of this system suggests that radiologists will be able to manipulate 3D and 4D images in the reading room in the near future. Supplemental material available at http://radiographics.rsna.org/lookup/suppl/doi:10.1148/rg.324115086/-/DC1.
Newton, Peter O; Hahn, Gregory W; Fricka, Kevin B; Wenger, Dennis R
2002-04-15
A retrospective radiographic review of 31 patients with congenital spine abnormalities who underwent conventional radiography and advanced imaging studies was conducted. To analyze the utility of three-dimensional computed tomography with multiplanar reformatted images for congenital spine anomalies, as compared with plain radiographs and axial two-dimensional computed tomography imaging. Conventional radiographic imaging for congenital spine disorders often are difficult to interpret because of the patient's small size, the complexity of the disorder, a deformity not in the plane of the radiographs, superimposed structures, and difficulty in forming a mental three-dimensional image. Multiplanar reformatted and three-dimensional computed tomographic imaging offers many potential advantages for defining congenital spine anomalies including visualization of the deformity in any plane, from any angle, with the overlying structures subtracted. The imaging studies of patients who had undergone a three-dimensional computed tomography for congenital deformities of the spine between 1992 and 1998 were reviewed (31 cases). All plain radiographs and axial two-dimensional computed tomography images performed before the three-dimensional computed tomography were reviewed and the findings documented. This was repeated for the three-dimensional reconstructions and, when available, the multiplanar reformatted images (15 cases). In each case, the utility of the advanced imaging was graded as one of the following: Grade A (substantial new information obtained), Grade B (confirmatory with improved visualization and understanding of the deformity), and Grade C (no added useful information obtained). In 17 of 31 cases, the multiplanar reformatted and three-dimensional images allowed identification of unrecognized malformations. In nine additional cases, the advanced imaging was helpful in better visualizing and understanding previously identified deformities. In five cases, no new information was gained. The standard and curved multiplanar reformatted images were best for defining the occiput-C1-C2 anatomy and the extent of segmentation defects. The curved multiplanar reformatted images were especially helpful in keeping the spine from "coming in" and "going out" of the plane of the image when there was significant spine deformity in the sagittal or coronal plane. The three-dimensional reconstructions proved valuable in defining failures of formation. Advanced computed tomography imaging (three-dimensional computed tomography and curved/standard multiplanar reformatted images) allows better definition of congenital spine anomalies. More than 50% of the cases showed additional abnormalities not appreciated on plain radiographs or axial two-dimensional computed tomography images. Curved multiplanar reformatted images allowed imaging in the coronal and sagittal planes of the entire deformity.
A three-dimensional quality-guided phase unwrapping method for MR elastography
NASA Astrophysics Data System (ADS)
Wang, Huifang; Weaver, John B.; Perreard, Irina I.; Doyley, Marvin M.; Paulsen, Keith D.
2011-07-01
Magnetic resonance elastography (MRE) uses accumulated phases that are acquired at multiple, uniformly spaced relative phase offsets, to estimate harmonic motion information. Heavily wrapped phase occurs when the motion is large and unwrapping procedures are necessary to estimate the displacements required by MRE. Two unwrapping methods were developed and compared in this paper. The first method is a sequentially applied approach. The three-dimensional MRE phase image block for each slice was processed by two-dimensional unwrapping followed by a one-dimensional phase unwrapping approach along the phase-offset direction. This unwrapping approach generally works well for low noise data. However, there are still cases where the two-dimensional unwrapping method fails when noise is high. In this case, the baseline of the corrupted regions within an unwrapped image will not be consistent. Instead of separating the two-dimensional and one-dimensional unwrapping in a sequential approach, an interleaved three-dimensional quality-guided unwrapping method was developed to combine both the two-dimensional phase image continuity and one-dimensional harmonic motion information. The quality of one-dimensional harmonic motion unwrapping was used to guide the three-dimensional unwrapping procedures and it resulted in stronger guidance than in the sequential method. In this work, in vivo results generated by the two methods were compared.
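The one-dimensional step of the sequential method, unwrapping each voxel's phase along the uniformly spaced phase offsets, can be illustrated with numpy's unwrap; the quality-guided three-dimensional interleaving that is the paper's contribution is not reproduced in this toy example:

```python
# A minimal sketch of 1D unwrapping along the phase-offset axis for a single voxel.
# Amplitude and sampling are chosen so that wrapping occurs but consecutive true phase
# differences stay below pi, which is the regime where 1D unwrapping succeeds.
import numpy as np

n_offsets = 8
offsets = np.arange(n_offsets) * (2 * np.pi / n_offsets)
amplitude = 3.5                                            # large motion -> wrapped phase
true_phase = amplitude * np.sin(offsets)                   # harmonic accumulated phase, one voxel
wrapped = np.angle(np.exp(1j * true_phase))                # wrap into (-pi, pi]

unwrapped = np.unwrap(wrapped)                             # 1D unwrap along the offset axis
print(np.round(true_phase, 2))
print(np.round(unwrapped, 2))                              # matches the true phase
```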
Ultrashort electron pulses as a four-dimensional diagnosis of plasma dynamics.
Zhu, P F; Zhang, Z C; Chen, L; Li, R Z; Li, J J; Wang, X; Cao, J M; Sheng, Z M; Zhang, J
2010-10-01
We report an ultrafast electron imaging system for real-time examination of ultrafast plasma dynamics in four dimensions. It consists of a femtosecond pulsed electron gun and a two-dimensional single-electron detector. The device has an unprecedented capability of acquiring a high-quality shadowgraph image with a single ultrashort electron pulse, thus permitting the measurement of irreversible processes using a single-shot scheme. In a prototype experiment of laser-induced plasma of a metal target under moderate pump intensity, we demonstrated its unique capability of acquiring high-quality shadowgraph images on a micron scale with a time resolution of a few picoseconds.
NASA Astrophysics Data System (ADS)
Khimchenko, Anna; Schulz, Georg; Deyhle, Hans; Hieber, Simone E.; Hasan, Samiul; Bikis, Christos; Schulz, Joachim; Costeur, Loïc; Müller, Bert
2016-04-01
X-ray imaging in the absorption contrast mode is an established method of visualising calcified tissues such as bone and teeth. Physically soft tissues such as brain or muscle are often imaged using magnetic resonance imaging (MRI). However, the spatial resolution of MRI is insufficient for identifying individual biological cells within three-dimensional tissue. X-ray grating interferometry (XGI) has advantages for the investigation of soft tissues or the simultaneous three-dimensional visualisation of soft and hard tissues. Since laboratory microtomography (μCT) systems have better accessibility than tomography set-ups at synchrotron radiation facilities, a great deal of effort has been invested in optimising XGI set-ups for conventional μCT systems. In this conference proceeding, we present how a two-grating interferometer is incorporated into a commercially available nanotom m (GE Sensing and Inspection Technologies GmbH) μCT system to extend its capabilities toward phase contrast. We intend to demonstrate superior contrast in spiders (Hogna radiata (Fam. Lycosidae) and Xysticus erraticus (Fam. Thomisidae)), as well as the simultaneous visualisation of hard and soft tissues. XGI is an imaging modality that provides quantitative data, and visualisation is an important part of biomimetics; consequently, hard X-ray imaging provides a sound basis for bioinspiration, bioreplication and biomimetics and allows for the quantitative comparison of biofabricated products with their natural counterparts.
Real-time broadband terahertz spectroscopic imaging by using a high-sensitivity terahertz camera
NASA Astrophysics Data System (ADS)
Kanda, Natsuki; Konishi, Kuniaki; Nemoto, Natsuki; Midorikawa, Katsumi; Kuwata-Gonokami, Makoto
2017-02-01
Terahertz (THz) imaging has a strong potential for applications because many molecules have fingerprint spectra in this frequency region. Spectroscopic imaging in the THz region is a promising technique to fully exploit this characteristic. However, the performance of conventional techniques is restricted by the requirement of multidimensional scanning, which implies an image data acquisition time of several minutes. In this study, we propose and demonstrate a novel broadband THz spectroscopic imaging method that enables real-time image acquisition using a high-sensitivity THz camera. By exploiting the two-dimensionality of the detector, a broadband multi-channel spectrometer near 1 THz was constructed with a reflection-type diffraction grating and a high-power THz source. To demonstrate the advantages of the developed technique, we performed molecule-specific imaging and high-speed acquisition of two-dimensional (2D) images. Two different sugar molecules (lactose and D-fructose) were identified with fingerprint spectra, and their distributions in one-dimensional space were obtained at a fast video rate (15 frames per second). Combined with one-dimensional (1D) mechanical scanning of the sample, two-dimensional molecule-specific images can be obtained in only a few seconds. Our method can be applied in various important fields such as security and biomedicine.
Perfect 3-D movies and stereoscopic movies on TV and projection screens: an appraisement
NASA Astrophysics Data System (ADS)
Klein, Susanne; Dultz, Wolfgang
1990-09-01
Since the invention of stereoscopy (WHEATSTONE 1838), reasons for and against 3-dimensional images have occupied the literature, but there has never been much doubt about the preference for autostereoscopic systems showing a scene which is 3-dimensional and true to life from all sides (perfect 3-dimensional image, HESSE 1939), especially since most stereoscopic movies of the past show serious imperfections with respect to image quality and technical operation. Leaving aside that no convincing perfect 3D-TV system is in sight, there are properties of the stereoscopic movie which are advantageous to certain representations on TV and important for the 3-dimensional motion picture. In this paper we investigate the influence of apparent motions of 3-dimensional images and classify the different projection systems with respect to the presence and absence of these spectacular illusions. Apparent motions bring dramatic effects into stereoscopic movies which cannot be created with perfect 3-dimensional systems. In this study we describe their applications and limits for television.
High-resolution echocardiography
NASA Technical Reports Server (NTRS)
Nathan, R.
1979-01-01
High-resolution computer-aided ultrasound system provides two- and three-dimensional images of the beating heart from many angles. System provides means for determining whether small blood vessels around the heart are blocked or if the heart wall is moving normally without interference of dead and noncontracting muscle tissue.
3-D video techniques in endoscopic surgery.
Becker, H; Melzer, A; Schurr, M O; Buess, G
1993-02-01
Three-dimensional visualisation of the operative field is an important requisite for precise and fast handling of open surgical operations. Up to now it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimally invasive interventions requires endoscopic suturing and ligatures of larger vessels, which are difficult to perform without the impression of space. Three-dimensional vision therefore may decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992 a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres like mobilisation of organs, preparation in the deep space and suture techniques. The 3-D system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany).
Dual-detection confocal fluorescence microscopy: fluorescence axial imaging without axial scanning.
Lee, Dong-Ryoung; Kim, Young-Duk; Gweon, Dae-Gab; Yoo, Hongki
2013-07-29
We propose a new method for high-speed, three-dimensional (3-D) fluorescence imaging, which we refer to as dual-detection confocal fluorescence microscopy (DDCFM). In contrast to conventional beam-scanning confocal fluorescence microscopy, where the focal spot must be scanned either optically or mechanically over a sample volume to reconstruct a 3-D image, DDCFM can obtain the depth of a fluorescent emitter without depth scanning. DDCFM comprises two photodetectors, each with a pinhole of different size, in the confocal detection system. Axial information on fluorescent emitters can be measured by the axial response curve through the ratio of intensity signals. DDCFM can rapidly acquire a 3-D fluorescent image from a single two-dimensional scan with less phototoxicity and photobleaching than confocal fluorescence microscopy because no mechanical depth scans are needed. We demonstrated the feasibility of the proposed method by phantom studies.
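The depth estimate in DDCFM comes from the ratio of the two pinhole signals. The sketch below assumes a synthetic, monotonic calibration curve and hypothetical variable names to illustrate how such a ratio could be inverted to an axial position; it is not the authors' implementation.

```python
import numpy as np

# Hypothetical calibration of the axial response: ratio of the two pinhole
# signals (small/large) versus axial position of a fluorescent bead.
z_cal = np.linspace(-5.0, 5.0, 101)            # microns (assumed range)
ratio_cal = 1.0 / (1.0 + (z_cal / 2.0) ** 2)   # placeholder calibration curve

def depth_from_ratio(i_small, i_large, z=z_cal, r=ratio_cal):
    """Estimate emitter depth from the two detector intensities by
    inverting a measured (here: synthetic) calibration curve."""
    ratio = i_small / i_large
    # restrict to one monotonic branch of the calibration curve
    half = z >= 0
    return np.interp(ratio, r[half][::-1], z[half][::-1])
```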
2014-09-01
The goal of this research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system. The work has (i) developed time-of-flight extraction algorithms to perform USCT, (ii) been developing image reconstruction algorithms for USCT, and (iii) developed ...
Rybicki, F J; Hrovat, M I; Patz, S
2000-09-01
We have proposed a two-dimensional PERiodic-Linear (PERL) magnetic encoding field geometry B(x,y) = g_y · y · cos(q_x · x) and a magnetic resonance imaging pulse sequence which incorporates two fields to image a two-dimensional spin density: a standard linear gradient in the x dimension, and the PERL field. Because of its periodicity, the PERL field produces a signal where the phase of the two dimensions is functionally different. The x dimension is encoded linearly, but the y dimension appears as the argument of a sinusoidal phase term. Thus, the time-domain signal and image spin density are not related by a two-dimensional Fourier transform. They are related by a one-dimensional Fourier transform in the x dimension and a new Bessel function integral transform (the PERL transform) in the y dimension. The inverse of the PERL transform provides a reconstruction algorithm for the y dimension of the spin density from the signal space. To date, the inverse transform has been computed numerically by a Bessel function expansion over its basis functions. This numerical solution used a finite sum to approximate an infinite summation and thus introduced a truncation error. This work analytically determines the basis functions for the PERL transform and incorporates them into the reconstruction algorithm. The improved algorithm is demonstrated by (1) direct comparison between the numerically and analytically computed basis functions, and (2) reconstruction of a known spin density. The new solution for the basis functions also lends proof of the system function for the PERL transform under specific conditions.
The evolution of image-guided lumbosacral spine surgery.
Bourgeois, Austin C; Faulkner, Austin R; Pasciak, Alexander S; Bradley, Yong C
2015-04-01
Techniques and approaches of spinal fusion have considerably evolved since their first description in the early 1900s. The incorporation of pedicle screw constructs into lumbosacral spine surgery is among the most significant advances in the field, offering immediate stability and decreased rates of pseudarthrosis compared to previously described methods. However, early studies describing pedicle screw fixation, and numerous studies thereafter, have demonstrated clinically significant sequelae of inaccurate surgical fusion hardware placement. A number of image guidance systems have been developed to reduce morbidity from hardware malposition in increasingly complex spine surgeries. Advanced image guidance systems such as intraoperative stereotaxis improve the accuracy of pedicle screw placement using a variety of surgical approaches; however, their clinical indications and clinical impact remain debated. Beginning with intraoperative fluoroscopy, this article describes the evolution of image-guided lumbosacral spinal fusion, emphasizing two-dimensional (2D) and three-dimensional (3D) navigational methods.
Wedge-and-strip anodes for centroid-finding position-sensitive photon and particle detectors
NASA Technical Reports Server (NTRS)
Martin, C.; Jelinsky, P.; Lampton, M.; Malina, R. F.
1981-01-01
The paper examines geometries employing position-dependent charge partitioning to obtain a two-dimensional position signal from each detected photon or particle. The geometries require only three or four anode electrodes and signal paths; the resulting images have little distortion, and resolution is not limited by thermal noise. An analysis of the geometrical image nonlinearity between event centroid location and the charge partition ratios is presented. In addition, fabrication and testing of two wedge-and-strip anode systems are discussed. Images obtained with EUV radiation and microchannel plates verify the predicted performance, with further resolution improvements achieved by adopting low-noise signal circuitry. Also discussed are the designs of practical X-ray, EUV, and charged-particle imaging systems.
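A commonly quoted decoding for a three-electrode wedge-and-strip anode maps the charge fractions collected on the wedge and strip electrodes to x and y. The sketch below uses that generic formulation, with scale and offset left to the specific anode geometry; it is not taken from the paper itself.

```python
import numpy as np

def wedge_strip_position(qw, qs, qz):
    """Centroid position from charges collected on the wedge (W), strip (S)
    and zigzag (Z) electrodes of a three-electrode wedge-and-strip anode.

    A commonly quoted decoding is x ~ 2*Qw/Qtot, y ~ 2*Qs/Qtot; the exact
    scale and offset depend on the anode geometry, so treat this as a sketch.
    """
    qw, qs, qz = map(np.asarray, (qw, qs, qz))
    qtot = qw + qs + qz
    x = 2.0 * qw / qtot
    y = 2.0 * qs / qtot
    return x, y
```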
Automated macromolecular crystal detection system and method
Christian, Allen T [Tracy, CA; Segelke, Brent [San Ramon, CA; Rupp, Bernard [Livermore, CA; Toppani, Dominique [Fontainebleau, FR
2007-06-05
An automated method and system for detecting macromolecular crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected from the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are subsequently geometrically evaluated with respect to each other to identify any crystal-like qualities such as, for example, parallel lines facing each other, similarity in length, and relative proximity. From this evaluation a determination is made as to whether crystals are present in each image.
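The sketch below mimics the overall pipeline (edge detection, line-segment extraction, pairwise geometric checks) using scikit-image's Canny detector and probabilistic Hough transform as stand-ins for the patented phase-congruency edge function; thresholds and tolerances are illustrative only.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

def looks_like_crystal(gray_image, angle_tol_deg=10.0, length_tol=0.3):
    """Rough sketch of the detection idea: find edges, break them into line
    segments, and flag images containing pairs of nearly parallel segments
    of similar length (a crystal-like cue).

    Note: the patented method scores edges with a phase-congruency function;
    Canny is used here only as a readily available stand-in.
    """
    edges = canny(gray_image, sigma=2.0)
    segments = probabilistic_hough_line(edges, threshold=10,
                                        line_length=15, line_gap=3)

    def angle(seg):
        (x0, y0), (x1, y1) = seg
        return np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 180.0

    def length(seg):
        (x0, y0), (x1, y1) = seg
        return np.hypot(x1 - x0, y1 - y0)

    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            da = abs(angle(segments[i]) - angle(segments[j]))
            da = min(da, 180.0 - da)
            li, lj = length(segments[i]), length(segments[j])
            if da < angle_tol_deg and abs(li - lj) < length_tol * max(li, lj):
                return True
    return False
```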
NASA Astrophysics Data System (ADS)
Johnson, Kristina Mary
In 1973 the computerized tomography (CT) scanner revolutionized medical imaging. This machine can isolate and display, in two-dimensional cross-sections, internal lesions and organs previously impossible to visualize. The possibility of three-dimensional imaging, however, is not yet exploited by present tomographic systems. Using multiple-exposure holography, three-dimensional displays can be synthesized from two-dimensional CT cross-sections. A multiple-exposure hologram is an incoherent superposition of many individual holograms. Intuitively it is expected that holograms recorded with equal energy will reconstruct images with equal brightness. It is found, however, that holograms recorded first are brighter than holograms recorded later in the superposition. This phenomenon is called Holographic Reciprocity Law Failure (HRLF). Computer simulations of latent image formation in multiple-exposure holography are one of the methods used to investigate HRLF. These simulations indicate that it is the time between individual exposures in the multiple-exposure hologram that is responsible for HRLF. This physical parameter introduces an asymmetry into the latent image formation process that favors the signal of previously recorded holograms over holograms recorded later in the superposition. The origin of this asymmetry lies in the dynamics of latent image formation, and in particular in the decay of single-atom latent image specks, which have lifetimes that are short compared to typical times between exposures. An analytical model is developed for a double-exposure hologram that predicts a decrease in the brightness of the second exposure as compared to the first exposure as the time between exposures increases. These results are consistent with the computer simulations. Experiments investigating the influence of this parameter on the diffraction efficiency of reconstructed images in a double-exposure hologram are also found to be consistent with the computer simulations and analytical results. From this information, two techniques are presented that correct for HRLF and succeed in reconstructing multiple holographic images of CT cross-sections with equal brightness. The multiple multiple-exposure hologram is a new hologram that increases the number of equally bright images that can be superimposed on one photographic plate.
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Fang, Ting Yun; Finocchi, Rodolfo; Boctor, Emad M.
2017-03-01
Three-dimensional (3D) ultrasound imaging is becoming a standard mode for medical ultrasound diagnoses. Conventional 3D ultrasound imaging is mostly scanned either by using a two-dimensional matrix array or by motorizing a one-dimensional array in the elevation direction. However, the former system is not widely accessible due to its cost, and the latter has limited resolution and field-of-view in the elevation axis. Here, we propose a 3D ultrasound imaging system based on the synthetic tracked aperture approach, in which a robotic arm is used to provide accurate tracking and motion. While the ultrasound probe is moved by a robotic arm, each probe position is tracked and can be used to reconstruct a wider field-of-view, as there are no physical barriers that restrict the elevational scanning. At the same time, synthetic aperture beamforming provides a better resolution in the elevation axis. To synthesize the elevational information, the single focal point is regarded as the virtual element, and forward and backward delay-and-sum are applied to the radio-frequency (RF) data collected through the volume. The concept is experimentally validated using a general ultrasound phantom, and elevational resolution improvements of 2.54 and 2.13 times were measured at target depths of 20 mm and 110 mm, respectively.
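A minimal sketch of the virtual-element (focal point) delay-and-sum idea in the elevation plane is given below. The single-RF-line-per-pose geometry, the variable names and the focal depth are assumptions made to keep the example short; this is not the authors' beamformer.

```python
import numpy as np

def saft_elevation(rf, y_probe, fs, c=1540.0, z_focus=0.03,
                   y_grid=None, z_grid=None):
    """Sketch of synthetic-aperture (virtual element) delay-and-sum in the
    elevation plane.

    rf       : (n_poses, n_samples) RF lines, one per tracked probe pose
    y_probe  : (n_poses,) elevational positions reported by the robot [m]
    z_focus  : depth of the elevational focus = virtual element depth [m]
    """
    n_poses, n_samples = rf.shape
    if y_grid is None:
        y_grid = y_probe
    if z_grid is None:
        z_grid = np.arange(n_samples) * c / (2 * fs)
    image = np.zeros((len(z_grid), len(y_grid)))
    for p in range(n_poses):
        for iy, y in enumerate(y_grid):
            dy = y - y_probe[p]
            for iz, z in enumerate(z_grid):
                if z <= z_focus:
                    continue
                # two-way time: down to the virtual element, then on to the point
                t = (2 * z_focus + 2 * np.hypot(dy, z - z_focus)) / c
                k = int(round(t * fs))
                if k < n_samples:
                    image[iz, iy] += rf[p, k]
    return image
```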
Three-dimensional hydrogen microscopy using a high-energy proton probe
NASA Astrophysics Data System (ADS)
Dollinger, G.; Reichart, P.; Datzmann, G.; Hauptner, A.; Körner, H.-J.
2003-01-01
It is a challenge to measure two-dimensional or three-dimensional (3D) hydrogen profiles on a micrometer scale. Quantitative hydrogen analyses of micrometer resolution are demonstrated utilizing proton-proton scattering at a high-energy proton microprobe. The method has more than an order of magnitude better position resolution and, in addition, higher sensitivity than any other technique for 3D hydrogen analysis. This type of hydrogen imaging opens ample room to characterize microstructured materials, semiconductor devices, or objects in microbiology. The first hydrogen image obtained with a 10 MeV proton microprobe shows the hydrogen distribution of the microcapillary system present in the wing of a mayfly and demonstrates the potential of the method.
Non-Contact Optical Ultrasound Concept for Biomedical Imaging
2016-11-03
Non-Contact Optical Ultrasound Concept for Biomedical Imaging. Robert Haupt, Charles Wynn, Jonathan Fincke, Shawn Zhang, Brian Anthony ... results. Lastly, we present imaging capabilities using a non-contact laser ultrasound proof-of-concept system. Two- and three-dimensional time ... non-contact, standoff optical ultrasound has the potential to provide a fixed reference measurement capability that minimizes operator variability as
[Bone drilling simulation by three-dimensional imaging].
Suto, Y; Furuhata, K; Kojima, T; Kurokawa, T; Kobayashi, M
1989-06-01
The three-dimensional display technique has a wide range of medical applications. Pre-operative planning is one typical application: in orthopedic surgery, three-dimensional image processing has been used very successfully. We have employed this technique in pre-operative planning for orthopedic surgery, and have developed a simulation system for bone-drilling. Positive results were obtained by pre-operative rehearsal; when a region of interest is indicated by means of a mouse on the three-dimensional image displayed on the CRT, the corresponding region appears on the slice image which is displayed simultaneously. Consequently, the status of the bone-drilling is constantly monitored. In developing this system, we have placed emphasis on the quality of the reconstructed three-dimensional images, on fast processing, and on the easy operation of the surgical planning simulation.
Dimensionality and noise in energy selective x-ray imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, Robert E.
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three-dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimension processing for adipose tissue is a factor of two, and with the contrast agent as the third material the increase for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.
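A small numerical sketch of the CRLB calculation is given below, assuming a photon-counting model with independent Poisson energy bins; the spectra and basis attenuation curves are left as user-supplied placeholder arrays rather than the paper's data. Comparing the diagonal of the returned covariance for two versus three basis functions reproduces the kind of variance increase discussed above.

```python
import numpy as np

def crlb_basis_decomposition(S, F, a):
    """Cramer-Rao lower bound for basis-coefficient estimates under a
    Poisson photon-counting model with independent energy bins.

    S : (n_bins, n_energies) bin sensitivity/spectrum (counts per unit flux)
    F : (n_energies, n_basis) basis attenuation functions f_i(E)
    a : (n_basis,) line integrals of the basis coefficients
    Returns the covariance lower bound, inv(Fisher information).
    """
    # expected counts per bin and their derivatives w.r.t. the coefficients
    trans = np.exp(-F @ a)                    # (n_energies,)
    lam = S @ trans                           # (n_bins,)
    dlam = -(S * trans) @ F                   # (n_bins, n_basis)
    fisher = (dlam.T / lam) @ dlam            # sum_k dlam_i * dlam_j / lam_k
    return np.linalg.inv(fisher)
```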
Imaging hydrogen flames by two-photon, laser-induced fluorescence
NASA Technical Reports Server (NTRS)
Miles, R.; Lempert, W.; Kumar, V.; Diskin, G.
1991-01-01
A nonintrusive multicomponent imaging system is developed which can image hydrogen, hot oxygen, and air simultaneously. An ArF excimer laser is injection-locked to cover the Q1 two-photon transition in molecular hydrogen, which allows the observation of both hot oxygen and cold hydrogen. Rayleigh scattering from the water molecules occurs at the same frequency as the illuminating laser, allowing analysis of the air density. Images of ignited and nonignited hydrogen jets are recorded with a high-sensitivity gated video camera. The images permit the analysis of the turbulent hydrogen-core jet, the combustion zone, and the surrounding air, and two-dimensional spatial correlations can be made to study the turbulent structure and couplings between different regions of the flow field. The method is of interest for the study of practical combustion systems which employ hydrogen-air diffusion flames.
3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events
NASA Technical Reports Server (NTRS)
Brown, Richard; Navard, Andrew; Spruce, Joseph
2010-01-01
An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, while wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.
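A sketch of the control-point co-registration step is shown below using OpenCV; the similarity-transform model and the function names are assumptions, since the abstract does not specify how WallView's transformation equation is generated.

```python
import cv2
import numpy as np

def coregister_pair(img_move, pts_move, pts_ref):
    """Estimate a similarity transform from manually picked control points
    and re-project one frame onto the other so the pair can be viewed in
    stereo. A sketch, not the WallView workflow.

    pts_move, pts_ref : (N, 2) arrays of matching (x, y) pixel coordinates.
    """
    M, inliers = cv2.estimateAffinePartial2D(
        np.asarray(pts_move, np.float32),
        np.asarray(pts_ref, np.float32))
    h, w = img_move.shape[:2]
    aligned = cv2.warpAffine(img_move, M, (w, h))
    return aligned, M
```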
NASA Technical Reports Server (NTRS)
Revilock, Duane M., Jr.; Thesken, John C.; Schmidt, Timothy E.
2007-01-01
Ambient temperature hydrostatic pressurization tests were conducted on a composite overwrapped pressure vessel (COPV) to understand the fiber stresses in COPV components. Two three-dimensional digital image correlation systems with high-speed cameras were used in the evaluation to provide full-field displacement and strain data for each pressurization test. A few of the key findings will be discussed, including how the principal strains provided better insight into system behavior than traditional gauges, a high localized strain that was measured where gauges were not present, and the challenges of measuring curved surfaces through the 1.25 in. thick layered polycarbonate panel that protected the cameras.
Noise-free accurate count of microbial colonies by time-lapse shadow image analysis.
Ogawa, Hiroyuki; Nasu, Senshi; Takeshige, Motomu; Funabashi, Hisakage; Saito, Mikako; Matsuoka, Hideaki
2012-12-01
Microbial colonies in food matrices could be counted accurately by a novel noise-free method based on time-lapse shadow image analysis. An agar plate containing many clusters of microbial colonies and/or meat fragments was trans-illuminated to project their 2-dimensional (2D) shadow images on a color CCD camera. The 2D shadow images of every cluster distributed within a 3-mm thick agar layer were captured in focus simultaneously by means of a multiple focusing system, and were then converted to 3-dimensional (3D) shadow images. By time-lapse analysis of the 3D shadow images, it was determined whether each cluster comprised single or multiple colonies or a meat fragment. The analytical precision was high enough to be able to distinguish a microbial colony from a meat fragment, to recognize an oval image as two colonies contacting each other, and to detect microbial colonies hidden under a food fragment. The detection of hidden colonies is its outstanding performance in comparison with other systems. The present system attained accuracy for counting fewer than 5 colonies and is therefore of practical importance. Copyright © 2012 Elsevier B.V. All rights reserved.
Dual-wavelength digital holographic imaging with phase background subtraction
NASA Astrophysics Data System (ADS)
Khmaladze, Alexander; Matz, Rebecca L.; Jasensky, Joshua; Seeley, Emily; Holl, Mark M. Banaszak; Chen, Zhan
2012-05-01
Three-dimensional digital holographic microscopic phase imaging of objects that are thicker than the wavelength of the imaging light is ambiguous and results in phase wrapping. In recent years, several unwrapping methods that employ two or more wavelengths were introduced. These methods compare the phase information obtained from each of the wavelengths and extend the range of unambiguous height measurements. A straightforward dual-wavelength phase imaging method is presented which allows for a flexible tradeoff between the maximum height of the sample and the amount of noise the method can tolerate. For highly accurate phase measurements, phase unwrapping of objects with heights greater than the beat (synthetic) wavelength (i.e. the product of the original two wavelengths divided by their difference) can be achieved. Consequently, three-dimensional measurements of a wide variety of biological systems and microstructures become technically feasible. Additionally, an effective method of removing phase background curvature based on slowly varying polynomial fitting is proposed. This method allows accurate volume measurements of several small objects within the same image frame.
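The beat-wavelength combination and the polynomial background subtraction can be sketched as follows; the reflection-geometry scale factor and the least-squares background model are assumptions, not the authors' exact processing chain.

```python
import numpy as np

def dual_wavelength_height(phi1, phi2, lam1, lam2):
    """Combine two single-wavelength wrapped phase maps into a height map
    on the beat (synthetic) wavelength. Assumes a reflection geometry
    (height = phase * Lambda / (4*pi)); use 2*pi for transmission."""
    lam_beat = lam1 * lam2 / abs(lam1 - lam2)
    dphi = np.mod(phi1 - phi2, 2 * np.pi)       # wrapped onto [0, 2*pi)
    return dphi * lam_beat / (4 * np.pi)

def remove_background(height, order=2):
    """Subtract a slowly varying polynomial background fitted by least
    squares over the whole frame (a stand-in for the paper's fitting step)."""
    ny, nx = height.shape
    y, x = np.mgrid[0:ny, 0:nx]
    cols = [x.ravel() ** i * y.ravel() ** j
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coeff, *_ = np.linalg.lstsq(A, height.ravel(), rcond=None)
    return height - (A @ coeff).reshape(ny, nx)
```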
Lu, J; Wang, L; Zhang, Y C; Tang, H T; Xia, Z F
2017-10-20
Objective: To validate the clinical effect of the three-dimensional human body scanning system BurnCalc, developed by our research team, in the evaluation of burn wound area. Methods: A total of 48 burn patients treated in the outpatient department of our unit from January to June 2015, conforming to the study criteria, were enrolled. For the first 12 patients, one wound on the limbs or torso was selected from each patient. The stability of the system was tested by 3 attending physicians using the three-dimensional human body scanning system BurnCalc to measure the area of the wounds individually. For the following 36 patients, one wound was selected from each patient, including 12 wounds on limbs, front torso, and side torso, respectively. The area of the wounds was measured by the same attending physician using the transparency tracing method, the National Institutes of Health (NIH) Image J method, and the three-dimensional human body scanning system BurnCalc, respectively. The time for getting information on the 36 wounds by the three methods was recorded by stopwatch. The stability among the testers was evaluated by the intra-class correlation coefficient (ICC). Data were processed with randomized blocks analysis of variance and Bonferroni test. Results: (1) Wound area measured by the three physicians using the three-dimensional human body scanning system BurnCalc was (122±95), (121±95), and (123±96) cm², respectively, and there was no statistically significant difference among them (F=1.55, P>0.05). The ICC among the 3 physicians was 0.999. (2) The wound area of limbs measured by the transparency tracing method, the NIH Image J method, and the three-dimensional human body scanning system BurnCalc was (84±50), (76±46), and (84±49) cm², respectively. There was no statistically significant difference in the wound area of limbs measured by the transparency tracing method and the three-dimensional human body scanning system BurnCalc (P>0.05). The wound area of limbs measured by the NIH Image J method was smaller than that measured by the transparency tracing method and the three-dimensional human body scanning system BurnCalc (with P values below 0.05). There was no statistically significant difference in the wound area of the front torso measured by the transparency tracing method, the NIH Image J method, and the three-dimensional human body scanning system BurnCalc (F=0.33, P>0.05). The wound area of the side torso measured by the transparency tracing method, the NIH Image J method, and the three-dimensional human body scanning system BurnCalc was (169±88), (150±80), and (169±86) cm², respectively. There was no statistically significant difference in the wound area of the side torso measured by the transparency tracing method and the three-dimensional human body scanning system BurnCalc (P>0.05). The wound area of the side torso measured by the NIH Image J method was smaller than that measured by the transparency tracing method and the three-dimensional human body scanning system BurnCalc (with P values below 0.05). (3) The time for getting information on the wounds by the transparency tracing method, the NIH Image J method, and the three-dimensional human body scanning system BurnCalc was (77±14), (10±3), and (9±3) s, respectively. The time for getting information on the wounds by the transparency tracing method was longer than that by the NIH Image J method and the three-dimensional human body scanning system BurnCalc (with P values below 0.05). The time for getting information on the wounds by the three-dimensional human body scanning system BurnCalc was close to that by the NIH Image J method (P>0.05). Conclusions: The three-dimensional human body scanning system BurnCalc is stable and can accurately evaluate the wound area on limbs and torso of burn patients.
Motion Estimation System Utilizing Point Cloud Registration
NASA Technical Reports Server (NTRS)
Chen, Qi (Inventor)
2016-01-01
A system and method for estimating the motion of a machine are disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.
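A minimal sketch of building an EGI from precomputed point normals is shown below; the binning scheme is arbitrary, and the segmentation and two-dimensional distribution steps of the patented method are not reproduced.

```python
import numpy as np

def extended_gaussian_image(normals, n_theta=18, n_phi=36):
    """Build a simple extended Gaussian image: a 2D histogram of unit surface
    normals binned over the sphere (polar angle theta, azimuth phi).

    normals : (N, 3) array of per-point normals (assumed precomputed,
    e.g. by a local plane fit). Bin layout and resolution are arbitrary here.
    """
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))          # [0, pi]
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)   # [0, 2*pi)
    egi, _, _ = np.histogram2d(theta, phi,
                               bins=[n_theta, n_phi],
                               range=[[0, np.pi], [0, 2 * np.pi]])
    return egi
```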
The neutron imaging diagnostic at NIF (invited).
Merrill, F E; Bower, D; Buckles, R; Clark, D D; Danly, C R; Drury, O B; Dzenitis, J M; Fatherley, V E; Fittinghoff, D N; Gallegos, R; Grim, G P; Guler, N; Loomis, E N; Lutz, S; Malone, R M; Martinson, D D; Mares, D; Morley, D J; Morgan, G L; Oertel, J A; Tregillis, I L; Volegov, P L; Weiss, P B; Wilde, C H; Wilson, D C
2012-10-01
A neutron imaging diagnostic has recently been commissioned at the National Ignition Facility (NIF). This new system is an important diagnostic tool for inertial fusion studies at the NIF for measuring the size and shape of the burning DT plasma during the ignition stage of Inertial Confinement Fusion (ICF) implosions. The imaging technique utilizes a pinhole neutron aperture, placed between the neutron source and a neutron detector. The detection system measures the two dimensional distribution of neutrons passing through the pinhole. This diagnostic has been designed to collect two images at two times. The long flight path for this diagnostic, 28 m, results in a chromatic separation of the neutrons, allowing the independently timed images to measure the source distribution for two neutron energies. Typically the first image measures the distribution of the 14 MeV neutrons and the second image of the 6-12 MeV neutrons. The combination of these two images has provided data on the size and shape of the burning plasma within the compressed capsule, as well as a measure of the quantity and spatial distribution of the cold fuel surrounding this core.
NASA Astrophysics Data System (ADS)
Tian, Biao; Liu, Yang; Xu, Shiyou; Chen, Zengping
2014-01-01
Interferometric inverse synthetic aperture radar (InISAR) imaging provides complementary information to monostatic inverse synthetic aperture radar (ISAR) imaging. This paper proposes a new InISAR imaging system for space targets based on wideband direct sampling using two antennas. The system is easy to realize in engineering because the motion trajectory of space targets can be known in advance, and it is simpler than a three-receiver configuration. In the preprocessing step, high-speed motion compensation is carried out by designing an adaptive matched filter based on the speed obtained from the narrowband information. Then, coherent processing and the keystone transform for ISAR imaging are adopted to preserve the phase history of each antenna. Through appropriate collocation of the system, image registration and phase unwrapping can be avoided. For situations where this condition is not satisfied, the influence of baseline variation is analyzed and a compensation method is adopted. The corresponding target size can be obtained by interferometric processing of the two complex ISAR images. Experimental results prove the validity of the analysis and of the three-dimensional imaging algorithm.
Image Fusion and 3D Roadmapping in Endovascular Surgery.
Jones, Douglas W; Stangenberg, Lars; Swerdlow, Nicholas J; Alef, Matthew; Lo, Ruby; Shuja, Fahad; Schermerhorn, Marc L
2018-05-21
Practitioners of endovascular surgery have historically utilized two-dimensional (2D) intraoperative fluoroscopic imaging, with intra-vascular contrast opacification, to treat complex three-dimensional (3D) pathology. Recently, major technical developments in intraoperative imaging have made image fusion techniques possible: the creation of a 3D patient-specific vascular roadmap based on preoperative imaging which aligns with intraoperative fluoroscopy, with many potential benefits. First, a 3D model is segmented from preoperative imaging, typically a CT scan. The model is then used to plan for the procedure, with placement of specific markers and storing of C-arm angles that will be used for intra-operative guidance. At the time of the procedure, an intraoperative cone-beam CT is performed and the 3D model is registered to the patient's on-table anatomy. Finally, the system is used for live guidance where the 3D model is codisplayed overlying fluoroscopic images. Copyright © 2018. Published by Elsevier Inc.
Hand-held optoacoustic probe for three-dimensional imaging of human morphology and function
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Razansky, Daniel
2014-03-01
We report on a hand-held imaging probe for real-time optoacoustic visualization of deep tissues in three dimensions. The proposed solution incorporates a two-dimensional array of ultrasonic sensors densely distributed on a spherical surface, whereas illumination is performed coaxially through a cylindrical cavity in the array. Visualization of three-dimensional tomographic data at a frame rate of 10 images per second is enabled by parallel recording of 256 time-resolved signals for each individual laser pulse along with a highly efficient GPU-based real-time reconstruction. A liquid coupling medium (water), enclosed in a transparent membrane, is used to guarantee transmission of the optoacoustically generated waves to the ultrasonic detectors. Excitation at multiple wavelengths further allows imaging spectrally distinctive tissue chromophores such as oxygenated and deoxygenated haemoglobin. The performance is showcased by video-rate tracking of deep tissue vasculature and three-dimensional measurements of blood oxygenation in a healthy human volunteer. The flexibility provided by the hand-held hardware design, combined with the real-time operation, makes the developed platform highly usable for both small animal research and clinical imaging in multiple indications, including cancer, inflammation, skin and cardiovascular diseases, and diagnostics of the lymphatic system and breast.
X-ray system simulation software tools for radiology and radiography education.
Kengyelics, Stephen M; Treadgold, Laura A; Davies, Andrew G
2018-02-01
The aim was to develop x-ray simulation software tools to support the delivery of radiological science education for a range of learning environments and audiences, including individual study, lectures, and tutorials. Two software tools were developed; one simulated x-ray production for a simple two-dimensional radiographic system geometry comprising an x-ray source, beam filter, test object and detector. The other simulated the acquisition and display of two-dimensional radiographic images of complex three-dimensional objects using a ray casting algorithm through three-dimensional mesh objects. Both tools were intended to be simple to use, produce results accurate enough to be useful for educational purposes, and have an acceptable simulation time on modest computer hardware. The radiographic factors and acquisition geometry could be altered in both tools via their graphical user interfaces. A comparison of radiographic contrast measurements of the simulators to a real system was performed. The contrast output of the simulators had excellent agreement with measured results. The software simulators were deployed to 120 computers on campus. The software tools developed are easy to use, clearly demonstrate important x-ray physics and imaging principles, are accessible within a standard University setting and could be used to enhance the teaching of x-ray physics to undergraduate students. Current approaches to teaching x-ray physics in radiological science lack immediacy when linking theory with practice. This method of delivery allows students to engage with the subject in an experiential learning environment. Copyright © 2017. Published by Elsevier Ltd.
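For orientation, a toy Beer-Lambert contrast calculation of the kind such a teaching tool might expose is sketched below; it is mono-energetic and scatter-free, far simpler than the tools described, and the attenuation values are illustrative only.

```python
import numpy as np

def radiographic_contrast(mu_bg, mu_detail, t_bg, t_detail, i0=1.0):
    """Toy Beer-Lambert contrast for a detail of thickness t_detail embedded
    in a background of thickness t_bg (mono-energetic beam, no scatter).

    mu_* in 1/cm, t_* in cm.
    """
    i_bg = i0 * np.exp(-mu_bg * t_bg)
    i_detail = i0 * np.exp(-(mu_bg * (t_bg - t_detail) + mu_detail * t_detail))
    return (i_bg - i_detail) / i_bg

# e.g. a 0.5 cm bone-like detail (mu ~ 0.5/cm) in 10 cm of water-like tissue
# (mu ~ 0.2/cm): contrast = 1 - exp(-(0.5 - 0.2) * 0.5) ~ 0.14
print(radiographic_contrast(0.2, 0.5, 10.0, 0.5))
```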
Two-dimensional imaging of sprays with fluorescence, lasing, and stimulated Raman scattering.
Serpengüzel, A; Swindal, J C; Chang, R K; Acker, W P
1992-06-20
Two-dimensional fluorescence, lasing, and stimulated Raman scattering images of a hollow-cone nozzle spray are observed. The various constituents of the spray, such as vapor, liquid ligaments, small droplets, and large droplets, are distinguished by selectively imaging different colors associated with the inelastic light-scattering processes.
The use of global image characteristics for neural network pattern recognitions
NASA Astrophysics Data System (ADS)
Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.
2017-04-01
A recognition system is considered in which the information is conveyed by images of symbols generated by a television camera. Coefficients of a two-dimensional Fourier transformation, generated in a special way, are used as object descriptors. The classification task is solved with a one-layer neural network trained on reference images. Fast learning of the neural network, with a single calculation of the neuron coefficients, is applied.
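Since the abstract does not detail how the Fourier coefficients are generated, the sketch below uses low-frequency 2D FFT magnitudes as generic descriptors and a single-layer (softmax) classifier as a stand-in for the one-layer network; the training data names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fft_descriptors(images, k=8):
    """Low-frequency 2D Fourier magnitude descriptors (a generic choice;
    the paper generates its coefficients 'in a special way' not detailed
    in the abstract). images: (N, H, W) grayscale symbol images."""
    feats = []
    for img in images:
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
        feats.append(spec[cy - k:cy + k, cx - k:cx + k].ravel())
    return np.array(feats)

# single-layer classifier trained on reference symbol images (assumed data):
# clf = LogisticRegression(max_iter=1000).fit(fft_descriptors(X_train), y_train)
# pred = clf.predict(fft_descriptors(X_test))
```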
Turbulence imaging and applications using beam emission spectroscopy on DIII-D (invited)
NASA Astrophysics Data System (ADS)
McKee, G. R.; Fenzi, C.; Fonck, R. J.; Jakubowski, M.
2003-03-01
Two-dimensional measurements of density fluctuations are obtained in the radial and poloidal plane of the DIII-D tokamak with the Beam Emission Spectroscopy (BES) diagnostic system. The goals are to visualize the spatial structure and time evolution of turbulent eddies, as well as to obtain the 2D statistical properties of turbulence. The measurements are obtained with an array of localized BES spatial channels configured to image a midplane region of the plasma. 32 channels have been deployed, each with a spatial resolution of about 1 cm in the radial and poloidal directions, thus providing measurements of turbulence in the wave number range 0
Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging
NASA Astrophysics Data System (ADS)
Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.
2017-03-01
A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains, two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths which had greater significance in detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic and Mahalanobis statistical discriminant models to differentiate between sterile control kernels, five concentration levels of OTA contamination in wheat kernels, and five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between five concentration levels of OTA-contaminated wheat kernels and between five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels with a correct classification of more than 98%. The non-OTA-producing P. verrucosum inoculated wheat kernels and OTA-contaminated wheat kernels subjected to hyperspectral imaging showed different spectral patterns.
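A skeleton of the reshape-then-PCA step is sketched below; selecting key wavelengths from the largest first-component loadings is a simple stand-in for the procedure actually used, and the downstream discriminant step is only indicated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def key_wavelengths(hypercube, n_components=3, top_k=5):
    """Reshape an (x, y, bands) hypercube to (pixels, bands), run PCA, and
    return the band indices with the largest absolute loadings on the first
    component -- a simple stand-in for the key-wavelength selection step."""
    x, y, b = hypercube.shape
    X = hypercube.reshape(x * y, b)
    pca = PCA(n_components=n_components).fit(X)
    return np.argsort(np.abs(pca.components_[0]))[::-1][:top_k]

# features extracted at the key wavelengths (e.g. per-kernel mean reflectance)
# could then feed a linear discriminant model (assumed training data):
# lda = LinearDiscriminantAnalysis().fit(train_features, train_labels)
```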
Investigation of Layer Structure of the Takamatsuzuka Mural Paintings by Terahertz Imaging Technique
NASA Astrophysics Data System (ADS)
Inuzuka, M.; Kouzuma, Y.; Sugioka, N.; Fukunaga, K.; Tateishi, T.
2017-04-01
Terahertz imaging can be a powerful tool in conservation science for cultural heritages. In this study, a new terahertz imaging system was applied to the Takamatsuzuka mural painting of a blue dragon, and the condition of the plaster layer was diagnosed. As a result, the locations where the plaster layer appears solid on the surface but in actuality may have peeled off the underlying tuff stone were revealed and viewed as two-dimensional images.
Huang, David; Swanson, Eric A.; Lin, Charles P.; Schuman, Joel S.; Stinson, William G.; Chang, Warren; Hee, Michael R.; Flotte, Thomas; Gregory, Kenton; Puliafito, Carmen A.; Fujimoto, James G.
2015-01-01
A technique called optical coherence tomography (OCT) has been developed for noninvasive cross-sectional imaging in biological systems. OCT uses low-coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures in a way that is analogous to ultrasonic pulse-echo imaging. OCT has longitudinal and lateral spatial resolutions of a few micrometers and can detect reflected signals as small as ~10⁻¹⁰ of the incident optical power. Tomographic imaging is demonstrated in vitro in the peripapillary area of the retina and in the coronary artery, two clinically relevant examples that are representative of transparent and turbid media, respectively. PMID:1957169
NASA Technical Reports Server (NTRS)
Winston, R.; Welford, W. T.
1980-01-01
The paper discusses the paraboloidal mirror as a tracking solar concentrator, fitting a nonimaging second stage to the paraboloidal mirror, other image-forming systems as first stages, and tracking systems in two-dimensional geometry. Because of inherent aberrations, the paraboloidal mirror cannot achieve the thermodynamic limit. It is shown how paraboloidal mirrors of short focal ratio and similar systems can have their flux concentration enhanced to near the thermodynamic limit by the addition of nonimaging compound elliptical concentrators.
Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho
2004-12-01
A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.
Shankar, Hariharan; Reddy, Sapna
2012-07-01
Ultrasound imaging has gained acceptance in pain management interventions. Features of myofascial pain syndrome have been explored using ultrasound imaging and elastography. There is a paucity of reports showing the benefit clinically. This report provides three-dimensional features of taut bands and highlights the advantages of using two-dimensional ultrasound imaging to improve targeting of taut bands in deeper locations. A fifty-eight-year-old man with pain and decreased range of motion of the right shoulder was referred for further management of pain above the scapula after having failed conservative management for myofascial pain syndrome. Three-dimensional ultrasound images provided evidence of aberrancy in the architecture of the muscle fascicles around the taut bands compared to the adjacent normal muscle tissue during serial sectioning of the accrued image. On two-dimensional ultrasound imaging over the palpated taut band, areas of hyperechogenicity were visualized in the trapezius and supraspinatus muscles. Subsequently, the patient received ultrasound-guided real-time lidocaine injections to the trigger points with successful resolution of symptoms. This is a successful demonstration of the utility of ultrasound imaging of taut bands in the management of myofascial pain syndrome. The utility of this imaging modality in myofascial pain syndrome requires further clinical validation. Wiley Periodicals, Inc.
Resolution enhancement using simultaneous couple illumination
NASA Astrophysics Data System (ADS)
Hussain, Anwar; Martínez Fuentes, José Luis
2016-10-01
A super-resolution technique based on structured illumination created by a liquid crystal on silicon spatial light modulator (LCOS-SLM) is presented. Single and simultaneous pairs of tilted beams are generated to illuminate a target object. Resolution enhancement of an optical 4f system is demonstrated by using numerical simulations. The resulting intensity images are recorded at a charge-coupled device (CCD) and stored in the computer memory for further processing. One-dimensional enhancement can be performed with only 15 images; complete two-dimensional improvement requires 153 different images. The resolution of the optical system is extended three times compared to the band-limited system.
System and method for progressive band selection for hyperspectral images
NASA Technical Reports Server (NTRS)
Fisher, Kevin (Inventor)
2013-01-01
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for progressive band selection for hyperspectral images. A system having a module configured to control a processor to practice the method calculates the virtual dimensionality of a hyperspectral image having multiple bands to determine a quantity Q of how many bands are needed for a threshold level of information, ranks each band based on a statistical measure, selects Q bands from the multiple bands to generate a subset of bands based on the virtual dimensionality, and generates a reduced image based on the subset of bands. This approach can create reduced datasets of full hyperspectral images tailored for individual applications. The system uses a metric specific to a target application to rank the image bands, and then selects the most useful bands. The number of bands selected can be specified manually or calculated from the hyperspectral image's virtual dimensionality.
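A toy version of the band-ranking and selection step is sketched below, using per-band variance as a placeholder for the application-specific metric and taking Q as given by the virtual-dimensionality estimate.

```python
import numpy as np

def select_bands(hypercube, q, metric=np.var):
    """Rank every band of an (x, y, bands) hypercube with a per-band metric
    (variance here, as a placeholder for the application-specific measure)
    and keep the top q bands; q would come from the virtual-dimensionality
    estimate in the method described above."""
    x, y, b = hypercube.shape
    scores = np.array([metric(hypercube[:, :, i]) for i in range(b)])
    keep = np.sort(np.argsort(scores)[::-1][:q])
    return hypercube[:, :, keep], keep
```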
Visidep (TM): A Three-Dimensional Imaging System For The Unaided Eye
NASA Astrophysics Data System (ADS)
McLaurin, A. Porter; Jones, Edwin R.; Cathey, LeConte
1984-05-01
The VISIDEP process for creating images in three dimensions on flat screens is suitable for photographic, electrographic and computer-generated imaging systems. Procedures for generating these images vary from medium to medium due to the specific requirements of each technology. Imaging requirements for photographic and electrographic media are more directly tied to the hardware than are computer-based systems. Applications of these technologies are not limited to entertainment, but have implications for training, interactive computer/video systems, medical imaging, and inspection equipment. Through minor modification the system can provide three-dimensional images with accurately measurable relationships for robotics and adds this factor for future developments in artificial intelligence. In almost any area requiring image analysis or critical review, VISIDEP provides the added advantage of three-dimensionality. All of this is readily accomplished without aids to the human eye. The system can be viewed in full color, false-color infrared, and monochromatic modalities from any angle and is also viewable with a single eye. Thus, the potential of application for this developing system is extensive and covers the broad spectrum of human endeavor from entertainment to scientific study.
Dental magnetic resonance imaging: making the invisible visible.
Idiyatullin, Djaudat; Corum, Curt; Moeller, Steen; Prasad, Hari S; Garwood, Michael; Nixdorf, Donald R
2011-06-01
Clinical dentistry is in need of noninvasive and accurate diagnostic methods to better evaluate dental pathosis. The purpose of this work was to assess the feasibility of a recently developed magnetic resonance imaging (MRI) technique, called SWeep Imaging with Fourier Transform (SWIFT), to visualize dental tissues. Three in vitro teeth, representing a limited range of clinical conditions of interest, were imaged using a 9.4T system with scanning times ranging from 100 seconds to 25 minutes. In vivo imaging of a subject was performed using a 4T system with a 10-minute scanning time. SWIFT images were compared with traditional two-dimensional radiographs, three-dimensional cone-beam computed tomography (CBCT) scanning, a gradient-echo MRI technique, and histological sections. A resolution of 100 μm was obtained from in vitro teeth. SWIFT also identified the presence and extent of dental caries and fine structures of the teeth, including cracks and accessory canals, which are not visible with existing clinical radiography techniques. Intraoral positioning of the radiofrequency coil produced initial images of multiple adjacent teeth at a resolution of 400 μm. SWIFT MRI offers simultaneous three-dimensional hard- and soft-tissue imaging of teeth without the use of ionizing radiation. Furthermore, it has the potential to image minute dental structures within clinically relevant scanning times. This technology has implications for endodontists because it offers a potential method to longitudinally evaluate teeth where pulp and root structures have been regenerated. Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Topics in the two-dimensional sampling and reconstruction of images. [in remote sensing
NASA Technical Reports Server (NTRS)
Schowengerdt, R.; Gray, S.; Park, S. K.
1984-01-01
Mathematical analysis of image sampling and interpolative reconstruction is summarized and extended to two dimensions for application to data acquired from satellite sensors such as the Thematic Mapper and SPOT. It is shown that sample-scene phase influences the reconstruction of sampled images, adds a considerable blur to the average system point spread function, and decreases the average system modulation transfer function. It is also determined that the parametric bicubic interpolator with alpha = -0.5 is more radiometrically accurate than the conventional bicubic interpolator with alpha = -1, and this at no additional cost. Finally, the parametric bicubic interpolator is found to be suitable for adaptive implementation by relating the alpha parameter to the local frequency content of an image.
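The parametric interpolator referred to above is commonly written as the cubic-convolution kernel with free parameter alpha; the sketch below gives that kernel in Keys' form (alpha = -0.5 and alpha = -1 being the two cases compared), which is applied separably in x and y for bicubic interpolation.

```python
import numpy as np

def cubic_kernel(s, alpha=-0.5):
    """Parametric cubic-convolution interpolation kernel (Keys' form).
    alpha = -0.5 and alpha = -1 correspond to the two interpolators compared
    above; bicubic interpolation applies this kernel separably in x and y."""
    s = np.abs(np.asarray(s, dtype=float))
    w = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    w[m1] = (alpha + 2) * s[m1] ** 3 - (alpha + 3) * s[m1] ** 2 + 1
    w[m2] = alpha * (s[m2] ** 3 - 5 * s[m2] ** 2 + 8 * s[m2] - 4)
    return w
```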
Imaging two-dimensional mechanical waves of skeletal muscle contraction.
Grönlund, Christer; Claesson, Kenji; Holtermann, Andreas
2013-02-01
Skeletal muscle contraction is related to rapid mechanical shortening and thickening. Recently, specialized ultrasound systems have been applied to demonstrate and quantify transient tissue velocities and one-dimensional (1-D) propagation of mechanical waves during muscle contraction. Such waves could potentially provide novel information on musculoskeletal characteristics, function and disorders. In this work, we demonstrate two-dimensional (2-D) mechanical wave imaging following the skeletal muscle contraction. B-mode image acquisition during multiple consecutive electrostimulations, speckle-tracking and a time-stamp sorting protocol were used to obtain 1.4 kHz frame rate 2-D tissue velocity imaging of the biceps brachii muscle contraction. The results present novel information on tissue velocity profiles and mechanical wave propagation. In particular, counter-propagating compressional and shear waves in the longitudinal direction were observed in the contracting tissue (speed 2.8-4.4 m/s) and a compressional wave in the transverse direction of the non-contracting muscle tissue (1.2-1.9 m/s). In conclusion, analysing transient 2-D tissue velocity allows simultaneous assessment of both active and passive muscle tissue properties. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Meter-Scale 3-D Models of the Martian Surface from Combining MOC and MOLA Data
NASA Technical Reports Server (NTRS)
Soderblom, Laurence A.; Kirk, Randolph L.
2003-01-01
We have extended our previous efforts to derive, through controlled photoclinometry, accurate, calibrated, high-resolution topographic models of the Martian surface. The process involves combining MGS MOLA topographic profiles and MGS MOC Narrow Angle images. The earlier work utilized, along with a particular MOC NA image, the MOLA topographic profile that was acquired simultaneously, in order to derive photometric and scattering properties of the surface and atmosphere so as to force the low spatial frequencies of a one-dimensional MOC photoclinometric model to match the MOLA profile. Both that work and the new results reported here depend heavily on successful efforts to: 1) refine the radiometric calibration of the MOC NA; 2) register the MOC to MOLA coordinate systems and refine the pointing; and 3) provide the ability to project simultaneously acquired MOC and MOLA data into a common coordinate system with a single set of SPICE kernels, utilizing the USGS ISIS cartographic image processing tools. The approach described in this paper extends the MOC-MOLA integration and cross-calibration procedures from one-dimensional profiles to full two-dimensional photoclinometry and image simulations. Included are methods to account for low-frequency albedo variations within the scene.
NASA Technical Reports Server (NTRS)
Dorosz, Jennifer L.; Bolson, Edward L.; Waiss, Mary S.; Sheehan, Florence H.
2003-01-01
Three-dimensional guidance programs have been shown to increase the reproducibility of 2-dimensional (2D) left ventricular volume calculations, but these systems have not been tested in 2D measurements of the right ventricle. Using magnetic fields to identify the probe location, we developed a new 3-dimensional guidance system that displays the line of intersection, the plane of intersection, and the numeric angle of intersection between the current image plane and previously saved scout views. When used by both an experienced and an inexperienced sonographer, this guidance system increases the accuracy of the 2D right ventricular volume measurements using a monoplane pyramidal model. Furthermore, a reconstruction of the right ventricle, with a computed volume similar to the calculated 2D volume, can be displayed quickly by tracing a few anatomic structures on 2D scans.
NASA Astrophysics Data System (ADS)
Nurge, Mark A.
2007-05-01
An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
Electrical capacitance volume tomography of high contrast dielectrics using a cuboid geometry
NASA Astrophysics Data System (ADS)
Nurge, Mark A.
An Electrical Capacitance Volume Tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 x 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This dissertation presents a method of reconstructing images of high contrast dielectric materials using only the self capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
How 3D immersive visualization is changing medical diagnostics
NASA Astrophysics Data System (ADS)
Koning, Anton H. J.
2011-03-01
Originally the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. However, in this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics and (bio)medical research.
NASA Astrophysics Data System (ADS)
Kazanskiy, Nikolay; Protsenko, Vladimir; Serafimovich, Pavel
2016-03-01
This research article describes an experiment implementing an image filtering task in the Apache Storm and IBM InfoSphere Streams stream data processing systems. The aim of the presented research is to show that these technologies can be used effectively for sliding-window filtering of image sequences. The analysis of execution focused on two parameters: throughput and memory consumption. Profiling was performed on CentOS operating systems running on two virtual machines for each system. The experimental results showed that IBM InfoSphere Streams has about a 1.5 to 13.5 times lower memory footprint than Apache Storm, but can be about 2.0 to 2.5 times slower on real hardware.
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three dimensional digital image correlation (3D-DIC) has been widely used by industry to measure the 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, similarly to the conventional 3D-DIC system, the center of the two views stands in the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated by the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
Yu, Zeyun; Holst, Michael J.; Hayashi, Takeharu; Bajaj, Chandrajit L.; Ellisman, Mark H.; McCammon, J. Andrew; Hoshijima, Masahiko
2009-01-01
A general framework of image-based geometric processing is presented to bridge the gap between three-dimensional (3D) imaging that provides structural details of a biological system and mathematical simulation where high-quality surface or volumetric meshes are required. A 3D density map is processed in the order of image pre-processing (contrast enhancement and anisotropic filtering), feature extraction (boundary segmentation and skeletonization), and high-quality and realistic surface (triangular) and volumetric (tetrahedral) mesh generation. While the tool-chain described is applicable to general types of 3D imaging data, the performance is demonstrated specifically on membrane-bound organelles in ventricular myocytes that are imaged and reconstructed with electron microscopic (EM) tomography and two-photon microscopy (T-PM). Of particular interest in this study are two types of membrane-bound Ca2+-handling organelles, namely, transverse tubules (T-tubules) and junctional sarcoplasmic reticulum (jSR), both of which play an important role in regulating the excitation-contraction (E-C) coupling through dynamic Ca2+ mobilization in cardiomyocytes. PMID:18835449
Yu, Zeyun; Holst, Michael J; Hayashi, Takeharu; Bajaj, Chandrajit L; Ellisman, Mark H; McCammon, J Andrew; Hoshijima, Masahiko
2008-12-01
A general framework of image-based geometric processing is presented to bridge the gap between three-dimensional (3D) imaging that provides structural details of a biological system and mathematical simulation where high-quality surface or volumetric meshes are required. A 3D density map is processed in the order of image pre-processing (contrast enhancement and anisotropic filtering), feature extraction (boundary segmentation and skeletonization), and high-quality and realistic surface (triangular) and volumetric (tetrahedral) mesh generation. While the tool-chain described is applicable to general types of 3D imaging data, the performance is demonstrated specifically on membrane-bound organelles in ventricular myocytes that are imaged and reconstructed with electron microscopic (EM) tomography and two-photon microscopy (T-PM). Of particular interest in this study are two types of membrane-bound Ca(2+)-handling organelles, namely, transverse tubules (T-tubules) and junctional sarcoplasmic reticulum (jSR), both of which play an important role in regulating the excitation-contraction (E-C) coupling through dynamic Ca(2+) mobilization in cardiomyocytes.
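The processing order described above (pre-processing, feature extraction, mesh generation) can be mimicked with generic tools. The sketch below is only a simplified stand-in, assuming NumPy, SciPy and scikit-image are available: Gaussian smoothing replaces the anisotropic filtering, a global threshold replaces the boundary segmentation, and marching cubes produces the triangular surface mesh (tetrahedral meshing and skeletonization are omitted).

import numpy as np
from scipy import ndimage
from skimage import exposure, measure

def density_map_to_mesh(volume, iso_level=0.5, smooth_sigma=1.0):
    """Toy version of the pipeline: enhance contrast, smooth, segment by a
    global threshold, then extract a triangular surface mesh."""
    # 1. Pre-processing: intensity rescaling plus Gaussian smoothing (a
    #    stand-in for the anisotropic filtering used in the paper).
    vol = exposure.rescale_intensity(volume.astype(float), out_range=(0, 1))
    vol = ndimage.gaussian_filter(vol, sigma=smooth_sigma)
    # 2. Feature extraction: simple boundary segmentation by thresholding.
    mask = vol > iso_level
    # 3. Surface mesh generation (marching cubes gives vertices + triangles);
    #    tetrahedral meshing would require a dedicated mesher such as TetGen.
    verts, faces, normals, values = measure.marching_cubes(vol, level=iso_level)
    return mask, verts, faces

# Example on a synthetic 3-D density map (a blurred sphere).
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
phantom = (np.sqrt(x**2 + y**2 + z**2) < 10).astype(float)
mask, verts, faces = density_map_to_mesh(phantom)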
Atomic Force Microscopy Based Cell Shape Index
NASA Astrophysics Data System (ADS)
Adia-Nimuwa, Usienemfon; Mujdat Tiryaki, Volkan; Hartz, Steven; Xie, Kan; Ayres, Virginia
2013-03-01
Stellation is a measure of cell physiology and pathology for several cell groups including neural, liver and pancreatic cells. In the present work, we compare the results of a conventional two-dimensional shape index study of both atomic force microscopy (AFM) and fluorescent microscopy images with the results obtained using a new three-dimensional AFM-based shape index similar to the sphericity index. The stellation of astrocytes is investigated on nanofibrillar scaffolds composed of electrospun polyamide nanofibers, which have demonstrated promise for central nervous system (CNS) repair. Recent work by our group has given us the ability to clearly segment the cells from nanofibrillar scaffolds in AFM images. The clear-featured AFM images indicated that the astrocyte processes were longer than previously identified at 24 h. It was furthermore shown that cell spreading could vary significantly as a function of environmental parameters, and that AFM images could record these variations. The new three-dimensional AFM-based shape index incorporates the new information: longer stellate processes and cell spreading. The support of NSF PHY-095776 is acknowledged.
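A sphericity-style index can be computed directly from an AFM height map, which is the general idea behind the three-dimensional shape index mentioned above. The sketch below is a hypothetical illustration and not the authors' exact definition: the volume is integrated under the height map, the surface area is estimated from the local slopes, and the classical sphericity formula is applied.

import numpy as np

def afm_shape_index(height, pixel_size=1.0):
    """Compute a sphericity-style index, psi = pi**(1/3)*(6V)**(2/3)/A,
    from an AFM height map of a segmented cell (background pixels = 0).
    Illustrative only; the paper's exact index definition may differ."""
    h = np.asarray(height, dtype=float)
    dA = pixel_size ** 2
    # Volume under the height map.
    V = h.sum() * dA
    # Top-surface area from the local slopes: sum of sqrt(1 + hx^2 + hy^2) dA.
    hy, hx = np.gradient(h, pixel_size)
    A_top = np.sqrt(1.0 + hx**2 + hy**2).sum() * dA
    # Add the footprint (contact) area so a flat, spread cell scores low.
    A = A_top + (h > 0).sum() * dA
    return np.pi ** (1.0 / 3.0) * (6.0 * V) ** (2.0 / 3.0) / A

# A tall, dome-like cell scores closer to 1 than a flat, stellate one.
yy, xx = np.mgrid[-32:32, -32:32]
dome = np.clip(20.0 - 0.02 * (xx**2 + yy**2), 0, None)
print(afm_shape_index(dome, pixel_size=0.1))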
Image recovery from defocused 2D fluorescent images in multimodal digital holographic microscopy.
Quan, Xiangyu; Matoba, Osamu; Awatsuji, Yasuhiro
2017-05-01
A technique of three-dimensional (3D) intensity retrieval from defocused, two-dimensional (2D) fluorescent images in multimodal digital holographic microscopy (DHM) is proposed. In the multimodal DHM, 3D phase and 2D fluorescence distributions are obtained simultaneously by an integrated system of an off-axis DHM and a conventional epifluorescence microscope, respectively. This gives us more information about the target; however, defocused fluorescent images are observed due to the short depth of field. In this Letter, we propose a method to recover the defocused images based on phase compensation and backpropagation from the defocused plane to the focused plane, using the distance information obtained from the 3D phase distribution. By applying Zernike polynomial phase correction, we brought the fluorescence intensity back to the focused imaging planes. The experimental demonstration using fluorescent beads is presented, and expected applications are suggested.
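The numerical refocusing step, propagating the defocused fluorescence field back toward the focal plane over the distance taken from the phase channel, can be sketched with the standard angular-spectrum method. The code below is a generic illustration under simplifying assumptions (scalar field, known defocus distance); the Zernike-based phase compensation described in the Letter is not reproduced.

import numpy as np

def angular_spectrum_propagate(field, distance, wavelength, pixel_size):
    """Propagate a complex 2-D field by `distance` (negative = back-propagate
    toward the focal plane) with the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Propagating-wave transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi * distance / wavelength * np.sqrt(np.clip(arg, 0, None)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: refocus a defocused fluorescence amplitude image by -20 micrometres,
# using a defocus distance assumed to come from the holographic phase channel.
img = np.random.rand(256, 256)              # stand-in for the defocused image
refocused = np.abs(angular_spectrum_propagate(img, -20e-6, 520e-9, 0.2e-6))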
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and image processing technique for motion measurement of a lathe tool from two-dimensional sequential images, captured using a charge-coupled device camera with a resolution of 250 microns, is described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of the errors due to the machine vision system, calibration, environmental factors, etc., in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show better capability of AIS over PSO.
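Either of the soft-computing techniques named above can be driven by such an error-minimization problem; a minimal particle swarm optimizer is sketched below in Python. The correction model (a linear mapping from observed to true displacement) and all numbers are hypothetical and serve only to show how the optimizer would be used.

import numpy as np

def pso_minimize(cost, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a low-dimensional error model."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Hypothetical example: fit a linear correction d_true ~ a*d_observed + b.
d_obs = np.array([1.02, 2.05, 3.11, 4.08])      # mm, from image analysis
d_true = np.array([1.00, 2.00, 3.00, 4.00])     # mm, from a reference scale
cost = lambda p: np.sum((p[0] * d_obs + p[1] - d_true) ** 2)
best, err = pso_minimize(cost, bounds=[(0.5, 1.5), (-0.5, 0.5)])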
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
Hyperspectral Fluorescence and Reflectance Imaging Instrument
NASA Technical Reports Server (NTRS)
Ryan, Robert E.; O'Neal, S. Duane; Lanoue, Mark; Russell, Jeffrey
2008-01-01
The system is a single hyperspectral imaging instrument that has the unique capability to acquire both fluorescence and reflectance high-spatial-resolution data that is inherently spatially and spectrally registered. Potential uses of this instrument include plant stress monitoring, counterfeit document detection, biomedical imaging, forensic imaging, and general materials identification. Until now, reflectance and fluorescence spectral imaging have been performed by separate instruments. Neither a reflectance spectral image nor a fluorescence spectral image alone yields as much information about a target surface as does a combination of the two modalities. Before this system was developed, to benefit from this combination, analysts needed to perform time-consuming post-processing efforts to co-register the reflective and fluorescence information. With this instrument, the inherent spatial and spectral registration of the reflectance and fluorescence images minimizes the need for this post-processing step. The main challenge for this technology is to detect the fluorescence signal in the presence of a much stronger reflectance signal. To meet this challenge, the instrument modulates artificial light sources from ultraviolet through the visible to the near-infrared part of the spectrum; in this way, both the reflective and fluorescence signals can be measured through differencing processes to optimize fluorescence and reflectance spectra as needed. The main functional components of the instrument are a hyperspectral imager, an illumination system, and an image-plane scanner. The hyperspectral imager is a one-dimensional (line) imaging spectrometer that includes a spectrally dispersive element and a two-dimensional focal plane detector array. The spectral range of the current imaging spectrometer is between 400 and 1,000 nm, and the wavelength resolution is approximately 3 nm. The illumination system consists of narrowband blue, ultraviolet, and other discrete wavelength light-emitting-diode (LED) sources and white-light LED sources designed to produce consistently spatially stable light. White LEDs provide illumination for the measurement of reflectance spectra, while narrowband blue and UV LEDs are used to excite fluorescence. Each spectral type of LED can be turned on or off depending on the specific remote-sensing process being performed. Uniformity of illumination is achieved by using an array of LEDs and/or an integrating sphere or other diffusing surface. The image plane scanner uses a fore optic with a field of view large enough to provide an entire scan line on the image plane. It builds up a two-dimensional image in pushbroom fashion as the target is scanned across the image plane either by moving the object or moving the fore optic. For fluorescence detection, spectral filtering of a narrowband light illumination source is sometimes necessary to minimize the interference of the source spectrum wings with the fluorescence signal. Spectral filtering is achieved with optical interference filters and absorption glasses. This dual spectral imaging capability will enable the optimization of reflective, fluorescence, and fused datasets as well as a cost-effective design for multispectral imaging solutions. This system has been used in plant stress detection studies and in currency analysis.
Computer-generated 3D ultrasound images of the carotid artery
NASA Technical Reports Server (NTRS)
Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.
1989-01-01
A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.
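The compounding step described above, placing each B-mode pixel at its proper spatial position in a three-dimensional array using the six recorded transducer coordinates, can be sketched as follows. The rotation-order convention and the nearest-neighbour insertion are assumptions made purely for illustration; the actual system's pose convention and interpolation may differ.

import numpy as np

def pose_matrix(x, y, z, azimuth, elevation, roll):
    """Homogeneous transform from B-mode image coordinates to the 3-D volume,
    built from the six recorded transducer coordinates (angles in radians).
    The rotation order used here is an assumption."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[ce, 0, se], [0, 1, 0], [-se, 0, ce]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def insert_bmode(volume, image, pose, pixel_mm, voxel_mm):
    """Drop each B-mode pixel into its voxel (nearest-neighbour compounding)."""
    rows, cols = np.indices(image.shape)
    pts = np.stack([cols.ravel() * pixel_mm, rows.ravel() * pixel_mm,
                    np.zeros(image.size), np.ones(image.size)])
    xyz = (pose @ pts)[:3] / voxel_mm
    idx = np.round(xyz).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = image.ravel()[ok]

volume = np.zeros((128, 128, 128), dtype=np.uint8)
frame = np.random.randint(0, 255, (64, 96), dtype=np.uint8)
insert_bmode(volume, frame, pose_matrix(20, 30, 10, 0.1, 0.05, 0.2), 0.3, 0.5)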
Automatic Reconstruction of Spacecraft 3D Shape from Imagery
NASA Astrophysics Data System (ADS)
Poelman, C.; Radtke, R.; Voorhees, H.
We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
Li, Chengshuai; Chen, Shichao; Klemba, Michael; Zhu, Yizheng
2016-09-01
A dual-modality birefringence/phase imaging system is presented. The system features a crystal retarder that provides polarization mixing and generates two interferometric carrier waves in a single signal spectrum. The retardation and orientation of sample birefringence can then be measured simultaneously based on spectral multiplexing interferometry. Further, with the addition of a Nomarski prism, the same setup can be used for quantitative differential interference contrast (DIC) imaging. Sample phase can then be obtained with two-dimensional integration. In addition, birefringence-induced phase error can be corrected using the birefringence data. This dual-modality approach is analyzed theoretically with Jones calculus and validated experimentally with malaria-infected red blood cells. The system generates not only corrected DIC and phase images, but a birefringence map that highlights the distribution of hemozoin crystals.
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1986-01-01
Directional ocean wave spectra were derived from Shuttle Imaging Radar (SIR-B) imagery in regions where nearly simultaneous aircraft-based measurements of the wave spectra were also available as part of the NASA Shuttle Mission 41G experiments. The SIR-B response to a coherently speckled scene is used to estimate the stationary system transfer function in the 15 even terms of an eighth-order two-dimensional polynomial. Surface elevation contours are assigned to SIR-B ocean scenes Fourier filtered using an empirical model of the modulation transfer function calibrated with independent measurements of wave height. The empirical measurements of the wave height distribution are illustrated for a variety of sea states.
Three-dimensional radar imaging techniques and systems for near-field applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheen, David M.; Hall, Thomas E.; McMakin, Douglas L.
2016-05-12
The Pacific Northwest National Laboratory has developed three-dimensional holographic (synthetic aperture) radar imaging techniques and systems for a wide variety of near-field applications. These applications include radar cross-section (RCS) imaging, personnel screening, standoff concealed weapon detection, concealed threat detection, through-barrier imaging, ground penetrating radar (GPR), and non-destructive evaluation (NDE). Sequentially-switched linear arrays are used for many of these systems to enable high-speed data acquisition and 3-D imaging. In this paper, the techniques and systems will be described along with imaging results that demonstrate the utility of near-field 3-D radar imaging for these compelling applications.
Three dimensional scattering center imaging techniques
NASA Technical Reports Server (NTRS)
Younger, P. R.; Burnside, W. D.
1991-01-01
Two methods to image scattering centers in 3-D are presented. The first method uses 2-D images generated from Inverse Synthetic Aperture Radar (ISAR) measurements taken by two vertically offset antennas. This technique is shown to provide accurate 3-D imaging capability which can be added to an existing ISAR measurement system, requiring only the addition of a second antenna. The second technique uses target impulse responses generated from wideband radar measurements from three slightly different offset antennas. This technique is shown to identify the dominant scattering centers on a target in nearly real time. The number of measurements required to image a target using this technique is very small relative to traditional imaging techniques.
NASA Astrophysics Data System (ADS)
Hoffmann, A.; Zimmermann, F.; Scharr, H.; Krömker, S.; Schulz, C.
2005-01-01
A laser-based technique for measuring instantaneous three-dimensional species concentration distributions in turbulent flows is presented. The laser beam from a single laser is formed into two crossed light sheets that illuminate the area of interest. The laser-induced fluorescence (LIF) signal emitted from excited species within both planes is detected with a single camera via a mirror arrangement. Image processing enables the reconstruction of the three-dimensional data set in close proximity to the cutting line of the two light sheets. Three-dimensional intensity gradients are computed and compared to the two-dimensional projections obtained from the two directly observed planes. Volume visualization by digital image processing gives unique insight into the three-dimensional structures within the turbulent processes. We apply this technique to measurements of toluene-LIF in a turbulent, non-reactive mixing process of toluene and air and to hydroxyl (OH) LIF in a turbulent methane-air flame upon excitation at 248 nm with a tunable KrF excimer laser.
Tan, A C; Richards, R
1989-01-01
Three-dimensional (3D) medical graphics is becoming popular in clinical use on tomographic scanners. Research work in 3D reconstructive display of computerized tomography (CT) and magnetic resonance imaging (MRI) scans on conventional computers has produced many so-called pseudo-3D images. The quality of these images depends on the rendering algorithm, the coarseness of the digitized object, the number of grey levels and the image screen resolution. CT and MRI data are fundamentally voxel based and they produce images that are coarse because of the resolution of the data acquisition system. 3D images produced by the Z-buffer depth shading technique suffer loss of detail when complex objects with fine textural detail need to be displayed. Attempts have been made to improve the display of voxel objects, and existing techniques have shown the improvement possible using these post-processing algorithms. The improved rendering technique works on the Z-buffer image to generate a shaded image using a single light source in any direction. The effectiveness of the technique in generating a shaded image has been shown to be a useful means of presenting 3D information for clinical use.
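The underlying idea of shading a Z-buffer image with a single movable light source can be illustrated with a short Lambertian-shading sketch: surface normals are estimated from the depth gradients and dotted with the light direction. This is a generic rendering of the concept, not the paper's improved post-processing algorithm.

import numpy as np

def shade_zbuffer(zbuffer, light_dir=(0.0, 0.0, 1.0), background=0.0):
    """Lambertian shading of a Z-buffer (depth) image with a single
    directional light source."""
    z = np.asarray(zbuffer, dtype=float)
    # Surface normals from the depth gradients: n ~ (-dz/dx, -dz/dy, 1).
    dzdy, dzdx = np.gradient(z)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    shaded = np.clip(normals @ light, 0.0, 1.0)      # Lambert's cosine law
    shaded[z <= 0] = background                      # empty Z-buffer pixels
    return shaded

# Example: shade a hemispherical depth map lit from the upper left.
yy, xx = np.mgrid[-64:64, -64:64]
depth = np.sqrt(np.clip(60.0**2 - xx**2 - yy**2, 0, None))
img = shade_zbuffer(depth, light_dir=(-0.5, -0.5, 1.0))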
Advanced Scintillator Detectors for Neutron Imaging in Inertial Confinement Fusion
NASA Astrophysics Data System (ADS)
Geppert-Kleinrath, Verena; Danly, Christopher; Merrill, Frank; Simpson, Raspberry; Volegov, Petr; Wilde, Carl
2016-10-01
The neutron imaging team at Los Alamos National Laboratory (LANL) has been providing two-dimensional neutron imaging of the inertial confinement fusion process at the National Ignition Facility (NIF) for over five years. Neutron imaging is a powerful tool in which position-sensitive detectors register neutrons emitted in the fusion reactions, producing a picture of the burning fuel. Recent images have revealed possible multi-dimensional asymmetries, calling for additional views to facilitate three-dimensional imaging. These will be along shorter lines of sight to stay within the existing facility at NIF. In order to field imaging capabilities equivalent to the existing system several technological challenges have to be met: high spatial resolution, high light output, and fast scintillator response to capture lower-energy neutrons, which have scattered from non-burning regions of fuel. Deuterated scintillators are a promising candidate to achieve the timing and resolution required; a systematic study of deuterated and non-deuterated polystyrene and liquid samples is currently ongoing. A test stand has been implemented to measure the response function, and preliminary data on resolution and light output have been obtained at the LANL Weapons Neutrons Research facility.
Acanthamoeba migration in an electric field.
Rudell, Jolene Chang; Gao, Jing; Sun, Yuxin; Sun, Yaohui; Chodosh, James; Schwab, Ivan; Zhao, Min
2013-06-21
We investigated the in vitro response of Acanthamoeba trophozoites to electric fields (EFs). Acanthamoeba castellanii were exposed to varying strengths of an EF. During EF exposure, cell migration was monitored using an inverted microscope equipped with a CCD camera and the SimplePCI 5.3 imaging system to capture time-lapse images. The migration of A. castellanii trophozoites was analyzed and quantified with ImageJ software. For analysis of cell migration in a three-dimensional culture system, Acanthamoeba trophozoites were cultured in agar, exposed to an EF, digitally video recorded, and analyzed at various Z focal planes. Acanthamoeba trophozoites move at random in the absence of an EF, but move directionally in response to an EF. Directedness in the absence of an EF is 0.08 ± 0.01, while in 1200 mV/mm EF, directedness is significantly higher at -0.65 ± 0.01 (P < 0.001). We find that the trophozoite migration response is voltage-dependent, with higher directionality with higher voltage application. Acanthamoeba move directionally in a three-dimensional (3D) agar system as well when exposed to an EF. Acanthamoeba trophozoites move directionally in response to an EF in a two-dimensional and 3D culture system. Acanthamoeba trophozoite migration is also voltage-dependent, with increased directionality with increasing voltage. This may provide new treatment modalities for Acanthamoeba keratitis.
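Directedness as used above is conventionally the mean cosine of the angle between each cell's net displacement and the field axis; a minimal sketch of that computation, with hypothetical track data, follows. The sign convention (anode versus cathode direction) is a labelling choice.

import numpy as np

def directedness(start_xy, end_xy, field_dir=(1.0, 0.0)):
    """Mean cosine of the angle between each cell's net displacement and the
    field axis: +1 = all cells migrate along the field direction, 0 = random,
    -1 = all cells migrate against it."""
    d = np.asarray(end_xy, float) - np.asarray(start_xy, float)
    f = np.asarray(field_dir, float)
    f = f / np.linalg.norm(f)
    cosines = (d @ f) / np.linalg.norm(d, axis=1)
    return cosines.mean()

# Example with three hypothetical tracked trophozoites (positions in micrometres).
start = [[0, 0], [5, 2], [1, -3]]
end = [[-40, 5], [-31, -4], [-55, 2]]          # net movement against the field axis
print(directedness(start, end, field_dir=(1, 0)))   # close to -1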
Comparison of sound speed measurements on two different ultrasound tomography devices
NASA Astrophysics Data System (ADS)
Sak, Mark; Duric, Neb; Littrup, Peter; Bey-Knight, Lisa; Sherman, Mark; Gierach, Gretchen; Malyarenko, Antonina
2014-03-01
Ultrasound tomography (UST) employs sound waves to produce three-dimensional images of breast tissue and precisely measures the attenuation of sound speed secondary to breast tissue composition. High breast density is a strong breast cancer risk factor and sound speed is directly proportional to breast density. UST provides a quantitative measure of breast density based on three-dimensional imaging without compression, thereby overcoming the shortcomings of many other imaging modalities. The quantitative nature of the UST breast density measures is tied to an external standard, so sound speed measurement in breast tissue should be independent of specific hardware. The work presented here compares breast sound speed measurements obtained with two different UST devices. The Computerized Ultrasound Risk Evaluation (CURE) system located at the Karmanos Cancer Institute in Detroit, Michigan was recently replaced with the SoftVue ultrasound tomographic device. Ongoing clinical trials have used images generated from both sets of hardware, so maintaining consistency in sound speed measurements is important. During an overlap period when both systems were in the same exam room, a total of 12 patients had one or both of their breasts imaged on both systems on the same day. There were 22 sound speed scans analyzed from each system and the average breast sound speeds were compared. Images were either reconstructed using saved raw data (for both CURE and SoftVue) or were created during the image acquisition (saved in DICOM format for SoftVue scans only). The sound speed measurements from each system were strongly and positively correlated with each other. The average difference in sound speed between the two sets of data was on the order of 1-2 m/s and this result was not statistically significant. The only sets of images that showed a statistical difference were the DICOM images created during the SoftVue scan compared to the SoftVue images reconstructed from the raw data. However, the discrepancy between the sound speed values could be easily handled by uniformly increasing the DICOM sound speed by approximately 0.5 m/s. These results suggest that there is no fundamental difference in sound speed measurement for the two systems and support combining data generated with these instruments in future studies.
Three-dimensional measurement of yarn hairiness via multiperspective images
NASA Astrophysics Data System (ADS)
Wang, Lei; Xu, Bugao; Gao, Weidong
2018-02-01
Yarn hairiness is one of the essential parameters for assessing yarn quality. Most of the currently used yarn measurement systems are based on two-dimensional (2-D) photoelectric measurements, which are likely to underestimate levels of yarn hairiness because hairy fibers on a yarn surface are often projected or occluded in these 2-D systems. A three-dimensional (3-D) test method for hairiness measurement using a multiperspective imaging system is presented. The system was developed to reconstruct a 3-D yarn model for tracing the actual length of hairy fibers on a yarn surface. Five views of a yarn from different perspectives were created by two angled mirrors and simultaneously captured in one panoramic picture by a camera. A 3-D model was built by extracting the yarn silhouettes in the five views and transferring the silhouettes into a common coordinate system. From the 3-D model, curved hair fibers were traced spatially so that projection and occlusion occurring in the current systems could be avoided. In the experiment, the proposed method was compared with two commercial instruments, i.e., the Uster Tester and Zweigle Tester. It is demonstrated that the length distribution of hairy fibers measured from the 3-D model showed an exponential growth when the fiber length is sorted from shortest to longest. The hairiness measurements, such as H-value, measured by the multiperspective method were highly consistent with those of Uster Tester (r=0.992) but had larger values than those obtained from Uster Tester and Zweigle Tester, proving that the proposed method corrected underestimated hairiness measurements in the commercial systems.
The forensic holodeck: an immersive display for forensic crime scene reconstructions.
Ebert, Lars C; Nguyen, Tuan T; Breitbeck, Robert; Braun, Marcel; Thali, Michael J; Ross, Steffen
2014-12-01
In forensic investigations, crime scene reconstructions are created based on a variety of three-dimensional image modalities. Although the data gathered are three-dimensional, their presentation on computer screens and paper is two-dimensional, which incurs a loss of information. By applying immersive virtual reality (VR) techniques, we propose a system that allows a crime scene to be viewed as if the investigator were present at the scene. We used a low-cost VR headset originally developed for computer gaming in our system. The headset offers a large viewing volume and tracks the user's head orientation in real-time, and an optical tracker is used for positional information. In addition, we created a crime scene reconstruction to demonstrate the system. In this article, we present a low-cost system that allows immersive, three-dimensional and interactive visualization of forensic incident scene reconstructions.
Application of digital interferogram evaluation techniques to the measurement of 3-D flow fields
NASA Technical Reports Server (NTRS)
Becker, Friedhelm; Yu, Yung H.
1987-01-01
A system for digitally evaluating interferograms, based on an image processing system connected to a host computer, was implemented. The system supports one- and two-dimensional interferogram evaluations. Interferograms are digitized, enhanced, and then segmented. The fringe coordinates are extracted, and the fringes are represented as polygonal data structures. Fringe numbering and fringe interpolation modules are implemented. The system supports editing and interactive features, as well as graphic visualization. An application of the system to the evaluation of double exposure interferograms from the transonic flow field around a helicopter blade and the reconstruction of the three dimensional flow field is given.
Visual Image Sensor Organ Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.
2014-01-01
This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
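One simple way to realize an image-to-sound mapping of the kind described, with columns swept over time, rows mapped to tone frequencies, and brightness mapped to amplitude, is sketched below. This is a toy illustration of the sensing-superposition idea under assumed parameters, not the VISOR device's actual encoding.

import numpy as np

def image_to_audio(image, duration=1.0, fs=22050, f_lo=200.0, f_hi=8000.0):
    """Map a 2-D brightness map to sound: image columns sweep across time,
    rows map to tone frequencies, and pixel brightness sets each tone's
    amplitude."""
    img = np.asarray(image, dtype=float)
    img = img / (img.max() + 1e-12)
    n_rows, n_cols = img.shape
    # Top image rows map to high frequencies, on a logarithmic scale.
    freqs = np.geomspace(f_hi, f_lo, n_rows)
    samples_per_col = int(duration * fs / n_cols)
    t = np.arange(samples_per_col) / fs
    audio = []
    for c in range(n_cols):
        tones = img[:, c:c + 1] * np.sin(2 * np.pi * freqs[:, None] * t)
        audio.append(tones.sum(axis=0))
    audio = np.concatenate(audio)
    return audio / (np.abs(audio).max() + 1e-12)

# Example: a bright diagonal line becomes a descending frequency sweep.
img = np.eye(32)
waveform = image_to_audio(img, duration=2.0)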
SEMG signal compression based on two-dimensional techniques.
de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino
2016-04-18
Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques, before the two-dimensional encoding procedure, in order to provide a suitable data organization, whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, which is based on a recurrent pattern matching algorithm called multidimensional multiscale parser (MMP). The mentioned encoder was modified, in order to efficiently work with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named as segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced, the percentage difference sorting (PDS) algorithm is employed, with different image compressors, and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records, acquired in laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference [Formula: see text] compression factor figures, for low and high compression factors, respectively. Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes, for low compression factors, the combination between SbS and HEVC proved to be competitive, for high compression factors, and JPEG2000, combined with PDS, provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference [Formula: see text] compression factor. The proposed schemes are effective and, specifically, the modified MMP algorithm can be considered as an interesting alternative for isometric signals, regarding traditional SEMG encoders. Besides, the approach based on off-the-shelf image encoders has the potential of fast implementation and dissemination, given that many embedded systems may already have such encoders available, in the underlying hardware/software architecture.
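The basic reassembly of a 1-D SEMG record into a 2-D matrix, so that an image or video encoder can be applied, can be sketched as follows. The window length, 8-bit scaling and zero padding are arbitrary choices made for illustration; the SbS and PDS reordering steps described above are not reproduced.

import numpy as np

def semg_to_image(signal, n_cols=512):
    """Reassemble a 1-D SEMG record as a 2-D matrix so that an off-the-shelf
    image/video encoder can exploit inter-segment correlation: consecutive
    windows of n_cols samples become image rows."""
    sig = np.asarray(signal, dtype=float)
    n_rows = int(np.ceil(sig.size / n_cols))
    padded = np.zeros(n_rows * n_cols)
    padded[:sig.size] = sig
    img = padded.reshape(n_rows, n_cols)
    # Scale to 8-bit for a standard image codec (the scale/offset and the
    # original length must be kept as side information for reconstruction).
    lo, hi = img.min(), img.max()
    img8 = np.round(255 * (img - lo) / (hi - lo + 1e-12)).astype(np.uint8)
    return img8, (lo, hi, sig.size)

record = np.sin(np.linspace(0, 200 * np.pi, 100000))    # stand-in SEMG record
img8, meta = semg_to_image(record)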
McGovern, Eimear; Kelleher, Eoin; Snow, Aisling; Walsh, Kevin; Gadallah, Bassem; Kutty, Shelby; Redmond, John M; McMahon, Colin J
2017-09-01
In recent years, three-dimensional printing has demonstrated reliable reproducibility of several organs including hearts with complex congenital cardiac anomalies. This represents the next step in advanced image processing and can be used to plan surgical repair. In this study, we describe three children with complex univentricular hearts and abnormal systemic or pulmonary venous drainage, in whom three-dimensional printed models based on CT data assisted with preoperative planning. For two children, after group discussion and examination of the models, a decision was made not to proceed with surgery. We extend the current clinical experience with three-dimensional printed modelling and discuss the benefits of such models in the setting of managing complex surgical problems in children with univentricular circulation and abnormal systemic or pulmonary venous drainage.
Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter
NASA Astrophysics Data System (ADS)
Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.
1991-06-01
We have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible-light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar x-ray fluoroscopy are presented.
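With two orthogonally mounted cameras, the 3-D position can be recovered by combining the coordinates seen in each view. The sketch below uses a deliberately simplified orthographic model with fixed millimetre-per-pixel scales and hypothetical numbers; a real system such as the one described would use calibrated camera models.

import numpy as np

def triangulate_orthogonal(view_a_px, view_b_px, mm_per_px_a, mm_per_px_b):
    """Recover a 3-D position under a simplified orthographic model:
    camera A (looking along +y) reports the object's (x, z) in its image,
    camera B (looking along +x) reports (y, z); the two z estimates are
    averaged."""
    ax, az = np.asarray(view_a_px, float) * mm_per_px_a
    by, bz = np.asarray(view_b_px, float) * mm_per_px_b
    return np.array([ax, by, 0.5 * (az + bz)])

# Example: marker seen at pixel (120, 260) in camera A and (90, 258) in camera B.
p_mm = triangulate_orthogonal((120, 260), (90, 258), 0.25, 0.25)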
NASA Astrophysics Data System (ADS)
McArdle, Sara; Chodaczek, Grzegorz; Ray, Nilanjan; Ley, Klaus
2015-02-01
Intravital multiphoton imaging of arteries is technically challenging because the artery expands with every heartbeat, causing severe motion artifacts. To study leukocyte activity in atherosclerosis, we developed the intravital live cell triggered imaging system (ILTIS). This system implements cardiac triggered acquisition as well as frame selection and image registration algorithms to produce stable movies of myeloid cell movement in atherosclerotic arteries in live mice. To minimize tissue damage, no mechanical stabilization is used and the artery is allowed to expand freely. ILTIS performs multicolor high frame-rate two-dimensional imaging and full-thickness three-dimensional imaging of beating arteries in live mice. The external carotid artery and its branches (superior thyroid and ascending pharyngeal arteries) were developed as a surgically accessible and reliable model of atherosclerosis. We use ILTIS to demonstrate Cx3cr1GFP monocytes patrolling the lumen of atherosclerotic arteries. Additionally, we developed a new reporter mouse (Apoe-/-Cx3cr1GFP/+Cd11cYFP) to image GFP+ and GFP+YFP+ macrophages "dancing on the spot" and YFP+ macrophages migrating within intimal plaque. ILTIS will be helpful to answer pertinent open questions in the field, including monocyte recruitment and transmigration, macrophage and dendritic cell activity, and motion of other immune cells.
Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale
NASA Astrophysics Data System (ADS)
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan
2018-04-01
This Note presents a new absolute X-Y-Θ position sensor for measuring planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated with respect to resolution, nonlinearity, and repeatability. The sensor could resolve 25 nm linear and 0.001° angular displacements clearly, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.
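Combining the two decoded absolute 2-D positions into an X-Y-Θ pose is a small geometric step, sketched below with hypothetical numbers; decoding the phase-encoded scale image itself is the hard part and is not shown.

import numpy as np

def xy_theta(p1, p2, baseline_nominal):
    """Combine the absolute 2-D scale positions decoded at two separated
    readout points into an X-Y-Theta pose: the midpoint gives X-Y and the
    direction of the connecting vector gives the rotation angle."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    center = 0.5 * (p1 + p2)
    d = p2 - p1
    theta = np.arctan2(d[1], d[0])          # rad, relative to the scale X axis
    scale_check = np.linalg.norm(d) / baseline_nominal   # should stay near 1
    return center, theta, scale_check

# Example: two decoded points 8 mm apart on the scale.
center, theta, ok = xy_theta((10.002, 5.001), (18.004, 5.141), 8.0)
print(center, np.degrees(theta))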
Three-dimensional confocal microscopy of the living cornea and ocular lens
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1991-07-01
The three-dimensional reconstruction of the optic zone of the cornea and the ocular crystalline lens has been accomplished using confocal microscopy and volume rendering computer techniques. A laser scanning confocal microscope was used in the reflected light mode to obtain the two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon ion laser with a 488 nm wavelength. The microscope objective was a Leitz X25, NA 0.6 water immersion lens. The 400 micron thick cornea was optically sectioned into 133 three micron sections. The semi-transparent cornea and the in-situ ocular lens was visualized as high resolution, high contrast two-dimensional images. The structures observed in the cornea include: superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, basal lamina, nerve plexus, nerve fibers, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in- situ ocular lens include: lens capsule, lens epithelial cells, and individual lens fibers. The three-dimensional data sets of the cornea and the ocular lens were reconstructed in the computer using volume rendering techniques. Stereo pairs were also created of the two- dimensional ocular images for visualization. The stack of two-dimensional images was reconstructed into a three-dimensional object using volume rendering techniques. This demonstration of the three-dimensional visualization of the intact, enucleated eye provides an important step toward quantitative three-dimensional morphometry of the eye. The important aspects of three-dimensional reconstruction are discussed.
A fast rigid-registration method of inferior limb X-ray image and 3D CT images for TKA surgery
NASA Astrophysics Data System (ADS)
Ito, Fumihito; O. D. A, Prima; Uwano, Ikuko; Ito, Kenzo
2010-03-01
In this paper, we propose a fast rigid-registration method for inferior limb X-ray films (two-dimensional Computed Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty (TKA) surgery planning. The position of each bone, such as the femur and tibia (shin bone), in the X-ray film and 3D CT images is slightly different, and care must be taken in how the two image types are used, since the X-ray film is captured in the standing position while the 3D CT is captured in the decubitus (face-up) position. Although conventional registration mainly uses a cross-correlation function between two images together with optimization techniques, it takes enormous calculation time and is difficult to use in interactive operations. To solve these problems, we calculate the center line (bone axis) of the femur and tibia automatically and use these axes as initial positions for the registration. We evaluate our registration method using image data from three patients, and we compare our proposed method with a conventional registration based on the down-hill simplex algorithm. The down-hill simplex method is an optimization algorithm that requires only function evaluations and does not need the calculation of derivatives. Our registration method outperforms the down-hill simplex method in both computational time and convergence stability. We have developed an implant simulation system on a personal computer to support the surgeon in preoperative planning of TKA. Our registration method is implemented in the simulation system, and the user can manipulate 2D/3D translucent templates of implant components on the X-ray film and 3D CT images.
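For comparison, a conventional intensity-based rigid registration driven by the down-hill simplex (Nelder-Mead) method might look like the following 2-D sketch, with normalized cross-correlation as the similarity measure; in the proposed approach the initial guess would come from the automatically extracted bone axes rather than from a manual value. This is a generic illustration, not the authors' implementation, and may need tuning to converge reliably.

import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    """Normalized cross-correlation between two images of equal size."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return (a * b).mean()

def register_rigid_2d(fixed, moving, init=(0.0, 0.0, 0.0)):
    """2-D rigid registration (tx, ty in pixels, angle in degrees) by
    maximizing NCC with the down-hill simplex (Nelder-Mead) method."""
    def cost(p):
        tx, ty, angle = p
        warped = ndimage.rotate(moving, angle, reshape=False, order=1)
        warped = ndimage.shift(warped, (ty, tx), order=1)
        return -ncc(fixed, warped)
    res = optimize.minimize(cost, x0=np.asarray(init), method='Nelder-Mead',
                            options={'xatol': 0.05, 'fatol': 1e-6, 'maxiter': 2000})
    return res.x, -res.fun

# Synthetic example: register a shifted/rotated copy of a test image,
# starting from a crude initial guess (a bone axis would provide this in practice).
fixed = np.zeros((128, 128)); fixed[40:90, 55:75] = 1.0      # a "bone" block
moving = ndimage.shift(ndimage.rotate(fixed, -4.0, reshape=False), (-3, 5))
params, score = register_rigid_2d(fixed, moving, init=(1.0, 1.0, 1.0))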
Wood, Martin; Mannion, Richard
2011-02-01
A comparison of 2 surgical techniques. To determine the relative accuracy of minimally invasive lumbar pedicle screw placement using 2 different CT-based image-guided techniques. Three-dimensional intraoperative fluoroscopy systems have recently become available that provide the ability to use CT-quality images for navigation during image-guided minimally invasive spinal surgery. However, the cost of this equipment may negate any potential benefit in navigational accuracy. We therefore assess the accuracy of pedicle screw placement using an intraoperative 3-dimensional fluoroscope for guidance compared with a technique using preoperative CT images merged to intraoperative 2-dimensional fluoroscopy. Sixty-seven patients undergoing minimally invasive placement of lumbar pedicle screws (296 screws) using a navigated, image-guided technique were studied and the accuracy of pedicle screw placement assessed. Electromyography (EMG) monitoring of lumbar nerve roots was used in all. Group 1: 24 patients in whom a preoperative CT scan was merged with intraoperative 2-dimensional fluoroscopy images on the image-guidance system. Group 2: 43 patients using intraoperative 3-dimensional fluoroscopy images as the source for the image guidance system. The frequencies of pedicle breach and EMG warnings (indicating potentially unsafe screw placement) in each group were recorded. The rate of pedicle screw misplacement was 6.4% in group 1 vs 1.6% in group 2 (P=0.03). There were no cases of neurologic injury from suboptimal placement of screws. Additionally, the incidence of EMG warnings was significantly lower in group 2 (3.7% vs. 10%; P=0.03). The use of an intraoperative 3-dimensional fluoroscopy system with an image-guidance system results in greater accuracy of pedicle screw placement than the use of preoperative CT scans, although potentially dangerous placement of pedicle screws can be prevented by the use of EMG monitoring of lumbar nerve roots.
Development of a digital impression procedure using photogrammetry for complete denture fabrication.
Matsuda, Takashi; Goto, Takaharu; Kurahashi, Kosuke; Kashiwabara, Toshiya; Ichikawa, Tetsuo
We developed an innovative procedure for digitizing maxillary edentulous residual ridges with a photogrammetric system capable of estimating three-dimensional (3D) digital forms from multiple two-dimensional (2D) digital images. The aim of this study was to validate the effectiveness of the photogrammetric system. Impressions of the maxillary residual ridges of five edentulous patients were taken with four kinds of procedures: three conventional impression procedures and the photogrammetric system. Plaster models were fabricated from the conventional impressions and digitized with a 3D scanner. Pairs of the four 3D forms were superimposed with 3D inspection software, and differences were evaluated using a least-squares best-fit algorithm. An in vitro experiment suggested that the better imaging conditions were within a horizontal range of ±15 degrees and at a vertical angle of 45 degrees. The mean difference between the photogrammetric image (Form A) and the image taken from the conventional preliminary impression (Form C) was 0.52 ± 0.22 mm. The mean difference between the image taken of the final impression through a special tray (Form B) and Form C was 0.26 ± 0.06 mm. The mean difference between the image taken from the conventional final impression (Form D) and Form C was 0.25 ± 0.07 mm. The difference between Forms A and C was significantly larger than the differences between Forms B and C and between Forms D and C. The results of this study suggest that obtaining digital impressions of edentulous residual ridges using a photogrammetric system is feasible and applicable for clinical use.
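The superimposition step relies on a least-squares best fit between two digitized forms. Below is a minimal sketch of one such rigid best fit (the Kabsch algorithm) on corresponding points, assuming correspondences are already established; commercial 3D inspection software typically finds them iteratively (ICP-style) rather than taking them as given.

```python
# Minimal sketch: least-squares rigid superimposition of two point sets with
# known correspondences (Kabsch algorithm). Real 3D inspection software
# establishes correspondences iteratively (e.g., ICP); this shows only the
# closed-form best-fit step and a simple deviation metric.
import numpy as np

def best_fit_rigid(P, Q):
    """Return rotation R and translation t minimizing ||(R @ P.T).T + t - Q||."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

def mean_deviation(P, Q, R, t):
    """Mean point-to-point distance after alignment (in mm if inputs are mm)."""
    return float(np.linalg.norm((R @ P.T).T + t - Q, axis=1).mean())
```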
Document Examination: Applications of Image Processing Systems.
Kopainsky, B
1989-12-01
Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.
Two-dimensional imaging in a lightweight portable MRI scanner without gradient coils.
Cooley, Clarissa Zimmerman; Stockmann, Jason P; Armstrong, Brandon D; Sarracanie, Mathieu; Lev, Michael H; Rosen, Matthew S; Wald, Lawrence L
2015-02-01
As the premier modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as intensive care units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. We construct and validate a truly portable (<100 kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating spatial encoding magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed two-dimensional (2D) image. Multiple receive channels are used to disambiguate the nonbijective encoding field. The system is validated with experimental images of 2D test phantoms. Similar to other nonlinear field encoding schemes, the spatial resolution is position dependent with blurring in the center, but is shown to be likely sufficient for many medical applications. The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof of concept images that are expected to be further improved with refinement of the calibration and methodology. © 2014 Wiley Periodicals, Inc.
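The image is recovered by iteratively inverting a linear model that maps the object to the generalized projections produced by the rotating encoding field and the multiple receive coils. The following is a generic sketch of one such iterative solver (Kaczmarz-type algebraic reconstruction), under the assumption that an encoding matrix `E` is available from a separate field calibration; it is not the authors' reconstruction code.

```python
# Generic sketch: iterative (Kaczmarz/ART) reconstruction of an image vector x
# from generalized projections y = E @ x, where each row of E encodes one
# measurement of the rotating spatial encoding field and coil sensitivities.
# E is assumed to come from calibration; this is not the paper's exact solver.
import numpy as np

def kaczmarz(E, y, n_sweeps=20, relax=0.5):
    x = np.zeros(E.shape[1])
    row_norms = (E ** 2).sum(axis=1) + 1e-12
    for _ in range(n_sweeps):
        for i in range(E.shape[0]):
            r = y[i] - E[i] @ x              # residual of one measurement
            x += relax * r / row_norms[i] * E[i]
    return x
```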
Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.
Wang, Dang-wei; Ma, Xiao-yan; Su, Yi
2010-05-01
This paper presents a system model and method for the 2-D imaging application via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. Furthermore, the imaging formulation for our method is developed through Fourier integral processing, and the parameters of the antenna array, including the cross-range resolution, required size, and sampling interval, are also examined. Different from the spatial sequential procedure that samples the scattered echoes during multiple snapshot illuminations in inverse synthetic aperture radar (ISAR) imaging, the proposed method utilizes a spatial parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation in ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter bands can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided for testing our proposed method.
NASA Astrophysics Data System (ADS)
Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian
2016-05-01
Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only about 10% of the samples required by a conventional system. Despite the compression, the technique is computationally very demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).
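To make the three patch geometries concrete, the sketch below slices 1D, 2D, and 3D patches out of an ultra-spectral cube stored as a NumPy array; the cube size and the (rows, columns, bands) layout are assumptions for illustration only.

```python
# Illustration of the three patch shapes discussed for reconstructing an
# ultra-spectral cube. The shape (rows, cols, bands) and its sizes are
# hypothetical example values, not the CS-MUSI system's actual dimensions.
import numpy as np

cube = np.zeros((256, 256, 391))           # hypothetical ultra-spectral cube

pixel_1d = cube[10, 20, :]                 # 1D patch: one spatial pixel, all bands
row_2d   = cube[10, :, :]                  # 2D patch: one spatial row x all bands
block_3d = cube[0:16, 0:16, :]             # 3D patch: spatial block x all bands
```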
Chen, Hsin-Yu; Ng, Li-Shia; Chang, Chun-Shin; Lu, Ting-Chen; Chen, Ning-Hung; Chen, Zung-Chung
2017-06-01
Advances in three-dimensional imaging and three-dimensional printing technology have expanded the frontier of presurgical design for microtia reconstruction from two-dimensional curved lines to three-dimensional perspectives. This study presents an algorithm for combining three-dimensional surface imaging, computer-assisted design, and three-dimensional printing to create patient-specific auricular frameworks in unilateral microtia reconstruction. Between January of 2015 and January of 2016, six patients with unilateral microtia were enrolled. The average age of the patients was 7.6 years. A three-dimensional image of the patient's head was captured by 3dMDcranial, and virtual sculpture carried out using Geomagic Freeform software and a Touch X Haptic device for fabrication of the auricular template. Each template was tailored according to the patient's unique auricular morphology. The final construct was mirrored onto the defective side and printed out with biocompatible acrylic material. During the surgery, the prefabricated customized template served as a three-dimensional guide for surgical simulation and sculpture of the MEDPOR framework. Average follow-up was 10.3 months. Symmetric and good aesthetic results with regard to auricular shape, projection, and orientation were obtained. One case with severe implant exposure was salvaged with free temporoparietal fascia transfer and skin grafting. The combination of three-dimensional imaging and manufacturing technology with the malleability of MEDPOR has surpassed existing limitations resulting from the use of autologous materials and the ambiguity of two-dimensional planning. This approach allows surgeons to customize the auricular framework in a highly precise and sophisticated manner, taking a big step closer to the goal of mirror-image reconstruction for unilateral microtia patients. Therapeutic, IV.
Stereoscopic Imaging in Hypersonic Boundary Layers using Planar Laser-Induced Fluorescence
NASA Technical Reports Server (NTRS)
Danehy, Paul M.; Bathel, Brett; Inman, Jennifer A.; Alderfer, David W.; Jones, Stephen B.
2008-01-01
Stereoscopic time-resolved visualization of three-dimensional structures in a hypersonic flow has been performed for the first time. Nitric Oxide (NO) was seeded into hypersonic boundary layer flows that were designed to transition from laminar to turbulent. A thick laser sheet illuminated and excited the NO, causing spatially varying fluorescence. Two cameras in a stereoscopic configuration were used to image the fluorescence. The images were processed in a computer visualization environment to provide stereoscopic image pairs. Two methods were used to display these image pairs: a cross-eyed viewing method, which can be viewed with the naked eye, and red/blue anaglyphs, which require viewing through red/blue glasses. The images visualized three-dimensional information that would be lost if conventional planar laser-induced fluorescence imaging had been used. Two model configurations were studied in NASA Langley Research Center's 31-Inch Mach 10 Air Wind Tunnel. One model was a 10-degree half-angle wedge containing a small protuberance to force the flow to transition. The other model was a 1/3-scale, truncated Hyper-X forebody model with blowing through a series of holes to force the boundary layer flow to transition to turbulence. In the former case, low flow rates of pure NO seeded and marked the boundary layer fluid. In the latter, a trace concentration of NO was seeded into the injected N2 gas. The three-dimensional visualizations have an effective time resolution of about 500 ns, which is fast enough to freeze this hypersonic flow. The 512x512 resolution of the resulting images is much higher than that of high-speed laser-sheet scanning systems with similar time response, which typically measure 10-20 planes.
DIGE Analysis of Human Tissues.
Gelfi, Cecilia; Capitanio, Daniele
2018-01-01
Two-dimensional difference gel electrophoresis (2-D DIGE) is an advanced and elegant gel electrophoretic analytical tool for comparative protein assessment. It is based on two-dimensional gel electrophoresis (2-DE) separation of fluorescently labeled protein extracts. The tagging procedures are designed not to interfere with the chemical properties of the proteins with respect to their pI and electrophoretic mobility, provided a proper labeling protocol is followed. Either a two-dye or a three-dye system can be adopted, and the choice depends on the specific application. Furthermore, the use of an internal pooled standard makes 2-D DIGE a highly accurate quantitative method, enabling multiple protein samples to be separated on the same two-dimensional gel. The image matching and cross-gel statistical analysis generate robust quantitative results, making data validation by independent technologies successful.
Design of an open-ended plenoptic camera for three-dimensional imaging of dusty plasmas
NASA Astrophysics Data System (ADS)
Sanpei, Akio; Tokunaga, Kazuya; Hayashi, Yasuaki
2017-08-01
Herein, the design of a plenoptic imaging system for three-dimensional reconstructions of dusty plasmas using an integral photography technique has been reported. This open-ended system is constructed with a multi-convex lens array and a typical reflex CMOS camera. We validated the design of the reconstruction system using known target particles. Additionally, the system has been applied to observations of fine particles floating in a horizontal, parallel-plate radio-frequency plasma. Furthermore, the system works well in the range of our dusty plasma experiment. We can identify the three-dimensional positions of dust particles from a single-exposure image obtained from one viewing port.
NASA Astrophysics Data System (ADS)
Malik, Mehul
Over the past three decades, quantum mechanics has allowed the development of technologies that provide unconditionally secure communication. In parallel, the quantum nature of the transverse electromagnetic field has spawned the field of quantum imaging that encompasses technologies such as quantum lithography, quantum ghost imaging, and high-dimensional quantum key distribution (QKD). The emergence of such quantum technologies also highlights the need for the development of accurate and efficient methods of measuring and characterizing the elusive quantum state itself. In this thesis, I present new technologies that use the quantum properties of light for security. The first of these is a technique that extends the principles behind QKD to the field of imaging and optical ranging. By applying the polarization-based BB84 protocol to individual photons in an active imaging system, we obtained images that were secure against any intercept-resend jamming attacks. The second technology presented in this thesis is based on an extension of quantum ghost imaging, a technique that uses position-momentum entangled photons to create an image of an object without directly gaining any spatial information from it. We used a holographic filtering technique to build a quantum ghost image identification system that uses a few pairs of photons to identify an object from a set of known objects. The third technology addressed in this thesis is a high-dimensional QKD system that uses orbital-angular-momentum (OAM) modes of light for encoding. Moving to a high-dimensional state space in QKD allows one to impress more information on each photon, as well as introduce higher levels of security. I discuss the development of two OAM-QKD protocols based on the BB84 and Ekert protocols of QKD. In addition, I present a study characterizing the effects of turbulence on a communication system using OAM modes for encoding. The fourth and final technology presented in this thesis is a relatively new technique called direct measurement that uses sequential weak and strong measurements to characterize a quantum state. I use this technique to characterize the quantum state of a photon with a dimensionality of d = 27, and visualize its rotation in the natural basis of OAM.
Grid point extraction and coding for structured light system
NASA Astrophysics Data System (ADS)
Song, Zhan; Chung, Ronald
2011-09-01
A structured light system simplifies three-dimensional reconstruction by illuminating the target object with a specially designed pattern, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data under illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points, plus the pseudorandomness of the projected pattern, can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements and that localizes the grid points with subpixel accuracy. Extensive experiments illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy than with existing pseudorandomly encoded structured light systems.
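The pseudorandomness matters because every local window of pattern elements must occur at most once, which is what gives each grid point a unique label. A small check of that window-uniqueness property on a candidate code matrix is sketched below; the 3 × 3 window and the four-symbol alphabet are assumptions for illustration, not the authors' design values.

```python
# Sketch: verify the window-uniqueness property of a candidate pseudorandom
# code matrix (each entry is a pattern-element symbol, e.g., a color index).
# Window size and symbol alphabet are assumptions for illustration.
import numpy as np

def windows_unique(code, win=3):
    seen = set()
    rows, cols = code.shape
    for r in range(rows - win + 1):
        for c in range(cols - win + 1):
            key = tuple(code[r:r + win, c:c + win].ravel())
            if key in seen:
                return False          # this window label is not unique
            seen.add(key)
    return True

rng = np.random.default_rng(0)
code = rng.integers(0, 4, size=(20, 20))   # random 4-symbol candidate matrix
print(windows_unique(code))
```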
Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus
2014-01-01
To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.
On the V-Line Radon Transform and Its Imaging Applications
Morvidone, M.; Nguyen, M. K.; Truong, T. T.; Zaidi, H.
2010-01-01
Radon transforms defined on smooth curves are well known and extensively studied in the literature. In this paper, we consider a Radon transform defined on a discontinuous curve formed by a pair of half-lines forming the vertical letter V. While the classical two-dimensional Radon transform has served as a workhorse for tomographic transmission and/or emission imaging, we show that this V-line Radon transform is the backbone of scattered-radiation imaging in two dimensions. We establish its analytic inverse formula as well as a corresponding filtered back projection reconstruction procedure. These theoretical results allow the reconstruction of two-dimensional images from Compton-scattered radiation collected on a one-dimensional collimated camera. We illustrate the working principles of this imaging modality by presenting numerical simulation results. PMID:20706545
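For reference, the classical two-dimensional Radon transform integrates along straight lines, while the V-line transform integrates along a pair of half-lines meeting at a vertex. One common parameterization is sketched below; the notation (vertex abscissa x_V on the detector line, half-opening angle omega) is an assumption and may differ from the authors' conventions.

```latex
% Classical 2D Radon transform along the line x\cos\theta + y\sin\theta = s:
R f(s,\theta) = \int_{\mathbb{R}^2} f(x,y)\,
                \delta(x\cos\theta + y\sin\theta - s)\,dx\,dy .

% One common form of the V-line transform: integration along the two
% half-lines of a "V" with vertex (x_V, 0) and half-opening angle \omega:
V f(x_V,\omega) = \int_0^{\infty}
    \big[\, f(x_V + t\sin\omega,\; t\cos\omega)
          + f(x_V - t\sin\omega,\; t\cos\omega) \,\big]\, dt .
```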
Promoting Inquiry in the Gifted Classroom through GPS and GIS Technologies
ERIC Educational Resources Information Center
Shaunessy, Elizabeth; Page, Carrie
2006-01-01
Geography is rapidly becoming more interactive, especially with the advent of the Global Positioning System (GPS) and Geographic Information Systems (GIS) and their adoption in the public and private sectors. The days of two-dimensional maps are quickly being replaced by geographic images that are stored electronically in computers and handheld…
Volumetric 3D display using a DLP projection engine
NASA Astrophysics Data System (ADS)
Geng, Jason
2012-03-01
In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to convey spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.
Three-dimensional scene reconstruction from a two-dimensional image
NASA Astrophysics Data System (ADS)
Parkins, Franz; Jacobs, Eddie
2017-05-01
We propose and simulate a method for reconstructing a three-dimensional scene from a two-dimensional image, for developing and augmenting world models for autonomous navigation. This is an extension of the Perspective-n-Point (PnP) method, which uses a sampling of 3D scene points, 2D image point pairings, and Random Sample Consensus (RANSAC) to infer the pose of the object and produce a 3D mesh of the original scene. Using object recognition and segmentation, we simulate the implementation on a scene of 3D objects with an eye toward implementation on embeddable hardware. The final solution will be deployed on the NVIDIA Tegra platform.
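A minimal sketch of the PnP-with-RANSAC step using OpenCV; the 3D-2D pairings, camera intrinsics, and distortion coefficients are assumed inputs, and the random arrays stand in for real correspondences, so this illustrates the call pattern rather than the authors' pipeline.

```python
# Minimal sketch of pose estimation via PnP + RANSAC with OpenCV.
# The 3D points, their 2D image matches, and the camera intrinsics are
# placeholder values; a real system would obtain them from recognition,
# segmentation, and calibration.
import cv2
import numpy as np

object_points = np.random.rand(20, 3).astype(np.float32)   # known 3D points
image_points = np.random.rand(20, 2).astype(np.float32)    # matched 2D points
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(5, dtype=np.float32)                        # no lens distortion

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points,
                                             K, dist, reprojectionError=3.0)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the inferred object pose
```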
An Upgrade of the Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software
NASA Technical Reports Server (NTRS)
Mason, Michelle L.; Rufer, Shann J.
2015-01-01
The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) code is used at NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used to design thermal protection systems to mitigate the risks due to the aeroheating loads on hypersonic vehicles, such as re-entry vehicles during descent and landing procedures. This code was originally written in the PV-WAVE programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the code was migrated to MATLAB syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to batch process all of the data from a wind tunnel run, to map the two-dimensional heating distribution onto a three-dimensional computer-aided design model of the vehicle to be viewed in Tecplot, and to extract data along a segmented line that follows an interesting feature in the data. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy code to validate the program. The differences between the two codes were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE version as the production code for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.
Scattering calculation and image reconstruction using elevation-focused beams
Duncan, David P.; Astheimer, Jeffrey P.; Waag, Robert C.
2009-01-01
Pressure scattered by cylindrical and spherical objects with elevation-focused illumination and reception has been analytically calculated, and corresponding cross sections have been reconstructed with a two-dimensional algorithm. Elevation focusing was used to elucidate constraints on quantitative imaging of three-dimensional objects with two-dimensional algorithms. Focused illumination and reception are represented by angular spectra of plane waves that were efficiently computed using a Fourier interpolation method to maintain the same angles for all temporal frequencies. Reconstructions were formed using an eigenfunction method with multiple frequencies, phase compensation, and iteration. The results show that the scattered pressure reduces to a two-dimensional expression, and two-dimensional algorithms are applicable when the region of a three-dimensional object within an elevation-focused beam is approximately constant in elevation. The results also show that energy scattered out of the reception aperture by objects contained within the focused beam can result in the reconstructed values of attenuation slope being greater than true values at the boundary of the object. Reconstructed sound speed images, however, appear to be relatively unaffected by the loss in scattered energy. The broad conclusion that can be drawn from these results is that two-dimensional reconstructions require compensation to account for uncaptured three-dimensional scattering. PMID:19425653
Rapid Protein Separations in Microfluidic Devices
NASA Technical Reports Server (NTRS)
Fan, Z. H.; Das, Champak; Xia, Zheng; Stoyanov, Alexander V.; Fredrickson, Carl K.
2004-01-01
This paper describes the fabrication of glass and plastic microfluidic devices for protein separations. Although the long-term goal is to develop a microfluidic device for two-dimensional gel electrophoresis, this paper focuses on the first dimension: isoelectric focusing (IEF). A laser-induced fluorescence (LIF) imaging system has been built for imaging an entire channel in an IEF device. The whole-channel imaging eliminates the need to migrate focused protein bands, which is required if a single-point detector is used. Using the devices and the imaging system, we are able to perform IEF separations of proteins within minutes rather than the hours required by traditional bench-top instruments.
Real-time, continuous-wave terahertz imaging using a microbolometer focal-plane array
NASA Technical Reports Server (NTRS)
Hu, Qing (Inventor); Min Lee, Alan W. (Inventor)
2010-01-01
The present invention generally provides a terahertz (THz) imaging system that includes a source for generating radiation (e.g., a quantum cascade laser) having one or more frequencies in a range of about 0.1 THz to about 10 THz, and a two-dimensional detector array comprising a plurality of radiation detecting elements that are capable of detecting radiation in that frequency range. An optical system directs radiation from the source to an object to be imaged. The detector array detects at least a portion of the radiation transmitted through the object (or reflected by the object) so as to form a THz image of that object.
Upper bound on the efficiency of certain nonimaging concentrators in the physical-optics model
NASA Astrophysics Data System (ADS)
Welford, W. T.; Winston, R.
1982-09-01
Upper bounds on the performance of nonimaging concentrators are obtained within the framework of scalar-wave theory by using a simple approach to avoid complex calculations on multiple phase fronts. The approach consists in treating a theoretically perfect image-forming device and postulating that no non-image-forming concentrator can have a better performance than such an ideal image-forming system. The performance of such a system can be calculated according to wave theory, and this will provide, in accordance with the postulate, upper bounds on the performance of nonimaging systems. The method is demonstrated for a two-dimensional compound parabolic concentrator.
Feasibility of four-dimensional preoperative simulation for elbow debridement arthroplasty.
Yamamoto, Michiro; Murakami, Yukimi; Iwatsuki, Katsuyuki; Kurimoto, Shigeru; Hirata, Hitoshi
2016-04-02
Recent advances in imaging modalities have enabled three-dimensional preoperative simulation. A four-dimensional preoperative simulation system would be useful for debridement arthroplasty of primary degenerative elbow osteoarthritis because it would be able to detect the impingement lesions. We developed a four-dimensional simulation system by adding the anatomical axis to the three-dimensional computed tomography scan data of the affected arm in one position. Eleven patients with primary degenerative elbow osteoarthritis were included. A "two rings" method was used to calculate the flexion-extension axis of the elbow by converting the surface of the trochlea and capitellum into two rings. A four-dimensional simulation movie was created and showed the optimal range of motion and the impingement area requiring excision. To evaluate the reliability of the flexion-extension axis, interobserver and intraobserver reliabilities regarding the assessment of bony overlap volumes were calculated twice for each patient by two authors. Patients were treated by open or arthroscopic debridement arthroplasties. Pre- and postoperative examinations included elbow range of motion measurement, and completion of the patient-rated questionnaire Hand20, Japanese Orthopaedic Association-Japan Elbow Society Elbow Function Score, and the Mayo Elbow Performance Score. Measurement of the bony overlap volume showed an intraobserver intraclass correlation coefficient of 0.93 and 0.90, and an interobserver intraclass correlation coefficient of 0.94. The mean elbow flexion-extension arc significantly improved from 101° to 125°. The mean Hand20 score significantly improved from 52 to 22. The mean Japanese Orthopaedic Association-Japan Elbow Society Elbow Function Score significantly improved from 67 to 88. The mean Mayo Elbow Performance Score significantly improved from 71 to 91 at the final follow-up evaluation. We showed that four-dimensional, preoperative simulation can be generated by adding the rotation axis to the one-position, three-dimensional computed tomography image of the affected arm. This method is feasible for elbow debridement arthroplasty.
Two-dimensional PCA-based human gait identification
NASA Astrophysics Data System (ADS)
Chen, Jinyan; Wu, Rongteng
2012-11-01
Automatically recognizing people through visual surveillance is important for public security. Human gait-based identification aims to recognize a person automatically from video of his or her walking, using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction, we obtain a binary image sequence from the surveillance video. By comparing two adjacent images in the gait sequence, we obtain a sequence of binary difference images. Each binary difference image indicates how the body moves during walking. We extract temporal-space features from the difference image sequence as follows: projecting one difference image onto the Y axis or the X axis gives two vectors; projecting every difference image in the sequence onto the Y axis or the X axis gives two matrices. These two matrices characterize the style of one walk. Two-dimensional principal component analysis (2DPCA) is then used to transform these two matrices into two vectors while keeping the maximum separability. Finally, the similarity of two human gait sequences is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
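A minimal sketch of the 2DPCA step on a stack of difference images, assuming the images are supplied as a NumPy array; the projection dimension `d` is arbitrary, and the feature comparison by Euclidean distance follows the description above.

```python
# Sketch of the 2DPCA step: learn a projection from a set of (binary)
# difference images A_i and project a new image onto the top-d eigenvectors
# of the image covariance matrix. Image size and d are example choices.
import numpy as np

def fit_2dpca(images, d=8):
    """images: array of shape (M, H, W). Returns projection matrix of shape (W, d)."""
    mean = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        C = A - mean
        G += C.T @ C                      # accumulate image covariance
    G /= len(images)
    vals, vecs = np.linalg.eigh(G)        # eigenvalues in ascending order
    return vecs[:, ::-1][:, :d]           # keep the top-d eigenvectors

def project(image, X):
    return image @ X                      # feature matrix Y = A X

def distance(Y1, Y2):
    """Euclidean distance between two projected gait feature matrices."""
    return float(np.linalg.norm(Y1 - Y2))
```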
Imaging properties and its improvements of scanning/imaging x-ray microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeuchi, Akihisa, E-mail: take@spring8.or.jp; Uesugi, Kentaro; Suzuki, Yoshio
A scanning/imaging X-ray microscope (SIXM) system has been developed at SPring-8. The SIXM consists of a scanning X-ray microscope with a one-dimensional (1D) X-ray focusing device and an imaging (full-field) X-ray microscope with a 1D X-ray objective. The motivation of the SIXM system is to realize quantitative and highly sensitive multimodal 3D X-ray tomography by taking advantage of both the scanning X-ray microscope using a multi-pixel detector and the imaging X-ray microscope. The data acquisition process for a 2D image is completely different in the horizontal and vertical directions: a 1D signal is obtained by linear scanning, while the other dimension is obtained with the imaging optics. This condition has caused a serious problem for the imaging properties, in that the image quality in the vertical direction has been much worse than that in the horizontal direction. In this paper, two approaches to solve this problem are presented. One is introducing a Fourier-transform method for phase retrieval from one phase-derivative image, and the other is to develop and employ a 1D diffuser to produce asymmetrical coherent illumination.
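A minimal sketch of a Fourier-transform method for recovering a phase map from a single phase-derivative image (derivative along the scan direction): divide by ik_x in the Fourier domain. Boundary handling and regularization are simplified here, and this illustrates the general approach rather than the instrument's actual code.

```python
# Sketch: integrate a phase-derivative image (d(phi)/dx) back to a phase map
# by dividing by i*k_x in the Fourier domain. The undetermined mean (piston)
# term is simply set to zero; real processing would handle boundaries and
# noise more carefully.
import numpy as np

def integrate_phase_derivative(dphi_dx, dx=1.0):
    ny, nx = dphi_dx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)   # i * k_x for each column
    kx[0] = 1.0                                  # placeholder: avoid divide-by-zero
    spec = np.fft.fft(dphi_dx, axis=1) / kx
    spec[:, 0] = 0.0                             # piston term is undetermined
    return np.real(np.fft.ifft(spec, axis=1))
```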
Method of composing two-dimensional scanned spectra observed by the New Vacuum Solar Telescope
NASA Astrophysics Data System (ADS)
Cai, Yun-Fang; Xu, Zhi; Chen, Yu-Chao; Xu, Jun; Li, Zheng-Gang; Fu, Yu; Ji, Kai-Fan
2018-04-01
In this paper we illustrate the technique used by the New Vacuum Solar Telescope (NVST) to increase the spatial resolution of two-dimensional (2D) solar spectroscopy observations involving two dimensions of space and one of wavelength. Without an image stabilizer at the NVST, large-scale wobble motion is present during the spatial scanning; its instantaneous amplitude can reach 1.3″ owing to the Earth's atmosphere and the precision of the telescope guiding system, and it seriously decreases the spatial resolution of 2D spatial maps composed from scanned spectra. We make the following effort to resolve this problem: the imaging system (e.g., the TiO band) is used to record and detect the displacement vectors of solar image motion during the raster scan, in both the slit and scanning directions. The spectral data (e.g., the Hα line), which are originally obtained as a time sequence, are corrected and re-arranged in space according to those displacement vectors. Raster scans are carried out in several active regions with different seeing conditions (two rasters are illustrated in this paper). Given a certain spatial sampling and temporal resolution, the spatial resolution of the composed 2D map can be close to that of the slit-jaw image. The resulting quality after correction is quantitatively evaluated with two methods. Physical quantities, such as the line-of-sight velocities in multiple layers of the solar atmosphere, are also inferred from the re-arranged spectra, demonstrating the advantage of this technique.
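One standard way to measure such displacement vectors is phase cross-correlation between a reference slit-jaw (TiO-band) frame and each frame taken during the scan; a minimal sketch using scikit-image is given below, with the sub-pixel upsampling factor chosen arbitrarily. This is an illustrative stand-in, not necessarily the NVST pipeline's own correlation code.

```python
# Sketch: measure the image-motion displacement between a reference slit-jaw
# frame and a frame acquired during the raster scan using phase cross-
# correlation. The upsampling factor (sub-pixel precision) is arbitrary here.
import numpy as np
from skimage.registration import phase_cross_correlation

def displacement(reference, frame, upsample=10):
    shift, error, _ = phase_cross_correlation(reference, frame,
                                              upsample_factor=upsample)
    return shift   # (dy, dx) in pixels, used to re-place the spectra in space
```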
Holography of Wi-fi Radiation.
Holl, Philipp M; Reinhard, Friedemann
2017-05-05
Wireless data transmission systems such as wi-fi or Bluetooth emit coherent light: electromagnetic waves with a precisely known amplitude and phase. Propagating in space, this radiation forms a hologram, a two-dimensional wave front encoding a three-dimensional view of all objects traversed by the light beam. Here we demonstrate a scheme to record this hologram in a phase-coherent fashion across a meter-sized imaging region. We recover three-dimensional views of objects and emitters by feeding the resulting data into digital reconstruction algorithms. Employing a digital implementation of dark-field propagation to suppress multipath reflection, we significantly enhance the quality of the resulting images. We numerically simulate the hologram of a 10-m-sized building, finding that both localization of emitters and 3D tomography of absorptive objects could be feasible by this technique.
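A minimal sketch of the digital back-propagation step by the angular spectrum method, which is one standard way to refocus a recorded hologram; the 2.4 GHz wavelength (about 12.5 cm), grid spacing, and propagation distance are example values, and the dark-field variant described in the paper is not included.

```python
# Sketch: back-propagate a recorded complex field by the angular spectrum
# method to refocus the hologram at another plane. Wavelength, grid spacing,
# and distance are example values only (2.4 GHz wi-fi -> ~0.125 m wavelength).
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent part dropped
    H = np.exp(1j * kz * z)                          # propagation transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a 128x128 complex hologram 2 m back toward the emitters.
holo = np.ones((128, 128), dtype=complex)
image_plane = angular_spectrum_propagate(holo, wavelength=0.125, dx=0.05, z=-2.0)
```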
High dynamic range CMOS (HDRC) imagers for safety systems
NASA Astrophysics Data System (ADS)
Strobel, Markus; Döttling, Dietmar
2013-04-01
The first part of this paper describes the high dynamic range CMOS (HDRC®) imager, a special type of CMOS image sensor with logarithmic response. The high-dynamic-range (HDR) image acquisition property is detailed through a mathematical definition and through measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters are discussed, including the pixel design for the global-shutter readout. The second part gives an outline of the applications and requirements of cameras for industrial safety. Equipped with HDRC global-shutter sensors, SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring, enabling new and more flexible solutions compared with existing safety guards.
Digital Beamforming Interferometry
NASA Technical Reports Server (NTRS)
Rincon, Rafael F. (Inventor)
2016-01-01
Airborne or spaceborne Synthetic Aperture Radar (SAR) can be used in a variety of ways and is often used to generate two-dimensional images of a surface. SAR involves the use of radio waves to determine the presence, properties, and features of extended areas. Specifically, radio waves are transmitted in the presence of a ground surface. A portion of the radio wave's energy is reflected back to the radar system, which allows the radar system to detect and image the surface. Such radar systems may be used in science applications, military contexts, and other commercial applications.
Applications Of Digital Image Acquisition In Anthropometry
NASA Astrophysics Data System (ADS)
Woolford, Barbara; Lewis, James L.
1981-10-01
Anthropometric data on reach and mobility have traditionally been collected by time-consuming and relatively inaccurate manual methods. Three-dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.
Appleton, P L; Quyn, A J; Swift, S; Näthke, I
2009-05-01
Visualizing overall tissue architecture in three dimensions is fundamental for validating and integrating biochemical, cell biological and visual data from less complex systems such as cultured cells. Here, we describe a method to generate high-resolution three-dimensional image data of intact mouse gut tissue. Regions of highest interest lie between 50 and 200 mum within this tissue. The quality and usefulness of three-dimensional image data of tissue with such depth is limited owing to problems associated with scattered light, photobleaching and spherical aberration. Furthermore, the highest-quality oil-immersion lenses are designed to work at a maximum distance of =10-15 mum into the sample, further compounding the ability to image at high-resolution deep within tissue. We show that manipulating the refractive index of the mounting media and decreasing sample opacity greatly improves image quality such that the limiting factor for a standard, inverted multi-photon microscope is determined by the working distance of the objective as opposed to detectable fluorescence. This method negates the need for mechanical sectioning of tissue and enables the routine generation of high-quality, quantitative image data that can significantly advance our understanding of tissue architecture and physiology.
Two-dimensional Kerr-Fourier imaging of translucent phantoms in thick turbid media
NASA Astrophysics Data System (ADS)
Liang, X.; Wang, L.; Ho, P. P.; Alfano, R. R.
1995-06-01
Translucent scattering phantoms hidden inside a 5.5-cm-thick Intralipid solution were imaged as a function of phantom scattering coefficient by the use of a picosecond time- and space-gated Kerr-Fourier imaging system. A 2-mm-thick translucent phantom with a 0.1% concentration (scattering coefficient) difference from the 55-mm-thick surrounding scattering host can be distinguished at a signal level of approximately 10^-10 of the incident illumination intensity.
Imaging of acoustic fields using optical feedback interferometry.
Bertling, Karl; Perchoux, Julien; Taimre, Thomas; Malkin, Robert; Robert, Daniel; Rakić, Aleksandar D; Bosch, Thierry
2014-12-01
This study introduces optical feedback interferometry as a simple and effective technique for the two-dimensional visualisation of acoustic fields. We present imaging results for several pressure distributions including those for progressive waves, standing waves, as well as the diffraction and interference patterns of the acoustic waves. The proposed solution has the distinct advantage of extreme optical simplicity and robustness thus opening the way to a low cost acoustic field imaging system based on mass produced laser diodes.
Top, Can Barış; Ilbey, Serhat; Güven, Hüseyin Emre
2017-12-01
We propose a coil arrangement for an open-bore field-free line (FFL) magnetic particle imaging (MPI) system, which is suitable for accessing the subject from the sides. The purpose of this study is twofold: to show that the FFL can be rotated and translated electronically in a volume of interest with this arrangement, and to analyze the current, voltage and power requirements of a 1 T/m gradient human-sized scanner for a 200 mm diameter × 200 mm height cylindrical field of view (FOV). We used split coils side by side with alternating current directions to generate a field-free line. Employing two of these coil groups, one of which is rotated 90 degrees with respect to the other, a rotating FFL was generated. We conducted numerical simulations to show the feasibility of this arrangement for three-dimensional (3D) electronic scanning of the FFL. Using simulations, we obtained images of a two-dimensional (2D) in silico dot phantom for a human-sized scanner with system matrix-based reconstruction. Simulations showed that the FFL can be generated and rotated in one plane and can be translated in two axes, allowing for 3D imaging of a large subject with the proposed arrangement. The human-sized scanner required 63-215 kW of power for the selection field coils to scan the focus inside the FOV. The proposed setup is suitable for FFL MPI imaging with an open-bore configuration without the need for mechanical rotation, which is preferable for clinical usage in terms of imaging time and patient access. Further studies are necessary to determine the limitations imposed by peripheral nerve stimulation, and to optimize the system parameters and the sequence design. © 2017 American Association of Physicists in Medicine.
Crack Modelling for Radiography
NASA Astrophysics Data System (ADS)
Chady, T.; Napierała, L.
2010-02-01
In this paper, the possibility of creating three-dimensional crack models, both of a random type and based on real-life radiographic images, is discussed. A method for storing cracks in a number of two-dimensional matrices, as well as an algorithm for their reconstruction into three-dimensional objects, is presented. The possibility of using an iterative algorithm for matching simulated crack images to real-life radiographic images is also discussed.
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong
2017-09-01
Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward, as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. Existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a process divided into two steps: dimensionality reduction based on an extension of the t-distributed stochastic neighbor embedding (t-SNE) approach is first performed, and then a semantic segmentation technique is applied to the embedded results by using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.
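As a baseline for the embedding step, the sketch below runs standard t-SNE from scikit-learn on an assumed (pixels × bands) array; the paper's contribution is an accelerated extension of t-SNE, so this only illustrates the off-the-shelf starting point, not the proposed method.

```python
# Baseline sketch: embed hyperspectral pixels with standard t-SNE before a
# tissue classifier is applied. The array of spectra is a hypothetical
# placeholder (N pixels x 128 bands); the paper uses an extended, faster t-SNE.
import numpy as np
from sklearn.manifold import TSNE

pixels = np.random.rand(2000, 128)            # placeholder spectra (N x bands)
embedding = TSNE(n_components=2, init="pca",
                 perplexity=30).fit_transform(pixels)
# `embedding` (N x 2) would then be passed to the tissue classification stage.
```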
Hidden explosives detector employing pulsed neutron and x-ray interrogation
Schultz, F.J.; Caldwell, J.T.
1993-04-06
Methods and systems for the detection of small amounts of modern, highly-explosive nitrogen-based explosives, such as plastic explosives, hidden in airline baggage. Several techniques are employed either individually or combined in a hybrid system. One technique employed in combination is X-ray imaging. Another technique is interrogation with a pulsed neutron source in a two-phase mode of operation to image both nitrogen and oxygen densities. Another technique employed in combination is neutron interrogation to form a hydrogen density image or three-dimensional map. In addition, deliberately-placed neutron-absorbing materials can be detected.
Beam alignment based on two-dimensional power spectral density of a near-field image.
Wang, Shenzhen; Yuan, Qiang; Zeng, Fa; Zhang, Xin; Zhao, Junpu; Li, Kehong; Zhang, Xiaolu; Xue, Qiao; Yang, Ying; Dai, Wanjun; Zhou, Wei; Wang, Yuanchen; Zheng, Kuixing; Su, Jingqin; Hu, Dongxia; Zhu, Qihua
2017-10-30
Beam alignment is crucial to high-power laser facilities and is used to adjust the laser beams quickly and accurately to meet stringent requirements of pointing and centering. In this paper, a novel alignment method is presented, which employs data processing of the two-dimensional power spectral density (2D-PSD) of a near-field image and resolves the beam pointing error relative to the spatial filter pinhole directly. By combining this with a near-field fiducial mark, beam alignment is achieved. It is experimentally demonstrated that this scheme realizes a far-field alignment precision of approximately 3% of the pinhole size. This scheme adopts only one near-field camera to construct the alignment system, which provides a simple, efficient, and low-cost way to align lasers.
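A minimal sketch of the quantity being analyzed: the two-dimensional power spectral density of a near-field image computed with the FFT. Windowing and normalization conventions vary, so this is an illustration rather than the facility's processing chain.

```python
# Sketch: compute the 2D power spectral density (2D-PSD) of a near-field image.
# The mean is removed so the DC term does not dominate; normalization and
# windowing conventions are simplified for illustration.
import numpy as np

def psd2d(image):
    img = image - image.mean()
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(spectrum) ** 2 / img.size
```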
Enhancing the image resolution in a single-pixel sub-THz imaging system based on compressed sensing
NASA Astrophysics Data System (ADS)
Alkus, Umit; Ermeydan, Esra Sengun; Sahin, Asaf Behzat; Cankaya, Ilyas; Altan, Hakan
2018-04-01
Compressed sensing (CS) techniques allow for faster imaging when combined with scan architectures, which typically suffer from low speed. When implemented with a subterahertz (sub-THz) single-detector scan imaging system, this technique provides images whose resolution is limited only by the pixel size of the pattern used to scan the image plane. To overcome this limitation, the image of the target can be oversampled; however, this results in slower imaging rates, especially if it is done in two dimensions across the image plane. We show that by implementing a one-dimensional (1-D) scan of the image plane, a modified approach to CS theory applied with an appropriate reconstruction algorithm allows for successful reconstruction of the reflected, oversampled image of a target placed in a standoff configuration from the source. The experiments are done in a reflection-mode configuration where the operating frequency is 93 GHz and the corresponding wavelength is λ = 3.2 mm. To reconstruct the image with fewer samples, CS theory is applied using masks where the pixel size is 5 mm × 5 mm, and each mask covers an image area of 5 cm × 5 cm, meaning that the basic image is resolved as 10 × 10 pixels. To enhance the resolution, the information between two consecutive pixels is used, and oversampling along 1-D coupled with a modification of the masks in CS theory allowed oversampled images to be reconstructed rapidly in 20 × 20 and 40 × 40 pixel formats. These are then compared using two different reconstruction algorithms, TVAL3 and ℓ1-MAGIC. The performance of these methods is compared for both simulated signals and real signals. It is found that the modified CS theory approach coupled with the TVAL3 reconstruction process, even when scanning along only 1-D, allows for rapid, precise reconstruction of the oversampled target.
NASA Astrophysics Data System (ADS)
Su, Yanfeng; Cai, Zhijian; Liu, Quan; Lu, Yifan; Guo, Peiliang; Shi, Lingyan; Wu, Jianhong
2018-04-01
In this paper, an autostereoscopic three-dimensional (3D) display system based on synthetic hologram reconstruction is proposed and implemented. The system uses a single phase-only spatial light modulator to load the synthetic hologram of the left and right stereo images, and the parallax angle between two reconstructed stereo images is enlarged by a grating to meet the split angle requirement of normal stereoscopic vision. To realize the crosstalk-free autostereoscopic 3D display with high light utilization efficiency, the groove parameters of the grating are specifically designed by the rigorous coupled-wave theory for suppressing the zero-order diffraction, and then the zero-order nulled grating is fabricated by the holographic lithography and the ion beam etching. Furthermore, the diffraction efficiency of the fabricated grating is measured under the illumination of a laser beam with a wavelength of 532 nm. Finally, the experimental verification system for the proposed autostereoscopic 3D display is presented. The experimental results prove that the proposed system is able to generate stereoscopic 3D images with good performances.
Fast Fourier single-pixel imaging via binary illumination.
Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang
2017-09-20
Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
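A minimal sketch of the binarization idea: generate a grayscale Fourier basis pattern, upsample it, and apply Floyd-Steinberg error-diffusion dithering so a DMD can display it as a binary mask. The pattern size, spatial frequency, phase, and upsampling factor are example values, not the paper's settings.

```python
# Sketch: binarize a grayscale Fourier basis pattern by upsampling followed by
# Floyd-Steinberg error-diffusion dithering, yielding a DMD-compatible mask.
import numpy as np

def fourier_pattern(n, fx, fy, phase):
    """Grayscale Fourier basis pattern in [0, 1] with spatial frequency (fx, fy)."""
    y, x = np.mgrid[0:n, 0:n] / n
    return 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x + fy * y) + phase)

def floyd_steinberg(img):
    """Error-diffusion dithering of an image in [0, 1] to binary {0, 1}."""
    out = img.astype(float).copy()
    h, w = out.shape
    for r in range(h):
        for c in range(w):
            old = out[r, c]
            new = 1.0 if old >= 0.5 else 0.0
            out[r, c] = new
            err = old - new
            if c + 1 < w:               out[r, c + 1]     += err * 7 / 16
            if r + 1 < h and c > 0:     out[r + 1, c - 1] += err * 3 / 16
            if r + 1 < h:               out[r + 1, c]     += err * 5 / 16
            if r + 1 < h and c + 1 < w: out[r + 1, c + 1] += err * 1 / 16
    return out.astype(np.uint8)

pattern = np.kron(fourier_pattern(64, 3, 5, 0.0), np.ones((4, 4)))  # 4x upsample
binary = floyd_steinberg(pattern)
```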
Stereo imaging with spaceborne radars
NASA Technical Reports Server (NTRS)
Leberl, F.; Kobrick, M.
1983-01-01
Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space obtained by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of a surface from two sets of monocular image measurements are the topic of stereology.
Lee, SangYun; Kim, Kyoohyun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun
2015-01-01
We present optical measurements of the morphology and refractive indexes (RIs) of human downy arm hairs using three-dimensional (3-D) quantitative phase imaging techniques. 3-D RI tomograms and high-resolution two-dimensional synthetic aperture images of individual downy arm hairs were measured using a Mach–Zehnder laser interferometric microscope equipped with a two-axis galvanometer mirror. From the measured quantitative images, the RIs and morphological parameters of downy hairs were noninvasively quantified, including the mean RI, volume, cylinder, and effective radius of individual hairs. In addition, the effects of hydrogen peroxide on individual downy hairs were investigated.
Three-Dimensional Cataract Crystalline Lens Imaging With Swept-Source Optical Coherence Tomography.
de Castro, Alberto; Benito, Antonio; Manzanera, Silvestre; Mompeán, Juan; Cañizares, Belén; Martínez, David; Marín, Jose María; Grulkowski, Ireneusz; Artal, Pablo
2018-02-01
To image, describe, and characterize different features visible in the crystalline lens of older adults with and without cataract when imaged three-dimensionally with a swept-source optical coherence tomography (SS-OCT) system. We used a new SS-OCT laboratory prototype designed to enhance the visualization of the crystalline lens and imaged the entire anterior segment of both eyes in two groups of participants: patients scheduled to undergo cataract surgery, n = 17, age range 36 to 91 years old, and volunteers without visual complaints, n = 14, age range 20 to 81 years old. Pre-cataract surgery patients were also clinically graded according to the Lens Opacification Classification System III. The three-dimensional location and shape of the visible opacities were compared with the clinical grading. Hypo- and hyperreflective features were visible in the lenses of all pre-cataract surgery patients and in some of the older adults in the volunteer group. When the clinical examination revealed cortical or subcapsular cataracts, hyperreflective features were visible either in the cortex parallel to the surfaces of the lens or in the posterior pole. Other types of opacities that appeared as hyporeflective localized features were identified in the cortex of the lens. The OCT signal in the nucleus of the crystalline lens correlated with the nuclear cataract clinical grade. A dedicated OCT is a useful tool to study in vivo the subtle opacities in the cataractous crystalline lens, revealing their position and size three-dimensionally. The use of these images allows more detailed information to be obtained on the age-related changes leading to cataract.
Holtrop, Joseph L.; Sutton, Bradley P.
2016-01-01
A diffusion weighted imaging (DWI) approach that is signal-to-noise ratio (SNR) efficient and can be applied to achieve sub-mm resolutions on clinical 3 T systems was developed. The sequence combined a multislab, multishot pulsed gradient spin echo diffusion scheme with spiral readouts for imaging data and navigators. Long data readouts were used to keep the number of shots, and hence total imaging time, for the three-dimensional acquisition short. Image quality was maintained by incorporating a field-inhomogeneity-corrected image reconstruction to remove distortions associated with long data readouts. Additionally, multiple shots were required for the high-resolution images, necessitating motion-induced phase correction through the use of efficiently integrated navigator data. The proposed approach is compared with two-dimensional (2-D) acquisitions that use either a spiral or a typical echo-planar imaging (EPI) acquisition to demonstrate the improved SNR efficiency. The proposed technique provided 71% higher SNR efficiency than the standard 2-D EPI approach. The adaptability of the technique to achieve high spatial resolutions is demonstrated by acquiring diffusion tensor imaging data sets with isotropic resolutions of 1.25 and 0.8 mm. The proposed approach allows for SNR-efficient sub-mm acquisitions of DWI data on clinical 3 T systems.
Riffel, Philipp; Michaely, Henrik J; Morelli, John N; Pfeuffer, Josef; Attenberger, Ulrike I; Schoenberg, Stefan O; Haneder, Stefan
2014-01-01
Implementation of DWI in the abdomen is challenging due to artifacts, particularly those arising from differences in tissue susceptibility. Two-dimensional, spatially-selective radiofrequency (RF) excitation pulses for single-shot echo-planar imaging (EPI) combined with a reduction in the FOV in the phase-encoding direction (i.e. zooming) leads to a decreased number of k-space acquisition lines, significantly shortening the EPI echo train and potentially reducing susceptibility artifacts. To assess the feasibility and image quality of a zoomed diffusion-weighted EPI (z-EPI) sequence in MR imaging of the pancreas. The approach is compared to conventional single-shot EPI (c-EPI). Twenty-three patients who had undergone an MRI study of the abdomen were included in this retrospective study. Examinations were performed on a 3T whole-body MR system (Magnetom Skyra, Siemens) equipped with a two-channel fully dynamic parallel transmit array (TimTX TrueShape, Siemens). The acquired sequences consisted of a conventional EPI DWI of the abdomen and a zoomed EPI DWI of the pancreas. For z-EPI, the standard sinc excitation was replaced with a two-dimensional spatially-selective RF pulse using an echo-planar transmit trajectory. Images were evaluated with regard to image blur, respiratory motion artifacts, diagnostic confidence, delineation of the pancreas, and overall scan preference. Additionally, ADC values of the pancreatic head, body, and tail were calculated and compared between sequences. The pancreas was better delineated in every case (23/23) with z-EPI versus c-EPI. In every case (23/23), both readers preferred z-EPI overall to c-EPI. With z-EPI there was statistically significantly less image blur (p<0.0001) and respiratory motion artifact compared to c-EPI (p<0.0001). Diagnostic confidence was statistically significantly better with z-EPI (p<0.0001). No statistically significant differences in calculated ADC values were observed between the two sequences. Zoomed diffusion-weighted EPI leads to substantial image quality improvements with reduction of susceptibility artifacts in pancreatic DWI.
NASA Technical Reports Server (NTRS)
Chamberlain, F. R. (Inventor)
1980-01-01
A system for generating, within a single frame of photographic film, a quadrified image including images of angularly (including orthogonally) related fields of view of a near field three dimensional object is described. It is characterized by three subsystems each of which includes a plurality of reflective surfaces for imaging a different field of view of the object at a different quadrant of the quadrified image. All of the subsystems have identical path lengths to the object photographed.
Sadleir, R J; Zhang, S U; Tucker, A S; Oh, Sungho
2008-08-01
Electrical impedance tomography (EIT) is particularly well-suited to applications where its portability, rapid acquisition speed and sensitivity give it a practical advantage over other monitoring or imaging systems. An EIT system's patient interface can potentially be adapted to match the target environment, and thereby increase its utility. It may thus be appropriate to use different electrode positions from those conventionally used in EIT in these cases. One application that may require this is the use of EIT on emergency medicine patients; in particular those who have suffered blunt abdominal trauma. In patients who have suffered major trauma, it is desirable to minimize the risk of spinal cord injury by avoiding lifting them. To adapt EIT to this requirement, we devised and evaluated a new electrode topology (the 'hemiarray') which comprises a set of eight electrodes placed only on the subject's anterior surface. Images were obtained using a two-dimensional sensitivity matrix and weighted singular value decomposition reconstruction. The hemiarray method's ability to quantify bleeding was evaluated by comparing its performance with conventional 2D reconstruction methods using data gathered from a saline phantom. We found that without applying corrections to reconstructed images it was possible to estimate blood volume in a two-dimensional hemiarray case with an uncertainty of around 27 ml. In an approximately 3D hemiarray case, volume prediction was possible with a maximum uncertainty of around 38 ml in the centre of the electrode plane. After application of a QI normalizing filter, average uncertainties in a two-dimensional hemiarray case were reduced to about 15 ml. Uncertainties in the approximate 3D case were reduced to about 30 ml.
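The hemiarray images above are reconstructed with a weighted singular value decomposition of a two-dimensional sensitivity matrix. The sketch below shows only the generic truncated-SVD step, with a random stand-in sensitivity matrix and assumed mesh and measurement sizes; the paper's specific weighting and QI normalizing filter are not reproduced.

```python
# Sketch (illustrative only): difference-EIT reconstruction via a truncated-SVD
# pseudo-inverse of a precomputed sensitivity matrix S.
import numpy as np

def tsvd_reconstruct(S, dv, n_modes=20):
    """Map a boundary-voltage change dv to a conductivity-change image."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < n_modes, 1.0 / s, 0.0)  # drop small singular values
    return Vt.T @ (s_inv * (U.T @ dv))

rng = np.random.default_rng(1)
n_meas, n_pix = 40, 256                 # assumed: 8-electrode hemiarray, 16 x 16 pixel mesh
S = rng.standard_normal((n_meas, n_pix))
true_change = np.zeros(n_pix); true_change[100:110] = 1.0     # simulated bleed region
dv = S @ true_change + 0.01 * rng.standard_normal(n_meas)
img = tsvd_reconstruct(S, dv, n_modes=25)
print("peak pixel index:", int(np.argmax(img)))               # should fall inside 100..109
```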
Radiograph and passive data analysis using mixed variable optimization
Temple, Brian A.; Armstrong, Jerawan C.; Buescher, Kevin L.; Favorite, Jeffrey A.
2015-06-02
Disclosed herein are representative embodiments of methods, apparatus, and systems for performing radiography analysis. For example, certain embodiments perform radiographic analysis using mixed variable computation techniques. One exemplary system comprises a radiation source, a two-dimensional detector for detecting radiation transmitted through an object between the radiation source and detector, and a computer. In this embodiment, the computer is configured to input the radiographic image data from the two-dimensional detector and to determine one or more materials that form the object by using an iterative analysis technique that selects the one or more materials from hierarchically arranged solution spaces of discrete material possibilities and selects the layer interfaces from the optimization of the continuous interface data.
Nicholson, C; Tao, L
1993-12-01
This paper describes the theory of an integrative optical imaging system and its application to the analysis of the diffusion of 3-, 10-, 40-, and 70-kDa fluorescent dextran molecules in agarose gel and brain extracellular microenvironment. The method uses a precisely defined source of fluorescent molecules pressure ejected from a micropipette, and a detailed theory of the intensity contributions from out-of-focus molecules in a three-dimensional medium to a two-dimensional image. Dextrans tagged with either tetramethylrhodamine or Texas Red were ejected into 0.3% agarose gel or rat cortical slices maintained in a perfused chamber at 34 °C and imaged using a compound epifluorescent microscope with a 10× water-immersion objective. About 20 images were taken at 2-10-s intervals, recorded with a cooled CCD camera, then transferred to a 486 PC for quantitative analysis. The diffusion coefficient in agarose gel, D, and the apparent diffusion coefficient, D*, in brain tissue were determined by fitting an integral expression relating the measured two-dimensional image intensity to the theoretical three-dimensional dextran concentration. The measurements in dilute agarose gel provided a reference value of D and validated the method. Values of the tortuosity, λ = (D/D*)^(1/2), for the 3- and 10-kDa dextrans were 1.70 and 1.63, respectively, which were consistent with previous values derived from tetramethylammonium measurements in cortex. Tortuosities for the 40- and 70-kDa dextrans had significantly larger values of 2.16 and 2.25, respectively. This suggests that the extracellular space may have local constrictions that hinder the diffusion of molecules above a critical size that lies in the range of many neurotrophic compounds.
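The paper fits an integral expression linking the measured 2-D image intensity to the 3-D dextran concentration. As a much-simplified illustration of the same workflow, the sketch below fits the growth of a 2-D Gaussian profile's variance, sigma^2(t) = sigma0^2 + 4*D*t, and then forms the tortuosity λ = (D/D*)^(1/2). All numbers are invented for illustration and the 2-D Gaussian model is a stand-in, not the authors' full theory.

```python
# Simplified diffusion-fitting sketch: estimate D from the slope of profile
# variance versus time, then compute the tortuosity lambda = sqrt(D / D_star).
import numpy as np

def estimate_D(times, variances):
    """Least-squares slope of variance vs time; D = slope / 4 for 2-D diffusion."""
    slope, _intercept = np.polyfit(times, variances, 1)
    return slope / 4.0

t = np.array([2.0, 4.0, 6.0, 8.0, 10.0])              # s, assumed imaging times
var_free = 1e-7 + 4 * 2.3e-6 * t                       # cm^2, free medium (D ~ 2.3e-6 cm^2/s)
var_brain = 1e-7 + 4 * 0.8e-6 * t                      # cm^2, brain tissue (smaller D*)
D = estimate_D(t, var_free)
D_star = estimate_D(t, var_brain)
print("tortuosity lambda =", round(float(np.sqrt(D / D_star)), 2))   # ~1.7
```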
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
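The displacement algorithm itself is not detailed in the abstract; the following is a generic FFT cross-correlation sketch of how an integer-pixel 2-D displacement between two frames from the CMOS array could be estimated, with a synthetic image standing in for real data.

```python
# Generic sketch: estimate the in-plane shift between two frames by locating the
# peak of their FFT-based cross-correlation (integer-pixel precision).
import numpy as np

def displacement(frame_new, frame_ref):
    """Return (dx, dy) such that frame_new is frame_ref shifted by dy rows, dx cols."""
    f0 = np.fft.fft2(frame_new - frame_new.mean())
    f1 = np.fft.fft2(frame_ref - frame_ref.mean())
    corr = np.fft.ifft2(f0 * np.conj(f1)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_ref.shape
    if dy > h // 2: dy -= h                 # wrap shifts into a signed range
    if dx > w // 2: dx -= w
    return int(dx), int(dy)

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
moved = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)   # known shift: +3 rows, -5 columns
print(displacement(moved, ref))                         # prints (-5, 3)
```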
Ma, Hui-li; Jiang, Qiao; Han, Siyuan; Wu, Yan; Cui Tomshine, Jin; Wang, Dongliang; Gan, Yaling; Zou, Guozhang; Liang, Xing-Jie
2012-01-01
We present a flexible and highly reproducible method using three-dimensional (3D) multicellular tumor spheroids to quantify chemotherapeutic and nanoparticle penetration properties in vitro. We generated HeLa cell-derived spheroids using the liquid overlay method. To properly characterize HeLa spheroids, scanning electron microscopy, transmission electron microscopy, and multiphoton microscopy were used to obtain high-resolution 3D images of HeLa spheroids. Next, pairing high-resolution optical characterization techniques with flow cytometry, we quantitatively compared the penetration of doxorubicin, quantum dots, and synthetic micelles into 3D HeLa spheroids versus HeLa cells grown in a traditional two-dimensional culturing system. Our data revealed that 3D cultured HeLa cells acquired several clinically relevant morphologic and cellular characteristics (such as resistance to chemotherapeutics) often found in human solid tumors. These characteristics, however, could not be captured using conventional two-dimensional cell culture techniques. This study demonstrated the remarkable versatility of HeLa spheroid 3D imaging. In addition, our results revealed the capability of HeLa spheroids to function as a screening tool for nanoparticles or synthetic micelles that, due to their inherent size, charge, and hydrophobicity, can penetrate into solid tumors and act as delivery vehicles for chemotherapeutics. The development of this image-based, reproducible, and quantifiable in vitro HeLa spheroid screening tool will greatly aid future exploration of chemotherapeutics and nanoparticle delivery into solid tumors.
Producing a Linear Laser System for 3D Modeling of Small Objects
NASA Astrophysics Data System (ADS)
Amini, A. Sh.; Mozaffar, M. H.
2012-07-01
Today, three-dimensional modeling of objects is considered in many applications such as documentation of ancient heritage, quality control, reverse engineering and animation. In this regard, there are a variety of methods for producing three-dimensional models. In this paper, a 3D modeling system is developed based on a photogrammetric method using image processing and laser line extraction from images. In this method the laser beam profile is projected onto the body of the object and, by acquiring video images and extracting the laser line from the frames, three-dimensional coordinates of the object can be obtained. First, the design and implementation of the hardware, including the camera and laser systems, was conducted. Afterwards, the system was calibrated. Finally, the software of the system was implemented for three-dimensional data extraction. The system was tested by modeling a number of objects. The results showed that the system can provide benefits such as low cost, appropriate speed and acceptable accuracy in 3D modeling of objects.
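A hedged sketch of the laser-line extraction step: for each image column, take the brightest row in the red channel as the laser stripe position. The threshold, channel choice, and synthetic frame are assumptions for illustration, not the authors' implementation.

```python
# Sketch: extract a laser stripe from a video frame as one (column, row) pixel
# per image column, using the peak of the red channel.
import numpy as np

def extract_laser_line(frame_rgb, threshold=50):
    """Return (col, row) pixel coordinates of the laser stripe, one per valid column."""
    red = frame_rgb[:, :, 0].astype(float)
    rows = np.argmax(red, axis=0)                           # brightest row in each column
    valid = red[rows, np.arange(red.shape[1])] > threshold  # reject columns without a stripe
    cols = np.arange(red.shape[1])[valid]
    return np.stack([cols, rows[valid]], axis=1)

# synthetic frame: a bright diagonal stripe standing in for the laser profile
frame = np.zeros((120, 160, 3), dtype=np.uint8)
for c in range(160):
    frame[30 + c // 4, c, 0] = 255
print(extract_laser_line(frame)[:5])
```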
Bagan, Patrick; De Dominicis, Florence; Hernigou, Jacques; Dakhil, Bassel; Zaimi, Rym; Pricopi, Ciprian; Le Pimpec Barthes, Françoise; Berna, Pascal
2015-06-01
Common video systems for video-assisted thoracic surgery (VATS) provide the surgeon with a two-dimensional (2D) image. This study aimed to evaluate the performance of a new three-dimensional high-definition (3D-HD) system in comparison with a two-dimensional high-definition (2D-HD) system when conducting a complete thoracoscopic lobectomy (CTL). This multi-institutional comparative study trialled two video systems, 2D-HD and 3D-HD, used to conduct the same type of CTL. The inclusion criteria were T1N0M0 non-small-cell lung carcinoma (NSCLC) in the left lower lobe suitable for thoracoscopic resection. The CTL was performed by the same surgeon using either a 3D-HD or 2D-HD system. Eighteen patients with NSCLC were included in the study between January and December 2013: 14 males, 4 females, with a median age of 65.6 years (range: 49-81). The patients were randomized before inclusion into two groups, to undergo surgery with the use of a 2D-HD or 3D-HD system. We compared operating time, drainage duration, hospital stay and the N upstaging rate from the definitive histology. The use of the 3D-HD system significantly reduced the surgical time (by 17%). However, chest-tube drainage, hospital stay, the number of lymph-node stations and upstaging were similar in both groups. The main finding was that the 3D-HD system significantly reduced the surgical time needed to complete the lobectomy. Thus, future integration of 3D-HD systems should improve thoracoscopic surgery and enable more complex resections to be performed. It will also help advance the field of endoscopically assisted surgery.
Research and applications of infrared thermal imaging systems suitable for developing countries
NASA Astrophysics Data System (ADS)
Weili, Zhang; Danyu, Cai
1986-01-01
It is a common situation in most developing countries that the utilization ratio of energy sources is low, the in-service reliability of equipment is poor, the cost of installation and maintenance is high, the losses due to conflagration are heavy, and so on. Therefore, these countries are in urgent need of infrared thermal imaging techniques to improve energy saving, equipment diagnosis and fire detection. However, the infrared thermal imaging systems on the world market so far are not well suited to their use. This paper summarizes research on two-dimensional real-time infrared thermal imaging systems based on electron beam scanning and pyroelectric detection, as well as their industrial applications in China.
NASA Astrophysics Data System (ADS)
Perillo, Evan P.; Liu, Yen-Liang; Huynh, Khang; Liu, Cong; Chou, Chao-Kai; Hung, Mien-Chie; Yeh, Hsin-Chih; Dunn, Andrew K.
2015-07-01
Molecular trafficking within cells, tissues and engineered three-dimensional multicellular models is critical to the understanding of the development and treatment of various diseases including cancer. However, current tracking methods are either confined to two dimensions or limited to an interrogation depth of ~15 μm. Here we present a three-dimensional tracking method capable of quantifying rapid molecular transport dynamics in highly scattering environments at depths up to 200 μm. The system has a response time of 1 ms with a temporal resolution down to 50 μs in high signal-to-noise conditions, and a spatial localization precision as good as 35 nm. Built on spatiotemporally multiplexed two-photon excitation, this approach requires only one detector for three-dimensional particle tracking and allows for two-photon, multicolour imaging. Here we demonstrate three-dimensional tracking of epidermal growth factor receptor complexes at a depth of ~100 μm in tumour spheroids.
Wheat, J S; Choppin, S; Goyal, A
2014-06-01
Three-dimensional surface imaging technologies have been used in the planning and evaluation of breast reconstructive and cosmetic surgery. The aim of this study was to develop a 3D surface imaging system based on the Microsoft Kinect and to assess the accuracy and repeatability with which the system could image the breast. A system comprising two Kinects, calibrated to provide a complete 3D image of a mannequin, was developed. Digital measurements of Euclidean and surface distances between landmarks showed acceptable agreement with manual measurements. The mean differences for Euclidean and surface distances were 1.9 mm and 2.2 mm, respectively. The system also demonstrated good intra- and inter-rater reliability (ICCs > 0.999). The Kinect-based 3D surface imaging system offers a low-cost, readily accessible alternative to more expensive, commercially available systems, which have had limited clinical use.
Statistical Signal Models and Algorithms for Image Analysis
1984-10-25
In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction
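As an illustration of the linear-prediction viewpoint mentioned in this report's summary, the sketch below synthesizes a texture from a small causal 2-D autoregressive model and computes its linear-prediction residual, the quantity such detection and segmentation tests are typically built on. The model order and coefficients are arbitrary choices, not taken from the report.

```python
# Sketch: a causal 2-D autoregressive (AR) texture model and its linear-prediction residual.
import numpy as np

def ar2d_residual(img, a_left=0.5, a_up=0.45, a_diag=-0.2):
    """Prediction error of x[i,j] from its left, upper and upper-left neighbours."""
    pred = a_left * img[1:, :-1] + a_up * img[:-1, 1:] + a_diag * img[:-1, :-1]
    return img[1:, 1:] - pred

rng = np.random.default_rng(3)
h, w = 64, 64
noise = 0.1 * rng.standard_normal((h, w))
tex = np.zeros((h, w))
for i in range(1, h):                       # synthesize texture from the same AR model
    for j in range(1, w):
        tex[i, j] = (0.5 * tex[i, j - 1] + 0.45 * tex[i - 1, j]
                     - 0.2 * tex[i - 1, j - 1] + noise[i, j])
# residual standard deviation should roughly match the driving-noise level
print(round(float(ar2d_residual(tex).std()), 3), round(float(noise.std()), 3))
```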
ERIC Educational Resources Information Center
Claxton, Laura J.
2011-01-01
Previous studies have found that preschoolers are confused about the relationship between two-dimensional (2D) symbols and their referents. Preschoolers report that 2D images (e.g. televised images and photographs) share some of the characteristics of the objects they are representing. A novel Comparison Task was created to test what might account…
Two-dimensional singlet oxygen imaging with its near-infrared luminescence during photosensitization
Hu, Bolin; Zeng, Nan; Liu, Zhiyi; Ji, Yanhong; Xie, Weidong; Peng, Qing; Zhou, Yong; He, Yonghong; Ma, Hui
2011-01-01
Photodynamic therapy (PDT) is a promising cancer treatment that involves activation of a photosensitizer by visible light to create singlet oxygen. This highly reactive oxygen species is believed to induce cell death and tissue destruction in PDT. Our approach used a near-infrared area CCD with high quantum efficiency to detect singlet oxygen by its 1270-nm luminescence. Two-dimensional singlet oxygen images based on this near-infrared luminescence during photosensitization could be obtained with a CCD integration time of 1 s, without scanning. Thus this system can produce singlet oxygen luminescence images faster and achieve more accurate measurements in comparison to raster-scanning methods. The experimental data show a linear relationship between the singlet oxygen luminescence intensity and sample concentration. This method provides a detection sensitivity of 0.0181 μg/ml (benzoporphyrin derivative monoacid ring A dissolved in ethanol) and a spatial resolution better than 50 μm. A pilot study was conducted on a total of six female Kunming mice. The results from this study demonstrate the system's potential for in vivo measurements. Further experiments were carried out on two tumor-bearing nude mice. Singlet oxygen luminescence images were acquired from a tumor-bearing nude mouse after intravenous injection of BPD-MA, and the experimental results showed real-time singlet oxygen signal depletion as a function of the light exposure.
Optimisation and evaluation of hyperspectral imaging system using machine learning algorithm
NASA Astrophysics Data System (ADS)
Suthar, Gajendra; Huang, Jung Y.; Chidangil, Santhosh
2017-10-01
Hyperspectral imaging (HSI), also called imaging spectrometry, originated in remote sensing. Hyperspectral imaging is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. The spatially resolved spectral imaging obtained by HSI provides diagnostic information about an object's physiology, morphology, and composition. The present work involves testing and evaluating the performance of a hyperspectral imaging system. The methodology involved manually taking reflectance measurements of the objects over many images, or scans, of each object. The objects used for the evaluation of the system were cabbage and tomato. The data were then converted to the required format and analysed using a machine learning algorithm. The machine learning algorithms applied were able to distinguish between the objects present in the hypercube obtained by the scan. It was concluded from the results that the system was working as expected, as shown by the distinct spectra obtained using the machine learning algorithm.
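A hedged sketch of the described workflow: flatten a hypercube into per-pixel spectra and train a simple classifier to separate two materials. The synthetic spectra, cube size, and the use of scikit-learn logistic regression are assumptions for illustration; the abstract does not specify which algorithm was used.

```python
# Sketch: per-pixel classification of a hypercube (rows x cols x bands).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
rows, cols, bands = 20, 20, 50
wav = np.linspace(0.0, 1.0, bands)
spec_a = np.exp(-((wav - 0.3) ** 2) / 0.01)          # stand-in reflectance spectrum, material A
spec_b = np.exp(-((wav - 0.7) ** 2) / 0.01)          # stand-in reflectance spectrum, material B
labels = (rng.random((rows, cols)) > 0.5).astype(int)
cube = np.where(labels[..., None] == 0, spec_a, spec_b)
cube = cube + 0.05 * rng.standard_normal((rows, cols, bands))

X = cube.reshape(-1, bands)                           # one spectrum per pixel
y = labels.ravel()
clf = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])   # train on half the pixels
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```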
A color-coded vision scheme for robotics
NASA Technical Reports Server (NTRS)
Johnson, Kelley Tina
1991-01-01
Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
Zakariaee, Seyed Salman; Mesbahi, Asghar; Keshtkar, Ahmad; Azimirad, Vahid
2014-01-01
Polymer gel dosimeters are the only accurate three-dimensional (3D) dosimeters that can measure the absorbed dose distribution in a full 3D setting. Gel dosimetry using optical computed tomography (OCT) has been promoted in several studies. In the current study, we designed and constructed a prototype OCT system for gel dosimetry. First, the electrical system for optical scanning of the gel container using a helium-neon laser and a photocell was designed and constructed. Then, the mechanical part providing the rotational and translational motions was designed and step motors were assembled to it. The data coming from the photocell were grabbed by the home-built interface and sent to a personal computer. Data processing was carried out using MATLAB software. To calibrate the system and verify its functionality, different objects were designed and scanned. Furthermore, the spatial and contrast resolution of the system was determined. The system was able to scan a gel dosimeter container with a diameter of up to 11 cm inside the water phantom. The standard deviation of the pixels within the water flask image was considered as the criterion for image uniformity. The uniformity of the system was about ±0.05%. The spatial resolution of the system was approximately 1 mm and the contrast resolution was about 0.2%. Our primary results showed that this system is able to obtain two-dimensional, cross-sectional images of polymer gel samples.
A 3D surface imaging system for assessing human obesity
NASA Astrophysics Data System (ADS)
Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.
2009-08-01
The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong
2016-08-01
Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by the measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the cycle shift operation controlled by a hyper-chaotic system. Cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies the keys distribution simultaneously as a nonlinear encryption system. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.
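A simplified sketch of the two-stage scheme described above: measure the image along both directions with two random matrices (the 2D compressive-sensing step), then cycle-shift rows and columns using a key-driven chaotic sequence. A logistic map stands in for the paper's hyper-chaotic system, and all sizes and keys are illustrative assumptions.

```python
# Sketch: 2-D compressive-sensing measurement followed by chaos-driven cycle shifts.
import numpy as np

def logistic_seq(x0, n, r=3.99):
    """Chaotic key stream from a logistic map (stand-in for a hyper-chaotic system)."""
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1 - x0)
        xs[i] = x0
    return xs

rng = np.random.default_rng(5)
img = rng.random((64, 64))                      # plaintext image
m = 32                                          # compressed size in each direction
P1 = rng.standard_normal((m, 64))               # measurement matrix, row direction
P2 = rng.standard_normal((64, m))               # measurement matrix, column direction
measured = P1 @ img @ P2                        # compression + first-stage encryption

shifts = (logistic_seq(0.37, 2 * m) * m).astype(int)    # key-dependent shift amounts
cipher = measured.copy()
for i in range(m):
    cipher[i, :] = np.roll(cipher[i, :], shifts[i])         # cycle-shift each row
for j in range(m):
    cipher[:, j] = np.roll(cipher[:, j], shifts[m + j])     # then each column
print(cipher.shape)                              # (32, 32): reduced transmission volume
```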
NASA Astrophysics Data System (ADS)
Enomoto, Ayano; Hirata, Hiroshi
2014-02-01
This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.
Shin, Kang-Jae; Gil, Young-Chun; Lee, Shin-Hyo; Kim, Jeong-Nam; Yoo, Ja-Young; Kim, Soon-Heum; Choi, Hyun-Gon; Shin, Hyun Jin; Koh, Ki-Seok; Song, Wu-Chul
2017-01-01
The aim of the present study was to assess normal eyeball protrusion from the orbital rim using two- and three-dimensional images and demonstrate the better suitability of CT images for assessment of exophthalmos. The facial computed tomographic (CT) images of Korean adults were acquired in sagittal and transverse views. The CT images were used in reconstructing three-dimensional volume of faces using computer software. The protrusion distances from orbital rims and the diameters of eyeballs were measured in the two views of the CT image and three-dimensional volume of the face. Relative exophthalmometry was calculated by the difference in protrusion distance between the right and left sides. The eyeball protrusion was 4.9 and 12.5 mm in sagittal and transverse views, respectively. The protrusion distances were 2.9 mm in the three-dimensional volume of face. There were no significant differences between right and left sides in the degree of protrusion, and the difference was within 2 mm in more than 90% of the subjects. The results of the present study will provide reliable criteria for precise diagnosis and postoperative monitoring using CT imaging of diseases such as thyroid-associated ophthalmopathy and orbital tumors.
Ran, Hong; Zhang, Ping-Yang; Fang, Ling-Ling; Ma, Xiao-Wu; Wu, Wen-Fang; Feng, Wang-Fei
2012-07-01
To evaluate whether myocardial strain under adenosine stress, calculated from two-dimensional echocardiography by automatic frame-by-frame tracking of natural acoustic markers, enables objective assessment of myocardial viability in the clinic. Two-dimensional echocardiography and two-dimensional speckle tracking imaging (2D STI) were performed at rest and again after adenosine was infused at 140 μg/kg/min over a period of 6 minutes in 36 stable patients with previous myocardial infarction. Radionuclide myocardial perfusion/metabolic imaging, which served as the "gold standard" to define myocardial viability, was then performed in all patients within 1 day. Two-dimensional speckle tracking images were acquired at rest and after adenosine administration. An automatic frame-by-frame tracking system of natural acoustic echocardiographic markers was used to calculate 2D strain variables including peak-systolic circumferential strain (CS(peak-sys)), radial strain (RS(peak-sys)), and longitudinal strain (LS(peak-sys)). Segments with abnormal motion on visual assessment of two-dimensional echocardiography were selected for further study. As a result, 126 regions were viable whereas 194 were nonviable among 320 abnormal-motion segments in 36 patients according to radionuclide imaging. At rest, there were no significant differences in 2D strain between the viable and nonviable myocardium. After adenosine administration (140 μg/kg/min), CS(peak-sys) of the viable myocardium changed little, while RS(peak-sys) and LS(peak-sys) increased significantly compared with those at rest. In the nonviable group, CS(peak-sys), RS(peak-sys), and LS(peak-sys) showed no significant changes during adenosine administration. After adenosine administration, RS(peak-sys) and LS(peak-sys) in the viable group increased significantly compared with the nonviable group. The strain data obtained were highly reproducible, with small intraobserver and interobserver variabilities. A change in radial strain of more than 9.5% had a sensitivity of 83.9% and a specificity of 81.4% for viability, whereas a change in longitudinal strain of more than 14.6% gave a sensitivity of 86.7% and a specificity of 90.2%. 2D STI combined with adenosine stress echocardiography could provide a new and reliable method to identify myocardial viability.
Electronic method for autofluorography of macromolecules on two-D matrices
Davidson, Jackson B.; Case, Arthur L.
1983-01-01
A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.
Spatial Modulation Improves Performance in CTIS
NASA Technical Reports Server (NTRS)
Bearman, Gregory H.; Wilson, Daniel W.; Johnson, William R.
2009-01-01
Suitably formulated spatial modulation of a scene imaged by a computed-tomography imaging spectrometer (CTIS) has been found to be useful as a means of improving the imaging performance of the CTIS. As used here, "spatial modulation" signifies the imposition of additional, artificial structure on a scene from within the CTIS optics. The basic principles of a CTIS were described in "Improvements in Computed-Tomography Imaging Spectrometry" (NPO-20561), NASA Tech Briefs, Vol. 24, No. 12 (December 2000), page 38, and "All-Reflective Computed-Tomography Imaging Spectrometers" (NPO-20836), NASA Tech Briefs, Vol. 26, No. 11 (November 2002), page 7a. To recapitulate: A CTIS offers capabilities for imaging a scene with spatial, spectral, and temporal resolution. The spectral disperser in a CTIS is a two-dimensional diffraction grating. It is positioned between two relay lenses (or on one of two relay mirrors) in a video imaging system. If the disperser were removed, the system would produce ordinary images of the scene in its field of view. In the presence of the grating, the image on the focal plane of the system contains both spectral and spatial information because the multiple diffraction orders of the grating give rise to multiple, spectrally dispersed images of the scene. By use of algorithms adapted from computed tomography, the image on the focal plane can be processed into an image cube: a three-dimensional collection of data on the image intensity as a function of the two spatial dimensions (x and y) in the scene and of wavelength (lambda). Thus, both spectrally and spatially resolved information on the scene at a given instant of time can be obtained, without scanning, from a single snapshot; this is what makes the CTIS such a potentially powerful tool for spatially, spectrally, and temporally resolved imaging. A CTIS performs poorly in imaging some types of scenes, in particular scenes that contain little spatial or spectral variation. The computed spectra of such scenes tend to approximate correct values to within acceptably small errors near the edges of the field of view but to be poor approximations away from the edges. The additional structure imposed on a scene according to the present method enables the CTIS algorithms to reconstruct acceptable approximations of the spectral data throughout the scene.
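As a toy illustration of the computed-tomography step described above, the sketch below recovers a small (x, y, lambda) cube f from a simulated focal-plane image g = H f with a multiplicative (MART/EM-style) iteration. The system matrix H here is a random sparse stand-in, not a model of real CTIS diffraction orders, and the cube dimensions are assumptions.

```python
# Sketch: multiplicative iterative reconstruction of an image cube from a
# single dispersed focal-plane image, given a (toy) system matrix H.
import numpy as np

rng = np.random.default_rng(6)
n_vox, n_pix = 8 * 8 * 4, 400                 # tiny 8x8 scene with 4 spectral bands
H = rng.random((n_pix, n_vox)) * (rng.random((n_pix, n_vox)) > 0.95)  # sparse stand-in projections
f_true = rng.random(n_vox)
g = H @ f_true                                # simulated focal-plane image

f = np.ones(n_vox)                            # non-negative starting cube
col_sums = np.maximum(H.sum(axis=0), 1e-12)
for _ in range(200):
    ratio = g / np.maximum(H @ f, 1e-12)      # measured / predicted detector values
    f *= (H.T @ ratio) / col_sums             # multiplicative update keeps f >= 0
print("correlation with true cube:", round(float(np.corrcoef(f, f_true)[0, 1]), 3))
```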
NASA Astrophysics Data System (ADS)
Wang, X.
2018-04-01
Tourism geological resources are of high value for public appreciation, scientific research and universal education, and need to be protected and rationally utilized. In the past, most remote sensing investigations of tourism geological resources used two-dimensional remote sensing interpretation methods, which made some geological heritages difficult to interpret and led to the omission of some information. The aim of this paper is to assess the value of a method that uses three-dimensional visual remote sensing images to extract information on geological heritages. The Skyline software system is applied to fuse the 0.36 m aerial images and a 5 m interval DEM to establish the digital earth model. Based on the three-dimensional shape, color tone, shadow, texture and other image features, the distribution of tourism geological resources in Shandong Province and the locations of geological heritage sites were obtained, such as geological structures, DaiGu landforms, granite landforms, volcanic landforms, sandy landforms, waterscapes, etc. The results show that remote sensing interpretation using this method is highly recognizable, making the interpretation more accurate and comprehensive.
Composite ultrasound imaging apparatus and method
Morimoto, Alan K.; Bow, Jr., Wallace J.; Strong, David Scott; Dickey, Fred M.
1998-01-01
An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image.
Composite ultrasound imaging apparatus and method
Morimoto, A.K.; Bow, W.J. Jr.; Strong, D.S.; Dickey, F.M.
1998-09-15
An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image. 37 figs.
Method of orthogonally splitting imaging pose measurement
NASA Astrophysics Data System (ADS)
Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong
2018-01-01
In order to meet aviation's and machinery manufacturing's need for pose measurement with high precision, fast speed and a wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. This paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam splitter prism, cylindrical lenses and dual linear CCDs. The two linear CCDs each acquire one-dimensional image coordinate data of the target point, and the two data sets can restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, this paper establishes a nonlinear distortion model to correct distortion. Based on cross-ratio invariance, a polynomial equation is established and solved by the least-squares fitting method. After completing distortion correction, this paper establishes the measurement mathematical model of the vision sensor and determines the intrinsic parameters for calibration. An array of feature points for calibration is built by placing a planar target in different positions several times. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focus distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy pose measurement requirements of high precision, fast speed and wide measurement range.
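A hedged sketch of two steps described above: fitting a polynomial distortion correction for each linear CCD by least squares, and combining the two corrected 1-D readings into a 2-D image coordinate. The polynomial order, distortion coefficients, and calibration data are invented for illustration and are not the paper's calibration procedure.

```python
# Sketch: per-CCD polynomial distortion correction fitted by least squares,
# then recovery of a 2-D image coordinate from the two corrected 1-D readings.
import numpy as np

def fit_distortion(measured, ideal, order=3):
    """Least-squares polynomial mapping measured CCD coordinate -> ideal coordinate."""
    return np.polyfit(measured, ideal, order)

rng = np.random.default_rng(7)
ideal = np.linspace(-10, 10, 40)                         # calibration target positions (mm)
measured_u = ideal + 0.002 * ideal**3 + 0.01 * rng.standard_normal(40)    # CCD 1, distorted
measured_v = ideal - 0.0015 * ideal**3 + 0.01 * rng.standard_normal(40)   # CCD 2, distorted
cu = fit_distortion(measured_u, ideal)
cv = fit_distortion(measured_v, ideal)

def image_point(u, v):
    """Restore the 2-D image coordinate from the two corrected 1-D readings."""
    return float(np.polyval(cu, u)), float(np.polyval(cv, v))

print(image_point(measured_u[5], measured_v[5]), ideal[5])   # corrected point vs. ground truth
```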
Fundamentals of image acquisition and processing in the digital era.
Farman, A G
2003-01-01
To review the historic context for digital imaging in dentistry and to outline the fundamental issues related to digital imaging modalities. Digital dental X-ray images can be achieved by scanning analog film radiographs (secondary capture), with photostimulable phosphors, or using solid-state detectors (e.g. charge-coupled device and complementary metal oxide semiconductor). There are four characteristics that are basic to all digital image detectors; namely, size of active area, signal-to-noise ratio, contrast resolution and spatial resolution. To perceive structure in a radiographic image, there needs to be sufficient difference between contrasting densities. This primarily depends on the differences in the attenuation of the X-ray beam by adjacent tissues. It also depends on the signal received; therefore, contrast tends to increase with increased exposure. Given adequate signal and sufficient differences in radiodensity, contrast will be sufficient to differentiate between adjacent structures, irrespective of the recording modality and processing used. Where contrast is not sufficient, digital images can sometimes be post-processed to disclose details that would otherwise go undetected. For example, cephalogram isodensity mapping can improve soft tissue detail. It is concluded that it could be a further decade or two before three-dimensional digital imaging systems entirely replace two-dimensional analog films. Such systems need not only to produce prettier images, but also to provide a demonstrable evidence-based higher standard of care at a cost that is not economically prohibitive for the practitioner or society, and which allows efficient and effective workflow within the business of dental practice.
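Where subject contrast is adequate but display contrast is not, post-processing such as a window/level stretch can disclose detail; the sketch below is a generic illustration of that operation, with made-up detector values rather than any particular dental imaging system's pipeline.

```python
# Sketch: a simple window/level contrast stretch for display of digital radiographs.
import numpy as np

def window_level(img, level, width):
    """Map [level - width/2, level + width/2] to the full 8-bit display range."""
    lo, hi = level - width / 2.0, level + width / 2.0
    out = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return (255 * out).astype(np.uint8)

rng = np.random.default_rng(8)
raw = rng.normal(2000, 150, size=(64, 64))      # stand-in 12-bit-style detector values
disp = window_level(raw, level=2000, width=600) # narrow window exaggerates subtle differences
print(disp.min(), disp.max(), disp.dtype)
```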
Photoacoustic projection imaging using an all-optical detector array
NASA Astrophysics Data System (ADS)
Bauer-Marschallinger, J.; Felbermayer, K.; Berer, T.
2018-02-01
We present a prototype for all-optical photoacoustic projection imaging. By generating projection images, photoacoustic information of large volumes can be retrieved with less effort compared to common photoacoustic computed tomography where many detectors and/or multiple measurements are required. In our approach, an array of 60 integrating line detectors is used to acquire photoacoustic waves. The line detector array consists of fiber-optic Mach-Zehnder interferometers, distributed on a cylindrical surface. From the measured variation of the optical path lengths of the interferometers, induced by photoacoustic waves, a photoacoustic projection image can be reconstructed. The resulting images represent the projection of the three-dimensional spatial light absorbance within the imaged object onto a two-dimensional plane, perpendicular to the line detector array. The fiber-optic detectors achieve a noise-equivalent pressure of 24 Pascal at a 10 MHz bandwidth. We present the operational principle, the structure of the array, and resulting images. The system can acquire high-resolution projection images of large volumes within a short period of time. Imaging large volumes at high frame rates facilitates monitoring of dynamic processes.
A low-cost tracked C-arm (TC-arm) upgrade system for versatile quantitative intraoperative imaging.
Amiri, Shahram; Wilson, David R; Masri, Bassam A; Anglin, Carolyn
2014-07-01
C-arm fluoroscopy is frequently used in clinical applications as a low-cost and mobile real-time qualitative assessment tool. C-arms, however, are not widely accepted for applications involving quantitative assessments, mainly due to the lack of reliable and low-cost position tracking methods, as well as adequate calibration and registration techniques. The solution suggested in this work is a tracked C-arm (TC-arm) which employs a low-cost sensor tracking module that can be retrofitted to any conventional C-arm for tracking the individual joints of the device. Registration and offline calibration methods were developed that allow accurate tracking of the gantry and determination of the exact intrinsic and extrinsic parameters of the imaging system for any acquired fluoroscopic image. The performance of the system was evaluated in comparison to an Optotrak[Formula: see text] motion tracking system and by a series of experiments on accurately built ball-bearing phantoms. Accuracies of the system were determined for 2D-3D registration, three-dimensional landmark localization, and for generating panoramic stitched views in simulated intraoperative applications. The system was able to track the center point of the gantry with an accuracy of [Formula: see text] mm or better. Accuracies of 2D-3D registrations were [Formula: see text] mm and [Formula: see text]. Three-dimensional landmark localization had an accuracy of [Formula: see text] of the length (or [Formula: see text] mm) on average, depending on whether the landmarks were located along, above, or across the table. The overall accuracies of the two-dimensional measurements conducted on stitched panoramic images of the femur and lumbar spine were 2.5 [Formula: see text] 2.0 % [Formula: see text] and [Formula: see text], respectively. The TC-arm system has the potential to achieve sophisticated quantitative fluoroscopy assessment capabilities using an existing C-arm imaging system. This technology may be useful to improve the quality of orthopedic surgery and interventional radiology.
Space imaging measurement system based on fixed lens and moving detector
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Doshida, Minoru; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu
2006-08-01
We have developed the Space Imaging Measurement System, based on a fixed lens and a fast-moving detector, for the control of an autonomous ground vehicle. Space measurement is the most important task in the development of an autonomous ground vehicle. In this study we move the detector back and forth along the optical axis at a fast rate to measure three-dimensional image data. This system is well suited to an autonomous ground vehicle because it does not send out any optical energy to measure the distance, which keeps it safe. We use a digital camera operating in the visible range, which reduces the cost of three-dimensional image data acquisition compared with an imaging laser system. Many pieces of narrow-field space imaging measurement data can be combined to construct wide-range three-dimensional data, which improves image recognition of the object space. To achieve fast movement of the detector, we built a counter-mass balance into the mechanical crank system of the Space Imaging Measurement System. We also set up a duct to prevent optical noise due to rays not passing through the lens. The object distance is derived from the focus distance, which is related to the best-focused image data. The best-focused image data are selected as the image with the maximum standard deviation among the standard deviations of the series of images.
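A hedged sketch of the depth-from-focus selection described above: among frames captured at different detector positions, pick the one whose intensity standard deviation is largest as the best-focused image. The crude defocus model and the detector travel range are invented for illustration.

```python
# Sketch: pick the best-focused frame from a focal stack by maximum standard deviation.
import numpy as np

def best_focus_index(frames):
    """frames: array of shape (n_positions, h, w). Return index of sharpest frame."""
    return int(np.argmax([f.std() for f in frames]))

rng = np.random.default_rng(9)
sharp = (rng.random((32, 32)) > 0.5).astype(float)           # high-contrast scene
stack = np.array([
    # crude defocus model: blend the scene toward its mean by amount |k - 5| / 5
    sharp * (1 - abs(k - 5) / 5.0) + sharp.mean() * (abs(k - 5) / 5.0)
    for k in range(11)
])
detector_positions_mm = np.linspace(20.0, 30.0, 11)          # assumed detector travel range
print("best focus at", detector_positions_mm[best_focus_index(stack)], "mm")
```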
Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermi Research Alliance; Northern Illinois University
2015-07-15
Proton computed tomography (pCT) offers an alternative to x-ray imaging with potential for three-dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second-generation proton computed tomography system with a goal of demonstrating the feasibility of three-dimensional imaging within clinically realistic imaging times. The second-generation pCT system is comprised of a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. The data acquisition sends the proton scattering information to an offline computing farm. Major innovations of the second-generation pCT project involve an increased data acquisition rate (MHz range) and the development of three-dimensional imaging algorithms. The Fermilab Particle Physics Division and the Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics and detector mounting system.
Global Interior Robot Localisation by a Colour Content Image Retrieval System
NASA Astrophysics Data System (ADS)
Chaari, A.; Lelandais, S.; Montagne, C.; Ahmed, M. Ben
2007-12-01
We propose a new global localisation approach to determine a coarse position of a mobile robot in a structured indoor space using colour-based image retrieval techniques. We use an original method of colour quantisation based on the baker's transformation to extract a two-dimensional colour pallet that combines spatial and vicinity-related information with the colourimetric aspect of the original image. We develop several retrieval approaches leading to a specific similarity measure that integrates the spatial organisation of colours in the pallet. The baker's transformation provides a quantisation of the image into a space where colours that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image, whereas the similarity measure provides partial invariance to translation, small changes in viewpoint, and scale factor. In addition to this study, we developed a hierarchical search module based on the logical classification of images by room. This hierarchical module reduces the indoor search space and improves the performance of our system. Results are then compared with those obtained using colour histograms with several similarity measures. In this paper, we focus on colour-based features to describe indoor images. A finalised system must obviously integrate other types of signatures, such as shape and texture.
NASA Astrophysics Data System (ADS)
Berbeco, Ross I.; Jiang, Steve B.; Sharp, Gregory C.; Chen, George T. Y.; Mostafavi, Hassan; Shirato, Hiroki
2004-01-01
The design of an integrated radiotherapy imaging system (IRIS), consisting of gantry mounted diagnostic (kV) x-ray tubes and fast read-out flat-panel amorphous-silicon detectors, has been studied. The system is meant to be capable of three main functions: radiographs for three-dimensional (3D) patient set-up, cone-beam CT and real-time tumour/marker tracking. The goal of the current study is to determine whether one source/panel pair is sufficient for real-time tumour/marker tracking and, if two are needed, the optimal position of each relative to other components and the isocentre. A single gantry-mounted source/imager pair is certainly capable of the first two of the three functions listed above and may also be useful for the third, if combined with prior knowledge of the target's trajectory. This would be necessary because only motion in two dimensions is visible with a single imager/source system. However, with previously collected information about the trajectory, the third coordinate may be derived from the other two with sufficient accuracy to facilitate tracking. This deduction of the third coordinate can only be made if the 3D tumour/marker trajectory is consistent from fraction to fraction. The feasibility of tumour tracking with one source/imager pair has been theoretically examined here using measured lung marker trajectory data for seven patients from multiple treatment fractions. The patients' selection criteria include minimum mean amplitudes of the tumour motions greater than 1 cm peak-to-peak. The marker trajectory for each patient was modelled using the first fraction data. Then for the rest of the data, marker positions were derived from the imager projections at various gantry angles and compared with the measured tumour positions. Our results show that, due to the three dimensionality and irregular trajectory characteristics of tumour motion, on a fraction-to-fraction basis, a 'monoscopic' system (single source/imager) is inadequate for consistent real-time tumour tracking, even with prior knowledge. We found that, among the seven patients studied with peak-to-peak marker motion greater than 1 cm, five cases have mean localization errors greater than 2 mm and two have mean errors greater than 3 mm. Because of this uncertainty associated with a monoscopic system, two source/imager pairs are necessary for robust 3D target localization. Dual orthogonal x-ray source/imager pairs mounted on the linac gantry are chosen for the IRIS. We further studied the placement of the x-ray sources/panel based on the geometric specifications of the Varian 21EX Clinac. The best configuration minimizes the localization error while maintaining a large field of view and avoiding collisions with the floor/ceiling or couch.
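As an illustration of the 'prior knowledge' idea examined above, the sketch below fits a linear relation between the two imaged coordinates and the unseen third coordinate from one fraction of synthetic motion data, then applies it to a slightly altered later fraction. The motion traces and amplitudes are invented; the growth of the prediction error only qualitatively mirrors the paper's finding that monoscopic tracking with prior knowledge is unreliable fraction to fraction.

```python
# Sketch: predict an unseen third coordinate from two imaged coordinates using a
# linear model learned on an earlier treatment fraction.
import numpy as np

rng = np.random.default_rng(10)
t = np.linspace(0, 20, 400)
si = 8.0 * np.sin(2 * np.pi * t / 4.0)                 # superior-inferior motion, mm
ap = 3.0 * np.sin(2 * np.pi * t / 4.0 + 0.3)           # anterior-posterior motion, mm
lr = 1.5 * np.sin(2 * np.pi * t / 4.0 + 0.1)           # left-right motion (unseen), mm

A = np.stack([si, ap, np.ones_like(t)], axis=1)
coef, *_ = np.linalg.lstsq(A, lr, rcond=None)          # model built from "fraction 1"

# later fraction: slightly changed period and phases -> prediction error grows
si2 = 8.5 * np.sin(2 * np.pi * t / 4.2)
ap2 = 2.8 * np.sin(2 * np.pi * t / 4.2 + 0.5)
lr2 = 1.5 * np.sin(2 * np.pi * t / 4.2 + 0.4)
pred = np.stack([si2, ap2, np.ones_like(t)], axis=1) @ coef
print("mean |error| (mm):", round(float(np.mean(np.abs(pred - lr2))), 2))
```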
NASA Astrophysics Data System (ADS)
Akinpelu, Oluwatosin Caleb
The growing need for better definition of flow units and depositional heterogeneities in petroleum reservoirs and aquifers has stimulated a renewed interest in outcrop studies as reservoir analogues in the last two decades. Despite this surge in interest, outcrop studies remain largely two-dimensional; a major limitation to direct application of outcrop knowledge to the three dimensional heterogeneous world of subsurface reservoirs. Behind-outcrop Ground Penetrating Radar (GPR) imaging provides high-resolution geophysical data, which when combined with two dimensional architectural outcrop observation, becomes a powerful interpretation tool. Due to the high resolution, non-destructive and non-invasive nature of the GPR signal, as well as its reflection-amplitude sensitivity to shaly lithologies, three-dimensional outcrop studies combining two dimensional architectural element data and behind-outcrop GPR imaging hold significant promise with the potential to revolutionize outcrop studies the way seismic imaging changed basin analysis. Earlier attempts at GPR imaging on ancient clastic deposits were fraught with difficulties resulting from inappropriate field techniques and subsequent poorly-informed data processing steps. This project documents advances in GPR field methodology, recommends appropriate data collection and processing procedures and validates the value of integrating outcrop-based architectural-element mapping with GPR imaging to obtain three dimensional architectural data from outcrops. Case studies from a variety of clastic deposits: Whirlpool Formation (Niagara Escarpment), Navajo Sandstone (Moab, Utah), Dunvegan Formation (Pink Mountain, British Columbia), Chinle Formation (Southern Utah) and St. Mary River Formation (Alberta) demonstrate the usefulness of this approach for better interpretation of outcrop scale ancient depositional processes and ultimately as a tool for refining existing facies models, as well as a predictive tool for subsurface reservoir modelling. While this approach is quite promising for detailed three-dimensional outcrop studies, it is not an all-purpose panacea; thick overburden, poor antenna-ground coupling in rough terrains typical of outcrops, low penetration and rapid signal attenuation in mudstone and diagenetic clay- rich deposits often limit the prospects of this novel technique.
A Web-based Visualization System for Three Dimensional Geological Model using Open GIS
NASA Astrophysics Data System (ADS)
Nemoto, T.; Masumoto, S.; Nonogaki, S.
2017-12-01
A three-dimensional geological model provides important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer plays the role of mapping horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions has been enabled using PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the WMS (Web Map Service) and WPS OGC standards. Horizontal cross sections are overlaid on the topographic map. A vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram. The user can visualize them from various angles by mouse operation. WebGL is utilized for 3D visualization. WebGL is a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software. The geological boundary surfaces can be downloaded to incorporate the geologic structure into CAD designs and models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
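As an illustration of how such a MapServer-backed system is typically queried, the sketch below issues a standard OGC WMS GetMap request for a horizontal cross-section layer. The endpoint, mapfile path, layer name and bounding box are hypothetical placeholders, not the actual service described in the abstract.

```python
import requests

# Hypothetical MapServer WMS endpoint; the real URL, mapfile, layer, CRS and
# bounding box depend on the deployed system.
WMS_URL = "https://example.org/cgi-bin/mapserv"

params = {
    "map": "/maps/geology.map",        # hypothetical mapfile path
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "geology_slice_z-50m",   # hypothetical horizontal cross-section layer
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "139.6,35.5,139.9,35.8",   # illustrative lon/lat bounding box
    "WIDTH": 800,
    "HEIGHT": 800,
    "FORMAT": "image/png",
    "TRANSPARENT": "TRUE",
}

resp = requests.get(WMS_URL, params=params, timeout=30)
resp.raise_for_status()
with open("cross_section.png", "wb") as f:
    f.write(resp.content)  # PNG slice to overlay on the topographic base map
```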
1983-06-01
[Fragmentary abstract excerpt] The system provides a convenient, low-noise, fully parallel method of improving contrast and enhancing structural detail in an image prior to further processing. Related effort is directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift-varying deblurring of images. A deconvolution algorithm has been studied with promising results for simulated motion blurs; future work will focus on noise effects.
Active terahertz wave imaging system for detecting hidden objects
NASA Astrophysics Data System (ADS)
Gan, Yuner; Liu, Ming; Zhao, Yuejin
2016-11-01
Terahertz waves can penetrate common dielectric materials such as clothing, cardboard boxes and plastics. In addition, the low photon energy and non-ionizing character of terahertz radiation make it especially suitable for safety inspection of the human body. Terahertz imaging technology therefore has tremendous potential for security inspection at stations, airports and other public places. Terahertz imaging systems fall into two categories: active and passive. So far, most terahertz imaging systems operate in a point-by-point mechanical scanning mode using passive imaging. Passive imaging results tend to have low contrast and insufficient clarity. This paper presents the design and implementation of an active terahertz wave imaging system that combines terahertz transmission and reception with a Cassegrain antenna. A 94 GHz terahertz wave is generated by an impact ionization avalanche transit time (IMPATT) diode, focused onto the feed element of the Cassegrain antenna by a high density polyethylene (HDPE) lens, and transmitted to the human body by the Cassegrain antenna. The reflected terahertz wave returns along the same path to the feed element and is focused onto the horn antenna of the detector by another HDPE lens. Scanning is performed with two planar mirrors, one responsible for horizontal scanning and the other for vertical scanning. Our system can produce a clear image of the human body and has better sensitivity and resolution than a passive imaging system, while costing considerably less than other active imaging systems.
Zhao, Liming; Ouyang, Qi; Chen, Dengfu; Udupa, Jayaram K; Wang, Huiqian; Zeng, Yuebin
2014-11-01
To provide an accurate surface defect inspection system and bring robust automated image segmentation to the routine production line, a general approach is presented for continuous casting slab (CC-slab) surface defect extraction and delineation. The applicability of the system is not tied to CC-slabs exclusively. We combined line-array CCD (charge-coupled device) traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging) strategies in designing the system, with the aim of overcoming the limitations of each imaging modality. In the system, the images acquired from the two CCD sensors are carefully aligned in space and in time by a maximum mutual information-based registration scheme. Subsequently, the image information from the two subsystems is fused, namely the continuous 2D information from LS-imaging and the 3D depth information from AL-imaging. Finally, on the basis of the established dual scanning imaging system, region of interest (ROI) localization by seed specification was designed, and delineation of the ROI by the iterative relative fuzzy connectedness (IRFC) algorithm was used to obtain a precise inspection result. Our method takes into account the complementary advantages of the two common machine vision (MV) systems and performs competitively with the state of the art, as seen from the comparison of experimental results. For the first time, a joint imaging scanning strategy is proposed for CC-slab surface defect inspection, providing a feasible way to apply powerful ROI delineation strategies to the MV inspection field. Multi-ROI delineation using IRFC in this research field may further improve the results.
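The maximum mutual information criterion used for aligning the two sensor streams can be written compactly from the joint grey-level histogram; a minimal NumPy sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two aligned images, computed from their
    joint grey-level histogram. Maximising this score over candidate
    transforms is the core of mutual-information-based registration."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()           # joint probability
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img_b
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```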
Two-dimensional Imaging Velocity Interferometry: Technique and Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erskine, D J; Smith, R F; Bolme, C
2011-03-23
We describe the data analysis procedures for an emerging interferometric technique for measuring motion across a two-dimensional image at a moment in time, i.e. a snapshot 2d-VISAR. Velocity interferometers (VISAR) measuring target motion to high precision have been an important diagnostic in shockwave physics for many years. Until recently, this diagnostic has been limited to measuring motion at points or lines across a target. We introduce an emerging interferometric technique for measuring motion across a two-dimensional image, which could be called a snapshot 2d-VISAR. If a sufficiently fast movie camera technology existed, it could be placed behind a traditional VISAR optical system and record a 2d image vs time. But since that technology is not yet available, we use a CCD detector to record a single 2d image, with the pulsed nature of the illumination providing the time resolution. Consequently, since we are using pulsed illumination having a coherence length shorter than the VISAR interferometer delay (~0.1 ns), we must use the white light velocimetry configuration to produce fringes with significant visibility. In this scheme, two interferometers (illuminating, detecting) having nearly identical delays are used in series, with one before the target and one after. This produces fringes with at most 50% visibility, but otherwise has the same fringe shift per target motion of a traditional VISAR. The 2d-VISAR observes a new world of information about shock behavior not readily accessible by traditional point or 1d-VISARs, simultaneously providing both a velocity map and an 'ordinary' snapshot photograph of the target. The 2d-VISAR has been used to observe nonuniformities in NIF related targets (polycrystalline diamond, Be), and in Si and Al.
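Converting the recovered fringe phase to velocity follows the standard VISAR relation, in which one fringe corresponds to a velocity-per-fringe of λ/(2τ(1+δ)). The sketch below applies that relation to a 2D phase map; the numerical values are illustrative and not taken from the paper.

```python
import numpy as np

def visar_velocity_map(phase_map_rad, wavelength_m, tau_s, delta=0.0):
    """Convert a 2D fringe-phase map (radians) to a velocity map (m/s) with the
    standard VISAR relation: one fringe corresponds to a velocity of
    VPF = lambda / (2 * tau * (1 + delta)), where tau is the interferometer
    delay and delta the etalon dispersion correction."""
    vpf = wavelength_m / (2.0 * tau_s * (1.0 + delta))  # velocity per fringe
    return vpf * phase_map_rad / (2.0 * np.pi)

# Illustrative numbers only: 532 nm probe, 0.1 ns delay, no dispersion correction
velocity = visar_velocity_map(np.full((512, 512), np.pi), 532e-9, 0.1e-9)
```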
Echocardiography Comparison Between Two and Three Dimensional Echocardiograms
NASA Technical Reports Server (NTRS)
2003-01-01
Echocardiography uses sound waves to image the heart and other organs. Developing a compact version of the latest technology improved the ease of monitoring crew member health, a critical task during long space flights. NASA researchers plan to adapt the three-dimensional (3-D) echocardiogram for space flight. The two-dimensional (2-D) echocardiogram utilized in orbit on the International Space Station (ISS) was effective, but difficult to use with precision. A heart image from a 2-D echocardiogram (left) is of a better quality than that from a 3-D device (right), but the 3-D imaging procedure is more user-friendly.
Hierarchical classification in high dimensional numerous class cases
NASA Technical Reports Server (NTRS)
Kim, Byungyong; Landgrebe, D. A.
1990-01-01
As progress in new sensor technology continues, increasingly high resolution imaging sensors are being developed. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. Three methods for designing a decision tree classifier are discussed: a top down approach, a bottom up approach, and a hybrid approach. Three feature extraction techniques are implemented. Canonical and extended canonical techniques are mainly dependent upon the mean difference between two classes. An autocorrelation technique is dependent upon the correlation differences. The mathematical relationship between sample size, dimensionality, and risk value is derived.
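The canonical technique mentioned above depends mainly on the mean difference between two classes; a minimal Fisher-style sketch of such a feature direction (an illustration on synthetic data, not the authors' exact formulation) is:

```python
import numpy as np

def canonical_feature(class_a, class_b):
    """One canonical (Fisher-style) feature direction for two classes of
    high-dimensional samples (rows are samples, columns are bands).
    The direction depends on the mean difference, scaled by the pooled
    within-class covariance."""
    mean_diff = class_a.mean(axis=0) - class_b.mean(axis=0)
    pooled_cov = 0.5 * (np.cov(class_a, rowvar=False) + np.cov(class_b, rowvar=False))
    w = np.linalg.solve(pooled_cov, mean_diff)  # S_w^{-1} (mu_a - mu_b)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (200, 50))   # 50-band synthetic samples, class A
b = rng.normal(0.3, 1.0, (200, 50))   # class B, shifted mean
projection_a = a @ canonical_feature(a, b)  # 1-D projected feature
```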
Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection
2008-05-01
[Fragmentary abstract excerpt] ...with the sensed image. The two-dimensional correlation coefficient $r$ for two matrices $A$ and $B$, both of size $M \times N$, is given by
$$ r = \frac{\sum_{m}\sum_{n} (A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\left(\sum_{m}\sum_{n} (A_{mn} - \bar{A})^{2}\right)\left(\sum_{m}\sum_{n} (B_{mn} - \bar{B})^{2}\right)}}. $$
A feature-based registration method matches features in a high-dimensional feature space; the current implementation of the SIFT algorithm uses a brute-force matcher, and the scale space is built by repeatedly convolving the image with a Gaussian kernel.
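For reference, the same two-dimensional correlation coefficient can be computed in a few lines of NumPy (variable names are illustrative):

```python
import numpy as np

def corr2(a, b):
    """Two-dimensional correlation coefficient between equally sized images,
    matching the formula above."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```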
NASA Astrophysics Data System (ADS)
Wang, P.; Xing, C.
2018-04-01
In the image plane of GB-SAR, identification of the deformation distribution is usually carried out by manual interpretation. This method requires analysts to have adequate experience of radar imaging and target recognition; otherwise deformation targets or regions are easily misidentified. Therefore, it is very useful to connect the two-dimensional (2D) image coordinate system with the common three-dimensional (3D) terrain coordinate system. To improve the global accuracy and reliability of the transformation from 2D coordinates of GB-SAR images to local 3D coordinates, and to overcome the limitations of the traditional similarity transformation parameter estimation method, 3D laser scanning data are used to assist the transformation of GB-SAR image coordinates. A straight-line fitting method for calculating the horizontal rotation angle is proposed in this paper. After projection into a consistent imaging plane, the horizontal rotation angle can be calculated by using the linear characteristics of structures in the radar image and in the 3D coordinate system. Aided by external elevation information from 3D laser scanning, point clouds and pixels are matched on the projection plane according to the geometric projection principle of GB-SAR imaging, realizing the transformation of GB-SAR image coordinates to local 3D coordinates. Finally, the effectiveness of the method is verified by a GB-SAR deformation monitoring experiment on the high slope of the Geheyan dam.
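The abstract does not give the full straight-line fitting procedure; as a hedged illustration, the horizontal orientation of a linear structure can be estimated from a simple least-squares line fit in each coordinate system, and the rotation angle taken as the difference of the two orientations:

```python
import numpy as np

def horizontal_angle(points_xy):
    """Fit a straight line to planar points (e.g. a linear structure seen in the
    GB-SAR image plane or in the projected laser-scan point cloud) and return
    its orientation in degrees."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    slope, _ = np.polyfit(x, y, 1)
    return np.degrees(np.arctan(slope))

# Illustrative use: rotation needed to align the radar image plane with the
# local 3D frame, as the difference of the two fitted orientations.
# rotation_deg = horizontal_angle(points_local) - horizontal_angle(points_radar)
```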
High-Dimensional Quantum Information Processing with Linear Optics
NASA Astrophysics Data System (ADS)
Fitzpatrick, Casey A.
Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated-photon imaging scheme is reported that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects and to build images from those interactions. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for carrying out quantum walks on arbitrary graph structures, a powerful tool for any quantum computer. It is shown that the novel architecture provides new, efficient capabilities for the optical quantum simulation of Hamiltonians and topologically protected states. Further, these simulations use exponentially fewer resources than feedforward techniques, scale linearly to higher-dimensional systems, and use only linear optics, thus offering a concrete experimentally achievable implementation of graphical models of discrete-time quantum systems.
Single-Photon Detectors for Time-of-Flight Range Imaging
NASA Astrophysics Data System (ADS)
Stoppa, David; Simoni, Andrea
We live in a three-dimensional (3D) world and, thanks to the stereoscopic vision provided by our two eyes in combination with the powerful neural network of the brain, we are able to perceive the distance of objects. Nevertheless, despite the huge market volume of digital cameras, solid-state image sensors can capture only a two-dimensional (2D) projection of the scene under observation, losing a variable of paramount importance, i.e., the scene depth. On the contrary, 3D vision tools could offer amazing possibilities of improvement in many areas thanks to the increased accuracy and reliability of the models representing the environment. Among the great variety of distance measuring techniques and detection systems available, this chapter treats only the emerging niche of solid-state, scannerless systems based on the TOF principle and using detectors with SPAD-based pixels. The chapter is organized into three main parts. First, TOF systems and measuring techniques are described. In the second part, the most meaningful sensor architectures for scannerless TOF distance measurements are analyzed, focusing on the circuit building blocks required by time-resolved image sensors. Finally, a performance summary is provided and a perspective view of near-future developments of SPAD-TOF sensors is given.
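Direct TOF ranging, which underlies these SPAD-based systems, reduces to distance = c·t/2 once the round-trip delay is estimated; a minimal sketch using a synthetic photon-arrival histogram (illustrative values only):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_distance(photon_times_s):
    """Distance estimate from a SPAD time-of-flight histogram: the round-trip
    delay is taken as the histogram peak, and distance = c * t / 2."""
    hist, edges = np.histogram(photon_times_s, bins=256)
    t_peak = 0.5 * (edges[:-1] + edges[1:])[np.argmax(hist)]
    return C * t_peak / 2.0

# Synthetic timestamps with a 10 ns round trip: returns roughly 1.5 m
print(tof_distance(np.random.normal(10e-9, 0.2e-9, 10_000)))
```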
Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition
Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao
2017-01-01
Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Second, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
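The first step, turning a vibration signal into a recurrence plot, can be sketched as follows (embedding dimension, delay and threshold are illustrative choices, not the paper's settings):

```python
import numpy as np

def recurrence_plot(signal, dim=3, delay=2, eps=None):
    """Binary recurrence plot of a 1-D vibration signal: delay-embed the
    signal, then mark pairs of states closer than a threshold eps."""
    n = len(signal) - (dim - 1) * delay
    states = np.column_stack([signal[i * delay: i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * dists.max()               # simple data-driven threshold
    return (dists <= eps).astype(np.uint8)    # 2D image fed to feature extraction

rp = recurrence_plot(np.sin(np.linspace(0, 20 * np.pi, 500)))
```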
Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867
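A hedged sketch of the section-to-section connection step: here adjacent labelled sections are linked by relative region overlap as a simple stand-in for the correlation-based linking described above; thresholds and names are hypothetical.

```python
import numpy as np

def link_regions(labels_a, labels_b, min_overlap=0.5):
    """Link labelled 2D regions in adjacent EM sections by relative overlap
    (a simple stand-in for the correlation-based linking described above).
    Labels are integer images with 0 as background."""
    links = []
    for ra in np.unique(labels_a):
        if ra == 0:
            continue
        mask_a = labels_a == ra
        hits = labels_b[mask_a]
        hits = hits[hits != 0]
        if hits.size == 0:
            continue
        rb = np.bincount(hits.astype(int)).argmax()  # most frequent overlapping region
        if (hits == rb).sum() / mask_a.sum() >= min_overlap:
            links.append((int(ra), int(rb)))
    return links  # pairs of region ids to join into a 3D process
```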
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.
A New Perspective on Surface Weather Maps
ERIC Educational Resources Information Center
Meyer, Steve
2006-01-01
A two-dimensional weather map is actually a physical representation of three-dimensional atmospheric conditions at a specific point in time. Abstract thinking is required to visualize this two-dimensional image in three-dimensional form. But once that visualization is accomplished, many of the meteorological concepts and processes conveyed by the…
Three-dimensional imaging of the brain cavities in human embryos.
Blaas, H G; Eik-Nes, S H; Kiserud, T; Berg, S; Angelsen, B; Olstad, B
1995-04-01
A system for high-resolution three-dimensional imaging of small structures has been developed, based on the Vingmed CFM-800 annular array sector scanner with a 7.5-MHz transducer attached to a PC-based TomTec Echo-Scan unit. A stepper motor rotates the transducer 180 degrees and the complete three-dimensional scan consists of 132 two-dimensional images, video-grabbed and scan-converted into a regular volumetric data set by the TomTec unit. Three normal pregnancies with embryos of gestational age 7, 9 and 10 weeks received a transvaginal examination with special attention to the embryonic/fetal brain. In all three cases, it was possible to obtain high-resolution images of the brain cavities. At 7 weeks, both hemispheres and their connection to the third ventricle were delineated. The isthmus rhombencephali could be visualized. At 9 weeks, the continuous development of the brain cavities could be followed and at 11 weeks the dominating size of the hemispheres could be depicted. It is concluded that present ultrasound technology has reached a stage where structures of only a few millimeters can be imaged in vivo in three-dimensions with a quality that resembles the plaster figures used in embryonic laboratories. The method can become an important tool in future embryological research and also in the detection of early developmental disorders of the embryo.
Computer-aided light sheet flow visualization using photogrammetry
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1994-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
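One common way to express the photogrammetric projection of a 2D light sheet pixel into 3D is to intersect the camera ray through that pixel with the known light sheet plane, using a pinhole model. The sketch below is an illustration under that assumption, not NASA's exact formulation.

```python
import numpy as np

def backproject_pixel(pixel, K, R, t, plane_point, plane_normal):
    """Project a 2D light-sheet image pixel into 3D by intersecting the camera
    ray through that pixel with the known light-sheet plane.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray_cam = np.linalg.inv(K) @ uv1        # ray direction in the camera frame
    ray_world = R.T @ ray_cam               # rotate into the world frame
    cam_center = -R.T @ t                   # camera centre in the world frame
    s = np.dot(plane_point - cam_center, plane_normal) / np.dot(ray_world, plane_normal)
    return cam_center + s * ray_world       # 3D point on the light sheet
```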
Computer-Aided Light Sheet Flow Visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
Noda, Naoki; Kamimura, Shinji
2008-02-01
With conventional light microscopy, precision in the measurement of the displacement of a specimen depends on the signal-to-noise ratio when we measure the light intensity of magnified images. This implies that, for the improvement of precision, getting brighter images and reducing background light noise are both inevitably required. For this purpose, we developed new optics for laser dark-field illumination. For the microscopy, we used a laser beam and a pair of axicons (conical lenses) to obtain an optimal condition for dark-field observations. The optics were applied to measuring two-dimensional microbead displacements with subnanometer precision. The overall bandwidth of our detection system was 10 kHz. Over most of this bandwidth, the observed noise level was as small as 0.1 nm/√Hz.
Diatom Valve Three-Dimensional Representation: A New Imaging Method Based on Combined Microscopies
Ferrara, Maria Antonietta; De Tommasi, Edoardo; Coppola, Giuseppe; De Stefano, Luca; Rea, Ilaria; Dardano, Principia
2016-01-01
The frustule of diatoms, unicellular microalgae, shows very interesting photonic features, generally related to its complicated and quasi-periodic micro- and nano-structure. In order to simulate light propagation inside and through this natural structure, it is important to develop three-dimensional (3D) models of synthetic replicas with high spatial resolution. In this paper, we present a new method that generates images of microscopic diatoms with high definition, by merging scanning electron microscopy and digital holography microscopy or atomic force microscopy data. Starting from two digital images, both acquired separately with standard characterization procedures, a high spatial resolution (Δz = λ/20, Δx = Δy ≅ 100 nm, at least) 3D model of the object has been generated. Then, the two sets of data have been processed by matrix formalism, using an original mathematical algorithm implemented in commercially available software. The developed methodology could also be of broad interest in the design and fabrication of micro-opto-electro-mechanical systems. PMID:27690008
Technical overview of the millimeter-wave imaging reflectometer on the DIII-D tokamak (invited)
Muscatello, Christopher M.; Domier, Calvin W.; Hu, Xing; ...
2014-07-22
The two-dimensional mm-wave imaging reflectometer (MIR) on DIII-D is a multi-faceted device for diagnosing electron density fluctuations in fusion plasmas. Its multi-channel, multi-frequency capabilities and high sensitivity permit visualization and quantitative diagnosis of density perturbations, including correlation length, wavenumber, mode propagation velocity, and dispersion. The two-dimensional capabilities of MIR are made possible with twelve vertically separated sightlines and four-frequency operation (corresponding to four radial channels). The 48-channel DIII-D MIR system has a tunable source that can be stepped in 500 µs increments over a range of 56 to 74 GHz. An innovative optical design keeps both on-axis and off-axis channels focused at the cutoff surface, permitting imaging over an extended poloidal region. As a result, the integrity of the MIR optical design is confirmed by comparing Gaussian beam calculations to laboratory measurements of the transmitter beam pattern and receiver antenna patterns.
Axial Tomography from Digitized Real Time Radiography
DOE R&D Accomplishments Database
Zolnay, A. S.; McDonald, W. M.; Doupont, P. A.; McKinney, R. L.; Lee, M. M.
1985-01-18
Axial tomography from digitized real time radiographs provides a useful tool for industrial radiography and tomography. The components of this system are: x-ray source, image intensifier, video camera, video line extractor and digitizer, data storage and reconstruction computers. With this system it is possible to view a two dimensional x-ray image in real time at each angle of rotation and select the tomography plane of interest by choosing which video line to digitize. The digitization of a video line requires less than a second making data acquisition relatively short. Further improvements on this system are planned and initial results are reported.
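The abstract does not name the reconstruction algorithm; as one common choice, filtered back-projection can reconstruct the selected tomography plane from the stack of digitized video lines (the sinogram). The sketch below uses scikit-image with placeholder data.

```python
import numpy as np
from skimage.transform import iradon

# Sinogram: one digitized video line per rotation angle, stacked column-wise.
# Shape = (detector_pixels, n_angles); angles in degrees over a full rotation.
n_angles = 360
angles = np.linspace(0.0, 360.0, n_angles, endpoint=False)
sinogram = np.random.rand(512, n_angles)  # placeholder for the acquired lines

# Filtered back-projection of the selected tomography plane
slice_image = iradon(sinogram, theta=angles, circle=True)
```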
Dueholm, M; Christensen, J W; Rydbjerg, S; Hansen, E S; Ørtoft, G
2015-06-01
To evaluate the diagnostic efficiency of two-dimensional (2D) and three-dimensional (3D) transvaginal ultrasonography, power Doppler angiography (PDA) and gel infusion sonography (GIS) at offline analysis for recognition of malignant endometrium compared with real-time evaluation during scanning, and to determine optimal image parameters at 3D analysis. One hundred and sixty-nine consecutive women with postmenopausal bleeding and endometrial thickness ≥ 5 mm underwent systematic evaluation of endometrial pattern on 2D imaging, and 2D videoclips and 3D volumes were later analyzed offline. Histopathological findings at hysteroscopy or hysterectomy were used as the reference standard. The efficiency of the different techniques for diagnosis of malignancy was calculated and compared. 3D image parameters, endometrial volume and 3D vascular indices were assessed. Optimal 3D image parameters were transformed by logistic regression into a risk of endometrial cancer (REC) score, including scores for body mass index, endometrial thickness and endometrial morphology at gray-scale and PDA and GIS. Offline 2D and 3D analysis were equivalent, but had lower diagnostic performance compared with real-time evaluation during scanning. Their diagnostic performance was not markedly improved by the addition of PDA or GIS, but their efficiency was comparable with that of real-time 2D-GIS in offline examinations of good image quality. On logistic regression, the 3D parameters from the REC-score system had the highest diagnostic efficiency. The area under the curve of the REC-score system at 3D-GIS (0.89) was not improved by inclusion of vascular indices or endometrial volume calculations. Real-time evaluation during scanning is most efficient, but offline 2D and 3D analysis is useful for prediction of endometrial cancer when good image quality can be obtained. The diagnostic efficiency at 3D analysis may be improved by use of REC-scoring systems, without the need for calculation of vascular indices or endometrial volume. The optimal imaging modality appears to be real-time 2D-GIS. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
Lattice Light Sheet Microscopy: Imaging Molecules to Embryos at High Spatiotemporal Resolution
Chen, Bi-Chang; Legant, Wesley R.; Wang, Kai; Shao, Lin; Milkie, Daniel E.; Davidson, Michael W.; Janetopoulos, Chris; Wu, Xufeng S.; Hammer, John A.; Liu, Zhe; English, Brian P.; Mimori-Kiyosue, Yuko; Romero, Daniel P.; Ritter, Alex T.; Lippincott-Schwartz, Jennifer; Fritz-Laylin, Lillian; Mullins, R. Dyche; Mitchell, Diana M.; Bembenek, Joshua N.; Reymann, Anne-Cecile; Böhme, Ralph; Grill, Stephan W.; Wang, Jennifer T.; Seydoux, Geraldine; Tulu, U. Serdar; Kiehart, Daniel P.; Betzig, Eric
2015-01-01
Although fluorescence microscopy provides a crucial window into the physiology of living specimens, many biological processes are too fragile, too small, or occur too rapidly to see clearly with existing tools. We crafted ultra-thin light sheets from two-dimensional optical lattices that allowed us to image three-dimensional (3D) dynamics for hundreds of volumes, often at sub-second intervals, at the diffraction limit and beyond. We applied this to systems spanning four orders of magnitude in space and time, including the diffusion of single transcription factor molecules in stem cell spheroids, the dynamic instability of mitotic microtubules, the immunological synapse, neutrophil motility in a 3D matrix, and embryogenesis in Caenorhabditis elegans and Drosophila melanogaster. The results provide a visceral reminder of the beauty and complexity of living systems. PMID:25342811
A Review on Real-Time 3D Ultrasound Imaging Technology
Zeng, Zhaozheng
2017-01-01
Real-time three-dimensional (3D) ultrasound (US) has attracted much attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information about the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have reported real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review of how to design an interactive system with appropriate processing algorithms has been missing, resulting in the lack of a systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067
C-arm technique using distance driven method for nephrolithiasis and kidney stones detection
NASA Astrophysics Data System (ADS)
Malalla, Nuhad; Sun, Pengfei; Chen, Ying; Lipkin, Michael E.; Preminger, Glenn M.; Qin, Jun
2016-04-01
Distance-driven projection is a state-of-the-art method used for reconstruction in x-ray imaging techniques. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving the C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dosage and short examination time. This paper presents a new simulation study of two reconstruction methods based on the distance-driven approach: the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). The distance-driven method is efficient, with low computation cost and fewer artifacts compared with other methods such as ray-driven and pixel-driven approaches. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. Results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
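A minimal sketch of the MLEM update used in such studies, with a dense system matrix standing in for the distance-driven projector (which in practice is applied on the fly rather than stored):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation maximization for y ≈ A @ x with
    non-negative x. A is a (rays x voxels) system matrix; in a distance-driven
    implementation its action would be computed on the fly rather than stored."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12       # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x
```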
A Review on Real-Time 3D Ultrasound Imaging Technology.
Huang, Qinghua; Zeng, Zhaozheng
2017-01-01
Real-time three-dimensional (3D) ultrasound (US) has attracted much attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information about the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have reported real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review of how to design an interactive system with appropriate processing algorithms has been missing, resulting in the lack of a systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.
Super Talbot effect in indefinite metamaterial.
Zhao, Wangshi; Huang, Xiaoyue; Lu, Zhaolin
2011-08-01
The Talbot effect (or the self-imaging effect) can be observed for a periodic object with a pitch larger than the diffraction limit of an imaging system, where the paraxial approximation applies. In this paper, we show that the super Talbot effect can be achieved in an indefinite metamaterial even when the period is much smaller than the diffraction limit, in both two-dimensional and three-dimensional numerical simulations where the paraxial approximation is not applied. This is attributed to the fact that evanescent waves, which carry the information about subwavelength features of the object, can be converted into propagating waves and conveyed to the far field by the metamaterial, in which the permittivity in the propagation direction is negative while the transverse components are positive. The indefinite metamaterial can be approximated by a thin, alternating multilayer metal and insulator (MMI) stack. As long as the loss of the metamaterial is small enough, a deep subwavelength image size can be obtained in the super Talbot effect.
NASA Astrophysics Data System (ADS)
Liu, Zexi; Cohen, Fernand
2017-11-01
We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, rendering a difficult two-dimensional (2-D) face recognition problem as a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under good illumination conditions, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.
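A hedged sketch of Levenberg-Marquardt pose estimation in the spirit described above: the reprojection residual, pose parameterization and focal length below are illustrative assumptions, with SciPy's solver standing in for the optimization process named in the abstract.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residual(pose, model_pts, image_pts, focal):
    """Residual between observed 2D landmarks and the projected 3D model
    landmarks under a 6-parameter pose (rotation vector + translation)."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    cam = model_pts @ R.T + t                 # model points in the camera frame
    proj = focal * cam[:, :2] / cam[:, 2:3]   # pinhole projection
    return (proj - image_pts).ravel()

def estimate_pose(model_pts, image_pts, focal=800.0):
    x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 5.0])  # initial guess: 5 units in front
    result = least_squares(reprojection_residual, x0, method="lm",
                           args=(model_pts, image_pts, focal))
    return result.x
```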
C-ME: A 3D Community-Based, Real-Time Collaboration Tool for Scientific Research and Training
Kolatkar, Anand; Kennedy, Kevin; Halabuk, Dan; Kunken, Josh; Marrinucci, Dena; Bethel, Kelly; Guzman, Rodney; Huckaby, Tim; Kuhn, Peter
2008-01-01
The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data is often quite disparate, stored in separate locations, and not contextually related. Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible by appropriate permission within the computer network directory service or anonymously across the internet through the C-ME application or through a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members. PMID:18286178
A novel method for fast imaging of brain function, non-invasively, with light
NASA Astrophysics Data System (ADS)
Chance, Britton; Anday, Endla; Nioka, Shoko; Zhou, Shuoming; Hong, Long; Worden, Katherine; Li, C.; Murray, T.; Ovetsky, Y.; Pidikiti, D.; Thomas, R.
1998-05-01
Imaging of the human body by any non-invasive technique has been an appropriate goal of physics and medicine, and great success has been obtained with both Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) in brain imaging. Non-imaging responses to functional activation using near infrared spectroscopy of brain (fNIR) obtained in 1993 (Chance, et al. [1]) and in 1994 (Tamura, et al. [2]) are now complemented with images of pre-frontal and parietal stimulation in adults and pre-term neonates in this communication (see also [3]). Prior studies used continuous [4], pulsed [3] or modulated [5] light. The amplitude and phase cancellation of optical patterns as demonstrated for single source detector pairs affords remarkable sensitivity of small object detection in model systems [6]. The methods have now been elaborated with multiple source detector combinations (nine sources, four detectors). Using simple back projection algorithms it is now possible to image sensorimotor and cognitive activation of adult and pre- and full-term neonate human brain function in times < 30 sec and with two dimensional resolutions of < 1 cm in two dimensional displays. The method can be used in evaluation of adult and neonatal cerebral dysfunction in a simple, portable and affordable method that does not require immobilization, as contrasted to MRI and PET.
Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej
2011-01-01
A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of rth-order rational variety mapping (RVM) followed by matrix/tensor factorization. Sparseness constraint implies duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it takes implicitly into account nonlinearities present in the image (ie, they are not required to be known) and it increases spectral diversity (ie, contrast) between materials, due to increased dimensionality of the mapped space. This is expected to improve performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders of the experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in the spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. PMID:21708116
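As a rough stand-in for the RVM-plus-factorization pipeline, the sketch below applies a second-order monomial (polynomial) expansion to pixel spectra and then a non-negative matrix factorization; the mapping, factorization settings and data are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.decomposition import NMF

# Multispectral image: H x W pixels, B spectral channels (synthetic placeholder)
H, W, B = 64, 64, 4
image = np.abs(np.random.rand(H, W, B))
pixels = image.reshape(-1, B)

# Second-order monomial expansion increases spectral diversity (contrast)
mapped = PolynomialFeatures(degree=2, include_bias=False).fit_transform(pixels)

# Factor the mapped data into a few non-negative "material" components
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
abundances = nmf.fit_transform(mapped)             # per-pixel material abundances
segmentation = abundances.argmax(axis=1).reshape(H, W)
```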
Real-time stereo generation for surgical vision during minimal invasive robotic surgery
NASA Astrophysics Data System (ADS)
Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod
2016-03-01
This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection using interlaced images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at good speed at full HD resolution.
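Row interlacing of the two rectified views, as used for passive polarized 3D displays, is straightforward; a minimal sketch (the even/odd row convention is an assumption):

```python
import numpy as np

def interlace_rows(left, right):
    """Row-interlace two rectified views (H x W x 3) into a single frame for a
    passive polarized 3D display: even rows from the left image, odd rows
    from the right image."""
    assert left.shape == right.shape
    frame = left.copy()
    frame[1::2] = right[1::2]
    return frame
```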
Flat dielectric metasurface lens array for three dimensional integral imaging
NASA Astrophysics Data System (ADS)
Zhang, Jianlei; Wang, Xiaorui; Yang, Yi; Yuan, Ying; Wu, Xiongxiong
2018-05-01
In conventional integral imaging, the singlet refractive lens array limits the imaging performance due to its prominent aberrations. Different from a refractive lens array, which relies on phase modulation via phase change accumulated along the optical path, metasurfaces composed of nano-scatterers can produce abrupt phase changes over the scale of a wavelength. In this letter, we propose a novel lens array consisting of two neighboring flat dielectric metasurfaces for an integral imaging system. The aspherical phase profiles of the metasurfaces are optimized to improve imaging performance. The simulation results show that our designed 5 × 5 metasurface-based lens array exhibits high image quality at the design wavelength of 865 nm.
Confocal Imaging of the Embryonic Heart: How Deep?
NASA Astrophysics Data System (ADS)
Miller, Christine E.; Thompson, Robert P.; Bigelow, Michael R.; Gittinger, George; Trusk, Thomas C.; Sedmera, David
2005-06-01
Confocal microscopy allows for optical sectioning of tissues, thus obviating the need for physical sectioning and subsequent registration to obtain a three-dimensional representation of tissue architecture. However, practicalities such as tissue opacity, light penetration, and detector sensitivity have usually limited the available depth of imaging to 200 [mu]m. With the emergence of newer, more powerful systems, we attempted to push these limits to those dictated by the working distance of the objective. We used whole-mount immunohistochemical staining followed by clearing with benzyl alcohol-benzyl benzoate (BABB) to visualize three-dimensional myocardial architecture. Confocal imaging of entire chick embryonic hearts up to a depth of 1.5 mm with voxel dimensions of 3 [mu]m was achieved with a 10× dry objective. For the purpose of screening for congenital heart defects, we used endocardial painting with fluorescently labeled poly-L-lysine and imaged BABB-cleared hearts with a 5× objective up to a depth of 2 mm. Two-photon imaging of whole-mount specimens stained with Hoechst nuclear dye produced clear images all the way through stage 29 hearts without significant signal attenuation. Thus, currently available systems allow confocal imaging of fixed samples to previously unattainable depths, the current limiting factors being objective working distance, antibody penetration, specimen autofluorescence, and incomplete clearing.
Gu, X; Fang, Z-M; Liu, Y; Lin, S-L; Han, B; Zhang, R; Chen, X
2014-01-01
Three-dimensional fluid-attenuated inversion recovery magnetic resonance imaging of the inner ear after intratympanic injection of gadolinium, together with magnetic resonance imaging scoring of the perilymphatic space, were used to investigate the positive identification rate of hydrops and determine the technique's diagnostic value for delayed endolymphatic hydrops. Twenty-five patients with delayed endolymphatic hydrops underwent pure tone audiometry, bithermal caloric testing, vestibular-evoked myogenic potential testing and three-dimensional magnetic resonance imaging of the inner ear after bilateral intratympanic injection of gadolinium. The perilymphatic space of the scanned images was analysed to investigate the positive identification rate of endolymphatic hydrops. According to the magnetic resonance imaging scoring of the perilymphatic space and the diagnostic standard, 84 per cent of the patients examined had endolymphatic hydrops. In comparison, the positive identification rates for vestibular-evoked myogenic potential and bithermal caloric testing were 52 per cent and 72 per cent respectively. Three-dimensional magnetic resonance imaging after intratympanic injection of gadolinium is valuable in the diagnosis of delayed endolymphatic hydrops and its classification. The perilymphatic space scoring system improved the diagnostic accuracy of magnetic resonance imaging.
A sniffer-camera for imaging of ethanol vaporization from wine: the effect of wine glass shape.
Arakawa, Takahiro; Iitani, Kenta; Wang, Xin; Kajiro, Takumi; Toma, Koji; Yano, Kazuyoshi; Mitsubayashi, Kohji
2015-04-21
A two-dimensional imaging system (Sniffer-camera) for visualizing the concentration distribution of ethanol vapor emitting from wine in a wine glass has been developed. This system provides image information of ethanol vapor concentration using chemiluminescence (CL) from an enzyme-immobilized mesh. This system measures ethanol vapor concentration as CL intensities from luminol reactions induced by alcohol oxidase and a horseradish peroxidase (HRP)-luminol-hydrogen peroxide system. Conversion of ethanol distribution and concentration to two-dimensional CL was conducted using an enzyme-immobilized mesh containing an alcohol oxidase, horseradish peroxidase, and luminol solution. The temporal changes in CL were detected using an electron multiplier (EM)-CCD camera and analyzed. We selected three types of glasses-a wine glass, a cocktail glass, and a straight glass-to determine the differences in ethanol emission caused by the shape effects of the glass. The emission measurements of ethanol vapor from wine in each glass were successfully visualized, with pixel intensity reflecting ethanol concentration. Of note, a characteristic ring shape attributed to high alcohol concentration appeared near the rim of the wine glass containing 13 °C wine. Thus, the alcohol concentration in the center of the wine glass was comparatively lower. The Sniffer-camera was demonstrated to be sufficiently useful for non-destructive ethanol measurement for the assessment of food characteristics.
Fast Laser Holographic Interferometry For Wind Tunnels
NASA Technical Reports Server (NTRS)
Lee, George
1989-01-01
Proposed system makes holographic interferograms quickly in wind tunnels. Holograms reveal two-dimensional flows around airfoils and provide information on distributions of pressure, structures of wake and boundary layers, and density contours of flow fields. Holograms form quickly in thermoplastic plates in wind tunnel. Plates rigid and left in place so neither vibrations nor photographic-development process degrades accuracy of holograms. System processes and analyzes images quickly. Semiautomatic micro-computer-based desktop image-processing unit now undergoing development moves easily to wind tunnel, and its speed and memory adequate for flows about airfoils.
On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial.
Andress, Sebastian; Johnson, Alex; Unberath, Mathias; Winkler, Alexander Felix; Yu, Kevin; Fotouhi, Javad; Weidert, Simon; Osgood, Greg; Navab, Nassir
2018-04-01
Fluoroscopic x-ray guidance is a cornerstone for percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasiunprepared operating rooms. The proposed system builds upon a multimodality marker and simultaneous localization and mapping technique to cocalibrate an optical see-through head mounted display to a C-arm fluoroscopy system. Then, annotations on the 2-D x-ray images can be rendered as virtual objects in 3-D providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, design a feasibility study on a semianthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects that we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.
Three-dimensional representation of curved nanowires.
Huang, Z; Dikin, D A; Ding, W; Qiao, Y; Chen, X; Fridman, Y; Ruoff, R S
2004-12-01
Nanostructures, such as nanowires, nanotubes and nanocoils, can be described in many cases as quasi one-dimensional curved objects projecting in three-dimensional space. A parallax method to construct the correct three-dimensional geometry of such one-dimensional nanostructures is presented. A series of scanning electron microscope images was acquired at different view angles, thus providing a set of image pairs that were used to generate three-dimensional representations using a matlab program. An error analysis as a function of the view angle between the two images is presented and discussed. As an example application, the importance of knowing the true three-dimensional shape of boron nanowires is demonstrated; without the nanowire's correct length and diameter, mechanical resonance data cannot provide an accurate estimate of Young's modulus.
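A minimal form of the parallax reconstruction, assuming parallel (SEM-like) projection and a eucentric tilt about the vertical image axis, recovers the height of a feature from its horizontal positions in two views; the geometry and numbers below are illustrative, not the authors' MATLAB implementation.

```python
import numpy as np

def depth_from_parallax(x_ref, x_tilt, tilt_deg):
    """Height (along the beam axis) of a feature from its horizontal image
    positions in a reference view (0 degrees) and a view tilted by tilt_deg
    about the vertical image axis, assuming parallel projection:
        x_tilt = x_ref * cos(theta) + z * sin(theta)."""
    theta = np.deg2rad(tilt_deg)
    return (x_tilt - x_ref * np.cos(theta)) / np.sin(theta)

# Points traced along a nanowire, measured in both views (units: pixels or nm)
x0 = np.array([10.0, 20.0, 30.0])
x5 = np.array([11.5, 22.1, 32.4])
z = depth_from_parallax(x0, x5, tilt_deg=5.0)
```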
Lipscomb, K
1980-01-01
Biplane cineradiography is a potentially powerful tool for precise measurement of intracardiac dimensions. The most systematic approach to these measurements is the creation of a three-dimensional coordinate system within the x-ray field. Using this system, interpoint distances, such as between radiopaque clips or coronary artery bifurcations, can be calculated by use of the Pythagoras theorem. Alternatively, calibration factors can be calculated in order to determine the absolute dimensions of a structure, such as a ventricle or coronary artery. However, cineradiography has two problems that have precluded widespread use of the system. These problems are pincushion distortion and variable image magnification. In this paper, methodology to quantitate and compensate for these variables is presented. The method uses radiopaque beads permanently mounted in the x-ray field. The positions of the bead images on the x-ray film determine the compensation factors. Using this system, measurements are made with a standard deviation of approximately 1% of the true value.
Selective Removal of Natural Occlusal Caries by Coupling Near-infrared Imaging with a CO2 Laser
Tao, You-Chen; Fried, Daniel
2011-01-01
Laser removal of dental hard tissue can be combined with optical, spectral or acoustic feedback systems to selectively ablate dental caries and restorative materials. Near-infrared (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue. Last year we successfully demonstrated that near-IR images can be used to guide a CO2 laser ablation system for the selective removal of artificial caries lesions on smooth surfaces. The objective of this study was to test the hypothesis that two-dimensional near-infrared images of natural occlusal caries can be used to guide a CO2 laser for selective removal. Two-dimensional NIR images were acquired at 1310-nm of extracted human molar teeth with occlusal caries. Polarization sensitive optical coherence tomography (PS-OCT) was also used to acquire depth-resolved images of the lesion areas. An imaging processing module was developed to analyze the NIR imaging output and generate optical maps that were used to guide a CO2 laser to selectively remove the lesions at a uniform depth. Post-ablation NIR images were acquired to verify caries removal. Based on the analysis of the NIR images, caries lesions were selectively removed with a CO2 laser while sound tissues were conserved. However, the removal rate varied markedly with the severity of decay and multiple passes were required for caries removal. These initial results are promising but indicate that the selective removal of natural caries is more challenging than the selective removal of artificial lesions due to varying tooth geometry, the highly variable organic/mineral ratio in natural lesions and more complicated lesion structure. PMID:21909225
Selective removal of natural occlusal caries by coupling near-infrared imaging with a CO2 laser
NASA Astrophysics Data System (ADS)
Tao, You-Chen; Fried, Daniel
2008-02-01
Laser removal of dental hard tissue can be combined with optical, spectral or acoustic feedback systems to selectively ablate dental caries and restorative materials. Near-infrared (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue. Last year we successfully demonstrated that near-IR images can be used to guide a CO2 laser ablation system for the selective removal of artificial caries lesions on smooth surfaces. The objective of this study was to test the hypothesis that two-dimensional near-infrared images of natural occlusal caries can be used to guide a CO2 laser for selective removal. Two-dimensional NIR images were acquired at 1310-nm of extracted human molar teeth with occlusal caries. Polarization sensitive optical coherence tomography (PS-OCT) was also used to acquire depth-resolved images of the lesion areas. An imaging processing module was developed to analyze the NIR imaging output and generate optical maps that were used to guide a CO2 laser to selectively remove the lesions at a uniform depth. Post-ablation NIR images were acquired to verify caries removal. Based on the analysis of the NIR images, caries lesions were selectively removed with a CO2 laser while sound tissues were conserved. However, the removal rate varied markedly with the severity of decay and multiple passes were required for caries removal. These initial results are promising but indicate that the selective removal of natural caries is more challenging than the selective removal of artificial lesions due to varying tooth geometry, the highly variable organic/mineral ratio in natural lesions and more complicated lesion structure.
Selective Removal of Natural Occlusal Caries by Coupling Near-infrared Imaging with a CO(2) Laser.
Tao, You-Chen; Fried, Daniel
2008-03-01
Laser removal of dental hard tissue can be combined with optical, spectral or acoustic feedback systems to selectively ablate dental caries and restorative materials. Near-infrared (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue. Last year we successfully demonstrated that near-IR images can be used to guide a CO(2) laser ablation system for the selective removal of artificial caries lesions on smooth surfaces. The objective of this study was to test the hypothesis that two-dimensional near-infrared images of natural occlusal caries can be used to guide a CO(2) laser for selective removal. Two-dimensional NIR images were acquired at 1310-nm of extracted human molar teeth with occlusal caries. Polarization sensitive optical coherence tomography (PS-OCT) was also used to acquire depth-resolved images of the lesion areas. An imaging processing module was developed to analyze the NIR imaging output and generate optical maps that were used to guide a CO(2) laser to selectively remove the lesions at a uniform depth. Post-ablation NIR images were acquired to verify caries removal. Based on the analysis of the NIR images, caries lesions were selectively removed with a CO(2) laser while sound tissues were conserved. However, the removal rate varied markedly with the severity of decay and multiple passes were required for caries removal. These initial results are promising but indicate that the selective removal of natural caries is more challenging than the selective removal of artificial lesions due to varying tooth geometry, the highly variable organic/mineral ratio in natural lesions and more complicated lesion structure.
National Defense Center of Excellence for Industrial Metrology and 3D Imaging
2012-10-18
validation rather than mundane data-reduction/analysis tasks. Indeed, the new financial and technical resources being brought to bear by integrating CT...of extremely fast axial scanners. By replacing the single-spot detector by a detector array, a three-dimensional image is acquired by one depth scan...the number of acquired voxels per complete two-dimensional or three-dimensional image, the axial and lateral resolution, the depth range, the
Three-dimensional T1rho-weighted MRI at 1.5 Tesla.
Borthakur, Arijitt; Wheaton, Andrew; Charagundla, Sridhar R; Shapiro, Erik M; Regatte, Ravinder R; Akella, Sarma V S; Kneeland, J Bruce; Reddy, Ravinder
2003-06-01
To design and implement a magnetic resonance imaging (MRI) pulse sequence capable of performing three-dimensional T(1rho)-weighted MRI on a 1.5-T clinical scanner, and determine the optimal sequence parameters, both theoretically and experimentally, so that the energy deposition by the radiofrequency pulses in the sequence, measured as the specific absorption rate (SAR), does not exceed safety guidelines for imaging human subjects. A three-pulse cluster was pre-encoded to a three-dimensional gradient-echo imaging sequence to create a three-dimensional, T(1rho)-weighted MRI pulse sequence. Imaging experiments were performed on a GE clinical scanner with a custom-built knee coil. We validated the performance of this sequence by imaging articular cartilage of a bovine patella and comparing T(1rho) values measured by this sequence to those obtained with a previously tested two-dimensional imaging sequence. Using a previously developed model for SAR calculation, the imaging parameters were adjusted such that the energy deposition by the radiofrequency pulses in the sequence did not exceed safety guidelines for imaging human subjects. The actual temperature increase due to the sequence was measured in a phantom by an MRI-based temperature mapping technique. Following these experiments, the performance of this sequence was demonstrated in vivo by obtaining T(1rho)-weighted images of the knee joint of a healthy individual. The calculated T(1rho) of articular cartilage in the specimen was similar for both the three-dimensional and two-dimensional methods (84 +/- 2 msec and 80 +/- 3 msec, respectively). The temperature increase in the phantom resulting from the sequence was 0.015 degrees C, which is well below the established safety guidelines. Images of the human knee joint in vivo demonstrate a clear delineation of cartilage from surrounding tissues. We developed and implemented a three-dimensional T(1rho)-weighted pulse sequence on a 1.5-T clinical scanner. Copyright 2003 Wiley-Liss, Inc.
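The abstract does not describe the fitting procedure used to obtain the T(1rho) values; the following sketch shows the generic mono-exponential spin-lock fit, S(TSL) = S0·exp(-TSL/T1rho), for a single voxel using SciPy. The spin-lock times and signal values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def t1rho_decay(tsl, s0, t1rho):
    """Mono-exponential spin-lock decay: S(TSL) = S0 * exp(-TSL / T1rho)."""
    return s0 * np.exp(-tsl / t1rho)

# Hypothetical spin-lock times (ms) and signal intensities for one cartilage voxel
tsl = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
sig = np.array([950.0, 840.0, 742.0, 659.0, 583.0])

(p_s0, p_t1rho), _ = curve_fit(t1rho_decay, tsl, sig, p0=(sig[0], 50.0))
print(f"T1rho = {p_t1rho:.1f} ms")
```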
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Techniques of noninvasive optical tomographic imaging
NASA Astrophysics Data System (ADS)
Rosen, Joseph; Abookasis, David; Gokhler, Mark
2006-01-01
Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In one method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained at the output of a multi-channel optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer, each image is Fourier transformed jointly with an image of a speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the other method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer with different thickness, or different index of refraction, along the layer. The technique is based on the synthesis of a multiple-peak spatial degree of coherence. This degree of coherence enables us to scan different sample points at different altitudes simultaneously, and thus decreases the acquisition time. The same multiple-peak degree of coherence is also used for imaging through the absorbing layer. All our experiments were performed with a quasi-monochromatic light source; therefore, problems of dispersion and inhomogeneous absorption are avoided.
Optical Potential Field Mapping System
NASA Technical Reports Server (NTRS)
Reid, Max B. (Inventor)
1996-01-01
The present invention relates to an optical system for creating a potential field map of a bounded two dimensional region containing a goal location and an arbitrary number of obstacles. The potential field mapping system has an imaging device and a processor. Two image writing modes are used by the imaging device, electron deposition and electron depletion. Patterns written in electron deposition mode appear black and expand. Patterns written in electron depletion mode are sharp and appear white. The generated image represents a robot's workspace. The imaging device under processor control then writes a goal location in the work-space using the electron deposition mode. The black image of the goal expands in the workspace. The processor stores the generated images, and uses them to generate a feedback pattern. The feedback pattern is written in the workspace by the imaging device in the electron deposition mode to enhance the expansion of the original goal pattern. After the feedback pattern is written, an obstacle pattern is written by the imaging device in the electron depletion mode to represent the obstacles in the robot's workspace. The processor compares a stored image to a previously stored image to determine a change therebetween. When no change occurs, the processor averages the stored images to produce the potential field map.
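The patent realizes the potential field optically; as a rough numerical analogue only, the sketch below relaxes a discrete harmonic potential on a grid workspace with the goal held low and obstacles held high, which mimics the expanding-goal/obstacle interplay described above. The grid, goal location, and iteration count are hypothetical.

```python
import numpy as np

def harmonic_potential(occupancy, goal, n_iter=5000):
    """Numerical analogue of a potential-field map: obstacles held at 1,
    the goal held at 0, and free space relaxed toward the average of its
    neighbours (a discrete harmonic potential)."""
    u = np.ones_like(occupancy, dtype=float)
    gy, gx = goal
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        u[occupancy == 1] = 1.0        # obstacles repel
        u[gy, gx] = 0.0                # goal attracts
    return u

# Toy workspace: 1 = obstacle, 0 = free space
grid = np.zeros((40, 40), int)
grid[10:30, 20] = 1
field = harmonic_potential(grid, goal=(35, 35))
# A robot can then descend the field by stepping to the lowest-valued neighbour.
```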
Results from field tests of the one-dimensional Time-Encoded Imaging System.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marleau, Peter; Brennan, James S.; Brubaker, Erik
2014-09-01
A series of field experiments was undertaken to evaluate the performance of the one-dimensional time-encoded imaging system. The significant detection of a Cf-252 fission radiation source was demonstrated at a stand-off of 100 meters. Extrapolations to different quantities of plutonium equivalent at different distances are made. Hardware modifications to the system for follow-on work are suggested.
NASA Astrophysics Data System (ADS)
Mano, Tomohiro; Ohtsuki, Tomi
2017-11-01
The three-dimensional Anderson model is a well-studied model of disordered electron systems that shows the delocalization-localization transition. As in our previous papers on two- and three-dimensional (2D, 3D) quantum phase transitions [
[Three-dimensional reconstruction of functional brain images].
Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I
1999-08-01
We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods to identify activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain surface coloring, make it difficult to understand and interpret the positional relationships among various brain areas. Therefore, we developed three-dimensionally reconstructed images from these functional brain images to improve interpretation. The subjects were 12 normal volunteers. After PET images acquired during daily dialog listening were analyzed by SPM, the following three types of images were constructed: 1) routine images by SPM, 2) three-dimensional static images, and 3) three-dimensional dynamic images. The creation of both the three-dimensional static and dynamic images employed the volume rendering method of VTK (The Visualization Toolkit). Since the functional brain images did not include the original brain anatomy, we synthesized the SPM and MRI brain images using self-made C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Both the three-dimensional static and dynamic images were processed on a personal computer system. Our newly created images showed clearer positional relationships among activated brain areas compared to the conventional method. To date, functional brain images have been employed in fields such as neurology and neurosurgery; however, these images may be useful even in the field of otorhinolaryngology to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science. Currently, the surface model is the most common method of three-dimensional display; however, the volume rendering method may be more effective for imaging regions such as the brain.
NASA Astrophysics Data System (ADS)
Rousson, Johanna; Haar, Jérémy; Santal, Sarah; Kumcu, Asli; Platiša, Ljiljana; Piepers, Bastian; Kimpe, Tom; Philips, Wilfried
2016-03-01
While three-dimensional (3-D) imaging systems are entering hospitals, no study to date has explored the luminance calibration needs of 3-D stereoscopic diagnostic displays and if they differ from two-dimensional (2-D) displays. Since medical display calibration incorporates the human contrast sensitivity function (CSF), we first assessed the 2-D CSF for benchmarking and then examined the impact of two image parameters on the 3-D stereoscopic CSF: (1) five depth plane (DP) positions (between DP: -171 and DP: 2853 mm), and (2) three 3-D inclinations (0 deg, 45 deg, and 60 deg around the horizontal axis of a DP). Stimuli were stereoscopic images of a vertically oriented 2-D Gabor patch at one of seven frequencies ranging from 0.4 to 10 cycles/deg. CSFs were measured for seven to nine human observers with a staircase procedure. The results indicate that the 2-D CSF model remains valid for a 3-D stereoscopic display regardless of the amount of disparity between the stereo images. We also found that the 3-D CSF at DP≠0 does not differ from the 3-D CSF at DP=0 for DPs and disparities which allow effortless binocular fusion. Therefore, the existing 2-D medical luminance calibration algorithm remains an appropriate tool for calibrating polarized stereoscopic medical displays.
Laser one-dimensional range profile and the laser two-dimensional range profile of cylinders
NASA Astrophysics Data System (ADS)
Gong, Yanjun; Wang, Mingjun; Gong, Lei
2015-10-01
The laser one-dimensional range profile, i.e., the scattered power from pulsed-laser illumination of a target as a function of range, is a radar imaging technique. The laser two-dimensional range profile is the two-dimensional range-resolved scattering image of a target illuminated by a pulsed laser. Together, these are referred to as laser range profiles (LRP). The laser range profile reflects the characteristics of the target's shape and surface material. These techniques were motivated by applications of laser radar to target discrimination in ballistic missile defense. The radar equation for pulsed laser illumination is given in this paper. Based on this equation, an analytical model of the laser range profile of a cylinder is presented. Simulation results are given for the laser one-dimensional range profiles of several cylinders, including a cylinder whose surface material has diffuse Lambertian reflectance, and for different pulse widths. The influences of geometric parameters, pulse width, and attitude on the range profiles are analyzed.
Fractal Dimensionality of Pore and Grain Volume of a Siliciclastic Marine Sand
NASA Astrophysics Data System (ADS)
Reed, A. H.; Pandey, R. B.; Lavoie, D. L.
Three-dimensional (3D) spatial distributions of pore and grain volumes were determined from high-resolution computer tomography (CT) images of resin-impregnated marine sands. Using a linear gradient extrapolation method, cubic three-dimensional samples were constructed from two-dimensional CT images. Image porosity (0.37) was found to be consistent with the estimate of porosity by water weight loss technique (0.36). Scaling of the pore volume (Vp) with the linear size (L), Vp ∝ L^D, provides the fractal dimensionalities of the pore volume (D = 2.74 ± 0.02) and grain volume (D = 2.90 ± 0.02), typical for sedimentary materials.
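As a worked illustration of the scaling relation Vp ∝ L^D, the sketch below estimates D as the slope of a log-log fit; the volume values are hypothetical but chosen to be roughly consistent with D ≈ 2.74.

```python
import numpy as np

# Hypothetical pore volumes V_p measured in cubic sub-samples of linear size L
L  = np.array([16, 32, 64, 128, 256])            # voxels
Vp = np.array([6.8e2, 4.5e3, 3.0e4, 2.0e5, 1.3e6])

# V_p ~ L^D  =>  log V_p = D log L + const; D is the slope of the log-log fit
D, intercept = np.polyfit(np.log(L), np.log(Vp), 1)
print(f"fractal dimensionality D = {D:.2f}")
```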
A memory-efficient staining algorithm in 3D seismic modelling and imaging
NASA Astrophysics Data System (ADS)
Jia, Xiaofeng; Yang, Lu
2017-08-01
The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.
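The abstract describes cross-correlating the forward-propagated (regular or stained) wavefield with the backward-propagated receiver wavefield; a minimal sketch of that zero-lag cross-correlation imaging condition is given below, assuming the wavefield snapshots are available as arrays (the array shapes and names are assumptions, not the authors' implementation).

```python
import numpy as np

def crosscorrelation_image(source_wf, receiver_wf):
    """Zero-lag cross-correlation imaging condition, I(x) = sum_t S(x, t) * R(x, t),
    applied once with the regular source wavefield (conventional RTM image) and
    once with the stained wavefield (local image of the target area).

    source_wf, receiver_wf : arrays of shape (nt, nz, nx) holding snapshots of the
    forward-propagated source (or stained) wavefield and the backward-propagated
    receiver wavefield.
    """
    return np.sum(source_wf * receiver_wf, axis=0)

# image_rtm    = crosscorrelation_image(source_snapshots, receiver_snapshots)
# image_target = crosscorrelation_image(stained_snapshots, receiver_snapshots)
```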
NASA Astrophysics Data System (ADS)
Sung, Kung-Bin; Lin, Yang-Hsien; Lin, Fong-jheng; Hsieh, Chao-Mao; Wu, Shang-Ju
2017-04-01
Three-dimensional (3D) refractive-index (RI) microscopy is an emerging technique suitable for live-cell imaging due to its label-free and fast 3D imaging capabilities. We have developed a common-path system to acquire 3D RI microscopic images of cells with excellent speed and stability. After obtaining 3D RI distributions of individual leukocytes, we used a 3D finite-difference time-domain tool to study light scattering properties. Backscattering spectra of lymphocytes, monocytes and neutrophils are different from each other. Backscattering spectra of lymphocytes matched well with those of homogeneous spheres as predicted by Mie theory while backscattering spectra of neutrophils are significantly more intense than those of the other two types. This suggests the possibility of classifying the three types of leukocytes based on backscattering.
Evolution of stereoscopic imaging in surgery and recent advances
Schwab, Katie; Smith, Ralph; Brown, Vanessa; Whyte, Martin; Jourdan, Iain
2017-01-01
In the late 1980s the first laparoscopic cholecystectomies were performed, prompting a sudden rise in technological innovation as the benefits and feasibility of minimal access surgery became recognised. Monocular laparoscopes provided only two-dimensional (2D) viewing with reduced depth perception and contributed to an extended learning curve. Attention turned to producing a usable three-dimensional (3D) endoscopic view for surgeons, utilising different technologies for image capture and image projection. These evolving visual systems have been assessed in various research environments with conflicting outcomes on success and usability, and no overall consensus on their benefit. This review article aims to explain the different types of technologies, summarise the published literature evaluating 3D vs 2D laparoscopy, explain the conflicting outcomes, and discuss the current consensus view. PMID:28874957
Confocal Imaging of porous media
NASA Astrophysics Data System (ADS)
Shah, S.; Crawshaw, D.; Boek, D.
2012-12-01
Carbonate rocks, which hold approximately 50% of the world's oil and gas reserves, have a very complicated and heterogeneous structure in comparison with sandstone reservoir rock. We present advances with different techniques to image, reconstruct, and statistically characterize the micro-geometry of carbonate pores. The main goal here is to develop a technique to obtain two-dimensional and three-dimensional images using Confocal Laser Scanning Microscopy (CLSM). CLSM is used in epi-fluorescent imaging mode, allowing very high optical resolution of features well below 1 μm in size. Images of pore structures were captured with CLSM after the pore spaces in the carbonate samples were impregnated with a fluorescent dyed epoxy resin and scanned in the x-y plane by a laser probe. We discuss in detail the sample preparation required for confocal imaging to obtain sub-micron resolution images of heterogeneous carbonate rocks. We also discuss the technical and practical aspects of this imaging technique, including its advantages and limitations. We present several examples of this application, including studying pore geometry in carbonates and characterizing sub-resolution porosity in two-dimensional images. We then describe approaches to extract statistical information about porosity using image processing and the spatial correlation function. With the current capabilities and limitations of the CLSM technique, we have been able to obtain only limited depth information along the z-axis (~50 μm) for developing three-dimensional images of carbonate rocks. Hence, we have planned a novel technique to obtain greater imaging depth and thus three-dimensional images with the sub-micron resolution possible in the lateral and axial planes.
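The abstract mentions extracting porosity statistics with a spatial correlation function; as an illustration only, and assuming a binary (pore/grain) image and periodic boundaries, the sketch below computes the two-point correlation with FFTs. It is not the authors' processing pipeline.

```python
import numpy as np

def two_point_correlation(pore_mask):
    """Two-point (auto)correlation S2(dx, dy) of a binary pore image, computed
    with FFTs under a periodic-boundary assumption.
    pore_mask : 2-D array with 1 in pore space, 0 in grain."""
    f = np.fft.fft2(pore_mask.astype(float))
    s2 = np.fft.ifft2(f * np.conj(f)).real / pore_mask.size
    return np.fft.fftshift(s2)       # zero lag at the array centre

# The porosity is the zero-lag value of S2, e.g.:
# img = (confocal_slice > threshold).astype(int)
# s2 = two_point_correlation(img)
# phi = s2[s2.shape[0] // 2, s2.shape[1] // 2]
```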
Magnetic resonance imaging of convection in laser-polarized xenon
NASA Technical Reports Server (NTRS)
Mair, R. W.; Tseng, C. H.; Wong, G. P.; Cory, D. G.; Walsworth, R. L.
2000-01-01
We demonstrate nuclear magnetic resonance (NMR) imaging of the flow and diffusion of laser-polarized xenon (129Xe) gas undergoing convection above evaporating laser-polarized liquid xenon. The large xenon NMR signal provided by the laser-polarization technique allows more rapid imaging than one can achieve with thermally polarized gas-liquid systems, permitting shorter time-scale events such as rapid gas flow and gas-liquid dynamics to be observed. Two-dimensional velocity-encoded imaging shows convective gas flow above the evaporating liquid xenon, and also permits the measurement of enhanced gas diffusion near regions of large velocity variation.
Investigation of OPET Performance Using GATE, a Geant4-Based Simulation Software.
Rannou, Fernando R; Kohli, Vandana; Prout, David L; Chatziioannou, Arion F
2004-10-01
A combined optical positron emission tomography (OPET) system is capable of both optical and PET imaging in the same setting, and it can provide information/interpretation not possible in single-mode imaging. The scintillator array here serves the dual function of coupling the optical signal from bioluminescence/fluorescence to the photodetector and also of channeling optical scintillations from the gamma rays. We report simulation results of the PET part of OPET using GATE, a Geant4 simulation package. The purpose of this investigation is the definition of the geometric parameters of the OPET tomograph. OPET is composed of six detector blocks arranged in a hexagonal ring-shaped pattern with an inner radius of 15.6 mm. Each detector consists of a two-dimensional array of 8 × 8 scintillator crystals each measuring 2 × 2 × 10 mm(3). Monte Carlo simulations were performed using the GATE software to measure absolute sensitivity, depth of interaction, and spatial resolution for two ring configurations, with and without gantry rotations, two crystal materials, and several crystal lengths. Images were reconstructed with filtered backprojection after angular interleaving and transverse one-dimensional interpolation of the sinogram. We report absolute sensitivities nearly seven times that of the prototype microPET at the center of field of view and 2.0 mm tangential and 2.3 mm radial resolutions with gantry rotations up to an 8.0 mm radial offset. These performance parameters indicate that the imaging spatial resolution and sensitivity of the OPET system will be suitable for high-resolution and high-sensitivity small-animal PET imaging.
2011-12-01
Transport Phenomena and Thermal Management Applications,” Proceedings of the XXVIII UIT Heat Transfer Conference, Brescia, Italy, June 21-23, 2010...measurements in microscale systems. The integrated confocal microscope system is a critical component to obtain understanding of fluid- heat ...objective of this work was to develop a high speed three-dimensional (3D) confocal imaging system to study coupled fluidic and heat transport
NASA Astrophysics Data System (ADS)
Xin, Zhaowei; Wei, Dong; Li, Dapeng; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Wang, Haiwei; Xie, Changsheng
2018-02-01
In this paper, a polarization difference liquid-crystal microlens array (PD-LCMLA) for three-dimensional imaging through turbid media is fabricated and demonstrated. This device is composed of a twisted nematic liquid-crystal cell (TNLCC), a polarizer, and a liquid-crystal microlens array (LCMLA). The polarizer is sandwiched between the TNLCC and the LCMLA to help the polarization-difference system acquire the orthogonally polarized raw images. A prototype camera for polarization-difference imaging has been constructed by integrating the PD-LCMLA with an image sensor. The orthogonally polarized light-field images are recorded by switching the working state of the TNLCC. Here, by using this special microstructure in conjunction with the polarization-difference algorithm, we demonstrate that three-dimensional information in scattering media can be retrieved from the polarization-difference imaging system with an electrically tunable PD-LCMLA. We further investigate the system's potential functions based on the flexible microstructure. The microstructure provides a wide operating range in the manipulation of incident beams and also offers multiple operation modes for imaging applications, such as conventional planar imaging, polarization imaging, and polarization-difference imaging. Since the PD-LCMLA demonstrates very low power consumption, multiple imaging modes, and simple manufacturing, this kind of device has the potential to be used in many other optical and electro-optical systems.
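The polarization-difference step itself reduces to an image subtraction of the two orthogonally polarized frames; a minimal sketch is given below, with the frame names assumed rather than taken from the paper.

```python
import numpy as np

def polarization_difference(i_parallel, i_perpendicular):
    """Polarization-difference and common-mode (sum) images from the two
    orthogonally polarized raw frames recorded by switching the TNLCC state."""
    i_par = i_parallel.astype(float)
    i_perp = i_perpendicular.astype(float)
    pd_image = i_par - i_perp    # suppresses the randomly polarized scattered light
    ps_image = i_par + i_perp    # conventional intensity (common mode)
    return pd_image, ps_image

# pd, ps = polarization_difference(frame_tnlcc_off, frame_tnlcc_on)
```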
Bindu, G; Semenov, S
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method to solve the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm was tested with 10% added noise, and successful image reconstruction was demonstrated, implying its robustness.
NASA Astrophysics Data System (ADS)
Hu, Chengliang; Amati, Giancarlo; Gullick, Nicola; Oakley, Stephen; Hurmusiadis, Vassilios; Schaeffter, Tobias; Penney, Graeme; Rhode, Kawal
2009-02-01
Knee arthroscopy is a minimally invasive procedure that is routinely carried out for the diagnosis and treatment of pathologies of the knee joint. A high level of expertise is required to carry out this procedure and therefore the clinical training is extensive. There are several reasons for this that include the small field of view seen by the arthroscope and the high degree of distortion in the video images. Several virtual arthroscopy simulators have been proposed to augment the learning process. One of the limitations of these simulators is the generic models that are used. We propose to develop a new virtual arthroscopy simulator that will allow the use of pathology-specific models with an increased level of photo-realism. In order to generate these models we propose to use registered magnetic resonance images (MRI) and arthroscopic video images collected from patients with a variety of knee pathologies. We present a method to perform this registration based on the use of a combined X-ray and MR imaging system (XMR). In order to validate our technique we carried out MR imaging and arthroscopy of a custom-made acrylic phantom in the XMR environment. The registration between the two modalities was computed using a combination of XMR and camera calibration, and optical tracking. Both two-dimensional (2D) and three-dimensional (3D) registration errors were computed and shown to be approximately 0.8 and 3 mm, respectively. Further to this, we qualitatively tested our approach using a more realistic plastic knee model that is used for the arthroscopy training.
Three-dimensional object recognition based on planar images
NASA Astrophysics Data System (ADS)
Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.
1993-01-01
This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedral objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the models in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 maths co-processor. In our overall performance evaluation, based on a test of 600 recognition cycles, the system demonstrated an accuracy of above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions, which must be carefully controlled as in any industrial robotic vision system.
3-dimensional imaging at nanometer resolutions
Werner, James H.; Goodwin, Peter M.; Shreve, Andrew P.
2010-03-09
An apparatus and method for enabling precise, 3-dimensional, photoactivation localization microscopy (PALM) using selective, two-photon activation of fluorophores in a single z-slice of a sample in cooperation with time-gated imaging for reducing the background radiation from other image planes to levels suitable for single-molecule detection and spatial location, are described.
Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim
2015-03-01
PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.
X-ray tests of a two-dimensional stigmatic imaging scheme with variable magnifications
Lu, J.; Bitter, M.; Hill, K. W.; ...
2014-07-22
A two-dimensional stigmatic x-ray imaging scheme, consisting of two spherically bent crystals, one concave and one convex, was recently proposed [M. Bitter et al., Rev. Sci. Instrum. 83, 10E527 (2012)]. In this scheme, the Bragg angles and the radii of curvature of the two crystals are matched to eliminate the astigmatism and to satisfy the Bragg condition across both crystal surfaces for a given x-ray energy. In this paper, we consider more general configurations of this imaging scheme, which allow us to vary the magnification for a given pair of crystals and x-ray energy. The stigmatic imaging scheme has been validated for the first time by imaging x-rays generated by a micro-focus x-ray source with a source size of 8.4 μm, as determined by knife-edge measurements. Results are presented from imaging the tungsten Lα1 emission at 8.3976 keV, using a convex Si-422 crystal and a concave Si-533 crystal with 2d-spacings of 2.21707 Å and 1.65635 Å and radii of curvature of 500 ± 1 mm and 823 ± 1 mm, respectively, showing a spatial resolution of 54.9 μm. Finally, this imaging scheme is expected to be of interest for the two-dimensional imaging of laser produced plasmas.
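As a worked check of the crystal-matching condition, the sketch below computes the first-order Bragg angles of the two crystals at the tungsten Lα1 energy from λ = 2d sin θ, using the 2d-spacings quoted above; the hc constant and the rounded angles in the comments are the only additions.

```python
import numpy as np

HC_KEV_ANGSTROM = 12.39842   # hc in keV·Å

def bragg_angle_deg(two_d_spacing, energy_kev):
    """First-order Bragg angle from lambda = 2 d sin(theta), given the 2d-spacing."""
    wavelength = HC_KEV_ANGSTROM / energy_kev
    return np.degrees(np.arcsin(wavelength / two_d_spacing))

energy = 8.3976                              # W L-alpha1 line, keV
print(bragg_angle_deg(2.21707, energy))      # convex Si-422  -> ~41.7 deg
print(bragg_angle_deg(1.65635, energy))      # concave Si-533 -> ~63.0 deg
```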
Two sided residual refocusing for acoustic lens based photoacoustic imaging system.
Kalloor Joseph, Francis; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund
2018-05-30
In photoacoustic (PA) imaging, an acoustic lens-based system can form a focused image of an object plane. A real-time C-scan PA image can be formed by simply time gating the transducer response. While most of the focusing action is done by the lens, residual refocusing is needed to image multiple depths with high resolution simultaneously. However, a refocusing algorithm for the PA camera has not been studied so far in the literature. In this work, we reformulate this residual refocusing problem for a PA camera as two-sided wave propagation from a planar sensor array. One part of the problem deals with forward wave propagation while the other deals with time reversal. We have chosen a Fast Fourier Transform (FFT) based wave propagation model for the refocusing to maintain the real-time nature of the system. We have conducted point spread function (PSF) measurement experiments at multiple depths and refocused the signal using the proposed method. The full width at half maximum (FWHM), peak value, and signal-to-noise ratio (SNR) of the refocused PSF are analyzed to quantify the effect of refocusing. We believe that using a two-dimensional transducer array combined with the proposed refocusing can lead to real-time volumetric imaging using a lens-based PA imaging system. © 2018 Institute of Physics and Engineering in Medicine.
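The paper's FFT-based propagation model is not spelled out in the abstract; as a sketch of one standard choice, the code below implements monochromatic angular-spectrum propagation of a planar field, which could serve as the forward (and, with a negative distance, time-reversed) building block of such a refocusing step. The parameter names and the single-frequency treatment are assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, dz, c, f):
    """Propagate a monochromatic 2-D field sampled on a plane by a distance dz
    using the FFT-based angular spectrum method.

    field : complex 2-D array sampled with pitch dx [m]
    dz    : propagation distance [m] (negative for back-propagation)
    c, f  : sound speed [m/s] and frequency [Hz]
    """
    k = 2 * np.pi * f / c
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent -> imaginary
    H = np.exp(1j * kz * dz)                               # propagation transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```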
Gosnell, Jordan; Pietila, Todd; Samuel, Bennett P; Kurup, Harikrishnan K N; Haw, Marcus P; Vettukattil, Joseph J
2016-12-01
Three-dimensional (3D) printing is an emerging technology aiding diagnostics, education, and interventional and surgical planning in congenital heart disease (CHD). Three-dimensional printed models have been derived from computed tomography, cardiac magnetic resonance, and 3D echocardiography. However, individually these imaging modalities may not provide adequate visualization of complex CHD. Integrating the strengths of two or more imaging modalities has the potential to enhance visualization of cardiac pathomorphology. We describe the feasibility of hybrid 3D printing from two imaging modalities in a patient with congenitally corrected transposition of the great arteries (L-TGA). Hybrid 3D printing may be useful as an additional tool for cardiologists and cardiothoracic surgeons in planning interventions in children and adults with CHD.
Jani, Shyam S; Low, Daniel A; Lamb, James M
2015-01-01
To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) setup images and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
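The abstract names linear discriminant analysis with 10-fold cross-validation; a minimal sketch of that workflow with scikit-learn is shown below, using placeholder features and labels rather than the image-similarity metrics actually used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

# X: one row of image-similarity metrics per planning/setup image pair (placeholder)
# y: 1 for a mismatched (wrong patient or misaligned) pair, 0 for a correct pair
X = np.random.rand(200, 5)
y = np.random.randint(0, 2, 200)

lda = LinearDiscriminantAnalysis()
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_hat = cross_val_predict(lda, X, y, cv=cv)     # 10-fold cross-validated predictions

tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
mce = (fp + fn) / len(y)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(mce, sensitivity, specificity)
```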
Analysis of two dimensional signals via curvelet transform
NASA Astrophysics Data System (ADS)
Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.
2007-04-01
This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a coefficient matrix with a smaller number of significant coefficients than the wavelet transform guarantees. Additionally, denoising simulations show that the curvelet transform could be a very good tool for removing noise from images.
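Curvelet toolboxes are less standardized than wavelet ones, so the sketch below illustrates the nonlinear-approximation idea (keep the N largest-magnitude coefficients and reconstruct) with PyWavelets as a stand-in; the curvelet transform plays the same game but, as the abstract notes, typically needs fewer coefficients for edge-dominated images.

```python
import numpy as np
import pywt

def nonlinear_approximation(image, n_keep, wavelet="db4", level=3):
    """Keep only the n_keep largest-magnitude wavelet coefficients of a 2-D
    image and reconstruct from them (nonlinear approximation)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.sort(np.abs(arr).ravel())[-n_keep]   # magnitude of the n-th largest
    arr[np.abs(arr) < thresh] = 0.0
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(kept, wavelet)

# approx = nonlinear_approximation(interferogram, n_keep=2000)
```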
Radar systems for a polar mission, volume 1
NASA Technical Reports Server (NTRS)
Moore, R. K.; Claassen, J. P.; Erickson, R. L.; Fong, R. K. T.; Komen, M. J.; Mccauley, J.; Mcmillan, S. B.; Parashar, S. K.
1977-01-01
The application of synthetic aperture radar (SAR) in monitoring and managing earth resources is examined. Synthetic aperture radars form a class of side-looking airborne radar, often referred to as coherent SLAR, which permits fine-resolution radar imagery to be generated at long operating ranges by the use of signal processing techniques. By orienting the antenna beam orthogonal to the motion of the spacecraft carrying the radar, a one-dimensional imagery ray system is converted into a two-dimensional or terrain imaging system. The radar's ability to distinguish - or resolve - closely spaced transverse objects is determined by the length of the pulse. The transmitter components receivers, and the mixer are described in details.
Fast scanning mode and its realization in a scanning acoustic microscope
NASA Astrophysics Data System (ADS)
Ju, Bing-Feng; Bai, Xiaolong; Chen, Jian
2012-03-01
The scanning speed of the two-dimensional stage dominates the efficiency of mechanical scanning measurement systems. This paper presents a detailed scanning time analysis of the conventional raster and spiral scan modes and then proposes two fast alternative scanning modes. Implemented on a self-developed scanning acoustic microscope (SAM), the images measured using the conventional scan mode and the fast scan modes are compared. The total scanning time of the two proposed fast scan modes is reduced by 29%. They offer a better solution for high-speed scanning without sacrificing system stability and do not introduce additional difficulties into the configuration of scanning measurement systems. They can easily be applied to mechanical scanning measurement systems with different driving actuators, such as piezoelectric actuators, linear motors, and dc motors. The proposed fast raster and square spiral scan modes are realized in the SAM but are not specially designed for it. Therefore, they have universal adaptability and can be applied to other scanning measurement systems with two-dimensional mechanical scanning stages, such as atomic force microscopes or scanning tunneling microscopes.
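The paper's exact scan trajectories are not given in the abstract; as an illustration of one plausible square-spiral scheme, the sketch below generates centre-out square-spiral way-points with segment lengths 1, 1, 2, 2, 3, 3, .... The trajectory used in the actual SAM may differ.

```python
def square_spiral(n_turns, step):
    """Generate (x, y) way-points of a square-spiral scan starting at the
    centre: segment lengths grow as 1, 1, 2, 2, 3, 3, ... steps."""
    x = y = 0
    pts = [(x, y)]
    dx, dy = step, 0
    seg = 1
    for turn in range(2 * n_turns):
        for _ in range(seg):
            x, y = x + dx, y + dy
            pts.append((x, y))
        dx, dy = -dy, dx              # rotate the scan direction by 90 degrees
        if turn % 2 == 1:
            seg += 1                  # lengthen the segment every second turn
    return pts

waypoints = square_spiral(n_turns=5, step=1)
```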
Jaremko, Jacob L; Mabee, Myles; Swami, Vimarsha G; Jamieson, Lucy; Chow, Kelvin; Thompson, Richard B
2014-12-01
To use three-dimensional (3D) ultrasonography (US) to quantify the alpha-angle variability due to changing probe orientation during two-dimensional (2D) US of the infant hip and its effect on the diagnostic classification of developmental dysplasia of the hip (DDH). In this institutional research ethics board-approved prospective study, with parental written informed consent, 13-MHz 3D US was added to initial 2D US for 56 hips in 35 infants (mean age, 41.7 days; range, 4-112 days), 26 of whom were female (mean age, 38.7 days; range, 6-112 days) and nine of whom were male (mean age, 50.2 days; range, 4-111 days). Findings were normal at the initial visit in 20 hips, were initially inconclusive but normalized spontaneously at follow-up in 23 hips, and 13 hips were treated for dysplasia. With the computer algorithm, 3D US data were resectioned in planes tilted in 5° increments away from a central plane, as if slowly rotating a 2D US probe, until the resulting images no longer met Graf quality criteria. On each acceptable 2D image, two observers measured alpha angles, and descriptive statistics, including mean, standard deviation, and limits of agreement, were computed. Acceptable 2D images were produced over a range of probe orientations averaging 24° (maximum, 45°) from the central plane. Over this range, alpha-angle variation was 19° (upper limit of agreement), leading to alteration of the diagnostic category of hip dysplasia in 54% of hips scanned. Use of 3D US showed that alpha angles measured at routine 2D US of the hip can vary substantially between 2D scans solely because of changes in probe positioning. Not only could normal hips appear dysplastic, but dysplastic hips also could have normal alpha angles. Three-dimensional US can display the full acetabular shape, which might improve the accuracy of DDH assessment. © RSNA, 2014.
75 FR 77885 - Government-Owned Inventions; Availability for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... three dimensional vascular networks from medical and basic research images. Deregulation of angiogenesis...
Comparison of three-dimensional surface-imaging systems.
Tzou, Chieh-Han John; Artner, Nicole M; Pona, Igor; Hold, Alina; Placheta, Eva; Kropatsch, Walter G; Frey, Manfred
2014-04-01
In recent decades, three-dimensional (3D) surface-imaging technologies have gained popularity worldwide, but because most published articles that mention them are technical, clinicians often have difficulties gaining a proper understanding of them. This article aims to provide the reader with relevant information on 3D surface-imaging systems. In it, we compare the most recent technologies to reveal their differences. We have accessed five international companies with the latest technologies in 3D surface-imaging systems: 3dMD, Axisthree, Canfield, Crisalix and Dimensional Imaging (Di3D; in alphabetical order). We evaluated their technical equipment, independent validation studies and corporate backgrounds. The fastest capturing devices are the 3dMD and Di3D systems, capable of capturing images within 1.5 and 1 ms, respectively. All companies provide software for tissue modifications. Additionally, 3dMD, Canfield and Di3D can fuse computed tomography (CT)/cone-beam computed tomography (CBCT) images into their 3D surface-imaging data. 3dMD and Di3D provide 4D capture systems, which allow capturing the movement of a 3D surface over time. Crisalix greatly differs from the other four systems as it is purely web based and realised via cloud computing. 3D surface-imaging systems are becoming important in today's plastic surgical set-ups, taking surgeons to a new level of communication with patients, surgical planning and outcome evaluation. Technologies used in 3D surface-imaging systems and their intended field of application vary among the companies evaluated. Potential users should define their requirements and the intended assignment of 3D surface-imaging systems in their clinical and research environments before making the final decision for purchase. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Pulsed photoacoustic flow imaging with a handheld system
NASA Astrophysics Data System (ADS)
van den Berg, Pim J.; Daoudi, Khalid; Steenbergen, Wiendelt
2016-02-01
Flow imaging is an important technique in a range of disease areas, but estimating low flow speeds, especially near the walls of blood vessels, remains challenging. Pulsed photoacoustic flow imaging can be an alternative, since there is little signal contamination from background tissue with photoacoustic imaging. We propose flow imaging using a clinical photoacoustic system that is both handheld and portable. The system integrates a linear array with 7.5 MHz central frequency in combination with a high-repetition-rate diode laser to allow high-speed photoacoustic imaging, which is ideal for this application. This work shows the flow imaging performance of the system in vitro using microparticles. Both two-dimensional (2-D) flow images and quantitative flow velocities from 12 to 75 mm/s were obtained. In a transparent bulk medium, flow estimation showed standard errors of ~7% of the estimated speed; in the presence of tissue-realistic optical scattering, the error increased to 40% due to limited signal-to-noise ratio. In the future, photoacoustic flow imaging can potentially be performed in vivo using fluorophore-filled vesicles or with an improved setup on whole blood.
Tian, Peifang; Devor, Anna; Sakadžić, Sava; Dale, Anders M.; Boas, David A.
2011-01-01
Absorption or fluorescence-based two-dimensional (2-D) optical imaging is widely employed in functional brain imaging. The image is a weighted sum of the real signal from the tissue at different depths. This weighting function is defined as "depth sensitivity." Characterizing depth sensitivity and spatial resolution is important to better interpret the functional imaging data. However, due to light scattering and absorption in biological tissues, our knowledge of these is incomplete. We use Monte Carlo simulations to carry out a systematic study of spatial resolution and depth sensitivity for 2-D optical imaging methods with configurations typically encountered in functional brain imaging. We found the following: (i) the spatial resolution is <200 μm for NA ≤0.2 or focal plane depth ≤300 μm. (ii) More than 97% of the signal comes from the top 500 μm of the tissue. (iii) For activated columns with lateral size larger than the spatial resolution, changing the numerical aperture (NA) and focal plane depth does not affect depth sensitivity. (iv) For either smaller columns or large columns covered by surface vessels, increasing NA and/or focal plane depth may improve depth sensitivity at deeper layers. Our results provide valuable guidance for the optimization of optical imaging systems and data interpretation. PMID:21280912
High resolution three-dimensional photoacoustic imaging of human finger joints in vivo
NASA Astrophysics Data System (ADS)
Xi, Lei; Jiang, Huabei
2015-08-01
We present a method for noninvasively imaging the hand joints using a three-dimensional (3D) photoacoustic imaging (PAI) system. This 3D PAI system utilizes cylindrical scanning for data collection and a virtual-detector concept for image reconstruction. The maximum lateral and axial resolutions of the PAI system are 70 μm and 240 μm, respectively. The cross-sectional photoacoustic images of a healthy joint clearly exhibited major internal structures, including the phalanx and tendons, which are not available from current photoacoustic imaging methods. The in vivo PAI results obtained are comparable with the corresponding 3.0 T MRI images of the finger joint. This study suggests that the proposed method has the potential to be used in early detection of joint diseases such as osteoarthritis.
Complexity-Entropy Causality Plane as a Complexity Measure for Two-Dimensional Patterns
Ribeiro, Haroldo V.; Zunino, Luciano; Lenzi, Ervin K.; Santoro, Perseu A.; Mendes, Renio S.
2012-01-01
Complexity measures are essential to understand complex systems and there are numerous definitions to analyze one-dimensional data. However, extensions of these approaches to two or higher-dimensional data, such as images, are much less common. Here, we reduce this gap by applying the ideas of the permutation entropy combined with a relative entropic index. We build up a numerical procedure that can be easily implemented to evaluate the complexity of two or higher-dimensional patterns. We work out this method in different scenarios where numerical experiments and empirical data were taken into account. Specifically, we have applied the method to fractal landscapes generated numerically where we compare our measures with the Hurst exponent; liquid crystal textures where nematic-isotropic-nematic phase transitions were properly identified; 12 characteristic textures of liquid crystals where the different values show that the method can distinguish different phases; and Ising surfaces where our method identified the critical temperature and also proved to be stable. PMID:22916097
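As a minimal sketch of the two-dimensional ordinal-pattern idea underlying the method (the relative entropic index and the full complexity-entropy plane are omitted), the code below computes a normalized permutation entropy from 2x2 sliding windows; window size and normalization follow the standard Bandt-Pompe construction rather than the authors' exact implementation.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def permutation_entropy_2d(image, dx=2, dy=2):
    """Normalized permutation entropy of a 2-D pattern: slide a dx-by-dy window
    over the image, map each window to the ordinal pattern (ranking) of its
    values, and compute the Shannon entropy of the pattern histogram."""
    patterns = {p: 0 for p in permutations(range(dx * dy))}
    ny, nx = image.shape
    for i in range(ny - dy + 1):
        for j in range(nx - dx + 1):
            window = image[i:i + dy, j:j + dx].ravel()
            patterns[tuple(np.argsort(window, kind="stable"))] += 1
    counts = np.array([c for c in patterns.values() if c > 0], float)
    probs = counts / counts.sum()
    h = -np.sum(probs * np.log(probs))
    return h / log(factorial(dx * dy))   # normalize to [0, 1]

# h = permutation_entropy_2d(np.random.rand(128, 128))  # ~1 for a random surface
```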
NASA Technical Reports Server (NTRS)
Novik, Dmitry A.; Tilton, James C.
1993-01-01
The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.
NASA Technical Reports Server (NTRS)
Monaldo, Frank M.; Lyzenga, David R.
1988-01-01
During October 1984, coincident Shuttle Imaging Radar-B synthetic aperture radar (SAR) imagery and wave measurements from airborne instrumentation were acquired. The two-dimensional wave spectrum was measured by both a radar ocean-wave spectrometer and a surface-contour radar aboard the aircraft. In this paper, two-dimensional SAR image intensity variance spectra are compared with these independent measures of ocean wave spectra to verify previously proposed models of the relationship between such SAR image spectra and ocean wave spectra. The results illustrate both the functional relationship between SAR image spectra and ocean wave spectra and the limitations imposed on the imaging of short-wavelength, azimuth-traveling waves.
Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael
2015-01-01
Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
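The abstract cites stereo-triangulation between the high-speed camera and the laser projector; a minimal linear (DLT) triangulation sketch is given below, assuming calibrated 3x4 projection matrices for both devices (the matrix and variable names are hypothetical, not the authors' workflow).

```python
import numpy as np

def triangulate(P_cam, P_proj, x_cam, x_proj):
    """Linear (DLT) triangulation of one laser point from its pixel position in
    the high-speed camera and its known position in the laser-projector pattern,
    given 3x4 projection matrices for each device."""
    def rows(P, x, y):
        return np.array([x * P[2] - P[0],
                         y * P[2] - P[1]])
    A = np.vstack([rows(P_cam, *x_cam), rows(P_proj, *x_proj)])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]               # inhomogeneous 3-D coordinates

# X = triangulate(P_camera, P_projector, (u_cam, v_cam), (u_proj, v_proj))
```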
Zhao, C; Vassiljev, N; Konstantinidis, A C; Speller, R D; Kanicki, J
2017-03-07
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm-1) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
NASA Astrophysics Data System (ADS)
Zhao, C.; Vassiljev, N.; Konstantinidis, A. C.; Speller, R. D.; Kanicki, J.
2017-03-01
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm-1) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
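For readers unfamiliar with how the cascaded-system quantities above combine, the standard frequency-domain relation DQE(f) = MTF²(f) / (q · NNPS(f)), with q the incident photon fluence, can be evaluated directly. The snippet below is a generic sketch of that relation, not the authors' implementation.

```python
import numpy as np

def dqe(mtf, nnps, fluence):
    """Frequency-by-frequency DQE from presampled MTF and normalized NPS.

    mtf, nnps : 1D arrays sampled on the same spatial-frequency axis
    nnps      : normalized noise power spectrum (mm^2)
    fluence   : incident photon fluence q (photons / mm^2)
    """
    return mtf**2 / (fluence * nnps)
```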
Dixit, Sudeepa; Fox, Mark; Pal, Anupam
2014-01-01
Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time consuming and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI) such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze an MRI gastric emptying study (estimated 100 vs. 10,000 mouse clicks). This analysis was not subject to the variation in volume measurements seen between three human observers. In conclusion, the image processing platform presented here processed large volumes of MRI data, such as that produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than with manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229
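A minimal sketch of the thresholding route to volume and volume-time (4D) estimates mentioned above, assuming an isotropic voxel size and a single global intensity threshold (both hypothetical simplifications of the semiautomatic system):

```python
import numpy as np

def volume_from_stack(stack, threshold, voxel_volume_ml):
    """Estimate organ volume from one 3D image stack by simple thresholding.

    stack           : ndarray (n_slices, rows, cols) of signal intensities
    threshold       : intensity above which a voxel is counted as organ/content
    voxel_volume_ml : volume of a single voxel in millilitres
    """
    return np.count_nonzero(stack > threshold) * voxel_volume_ml

def emptying_curve(stacks_over_time, threshold, voxel_volume_ml):
    """Volume-time curve (a crude 4D summary) from a sequence of 3D stacks."""
    return np.array([volume_from_stack(s, threshold, voxel_volume_ml)
                     for s in stacks_over_time])
```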
Usta, Taner A; Ozkaynak, Aysel; Kovalak, Ebru; Ergul, Erdinc; Naki, M Murat; Kaya, Erdal
2015-08-01
Two-dimensional (2D) view is known to cause practical difficulties for surgeons in conventional laparoscopy. Our goal was to evaluate whether the new-generation Three-Dimensional Laparoscopic Vision System (3D LVS) provides greater benefit in terms of execution time and error number during the performance of surgical tasks. This study tests the hypothesis that use of the new-generation 3D LVS can significantly improve technical ability on complex laparoscopic tasks in an experimental model. Twenty-four participants (8 experienced, 8 minimally experienced, and 8 inexperienced) were evaluated for 10 different tasks in terms of total execution time and error number. A 4-point Likert scale was used for subjective assessment of the two imaging modalities. All tasks were completed by all participants. A statistically significant difference was found between the 3D and 2D systems for the tasks of bead transfer and drop, suturing, and pick-and-place in the inexperienced group; for the task of passing through two circles with the needle in the minimally experienced group; and for the tasks of bead transfer and drop, suturing, and passing through two circles with the needle in the experienced group. Three-dimensional imaging was preferred over 2D in 6 of the 10 subjective criteria questions on the 4-point Likert scale. The majority of the tasks were completed in a shorter time using 3D LVS compared to 2D LVS. The subjective Likert-scale ratings from each group also demonstrated a clear preference for 3D LVS. The new 3D LVS has the potential to improve the learning curve and reduce the operating time and error rate of laparoscopic surgeons. Our results suggest that the new-generation 3D HD LVS will be helpful for surgeons in laparoscopy (Clinical Trial ID: NCT01799577, Protocol ID: BEHGynobs-4).
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
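The deconvolution step discussed above is commonly implemented as a truncated-SVD inversion of the arterial-input convolution matrix; the sketch below shows that generic approach (the truncation fraction `lam` plays the role of the regularization strength), not the authors' cascaded-systems code.

```python
import numpy as np

def svd_deconvolve(aif, tissue_tac, dt, lam=0.2):
    """Truncated-SVD deconvolution of a tissue time-attenuation curve.

    aif        : arterial input function samples (1D array)
    tissue_tac : tissue time-attenuation curve (same length)
    dt         : sampling interval in seconds
    lam        : truncation threshold as a fraction of the largest
                 singular value (acts as the regularization strength)
    Returns the flow-scaled residue function k(t) = CBF * R(t).
    """
    n = len(aif)
    # Discretized convolution matrix (lower-triangular Toeplitz).
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > lam * s[0], 1.0 / s, 0.0)   # truncate small values
    k = Vt.T @ (s_inv * (U.T @ tissue_tac))
    return k   # CBF is often taken as max(k)
```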
Towards Automated Screening of Two-dimensional Crystals
Cheng, Anchi; Leung, Albert; Fellmann, Denis; Quispe, Joel; Suloway, Christian; Pulokas, James; Carragher, Bridget; Potter, Clinton S.
2007-01-01
Screening trials to determine the presence of two-dimensional (2D) protein crystals suitable for three-dimensional structure determination using electron crystallography is a very labor-intensive process. Methods compatible with fully automated screening have been developed for the process of crystal production by dialysis and for producing negatively stained grids of the resulting trials. Further automation via robotic handling of the EM grids, and semi-automated transmission electron microscopic imaging and evaluation of the trial grids is also possible. We, and others, have developed working prototypes for several of these tools and tested and evaluated them in a simple screen of 24 crystallization conditions. While further development of these tools is certainly required for a turn-key system, the goal of fully automated screening appears to be within reach. PMID:17977016
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.
Two-dimensional angular transmission characterization of CPV modules.
Herrero, R; Domínguez, C; Askins, S; Antón, I; Sala, G
2010-11-08
This paper proposes a fast method to characterize the two-dimensional angular transmission function of a concentrator photovoltaic (CPV) system. The so-called inverse method, which has been used in the past for the characterization of small optical components, has been adapted to large-area CPV modules. In the inverse method, the receiver cell is forward biased to produce a Lambertian light emission, which reveals the reverse optical path of the optics. Using a large-area collimator mirror, the light beam exiting the optics is projected on a Lambertian screen to create a spatially resolved image of the angular transmission function. An image is then obtained using a CCD camera. To validate this method, the angular transmission functions of a real CPV module have been measured by both direct illumination (flash CPV simulator and sunlight) and the inverse method, and the comparison shows good agreement.
NASA Astrophysics Data System (ADS)
Thomas, Edward; Williams, Jeremiah; Silver, Jennifer
2004-11-01
Over the past five years, the Auburn Plasma Sciences Laboratory (PSL) has applied two-dimensional particle image velocimetry (2D-PIV) techniques [E. Thomas, Phys. Plasmas, 6, 2672 (1999)] to make measurements of particle transport in dusty plasmas. Although important information was obtained from these earlier studies, the complex behavior of the charged microparticles clearly indicated that three-dimensional velocity information is needed. The PSL has recently acquired and installed a stereoscopic PIV (stereo-PIV) diagnostic tool for dusty plasma investigations [E. Thomas et al., Phys. Plasmas, L37 (2004)]. It employs a synchronized dual-laser, dual-camera system for measuring particle transport in three dimensions. Results will be presented on the initial application of stereo-PIV to dusty plasma studies. Additional results will be presented on the use of stereo-PIV for measuring the controlled interaction of two dust clouds.
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1976-01-01
A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.
Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan
2013-01-01
Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
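A simplified sketch of the envelope-estimation step at the heart of bi-dimensional empirical mode decomposition, using a multiquadric radial basis function surface over the image extrema. Practical BEMD adds sifting stopping criteria and is usually run on small regions of interest; this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import Rbf

def mean_envelope(img, size=5):
    """One envelope-estimation step of BEMD.

    Local maxima and minima are located with a sliding window, each extrema
    set is interpolated into a smooth multiquadric RBF surface, and the mean
    of the two surfaces is returned (subtracting it from the image is one
    sifting step toward an intrinsic mode function).
    """
    rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]]

    def surface(mask):
        r, c = np.nonzero(mask)
        rbf = Rbf(r, c, img[r, c], function='multiquadric')
        return rbf(rows, cols)

    maxima = (img == maximum_filter(img, size=size))
    minima = (img == minimum_filter(img, size=size))
    return 0.5 * (surface(maxima) + surface(minima))
```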
NASA Technical Reports Server (NTRS)
Alvertos, Nicolas; Dcunha, Ivan
1993-01-01
The problem of recognizing and positioning objects in three-dimensional space is important for robotics and navigation applications. In recent years, digital range data, also referred to as range images or depth maps, have been available for the analysis of three-dimensional objects owing to the development of several active range finding techniques. The distinct advantage of range images is the explicitness of the surface information available. Many industrial and navigational robotics tasks will be more easily accomplished if such explicit information can be efficiently interpreted. In this research, a new technique based on analytic geometry for the recognition and description of three-dimensional quadric surfaces from range images is presented. Beginning with the explicit representation of quadrics, a set of ten coefficients is determined for various three-dimensional surfaces. For each quadric surface, a unique set of two-dimensional curves which serve as a feature set is obtained from the various angles at which the object is intersected with a plane. Based on a discriminant method, each of the curves is classified as a parabola, circle, ellipse, hyperbola, or a line. Each quadric surface is shown to be uniquely characterized by a set of these two-dimensional curves, thus allowing discrimination from the others. Before the recognition process can be implemented, the range data have to undergo a set of pre-processing operations, thereby making it more presentable to classification algorithms. One such pre-processing step is to study the effect of median filtering on raw range images. Utilizing a variety of surface curvature techniques, reliable sets of image data that approximate the shape of a quadric surface are determined. Since the initial orientation of the surfaces is unknown, a new technique is developed wherein all the rotation parameters are determined and subsequently eliminated. This approach enables us to position the quadric surfaces in a desired coordinate system. Experiments were conducted on raw range images of spheres, cylinders, and cones. Experiments were also performed on simulated data for surfaces such as hyperboloids of one and two sheets, elliptical and hyperbolic paraboloids, elliptical and hyperbolic cylinders, ellipsoids and the quadric cones. Both the real and simulated data yielded excellent results. Our approach is found to be more accurate and computationally inexpensive as compared to traditional approaches, such as the three-dimensional discriminant approach which involves evaluation of the rank of a matrix. Finally, we have proposed one other new approach, which involves the formulation of a mapping between the explicit and implicit forms of representing quadric surfaces. This approach, when fully realized, will yield a three-dimensional discriminant, which will recognize quadric surfaces based upon their component surface patches. This approach is faster than prior approaches and at the same time is invariant to pose and orientation of the surfaces in three-dimensional space.
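The conic discriminant used to label the planar cross-section curves can be written compactly. The sketch below assumes the curve has already been fitted to the general form A x² + B xy + C y² + D x + E y + F = 0 and applies the standard B² − 4AC test; it is an illustration of the discriminant idea, not the authors' full classifier.

```python
def classify_conic(A, B, C, D, E, F, tol=1e-9):
    """Classify the planar curve A x^2 + B xy + C y^2 + D x + E y + F = 0."""
    if abs(A) < tol and abs(B) < tol and abs(C) < tol:
        return "line"                      # no quadratic terms remain
    disc = B * B - 4.0 * A * C
    if disc < -tol:
        # A circle is the special ellipse with equal axes and no xy term.
        return "circle" if abs(A - C) < tol and abs(B) < tol else "ellipse"
    if disc > tol:
        return "hyperbola"
    return "parabola"
```

Collecting the labels of the cross-sections taken at several cutting angles gives the feature set that distinguishes one quadric surface from another.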
Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa
2016-08-08
We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field, with the on-chip beam-splitter horizontally dividing rays according to incident angle and the inner meta-micro-lens collecting the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.
Study on super-resolution three-dimensional range-gated imaging technology
NASA Astrophysics Data System (ADS)
Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao
2018-04-01
Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its advantages of high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-correlation method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of the imaging depth and distance to realize different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The calculation of the 3D point cloud based on the triangle method is analyzed, and a 15 m depth slice of the target 3D point cloud is obtained using two frames, with a distance precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy is analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.
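One common way to turn two overlapping range-gated frames into per-pixel range is an intensity-ratio rule for triangular range-intensity profiles. The sketch below is based on that assumption and is not necessarily the triangle-method variant used in the paper; the gate geometry parameters are placeholders.

```python
import numpy as np

def range_from_gate_pair(img_a, img_b, z_start, depth, eps=1e-6):
    """Per-pixel range from two overlapping range-gated images.

    Assumes triangular range-intensity profiles, so that a target inside the
    shared gate depth splits its return between the two gates in proportion
    to its position. z_start is the near edge of the overlap region and
    depth its extent (both in metres).
    """
    a = img_a.astype(float)
    b = img_b.astype(float)
    ratio = b / np.maximum(a + b, eps)        # 0 at near edge, 1 at far edge
    z = z_start + depth * ratio
    z[(a + b) < eps] = np.nan                 # no return: range undefined
    return z
```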
Magnetoacoustic microscopic imaging of conductive objects and nanoparticles distribution
NASA Astrophysics Data System (ADS)
Liu, Siyu; Zhang, Ruochong; Luo, Yunqi; Zheng, Yuanjin
2017-09-01
Magnetoacoustic tomography has been demonstrated as a powerful and low-cost multi-wave imaging modality. However, due to limited spatial resolution and detection efficiency of magnetoacoustic signal, full potential of the magnetoacoustic imaging remains to be tapped. Here we report a high-resolution magnetoacoustic microscopy method, where magnetic stimulation is provided by a compact solenoid resonance coil connected with a matching network, and acoustic reception is realized by using a high-frequency focused ultrasound transducer. Scanning the magnetoacoustic microscopy system perpendicularly to the acoustic axis of the focused transducer would generate a two-dimensional microscopic image with acoustically determined lateral resolution. It is analyzed theoretically and demonstrated experimentally that magnetoacoustic generation in this microscopic system depends on the conductivity profile of conductive objects and localized distribution of superparamagnetic iron magnetic nanoparticles, based on two different but related implementations. The lateral resolution is characterized. Directional nature of magnetoacoustic vibration and imaging sensitivity for mapping magnetic nanoparticles are also discussed. The proposed microscopy system offers a high-resolution method that could potentially map intrinsic conductivity distribution in biological tissue and extraneous magnetic nanoparticles.
Electronic method for autofluorography of macromolecules on two-D matrices. [Patent application]
Davidson, J.B.; Case, A.L.
1981-12-30
A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100 to 1000 times.
Suzuki, Y; Kambara, H; Kadota, K; Tamaki, S; Yamazato, A; Nohara, R; Osakada, G; Kawai, C
1985-08-01
To evaluate the noninvasive detection of shunt flow using a newly developed real-time 2-dimensional color-coded Doppler flow imaging system (D-2DE), 20 patients were examined, including 10 with secundum atrial septal defect (ASD) and 10 control subjects. These results were compared with contrast 2-dimensional echocardiography (C-2DE). Doppler 2DE displayed the blood flow toward the transducer as red and the blood flow away from the transducer as blue in 8 shades, each shade adding green according to the degree of variance in Doppler frequency. In the patients with ASD, D-2DE clearly visualized left-to-right shunt flow in 7 of 10 patients. In 5 of these 7 patients, C-2DE showed a negative contrast effect in the same area of the right atrium. Thus, D-2DE increased the sensitivity over C-2DE for detecting left-to-right shunt flow (from 50% to 70%). However, the specificity was slightly less in D-2DE (90%) than C-2DE (100%). Doppler 2DE could not visualize right-to-left shunt flow in all patients with ASD, though C-2DE showed a positive contrast effect in the left-sided heart in 9 of 10 patients with ASD. Thus, D-2DE is clinically useful for detecting left-to-right shunt flow in patients with ASD.
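The quoted sensitivity and specificity follow directly from the detection counts; for example, with counts inferred from the abstract (7 of 10 ASD patients detected by D-2DE, 9 of 10 controls correctly negative):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# D-2DE for left-to-right shunt detection in the study population:
sens, spec = sensitivity_specificity(tp=7, fn=3, tn=9, fp=1)   # -> 0.70, 0.90
```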
Preprocessing of 2-Dimensional Gel Electrophoresis Images Applied to Proteomic Analysis: A Review.
Goez, Manuel Mauricio; Torres-Madroñero, Maria Constanza; Röthlisberger, Sarah; Delgado-Trejos, Edilson
2018-02-01
Various methods and specialized software programs are available for processing two-dimensional gel electrophoresis (2-DGE) images. However, due to the anomalies present in these images, a reliable, automated, and highly reproducible system for 2-DGE image analysis has still not been achieved. The most common anomalies found in 2-DGE images include vertical and horizontal streaking, fuzzy spots, and background noise, which greatly complicate computational analysis. In this paper, we review the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction. We also present a quantitative comparison of non-linear filtering techniques applied to synthetic gel images, through analyzing the performance of the filters under specific conditions. Synthetic proteins were modeled into a two-dimensional Gaussian distribution with adjustable parameters for changing the size, intensity, and degradation. Three types of noise were added to the images: Gaussian, Rayleigh, and exponential, with signal-to-noise ratios (SNRs) ranging 8-20 decibels (dB). We compared the performance of wavelet, contourlet, total variation (TV), and wavelet-total variation (WTTV) techniques using parameters SNR and spot efficiency. In terms of spot efficiency, contourlet and TV were more sensitive to noise than wavelet and WTTV. Wavelet worked the best for images with SNR ranging 10-20 dB, whereas WTTV performed better with high noise levels. Wavelet also presented the best performance with any level of Gaussian noise and low levels (20-14 dB) of Rayleigh and exponential noise in terms of SNR. Finally, the performance of the non-linear filtering techniques was evaluated using a real 2-DGE image with previously identified proteins marked. Wavelet achieved the best detection rate for the real image. Copyright © 2018 Beijing Institute of Genomics, Chinese Academy of Sciences and Genetics Society of China. Production and hosting by Elsevier B.V. All rights reserved.
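A small sketch of how a synthetic gel image might be corrupted at a prescribed SNR in decibels, assuming the mean-power SNR definition; the paper's exact noise models and SNR convention may differ.

```python
import numpy as np

def add_gaussian_noise(img, snr_db, seed=0):
    """Corrupt an image with zero-mean Gaussian noise at a target SNR (dB).

    SNR is defined here as 10*log10(signal_power / noise_power); other
    definitions (e.g. peak-based) would change the scaling.
    """
    rng = np.random.default_rng(seed)
    signal_power = np.mean(img.astype(float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), img.shape)
    return img + noise
```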
Two-Dimensional Optoelectronic Graphene Nanoprobes for Neural Network
NASA Astrophysics Data System (ADS)
Hong, Tu; Kitko, Kristina; Wang, Rui; Zhang, Qi; Xu, Yaqiong
2014-03-01
The brain is the most complex network created by nature, with billions of neurons connected by trillions of synapses through sophisticated wiring patterns and countless modulatory mechanisms. Current methods for studying neuronal processes, whether electrophysiology or optical imaging, have significant limitations in throughput and sensitivity. Here, we use graphene, a monolayer of carbon atoms, as a two-dimensional nanoprobe for neural networks. Scanning photocurrent measurement is applied to detect the local integration of electrical and chemical signals in mammalian neurons. Such an interface between a nanoscale electronic device and a biological system provides not only ultra-high sensitivity but also sub-millisecond temporal resolution, owing to the high carrier mobility of graphene.
Accelerated High-Dimensional MR Imaging with Sparse Sampling Using Low-Rank Tensors
He, Jingfei; Liu, Qiegen; Christodoulou, Anthony G.; Ma, Chao; Lam, Fan
2017-01-01
High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI. PMID:27093543
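The partially separable (low-rank) structure exploited above can be illustrated on fully sampled data with a plain truncated SVD of the space-time (Casorati) matrix; this omits the sparse sampling, tensor factorization, and ADMM fitting of the actual method and is only a conceptual sketch.

```python
import numpy as np

def low_rank_approx(frames, rank):
    """Rank-r (partially separable) approximation of a dynamic image series.

    frames : ndarray (n_frames, ny, nx)
    The Casorati matrix (space x time) is truncated to `rank` components:
    spatial coefficient maps times temporal basis functions.
    """
    nt, ny, nx = frames.shape
    casorati = frames.reshape(nt, ny * nx).T          # (space, time)
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # rank-r reconstruction
    return approx.T.reshape(nt, ny, nx)
```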
Two Holes from Using Rasp in 'Snow White' (Stereo)
NASA Technical Reports Server (NTRS)
2008-01-01
This view from the Surface Stereo Imager on NASA's Phoenix Mars Lander shows a portion of the trench informally named 'Snow White,' with two holes near the top of the image that were produced by the first test use of Phoenix's rasp to collect a sample of icy soil. The test was conducted on July 15, 2008, during the 50th Martian day, or sol, since Phoenix landed, and the image was taken later the same day. The two holes are about one centimeter (0.4 inch) apart. The image appears three-dimensional when viewed through blue-red glasses. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Lamb, James M; Agazaryan, Nzhde; Low, Daniel A
2013-10-01
To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments. Copyright © 2013 Elsevier Inc. All rights reserved.
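A minimal sketch of the threshold rule on the registration similarity measure, using the equal-variance one-dimensional linear-discriminant cut point; the function names and accept/review wording are illustrative and do not reflect the vendor's log format.

```python
import numpy as np

def fit_threshold(correct_scores, incorrect_scores):
    """One-dimensional linear-discriminant threshold on a similarity measure.

    With equal priors and roughly equal class variances, the LDA decision
    boundary reduces to the midpoint between the two class means.
    """
    return 0.5 * (np.mean(correct_scores) + np.mean(incorrect_scores))

def flag_setup(score, threshold, higher_is_better=True):
    """Flag a treatment setup whose similarity score falls on the wrong side."""
    ok = score >= threshold if higher_is_better else score <= threshold
    return "accept" if ok else "review: possible wrong patient or site"
```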
NASA Astrophysics Data System (ADS)
Bouma, Brett E.
1998-09-01
The pace of technological advancement of Optical Coherence Tomography (OCT) over the last several years has been extremely rapid. The field has progressed from one-dimensional low-coherence ranging to full three-dimensional imaging, with individual two-dimensional images acquired at near video rate, in a span of less than eight years. Imaging applications have included polymers and advanced composites, ophthalmology, developmental biology, gastroenterology, urology, cardiology, neurology, and gynecology. These preliminary studies indicate the great potential for OCT to make a significant impact, especially in clinical medicine.
Quantitative fluorescence microscopy and image deconvolution.
Swedlow, Jason R
2013-01-01
Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used to remove blurred signal from an image. Copyright © 1998 Elsevier Inc. All rights reserved.
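As one concrete example of a restoration-type algorithm of the kind described above, the Richardson-Lucy iteration estimates an object that, when convolved with the point-spread function, reproduces the observed image. This is a generic sketch, not necessarily the algorithm evaluated in the chapter.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy restoration of a 2D image given its PSF."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]                       # adjoint of the blur
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, eps)     # data / current model
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate
```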
Holographic leaky-wave metasurfaces for dual-sensor imaging.
Li, Yun Bo; Li, Lian Lin; Cai, Ben Geng; Cheng, Qiang; Cui, Tie Jun
2015-12-10
Metasurfaces have huge potential for developing new types of imaging systems owing to their ability to control electromagnetic waves. Here, we propose a new method for dual-sensor imaging based on cross-like holographic leaky-wave metasurfaces composed of hybrid isotropic and anisotropic surface impedance textures. The holographic leaky-wave radiation is generated by special impedance modulation of the surface waves excited by the sensor ports. For one sensor operating independently, the main leaky-wave radiation beam can be frequency-scanned in one spatial dimension, while frequency scanning in the orthogonal spatial dimension is accomplished by the other sensor. Thus, for a probed object, the imaging plane can be illuminated adequately to obtain the two-dimensional backward scattered fields at the dual sensors for reconstructing the object. The correlation between beams at different frequencies is very low because the beams are frequency-scanned rather than randomly generated, and such multiple illuminations with low correlation are well suited to a multi-mode imaging method with high resolution and noise immunity. Good reconstruction results are given to validate the proposed imaging method.
The design and performance characteristics of a cellular logic 3-D image classification processor
NASA Astrophysics Data System (ADS)
Ankeney, L. A.
1981-04-01
The introduction of high-resolution scanning laser radar systems, which are capable of collecting range and reflectivity images, is predicted to significantly influence the development of processors capable of performing autonomous target classification tasks. Actively sensed range images are shown to be superior to passively collected infrared images in both image stability and information content. An illustrated tutorial introduces cellular logic (neighborhood) transformations and two- and three-dimensional erosion and dilation operations, which are used for noise filtering and geometric shape measurement. A unique 'cookbook' approach to selecting a sequence of neighborhood transformations suitable for object measurement is developed and related to false alarm rate and algorithm effectiveness measures. The cookbook design approach is used to develop an algorithm to classify objects based upon their 3-D geometrical features. A Monte Carlo performance analysis is used to demonstrate the utility of the design approach by characterizing the ability of the algorithm to classify randomly positioned three-dimensional objects in the presence of additive noise, scale variations, and other forms of image distortion.
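The neighborhood (cellular logic) noise filtering mentioned above can be illustrated with binary erosion and dilation combined into an opening and a closing; this is a generic sketch, not the processor's actual transformation sequence.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def open_close_filter(mask, structure=np.ones((3, 3), dtype=bool)):
    """Cellular-logic style noise filter on a binary segmentation mask.

    An opening (erosion then dilation) removes isolated noise pixels, and a
    closing (dilation then erosion) fills small holes, leaving the larger
    geometric shapes intact for subsequent measurement.
    """
    opened = binary_dilation(binary_erosion(mask, structure), structure)
    closed = binary_erosion(binary_dilation(opened, structure), structure)
    return closed
```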
Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung
2018-01-01
In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, show that our method gives better classification accuracy than other methods. PMID:29415447
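A toy sketch of a CNN of the general kind described, written in PyTorch; the layer sizes, input resolution, and three-class output are placeholders rather than the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class FitnessCNN(nn.Module):
    """Toy CNN for banknote fitness levels (e.g. fit / normal / unfit).

    Input: single-channel banknote image resized to 64x128 (assumed size).
    """
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 32, n_classes)

    def forward(self, x):                  # x: (batch, 1, 64, 128)
        h = self.features(x)
        return self.classifier(h.flatten(1))
```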
USDA-ARS?s Scientific Manuscript database
The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...
Applications of digital image acquisition in anthropometry
NASA Technical Reports Server (NTRS)
Woolford, B.; Lewis, J. L.
1981-01-01
A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
System optimization on coded aperture spectrometer
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Wang, Helong; Chen, Hongliang; Guo, Chunjie; Zhou, Liwei
2017-10-01
The aim is to find a simple multi-configuration solution, to achieve higher refractive efficiency, and to reduce the disturbance caused by field-of-view (FOV) changes, especially in a two-dimensional spatial expansion. The coded aperture system is designed with a special structure that includes an objective, a coded component, prism reflex system components, a compensatory plate, and an imaging lens. Correlative algorithms and imaging methods are available to ensure that the system can be corrected and optimized adequately. Simulation results show that the system can meet the application requirements in MTF, REA, RMS and other related criteria. Compared with a conventional design, the system is significantly reduced in volume and weight. Therefore, the determining factors are the prototype selection and the system configuration.
2017-11-28
AFRL-AFOSR-JP-TR-2018-0028: In-situ Charge-Density Imaging of Metamaterials made with Switchable Two-dimensional Electron Gas at Oxide Heterointerfaces. Chang Beom Eom, University of Wisconsin; grant FA2386-16-1... Growth was performed using pulsed laser deposition with in-situ reflection high-energy electron diffraction (RHEED).
Fast image matching algorithm based on projection characteristics
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; because the projections are normalized, correct matching is still achieved when the image brightness or signal amplitude increases in proportion. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
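A minimal sketch of the projection idea: both images are collapsed to row and column sums, and the template location is found by one-dimensional normalized correlation, which tolerates proportional brightness changes. Details of the published algorithm may differ.

```python
import numpy as np

def normalized(v):
    v = v - v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def match_by_projection(image, template):
    """Locate `template` in `image` using 1-D projection profiles.

    Both images are reduced to column-sum and row-sum profiles; the best
    offset along each axis maximizes the normalized correlation of the
    profiles. Returns the (row, col) offset of the best match.
    """
    def best_offset(signal, probe):
        probe = normalized(probe)
        scores = [float(np.dot(normalized(signal[i:i + len(probe)]), probe))
                  for i in range(len(signal) - len(probe) + 1)]
        return int(np.argmax(scores))

    col_img, col_tpl = image.sum(axis=0), template.sum(axis=0)
    row_img, row_tpl = image.sum(axis=1), template.sum(axis=1)
    return best_offset(row_img, row_tpl), best_offset(col_img, col_tpl)
```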
A knowledge-based object recognition system for applications in the space station
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1988-01-01
A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in two-dimensional (2D) images for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and feature extraction have been completed. The data structure of the primitive viewing knowledge-base (PVKB) is also complete. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. Frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on a MICROVAX-AI microcomputer in a LISP environment. A simulated 3D scene of simple non-overlapping objects, as well as real camera images of 3D objects of low complexity, has been successfully interpreted.
Mass Storage and Retrieval at Rome Laboratory
NASA Technical Reports Server (NTRS)
Kann, Joshua L.; Canfield, Brady W.; Jamberdino, Albert A.; Clarke, Bernard J.; Daniszewski, Ed; Sunada, Gary
1996-01-01
As the speed and power of modern digital computers continue to advance, the demands on secondary mass storage systems grow. In many cases, the limitations of existing mass storage reduce the overall effectiveness of the computing system. Image storage and retrieval is one important area where improved storage technologies are required. Three-dimensional optical memories offer the advantage of large data density, on the order of 1 Tb/cm³, and faster transfer rates because of the parallel nature of optical recording. Such a system allows for the storage of multiple-Gbit sized images, which can be recorded and accessed at reasonable rates. Rome Laboratory is currently investigating several techniques to perform three-dimensional optical storage including holographic recording, two-photon recording, persistent spectral-hole burning, multi-wavelength DNA recording, and the use of bacteriorhodopsin as a recording material. In this paper, the current status of each of these on-going efforts is discussed. In particular, the potential payoffs as well as possible limitations are addressed.
SCAPS, a two-dimensional ion detector for mass spectrometer
NASA Astrophysics Data System (ADS)
Yurimoto, Hisayoshi
2014-05-01
The Faraday cup (FC) and the electron multiplier (EM) are among the most popular ion detectors for mass spectrometers. The FC is used for high-count-rate ion measurements and the EM can detect single ions. However, the FC has difficulty detecting intensities below kilo-cps, and the EM loses ion counts above mega-cps. Thus, FC and EM are used complementarily, but both are zero-dimensional detectors. On the other hand, the microchannel plate (MCP) is a popular ion-signal amplifier with two-dimensional capability, but an additional detection system must be attached to detect the amplified signals. Two-dimensional readout of MCP signals, however, has not achieved the level of FC and EM systems. A stacked CMOS active pixel sensor (SCAPS) has been developed to detect two-dimensional ion variations over a spatial area using semiconductor technology [1-8]. The SCAPS is an integrating multi-detector, different from the EM and FC, composed of more than 500×500 pixels (micro-detectors) for imaging a cm-scale area with pixels smaller than 20 µm square. The SCAPS can detect from a single ion up to 100 kilo-counts of ions per pixel. Thus, the SCAPS can accumulate up to several giga-counts of ions over all pixels, i.e. over the total imaging area. The SCAPS has been applied to the stigmatic ion optics of a secondary ion mass spectrometer as the detector of an isotope microscope [9]. The isotope microscope provides quantitative isotope images of hundred-micrometer areas on a sample with sub-micrometer resolution and permil precision, and two-dimensional mass spectra over the cm-scale mass dispersion plane of a sector magnet with ten-micrometer resolution. This performance has been applied to two-dimensional isotope spatial distributions, mainly of hydrogen, carbon, nitrogen and oxygen, in natural (extraterrestrial and terrestrial) samples and in samples simulating natural processes [e.g. 10-17]. References: [1] Matsumoto, K., et al. (1993) IEEE Trans. Electron Dev. 40, 82-85. [2] Takayanagi et al. (1999) Proc. 1999 IEEE workshop on Charge-Coupled Devices and Advanced Image Sensors, 159-162. [3] Kunihiro et al. (2001) Nucl. Instrum. Methods Phys. Res. Sec. A 470, 512-519. [4] Nagashima et al. (2001) Surface Interface Anal. 31, 131-137. [5] Takayanagi et al. (2003) IEEE Trans. Electron Dev. 50, 70- 76. [6] Sakamoto and Yurimoto (2006) Surface Interface Anal. 38, 1760-1762. [7] Yamamoto et al. (2010) Surface Interface Anal. 42, 1603-1605. [8] Sakamoto et al. (2012) Jpn. J. Appl. Phys. 51, 076701. [9] Yurimoto et al. (2003) Appl. Surf. Sci. 203-204, 793-797. [10] Nagashima et al. (2004) Nature 428, 921-924. [11] Kunihiro et al. (2005) Geochim. Cosmochim. Acta 69, 763-773. [12] Nakamura et al. (2005) Geology 33, 829-832. [13] Sakamoto et al. (2007) Science 317, 231-233. [14] Greenwood et al. (2008) Geophys. Res. Lett., 35, L05203. [15] Greenwood et al. (2011) Nature Geoscience 4, 79-82. [16] Park et al. (2012) Meteorit. Planet. Sci. 47, 2070-2083. [17] Hashiguchi et al. (2013) Geochim. Cosmochim. Acta. 122, 306-323.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dueñas, Maria Emilia; Essner, Jeffrey J.; Lee, Young Jin
2017-11-02
The zebrafish (Danio rerio) has been widely used as a model vertebrate system to study lipid metabolism, the roles of lipids in diseases, and lipid dynamics in embryonic development. Here, we applied high-spatial resolution matrix-assisted laser desorption/ionization (MALDI)-mass spectrometry imaging (MSI) to map and visualize the three-dimensional spatial distribution of phospholipid classes, phosphatidylcholine (PC), phosphatidylethanolamines (PE), and phosphatidylinositol (PI), in newly fertilized individual zebrafish embryos. This is the first time MALDI-MSI has been applied for three-dimensional chemical imaging of a single cell. PC molecular species are present inside the yolk in addition to the blastodisc, while PE and PI species are mostly absent in the yolk. Two-dimensional MSI was also studied for embryos at different cell stages (1-, 2-, 4-, 8-, and 16-cell stage) to investigate the localization changes of some lipids at various cell developmental stages. Lastly, four different normalization approaches were compared to find reliable relative quantification in 2D and 3D MALDI-MSI data sets.
Visible-Infrared Hyperspectral Image Projector
NASA Technical Reports Server (NTRS)
Bolcar, Matthew
2013-01-01
The VisIR HIP generates spatially-spectrally complex scenes. The generated scenes simulate real-world targets viewed by various remote sensing instruments. The VisIR HIP consists of two subsystems: a spectral engine and a spatial engine. The spectral engine generates spectrally complex uniform illumination that spans the wavelength range between 380 nm and 1,600 nm. The spatial engine generates two-dimensional gray-scale scenes. When combined, the two engines are capable of producing two-dimensional scenes with a unique spectrum at each pixel. The VisIR HIP can be used to calibrate any spectrally sensitive remote-sensing instrument. Tests were conducted on the Wide-field Imaging Interferometer Testbed at NASA's Goddard Space Flight Center. The device is a variation of the calibrated hyperspectral image projector developed by the National Institute of Standards and Technology in Gaithersburg, MD. It uses Gooch & Housego Visible and Infrared OL490 Agile Light Sources to generate arbitrary spectra. The two light sources are coupled to a digital light processing (DLP(TradeMark)) digital mirror device (DMD) that serves as the spatial engine. Scenes are displayed on the DMD synchronously with the desired spectrum. Scene/spectrum combinations are displayed in rapid succession, over time intervals that are short compared to the integration time of the system under test.
Lu, Jian-Yu; Cheng, Jiqi; Wang, Jing
2006-10-01
A general-purpose high frame rate (HFR) medical imaging system has been developed. This system has 128 independent linear transmitters, each of which is capable of producing an arbitrary broadband (about 0.05-10 MHz) waveform of up to +/- 144 V peak voltage on a 75-ohm resistive load using a 12-bit/40-MHz digital-to-analog converter. The system also has 128 independent, broadband (about 0.25-10 MHz), and time-variable-gain receiver channels, each of which has a 12-bit/40-MHz analog-to-digital converter and up to 512 MB of memory. The system is controlled by a personal computer (PC), and radio frequency echo data of each channel are transferred to the same PC via a standard USB 2.0 port for image reconstructions. Using the HFR imaging system, we have developed a new limited-diffraction array beam imaging method with square-wave aperture voltage weightings. With this method, in principle, only one or two transmitters are required to excite a fully populated two-dimensional (2-D) array transducer to achieve an equivalent dynamic focusing in both transmission and reception to reconstruct a high-quality three-dimensional image without the need of the time delays of traditional beam focusing and steering, potentially simplifying the transmitter subsystem of an imager. To validate the method, for simplicity, 2-D imaging experiments were performed using the system. In the in vitro experiment, a custom-made, 128-element, 0.32-mm pitch, 3.5-MHz center frequency linear array transducer with about 50% fractional bandwidth was used to reconstruct images of an ATS 539 tissue-mimicking phantom at an axial distance of 130 mm with a field of view of more than 90 degrees. In the in vivo experiment of a human heart, images with a field of view of more than 90 degrees at 120-mm axial distance were obtained with a 128-element, 2.5-MHz center frequency, 0.15-mm pitch Acuson V2 phased array. To ensure that the system was operated under the limits set by the U.S. Food and Drug Administration, the mechanical index, thermal index, and acoustic output were measured. Results show that higher-quality images can be reconstructed with the square-wave aperture weighting method due to an increased penetration depth as compared to the exact weighting method developed previously, and a frame rate of 486 per second was achieved at a pulse repetition frequency of about 5348 Hz for the human heart.
Casadei, Cecilia M.; Tsai, Ching-Ju; Barty, Anton; ...
2018-01-01
Previous proof-of-concept measurements on single-layer two-dimensional membrane-protein crystals performed at X-ray free-electron lasers (FELs) have demonstrated that the collection of meaningful diffraction patterns, which is not possible at synchrotrons because of radiation-damage issues, is feasible. Here, the results obtained from the analysis of a thousand single-shot, room-temperature X-ray FEL diffraction images from two-dimensional crystals of a bacteriorhodopsin mutant are reported in detail. The high redundancy in the measurements boosts the intensity signal-to-noise ratio, so that the values of the diffracted intensities can be reliably determined down to the detector-edge resolution of 4 Å. The results show that two-dimensional serial crystallography at X-ray FELs is a suitable method to study membrane proteins to near-atomic length scales at ambient temperature. The method presented here can be extended to pump–probe studies of optically triggered structural changes on submillisecond timescales in two-dimensional crystals, which allow functionally relevant large-scale motions that may be quenched in three-dimensional crystals.
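As a rough illustration of how measurement redundancy boosts the merged signal-to-noise ratio, the sketch below inverse-variance-averages repeated single-shot observations of the same reflection index. This is a simplification for illustration only, not the merging pipeline used in the study.

import numpy as np
from collections import defaultdict

def merge_intensities(observations):
    # observations: list of (reflection_index, intensity, sigma) tuples.
    # Observations sharing a reflection index are combined with
    # inverse-variance weights, so the merged sigma shrinks roughly as
    # 1/sqrt(redundancy).
    buckets = defaultdict(list)
    for hk, intensity, sigma in observations:
        buckets[hk].append((intensity, sigma))
    merged = {}
    for hk, obs in buckets.items():
        I = np.array([o[0] for o in obs], dtype=float)
        wgt = 1.0 / np.array([o[1] for o in obs], dtype=float) ** 2
        merged[hk] = (float((wgt * I).sum() / wgt.sum()),   # weighted mean intensity
                      float(1.0 / np.sqrt(wgt.sum())))      # merged uncertainty
    return merged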
Hyperspectral range imaging for transportation systems evaluation
NASA Astrophysics Data System (ADS)
Bridgelall, Raj; Rafert, J. B.; Atwood, Don; Tolliver, Denver D.
2016-04-01
Transportation agencies expend significant resources to inspect critical infrastructure such as roadways, railways, and pipelines. Regular inspections identify important defects and generate data to forecast maintenance needs. However, cost and practical limitations prevent the scaling of current inspection methods beyond relatively small portions of the network. Consequently, existing approaches fail to discover many high-risk defect formations. Remote sensing techniques offer the potential for more rapid and extensive non-destructive evaluations of the multimodal transportation infrastructure. However, optical occlusions and limitations in the spatial resolution of typical airborne and space-borne platforms limit their applicability. This research proposes hyperspectral image classification to isolate transportation infrastructure targets for high-resolution photogrammetric analysis. A plenoptic swarm of unmanned aircraft systems will capture images with centimeter-scale spatial resolution, large swaths, and polarization diversity. The light field solution will incorporate structure-from-motion techniques to reconstruct three-dimensional details of the isolated targets from sequences of two-dimensional images. A comparative analysis of existing low-power wireless communications standards suggests an application dependent tradeoff in selecting the best-suited link to coordinate swarming operations. This study further produced a taxonomy of specific roadway and railway defects, distress symptoms, and other anomalies that the proposed plenoptic swarm sensing system would identify and characterize to estimate risk levels.
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-square error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
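A minimal NumPy sketch of this pipeline follows: Gaussian-kernel PCA to build the feature manifold, a naive linear extrapolation in feature space standing in for the predictor, and a Mika-style fixed-point iteration for the pre-image. The feature-space centring offset in the pre-image step is ignored for brevity, and all names and parameter values are illustrative rather than the authors' settings.

import numpy as np

def rbf_kernel(A, B, sigma):
    # Pairwise Gaussian kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def kpca_fit(X, n_comp, sigma):
    # Kernel PCA on training states X (n_samples x n_dims).
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J                 # centre in feature space
    w, V = np.linalg.eigh(Kc)
    order = np.argsort(w)[::-1][:n_comp]
    w = np.clip(w[order], 1e-12, None)
    alpha = V[:, order] / np.sqrt(w)                   # normalised dual coefficients
    return {"X": X, "K": K, "alpha": alpha, "sigma": sigma}

def kpca_project(model, Y):
    # Project new states Y onto the learned feature manifold.
    X, K, alpha, sigma = model["X"], model["K"], model["alpha"], model["sigma"]
    n = len(X)
    k = rbf_kernel(Y, X, sigma)
    Jt = np.ones((len(Y), n)) / n
    J = np.ones((n, n)) / n
    kc = k - Jt @ K - k @ J + Jt @ K @ J
    return kc @ alpha

def preimage(model, feat, z0, iters=100):
    # Fixed-point pre-image iteration for the Gaussian kernel.
    X, alpha, sigma = model["X"], model["alpha"], model["sigma"]
    gamma = alpha @ feat                               # expansion coefficients
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        wgt = gamma * np.exp(-np.sum((X - z)**2, 1) / (2.0 * sigma**2))
        z = (wgt @ X) / (wgt.sum() + 1e-12)
    return z

# Illustrative use on a trajectory of flattened surface states X (T x d):
# model = kpca_fit(X, n_comp=5, sigma=1.0)
# F = kpca_project(model, X)                  # feature-space trajectory
# f_next = 2.0 * F[-1] - F[-2]                # naive stand-in predictor
# x_next = preimage(model, f_next, z0=X[-1])  # predicted state in original space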
Development and application of a particle image velocimeter for high-speed flows
NASA Astrophysics Data System (ADS)
Molezzi, M. J.; Dutton, J. C.
1992-01-01
A particle image velocimetry (PIV) system has been developed for use in high-speed separated air flows. The image acquisition system uses two 550 mJ/pulse Nd:YAG lasers and is fully controlled by a host Macintosh computer. The interrogation system is also Macintosh-based and performs interrogations at approximately 2.3 sec/spot and 4.0 sec/spot when using the Young's fringe and autocorrelation methods, respectively. The system has been proven in preliminary experiments using known-displacement simulated PIV photographs and a simple axisymmetric jet flow. Further results have been obtained in a transonic wind tunnel operating at Mach 0.4 to 0.5 (135 m/s to 170 m/s). PIV experiments were done with an empty test section to provide uniform flow data for comparison with pressure and LDV data, then with a two-dimensional base model, revealing features of the von Karman vortex street wake and underlying small scale turbulence.
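In a digital PIV interrogation, the displacement at each spot is usually taken from the cross-correlation peak of two interrogation windows. The sketch below shows that step with an FFT-based correlation; it is a generic digital analogue of the Young's fringe and autocorrelation analyses mentioned above, not the authors' interrogation code, and it stops at pixel-level accuracy (no subpixel peak fitting).

import numpy as np

def window_displacement(win_a, win_b):
    # Estimate the mean particle displacement (dy, dx) in pixels between two
    # interrogation windows of identical shape via circular cross-correlation.
    a = win_a.astype(float) - win_a.mean()
    b = win_b.astype(float) - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre          # displacement of win_b relative to win_a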
Image Reconstruction in Radio Astronomy with Non-Coplanar Synthesis Arrays
NASA Astrophysics Data System (ADS)
Goodrick, L.
2015-03-01
Traditional radio astronomy imaging techniques assume that the interferometric array is coplanar, with a small field of view, and that the two-dimensional Fourier relationship between brightness and visibility remains valid, allowing the Fast Fourier Transform to be used. In practice, to acquire more accurate data, the non-coplanar baseline effects need to be incorporated, as small height variations in the array plane introduce the w spatial frequency component. This component adds an additional phase shift to the incoming signals. There are two approaches to account for the non-coplanar baseline effects: either the full three-dimensional brightness and visibility model can be used to reconstruct an image, or the non-coplanar effects can be removed, reducing the three-dimensional relationship to a two-dimensional one. This thesis describes and implements the w-projection and w-stacking algorithms. The aim of these algorithms is to account for the phase error introduced by non-coplanar synthesis array configurations, making the recovered visibilities truer to the actual brightness distribution model. This is done by reducing the 3D visibilities to a 2D visibility model. The algorithms also have the added benefit of wide-field imaging, although w-stacking supports a wider field of view at the cost of more FFT bin support. For w-projection, the w-term is accounted for in the visibility domain by convolving it out of the problem with a convolution kernel, allowing the use of the two-dimensional Fast Fourier Transform. Similarly, the w-stacking algorithm applies a phase correction in the image domain to image layers to produce an intensity model that accounts for the non-coplanar baseline effects. This project considers the KAT7 array for simulation and analysis of the limitations and advantages of both algorithms. Additionally, a variant of the Högbom CLEAN algorithm was used which employs contour trimming for extended source emission flagging. The CLEAN algorithm is an iterative two-dimensional deconvolution method that can further improve image fidelity by removing the effects of the point spread function which can obscure source data.
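The essence of w-stacking, i.e. gridding visibilities into a small number of w-planes, Fourier transforming each plane, and applying the image-domain phase term exp(2*pi*i*w*(n-1)) with n = sqrt(1 - l^2 - m^2) before summing, can be sketched as below. Nearest-neighbour gridding, uniform w-binning, and the variable names are simplifications for illustration, not the thesis implementation.

import numpy as np

def w_stacking_dirty_image(u, v, w, vis, npix, cell, nplanes):
    # u, v, w are baseline coordinates in wavelengths; cell is the pixel size
    # in radians; vis holds the complex visibilities.
    l = (np.arange(npix) - npix // 2) * cell
    ll, mm = np.meshgrid(l, l)
    n_minus_1 = np.sqrt(np.clip(1.0 - ll**2 - mm**2, 0.0, 1.0)) - 1.0

    du = 1.0 / (npix * cell)                           # uv-grid cell size
    edges = np.linspace(w.min(), w.max() + 1e-9, nplanes + 1)
    image = np.zeros((npix, npix), dtype=complex)
    for p in range(nplanes):
        in_plane = (w >= edges[p]) & (w < edges[p + 1])
        if not np.any(in_plane):
            continue
        iu = np.round(u[in_plane] / du).astype(int) + npix // 2
        iv = np.round(v[in_plane] / du).astype(int) + npix // 2
        ok = (iu >= 0) & (iu < npix) & (iv >= 0) & (iv < npix)
        grid = np.zeros((npix, npix), dtype=complex)
        np.add.at(grid, (iv[ok], iu[ok]), vis[in_plane][ok])   # nearest-neighbour gridding
        layer = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
        w_mid = 0.5 * (edges[p] + edges[p + 1])
        image += layer * np.exp(2.0j * np.pi * w_mid * n_minus_1)  # image-domain w-correction
    return image.real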
Thermodynamics of Polaronic States in Artificial Spin Ice
NASA Astrophysics Data System (ADS)
Farhan, Alan
Artificial spin ices represent a class of systems consisting of lithographically patterned nanomagnets arranged in two-dimensional geometries. They were initially introduced as a two-dimensional analogue to geometrically frustrated pyrochlore spin ice, and the most recent introduction of artificial spin ice systems with thermally activated moment fluctuations not only delivered the possibility to directly investigate geometrical frustration and emergent phenomena with real space imaging, but also paved the way to design and investigate new two-dimensional magnetic metamaterials, where material properties can be directly manipulated giving rise to properties that do not exist in nature. Here, taking advantage of cryogenic photoemission electron microscopy, and using the concept of emergent magnetic charges, we are able to directly visualize the creation and annihilation of screened emergent magnetic monopole defects in artificial spin ice. We observe that these polaronic states arise as intermediate states, separating an energetically excited out-of-equilibrium state and low-energy equilibrium configurations. They appear as a result of a local screening effect between emergent magnetic charge defects and their neighboring magnetic charges, thus forming a transient minimum, before the system approaches a global minimum with the least amount of emergent magnetic charge defects. This project is funded by the Swiss National Science Foundation.
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution by providing an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
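A minimal sketch of such an image-to-sound mapping is given below: the image is swept column by column over time, each row is assigned a tone frequency, and pixel brightness modulates the tone amplitude. The frequency range, sweep duration, and sample rate are arbitrary choices for illustration, not the VISOR parameters.

import numpy as np

def image_to_audio(img, duration=1.0, fs=22050, fmin=200.0, fmax=8000.0):
    # Sweep the image column by column over `duration` seconds; each row gets
    # a sine tone (top rows -> higher pitch) whose amplitude follows the pixel
    # brightness, so the listener hears the image as a time-frequency map.
    img = img.astype(float)
    img /= img.max() + 1e-12                       # normalise brightness
    rows, cols = img.shape
    freqs = np.logspace(np.log10(fmax), np.log10(fmin), rows)
    samples_per_col = int(duration * fs / cols)
    t = np.arange(samples_per_col) / fs
    audio = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])   # (rows, samples)
        audio.append((img[:, c:c + 1] * tones).sum(axis=0))
    audio = np.concatenate(audio)
    return audio / (np.abs(audio).max() + 1e-12)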
Matching methods evaluation framework for stereoscopic breast x-ray images.
Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric
2016-01-01
Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special attention has been devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 being a perfect match). LSAD was selected for generating the disparity maps.
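As an illustration of the kind of matcher being ranked, the sketch below implements brute-force block matching with a locally scaled sum of absolute differences (LSAD) cost. It omits the multiresolution pyramid, assumes rectified grey-scale images, and is far slower than a production implementation.

import numpy as np

def lsad_disparity(left, right, block=9, max_disp=32):
    # For every pixel, slide a window along the same row of the right image
    # and keep the disparity with the smallest locally scaled SAD cost: the
    # candidate block is scaled by the ratio of block means before the
    # absolute differences are summed.
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            ref_mean = ref.mean()
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
                scale = ref_mean / (cand.mean() + 1e-12)   # the "locally scaled" part
                cost = np.abs(ref - scale * cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp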
Li, Ke; Tang, Jie; Chen, Guang-Hong
2014-04-01
To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a "redder" NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^(-β) with the exponent β ≈ 0.25, which violated the classical σ ∝ (dose)^(-0.5) power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial uniformity when compared with FBP. (6) A composite image generated from two MBIR images acquired at two different dose levels (D1 and D2) demonstrated lower noise than that of an image acquired at a dose level of D1+D2. The noise characteristics of the MBIR method are significantly different from those of the FBP method. The well-known trade-off relationship between CT image noise and radiation dose has been modified by MBIR to establish a more gradual dependence of noise on dose. Additionally, some other CT noise properties that had been well understood based on linear system theory have also been altered by MBIR. Clinical CT scan protocols that had been optimized based on the classical CT noise properties need to be carefully re-evaluated for systems equipped with MBIR in order to maximize the method's potential clinical benefits in dose reduction and/or in CT image quality improvement. © 2014 American Association of Physicists in Medicine.
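The ensemble-based noise analysis and the power-law fit reported above can be sketched as follows. The NPS normalisation convention shown is one common choice, and the function names are illustrative, not the study's code.

import numpy as np

def nps_2d(images, pixel_size):
    # Ensemble 2-D noise power spectrum: subtract the ensemble mean to remove
    # deterministic structure, Fourier transform each noise realisation and
    # average the squared magnitudes.
    stack = np.asarray(images, dtype=float)        # shape (n_scans, ny, nx)
    noise = stack - stack.mean(axis=0)
    n_img, ny, nx = noise.shape
    spectra = np.abs(np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))) ** 2
    return spectra.mean(axis=0) * pixel_size**2 / (nx * ny)

def fit_noise_power_law(doses, sigmas):
    # Fit sigma = a * dose**(-beta) by linear regression in log-log space,
    # e.g. to check how far beta departs from the classical value of 0.5.
    slope, log_a = np.polyfit(np.log(doses), np.log(sigmas), 1)
    return np.exp(log_a), -slope                   # returns (a, beta)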
PIFEX: An advanced programmable pipelined-image processor
NASA Technical Reports Server (NTRS)
Gennery, D. B.; Wilcox, B.
1985-01-01
PIFEX is a pipelined-image processor being built in the JPL Robotics Lab. It will operate on digitized raster-scanned images (at 60 frames per second for images up to about 300 by 400 and at lesser rates for larger images), performing a variety of operations simultaneously under program control. It thus is a powerful, flexible tool for image processing and low-level computer vision. It also has applications in other two-dimensional problems such as route planning for obstacle avoidance and the numerical solution of two-dimensional partial differential equations (although its low numerical precision limits its use in the latter field). The concept and design of PIFEX are described herein, and some examples of its use are given.
Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej
2011-08-01
A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of rth-order rational variety mapping (RVM) followed by matrix/tensor factorization. Sparseness constraint implies duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it takes implicitly into account nonlinearities present in the image (ie, they are not required to be known) and it increases spectral diversity (ie, contrast) between materials, due to increased dimensionality of the mapped space. This is expected to improve performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders of the experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in the spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. Copyright © 2011 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.
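A compact sketch of the two stages, a second-order rational variety mapping followed by a plain nonnegative matrix factorization standing in for the matrix/tensor factorization step, might look like this. The multiplicative-update NMF, the feature ordering, and all parameters are illustrative assumptions, not the published implementation.

import numpy as np
from itertools import combinations_with_replacement

def rvm_features(pixels, order=2):
    # Rational variety mapping: augment each multispectral pixel (rows of
    # `pixels`, one column per channel) with all monomials of its channels up
    # to `order`, e.g. [1, r, g, b, r^2, rg, rb, g^2, gb, b^2] for order 2, RGB.
    feats = [np.ones((pixels.shape[0], 1))]
    for d in range(1, order + 1):
        for idx in combinations_with_replacement(range(pixels.shape[1]), d):
            feats.append(np.prod(pixels[:, list(idx)], axis=1, keepdims=True))
    return np.hstack(feats)

def nmf(V, k, iters=200, eps=1e-9):
    # Plain multiplicative-update NMF: V (features x pixels) ~ W @ H.
    # The rows of H act as soft class memberships (cells, nuclei, background).
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H

# Illustrative use: V = rvm_features(image_pixels).T; W, H = nmf(V, k=3);
# each pixel is assigned to the class with the largest entry in its column of H.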
Three dimensional full-wave nonlinear acoustic simulations: Applications to ultrasound imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinton, Gianmarco
Characterization of acoustic waves that propagate nonlinearly in an inhomogeneous medium has significant applications to diagnostic and therapeutic ultrasound. The generation of an ultrasound image of human tissue is based on the complex physics of acoustic wave propagation: diffraction, reflection, scattering, frequency dependent attenuation, and nonlinearity. The nonlinearity of wave propagation is used to the advantage of diagnostic scanners that use the harmonic components of the ultrasonic signal to improve the resolution and penetration of clinical scanners. One approach to simulating ultrasound images is to make approximations that can reduce the physics to systems that have a low computational cost. Here a maximalist approach is taken and the full three-dimensional wave physics is simulated with finite differences. This paper demonstrates how finite difference simulations for the nonlinear acoustic wave equation can be used to generate physically realistic two- and three-dimensional ultrasound images anywhere in the body. A specific intercostal liver imaging scenario is simulated for two cases: with the ribs in place, and with the ribs removed. This configuration provides an imaging scenario that cannot be performed in vivo but that can test the influence of the ribs on image quality. Several imaging properties are studied, in particular the beamplots, the spatial coherence at the transducer surface, the distributed phase aberration, and the lesion detectability for imaging at the fundamental and harmonic frequencies. The results indicate, counterintuitively, that at the fundamental frequency the beamplot improves due to the apodization effect of the ribs but at the same time there is more degradation from reverberation clutter. At the harmonic frequency there is significantly less improvement in the beamplot and also significantly less degradation from reverberation. It is shown that even though simulating the full propagation physics is computationally challenging, it is necessary to quantify ultrasound image quality and its sources of degradation.
Optimized doppler optical coherence tomography for choroidal capillary vasculature imaging
NASA Astrophysics Data System (ADS)
Liu, Gangjun; Qi, Wenjuan; Yu, Lingfeng; Chen, Zhongping
2011-03-01
In this paper, we analyzed the retinal and choroidal blood vasculature in the posterior segment of the human eye with optimized color Doppler and Doppler variance optical coherence tomography. Depth-resolved structure, color Doppler and Doppler variance images were compared. Blood vessels down to the capillary level could be imaged with the optimized optical coherence color Doppler and Doppler variance method. For in vivo imaging of human eyes, the bulk-motion-induced bulk phase must be identified and removed before using the color Doppler method. It was found that the Doppler variance method is not sensitive to bulk motion and can be used without removing the bulk phase. A novel, simple and fast segmentation algorithm to identify the retinal pigment epithelium (RPE) was proposed and used to segment the retinal and choroidal layers. The algorithm was based on the detected OCT signal intensity difference between different layers. A spectrometer-based Fourier domain OCT system with a central wavelength of 890 nm and a bandwidth of 150 nm was used in this study. The 3-dimensional imaging volume contained 120 sequential two-dimensional images with 2048 A-lines per image. The total imaging time was 12 seconds and the imaging area was 5 × 5 mm².
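A common way to form the two Doppler contrasts from repeated complex A-lines is via the lag-one autocorrelation; the sketch below follows that standard formulation and is not the authors' processing chain.

import numpy as np

def doppler_maps(alines):
    # alines: complex OCT A-lines acquired at the same lateral position,
    # shape (n_repeats, depth). The lag-one autocorrelation between adjacent
    # A-lines gives the phase shift (colour Doppler) and its decorrelation
    # (Doppler variance), which is less sensitive to the bulk phase.
    ac = np.sum(alines[:-1] * np.conj(alines[1:]), axis=0)
    power = np.sum(np.abs(alines[:-1]) * np.abs(alines[1:]), axis=0)
    phase_shift = np.angle(ac)                          # colour Doppler map
    variance = 1.0 - np.abs(ac) / (power + 1e-12)       # Doppler variance map
    return phase_shift, variance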
Usefulness of biological fingerprint in magnetic resonance imaging for patient verification.
Ueda, Yasuyuki; Morishita, Junji; Kudomi, Shohei; Ueda, Katsuhiko
2016-09-01
The purpose of our study is to investigate the feasibility of automated patient verification using multi-planar reconstruction (MPR) images generated from three-dimensional magnetic resonance (MR) imaging of the brain. Several anatomy-related MPR images generated from the three-dimensional fast scout scan of each MR examination were used as biological fingerprint images in this study. The database of this study consisted of 730 temporal pairs of MR examinations of the brain. We calculated the correlation value between current and prior biological fingerprint images of the same patient, and also for all combinations of two images from different patients, to evaluate the effectiveness of our method for patient verification. The best performance of our system was as follows: a half-total error rate of 1.59 % with a false acceptance rate of 0.023 % and a false rejection rate of 3.15 %, an equal error rate of 1.37 %, and a rank-one identification rate of 98.6 %. Our method makes it possible to verify the identity of the patient using only existing medical images, without any additional equipment. Our method will also contribute to the management of patient misidentification errors caused by human error.
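The core of such a verification scheme is a similarity score between current and prior fingerprint images compared against a decision threshold. A minimal sketch, assuming registered single-slice images and an illustrative threshold value, is given below; the threshold trades off the false acceptance and false rejection rates and would be tuned on a database such as the one described above.

import numpy as np

def correlation_value(img_a, img_b):
    # Zero-mean normalised cross-correlation between two equally sized,
    # registered biological-fingerprint images.
    a = img_a.astype(float).ravel(); a -= a.mean()
    b = img_b.astype(float).ravel(); b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_patient(current_img, prior_img, threshold=0.9):
    # Accept the claimed identity if the correlation between the current and
    # prior fingerprint images exceeds the (illustrative) threshold.
    return correlation_value(current_img, prior_img) >= threshold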
Biodynamic profiling of three-dimensional tissue growth techniques
NASA Astrophysics Data System (ADS)
Sun, Hao; Merrill, Dan; Turek, John; Nolte, David
2016-03-01
Three-dimensional tissue culture presents a more biologically relevant environment in which to perform drug development than conventional two-dimensional cell culture. However, obtaining high-content information from inside three dimensional tissue has presented an obstacle to rapid adoption of 3D tissue culture for pharmaceutical applications. Biodynamic imaging is a high-content three-dimensional optical imaging technology based on low-coherence interferometry and digital holography that uses intracellular dynamics as high-content image contrast. In this paper, we use biodynamic imaging to compare pharmaceutical responses to Taxol of three-dimensional multicellular spheroids grown by three different growth techniques: rotating bioreactor, hanging-drop and plate-grown spheroids. The three growth techniques have systematic variations among tissue cohesiveness and intracellular activity and consequently display different pharmacodynamics under identical drug dose conditions. The in vitro tissue cultures are also compared to ex vivo living biopsies. These results demonstrate that three-dimensional tissue cultures are not equivalent, and that drug-response studies must take into account the growth method.
Realization of integral 3-dimensional image using fabricated tunable liquid lens array
NASA Astrophysics Data System (ADS)
Lee, Muyoung; Kim, Junoh; Kim, Cheol Joong; Lee, Jin Su; Won, Yong Hyub
2015-03-01
Electrowetting has been widely studied for various optical applications such as optical switches, sensors, prisms, and displays. In this study, a vari-focal liquid lens array is developed using the electrowetting principle to construct integral 3-dimensional imaging. The electrowetting principle, which changes the surface tension by applying a voltage, has several advantages for realizing active optical devices, such as fast response time, low electrical consumption, and no mechanical moving parts. Two immiscible liquids, water and oil, are used to form each lens. By applying a voltage to the water, the focal length of the lens can be tuned by changing the contact angle of the water. The fabricated electrowetting vari-focal liquid lens array consists of 1 mm diameter spherical lenses with a 1.6 mm distance between each lens. The number of lenses on the panel is 23x23, and the focal length of the lens array is simultaneously tuned from -125 to 110 diopters depending on the applied voltage. The fabricated lens array is applied to an integral 3-dimensional imaging system. A 3D object is reconstructed by the fabricated liquid lens array from 23x23 elemental images generated by 3D max tools when the liquid lens array is tuned to a convex state. From this vari-focal liquid lens array integral imaging system, we expect that depth-enhanced integral imaging can be realized in the near future.
Design and development of an ultrasound calibration phantom and system
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Ackerman, Martin K.; Chirikjian, Gregory S.; Boctor, Emad M.
2014-03-01
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the ultrasound transducer and the ultrasound image. A phantom or model with known geometry is also required. In this work, we design and test an ultrasound calibration phantom and software. The two main considerations in this work are utilizing our knowledge of ultrasound physics to design the phantom and delivering an easy-to-use calibration process to the user. We explore the use of a three-dimensional printer to create the phantom in its entirety without need for user assembly. We have also developed software to automatically segment the three-dimensional printed rods from the ultrasound image by leveraging knowledge about the shape and scale of the phantom. In this work, we present preliminary results from using this phantom to perform ultrasound calibration. To test the efficacy of our method, we match the projection of the points segmented from the image to the known model and calculate the sum of squared differences between corresponding points for several combinations of motion generation and filtering methods. The best performing combination of motion and filtering techniques had an error of 1.56 mm and a standard deviation of 1.02 mm.
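The reported error metric is essentially a sum of squared distances between the segmented image points, mapped through a candidate calibration, and the known phantom geometry. A minimal sketch of that score follows; the isotropic-scale parameterisation and all names are assumptions, not the authors' software.

import numpy as np

def ssd_calibration_error(model_pts, image_pts, R, t, scale):
    # model_pts: known 3-D rod positions in the phantom frame, shape (N, 3).
    # image_pts: segmented 2-D points in the ultrasound image (pixels), (N, 2).
    # R (3x3), t (3,), scale: candidate calibration mapping the image plane
    # into the phantom frame. A calibration search would minimise this score.
    pts_mm = np.column_stack([image_pts * scale, np.zeros(len(image_pts))])
    mapped = pts_mm @ R.T + t
    diff = mapped - model_pts
    return float(np.sum(diff * diff))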
NASA Astrophysics Data System (ADS)
Pohlmeier, Andreas; Vanderborght, Jan; Haber-Pohlmeier, Sabina; Wienke, Sandra; Vereecken, Harry; Javaux, Mathieu
2010-05-01
Combination of experimental studies with detailed deterministic models helps to understand root water uptake processes. Recently, Javaux et al. developed the RSWMS model by integrating Doussan's root model into the well-established SWMS code [1], which simulates water and solute transport in unsaturated soil [2, 3]. In order to confront RSWMS modeling results with experimental data, we used the magnetic resonance imaging (MRI) technique to monitor root water uptake in situ. Non-invasive 3-D imaging of root system architecture, water content distributions and tracer transport by MR was performed and compared with numerical model calculations. Two MRI experiments were performed and modeled: i) water uptake during drought stress, and ii) transport of a tracer (Gd-DTPA), injected locally into the soil-root system, driven by root water uptake. Firstly, the high-resolution MRI image (0.23 x 0.23 x 0.5 mm) of the root system was transferred into a continuous root system skeleton by a combination of thresholding, region-growing filtering and final manual 3D redrawing of the root strands. Secondly, the two experimental scenarios were simulated by RSWMS with a resolution of about 3 mm. For scenario i), the numerical simulations could reproduce the general trend, that is, the strong water depletion from the top layer of the soil. However, the creation of depletion zones in the vicinity of the roots could not be simulated, due to a poor initial evaluation of the soil hydraulic properties, which instantaneously equilibrated larger differences in water content. The determination of unsaturated conductivities at low water content was needed to improve the model calculations. For scenario ii), the simulations confirmed the solute transport towards the roots by advection. 1. Simunek, J., T. Vogel, and M.T. van Genuchten, The SWMS_2D Code for Simulating Water Flow and Solute Transport in Two-Dimensional Variably Saturated Media. Version 1.21. 1994, U.S. Salinity Laboratory, USDA, ARS: Riverside, California. 2. Javaux, M., et al., Use of a Three-Dimensional Detailed Modeling Approach for Predicting Root Water Uptake. Vadose Zone J., 2008. 7(3): p. 1079-1088. 3. Schröder, T., et al., Effect of Local Soil Hydraulic Conductivity Drop Using a Three Dimensional Root Water Uptake Model. Vadose Zone J., 2008. 7(3): p. 1089-1098.
Layer by layer: complex analysis with OCT technology
NASA Astrophysics Data System (ADS)
Florin, Christian
2017-03-01
Standard visualisation systems capture two-dimensional images and require more or less fast image processing systems. Now, the ASP array (active sensor pixel array) opens a new world in imaging. On the ASP array, each pixel is provided with its own lens and with its own signal pre-processing. The OCT technology works in real time with the highest accuracy. In ASP array systems, the functionalities of data acquisition and signal processing are integrated down to the pixel level. For the extraction of interferometric features, the time-of-flight (TOF) principle is used. The ASP architecture offers demodulation of the optical signal within a pixel at up to 100 kHz and reconstruction of the amplitude and its phase. The dynamic range of image capture with the ASP array is higher by two orders of magnitude in comparison with conventional image sensors. The OCT technology allows topographic imaging in real time with an extremely high geometric spatial resolution. The optical path length is generated by an axial movement of the reference mirror. The amplitude-modulated optical signal has a carrier frequency proportional to the scan rate and contains the depth information. Each maximum of the signal envelope corresponds to a reflection (or scattering) within the sample. The ASP array simultaneously produces 300 x 300 axial interferograms that adjoin each other on all sides. In contrast to standard OCT systems, the signal demodulation for detecting the envelope is not limited by the frame rate of the ASP array. When an optical signal arrives at a pixel of the ASP array, an electrical signal is generated; the background is faded out to avoid saturation of the pixels by high light intensity. The sampled signal is continuously multiplied by a reference of the same frequency in two paths whose phases are shifted by 90 degrees relative to each other, and the products are integrated (averaged). The outputs of the two paths are routed to the PC, where the envelope amplitude and the phase are used to calculate a three-dimensional tomographic image. For 3D measurement, specially designed ASP arrays with a very high image rate are available. If ASP arrays are coupled with the OCT method, layer thicknesses can be determined without contact, sealing seams can be inspected, or geometrical shapes can be measured. From a stack of hundreds of single OCT images, interesting images can be selected and fed to the computer for analysis.
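The per-pixel demodulation described above is a standard I/Q (lock-in) scheme: mix the sampled interferogram with in-phase and quadrature references at the carrier frequency, low-pass filter both paths, and combine them to recover the envelope and phase. The one-dimensional sketch below illustrates this with a simple moving-average low-pass filter; the filter choice and parameters are assumptions, not the ASP hardware implementation.

import numpy as np

def iq_envelope(signal, f_carrier, fs):
    # signal: real-valued sampled interferogram for one pixel;
    # f_carrier: carrier frequency set by the reference-mirror scan rate;
    # fs: sampling rate. Returns the demodulated envelope and phase.
    t = np.arange(len(signal)) / fs
    i_path = signal * np.cos(2 * np.pi * f_carrier * t)   # in-phase mixing
    q_path = signal * np.sin(2 * np.pi * f_carrier * t)   # quadrature mixing
    win = max(1, int(fs / f_carrier))                     # average over ~1 carrier period
    kernel = np.ones(win) / win
    i_lp = np.convolve(i_path, kernel, mode="same")
    q_lp = np.convolve(q_path, kernel, mode="same")
    envelope = 2.0 * np.hypot(i_lp, q_lp)                 # recovers the modulation amplitude
    phase = np.arctan2(q_lp, i_lp)
    return envelope, phase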
Image system for three-dimensional, 360°, time-sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.