Three Dimensional Imaging with Multiple Wavelength Speckle Interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernacki, Bruce E.; Cannon, Bret D.; Schiffern, John T.
2014-05-28
We present the design, modeling, construction, and results of a three-dimensional imager based upon multiple-wavelength speckle interferometry. A surface under test is illuminated with tunable laser light in a Michelson interferometer configuration while a speckled image is acquired at each laser frequency step. The resulting hypercube is Fourier transformed in the frequency dimension and the beat frequencies that result map the relative offsets of surface features. Synthetic wavelengths resulting from the laser tuning can probe features ranging from 18 microns to hundreds of millimeters. Three dimensional images will be presented along with modeling results.
The study of integration about measurable image and 4D production
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun
2008-12-01
In this paper, we create the geospatial data of three-dimensional (3D) modeling by the combination of digital photogrammetry and digital close-range photogrammetry. For large-scale geographical background, we make the establishment of DEM and DOM combination of three-dimensional landscape model based on the digital photogrammetry which uses aerial image data to make "4D" (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic and DRG: Digital Raster Graphic) production. For the range of building and other artificial features which the users are interested in, we realize that the real features of the three-dimensional reconstruction adopting the method of the digital close-range photogrammetry can come true on the basis of following steps : non-metric cameras for data collection, the camera calibration, feature extraction, image matching, and other steps. At last, we combine three-dimensional background and local measurements real images of these large geographic data and realize the integration of measurable real image and the 4D production.The article discussed the way of the whole flow and technology, achieved the three-dimensional reconstruction and the integration of the large-scale threedimensional landscape and the metric building.
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Reorienting in Images of a Three-Dimensional Environment
ERIC Educational Resources Information Center
Kelly, Debbie M.; Bischof, Walter F.
2005-01-01
Adult humans searched for a hidden goal in images depicting 3-dimensional rooms. Images contained either featural cues, geometric cues, or both, which could be used to determine the correct location of the goal. In Experiment 1, participants learned to use featural and geometric information equally well. However, men and women showed significant…
He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian
2015-02-01
Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images had important significance on image reading and diagnosis. As a part of the DIOCM standard, WADO service defined how to access DICOM objects on the Web, but it didn't involve three-dimensional post-processing operations on the series images. This paper analyzed the technical features of three-dimensional post-processing operations on the volume data, and then designed and implemented a web service system for three-dimensional post-processing operations of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. As results, it was proved that the proposed system could support three-dimensional post-processing service of medical images for multiple clients at the same moment, which met the demand of accessing three-dimensional post-processing operations on the volume data on the web.
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three dimensional mask-induced imaging artifacts in EUV lithography. The major mask diffraction induced imaging artifacts are first identified by applying the Zernike analysis of the mask nearfield spectrum of 2D lines/spaces. Three dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature orientation dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons of EUV-specific imaging artifacts and to devise illumination and feature dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. At last, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in two-dimensional format containing multispectral information. Also, the semantic information is clearly visualized, which ground features can be better recognized and classified via supervised or unsupervised classification methods easily. Nevertheless, the shortcomings of multispectral images are highly depending on light conditions, and classification results lack of three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high accuracy point cloud data. The advantages of LiDAR are high data acquisition rate, independent of light conditions and can directly produce three-dimensional coordinates. However, comparing with multispectral images, the disadvantage is multispectral information shortage, which remains a challenge in ground feature classification through massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can produce a integrate solution for point cloud classification. Therefore, this research acquires visible light and near infrared images, via close range photogrammetry, by matching images automatically through free online service for multispectral point cloud generation. Then, one can use three-dimensional affine coordinate transformation to compare the data increment. At last, the given threshold of height and color information is set as threshold in classification.
Online 3D Ear Recognition by Combining Global and Local Features.
Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David
2016-01-01
The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.
Online 3D Ear Recognition by Combining Global and Local Features
Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David
2016-01-01
The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%. PMID:27935955
Three-dimensional biofilm structure quantification.
Beyenal, Haluk; Donovan, Conrad; Lewandowski, Zbigniew; Harkin, Gary
2004-12-01
Quantitative parameters describing biofilm physical structure have been extracted from three-dimensional confocal laser scanning microscopy images and used to compare biofilm structures, monitor biofilm development, and quantify environmental factors affecting biofilm structure. Researchers have previously used biovolume, volume to surface ratio, roughness coefficient, and mean and maximum thicknesses to compare biofilm structures. The selection of these parameters is dependent on the availability of software to perform calculations. We believe it is necessary to develop more comprehensive parameters to describe heterogeneous biofilm morphology in three dimensions. This research presents parameters describing three-dimensional biofilm heterogeneity, size, and morphology of biomass calculated from confocal laser scanning microscopy images. This study extends previous work which extracted quantitative parameters regarding morphological features from two-dimensional biofilm images to three-dimensional biofilm images. We describe two types of parameters: (1) textural parameters showing microscale heterogeneity of biofilms and (2) volumetric parameters describing size and morphology of biomass. The three-dimensional features presented are average (ADD) and maximum diffusion distances (MDD), fractal dimension, average run lengths (in X, Y and Z directions), aspect ratio, textural entropy, energy and homogeneity. We discuss the meaning of each parameter and present the calculations in detail. The developed algorithms, including automatic thresholding, are implemented in software as MATLAB programs which will be available at site prior to publication of the paper.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located and then the location of that point in a three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
Shankar, Hariharan; Reddy, Sapna
2012-07-01
Ultrasound imaging has gained acceptance in pain management interventions. Features of myofascial pain syndrome have been explored using ultrasound imaging and elastography. There is a paucity of reports showing the benefit clinically. This report provides three-dimensional features of taut bands and highlights the advantages of using two-dimensional ultrasound imaging to improve targeting of taut bands in deeper locations. Fifty-eight-year-old man with pain and decreased range of motion of the right shoulder was referred for further management of pain above the scapula after having failed conservative management for myofascial pain syndrome. Three-dimensional ultrasound images provided evidence of aberrancy in the architecture of the muscle fascicles around the taut bands compared to the adjacent normal muscle tissue during serial sectioning of the accrued image. On two-dimensional ultrasound imaging over the palpated taut band, areas of hyperechogenicity were visualized in the trapezius and supraspinatus muscles. Subsequently, the patient received ultrasound-guided real-time lidocaine injections to the trigger points with successful resolution of symptoms. This is a successful demonstration of utility of ultrasound imaging of taut bands in the management of myofascial pain syndrome. Utility of this imaging modality in myofascial pain syndrome requires further clinical validation. Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Mayhew, Craig M.
2009-02-01
Vision III Imaging, Inc. (the Company) has developed Parallax Image Display (PIDTM) software tools to critically align and display aerial images with parallax differences. Terrain features are rendered obvious to the viewer when critically aligned images are presented alternately at 4.3 Hz. The recent inclusion of digital elevation models in geographic data browsers now allows true three-dimensional parallax to be acquired from virtual globe programs like Google Earth. The authors have successfully developed PID methods and code that allow three-dimensional geographical terrain data to be visualized using temporal parallax differences.
Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography.
Wojtkowski, Maciej; Srinivasan, Vivek; Fujimoto, James G; Ko, Tony; Schuman, Joel S; Kowalczyk, Andrzej; Duker, Jay S
2005-10-01
To demonstrate high-speed, ultrahigh-resolution, 3-dimensional optical coherence tomography (3D OCT) and new protocols for retinal imaging. Ultrahigh-resolution OCT using broadband light sources achieves axial image resolutions of approximately 2 microm compared with standard 10-microm-resolution OCT current commercial instruments. High-speed OCT using spectral/Fourier domain detection enables dramatic increases in imaging speeds. Three-dimensional OCT retinal imaging is performed in normal human subjects using high-speed ultrahigh-resolution OCT. Three-dimensional OCT data of the macula and optic disc are acquired using a dense raster scan pattern. New processing and display methods for generating virtual OCT fundus images; cross-sectional OCT images with arbitrary orientations; quantitative maps of retinal, nerve fiber layer, and other intraretinal layer thicknesses; and optic nerve head topographic parameters are demonstrated. Three-dimensional OCT imaging enables new imaging protocols that improve visualization and mapping of retinal microstructure. An OCT fundus image can be generated directly from the 3D OCT data, which enables precise and repeatable registration of cross-sectional OCT images and thickness maps with fundus features. Optical coherence tomography images with arbitrary orientations, such as circumpapillary scans, can be generated from 3D OCT data. Mapping of total retinal thickness and thicknesses of the nerve fiber layer, photoreceptor layer, and other intraretinal layers is demonstrated. Measurement of optic nerve head topography and disc parameters is also possible. Three-dimensional OCT enables measurements that are similar to those of standard instruments, including the StratusOCT, GDx, HRT, and RTA. Three-dimensional OCT imaging can be performed using high-speed ultrahigh-resolution OCT. Three-dimensional OCT provides comprehensive visualization and mapping of retinal microstructures. The high data acquisition speeds enable high-density data sets with large numbers of transverse positions on the retina, which reduces the possibility of missing focal pathologies. In addition to providing image information such as OCT cross-sectional images, OCT fundus images, and 3D rendering, quantitative measurement and mapping of intraretinal layer thickness and topographic features of the optic disc are possible. We hope that 3D OCT imaging may help to elucidate the structural changes associated with retinal disease as well as improve early diagnosis and monitoring of disease progression and response to treatment.
NASA Astrophysics Data System (ADS)
Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab
2017-01-01
Two types of scores extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprint for personal recognition systems are merged, introducing a local image descriptor for 2-D palmprint-based recognition systems, named bank of binarized statistical image features (B-BSIF). The main idea of B-BSIF is that the extracted histograms from the binarized statistical image features (BSIF) code images (the results of applying the different BSIF descriptor size with the length 12) are concatenated into one to produce a large feature vector. 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied for reconstructing illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from SQI images, Gabor wavelets are defined and used. Indeed, the dimensionality reduction methods have shown their ability in biometrics systems. Given this, a principal component analysis (PCA)+linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. Then, a comparison was made between the proposed algorithm and other existing methods in the literature. Results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of B-BSIF descriptor with the score of the SQI+Gabor wavelets+PCA+LDA method, yielding an equal error rate of 0.00% and a recognition rate of rank-1=100.00%.
Three-Dimensional Cataract Crystalline Lens Imaging With Swept-Source Optical Coherence Tomography.
de Castro, Alberto; Benito, Antonio; Manzanera, Silvestre; Mompeán, Juan; Cañizares, Belén; Martínez, David; Marín, Jose María; Grulkowski, Ireneusz; Artal, Pablo
2018-02-01
To image, describe, and characterize different features visible in the crystalline lens of older adults with and without cataract when imaged three-dimensionally with a swept-source optical coherence tomography (SS-OCT) system. We used a new SS-OCT laboratory prototype designed to enhance the visualization of the crystalline lens and imaged the entire anterior segment of both eyes in two groups of participants: patients scheduled to undergo cataract surgery, n = 17, age range 36 to 91 years old, and volunteers without visual complains, n = 14, age range 20 to 81 years old. Pre-cataract surgery patients were also clinically graded according to the Lens Opacification Classification System III. The three-dimensional location and shape of the visible opacities were compared with the clinical grading. Hypo- and hyperreflective features were visible in the lens of all pre-cataract surgery patients and in some of the older adults in the volunteer group. When the clinical examination revealed cortical or subcapsular cataracts, hyperreflective features were visible either in the cortex parallel to the surfaces of the lens or in the posterior pole. Other type of opacities that appeared as hyporeflective localized features were identified in the cortex of the lens. The OCT signal in the nucleus of the crystalline lens correlated with the nuclear cataract clinical grade. A dedicated OCT is a useful tool to study in vivo the subtle opacities in the cataractous crystalline lens, revealing its position and size three-dimensionally. The use of these images allows obtaining more detailed information on the age-related changes leading to cataract.
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness.
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using Canny extractor and Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenge problems throughout the collaborated reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining works of this approach are also discussed. PMID:22408539
Three-dimensional modeling of tea-shoots using images and models.
Wang, Jian; Zeng, Xianyin; Liu, Jianbing
2011-01-01
In this paper, a method for three-dimensional modeling of tea-shoots with images and calculation models is introduced. The process is as follows: the tea shoots are photographed with a camera, color space conversion is conducted, using an improved algorithm that is based on color and regional growth to divide the tea shoots in the images, and the edges of the tea shoots extracted with the help of edge detection; after that, using the divided tea-shoot images, the three-dimensional coordinates of the tea shoots are worked out and the feature parameters extracted, matching and calculation conducted according to the model database, and finally the three-dimensional modeling of tea-shoots is completed. According to the experimental results, this method can avoid a lot of calculations and has better visual effects and, moreover, performs better in recovering the three-dimensional information of the tea shoots, thereby providing a new method for monitoring the growth of and non-destructive testing of tea shoots.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposed a method of reconstructing three-dimensional (3D) scene from two light field images capture by Lytro illium. The work was carried out by first extracting the sub-aperture images from light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. Structure from motion (SFM) algorithm is further used on the registration completed sub-aperture images to reconstruct the three-dimensional scene. 3D sparse point cloud was obtained in the end. The method shows that the 3D reconstruction can be implemented by only two light field camera captures, rather than at least a dozen times captures by traditional cameras. This can effectively solve the time-consuming, laborious issues for 3D reconstruction based on traditional digital cameras, to achieve a more rapid, convenient and accurate reconstruction.
A stereo remote sensing feature selection method based on artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi
2014-05-01
To improve the efficiency of stereo information for remote sensing classification, a stereo remote sensing feature selection method is proposed in this paper presents, which is based on artificial bee colony algorithm. Remote sensing stereo information could be described by digital surface model (DSM) and optical image, which contain information of the three-dimensional structure and optical characteristics, respectively. Firstly, three-dimensional structure characteristic could be analyzed by 3D-Zernike descriptors (3DZD). However, different parameters of 3DZD could descript different complexity of three-dimensional structure, and it needs to be better optimized selected for various objects on the ground. Secondly, features for representing optical characteristic also need to be optimized. If not properly handled, when a stereo feature vector composed of 3DZD and image features, that would be a lot of redundant information, and the redundant information may not improve the classification accuracy, even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimized frame for this stereo feature selection problem is created, and artificial bee colony algorithm is introduced for solving this optimization problem. Experimental results show that the proposed method can effectively improve the computational efficiency, improve the classification accuracy.
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.
Evaluation of three-dimensional virtual perception of garments
NASA Astrophysics Data System (ADS)
Aydoğdu, G.; Yeşilpinar, S.; Erdem, D.
2017-10-01
In recent years, three-dimensional design, dressing and simulation programs came into prominence in the textile industry. By these programs, the need to produce clothing samples for every design in design process has been eliminated. Clothing fit, design, pattern, fabric and accessory details and fabric drape features can be evaluated easily. Also, body size of virtual mannequin can be adjusted so more realistic simulations can be created. Moreover, three-dimensional virtual garment images created by these programs can be used while presenting the product to end-user instead of two-dimensional photograph images. In this study, a survey was carried out to investigate the visual perception of consumers. The survey was conducted for three different garment types, separately. Questions about gender, profession etc. was asked to the participants and expected them to compare real samples and artworks or three-dimensional virtual images of garments. When survey results were analyzed statistically, it is seen that demographic situation of participants does not affect visual perception and three-dimensional virtual garment images reflect the real sample characteristics better than artworks for each garment type. Also, it is reported that there is no perception difference depending on garment type between t-shirt, sweatshirt and tracksuit bottom.
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Multispectral embedding-based deep neural network for three-dimensional human pose recovery
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng
2018-01-01
Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses using the corresponding two-dimensional image features. Therefore, the pose recovery performance highly depends on the image representations. We propose a multispectral embedding-based deep neural network (MSEDNN) to automatically obtain the most discriminative features from multiple deep convolutional neural networks and then embed their penultimate fully connected layers into a low-dimensional manifold. This compact manifold can explore not only the optimum output from multiple deep networks but also the complementary properties of them. Furthermore, the distribution of each hierarchy discriminative manifold is sufficiently smooth so that the training process of our MSEDNN can be effectively implemented only using few labeled data. Our proposed network contains a body joint detector and a human pose regressor that are jointly trained. Extensive experiments conducted on four databases show that our proposed MSEDNN can achieve the best recovery performance compared with the state-of-the-art methods.
Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun
2015-01-01
Background and Objective To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. A plaster model of a subject’s upper and lower dentition were made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporal denture base resin was used to make a 3-cm handle extending outside the mouth connecting the anterior labial surface of the occlusal splint with a detection target with intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Comparisons between the coordinate values and the actual values of the 30 intersections on the detection target were then analyzed using paired t-tests. Results The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate values and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test result showed that the coordinate values of the 30 cross points were considered statistically no significant. (P<0.05) Conclusions Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording accuracy of approximately ± 0.1 mm, and is therefore suitable for clinical application. Certainly, further research is necessary to confirm the clinical applications of the method. PMID:26375800
Application of volume rendering technique (VRT) for musculoskeletal imaging.
Darecki, Rafał
2002-10-30
A review of the applications of volume rendering technique in musculoskeletal three-dimensional imaging from CT data. General features, potential and indications for applying the method are presented.
Athermally photoreduced graphene oxides for three-dimensional holographic images
Li, Xiangping; Ren, Haoran; Chen, Xi; Liu, Juan; Li, Qin; Li, Chengmingyue; Xue, Gaolei; Jia, Jia; Cao, Liangcai; Sahu, Amit; Hu, Bin; Wang, Yongtian; Jin, Guofan; Gu, Min
2015-01-01
The emerging graphene-based material, an atomic layer of aromatic carbon atoms with exceptional electronic and optical properties, has offered unprecedented prospects for developing flat two-dimensional displaying systems. Here, we show that reduced graphene oxide enabled write-once holograms for wide-angle and full-colour three-dimensional images. This is achieved through the discovery of subwavelength-scale multilevel optical index modulation of athermally reduced graphene oxides by a single femtosecond pulsed beam. This new feature allows for static three-dimensional holographic images with a wide viewing angle up to 52 degrees. In addition, the spectrally flat optical index modulation in reduced graphene oxides enables wavelength-multiplexed holograms for full-colour images. The large and polarization-insensitive phase modulation over π in reduced graphene oxide composites enables to restore vectorial wavefronts of polarization discernible images through the vectorial diffraction of a reconstruction beam. Therefore, our technique can be leveraged to achieve compact and versatile holographic components for controlling light. PMID:25901676
Efficient local representations for three-dimensional palmprint recognition
NASA Astrophysics Data System (ADS)
Yang, Bing; Wang, Xiaohua; Yao, Jinliang; Yang, Xin; Zhu, Wenhua
2013-10-01
Palmprints have been broadly used for personal authentication because they are highly accurate and incur low cost. Most previous works have focused on two-dimensional (2-D) palmprint recognition in the past decade. Unfortunately, 2-D palmprint recognition systems lose the shape information when capturing palmprint images. Moreover, such 2-D palmprint images can be easily forged or affected by noise. Hence, three-dimensional (3-D) palmprint recognition has been regarded as a promising way to further improve the performance of palmprint recognition systems. We have developed a simple, but efficient method for 3-D palmprint recognition by using local features. We first utilize shape index representation to describe the geometry of local regions in 3-D palmprint data. Then, we extract local binary pattern and Gabor wavelet features from the shape index image. The two types of complementary features are finally fused at a score level for further improvements. The experimental results on the Hong Kong Polytechnic 3-D palmprint database, which contains 8000 samples from 400 palms, illustrate the effectiveness of the proposed method.
Molina, David; Pérez-Beteta, Julián; Martínez-González, Alicia; Martino, Juan; Velasquez, Carlos; Arana, Estanislao; Pérez-García, Víctor M
2017-01-01
Textural measures have been widely explored as imaging biomarkers in cancer. However, their robustness under dynamic range and spatial resolution changes in brain 3D magnetic resonance images (MRI) has not been assessed. The aim of this work was to study potential variations of textural measures due to changes in MRI protocols. Twenty patients harboring glioblastoma with pretreatment 3D T1-weighted MRIs were included in the study. Four different spatial resolution combinations and three dynamic ranges were studied for each patient. Sixteen three-dimensional textural heterogeneity measures were computed for each patient and configuration including co-occurrence matrices (CM) features and run-length matrices (RLM) features. The coefficient of variation was used to assess the robustness of the measures in two series of experiments corresponding to (i) changing the dynamic range and (ii) changing the matrix size. No textural measures were robust under dynamic range changes. Entropy was the only textural feature robust under spatial resolution changes (coefficient of variation under 10% in all cases). Textural measures of three-dimensional brain tumor images are not robust neither under dynamic range nor under matrix size changes. Standards should be harmonized to use textural features as imaging biomarkers in radiomic-based studies. The implications of this work go beyond the specific tumor type studied here and pose the need for standardization in textural feature calculation of oncological images.
Poon, Ting-Chung
2011-12-01
This feature issue serves as a pilot issue promoting the joint issue of Applied Optics and Chinese Optics Letters. It focuses upon topics of current relevance to the community working in the area of digital holography and 3-D imaging. © 2011 Optical Society of America
The 3-D image recognition based on fuzzy neural network technology
NASA Technical Reports Server (NTRS)
Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei
1993-01-01
Three dimensional stereoscopic image recognition system based on fuzzy-neural network technology was developed. The system consists of three parts; preprocessing part, feature extraction part, and matching part. Two CCD color camera image are fed to the preprocessing part, where several operations including RGB-HSV transformation are done. A multi-layer perception is used for the line detection in the feature extraction part. Then fuzzy matching technique is introduced in the matching part. The system is realized on SUN spark station and special image input hardware system. An experimental result on bottle images is also presented.
Hasegawa, Tomoka; Yamamoto, Tomomaya; Hongo, Hiromi; Qiu, Zixuan; Abe, Miki; Kanesaki, Takuma; Tanaka, Kawori; Endo, Takashi; de Freitas, Paulo Henrique Luiz; Li, Minqi; Amizuka, Norio
2018-04-01
The aim of this study is to demonstrate the application of focused ion beam-scanning electron microscopy, FIB-SEM for revealing the three-dimensional features of osteocytic cytoplasmic processes in metaphyseal (immature) and diaphyseal (mature) trabeculae. Tibiae of eight-week-old male mice were fixed with aldehyde solution, and treated with block staining prior to FIB-SEM observation. While two-dimensional backscattered SEM images showed osteocytes' cytoplasmic processes in a fragmented fashion, three-dimensional reconstructions of FIB-SEM images demonstrated that osteocytes in primary metaphyseal trabeculae extended their cytoplasmic processes randomly, thus maintaining contact with neighboring osteocytes and osteoblasts. In contrast, diaphyseal osteocytes extended thin cytoplasmic processes from their cell bodies, which ran perpendicular to the bone surface. In addition, these osteocytes featured thick processes that branched into thinner, transverse cytoplasmic processes; at some point, however, these transverse processes bend at a right angle to run perpendicular to the bone surface. Osteoblasts also possessed thicker cytoplasmic processes that branched off as thinner processes, which then connected with cytoplasmic processes of neighboring osteocytes. Thus, FIB-SEM is a useful technology for visualizing the three-dimensional structures of osteocytes and their cytoplasmic processes.
Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao
2017-09-01
Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact, their mapping relationship is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image. It is based on the multiview feature embedding (MFE) and the locality-sensitive autoencoders (LSAEs). On the one hand, we first depict the manifold regularized sparse low-rank approximation for MFE and then the input image is characterized by a fused feature descriptor. On the other hand, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of our proposed method.
n-SIFT: n-dimensional scale invariant feature transform.
Cheung, Warren; Hamarneh, Ghassan
2009-09-01
We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.
NASA Astrophysics Data System (ADS)
Zhuo, Shuangmu; Yan, Jie; Kang, Yuzhan; Xu, Shuoyu; Peng, Qiwen; So, Peter T. C.; Yu, Hanry
2014-07-01
Various structural features on the liver surface reflect functional changes in the liver. The visualization of these surface features with molecular specificity is of particular relevance to understanding the physiology and diseases of the liver. Using multi-photon microscopy (MPM), we have developed a label-free, three-dimensional quantitative and sensitive method to visualize various structural features of liver surface in living rat. MPM could quantitatively image the microstructural features of liver surface with respect to the sinuosity of collagen fiber, the elastic fiber structure, the ratio between elastin and collagen, collagen content, and the metabolic state of the hepatocytes that are correlative with the pathophysiologically induced changes in the regions of interest. This study highlights the potential of this technique as a useful tool for pathophysiological studies and possible diagnosis of the liver diseases with further development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuo, Shuangmu, E-mail: shuangmuzhuo@gmail.com, E-mail: hanry-yu@nuhs.edu.sg; Institute of Laser and Optoelectronics Technology, Fujian Normal University, Fuzhou 350007; Yan, Jie
2014-07-14
Various structural features on the liver surface reflect functional changes in the liver. The visualization of these surface features with molecular specificity is of particular relevance to understanding the physiology and diseases of the liver. Using multi-photon microscopy (MPM), we have developed a label-free, three-dimensional quantitative and sensitive method to visualize various structural features of liver surface in living rat. MPM could quantitatively image the microstructural features of liver surface with respect to the sinuosity of collagen fiber, the elastic fiber structure, the ratio between elastin and collagen, collagen content, and the metabolic state of the hepatocytes that are correlativemore » with the pathophysiologically induced changes in the regions of interest. This study highlights the potential of this technique as a useful tool for pathophysiological studies and possible diagnosis of the liver diseases with further development.« less
NASA Astrophysics Data System (ADS)
Yeom, Seokwon
2013-05-01
Millimeter waves imaging draws increasing attention in security applications for weapon detection under clothing. In this paper, concealed object segmentation and three-dimensional localization schemes are reviewed. A concealed object is segmented by the k-means algorithm. A feature-based stereo-matching method estimates the longitudinal distance of the concealed object. The distance is estimated by the discrepancy between the corresponding centers of the segmented objects. Experimental results are provided with the analysis of the depth resolution.
Pérez-Beteta, Julián; Martínez-González, Alicia; Martino, Juan; Velasquez, Carlos; Arana, Estanislao; Pérez-García, Víctor M.
2017-01-01
Purpose Textural measures have been widely explored as imaging biomarkers in cancer. However, their robustness under dynamic range and spatial resolution changes in brain 3D magnetic resonance images (MRI) has not been assessed. The aim of this work was to study potential variations of textural measures due to changes in MRI protocols. Materials and methods Twenty patients harboring glioblastoma with pretreatment 3D T1-weighted MRIs were included in the study. Four different spatial resolution combinations and three dynamic ranges were studied for each patient. Sixteen three-dimensional textural heterogeneity measures were computed for each patient and configuration including co-occurrence matrices (CM) features and run-length matrices (RLM) features. The coefficient of variation was used to assess the robustness of the measures in two series of experiments corresponding to (i) changing the dynamic range and (ii) changing the matrix size. Results No textural measures were robust under dynamic range changes. Entropy was the only textural feature robust under spatial resolution changes (coefficient of variation under 10% in all cases). Conclusion Textural measures of three-dimensional brain tumor images are not robust neither under dynamic range nor under matrix size changes. Standards should be harmonized to use textural features as imaging biomarkers in radiomic-based studies. The implications of this work go beyond the specific tumor type studied here and pose the need for standardization in textural feature calculation of oncological images. PMID:28586353
NASA Astrophysics Data System (ADS)
Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao
2018-01-01
For faded relics, such as Terracotta Army, the 2D-3D registration between an optical camera and point cloud model is an important part for color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for the image sequence and the three-dimensional (3D) model of point cloud collected by Handyscan3D. We first introduce nonuniform multiview calibration, including the explanation of its algorithm principle and the analysis of its advantages. We then establish transformation equations based on sift feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview sift feature points is introduced in detail. Finally, the solving process of the collinear equations based on multiview perspective projection is given with three steps and the flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, Tangsancai lady, and general figurine. These results demonstrate that the proposed method provides an effective support for the color reconstruction of the faded cultural relics and be able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.
Philip, Armelle; Meyssonnier, Jacques; Kluender, Rafael T.; Baruchel, José
2013-01-01
Rocking curve imaging (RCI) is a quantitative version of monochromatic beam diffraction topography that involves using a two-dimensional detector, each pixel of which records its own ‘local’ rocking curve. From these local rocking curves one can reconstruct maps of particularly relevant quantities (e.g. integrated intensity, angular position of the centre of gravity, FWHM). Up to now RCI images have been exploited in the reflection case, giving a quantitative picture of the features present in a several-micrometre-thick subsurface layer. Recently, a three-dimensional Bragg diffraction imaging technique, which combines RCI with ‘pinhole’ and ‘section’ diffraction topography in the transmission case, was implemented. It allows three-dimensional images of defects to be obtained and measurement of three-dimensional distortions within a 50 × 50 × 50 µm elementary volume inside the crystal with angular misorientations down to 10−5–10−6 rad. In the present paper, this three-dimensional-RCI (3D-RCI) technique is used to study one of the grains of a three-grained ice polycrystal. The inception of the deformation process is followed by reconstructing virtual slices in the crystal bulk. 3D-RCI capabilities allow the effective distortion in the bulk of the crystal to be investigated, and the predictions of diffraction theories to be checked, well beyond what has been possible up to now. PMID:24046486
Philip, Armelle; Meyssonnier, Jacques; Kluender, Rafael T; Baruchel, José
2013-08-01
Rocking curve imaging (RCI) is a quantitative version of monochromatic beam diffraction topography that involves using a two-dimensional detector, each pixel of which records its own 'local' rocking curve. From these local rocking curves one can reconstruct maps of particularly relevant quantities ( e.g. integrated intensity, angular position of the centre of gravity, FWHM). Up to now RCI images have been exploited in the reflection case, giving a quantitative picture of the features present in a several-micrometre-thick subsurface layer. Recently, a three-dimensional Bragg diffraction imaging technique, which combines RCI with 'pinhole' and 'section' diffraction topography in the transmission case, was implemented. It allows three-dimensional images of defects to be obtained and measurement of three-dimensional distortions within a 50 × 50 × 50 µm elementary volume inside the crystal with angular misorientations down to 10 -5 -10 -6 rad. In the present paper, this three-dimensional-RCI (3D-RCI) technique is used to study one of the grains of a three-grained ice polycrystal. The inception of the deformation process is followed by reconstructing virtual slices in the crystal bulk. 3D-RCI capabilities allow the effective distortion in the bulk of the crystal to be investigated, and the predictions of diffraction theories to be checked, well beyond what has been possible up to now.
NASA Astrophysics Data System (ADS)
Deschenes, Sylvain; Sheng, Yunlong; Chevrette, Paul C.
1998-03-01
3D object classification from 2D IR images is shown. The wavelet transform is used for edge detection. Edge tracking is used for removing noise effectively int he wavelet transform. The invariant Fourier descriptor is used to describe the contour curves. Invariance under out-of-plane rotation is achieved by the feature space trajectory neural network working as a classifier.
NASA Astrophysics Data System (ADS)
van de Moortele, Tristan; Nemes, Andras; Wendt, Christine; Coletti, Filippo
2016-11-01
The morphological features of the airway tree directly affect the air flow features during breathing, which determines the gas exchange and inhaled particle transport. Lung disease, Chronic Obstructive Pulmonary Disease (COPD) in this study, affects the structural features of the lungs, which in turn negatively affects the air flow through the airways. Here bronchial tree air volume geometries are segmented from Computed Tomography (CT) scans of healthy and diseased subjects. Geometrical analysis of the airway centerlines and corresponding cross-sectional areas provide insight into the specific effects of COPD on the airway structure. These geometries are also used to 3D print anatomically accurate, patient specific flow models. Three-component, three-dimensional velocity fields within these models are acquired using Magnetic Resonance Imaging (MRI). The three-dimensional flow fields provide insight into the change in flow patterns and features. Additionally, particle trajectories are determined using the velocity fields, to identify the fate of therapeutic and harmful inhaled aerosols. Correlation between disease-specific and patient-specific anatomical features with dysfunctional airflow patterns can be achieved by combining geometrical and flow analysis.
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three dimensional reconstruction. We require such an imaging system have imaging and illumination paths coincident. Furthermore, we require that any three-dimensional modification to a camera also permits full resolution 2D image capture.Here we present a method of extracting depth information with a single camera and aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed to exploit the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients.We present our information regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
NASA Astrophysics Data System (ADS)
Zharinov, I. O.; Zharinov, O. O.
2017-12-01
The problem of the research is concerned with quantitative analysis of influence of technological variation of the screen color profile parameters on chromaticity coordinates of the displayed image. Some mathematical expressions which approximate the two-dimensional distribution of chromaticity coordinates of an image, which is displayed on the screen with a three-component color formation principle were proposed. Proposed mathematical expressions show the way to development of correction techniques to improve reproducibility of the colorimetric features of displays.
NASA Astrophysics Data System (ADS)
Jones, Terry Jay; Humphreys, Roberta M.; Helton, L. Andrew; Gui, Changfeng; Huang, Xiang
2007-06-01
We use imaging polarimetry taken with the HST Advanced Camera for Surveys High Resolution Camera to explore the three-dimensional structure of the circumstellar dust distribution around the red supergiant VY Canis Majoris. The polarization vectors of the nebulosity surrounding VY CMa show a strong centrosymmetric pattern in all directions except directly east and range from 10% to 80% in fractional polarization. In regions that are optically thin, and therefore likely to have only single scattering, we use the fractional polarization and photometric color to locate the physical position of the dust along the line of sight. Most of the individual arclike features and clumps seen in the intensity image are also features in the fractional polarization map. These features must be distinct geometric objects. If they were just local density enhancements, the fractional polarization would not change so abruptly at the edge of the feature. The location of these features in the ejecta of VY CMa using polarimetry provides a determination of their three-dimensional geometry independent of, but in close agreement with, the results from our study of their kinematics (Paper I). Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
NASA Astrophysics Data System (ADS)
Brookshire, B. N., Jr.; Mattox, B. A.; Parish, A. E.; Burks, A. G.
2016-02-01
Utilizing recently advanced ultrahigh-resolution 3-dimensional (UHR3D) seismic tools we have imaged the seafloor geomorphology and associated subsurface aspects of seep related expulsion features along the continental slope of the northern Gulf of Mexico with unprecedented clarity and continuity. Over an area of approximately 400 km2, over 50 discrete features were identified and three general seafloor geomorphologies indicative of seep activity including mounds, depressions and bathymetrically complex features were quantitatively characterized. Moreover, areas of high seafloor reflectivity indicative of mineralization and areas of coherent seismic amplitude anomalies in the near-seafloor water column indicative of active gas expulsion were identified. In association with these features, shallow source gas accumulations and migration pathways based on salt related stratigraphic uplift and faulting were imaged. Shallow, bottom simulating reflectors (BSRs) interpreted to be free gas trapped under near seafloor gas hydrate accumulations were very clearly imaged.
Combustion monitoring of a water tube boiler using a discriminant radial basis network.
Sujatha, K; Pappa, N
2011-01-01
This research work includes a combination of Fisher's linear discriminant (FLD) analysis and a radial basis network (RBN) for monitoring the combustion conditions for a coal fired boiler so as to allow control of the air/fuel ratio. For this, two-dimensional flame images are required, which were captured with a CCD camera; the features of the images-average intensity, area, brightness and orientation etc of the flame-are extracted after preprocessing the images. The FLD is applied to reduce the n-dimensional feature size to a two-dimensional feature size for faster learning of the RBN. Also, three classes of images corresponding to different burning conditions of the flames have been extracted from continuous video processing. In this, the corresponding temperatures, and the carbon monoxide (CO) emissions and those of other flue gases have been obtained through measurement. Further, the training and testing of Fisher's linear discriminant radial basis network (FLDRBN), with the data collected, have been carried out and the performance of the algorithms is presented. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jaferzadeh, Keyvan; Moon, Inkyu
2016-12-01
The classification of erythrocytes plays an important role in the field of hematological diagnosis, specifically blood disorders. Since the biconcave shape of red blood cell (RBC) is altered during the different stages of hematological disorders, we believe that the three-dimensional (3-D) morphological features of erythrocyte provide better classification results than conventional two-dimensional (2-D) features. Therefore, we introduce a set of 3-D features related to the morphological and chemical properties of RBC profile and try to evaluate the discrimination power of these features against 2-D features with a neural network classifier. The 3-D features include erythrocyte surface area, volume, average cell thickness, sphericity index, sphericity coefficient and functionality factor, MCH and MCHSD, and two newly introduced features extracted from the ring section of RBC at the single-cell level. In contrast, the 2-D features are RBC projected surface area, perimeter, radius, elongation, and projected surface area to perimeter ratio. All features are obtained from images visualized by off-axis digital holographic microscopy with a numerical reconstruction algorithm, and four categories of biconcave (doughnut shape), flat-disc, stomatocyte, and echinospherocyte RBCs are interested. Our experimental results demonstrate that the 3-D features can be more useful in RBC classification than the 2-D features. Finally, we choose the best feature set of the 2-D and 3-D features by sequential forward feature selection technique, which yields better discrimination results. We believe that the final feature set evaluated with a neural network classification strategy can improve the RBC classification accuracy.
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.
Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng
2018-01-01
In hyperspectral remote sensing data mining, it is important to take into account of both spectral and spatial information, such as the spectral signature, texture feature, and morphological property, to improve the performances, e.g., the image classification accuracy. In a feature representation point of view, a nature approach to handle this situation is to concatenate the spectral and spatial features into a single but high dimensional vector and then apply a certain dimension reduction technique directly on that concatenated vector before feed it into the subsequent classifier. However, multiple features from various domains definitely have different physical meanings and statistical properties, and thus such concatenation has not efficiently explore the complementary properties among different features, which should benefit for boost the feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low dimensional feature representation of original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral images spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low dimensional subspace by projecting the spectral-spatial feature into a common feature space, where the complementary information has been effectively exploited, and simultaneously, only the most significant original features have been transformed. Encouraging experimental results on three public available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
NASA Astrophysics Data System (ADS)
Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.
2009-02-01
Features calculated from different dimensions of images capture quantitative information of the lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either twodimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions and of the impact of combining different dimensions. The aim of this study is to determine the importance of combining features calculated in different dimensions. We have performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five different combinations of features. Our results showed that 3D image features generate the best result compared with other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing. Joint extraction of these information of hyperspectral image is one of most import methods for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which extracts spectral-spatial information of hyperspectral images correctly. The proposed model not only learns sufficient knowledge from the limited number of samples, but also has powerful generalization ability. The proposed framework based on three-dimensional convolution can extract spectral-spatial features of labeled samples effectively. Though CNN has shown its robustness to distortion, it cannot extract features of different scales through the traditional pooling layer that only have one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
Venus - Three-Dimensional Perspective View of Alpha Regio
NASA Technical Reports Server (NTRS)
1992-01-01
A portion of Alpha Regio is displayed in this three-dimensional perspective view of the surface of Venus. Alpha Regio, a topographic upland approximately 1300 kilometers across, is centered on 25 degrees south latitude, 4 degrees east longitude. In 1963, Alpha Regio was the first feature on Venus to be identified from Earth-based radar. The radar-bright area of Alpha Regio is characterized by multiple sets of intersecting trends of structural features such as ridges, troughs, and flat-floored fault valleys that, together, form a polygonal outline. Directly south of the complex ridged terrain is a large ovoid-shaped feature named Eve. The radar-bright spot located centrally within Eve marks the location of the prime meridian of Venus. Magellan synthetic aperture radar data is combined with radar altimetry to develop a three-dimensional map of the surface. Ray tracing is used to generate a perspective view from this map. The vertical scale is exaggerated approximately 23 times. Simulated color and a digital elevation map developed by the U. S. Geological Survey are used to enhance small scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the JPL Multimission Image Processing Laboratory by Eric De Jong, Jeff Hall, and Myche McAuley, and is a single frame from the movie released at the March 5, 1991, press conference.
Reproducibility and Prognosis of Quantitative Features Extracted from CT Images12
Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J
2014-01-01
We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCCTreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCCTreT and DR ≥ 0.9 and R2Bet ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
Intrinsic feature-based pose measurement for imaging motion compensation
Baba, Justin S.; Goddard, Jr., James Samuel
2014-08-19
Systems and methods for generating motion corrected tomographic images are provided. A method includes obtaining first images of a region of interest (ROI) to be imaged and associated with a first time, where the first images are associated with different positions and orientations with respect to the ROI. The method also includes defining an active region in the each of the first images and selecting intrinsic features in each of the first images based on the active region. Second, identifying a portion of the intrinsic features temporally and spatially matching intrinsic features in corresponding ones of second images of the ROI associated with a second time prior to the first time and computing three-dimensional (3D) coordinates for the portion of the intrinsic features. Finally, the method includes computing a relative pose for the first images based on the 3D coordinates.
Saliency Detection for Stereoscopic 3D Images in the Quaternion Frequency Domain
NASA Astrophysics Data System (ADS)
Cai, Xingyu; Zhou, Wujie; Cen, Gang; Qiu, Weiwei
2018-06-01
Recent studies have shown that a remarkable distinction exists between human binocular and monocular viewing behaviors. Compared with two-dimensional (2D) saliency detection models, stereoscopic three-dimensional (S3D) image saliency detection is a more challenging task. In this paper, we propose a saliency detection model for S3D images. The final saliency map of this model is constructed from the local quaternion Fourier transform (QFT) sparse feature and global QFT log-Gabor feature. More specifically, the local QFT feature measures the saliency map of an S3D image by analyzing the location of a similar patch. The similar patch is chosen using a sparse representation method. The global saliency map is generated by applying the wake edge-enhanced gradient QFT map through a band-pass filter. The results of experiments on two public datasets show that the proposed model outperforms existing computational saliency models for estimating S3D image saliency.
Confocal Imaging of porous media
NASA Astrophysics Data System (ADS)
Shah, S.; Crawshaw, D.; Boek, D.
2012-12-01
Carbonate rocks, which hold approximately 50% of the world's oil and gas reserves, have a very complicated and heterogeneous structure in comparison with sandstone reservoir rock. We present advances with different techniques to image, reconstruct, and characterize statistically the micro-geometry of carbonate pores. The main goal here is to develop a technique to obtain two dimensional and three dimensional images using Confocal Laser Scanning Microscopy. CLSM is used in epi-fluorescent imaging mode, allowing for the very high optical resolution of features well below 1μm size. Images of pore structures were captured using CLSM imaging where spaces in the carbonate samples were impregnated with a fluorescent, dyed epoxy-resin, and scanned in the x-y plane by a laser probe. We discuss the sample preparation in detail for Confocal Imaging to obtain sub-micron resolution images of heterogeneous carbonate rocks. We also discuss the technical and practical aspects of this imaging technique, including its advantages and limitation. We present several examples of this application, including studying pore geometry in carbonates, characterizing sub-resolution porosity in two dimensional images. We then describe approaches to extract statistical information about porosity using image processing and spatial correlation function. We have managed to obtain very low depth information in z -axis (~ 50μm) to develop three dimensional images of carbonate rocks with the current capabilities and limitation of CLSM technique. Hence, we have planned a novel technique to obtain higher depth information to obtain high three dimensional images with sub-micron resolution possible in the lateral and axial planes.
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
Sampling and Visualizing Creases with Scale-Space Particles
Kindlmann, Gordon L.; Estépar, Raúl San José; Smith, Stephen M.; Westin, Carl-Fredrik
2010-01-01
Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI. PMID:19834216
Deep neural network using color and synthesized three-dimensional shape for face recognition
NASA Astrophysics Data System (ADS)
Rhee, Seon-Min; Yoo, ByungIn; Han, Jae-Joon; Hwang, Wonjun
2017-03-01
We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it could provide more reliable complementary features in harmony with the 2-D color feature in face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model by using only face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are successfully learned according to their own characteristics. Then, we merge them and feed into higher-level layers under a single deep neural network. Our proposed approach is evaluated with labeled faces in the wild dataset and the results show that the error rate of the verification rate at false acceptance rate 1% is improved by up to 32.1% compared with the baseline where only a 2-D color image is used.
NASA Astrophysics Data System (ADS)
Awad, Joseph; Krasinski, Adam; Spence, David; Parraga, Grace; Fenster, Aaron
2010-03-01
Carotid atherosclerosis is the major cause of ischemic stroke, a leading cause of death and disability. This is driving the development of image analysis methods to quantitatively evaluate local arterial effects of potential treatments of carotid disease. Here we investigate the use of novel texture analysis tools to detect potential changes in the carotid arteries after statin therapy. Three-dimensional (3D) carotid ultrasound images were acquired from the left and right carotid arteries of 35 subjects (16 treated with 80 mg atorvastatin and 19 treated with placebo) at baseline and after 3 months of treatment. Two-hundred and seventy texture features were extracted from 3D ultrasound carotid artery images. These images previously had their vessel walls (VW) manually segmented. Highly ranked individual texture features were selected and compared to the VW volume (VWV) change using 3 measures: distance between classes, Wilcoxon rank sum test, and accuracy of the classifiers. Six classifiers were used. Using texture feature (L7R7) increases the average accuracy and area under the ROC curve to 74.4% and 0.72 respectively compared to 57.2% and 0.61 using VWV change. Thus, the results demonstrate that texture features are more sensitive in detecting drug effects on the carotid vessel wall than VWV change.
NASA Astrophysics Data System (ADS)
Li, Dong; Wei, Zhen; Song, Dawei; Sun, Wenfeng; Fan, Xiaoyan
2016-11-01
With the development of space technology, the number of spacecrafts and debris are increasing year by year. The demand for detecting and identification of spacecraft is growing strongly, which provides support to the cataloguing, crash warning and protection of aerospace vehicles. The majority of existing approaches for three-dimensional reconstruction is scattering centres correlation, which is based on the radar high resolution range profile (HRRP). This paper proposes a novel method to reconstruct the threedimensional scattering centre structure of target from a sequence of radar ISAR images, which mainly consists of three steps. First is the azimuth scaling of consecutive ISAR images based on fractional Fourier transform (FrFT). The later is the extraction of scattering centres and matching between adjacent ISAR images using grid method. Finally, according to the coordinate matrix of scattering centres, the three-dimensional scattering centre structure is reconstructed using improved factorization method. The three-dimensional structure is featured with stable and intuitive characteristic, which provides a new way to improve the identification probability and reduce the complexity of the model matching library. A satellite model is reconstructed using the proposed method from four consecutive ISAR images. The simulation results prove that the method has gotten a satisfied consistency and accuracy.
Grid point extraction and coding for structured light system
NASA Astrophysics Data System (ADS)
Song, Zhan; Chung, Ronald
2011-09-01
A structured light system simplifies three-dimensional reconstruction by illuminating a specially designed pattern to the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points plus the pseudorandomness of the projected pattern can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need of segmenting the pattern elements, and that localizes the grid points in subpixel accuracy. Extensive experiments are presented to illustrate that, with the proposed pattern feature definition and feature detector, more features points in higher accuracy can be reconstructed in comparison with the existing pseudorandomly encoded structured light systems.
Viking orbiter stereo imaging catalog
NASA Technical Reports Server (NTRS)
Blasius, K. R.; Vertrone, A. V.; Lewis, B. H.; Martin, M. D.
1982-01-01
The extremely long mission of the two Viking Orbiter spacecraft produced a wealth of photos of surface features. Many of these photos can be used to form stereo images allowing the student of Mars to examine a subject in three dimensional. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set.
NASA Astrophysics Data System (ADS)
Yamauchi, Toyohiko; Kakuno, Yumi; Goto, Kentaro; Fukami, Tadashi; Sugiyama, Norikazu; Iwai, Hidenao; Mizuguchi, Yoshinori; Yamashita, Yutaka
2014-03-01
There is an increasing need for non-invasive imaging techniques in the field of stem cell research. Label-free techniques are the best choice for assessment of stem cells because the cells remain intact after imaging and can be used for further studies such as differentiation induction. To develop a high-resolution label-free imaging system, we have been working on a low-coherence quantitative phase microscope (LC-QPM). LC-QPM is a Linnik-type interference microscope equipped with nanometer-resolution optical-path-length control and capable of obtaining three-dimensional volumetric images. The lateral and vertical resolutions of our system are respectively 0.5 and 0.93 μm and this performance allows capturing sub-cellular morphological features of live cells without labeling. Utilizing LC-QPM, we reported on three-dimensional imaging of membrane fluctuations, dynamics of filopodia, and motions of intracellular organelles. In this presentation, we report three-dimensional morphological imaging of human induced pluripotent stem cells (hiPS cells). Two groups of monolayer hiPS cell cultures were prepared so that one group was cultured in a suitable culture medium that kept the cells undifferentiated, and the other group was cultured in a medium supplemented with retinoic acid, which forces the stem cells to differentiate. The volumetric images of the 2 groups show distinctive differences, especially in surface roughness. We believe that our LC-QPM system will prove useful in assessing many other stem cell conditions.
Creating Body Shapes From Verbal Descriptions by Linking Similarity Spaces.
Hill, Matthew Q; Streuber, Stephan; Hahn, Carina A; Black, Michael J; O'Toole, Alice J
2016-11-01
Brief verbal descriptions of people's bodies (e.g., "curvy," "long-legged") can elicit vivid mental images. The ease with which these mental images are created belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and showed that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body-shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2,094 bodies. This relationship allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape that captures perceptually salient global and local body features. © The Author(s) 2016.
Feature Matching of Historical Images Based on Geometry of Quadrilaterals
NASA Astrophysics Data System (ADS)
Maiwald, F.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.
2018-05-01
This contribution shows an approach to match historical images from the photo library of the Saxon State and University Library Dresden (SLUB) in the context of a historical three-dimensional city model of Dresden. In comparison to recent images, historical photography provides diverse factors which make an automatical image analysis (feature detection, feature matching and relative orientation of images) difficult. Due to e.g. film grain, dust particles or the digitalization process, historical images are often covered by noise interfering with the image signal needed for a robust feature matching. The presented approach uses quadrilaterals in image space as these are commonly available in man-made structures and façade images (windows, stones, claddings). It is explained how to generally detect quadrilaterals in images. Consequently, the properties of the quadrilaterals as well as the relationship to neighbouring quadrilaterals are used for the description and matching of feature points. The results show that most of the matches are robust and correct but still small in numbers.
Overhead View of Area Surrounding Pathfinder
NASA Technical Reports Server (NTRS)
1997-01-01
Overhead view of the area surrounding the Pathfinder lander illustrating the Sojourner traverse. Red rectangles are rover positions at the end of sols 1-30. Locations of soil mechanics experiments, wheel abrasion experiments, and APXS measurements are shown. The A numbers refer to APXS measurements as discussed in the paper by Rieder et al. (p. 1770, Science Magazine, see image note). Coordinates are given in the LL frame.
The photorealistic, interactive, three-dimensional virtual reality (VR) terrain models were created from IMP images using a software package developed for Pathfinder by C. Stoker et al. as a participating science project. By matching features in the left and right camera, an automated machine vision algorithm produced dense range maps of the nearfield, which were projected into a three-dimensional model as a connected polygonal mesh. Distance and angle measurements can be made on features viewed in the model using a mouse-driven three-dimensional cursor and a point-and-click interface. The VR model also incorporates graphical representations of the lander and rover and the sequence and spatial locations at which rover data were taken. As the rover moved, graphical models of the rover were added for each position that could be uniquely determined using stereo images of the rover taken by the IMP. Images taken by the rover were projected into the model as two-dimensional 'billboards' to show the proper perspective of these images.NOTE: original caption as published in Science MagazineMars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).Kurosumi, M; Mizukoshi, K
2018-05-01
The types of shape feature that constitutes a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics, such as wrinkle, sagging skin, etc. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features changed with age. We analyzed three-dimensionally the faces of 280 Japanese women aged 20-69 years and used principal component analysis to establish the shape features that characterized individual differences. We also evaluated the relationships between each feature and age, clarifying the shape features characteristic of different age groups. Changes in facial shape in middle age were a decreased volume of the upper face and increased volume of the whole cheeks and around the chin. Changes in older people were an increased volume of the lower cheeks and around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. This method allowed straightforward measurements, such as the increase or decrease in cheeks caused by soft tissue changes or skeletal-based changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Three dimensional perspective view of portion of western Galapagos Islands
NASA Technical Reports Server (NTRS)
1994-01-01
This is a three dimensional perspective view of Isla Isabela in the western Galapagos Islands. It was taken by the L-band radar in HH polarization from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperature Radar on the 40th orbit of the Shuttle Endeavour. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The image is centered at about .5 degrees south latitude and 91 degrees West longitude and covers an area of 75 km by 60 km. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth Pahoehoe lava flows dark. The Jet Propulsion Laboratory alternative photo number is P-43938.
Estimating Slopes In Images Of Terrain By Use Of BRDF
NASA Technical Reports Server (NTRS)
Scholl, Marija S.
1995-01-01
Proposed method of estimating slopes of terrain features based on use of bidirectional reflectivity distribution function (BRDF) in analyzing aerial photographs, satellite video images, or other images produced by remote sensors. Estimated slopes integrated along horizontal coordinates to obtain estimated heights; generating three-dimensional terrain maps. Method does not require coregistration of terrain features in pairs of images acquired from slightly different perspectives nor requires Sun or other source of illumination to be low in sky over terrain of interest. On contrary, best when Sun is high. Works at almost all combinations of illumination and viewing angles.
Seafloor Topographic Analysis in Staged Ocean Resource Exploration
NASA Astrophysics Data System (ADS)
Ikeda, M.; Okawa, M.; Osawa, K.; Kadoshima, K.; Asakawa, E.; Sumi, T.
2017-12-01
J-MARES (Research and Development Partnership for Next Generation Technology of Marine Resources Survey, JAPAN) has been designing a low-expense and high-efficiency exploration system for seafloor hydrothermal massive sulfide deposits in "Cross-ministerial Strategic Innovation Promotion Program (SIP)" granted by the Cabinet Office, Government of Japan since 2014. We designed a method to focus mineral deposit prospective area in multi-stages (the regional survey, semi-detail survey and detail survey) by extracted topographic features of some well-known seafloor massive sulfide deposits from seafloor topographic analysis using seafloor topographic data acquired by the bathymetric survey. We applied this procedure to an area of interest more than 100km x 100km over Okinawa Trough, including some known seafloor massive sulfide deposits. In Addition, we tried to create a three-dimensional model of seafloor topography by SfM (Structure from Motion) technique using multiple image data of Chimney distributed around well-known seafloor massive sulfide deposit taken with Hi-Vision camera mounted on ROV in detail survey such as geophysical exploration. Topographic features of Chimney was extracted by measuring created three-dimensional model. As the result, it was possible to estimate shape of seafloor sulfide such as Chimney to be mined by three-dimensional model created from image data taken with camera mounted on ROV. In this presentation, we will discuss about focusing mineral deposit prospective area in multi-stages by seafloor topographic analysis using seafloor topographic data in exploration system for seafloor massive sulfide deposit and also discuss about three-dimensional model of seafloor topography created from seafloor image data taken with ROV.
High-resolution non-destructive three-dimensional imaging of integrated circuits
NASA Astrophysics Data System (ADS)
Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H. R.; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel
2017-03-01
Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography—a high-resolution coherent diffractive imaging technique—can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.
High-resolution non-destructive three-dimensional imaging of integrated circuits.
Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel
2017-03-15
Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography-a high-resolution coherent diffractive imaging technique-can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.
Linking brain, mind and behavior.
Makeig, Scott; Gramann, Klaus; Jung, Tzyy-Ping; Sejnowski, Terrence J; Poizner, Howard
2009-08-01
Cortical brain areas and dynamics evolved to organize motor behavior in our three-dimensional environment also support more general human cognitive processes. Yet traditional brain imaging paradigms typically allow and record only minimal participant behavior, then reduce the recorded data to single map features of averaged responses. To more fully investigate the complex links between distributed brain dynamics and motivated natural behavior, we propose the development of wearable mobile brain/body imaging (MoBI) systems that continuously capture the wearer's high-density electrical brain and muscle signals, three-dimensional body movements, audiovisual scene and point of regard, plus new data-driven analysis methods to model their interrelationships. The new imaging modality should allow new insights into how spatially distributed brain dynamics support natural human cognition and agency.
Nabavizadeh, Behnam; Mozafarpour, Sarah; Hosseini Sharifi, Seyed Hossein; Nabavizadeh, Reza; Abbasioun, Reza; Kajbafzadeh, Abdol-Mohammad
2018-03-01
Ureterocele is a sac-like dilatation of terminal ureter. Precise anatomic delineation is of utmost importance to proceed with the surgical plan, particularly in the ectopic subtype. However, the level of ureterocele extension is not always elucidated by the existing imaging modalities and even by conventional cystoscopy, which is considered as the gold standard for evaluation of ureterocele. This study aims to evaluate the accuracy of three-dimensional virtual sonographic cystoscopy (VSC) in the characterization of ureterocele in duplex collecting systems. Sixteen children with a mean age of 5.1 (standard deviation 1.96) years with transabdominal ultrasonography-proven duplex system and ureterocele were included. They underwent VSC by a single pediatric radiologist. All of them subsequently had conventional cystoscopy, and the results were compared in terms of ureterocele features including anatomy, number, size, location, and extension. Three-dimensional VSC was well tolerated in all cases without any complication. Image quality was suboptimal in 2 of 16 patients. Out of the remaining 14 cases, VSC had a high accuracy in characterization of the ureterocele features (93%). Only the extension of one ureterocele was not precisely detected by VSC. The results of this study suggest three-dimensional sonography as a promising noninvasive diagnostic modality in the evaluation of ectopic ureterocele in children. © 2017 by the American Institute of Ultrasound in Medicine.
Objective breast tissue image classification using Quantitative Transmission ultrasound tomography
NASA Astrophysics Data System (ADS)
Malik, Bilal; Klock, John; Wiskin, James; Lenox, Mark
2016-12-01
Quantitative Transmission Ultrasound (QT) is a powerful and emerging imaging paradigm which has the potential to perform true three-dimensional image reconstruction of biological tissue. Breast imaging is an important application of QT and allows non-invasive, non-ionizing imaging of whole breasts in vivo. Here, we report the first demonstration of breast tissue image classification in QT imaging. We systematically assess the ability of the QT images’ features to differentiate between normal breast tissue types. The three QT features were used in Support Vector Machines (SVM) classifiers, and classification of breast tissue as either skin, fat, glands, ducts or connective tissue was demonstrated with an overall accuracy of greater than 90%. Finally, the classifier was validated on whole breast image volumes to provide a color-coded breast tissue volume. This study serves as a first step towards a computer-aided detection/diagnosis platform for QT.
Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao
2004-02-01
Our purpose was to evaluate whether the application of serial three-dimensional (3D) sonography and the mandibular size monogram can allow observation of dynamic changes in facial features, as well as chin development in utero. The mandibular size monogram has been established through a cross-sectional study involving 183 fetal images. The serial changes of facial features and chin development are assessed in a cohort study involving 40 patients. The monogram reveals that the Biparietal distance (BPD)/Mandibular body length (MBL) ratio is gradually decreased with the advance of gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the results of paired-samples t test (P<.001) statistical analysis suggest that the fetuses develop wider chins and broader facial features in later weeks. The serial 3D sonography and mandibular size monogram display disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This fact must be considered when we evaluate fetuses at risk for development of micrognathia.
Huo, Guanying
2017-01-01
As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614
Three dimensional image of Isla Isabela in the western Galapagos Islands
NASA Technical Reports Server (NTRS)
1994-01-01
This is a three-dimensional image of Isla Isabela in the western Galapagos Islands off the western coast of Ecuador, South America. The view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar image on a TOPSAR digital elevation map. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of the shuttle Endeavour. The image is centered at about .5 degrees south latitude and 91 degrees West longitude and covers an area of 75 km by 60 km. The radar incidence angle at the center of the image is about 20 degrees. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flowas as bright features, while ash deposits and smooth Pahoehoe lava flows appear dark. A small portion of Isla Fernandina is visible in the extreme upper left corner of the image. The Jet Propulsion Laboratory alternative photo number is P-43913.
Atomic Force Microscopy Based Cell Shape Index
NASA Astrophysics Data System (ADS)
Adia-Nimuwa, Usienemfon; Mujdat Tiryaki, Volkan; Hartz, Steven; Xie, Kan; Ayres, Virginia
2013-03-01
Stellation is a measure of cell physiology and pathology for several cell groups including neural, liver and pancreatic cells. In the present work, we compare the results of a conventional two-dimensional shape index study of both atomic force microscopy (AFM) and fluorescent microscopy images with the results obtained using a new three-dimensional AFM-based shape index similar to sphericity index. The stellation of astrocytes is investigated on nanofibrillar scaffolds composed of electrospun polyamide nanofibers that has demonstrated promise for central nervous system (CNS) repair. Recent work by our group has given us the ability to clearly segment the cells from nanofibrillar scaffolds in AFM images. The clear-featured AFM images indicated that the astrocyte processes were longer than previously identified at 24h. It was furthermore shown that cell spreading could vary significantly as a function of environmental parameters, and that AFM images could record these variations. The new three-dimensional AFM-based shape index incorporates the new information: longer stellate processes and cell spreading. The support of NSF PHY-095776 is acknowledged.
Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.
Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant
2013-01-01
Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.
Scene analysis for effective visual search in rough three-dimensional-modeling scenes
NASA Astrophysics Data System (ADS)
Wang, Qi; Hu, Xiaopeng
2016-11-01
Visual search is a fundamental technology in the computer vision community. It is difficult to find an object in complex scenes when there exist similar distracters in the background. We propose a target search method in rough three-dimensional-modeling scenes based on a vision salience theory and camera imaging model. We give the definition of salience of objects (or features) and explain the way that salience measurements of objects are calculated. Also, we present one type of search path that guides to the target through salience objects. Along the search path, when the previous objects are localized, the search region of each subsequent object decreases, which is calculated through imaging model and an optimization method. The experimental results indicate that the proposed method is capable of resolving the ambiguities resulting from distracters containing similar visual features with the target, leading to an improvement of search speed by over 50%.
Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza
2014-09-16
Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods.
Six-dimensional real and reciprocal space small-angle X-ray scattering tomography
NASA Astrophysics Data System (ADS)
Schaff, Florian; Bech, Martin; Zaslansky, Paul; Jud, Christoph; Liebi, Marianne; Guizar-Sicairos, Manuel; Pfeiffer, Franz
2015-11-01
When used in combination with raster scanning, small-angle X-ray scattering (SAXS) has proven to be a valuable imaging technique of the nanoscale, for example of bone, teeth and brain matter. Although two-dimensional projection imaging has been used to characterize various materials successfully, its three-dimensional extension, SAXS computed tomography, poses substantial challenges, which have yet to be overcome. Previous work using SAXS computed tomography was unable to preserve oriented SAXS signals during reconstruction. Here we present a solution to this problem and obtain a complete SAXS computed tomography, which preserves oriented scattering information. By introducing virtual tomography axes, we take advantage of the two-dimensional SAXS information recorded on an area detector and use it to reconstruct the full three-dimensional scattering distribution in reciprocal space for each voxel of the three-dimensional object in real space. The presented method could be of interest for a combined six-dimensional real and reciprocal space characterization of mesoscopic materials with hierarchically structured features with length scales ranging from a few nanometres to a few millimetres—for example, biomaterials such as bone or teeth, or functional materials such as fuel-cell or battery components.
Six-dimensional real and reciprocal space small-angle X-ray scattering tomography.
Schaff, Florian; Bech, Martin; Zaslansky, Paul; Jud, Christoph; Liebi, Marianne; Guizar-Sicairos, Manuel; Pfeiffer, Franz
2015-11-19
When used in combination with raster scanning, small-angle X-ray scattering (SAXS) has proven to be a valuable imaging technique of the nanoscale, for example of bone, teeth and brain matter. Although two-dimensional projection imaging has been used to characterize various materials successfully, its three-dimensional extension, SAXS computed tomography, poses substantial challenges, which have yet to be overcome. Previous work using SAXS computed tomography was unable to preserve oriented SAXS signals during reconstruction. Here we present a solution to this problem and obtain a complete SAXS computed tomography, which preserves oriented scattering information. By introducing virtual tomography axes, we take advantage of the two-dimensional SAXS information recorded on an area detector and use it to reconstruct the full three-dimensional scattering distribution in reciprocal space for each voxel of the three-dimensional object in real space. The presented method could be of interest for a combined six-dimensional real and reciprocal space characterization of mesoscopic materials with hierarchically structured features with length scales ranging from a few nanometres to a few millimetres--for example, biomaterials such as bone or teeth, or functional materials such as fuel-cell or battery components.
NASA Astrophysics Data System (ADS)
Robbins, Woodrow E.
1988-01-01
The present conference discusses topics in novel technologies and techniques of three-dimensional imaging, human factors-related issues in three-dimensional display system design, three-dimensional imaging applications, and image processing for remote sensing. Attention is given to a 19-inch parallactiscope, a chromostereoscopic CRT-based display, the 'SpaceGraph' true three-dimensional peripheral, advantages of three-dimensional displays, holographic stereograms generated with a liquid crystal spatial light modulator, algorithms and display techniques for four-dimensional Cartesian graphics, an image processing system for automatic retina diagnosis, the automatic frequency control of a pulsed CO2 laser, and a three-dimensional display of magnetic resonance imaging of the spine.
Arisha, Mohammed J; Hsiung, Ming C; Nanda, Navin C; ElKaryoni, Ahmed; Mohamed, Ahmed H; Wei, Jeng
2017-08-01
Hemangiomas are rarely found in the heart and pericardial involvement is even more rare. We report a case of primary pericardial hemangioma, in which three-dimensional transesophageal echocardiography (3DTEE) provided incremental benefit over standard two-dimensional images. Our case also highlights the importance of systematic cropping of the 3D datasets in making a diagnosis of pericardial hemangioma with a greater degree of certainty. In addition, we also provide a literature review of the features of cardiac/pericardial hemangiomas in a tabular form. © 2017, Wiley Periodicals, Inc.
Hierarchical classification in high dimensional numerous class cases
NASA Technical Reports Server (NTRS)
Kim, Byungyong; Landgrebe, D. A.
1990-01-01
As progress in new sensor technology continues, increasingly high resolution imaging sensors are being developed. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. Three methods for designing a decision tree classifier are discussed: a top down approach, a bottom up approach, and a hybrid approach. Three feature extraction techniques are implemented. Canonical and extended canonical techniques are mainly dependent upon the mean difference between two classes. An autocorrelation technique is dependent upon the correlation differences. The mathematical relationship between sample size, dimensionality, and risk value is derived.
Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua
2014-01-01
To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two dimensional CT images used in diagnoses of lung cancer. A total of 6,299 CT images were acquired from 336 patients, with 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant from 252 patients (150 male, 102 female). Further to this, nineteen patient information categories, which included seven demographic parameters and twelve morphological features, were also collected. A contourlet was used to extract fourteen types of textural features. These were then used to establish three support vector machine models. One comprised a database constructed of nineteen collected patient information categories, another included contourlet textural features and the third one contained both sets of information. Ten-fold cross-validation was used to evaluate the diagnosis results for the three databases, with sensitivity, specificity, accuracy, the area under the curve (AUC), precision, Youden index, and F-measure were used as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Using a database containing textural features and patient information, sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were: 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93 respectively. These results were higher than results derived using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83 respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85 respectively). Using the SMOTE as a pre-processing procedure, new balanced database generated, including observations of 5,816 benign ROIs and 5,815 malignant ROIs, and accuracy was 0.93. Our results indicate that the combined contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer.
Image processing and 3D visualization in forensic pathologic examination
NASA Astrophysics Data System (ADS)
Oliver, William R.; Altschuler, Bruce R.
1996-02-01
The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.
The Application of Three-Dimensional Surface Imaging System in Plastic and Reconstructive Surgery.
Li, Yanqi; Yang, Xin; Li, Dong
2016-02-01
Three-dimensional (3D) surface imaging system has gained popularity worldwide in clinical application. Unlike computed tomography and magnetic resonance imaging, it has the ability to capture 3D images with both shape and texture information. This feature has made it quite useful for plastic surgeons. This review article is mainly focusing on demonstrating the current status and analyzing the future of the application of 3D surface imaging systems in plastic and reconstructive surgery.Currently, 3D surface imaging system is mainly used in plastic and reconstructive surgery to help improve the reliability of surgical planning and assessing surgical outcome objectively. There have already been reports of its using on plastic and reconstructive surgery from head to toe. Studies on facial aging process, online applications development, and so on, have also been done through the use of 3D surface imaging system.Because different types of 3D surface imaging devices have their own advantages and disadvantages, a basic knowledge of their features is required and careful thought should be taken to choose the one that best fits a surgeon's demand.In the future, by integrating with other imaging tools and the 3D printing technology, 3D surface imaging system will play an important role in individualized surgical planning, implants production, meticulous surgical simulation, operative techniques training, and patient education.
NASA Technical Reports Server (NTRS)
Degnan, John J. (Inventor)
2007-01-01
This invention is directed to a 3-dimensional imaging lidar, which utilizes modest power kHz rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual wedge optical scanners with transmitter point-ahead correction to provide contiguous high spatial resolution mapping of surface features including ground, water, man-made objects, vegetation and submerged surfaces from an aircraft or a spacecraft.
Stoffer, Philip W.
2008-01-01
This is a set of two sheets of 3D images showing geologic features of many National Parks. Red-and-cyan viewing glasses are need to see the three-dimensional effect. A search on the World Wide Web will yield many sites about anaglyphs and where to get 3D glasses. Red-blue glasses will do but red-cyan glasses are a little better. This publication features a photo quiz game: Name that park! where you can explore, interpret, and identify selected park landscapes. Can you identify landscape features in the images? Can you explain processes that may have helped form the landscape features? You can get the answers online.
Visualization of anisotropic-isotropic phase transformation dynamics in battery electrode particles
Wang, Jiajun; Karen Chen-Wiegart, Yu-chen; Eng, Christopher; ...
2016-08-12
Anisotropy, or alternatively, isotropy of phase transformations extensively exist in a number of solid-state materials, with performance depending on the three-dimensional transformation features. Fundamental insights into internal chemical phase evolution allow manipulating materials with desired functionalities, and can be developed via real-time multi-dimensional imaging methods. In this paper, we report a five-dimensional imaging method to track phase transformation as a function of charging time in individual lithium iron phosphate battery cathode particles during delithiation. The electrochemically driven phase transformation is initially anisotropic with a preferred boundary migration direction, but becomes isotropic as delithiation proceeds further. We also observe the expectedmore » two-phase coexistence throughout the entire charging process. Finally, we expect this five-dimensional imaging method to be broadly applicable to problems in energy, materials, environmental and life sciences.« less
NASA Astrophysics Data System (ADS)
He, Zhi; Liu, Lin
2016-11-01
Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by trace-norm and l1,2 -norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.
Quantitative three-dimensional photoacoustic tomography of the finger joints: an in vivo study
NASA Astrophysics Data System (ADS)
Sun, Yao; Sobel, Eric; Jiang, Huabei
2009-11-01
We present for the first time in vivo full three-dimensional (3-D) photoacoustic tomography (PAT) of the distal interphalangeal joint in a human subject. Both absorbed energy density and absorption coefficient images of the joint are quantitatively obtained using our finite-element-based photoacoustic image reconstruction algorithm coupled with the photon diffusion equation. The results show that major anatomical features in the joint along with the side arteries can be imaged with a 1-MHz transducer in a spherical scanning geometry. In addition, the cartilages associated with the joint can be quantitatively differentiated from the phalanx. This in vivo study suggests that the 3-D PAT method described has the potential to be used for early diagnosis of joint diseases such as osteoarthritis and rheumatoid arthritis.
Single camera volumetric velocimetry in aortic sinus with a percutaneous valve
NASA Astrophysics Data System (ADS)
Clifford, Chris; Thurow, Brian; Midha, Prem; Okafor, Ikechukwu; Raghav, Vrishank; Yoganathan, Ajit
2016-11-01
Cardiac flows have long been understood to be highly three dimensional, yet traditional in vitro techniques used to capture these complexities are costly and cumbersome. Thus, two dimensional techniques are primarily used for heart valve flow diagnostics. The recent introduction of plenoptic camera technology allows for traditional cameras to capture both spatial and angular information from a light field through the addition of a microlens array in front of the image sensor. When combined with traditional particle image velocimetry (PIV) techniques, volumetric velocity data may be acquired with a single camera using off-the-shelf optics. Particle volume pairs are reconstructed from raw plenoptic images using a filtered refocusing scheme, followed by three-dimensional cross-correlation. This technique was applied to the sinus region (known for having highly three-dimensional flow structures) of an in vitro aortic model with a percutaneous valve. Phase-locked plenoptic PIV data was acquired at two cardiac outputs (2 and 5 L/min) and 7 phases of the cardiac cycle. The volumetric PIV data was compared to standard 2D-2C PIV. Flow features such as recirculation and stagnation were observed in the sinus region in both cases.
Strobl, Frederic; Schmitz, Alexander; Stelzer, Ernst H K
2017-06-01
Light-sheet-based fluorescence microscopy features optical sectioning in the excitation process. This reduces phototoxicity and photobleaching by up to four orders of magnitude compared with that caused by confocal fluorescence microscopy, simplifies segmentation and quantification for three-dimensional cell biology, and supports the transition from on-demand to systematic data acquisition in developmental biology applications.
Remote assessment of diabetic foot ulcers using a novel wound imaging system.
Bowling, Frank L; King, Laurie; Paterson, James A; Hu, Jingyi; Lipsky, Benjamin A; Matthews, David R; Boulton, Andrew J M
2011-01-01
Telemedicine allows experts to assess patients in remote locations, enabling quality convenient, cost-effective care. To help assess foot wounds remotely, we investigated the reliability of a novel optical imaging system employing a three-dimensional camera and disposable optical marker. We first examined inter- and intraoperator measurement variability (correlation coefficient) of five clinicians examining three different wounds. Then, to assess of the system's ability to identify key clinically relevant features, we had two clinicians evaluate 20 different wounds at two centers, recording observations on a standardized form. Three other clinicians recorded their observations using only the corresponding three-dimensional images. Using the in-person assessment as the criterion standard, we assessed concordance of the remote with in-person assessments. Measurement variation of area was 3.3% for intraoperator and 11.9% for interoperator; difference in clinician opinion about wound boundary location was significant. Overall agreement for remote vs. in-person assessments was good, but was lowest on the subjective clinical assessments, e.g., value of debridement to improve healing. Limitations of imaging included inability to show certain characteristics, e.g., moistness or exudation. Clinicians gave positive feedback on visual fidelity. This pilot study showed that a clinician viewing only the three-dimensional images could accurately measure and assess a diabetic foot wound remotely. © 2010 by the Wound Healing Society.
Parallax scanning methods for stereoscopic three-dimensional imaging
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Mayhew, Craig M.
2012-03-01
Under certain circumstances, conventional stereoscopic imagery is subject to being misinterpreted. Stereo perception created from two static horizontally separated views can create a "cut out" 2D appearance for objects at various planes of depth. The subject volume looks three-dimensional, but the objects themselves appear flat. This is especially true if the images are captured using small disparities. One potential explanation for this effect is that, although three-dimensional perception comes primarily from binocular vision, a human's gaze (the direction and orientation of a person's eyes with respect to their environment) and head motion also contribute additional sub-process information. The absence of this information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another contributing factor may be the absence of vertical disparity information in a traditional stereoscopy display. Recently, Parallax Scanning technologies have been introduced, which provide (1) a scanning methodology, (2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the human interocular distances.1 To test whether these three features would improve the realism and reduce the cardboard cutout effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial stereoscopic digital cinema productions and have tested the results with a panel of stereo experts. These informal experiments show that the addition of PS information into the left and right image capture improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and pleasant to view.
ProteinShader: illustrative rendering of macromolecules
Weber, Joseph R
2009-01-01
Background Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space filling or balls and sticks style models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings. Results The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free platform-independent open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline. Conclusion By programming to the graphics processor unit, ProteinShader is able to produce high quality images and illustrative rendering effects in real-time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimum distortion of the images. PMID:19331660
Xin, Zhaowei; Wei, Dong; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2018-02-19
Light-field imaging is a crucial and straightforward way of measuring and analyzing surrounding light worlds. In this paper, a dual-polarized light-field imaging micro-system based on a twisted nematic liquid-crystal microlens array (TN-LCMLA) for direct three-dimensional (3D) observation is fabricated and demonstrated. The prototyped camera has been constructed by integrating a TN-LCMLA with a common CMOS sensor array. By switching the working state of the TN-LCMLA, two orthogonally polarized light-field images can be remapped through the functioned imaging sensors. The imaging micro-system in conjunction with the electric-optical microstructure can be used to perform polarization and light-field imaging, simultaneously. Compared with conventional plenoptic cameras using liquid-crystal microlens array, the polarization-independent light-field images with a high image quality can be obtained in the arbitrary polarization state selected. We experimentally demonstrate characters including a relatively wide operation range in the manipulation of incident beams and the multiple imaging modes, such as conventional two-dimensional imaging, light-field imaging, and polarization imaging. Considering the obvious features of the TN-LCMLA, such as very low power consumption, providing multiple imaging modes mentioned, simple and low-cost manufacturing, the imaging micro-system integrated with this kind of liquid-crystal microstructure driven electrically presents the potential capability of directly observing a 3D object in typical scattering media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trease, Lynn L.; Trease, Harold E.; Fowler, John
2007-03-15
One of the critical steps toward performing computational biology simulations, using mesh based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional, therefore the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process.more » The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features in an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.« less
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.
Zhao, Guangjun; Wang, Xuchu; Niu, Yanmin; Tan, Liwen; Zhang, Shao-Xiang
2016-01-01
Cryosection brain images in Chinese Visible Human (CVH) dataset contain rich anatomical structure information of tissues because of its high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable for cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissues segmentation method that uses stacked autoencoder (SAE) to automatically learn the deep feature representations. Specifically, our model includes two successive parts where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and then these features are sent to Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which show their potential in exploring the high-resolution anatomical structures of human brain. PMID:27057543
Zhao, Guangjun; Wang, Xuchu; Niu, Yanmin; Tan, Liwen; Zhang, Shao-Xiang
2016-01-01
Cryosection brain images in Chinese Visible Human (CVH) dataset contain rich anatomical structure information of tissues because of its high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable for cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissues segmentation method that uses stacked autoencoder (SAE) to automatically learn the deep feature representations. Specifically, our model includes two successive parts where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and then these features are sent to Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which show their potential in exploring the high-resolution anatomical structures of human brain.
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors, such as Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of the large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce its storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that the feature (dimension) selection is a better choice for high-dimensional FV/VLAD than the feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that, many dimensions in FV/VLAD are noise. Throwing them away using feature selection is better than compressing them and useful dimensions altogether using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combining with the 1-bit quantization, feature selection has achieved both higher accuracy and less computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video image of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image base on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
Schad, L R; Boesecke, R; Schlegel, W; Hartmann, G H; Sturm, V; Strauss, L G; Lorenz, W J
1987-01-01
A treatment planning system for stereotactic convergent beam irradiation of deeply localized brain tumors is reported. The treatment technique consists of several moving field irradiations in noncoplanar planes at a linear accelerator facility. Using collimated narrow beams, a high concentration of dose within small volumes with a dose gradient of 10-15%/mm was obtained. The dose calculation was based on geometrical information of multiplanar CT or magnetic resonance (MR) imaging data. The patient's head was fixed in a stereotactic localization system, which is usable at CT, MR, and positron emission tomography (PET) installations. Special computer programs for correction of the geometrical MR distortions allowed a precise correlation of the different imaging modalities. The therapist can use combinations of CT, MR, and PET data for defining target volume. For instance, the superior soft tissue contrast of MR coupled with the metabolic features of PET may be a useful addition in the radiation treatment planning process. Furthermore, other features such as calculated dose distribution to critical structures can also be transferred from one set of imaging data to another and can be displayed as three-dimensional shaded structures.
The extraction and use of facial features in low bit-rate visual communication.
Pearson, D
1992-01-29
A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.
Efficient Data Mining for Local Binary Pattern in Texture Image Analysis
Kwak, Jin Tae; Xu, Sheng; Wood, Bradford J.
2015-01-01
Local binary pattern (LBP) is a simple gray scale descriptor to characterize the local distribution of the grey levels in an image. Multi-resolution LBP and/or combinations of the LBPs have shown to be effective in texture image analysis. However, it is unclear what resolutions or combinations to choose for texture analysis. Examining all the possible cases is impractical and intractable due to the exponential growth in a feature space. This limits the accuracy and time- and space-efficiency of LBP. Here, we propose a data mining approach for LBP, which efficiently explores a high-dimensional feature space and finds a relatively smaller number of discriminative features. The features can be any combinations of LBPs. These may not be achievable with conventional approaches. Hence, our approach not only fully utilizes the capability of LBP but also maintains the low computational complexity. We incorporated three different descriptors (LBP, local contrast measure, and local directional derivative measure) with three spatial resolutions and evaluated our approach using two comprehensive texture databases. The results demonstrated the effectiveness and robustness of our approach to different experimental designs and texture images. PMID:25767332
NASA Astrophysics Data System (ADS)
Wang, X.
2018-04-01
Tourism geological resources are of high value in admiration, scientific research and universal education, which need to be protected and rationally utilized. In the past, most of the remote sensing investigations of tourism geological resources used two-dimensional remote sensing interpretation method, which made it difficult for some geological heritages to be interpreted and led to the omission of some information. This aim of this paper is to assess the value of a method using the three-dimensional visual remote sensing image to extract information of geological heritages. skyline software system is applied to fuse the 0.36 m aerial images and 5m interval DEM to establish the digital earth model. Based on the three-dimensional shape, color tone, shadow, texture and other image features, the distribution of tourism geological resources in Shandong Province and the location of geological heritage sites were obtained, such as geological structure, DaiGu landform, granite landform, Volcanic landform, sandy landform, Waterscapes, etc. The results show that using this method for remote sensing interpretation is highly recognizable, making the interpretation more accurate and comprehensive.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knogler, Thomas; El-Rabadi, Karem; Weber, Michael
2014-12-15
Purpose: To determine the diagnostic performance of three-dimensional (3D) texture analysis (TA) of contrast-enhanced computed tomography (CE-CT) images for treatment response assessment in patients with Hodgkin lymphoma (HL), compared with F-18-fludeoxyglucose (FDG) positron emission tomography/CT. Methods: 3D TA of 48 lymph nodes in 29 patients was performed on venous-phase CE-CT images before and after chemotherapy. All lymph nodes showed pathologically elevated FDG uptake at baseline. A stepwise logistic regression with forward selection was performed to identify classic CT parameters and texture features (TF) that enable the separation of complete response (CR) and persistent disease. Results: The TF fraction of imagemore » in runs, calculated for the 45° direction, was able to correctly identify CR with an accuracy of 75%, a sensitivity of 79.3%, and a specificity of 68.4%. Classical CT features achieved an accuracy of 75%, a sensitivity of 86.2%, and a specificity of 57.9%, whereas the combination of TF and CT imaging achieved an accuracy of 83.3%, a sensitivity of 86.2%, and a specificity of 78.9%. Conclusions: 3D TA of CE-CT images is potentially useful to identify nodal residual disease in HL, with a performance comparable to that of classical CT parameters. Best results are achieved when TA and classical CT features are combined.« less
Optimized SIFTFlow for registration of whole-mount histology to reference optical images
Shojaii, Rushin; Martel, Anne L.
2016-01-01
Abstract. The registration of two-dimensional histology images to reference images from other modalities is an important preprocessing step in the reconstruction of three-dimensional histology volumes. This is a challenging problem because of the differences in the appearances of histology images and other modalities, and the presence of large nonrigid deformations which occur during slide preparation. This paper shows the feasibility of using densely sampled scale-invariant feature transform (SIFT) features and a SIFTFlow deformable registration algorithm for coregistering whole-mount histology images with blockface optical images. We present a method for jointly optimizing the regularization parameters used by the SIFTFlow objective function and use it to determine the most appropriate values for the registration of breast lumpectomy specimens. We demonstrate that tuning the regularization parameters results in significant improvements in accuracy and we also show that SIFTFlow outperforms a previously described edge-based registration method. The accuracy of the histology images to blockface images registration using the optimized SIFTFlow method was assessed using an independent test set of images from five different lumpectomy specimens and the mean registration error was 0.32±0.22 mm. PMID:27774494
NASA Astrophysics Data System (ADS)
Klyen, Blake R.; Shavlakadze, Thea; Radley-Crabb, Hannah G.; Grounds, Miranda D.; Sampson, David D.
2011-07-01
Three-dimensional optical coherence tomography (3D-OCT) was used to image the structure and pathology of skeletal muscle tissue from the treadmill-exercised mdx mouse model of human Duchenne muscular dystrophy. Optical coherence tomography (OCT) images of excised muscle samples were compared with co-registered hematoxylin and eosin-stained and Evans blue dye fluorescence histology. We show, for the first time, structural 3D-OCT images of skeletal muscle dystropathology well correlated with co-located histology. OCT could identify morphological features of interest and necrotic lesions within the muscle tissue samples based on intrinsic optical contrast. These findings demonstrate the utility of 3D-OCT for the evaluation of small-animal skeletal muscle morphology and pathology, particularly for studies of mouse models of muscular dystrophy.
A three-dimensional study of the glottal jet
NASA Astrophysics Data System (ADS)
Krebs, F.; Silva, F.; Sciamarella, D.; Artana, G.
2012-05-01
This work builds upon the efforts to characterize the three-dimensional features of the glottal jet during vocal fold vibration. The study uses a Stereoscopic Particle Image Velocimetry setup on a self-oscillating physical model of the vocal folds with a uniform vocal tract. Time averages are documented and analyzed within the framework given by observations reported for jets exiting elongated nozzles. Phase averages are locked to the audio signal and used to obtain a volumetric reconstruction of the jet. From this reconstruction, the intra-cycle dynamics of the jet axis switching is disclosed.
Fault Diagnosis for Rotating Machinery: A Method based on Image Processing
Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie
2016-01-01
Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery has gained wide attention for its significance in preventing catastrophic accident and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multi-disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image-processing for fault diagnosis of rotating machinery. Different from traditional analysis method in one-dimensional space, this study employs computing method in the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features, is employed to automatically exact fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopt to reduce the dimensionality of the feature vector. At last, probabilistic neural network is introduced for fault identification. Two typical rotating machinery, axial piston hydraulic pump and self-priming centrifugal pumps, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed method based on image-processing achieves a high accuracy, thus providing a highly effective means to fault diagnosis for rotating machinery. PMID:27711246
Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.
Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie
2016-01-01
Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery has gained wide attention for its significance in preventing catastrophic accident and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multi-disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image-processing for fault diagnosis of rotating machinery. Different from traditional analysis method in one-dimensional space, this study employs computing method in the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features, is employed to automatically exact fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopt to reduce the dimensionality of the feature vector. At last, probabilistic neural network is introduced for fault identification. Two typical rotating machinery, axial piston hydraulic pump and self-priming centrifugal pumps, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed method based on image-processing achieves a high accuracy, thus providing a highly effective means to fault diagnosis for rotating machinery.
Ross, James D.; Cullen, D. Kacy; Harris, James P.; LaPlaca, Michelle C.; DeWeerth, Stephen P.
2015-01-01
Three-dimensional (3-D) image analysis techniques provide a powerful means to rapidly and accurately assess complex morphological and functional interactions between neural cells. Current software-based identification methods of neural cells generally fall into two applications: (1) segmentation of cell nuclei in high-density constructs or (2) tracing of cell neurites in single cell investigations. We have developed novel methodologies to permit the systematic identification of populations of neuronal somata possessing rich morphological detail and dense neurite arborization throughout thick tissue or 3-D in vitro constructs. The image analysis incorporates several novel automated features for the discrimination of neurites and somata by initially classifying features in 2-D and merging these classifications into 3-D objects; the 3-D reconstructions automatically identify and adjust for over and under segmentation errors. Additionally, the platform provides for software-assisted error corrections to further minimize error. These features attain very accurate cell boundary identifications to handle a wide range of morphological complexities. We validated these tools using confocal z-stacks from thick 3-D neural constructs where neuronal somata had varying degrees of neurite arborization and complexity, achieving an accuracy of ≥95%. We demonstrated the robustness of these algorithms in a more complex arena through the automated segmentation of neural cells in ex vivo brain slices. These novel methods surpass previous techniques by improving the robustness and accuracy by: (1) the ability to process neurites and somata, (2) bidirectional segmentation correction, and (3) validation via software-assisted user input. This 3-D image analysis platform provides valuable tools for the unbiased analysis of neural tissue or tissue surrogates within a 3-D context, appropriate for the study of multi-dimensional cell-cell and cell-extracellular matrix interactions. PMID:26257609
Detecting natural occlusion boundaries using local cues
DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.
2012-01-01
Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
A fish on the hunt, observed neuron by neuron
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-01-01
This three-dimensional microscopy image reveals an output neuron of the optic tectum lighting up in response to visual information from the retina. The scientists used this state-of-the-art imaging technology to learn how neurons in the optic tectum take visual information and convert it into an output that drives action. More information: http://newscenter.lbl.gov/feature-stories/2010/10/29/zebrafish-vision/
Spectral-spatial classification of hyperspectral image using three-dimensional convolution network
NASA Astrophysics Data System (ADS)
Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu
2018-01-01
Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
Automatic recognition of ship types from infrared images using superstructure moment invariants
NASA Astrophysics Data System (ADS)
Li, Heng; Wang, Xinyu
2007-11-01
Automatic object recognition is an active area of interest for military and commercial applications. In this paper, a system addressing autonomous recognition of ship types in infrared images is proposed. Firstly, an approach of segmentation based on detection of salient features of the target with subsequent shadow removing is proposed, as is the base of the subsequent object recognition. Considering the differences between the shapes of various ships mainly lie in their superstructures, we then use superstructure moment functions invariant to translation, rotation and scale differences in input patterns and develop a robust algorithm of obtaining ship superstructure. Subsequently a back-propagation neural network is used as a classifier in the recognition stage and projection images of simulated three-dimensional ship models are used as the training sets. Our recognition model was implemented and experimentally validated using both simulated three-dimensional ship model images and real images derived from video of an AN/AAS-44V Forward Looking Infrared(FLIR) sensor.
[Three-dimensional reconstruction of functional brain images].
Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I
1999-08-01
We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods to identify activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, with the conventional presentations of these functional brain images, such as horizontal slices, three directional projection, or brain surface coloring, makes understanding and interpreting the positional relationships among various brain areas difficult. Therefore, we developed three-dimensionally reconstructed images from these functional brain images to improve the interpretation. The subjects were 12 normal volunteers. The following three types of images were constructed: 1) routine images by SPM, 2) three-dimensional static images, and 3) three-dimensional dynamic images, after PET images were analyzed by SPM during daily dialog listening. The creation of images of both the three-dimensional static and dynamic types employed the volume rendering method by VTK (The Visualization Toolkit). Since the functional brain images did not include original brain images, we synthesized SPM and MRI brain images by self-made C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Images of both the three-dimensional static and dynamic types were processed by a personal computer system. Our newly created images showed clearer positional relationships among activated brain areas compared to the conventional method. To date, functional brain images have been employed in fields such as neurology or neurosurgery, however, these images may be useful even in the field of otorhinolaryngology, to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science. Currently, the surface model is the most common method of three-dimensional display. However, the volume rendering method may be more effective for imaging regions such as the brain.
Aqeel, Yousuf; Siddiqui, Ruqaiyyah; Ateeq, Muhammad; Raza Shah, Muhammad; Kulsoom, Huma; Khan, Naveed Ahmed
2015-01-01
Light microscopy and electron microscopy have been successfully used in the study of microbes, as well as free-living protists. Unlike light microscopy, which enables us to observe living organisms or the electron microscope which provides a two-dimensional image, atomic force microscopy provides a three-dimensional surface profile. Here, we observed two free-living amoebae, Acanthamoeba castellanii and Balamuthia mandrillaris under the phase contrast inverted microscope, transmission electron microscope and atomic force microscope. Although light microscopy was of lower magnification, it revealed functional biology of live amoebae such as motility and osmoregulation using contractile vacuoles of the trophozoite stage, but it is of limited value in defining the cyst stage. In contrast, transmission electron microscopy showed significantly greater magnification and resolution to reveal the ultra-structural features of trophozoites and cysts including intracellular organelles and cyst wall characteristics but it only produced a snapshot in time of a dead amoeba cell. Atomic force microscopy produced three-dimensional images providing detailed topographic description of shape and surface, phase imaging measuring boundary stiffness, and amplitude measurements including width, height and length of A. castellanii and B. mandrillaris trophozoites and cysts. These results demonstrate the importance of the application of various microscopic methods in the biological and structural characterization of the whole cell, ultra-structural features, as well as surface components and cytoskeleton of protist pathogens. © 2014 The Author(s) Journal of Eukaryotic Microbiology © 2014 International Society of Protistologists.
NASA Technical Reports Server (NTRS)
Gramenopoulos, N. (Principal Investigator)
1974-01-01
The author has identified the following significant results. A diffraction pattern analysis of MSS images led to the development of spatial signatures for farm land, urban areas and mountains. Four spatial features are employed to describe the spatial characteristics of image cells in the digital data. Three spectral features are combined with the spatial features to form a seven dimensional vector describing each cell. Then, the classification of the feature vectors is accomplished by using the maximum likelihood criterion. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month, but vary substantially between seasons. Three ERTS-1 images from the Phoenix, Arizona area were processed, and recognition rates between 85% and 100% were obtained for the terrain classes of desert, farms, mountains, and urban areas. To eliminate the need for training data, a new clustering algorithm has been developed. Seven ERTS-1 images from four test sites have been processed through the clustering algorithm, and high recognition rates have been achieved for all terrain classes.
Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for the each classifier's AUC performance. In the large U.S. data set, sample high performance results include, AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua
2014-01-01
Objective To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two dimensional CT images used in diagnoses of lung cancer. Materials and Methods A total of 6,299 CT images were acquired from 336 patients, with 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant from 252 patients (150 male, 102 female). Further to this, nineteen patient information categories, which included seven demographic parameters and twelve morphological features, were also collected. A contourlet was used to extract fourteen types of textural features. These were then used to establish three support vector machine models. One comprised a database constructed of nineteen collected patient information categories, another included contourlet textural features and the third one contained both sets of information. Ten-fold cross-validation was used to evaluate the diagnosis results for the three databases, with sensitivity, specificity, accuracy, the area under the curve (AUC), precision, Youden index, and F-measure were used as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Results Using a database containing textural features and patient information, sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were: 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93 respectively. These results were higher than results derived using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83 respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85 respectively). Using the SMOTE as a pre-processing procedure, new balanced database generated, including observations of 5,816 benign ROIs and 5,815 malignant ROIs, and accuracy was 0.93. Conclusion Our results indicate that the combined contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer. PMID:25250576
Development of a customizable software application for medical imaging analysis and visualization.
Martinez-Escobar, Marisol; Peloquin, Catherine; Juhnke, Bethany; Peddicord, Joanna; Jose, Sonia; Noon, Christian; Foo, Jung Leng; Winer, Eliot
2011-01-01
Graphics technology has extended medical imaging tools to the hands of surgeons and doctors, beyond the radiology suite. However, a common issue in most medical imaging software is the added complexity for non-radiologists. This paper presents the development of a unique software toolset that is highly customizable and targeted at the general physicians as well as the medical specialists. The core functionality includes features such as viewing medical images in two-and three-dimensional representations, clipping, tissue windowing, and coloring. Additional features can be loaded in the form of 'plug-ins' such as tumor segmentation, tissue deformation, and surgical planning. This allows the software to be lightweight and easy to use while still giving the user the flexibility of adding the necessary features, thus catering to a wide range of user population.
Quantitative imaging methods in osteoporosis.
Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G
2016-12-01
Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.
High resolution projection micro stereolithography system and method
Spadaccini, Christopher M.; Farquar, George; Weisgraber, Todd; Gemberling, Steven; Fang, Nicholas; Xu, Jun; Alonso, Matthew; Lee, Howon
2016-11-15
A high-resolution P.mu.SL system and method incorporating one or more of the following features with a standard P.mu.SL system using a SLM projected digital image to form components in a stereolithographic bath: a far-field superlens for producing sub-diffraction-limited features, multiple spatial light modulators (SLM) to generate spatially-controlled three-dimensional interference holograms with nanoscale features, and the integration of microfluidic components into the resin bath of a P.mu.SL system to fabricate microstructures of different materials.
NASA Astrophysics Data System (ADS)
Dangi, Shusil; Ben-Zikri, Yehuda K.; Cahill, Nathan; Schwarz, Karl Q.; Linte, Cristian A.
2015-03-01
Two-dimensional (2D) ultrasound (US) has been the clinical standard for over two decades for monitoring and assessing cardiac function and providing support via intra-operative visualization and guidance for minimally invasive cardiac interventions. Developments in three-dimensional (3D) image acquisition and transducer design and technology have revolutionized echocardiography imaging enabling both real-time 3D trans-esophageal and intra-cardiac image acquisition. However, in most cases the clinicians do not access the entire 3D image volume when analyzing the data, rather they focus on several key views that render the cardiac anatomy of interest during the US imaging exam. This approach enables image acquisition at a much higher spatial and temporal resolution. Two such common approaches are the bi-plane and tri-plane data acquisition protocols; as their name states, the former comprises two orthogonal image views, while the latter depicts the cardiac anatomy based on three co-axially intersecting views spaced at 600 to one another. Since cardiac anatomy is continuously changing, the intra-operative anatomy depicted using real-time US imaging also needs to be updated by tracking the key features of interest and endocardial left ventricle (LV) boundaries. Therefore, rapid automatic feature tracking in US images is critical for three reasons: 1) to perform cardiac function assessment; 2) to identify location of surgical targets for accurate tool to target navigation and on-target instrument positioning; and 3) to enable pre- to intra-op image registration as a means to fuse pre-op CT or MR images used during planning with intra-operative images for enhanced guidance. In this paper we utilize monogenic filtering, graph-cut based segmentation and robust spline smoothing in a combined work flow to process the acquired tri-plane TEE time series US images and demonstrate robust and accurate tracking of the LV endocardial features. We reconstruct the endocardial LV geometry using the tri-plane contours and spline interpolation, and assess the accuracy of the proposed work flow against gold-standard results from the GE Echopac PC clinical software according to quantitative clinical LV characterization parameters, such as the length, circumference, area and volume. Our proposed combined work flow leads to consistent, rapid and automated identification of the LV endocardium, suitable for intra-operative applications and "on-the-fly" computer-assisted assessment of ejection fraction for cardiac function monitoring.Two-dimensional (2D) ultrasound (US) has been the clinical standard for over two decades for monitoring and assessing cardiac function and providing support via intra-operative visualization and guidance for minimally invasive cardiac interventions. Developments in three-dimensional (3D) image acquisition and transducer design and technology have revolutionized echocardiography imaging enabling both real-time 3D trans-esophageal and intra-cardiac image acquisition. However, in most cases the clinicians do not access the entire 3D image volume when analyzing the data, rather they focus on several key views that render the cardiac anatomy of interest during the US imaging exam. This approach enables image acquisition at a much higher spatial and temporal resolution. Two such common approaches are the bi-plane and tri-plane data acquisition protocols; as their name states, the former comprises two orthogonal image views, while the latter depicts the cardiac anatomy based on three co-axially intersecting views spaced at 600 to one another. Since cardiac anatomy is continuously changing, the intra-operative anatomy depicted using real-time US imaging also needs to be updated by tracking the key features of interest and endocardial left ventricle (LV) boundaries. Therefore, rapid automatic feature tracking in US images is critical for three reasons: 1) to perform cardiac function assessment; 2) to identify location of surgical targets for accurate tool to target navigation and on-target instrument positioning; and 3) to enable pre- to intra-op image registration as a means to fuse pre-op CT or MR images used during planning with intra-operative images for enhanced guidance. In this paper we utilize monogenic filtering, graph-cut based segmentation and robust spline smoothing in a combined work flow to process the acquired tri-plane TEE time series US images and demonstrate robust and accurate tracking of the LV endocardial features. We reconstruct the endocardial LV geometry using the tri-plane contours and spline interpolation, and assess the accuracy of the proposed work flow against gold-standard results from the GE Echopac PC clinical software according to quantitative clinical LV characterization parameters, such as the length, circumference, area and volume. Our proposed combined work flow leads to consistent, rapid and automated identification of the LV endocardium, suitable for intra-operative applications and on-the- y" computer-assisted assessment of ejection fraction for cardiac function monitoring.
Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael
2015-01-01
Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
Mizutani, Hiroya; Ono, Satoshi; Ushiku, Tetsuo; Kudo, Yotaro; Ikemura, Masako; Kageyama, Natsuko; Yamamichi, Nobutake; Fujishiro, Mitsuhiro; Someya, Takao; Fukayama, Masashi; Koike, Kazuhiko; Onodera, Hiroshi
2018-02-01
Although high-resolution three-dimensional imaging of endoscopically resected gastrointestinal specimens can help elucidating morphological features of gastrointestinal mucosa or tumor, there are no established methods to achieve this without breaking specimens apart. We evaluated the utility of transparency-enhancing technology for three-dimensional assessment of gastrointestinal mucosa in porcine models. Esophagus, stomach, and colon mucosa samples obtained from a sacrificed swine were formalin-fixed and paraffin-embedded, and subsequently deparaffinized for analysis. The samples were fluorescently stained, optically cleared using transparency-enhancing technology: ilLUmination of Cleared organs to IDentify target molecules method (LUCID), and visualized using laser scanning microscopy. After observation, all specimens were paraffin-embedded again and evaluated by conventional histopathological assessment to measure the impact of transparency-enhancing procedures. As a result, microscopic observation revealed horizontal section views of mucosa at deeper levels and enabled the three-dimensional image reconstruction of glandular and vascular structures. Besides, paraffin-embedded specimens after transparency-enhancing procedures were all assessed appropriately by conventional histopathological staining. These results suggest that transparency-enhancing technology may be feasible for clinical application and enable the three-dimensional structural analysis of endoscopic resected specimen non-destructively. Although there remain many limitations or problems to be solved, this promising technology might represent a novel histopathological method for evaluating gastrointestinal cancers. © 2018 Japanese Society of Pathology and John Wiley & Sons Australia, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt
A system and a method for generating a three-dimensional image of a rock formation, compressional velocity VP, shear velocity VS and velocity ratio VP/VS of a rock formation are provided. A first acoustic signal includes a first plurality of pulses. A second acoustic signal from a second source includes a second plurality of pulses. A detected signal returning to the borehole includes a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within an intersection volume. The received signal is processed to extract the signal over noise and/or signals resultingmore » from linear interaction and the three dimensional image of is generated.« less
Analysis of x-ray tomography data of an extruded low density styrenic foam: an image analysis study
NASA Astrophysics Data System (ADS)
Lin, Jui-Ching; Heeschen, William
2016-10-01
Extruded styrenic foams are low density foams that are widely used for thermal insulation. It is difficult to precisely characterize the structure of the cells in low density foams by traditional cross-section viewing due to the frailty of the walls of the cells. X-ray computed tomography (CT) is a non-destructive, three dimensional structure characterization technique that has great potential for structure characterization of styrenic foams. Unfortunately the intrinsic artifacts of the data and the artifacts generated during image reconstruction are often comparable in size and shape to the thin walls of the foam, making robust and reliable analysis of cell sizes challenging. We explored three different image processing methods to clean up artifacts in the reconstructed images, thus allowing quantitative three dimensional determination of cell size in a low density styrenic foam. Three image processing approaches - an intensity based approach, an intensity variance based approach, and a machine learning based approach - are explored in this study, and the machine learning image feature classification method was shown to be the best. Individual cells are segmented within the images after the images were cleaned up using the three different methods and the cell sizes are measured and compared in the study. Although the collected data with the image analysis methods together did not yield enough measurements for a good statistic of the measurement of cell sizes, the problem can be resolved by measuring multiple samples or increasing imaging field of view.
Three-dimensional Radar Imaging of a Building
2012-12-01
spotlight configuration and H-V ( cross ) polarization as seen from two different aspect angles. The feature colors correspond to their brightness... cross - ranges but at different heights. This effect may create significant confusion in image interpretation and result in missed target detections...over a range of azimuth angles ( centered at = 0°) and elevation angles ( centered at 0), creating cross -range and height resolution, while
Improved disparity map analysis through the fusion of monocular image segmentations
NASA Technical Reports Server (NTRS)
Perlant, Frederic P.; Mckeown, David M.
1991-01-01
The focus is to examine how estimates of three dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. The utilization of surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements are demonstrated due to monocular fusion with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three dimensional scene interpretation systems are also discussed.
Computational ghost imaging using deep learning
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi
2018-04-01
Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
Three dimensional identification card and applications
NASA Astrophysics Data System (ADS)
Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao
2016-10-01
Three dimensional Identification Card, with its three-dimensional personal image displayed and stored for personal identification, is supposed be the advanced version of the present two-dimensional identification card in the future [1]. Three dimensional Identification Card means that there are three-dimensional optical techniques are used, the personal image on ID card is displayed to be three-dimensional, so we can see three dimensional personal face. The ID card also stores the three-dimensional face information in its inside electronics chip, which might be recorded by using two-channel cameras, and it can be displayed in computer as three-dimensional images for personal identification. Three-dimensional ID card might be one interesting direction to update the present two-dimensional card in the future. Three-dimension ID card might be widely used in airport custom, entrance of hotel, school, university, as passport for on-line banking, registration of on-line game, etc...
Three-dimensional spatiotemporal features for fast content-based retrieval of focal liver lesions.
Roy, Sharmili; Chi, Yanling; Liu, Jimin; Venkatesh, Sudhakar K; Brown, Michael S
2014-11-01
Content-based image retrieval systems for 3-D medical datasets still largely rely on 2-D image-based features extracted from a few representative slices of the image stack. Most 2 -D features that are currently used in the literature not only model a 3-D tumor incompletely but are also highly expensive in terms of computation time, especially for high-resolution datasets. Radiologist-specified semantic labels are sometimes used along with image-based 2-D features to improve the retrieval performance. Since radiological labels show large interuser variability, are often unstructured, and require user interaction, their use as lesion characterizing features is highly subjective, tedious, and slow. In this paper, we propose a 3-D image-based spatiotemporal feature extraction framework for fast content-based retrieval of focal liver lesions. All the features are computer generated and are extracted from four-phase abdominal CT images. Retrieval performance and query processing times for the proposed framework is evaluated on a database of 44 hepatic lesions comprising of five pathological types. Bull's eye percentage score above 85% is achieved for three out of the five lesion pathologies and for 98% of query lesions, at least one same type of lesion is ranked among the top two retrieved results. Experiments show that the proposed system's query processing is more than 20 times faster than other already published systems that use 2-D features. With fast computation time and high retrieval accuracy, the proposed system has the potential to be used as an assistant to radiologists for routine hepatic tumor diagnosis.
Ultra-wideband three-dimensional optoacoustic tomography.
Gateau, Jérôme; Chekkoury, Andrei; Ntziachristos, Vasilis
2013-11-15
Broadband optoacoustic waves generated by biological tissues excited with nanosecond laser pulses carry information corresponding to a wide range of geometrical scales. Typically, the frequency content present in the signals generated during optoacoustic imaging is much larger compared to the frequency band captured by common ultrasonic detectors, the latter typically acting as bandpass filters. To image optical absorption within structures ranging from entire organs to microvasculature in three dimensions, we implemented optoacoustic tomography with two ultrasound linear arrays featuring a center frequency of 6 and 24 MHz, respectively. In the present work, we show that complementary information on anatomical features could be retrieved and provide a better understanding on the localization of structures in the general anatomy by analyzing multi-bandwidth datasets acquired on a freshly excised kidney.
NASA Astrophysics Data System (ADS)
Haitjema, Henk M.
1985-10-01
A technique is presented to incorporate three-dimensional flow in a Dupuit-Forchheimer model. The method is based on superposition of approximate analytic solutions to both two- and three-dimensional flow features in a confined aquifer of infinite extent. Three-dimensional solutions are used in the domain of interest, while farfield conditions are represented by two-dimensional solutions. Approximate three- dimensional solutions have been derived for a partially penetrating well and a shallow creek. Each of these solutions satisfies the condition that no flow occurs across the confining layers of the aquifer. Because of this condition, the flow at some distance of a three-dimensional feature becomes nearly horizontal. Consequently, remotely from a three-dimensional feature, its three-dimensional solution is replaced by a corresponding two-dimensional one. The latter solution is trivial as compared to its three-dimensional counterpart, and its use greatly enhances the computational efficiency of the model. As an example, the flow is modeled between a partially penetrating well and a shallow creek that occur in a regional aquifer system.
A 3D ultrasound scanner: real time filtering and rendering algorithms.
Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M
1997-01-01
The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical and user friendly way to help the physician during diagnosis. A comparison is presented among several algorithms for digital filtering, data segmentation and rendering for real time, PC based, three-dimensional reconstruction from B-mode ultrasonic biomedical images. Several algorithms for digital filtering have been compared as relates to processing time and to final image quality. Three-dimensional data segmentation techniques and rendering has been carried out with special reference to user friendly features for foreseeable applications and reconstruction speed.
Tomographic PIV behind a prosthetic heart valve
NASA Astrophysics Data System (ADS)
Hasler, D.; Landolt, A.; Obrist, D.
2016-05-01
The instantaneous three-dimensional velocity field past a bioprosthetic heart valve was measured using tomographic particle image velocimetry. Two digital cameras were used together with a mirror setup to record PIV images from four different angles. Measurements were conducted in a transparent silicone phantom with a simplified geometry of the aortic root. The refraction indices of the silicone phantom and the working fluid were matched to minimize optical distortion from the flow field to the cameras. The silicone phantom of the aorta was integrated in a flow loop driven by a piston pump. Measurements were conducted for steady and pulsatile flow conditions. Results of the instantaneous, ensemble and phase-averaged flow field are presented. The three-dimensional velocity field reveals a flow topology, which can be related to features of the aortic valve prosthesis.
The design and performance characteristics of a cellular logic 3-D image classification processor
NASA Astrophysics Data System (ADS)
Ankeney, L. A.
1981-04-01
The introduction of high resolution scanning laser radar systems which are capable of collecting range and reflectivity images, is predicted to significantly influence the development of processors capable of performing autonomous target classification tasks. Actively sensed range images are shown to be superior to passively collected infrared images in both image stability and information content. An illustrated tutorial introduces cellular logic (neighborhood) transformations and two and three dimensional erosion and dilation operations which are used for noise filters and geometric shape measurement. A unique 'cookbook' approach to selecting a sequence of neighborhood transformations suitable for object measurement is developed and related to false alarm rate and algorithm effectiveness measures. The cookbook design approach is used to develop an algorithm to classify objects based upon their 3-D geometrical features. A Monte Carlo performance analysis is used to demonstrate the utility of the design approach by characterizing the ability of the algorithm to classify randomly positioned three dimensional objects in the presence of additive noise, scale variations, and other forms of image distortion.
Yu, Zeyun; Holst, Michael J.; Hayashi, Takeharu; Bajaj, Chandrajit L.; Ellisman, Mark H.; McCammon, J. Andrew; Hoshijima, Masahiko
2009-01-01
A general framework of image-based geometric processing is presented to bridge the gap between three-dimensional (3D) imaging that provides structural details of a biological system and mathematical simulation where high-quality surface or volumetric meshes are required. A 3D density map is processed in the order of image pre-processing (contrast enhancement and anisotropic filtering), feature extraction (boundary segmentation and skeletonization), and high-quality and realistic surface (triangular) and volumetric (tetrahedral) mesh generation. While the tool-chain described is applicable to general types of 3D imaging data, the performance is demonstrated specifically on membrane-bound organelles in ventricular myocytes that are imaged and reconstructed with electron microscopic (EM) tomography and two-photon microscopy (T-PM). Of particular interest in this study are two types of membrane-bound Ca2+-handling organelles, namely, transverse tubules (T-tubules) and junctional sarcoplasmic reticulum (jSR), both of which play an important role in regulating the excitation-contraction (E-C) coupling through dynamic Ca2+ mobilization in cardiomyocytes. PMID:18835449
Yu, Zeyun; Holst, Michael J; Hayashi, Takeharu; Bajaj, Chandrajit L; Ellisman, Mark H; McCammon, J Andrew; Hoshijima, Masahiko
2008-12-01
A general framework of image-based geometric processing is presented to bridge the gap between three-dimensional (3D) imaging that provides structural details of a biological system and mathematical simulation where high-quality surface or volumetric meshes are required. A 3D density map is processed in the order of image pre-processing (contrast enhancement and anisotropic filtering), feature extraction (boundary segmentation and skeletonization), and high-quality and realistic surface (triangular) and volumetric (tetrahedral) mesh generation. While the tool-chain described is applicable to general types of 3D imaging data, the performance is demonstrated specifically on membrane-bound organelles in ventricular myocytes that are imaged and reconstructed with electron microscopic (EM) tomography and two-photon microscopy (T-PM). Of particular interest in this study are two types of membrane-bound Ca(2+)-handling organelles, namely, transverse tubules (T-tubules) and junctional sarcoplasmic reticulum (jSR), both of which play an important role in regulating the excitation-contraction (E-C) coupling through dynamic Ca(2+) mobilization in cardiomyocytes.
Scherer, Michael D; Kattadiyil, Mathew T; Parciak, Ewa; Puri, Shweta
2014-01-01
Three-dimensional radiographic imaging for dental implant treatment planning is gaining widespread interest and popularity. However, application of the data from 30 imaging can be a complex and daunting process initially. The purpose of this article is to describe features of three software packages and the respective computerized guided surgical templates (GST) fabricated from them. A step-by-step method of interpreting and ordering a GST to simplify the process of the surgical planning and implant placement is discussed.
Bayesian depth estimation from monocular natural images.
Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C
2017-05-01
Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinilhaq,; Widita, Rena
2014-09-30
Optical Coherence Tomography is often used in medical image acquisition to diagnose that change due easy to use and low price. Unfortunately, this type of examination produces a two-dimensional retinal image of the point of acquisition. Therefore, this study developed a method that combines and reconstruct 2-dimensional retinal images into three-dimensional images to display volumetric macular accurately. The system is built with three main stages: data acquisition, data extraction and 3-dimensional reconstruction. At data acquisition step, Optical Coherence Tomography produced six *.jpg images of each patient were further extracted with MATLAB 2010a software into six one-dimensional arrays. The six arraysmore » are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9 resulting 3-dimensional graphics of macula. Finally, system provides three-dimensional color graphs based on the data distribution normal macula. The reconstruction system which has been designed produces three-dimensional images with size of 481 × 481 × h (retinal thickness) pixels.« less
Schmidt, M J; Langen, N; Klumpp, S; Nasirimanesh, F; Shirvanchi, P; Ondreka, N; Kramer, M
2012-01-01
Although magnetic resonance imaging has been used to examine the brain of domestic ruminants, detailed information relating the precise anatomical features in these species is lacking. In this study the brain structures of calves (Bos taurus domesticus), sheep (Ovis aries), goats (Capra hircus) and a mesaticephalic dog (Canis lupis familiaris) were examined using T2-weighed Turbo Spin Echo sequences; three-dimensional models based on high-resolution gradient echo scans were used to identify brain sulci and gyri in two-dimensional images. The ruminant brains examined were similar in structure and organisation to those of other mammals but particular features included the deep depression of the insula and the pronounced gyri of the cortices, the dominant position of the visual (optic nerve, optic chiasm and rostral colliculus) and olfactory (olfactory bulb, olfactory tracts and piriform lobe) systems, and the relatively large size of the diencephalon. Copyright © 2010 Elsevier Ltd. All rights reserved.
Three-dimensional electron diffraction of plant light-harvesting complex
Wang, Da Neng; Kühlbrandt, Werner
1992-01-01
Electron diffraction patterns of two-dimensional crystals of light-harvesting chlorophyll a/b-protein complex (LHC-II) from photosynthetic membranes of pea chloroplasts, tilted at different angles up to 60°, were collected to 3.2 Å resolution at -125°C. The reflection intensities were merged into a three-dimensional data set. The Friedel R-factor and the merging R-factor were 21.8 and 27.6%, respectively. Specimen flatness and crystal size were critical for recording electron diffraction patterns from crystals at high tilts. The principal sources of experimental error were attributed to limitations of the number of unit cells contributing to an electron diffraction pattern, and to the critical electron dose. The distribution of strong diffraction spots indicated that the three-dimensional structure of LHC-II is less regular than that of other known membrane proteins and is not dominated by a particular feature of secondary structure. ImagesFIGURE 1FIGURE 2 PMID:19431817
Three-dimensional reconstruction of rat knee joint using episcopic fluorescence image capture.
Takaishi, R; Aoyama, T; Zhang, X; Higuchi, S; Yamada, S; Takakuwa, T
2014-10-01
Development of the knee joint was morphologically investigated, and the process of cavitation was analyzed by using episcopic fluorescence image capture (EFIC) to create spatial and temporal three-dimensional (3D) reconstructions. Knee joints of Wister rat embryos between embryonic day (E)14 and E20 were investigated. Samples were sectioned and visualized using an EFIC. Then, two-dimensional image stacks were reconstructed using OsiriX software, and 3D reconstructions were generated using Amira software. Cavitations of the knee joint were constructed from five divided portions. Cavity formation initiated at multiple sites at E17; among them, the femoropatellar cavity (FPC) was the first. Cavitations of the medial side preceded those of the lateral side. Each cavity connected at E20 when cavitations around the anterior cruciate ligament (ACL) and posterior cruciate ligament (PCL) were completed. Cavity formation initiated from six portions. In each portion, development proceeded asymmetrically. These results concerning anatomical development of the knee joint using EFIC contribute to a better understanding of the structural feature of the knee joint. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Peetermans, S.; Bopp, M.; Vontobel, P.; Lehmann, E. H.
Common neutron imaging uses the full polychromatic neutron beam spectrum to reveal the material distribution in a non-destructive way. Performing it with a reduced energy band, i.e. energy-selective neutron imaging, allows access to local variation in sample crystallographic properties. Two sample categories can be discerned with different energy responses. Polycrystalline materials have an energy-dependent cross-section featuring Bragg edges. Energy-selective neutron imaging can be used to distinguish be- tween crystallographic phases, increase material sensitivity or penetration, improve quantification etc. An example of the latter is shown by the examination of copper discs prior to machining them into linear accelerator cavity structures. The cross-section of single crystals features distinct Bragg peaks. Based on their pattern, one can determine the orientation of the crystal, as in a Laue pattern, but with the tremendous advantage that the operation can be performed for each pixel, yielding crystal orientation maps at high spatial resolution. A wholly different method to investigate such samples is also introduced: neutron diffraction imaging. It is based on projections formed by neutrons diffracted from the crystal lattice out of the direct beam. The position of these projections on the detector gives information on the crystal orientation. The projection itself can be used to reconstruct the crystal shape. A three-dimensional mapping of local Bragg reflectivity or a grain orientation mapping can thus be obtained.
A moving observer in a three-dimensional world
2016-01-01
For many tasks such as retrieving a previously viewed object, an observer must form a representation of the world at one location and use it at another. A world-based three-dimensional reconstruction of the scene built up from visual information would fulfil this requirement, something computer vision now achieves with great speed and accuracy. However, I argue that it is neither easy nor necessary for the brain to do this. I discuss biologically plausible alternatives, including the possibility of avoiding three-dimensional coordinate frames such as ego-centric and world-based representations. For example, the distance, slant and local shape of surfaces dictate the propensity of visual features to move in the image with respect to one another as the observer's perspective changes (through movement or binocular viewing). Such propensities can be stored without the need for three-dimensional reference frames. The problem of representing a stable scene in the face of continual head and eye movements is an appropriate starting place for understanding the goal of three-dimensional vision, more so, I argue, than the case of a static binocular observer. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269608
Newton, Peter O; Hahn, Gregory W; Fricka, Kevin B; Wenger, Dennis R
2002-04-15
A retrospective radiographic review of 31 patients with congenital spine abnormalities who underwent conventional radiography and advanced imaging studies was conducted. To analyze the utility of three-dimensional computed tomography with multiplanar reformatted images for congenital spine anomalies, as compared with plain radiographs and axial two-dimensional computed tomography imaging. Conventional radiographic imaging for congenital spine disorders often are difficult to interpret because of the patient's small size, the complexity of the disorder, a deformity not in the plane of the radiographs, superimposed structures, and difficulty in forming a mental three-dimensional image. Multiplanar reformatted and three-dimensional computed tomographic imaging offers many potential advantages for defining congenital spine anomalies including visualization of the deformity in any plane, from any angle, with the overlying structures subtracted. The imaging studies of patients who had undergone a three-dimensional computed tomography for congenital deformities of the spine between 1992 and 1998 were reviewed (31 cases). All plain radiographs and axial two-dimensional computed tomography images performed before the three-dimensional computed tomography were reviewed and the findings documented. This was repeated for the three-dimensional reconstructions and, when available, the multiplanar reformatted images (15 cases). In each case, the utility of the advanced imaging was graded as one of the following: Grade A (substantial new information obtained), Grade B (confirmatory with improved visualization and understanding of the deformity), and Grade C (no added useful information obtained). In 17 of 31 cases, the multiplanar reformatted and three-dimensional images allowed identification of unrecognized malformations. In nine additional cases, the advanced imaging was helpful in better visualizing and understanding previously identified deformities. In five cases, no new information was gained. The standard and curved multiplanar reformatted images were best for defining the occiput-C1-C2 anatomy and the extent of segmentation defects. The curved multiplanar reformatted images were especially helpful in keeping the spine from "coming in" and "going out" of the plane of the image when there was significant spine deformity in the sagittal or coronal plane. The three-dimensional reconstructions proved valuable in defining failures of formation. Advanced computed tomography imaging (three-dimensional computed tomography and curved/standard multiplanar reformatted images) allows better definition of congenital spine anomalies. More than 50% of the cases showed additional abnormalities not appreciated on plain radiographs or axial two-dimensional computed tomography images. Curved multiplanar reformatted images allowed imaging in the coronal and sagittal planes of the entire deformity.
A VLSI implementation for synthetic aperture radar image processing
NASA Technical Reports Server (NTRS)
Premkumar, A.; Purviance, J.
1990-01-01
A simple physical model for the Synthetic Aperture Radar (SAR) is presented. This model explains the one dimensional and two dimensional nature of the received SAR signal in the range and azimuth directions. A time domain correlator, its algorithm, and features are explained. The correlator is ideally suited for VLSI implementation. A real time SAR architecture using these correlators is proposed. In the proposed architecture, the received SAR data is processed using one dimensional correlators for determining the range while two dimensional correlators are used to determine the azimuth of a target. The architecture uses only three different types of custom VLSI chips and a small amount of memory.
Henry Feugeas, Marie Cécile; De Marco, Giovanni; Peretti, Ilana Idy; Godon-Hardy, Sylvie; Fredy, Daniel; Claeys, Elisabeth Schouman
2005-11-01
Our purpose was to investigate leukoaraïosis (LA) using three-dimensional MR imaging combined with advanced image-processing technology to attempt to group signal abnormalities according to their etiology. Coronal T2-weighted fast fluid-attenuated inversion-recovery (FLAIR) sequences and three-dimensional T1-weighted fast spoiled gradient recalled echo sequences were used to examine cerebral white matter changes in 75 elderly people with memory complaint but no dementia. They were otherwise healthy, community-dwelling subjects. Three subtypes of LA were defined on the basis of their shape, geography and extent: the so-called subependymal/subpial LA, perivascular LA and "bands" along long white matter tracts. Subependymal changes were directly contiguous with ventricular spaces. They showed features of "water hammer" lesions with ventricular systematisation and a more frequent location around the frontal horns than around the bodies (P=.0008). The use of cerebrospinal fluid (CSF) contiguity criterion allowed a classification of splenial changes in the subpial group. Conversely, posterior periventricular lesions in the centrum ovale as well as irregular and extensive periventricular lesions were not directly contiguous with CSF spaces. The so-called perivascular changes showed features of small-vessel-associated disease; they surrounded linear CSF-like signals that followed the direction of perforating vessels. Distribution of these perivascular changes appeared heterogeneous (P ranging from .04 to 5.10(-16)). These findings suggest that subependymal/subpial LA and subcortical LA may be separate manifestations of a single underlying pulse-wave encephalopathy.
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
Wagner, Martin G; Hatt, Charles R; Dunkerley, David A P; Bodart, Lindsay E; Raval, Amish N; Speidel, Michael A
2018-04-16
Transcatheter aortic valve replacement (TAVR) is a minimally invasive procedure in which a prosthetic heart valve is placed and expanded within a defective aortic valve. The device placement is commonly performed using two-dimensional (2D) fluoroscopic imaging. Within this work, we propose a novel technique to track the motion and deformation of the prosthetic valve in three dimensions based on biplane fluoroscopic image sequences. The tracking approach uses a parameterized point cloud model of the valve stent which can undergo rigid three-dimensional (3D) transformation and different modes of expansion. Rigid elements of the model are individually rotated and translated in three dimensions to approximate the motions of the stent. Tracking is performed using an iterative 2D-3D registration procedure which estimates the model parameters by minimizing the mean-squared image values at the positions of the forward-projected model points. Additionally, an initialization technique is proposed, which locates clusters of salient features to determine the initial position and orientation of the model. The proposed algorithms were evaluated based on simulations using a digital 4D CT phantom as well as experimentally acquired images of a prosthetic valve inside a chest phantom with anatomical background features. The target registration error was 0.12 ± 0.04 mm in the simulations and 0.64 ± 0.09 mm in the experimental data. The proposed algorithm could be used to generate 3D visualization of the prosthetic valve from two projections. In combination with soft-tissue sensitive-imaging techniques like transesophageal echocardiography, this technique could enable 3D image guidance during TAVR procedures. © 2018 American Association of Physicists in Medicine.
Constrained surface controllers for three-dimensional image data reformatting.
Graves, Martin J; Black, Richard T; Lomas, David J
2009-07-01
This study did not require ethical approval in the United Kingdom. The aim of this work was to create two controllers for navigating a two-dimensional image plane through a volumetric data set, providing two important features of the ultrasonographic paradigm: orientation matching of the navigation device and the desired image plane in the three-dimensional (3D) data and a constraining surface to provide a nonvisual reference for the image plane location in the 3D data. The first constrained surface controller (CSC) uses a planar constraining surface, while the second CSC uses a hemispheric constraining surface. Ten radiologists were asked to obtain specific image reformations by using both controllers and a commercially available medical imaging workstation. The time taken to perform each reformatting task was recorded. The users were also asked structured questions comparing the utility of both methods. There was a significant reduction in the time taken to perform the specified reformatting tasks by using the simpler planar controller as compared with a standard workstation, whereas there was no significant difference for the more complex hemispheric controller. The majority of users reported that both controllers allowed them to concentrate entirely on the reformatting task and the related image rather than being distracted by the need for interaction with the workstation interface. In conclusion, the CSCs provide an intuitive paradigm for interactive reformatting of volumetric data. (c) RSNA, 2009.
Dai, Qiong; Cheng, Jun-Hu; Sun, Da-Wen; Zeng, Xin-An
2015-01-01
There is an increased interest in the applications of hyperspectral imaging (HSI) for assessing food quality, safety, and authenticity. HSI provides abundance of spatial and spectral information from foods by combining both spectroscopy and imaging, resulting in hundreds of contiguous wavebands for each spatial position of food samples, also known as the curse of dimensionality. It is desirable to employ feature selection algorithms for decreasing computation burden and increasing predicting accuracy, which are especially relevant in the development of online applications. Recently, a variety of feature selection algorithms have been proposed that can be categorized into three groups based on the searching strategy namely complete search, heuristic search and random search. This review mainly introduced the fundamental of each algorithm, illustrated its applications in hyperspectral data analysis in the food field, and discussed the advantages and disadvantages of these algorithms. It is hoped that this review should provide a guideline for feature selections and data processing in the future development of hyperspectral imaging technique in foods.
NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Patients who die from stomach cancer are still present. Early diagnosis is crucial in reducing the mortality rate of cancer patients. Therefore, computer aided methods have been developed for early detection in this article. Stomach cancer images were obtained from Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. At the same time, Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, Classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps methods are used for dimensional the reduction of the features. The high dimension of these features has been reduced to lower dimensions using dimensional reduction methods. Artificial neural networks (ANN) and Random Forest (RF) classifiers were used to classify stomach cancer images with these new lower feature sizes. New medical systems have developed to measure the effects of these dimensions by obtaining features in different dimensional with dimensional reduction methods. When all the methods developed are compared, it has been found that the best accuracy results are obtained with LBP_MDS_ANN and LBP_LLE_ANN methods.
Long-term imaging of mouse embryos using adaptive harmonic generation microscopy
NASA Astrophysics Data System (ADS)
Thayil, Anisha; Watanabe, Tomoko; Jesacher, Alexander; Wilson, Tony; Srinivas, Shankar; Booth, Martin
2011-04-01
We present a detailed description of an adaptive harmonic generation (HG) microscope and culture techniques that permit long-term, three-dimensional imaging of mouse embryos. HG signal from both pre- and postimplantation stage (0.5-5.5 day-old) mouse embryos are fully characterized. The second HG images reveal central spindles during cytokinesis whereas third HG images show several features, such as lipid droplets, nucleoli, and plasma membranes. The embryos are found to develop normally during one-day-long discontinuous HG imaging, permitting the observation of several dynamic events, such as morula compaction and blastocyst formation.
Relative location prediction in CT scan images using convolutional neural networks.
Guo, Jiajia; Du, Hongwei; Zhu, Jianyue; Yan, Ting; Qiu, Bensheng
2018-07-01
Relative location prediction in computed tomography (CT) scan images is a challenging problem. Many traditional machine learning methods have been applied in attempts to alleviate this problem. However, the accuracy and speed of these methods cannot meet the requirement of medical scenario. In this paper, we propose a regression model based on one-dimensional convolutional neural networks (CNN) to determine the relative location of a CT scan image both quickly and precisely. In contrast to other common CNN models that use a two-dimensional image as an input, the input of this CNN model is a feature vector extracted by a shape context algorithm with spatial correlation. Normalization via z-score is first applied as a pre-processing step. Then, in order to prevent overfitting and improve model's performance, 20% of the elements of the feature vectors are randomly set to zero. This CNN model consists primarily of three one-dimensional convolutional layers, three dropout layers and two fully-connected layers with appropriate loss functions. A public dataset is employed to validate the performance of the proposed model using a 5-fold cross validation. Experimental results demonstrate an excellent performance of the proposed model when compared with contemporary techniques, achieving a median absolute error of 1.04 cm and mean absolute error of 1.69 cm. The time taken for each relative location prediction is approximately 2 ms. Results indicate that the proposed CNN method can contribute to a quick and accurate relative location prediction in CT scan images, which can improve efficiency of the medical picture archiving and communication system in the future. Copyright © 2018 Elsevier B.V. All rights reserved.
Bühnemann, Claudia; Li, Simon; Yu, Haiyue; Branford White, Harriet; Schäfer, Karl L; Llombart-Bosch, Antonio; Machado, Isidro; Picci, Piero; Hogendoorn, Pancras C W; Athanasou, Nicholas A; Noble, J Alison; Hassan, A Bassim
2014-01-01
Driven by genomic somatic variation, tumour tissues are typically heterogeneous, yet unbiased quantitative methods are rarely used to analyse heterogeneity at the protein level. Motivated by this problem, we developed automated image segmentation of images of multiple biomarkers in Ewing sarcoma to generate distributions of biomarkers between and within tumour cells. We further integrate high dimensional data with patient clinical outcomes utilising random survival forest (RSF) machine learning. Using material from cohorts of genetically diagnosed Ewing sarcoma with EWSR1 chromosomal translocations, confocal images of tissue microarrays were segmented with level sets and watershed algorithms. Each cell nucleus and cytoplasm were identified in relation to DAPI and CD99, respectively, and protein biomarkers (e.g. Ki67, pS6, Foxo3a, EGR1, MAPK) localised relative to nuclear and cytoplasmic regions of each cell in order to generate image feature distributions. The image distribution features were analysed with RSF in relation to known overall patient survival from three separate cohorts (185 informative cases). Variation in pre-analytical processing resulted in elimination of a high number of non-informative images that had poor DAPI localisation or biomarker preservation (67 cases, 36%). The distribution of image features for biomarkers in the remaining high quality material (118 cases, 104 features per case) were analysed by RSF with feature selection, and performance assessed using internal cross-validation, rather than a separate validation cohort. A prognostic classifier for Ewing sarcoma with low cross-validation error rates (0.36) was comprised of multiple features, including the Ki67 proliferative marker and a sub-population of cells with low cytoplasmic/nuclear ratio of CD99. Through elimination of bias, the evaluation of high-dimensionality biomarker distribution within cell populations of a tumour using random forest analysis in quality controlled tumour material could be achieved. Such an automated and integrated methodology has potential application in the identification of prognostic classifiers based on tumour cell heterogeneity.
Case study of 3D fingerprints applications
Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui
2017-01-01
Human fingers are 3D objects. More information will be provided if three dimensional (3D) fingerprints are available compared with two dimensional (2D) fingerprints. Thus, this paper firstly collected 3D finger point cloud data by Structured-light Illumination method. Additional features from 3D fingerprint images are then studied and extracted. The applications of these features are finally discussed. A series of experiments are conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of 3D finger shape feature even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with Equal Error Rate (EER) of ~8.3%. Also, it is helpful to remove false core point. Furthermore, a promising of EER ~1.3% is realized by combining this feature with 2D features for fingerprint recognition which indicates the prospect of 3D fingerprint recognition. PMID:28399141
Case study of 3D fingerprints applications.
Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui
2017-01-01
Human fingers are 3D objects. More information will be provided if three dimensional (3D) fingerprints are available compared with two dimensional (2D) fingerprints. Thus, this paper firstly collected 3D finger point cloud data by Structured-light Illumination method. Additional features from 3D fingerprint images are then studied and extracted. The applications of these features are finally discussed. A series of experiments are conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of 3D finger shape feature even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with Equal Error Rate (EER) of ~8.3%. Also, it is helpful to remove false core point. Furthermore, a promising of EER ~1.3% is realized by combining this feature with 2D features for fingerprint recognition which indicates the prospect of 3D fingerprint recognition.
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract the palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang's method to segment only the region of interests. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used.
Color image definition evaluation method based on deep learning method
NASA Astrophysics Data System (ADS)
Liu, Di; Li, YingChun
2018-01-01
In order to evaluate different blurring levels of color image and improve the method of image definition evaluation, this paper proposed a method based on the depth learning framework and BP neural network classification model, and presents a non-reference color image clarity evaluation method. Firstly, using VGG16 net as the feature extractor to extract 4,096 dimensions features of the images, then the extracted features and labeled images are employed in BP neural network to train. And finally achieve the color image definition evaluation. The method in this paper are experimented by using images from the CSIQ database. The images are blurred at different levels. There are 4,000 images after the processing. Dividing the 4,000 images into three categories, each category represents a blur level. 300 out of 400 high-dimensional features are trained in VGG16 net and BP neural network, and the rest of 100 samples are tested. The experimental results show that the method can take full advantage of the learning and characterization capability of deep learning. Referring to the current shortcomings of the major existing image clarity evaluation methods, which manually design and extract features. The method in this paper can extract the images features automatically, and has got excellent image quality classification accuracy for the test data set. The accuracy rate is 96%. Moreover, the predicted quality levels of original color images are similar to the perception of the human visual system.
NASA Astrophysics Data System (ADS)
Chitchian, Shahab; Weldon, Thomas P.; Fried, Nathaniel M.
2009-07-01
The cavernous nerves course along the surface of the prostate and are responsible for erectile function. Improvements in identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery may improve nerve preservation and postoperative sexual potency. Two-dimensional (2-D) optical coherence tomography (OCT) images of the rat prostate were segmented to differentiate the cavernous nerves from the prostate gland. To detect these nerves, three image features were employed: Gabor filter, Daubechies wavelet, and Laws filter. The Gabor feature was applied with different standard deviations in the x and y directions. In the Daubechies wavelet feature, an 8-tap Daubechies orthonormal wavelet was implemented, and the low-pass sub-band was chosen as the filtered image. Last, Laws feature extraction was applied to the images. The features were segmented using a nearest-neighbor classifier. N-ary morphological postprocessing was used to remove small voids. The cavernous nerves were differentiated from the prostate gland with a segmentation error rate of only 0.058+/-0.019. This algorithm may be useful for implementation in clinical endoscopic OCT systems currently being studied for potential intraoperative diagnostic use in laparoscopic and robotic nerve-sparing prostate cancer surgery.
Chitchian, Shahab; Weldon, Thomas P; Fried, Nathaniel M
2009-01-01
The cavernous nerves course along the surface of the prostate and are responsible for erectile function. Improvements in identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery may improve nerve preservation and postoperative sexual potency. Two-dimensional (2-D) optical coherence tomography (OCT) images of the rat prostate were segmented to differentiate the cavernous nerves from the prostate gland. To detect these nerves, three image features were employed: Gabor filter, Daubechies wavelet, and Laws filter. The Gabor feature was applied with different standard deviations in the x and y directions. In the Daubechies wavelet feature, an 8-tap Daubechies orthonormal wavelet was implemented, and the low-pass sub-band was chosen as the filtered image. Last, Laws feature extraction was applied to the images. The features were segmented using a nearest-neighbor classifier. N-ary morphological postprocessing was used to remove small voids. The cavernous nerves were differentiated from the prostate gland with a segmentation error rate of only 0.058+/-0.019. This algorithm may be useful for implementation in clinical endoscopic OCT systems currently being studied for potential intraoperative diagnostic use in laparoscopic and robotic nerve-sparing prostate cancer surgery.
Munroe, Jeffrey S.; Doolittle, James A.; Kanevskiy, Mikhail; Hinkel, Kenneth M.; Nelson, Frederick E.; Jones, Benjamin M.; Shur, Yuri; Kimble, John M.
2007-01-01
Three-dimensional ground-penetrating radar (3D GPR) was used to investigate the subsurface structure of ice-wedge polygons and other features of the frozen active layer and near-surface permafrost near Barrow, Alaska. Surveys were conducted at three sites located on landscapes of different geomorphic age. At each site, sediment cores were collected and characterised to aid interpretation of GPR data. At two sites, 3D GPR was able to delineate subsurface ice-wedge networks with high fidelity. Three-dimensional GPR data also revealed a fundamental difference in ice-wedge morphology between these two sites that is consistent with differences in landscape age. At a third site, the combination of two-dimensional and 3D GPR revealed the location of an active frost boil with ataxitic cryostructure. When supplemented by analysis of soil cores, 3D GPR offers considerable potential for imaging, interpreting and 3D mapping of near-surface soil and ice structures in permafrost environments.
Online signature recognition using principal component analysis and artificial neural network
NASA Astrophysics Data System (ADS)
Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan
2016-12-01
In this paper, we propose an algorithm for on-line signature recognition using fingertip point in the air from the depth image acquired by Kinect. We extract 10 statistical features from X, Y, Z axis, which are invariant to changes in shifting and scaling of the signature trajectories in three-dimensional space. Artificial neural network is adopted to solve the complex signature classification problem. 30 dimensional features are converted into 10 principal components using principal component analysis, which is 99.02% of total variances. We implement the proposed algorithm and test to actual on-line signatures. In experiment, we verify the proposed method is successful to classify 15 different on-line signatures. Experimental result shows 98.47% of recognition rate when using only 10 feature vectors.
Chen, Hsin-Yu; Ng, Li-Shia; Chang, Chun-Shin; Lu, Ting-Chen; Chen, Ning-Hung; Chen, Zung-Chung
2017-06-01
Advances in three-dimensional imaging and three-dimensional printing technology have expanded the frontier of presurgical design for microtia reconstruction from two-dimensional curved lines to three-dimensional perspectives. This study presents an algorithm for combining three-dimensional surface imaging, computer-assisted design, and three-dimensional printing to create patient-specific auricular frameworks in unilateral microtia reconstruction. Between January of 2015 and January of 2016, six patients with unilateral microtia were enrolled. The average age of the patients was 7.6 years. A three-dimensional image of the patient's head was captured by 3dMDcranial, and virtual sculpture carried out using Geomagic Freeform software and a Touch X Haptic device for fabrication of the auricular template. Each template was tailored according to the patient's unique auricular morphology. The final construct was mirrored onto the defective side and printed out with biocompatible acrylic material. During the surgery, the prefabricated customized template served as a three-dimensional guide for surgical simulation and sculpture of the MEDPOR framework. Average follow-up was 10.3 months. Symmetric and good aesthetic results with regard to auricular shape, projection, and orientation were obtained. One case with severe implant exposure was salvaged with free temporoparietal fascia transfer and skin grafting. The combination of three-dimensional imaging and manufacturing technology with the malleability of MEDPOR has surpassed existing limitations resulting from the use of autologous materials and the ambiguity of two-dimensional planning. This approach allows surgeons to customize the auricular framework in a highly precise and sophisticated manner, taking a big step closer to the goal of mirror-image reconstruction for unilateral microtia patients. Therapeutic, IV.
Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection
2008-05-01
with the sensed image. The two- dimensional correlation coefficient r for two matrices A and B both of size M ×N is given by r = ∑ m ∑ n (Amn...correlation based method by matching features in a high- dimensional feature- space . The current implementation of the SIFT algorithm uses a brute-force...by repeatedly convolving the image with a Guassian kernel. Each plane of the scale
Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for the each classifier’s AUC performance. Results: In the large U.S. data set, sample high performance results include, AUC0.632+=0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+=0.87 with interval [0.817;0.906] for four LSW selected features compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+=0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497
The contribution of synchrotron X-ray computed microtomography to understanding volcanic processes.
Polacci, Margherita; Mancini, Lucia; Baker, Don R
2010-03-01
A series of computed microtomography experiments are reported which were performed by using a third-generation synchrotron radiation source on volcanic rocks from various active hazardous volcanoes in Italy and other volcanic areas in the world. The applied technique allowed the internal structure of the investigated material to be accurately imaged at the micrometre scale and three-dimensional views of the investigated samples to be produced as well as three-dimensional quantitative measurements of textural features. The geometry of the vesicle (gas-filled void) network in volcanic products of both basaltic and trachytic compositions were particularly focused on, as vesicle textures are directly linked to the dynamics of volcano degassing. This investigation provided novel insights into modes of gas exsolution, transport and loss in magmas that were not recognized in previous studies using solely conventional two-dimensional imaging techniques. The results of this study are important to understanding the behaviour of volcanoes and can be combined with other geosciences disciplines to forecast their future activity.
High Content Imaging (HCI) on Miniaturized Three-Dimensional (3D) Cell Cultures
Joshi, Pranav; Lee, Moo-Yeal
2015-01-01
High content imaging (HCI) is a multiplexed cell staining assay developed for better understanding of complex biological functions and mechanisms of drug action, and it has become an important tool for toxicity and efficacy screening of drug candidates. Conventional HCI assays have been carried out on two-dimensional (2D) cell monolayer cultures, which in turn limit predictability of drug toxicity/efficacy in vivo; thus, there has been an urgent need to perform HCI assays on three-dimensional (3D) cell cultures. Although 3D cell cultures better mimic in vivo microenvironments of human tissues and provide an in-depth understanding of the morphological and functional features of tissues, they are also limited by having relatively low throughput and thus are not amenable to high-throughput screening (HTS). One attempt of making 3D cell culture amenable for HTS is to utilize miniaturized cell culture platforms. This review aims to highlight miniaturized 3D cell culture platforms compatible with current HCI technology. PMID:26694477
A general prediction model for the detection of ADHD and Autism using structural and functional MRI.
Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G
2018-01-01
This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features are input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known, over all hold-out accuracies on these datasets when only using imaging data-exceeding previously-published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.
Three-dimensional echocardiography of congenital abnormalities of the left atrioventricular valve.
Rice, Kathryn; Simpson, John
2015-03-01
Congenital abnormalities of the left atrioventricular (AV) valve are a significant diagnostic challenge. Traditionally, reliance has been placed on two-dimensional echocardiographic (2DE) imaging to guide recognition of the specific morphological features. Real-time 3DE can provide unique views of the left AV valve with the potential to improve understanding of valve morphology and function to facilitate surgical planning. This review illustrates the features of congenital abnormalities of the left AV valve assessed by 3DE. The similarities and differences in morphology between different lesions are described, both with respect to the valve itself and supporting chordal apparatus. The potential advantages as well as limitations of this technique in clinical practice are outlined.
3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading
Cho, Nam-Hoon; Choi, Heung-Kook
2014-01-01
One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701
A flexible new method for 3D measurement based on multi-view image sequences
NASA Astrophysics Data System (ADS)
Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu
2016-11-01
Three-dimensional measurement is the base part for reverse engineering. The paper developed a new flexible and fast optical measurement method based on multi-view geometry theory. At first, feature points are detected and matched with improved SIFT algorithm. The Hellinger Kernel is used to estimate the histogram distance instead of traditional Euclidean distance, which is immunity to the weak texture image; then a new filter three-principle for filtering the calculation of essential matrix is designed, the essential matrix is calculated using the improved a Contrario Ransac filter method. One view point cloud is constructed accurately with two view images; after this, the overlapped features are used to eliminate the accumulated errors caused by added view images, which improved the camera's position precision. At last, the method is verified with the application of dental restoration CAD/CAM, experiment results show that the proposed method is fast, accurate and flexible for tooth 3D measurement.
Advancing three-dimensional MEMS by complimentary laser micro manufacturing
NASA Astrophysics Data System (ADS)
Palmer, Jeremy A.; Williams, John D.; Lemp, Tom; Lehecka, Tom M.; Medina, Francisco; Wicker, Ryan B.
2006-01-01
This paper describes improvements that enable engineers to create three-dimensional MEMS in a variety of materials. It also provides a means for selectively adding three-dimensional, high aspect ratio features to pre-existing PMMA micro molds for subsequent LIGA processing. This complimentary method involves in situ construction of three-dimensional micro molds in a stand-alone configuration or directly adjacent to features formed by x-ray lithography. Three-dimensional micro molds are created by micro stereolithography (MSL), an additive rapid prototyping technology. Alternatively, three-dimensional features may be added by direct femtosecond laser micro machining. Parameters for optimal femtosecond laser micro machining of PMMA at 800 nanometers are presented. The technical discussion also includes strategies for enhancements in the context of material selection and post-process surface finish. This approach may lead to practical, cost-effective 3-D MEMS with the surface finish and throughput advantages of x-ray lithography. Accurate three-dimensional metal microstructures are demonstrated. Challenges remain in process planning for micro stereolithography and development of buried features following femtosecond laser micro machining.
Three-dimension reconstruction based on spatial light modulator
NASA Astrophysics Data System (ADS)
Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu
2011-02-01
Three-dimension reconstruction, known as an important research direction of computer graphics, is widely used in the related field such as industrial design and manufacture, construction, aerospace, biology and so on. Via such technology we can obtain three-dimension digital point cloud from a two-dimension image, and then simulate the three-dimensional structure of the physical object for further study. At present, the obtaining of three-dimension digital point cloud data is mainly based on the adaptive optics system with Shack-Hartmann sensor and phase-shifting digital holography. Referring to surface fitting, there are also many available methods such as iterated discrete fourier transform, convolution and image interpolation, linear phase retrieval. The main problems we came across in three-dimension reconstruction are the extraction of feature points and arithmetic of curve fitting. To solve such problems, we can, first of all, calculate the relevant surface normal vector information of each pixel in the light source coordinate system, then these vectors are to be converted to the coordinates of image through the coordinate conversion, so the expectant 3D point cloud get arise. Secondly, after the following procedures of de-noising, repairing, the feature points can later be selected and fitted to get the fitting function of the surface topography by means of Zernike polynomial, so as to reconstruct the determinand's three-dimensional topography. In this paper, a new kind of three-dimension reconstruction algorithm is proposed, with the assistance of which, the topography can be estimated from its grayscale at different sample points. Moreover, the previous stimulation and the experimental results prove that the new algorithm has a strong capability to fit, especially for large-scale objects .
Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas
2015-06-30
We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and the average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations for distinguishing 49 first episode schizophrenia male patients from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we were able to obtain slightly improved results when compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; mMLDA with classification boundary calculated as weighted mean discriminative scores of the groups had improved sensitivity but similar accuracy compared to the original MLDA; reducing a number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as the signal important for classification were removed. Our findings provide important information for schizophrenia research and may improve accuracy of computer-aided diagnostics of neuropsychiatric diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A fast image matching algorithm based on key points
NASA Astrophysics Data System (ADS)
Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng
2014-05-01
Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks includes: (1) Developing an improved celerity key point detection approach using self-adapting threshold of Features from Accelerated Segment Test (FAST). A method of calculating self-adapting threshold was introduced for images with different contrast. Hessian matrix was adopted to eliminate insecure edge points in order to obtain key points with higher stability. This approach in detecting key points has characteristics of small amount of computation, high positioning accuracy and strong anti-noise ability; (2) PCA-SIFT is utilized to describe key point. 128 dimensional vector are formed based on the SIFT method for the key points extracted. A low dimensional feature space was established by eigenvectors of all the key points, and each eigenvector was projected onto the feature space to form a low dimensional eigenvector. These key points were re-described by dimension-reduced eigenvectors. After reducing the dimension by the PCA, the descriptor was reduced to 20 dimensions from the original 128. This method can reduce dimensions of searching approximately near neighbors thereby increasing overall speed; (3) Distance ratio between the nearest neighbour and second nearest neighbour searching is regarded as the measurement criterion for initial matching points from which the original point pairs matched are obtained. Based on the analysis of the common methods (e.g. RANSAC (random sample consensus) and Hough transform cluster) used for elimination false matching point pairs, a heuristic local geometric restriction strategy is adopted to discard false matched point pairs further; and (4) Affine transformation model is introduced to correct coordinate difference between real-time image and reference image. This resulted in the matching of the two images. SPOT5 Remote sensing images captured at different date and airborne images captured with different flight attitude were used to test the performance of the method from matching accuracy, operation time and ability to overcome rotation. Results show the effectiveness of the approach.
Multiview alignment hashing for efficient image search.
Liu, Li; Yu, Mengyang; Shao, Ling
2015-03-01
Hashing is a popular and efficient method for nearest neighbor search in large-scale data spaces by embedding high-dimensional feature descriptors into a similarity preserving Hamming space with a low dimension. For most hashing methods, the performance of retrieval heavily depends on the choice of the high-dimensional feature descriptor. Furthermore, a single type of feature cannot be descriptive enough for different images when it is used for hashing. Thus, how to combine multiple representations for learning effective hashing functions is an imminent task. In this paper, we present a novel unsupervised multiview alignment hashing approach based on regularized kernel nonnegative matrix factorization, which can find a compact representation uncovering the hidden semantics and simultaneously respecting the joint probability distribution of data. In particular, we aim to seek a matrix factorization to effectively fuse the multiple information sources meanwhile discarding the feature redundancy. Since the raised problem is regarded as nonconvex and discrete, our objective function is then optimized via an alternate way with relaxation and converges to a locally optimal solution. After finding the low-dimensional representation, the hashing functions are finally obtained through multivariable logistic regression. The proposed method is systematically evaluated on three data sets: 1) Caltech-256; 2) CIFAR-10; and 3) CIFAR-20, and the results show that our method significantly outperforms the state-of-the-art multiview hashing techniques.
Use of shape-from-shading to characterize mucosal topography in celiac disease videocapsule images
Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H
2017-01-01
AIM To use a computerized shape-from-shading technique to characterize the topography of the small intestinal mucosa. METHODS Videoclips comprised of 100-200 images each were obtained from the distal duodenum in 8 celiac and 8 control patients. Images with high texture were selected from each videoclip and projected from two to three dimensions by using grayscale pixel brightness as the Z-axis spatial variable. The resulting images for celiac patients were then ordered using the Marsh score to estimate the degree of villous atrophy, and compared with control data. RESULTS Topographic changes in celiac patient three-dimensional constructs were often more variable as compared to controls. The mean absolute derivative in elevation was 2.34 ± 0.35 brightness units for celiacs vs 1.95 ± 0.28 for controls (P = 0.014). The standard deviation of the derivative in elevation was 4.87 ± 0.35 brightness units for celiacs vs 4.47 ± 0.36 for controls (P = 0.023). Celiac patients with Marsh IIIC villous atrophy tended to have the largest topographic changes. Plotted in two dimensions, celiac data could be separated from controls with 80% sensitivity and specificity. CONCLUSION Use of shape-from-shading to construct three-dimensional projections approximating the actual spatial geometry of the small intestinal substrate is useful to observe features not readily apparent in two-dimensional videocapsule images. This method represents a potentially helpful adjunct to detect areas of pathology during videocapsule analysis. PMID:28744343
NASA Astrophysics Data System (ADS)
Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang
2012-01-01
The features of the spatial information of Markov random field image was used in image segmentation. It can effectively remove the noise, and get a more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, we find clustering center of the medical image different organizations and background through Fuzzy cmeans clustering method. Then we find each threshold point of multi-threshold segmentation through two dimensional histogram method, and segment it. The features of fusing multivariate information based on the Dempster-Shafer evidence theory, getting image fusion and segmentation. This paper will adopt the above three theories to propose a new human brain image segmentation method. Experimental result shows that the segmentation result is more in line with human vision, and is of vital significance to accurate analysis and application of tissues.
NASA Technical Reports Server (NTRS)
Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.; Gilbert, Percy
1989-01-01
Computer-controlled thermal-wave microscope developed to investigate III-V compound semiconductor devices and materials. Is nondestructive technique providing information on subsurface thermal features of solid samples. Furthermore, because this is subsurface technique, three-dimensional imaging also possible. Microscope uses intensity-modulated electron beam of modified scanning electron microscope to generate thermal waves in sample. Acoustic waves generated by thermal waves received by transducer and processed in computer to form images displayed on video display of microscope or recorded on magnetic disk.
NASA Astrophysics Data System (ADS)
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, a quite detailed damage map based on building scale has been produced, and various studies have also been conducted in the literature. As the spatial resolution of satellite images increases, distinguishability of damage patterns becomes more cruel especially in case of using only the spectral information during classification. In order to overcome this difficulty, textural information needs to be involved to the classification to improve the visual quality and reliability of damage map. There are many kinds of textural information which can be derived from VHR satellite images depending on the algorithm used. However, extraction of textural information and evaluation of them have been generally a time consuming process especially for the large areas affected from the earthquake due to the size of VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns needs to be known in advance as well as the redundant features. In this study, a very high resolution satellite image after Iran, Bam earthquake was used to identify the earthquake damage. Not only the spectral information, textural information was also used during the classification. For textural information, second order Haralick features were extracted from the panchromatic image for the area of interest using gray level co-occurrence matrix with different size of windows and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristic were selected with a novel feature selection method based on high dimensional model representation (HDMR) giving sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input-output relationships in high-dimensional systems for many problems in science and engineering. The HDMR method is developed to improve the efficiency of the deducing high dimensional behaviors. The method is formed by a particular organization of low dimensional component functions, in which each function is the contribution of one or more input variables to the output variables.
D Tracking Based Augmented Reality for Cultural Heritage Data Management
NASA Astrophysics Data System (ADS)
Battini, C.; Landi, G.
2015-02-01
The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations; augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for various feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks, allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving and complex three-dimensional models can be interactively visualised and explored on applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.
Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam
2017-10-01
A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.
Lew, Matthew D.; Thompson, Michael A.; Badieirostami, Majid; Moerner, W. E.
2010-01-01
The point spread function (PSF) of a widefield fluorescence microscope is not suitable for three-dimensional super-resolution imaging. We characterize the localization precision of a unique method for 3D superresolution imaging featuring a double-helix point spread function (DH-PSF). The DH-PSF is designed to have two lobes that rotate about their midpoint in any transverse plane as a function of the axial position of the emitter. In effect, the PSF appears as a double helix in three dimensions. By comparing the Cramer-Rao bound of the DH-PSF with the standard PSF as a function of the axial position, we show that the DH-PSF has a higher and more uniform localization precision than the standard PSF throughout a 2 μm depth of field. Comparisons between the DH-PSF and other methods for 3D super-resolution are briefly discussed. We also illustrate the applicability of the DH-PSF for imaging weak emitters in biological systems by tracking the movement of quantum dots in glycerol and in live cells. PMID:20563317
MO-AB-BRA-10: Cancer Therapy Outcome Prediction Based On Dempster-Shafer Theory and PET Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, C; University of Rouen, QuantIF - EA 4108 LITIS, 76000 Rouen; Li, H
2015-06-15
Purpose: In cancer therapy, utilizing FDG-18 PET image-based features for accurate outcome prediction is challenging because of 1) limited discriminative information within a small number of PET image sets, and 2) fluctuant feature characteristics caused by the inferior spatial resolution and system noise of PET imaging. In this study, we proposed a new Dempster-Shafer theory (DST) based approach, evidential low-dimensional transformation with feature selection (ELT-FS), to accurately predict cancer therapy outcome with both PET imaging features and clinical characteristics. Methods: First, a specific loss function with sparse penalty was developed to learn an adaptive low-rank distance metric for representing themore » dissimilarity between different patients’ feature vectors. By minimizing this loss function, a linear low-dimensional transformation of input features was achieved. Also, imprecise features were excluded simultaneously by applying a l2,1-norm regularization of the learnt dissimilarity metric in the loss function. Finally, the learnt dissimilarity metric was applied in an evidential K-nearest-neighbor (EK- NN) classifier to predict treatment outcome. Results: Twenty-five patients with stage II–III non-small-cell lung cancer and thirty-six patients with esophageal squamous cell carcinomas treated with chemo-radiotherapy were collected. For the two groups of patients, 52 and 29 features, respectively, were utilized. The leave-one-out cross-validation (LOOCV) protocol was used for evaluation. Compared to three existing linear transformation methods (PCA, LDA, NCA), the proposed ELT-FS leads to higher prediction accuracy for the training and testing sets both for lung-cancer patients (100+/−0.0, 88.0+/−33.17) and for esophageal-cancer patients (97.46+/−1.64, 83.33+/−37.8). The ELT-FS also provides superior class separation in both test data sets. Conclusion: A novel DST- based approach has been proposed to predict cancer treatment outcome using PET image features and clinical characteristics. A specific loss function has been designed for robust accommodation of feature set incertitude and imprecision, facilitating adaptive learning of the dissimilarity metric for the EK-NN classifier.« less
Three-dimensional imaging technology offers promise in medicine.
Karako, Kenji; Wu, Qiong; Gao, Jianjun
2014-04-01
Medical imaging plays an increasingly important role in the diagnosis and treatment of disease. Currently, medical equipment mainly has two-dimensional (2D) imaging systems. Although this conventional imaging largely satisfies clinical requirements, it cannot depict pathologic changes in 3 dimensions. The development of three-dimensional (3D) imaging technology has encouraged advances in medical imaging. Three-dimensional imaging technology offers doctors much more information on a pathology than 2D imaging, thus significantly improving diagnostic capability and the quality of treatment. Moreover, the combination of 3D imaging with augmented reality significantly improves surgical navigation process. The advantages of 3D imaging technology have made it an important component of technological progress in the field of medical imaging.
NASA Technical Reports Server (NTRS)
Alvertos, Nicolas; Dcunha, Ivan
1993-01-01
The problem of recognizing and positioning of objects in three-dimensional space is important for robotics and navigation applications. In recent years, digital range data, also referred to as range images or depth maps, have been available for the analysis of three-dimensional objects owing to the development of several active range finding techniques. The distinct advantage of range images is the explicitness of the surface information available. Many industrial and navigational robotics tasks will be more easily accomplished if such explicit information can be efficiently interpreted. In this research, a new technique based on analytic geometry for the recognition and description of three-dimensional quadric surfaces from range images is presented. Beginning with the explicit representation of quadrics, a set of ten coefficients are determined for various three-dimensional surfaces. For each quadric surface, a unique set of two-dimensional curves which serve as a feature set is obtained from the various angles at which the object is intersected with a plane. Based on a discriminant method, each of the curves is classified as a parabola, circle, ellipse, hyperbola, or a line. Each quadric surface is shown to be uniquely characterized by a set of these two-dimensional curves, thus allowing discrimination from the others. Before the recognition process can be implemented, the range data have to undergo a set of pre-processing operations, thereby making it more presentable to classification algorithms. One such pre-processing step is to study the effect of median filtering on raw range images. Utilizing a variety of surface curvature techniques, reliable sets of image data that approximate the shape of a quadric surface are determined. Since the initial orientation of the surfaces is unknown, a new technique is developed wherein all the rotation parameters are determined and subsequently eliminated. This approach enables us to position the quadric surfaces in a desired coordinate system. Experiments were conducted on raw range images of spheres, cylinders, and cones. Experiments were also performed on simulated data for surfaces such as hyperboloids of one and two sheets, elliptical and hyperbolic paraboloids, elliptical and hyperbolic cylinders, ellipsoids and the quadric cones. Both the real and simulated data yielded excellent results. Our approach is found to be more accurate and computationally inexpensive as compared to traditional approaches, such as the three-dimensional discriminant approach which involves evaluation of the rank of a matrix. Finally, we have proposed one other new approach, which involves the formulation of a mapping between the explicit and implicit forms of representing quadric surfaces. This approach, when fully realized, will yield a three-dimensional discriminant, which will recognize quadric surfaces based upon their component surfaces patches. This approach is faster than prior approaches and at the same time is invariant to pose and orientation of the surfaces in three-dimensional space.
NASA Astrophysics Data System (ADS)
Alvertos, Nicolas; Dcunha, Ivan
1993-03-01
The problem of recognizing and positioning of objects in three-dimensional space is important for robotics and navigation applications. In recent years, digital range data, also referred to as range images or depth maps, have been available for the analysis of three-dimensional objects owing to the development of several active range finding techniques. The distinct advantage of range images is the explicitness of the surface information available. Many industrial and navigational robotics tasks will be more easily accomplished if such explicit information can be efficiently interpreted. In this research, a new technique based on analytic geometry for the recognition and description of three-dimensional quadric surfaces from range images is presented. Beginning with the explicit representation of quadrics, a set of ten coefficients are determined for various three-dimensional surfaces. For each quadric surface, a unique set of two-dimensional curves which serve as a feature set is obtained from the various angles at which the object is intersected with a plane. Based on a discriminant method, each of the curves is classified as a parabola, circle, ellipse, hyperbola, or a line. Each quadric surface is shown to be uniquely characterized by a set of these two-dimensional curves, thus allowing discrimination from the others. Before the recognition process can be implemented, the range data have to undergo a set of pre-processing operations, thereby making it more presentable to classification algorithms. One such pre-processing step is to study the effect of median filtering on raw range images. Utilizing a variety of surface curvature techniques, reliable sets of image data that approximate the shape of a quadric surface are determined. Since the initial orientation of the surfaces is unknown, a new technique is developed wherein all the rotation parameters are determined and subsequently eliminated. This approach enables us to position the quadric surfaces in a desired coordinate system. Experiments were conducted on raw range images of spheres, cylinders, and cones. Experiments were also performed on simulated data for surfaces such as hyperboloids of one and two sheets, elliptical and hyperbolic paraboloids, elliptical and hyperbolic cylinders, ellipsoids and the quadric cones. Both the real and simulated data yielded excellent results. Our approach is found to be more accurate and computationally inexpensive as compared to traditional approaches, such as the three-dimensional discriminant approach which involves evaluation of the rank of a matrix.
1998-06-03
The view from NASA's Magellan spacecraft shows most of Galindo V-40 quadrangle looking east; Atete Corona, in the foreground, is a 600-km-long and about 450-km-wide, circular volcano-tectonic feature. http://photojournal.jpl.nasa.gov/catalog/PIA00096
Perfusion flow bioreactor for 3D in situ imaging: investigating cell/biomaterials interactions.
Stephens, J S; Cooper, J A; Phelan, F R; Dunkers, J P
2007-07-01
The capability to image real time cell/material interactions in a three-dimensional (3D) culture environment will aid in the advancement of tissue engineering. This paper describes a perfusion flow bioreactor designed to hold tissue engineering scaffolds and allow for in situ imaging using an upright microscope. The bioreactor can hold a scaffold of desirable thickness for implantation (>2 mm). Coupling 3D culture and perfusion flow leads to the creation of a more biomimetic environment. We examined the ability of the bioreactor to maintain cell viability outside of an incubator environment (temperature and pH stability), investigated the flow features of the system (flow induced shear stress), and determined the image quality in order to perform time-lapsed imaging of two-dimensional (2D) and 3D cell culture. In situ imaging was performed on 2D and 3D, culture samples and cell viability was measured under perfusion flow (2.5 mL/min, 0.016 Pa). The visualization of cell response to their environment, in real time, will help to further elucidate the influences of biomaterial surface features, scaffold architectures, and the influence of flow induced shear on cell response and growth of new tissue. (c) 2006 Wiley Periodicals, Inc.
Simulation of Mirror Electron Microscopy Caustic Images in Three-Dimensions
NASA Astrophysics Data System (ADS)
Kennedy, S. M.; Zheng, C. X.; Jesson, D. E.
A full, three-dimensional (3D) ray tracing approach is developed to simulate the caustics visible in mirror electron microscopy (MEM). The method reproduces MEM image contrast resulting from 3D surface relief. To illustrate the potential of the simulation methods, we study the evolution of crater contrast associated with a movie of GaAs structures generated by the droplet epitaxy technique. Specifically, we simulate the image contrast resulting from both a precursor stage and the final crater morphology which is consistent with an inverted pyramid consisting of (111) facet walls. The method therefore facilities the study of how self-assembled quantum structures evolve with time and, in particular, the development of anisotropic features including faceting.
Three-dimensional digital mapping of the optic nerve head cupping in glaucoma
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Ramirez, Manuel; Morales, Jose
1992-08-01
Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks including feature extraction, registration by cepstral analysis, and correction for intensity variations are performed. The depth map is obtained by using a coarse to fine strategy for obtaining disparities between corresponding areas. The required matching techniques to obtain the translational differences in every step, uses cepstral analysis and correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful information than just qualitative stereoscopic viewing of the fundus as one of the diagnostic criteria for diagnosis of glaucoma.
Diatom Valve Three-Dimensional Representation: A New Imaging Method Based on Combined Microscopies
Ferrara, Maria Antonietta; De Tommasi, Edoardo; Coppola, Giuseppe; De Stefano, Luca; Rea, Ilaria; Dardano, Principia
2016-01-01
The frustule of diatoms, unicellular microalgae, shows very interesting photonic features, generally related to its complicated and quasi-periodic micro- and nano-structure. In order to simulate light propagation inside and through this natural structure, it is important to develop three-dimensional (3D) models for synthetic replica with high spatial resolution. In this paper, we present a new method that generates images of microscopic diatoms with high definition, by merging scanning electron microscopy and digital holography microscopy or atomic force microscopy data. Starting from two digital images, both acquired separately with standard characterization procedures, a high spatial resolution (Δz = λ/20, Δx = Δy ≅ 100 nm, at least) 3D model of the object has been generated. Then, the two sets of data have been processed by matrix formalism, using an original mathematical algorithm implemented on a commercially available software. The developed methodology could be also of broad interest in the design and fabrication of micro-opto-electro-mechanical systems. PMID:27690008
Three-dimensional minority-carrier collection channels at shunt locations in silicon solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guthrey, Harvey; Johnston, Steve; Weiss, Dirk N.
2016-10-01
In this contribution, we demonstrate the value of using a multiscale multi-technique characterization approach to study the performance-limiting defects in multi-crystalline silicon (mc-Si) photovoltaic devices. The combination of dark lock-in thermography (DLIT) imaging, electron beam induced current imaging, and both transmission and scanning transmission electron microscopy (TEM/STEM) on the same location revealed the nanoscale origin of the optoelectronic properties of shunts visible at the device scale. Our site-specific correlative approach identified the shunt behavior to be a result of three-dimensional inversion channels around structural defects decorated with oxide precipitates. These inversion channels facilitate enhanced minority-carrier transport that results in themore » increased heating observed through DLIT imaging. The definitive connection between the nanoscale structure and chemistry of the type of shunt investigated here allows photovoltaic device manufacturers to immediately address the oxygen content of their mc-Si absorber material when such features are present, instead of engaging in costly characterization.« less
Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets
Bhikha, Charita; Andreasen, Arne; Christensen, Erik I.; Letts, Robyn F. R.; Pantanowitz, Adam; Rubin, David M.; Thomsen, Jesper S.; Zhai, Xiao-Yue
2015-01-01
An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron. PMID:26170896
Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets.
Bhikha, Charita; Andreasen, Arne; Christensen, Erik I; Letts, Robyn F R; Pantanowitz, Adam; Rubin, David M; Thomsen, Jesper S; Zhai, Xiao-Yue
2015-01-01
An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.
Method and Apparatus for Virtual Interactive Medical Imaging by Multiple Remotely-Located Users
NASA Technical Reports Server (NTRS)
Ross, Muriel D. (Inventor); Twombly, Ian Alexander (Inventor); Senger, Steven O. (Inventor)
2003-01-01
A virtual interactive imaging system allows the displaying of high-resolution, three-dimensional images of medical data to a user and allows the user to manipulate the images, including rotation of images in any of various axes. The system includes a mesh component that generates a mesh to represent a surface of an anatomical object, based on a set of data of the object, such as from a CT or MRI scan or the like. The mesh is generated so as to avoid tears, or holes, in the mesh, providing very high-quality representations of topographical features of the object, particularly at high- resolution. The system further includes a virtual surgical cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach the removed piece if desired. The system further includes a virtual collaborative clinic component, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, three-dimensional images of the object in real-time.
Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data
NASA Astrophysics Data System (ADS)
Hung, C. H.; Chang, W. C.; Chen, L. C.
2016-06-01
With the popularity of geospatial applications, database updating is getting important due to the environmental changes over time. Imagery provides a lower cost and efficient way to update the database. Three dimensional objects can be measured by space intersection using conjugate image points and orientation parameters of cameras. However, precise orientation parameters of light amateur cameras are not always available due to their costliness and heaviness of precision GPS and IMU. To automatize data updating, the correspondence of object vector data and image may be built to improve the accuracy of direct georeferencing. This study contains four major parts, (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. Then, the fourth part utilized line features in orientation modeling. The line-based orientation modeling was performed by the integration of line parametric equations into collinearity condition equations. The experiment data included images with 0.06 m resolution acquired by Canon EOS Mark 5D II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1 pixel accuracy may be reached, which is equivalent to 0.12 m in the object space.
Transparent 3D display for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Hong, Jisoo
2012-11-01
Two types of transparent three-dimensional display systems applicable for the augmented reality are demonstrated. One of them is a head-mounted-display-type implementation which utilizes the principle of the system adopting the concave floating lens to the virtual mode integral imaging. Such configuration has an advantage in that the threedimensional image can be displayed at sufficiently far distance resolving the accommodation conflict with the real world scene. Incorporating the convex half mirror, which shows a partial transparency, instead of the concave floating lens, makes it possible to implement the transparent three-dimensional display system. The other type is the projection-type implementation, which is more appropriate for the general use than the head-mounted-display-type implementation. Its imaging principle is based on the well-known reflection-type integral imaging. We realize the feature of transparent display by imposing the partial transparency to the array of concave mirror which is used for the screen of reflection-type integral imaging. Two types of configurations, relying on incoherent and coherent light sources, are both possible. For the incoherent configuration, we introduce the concave half mirror array, whereas the coherent one adopts the holographic optical element which replicates the functionality of the lenslet array. Though the projection-type implementation is beneficial than the head-mounted-display in principle, the present status of the technical advance of the spatial light modulator still does not provide the satisfactory visual quality of the displayed three-dimensional image. Hence we expect that the head-mounted-display-type and projection-type implementations will come up in the market in sequence.
Biodynamic profiling of three-dimensional tissue growth techniques
NASA Astrophysics Data System (ADS)
Sun, Hao; Merrill, Dan; Turek, John; Nolte, David
2016-03-01
Three-dimensional tissue culture presents a more biologically relevant environment in which to perform drug development than conventional two-dimensional cell culture. However, obtaining high-content information from inside three dimensional tissue has presented an obstacle to rapid adoption of 3D tissue culture for pharmaceutical applications. Biodynamic imaging is a high-content three-dimensional optical imaging technology based on low-coherence interferometry and digital holography that uses intracellular dynamics as high-content image contrast. In this paper, we use biodynamic imaging to compare pharmaceutical responses to Taxol of three-dimensional multicellular spheroids grown by three different growth techniques: rotating bioreactor, hanging-drop and plate-grown spheroids. The three growth techniques have systematic variations among tissue cohesiveness and intracellular activity and consequently display different pharmacodynamics under identical drug dose conditions. The in vitro tissue cultures are also compared to ex vivo living biopsies. These results demonstrate that three-dimensional tissue cultures are not equivalent, and that drug-response studies must take into account the growth method.
Automated choroid segmentation of three-dimensional SD-OCT images by incorporating EDI-OCT images.
Chen, Qiang; Niu, Sijie; Fang, Wangyi; Shuai, Yuanlu; Fan, Wen; Yuan, Songtao; Liu, Qinghuai
2018-05-01
The measurement of choroidal volume is more related with eye diseases than choroidal thickness, because the choroidal volume can reflect the diseases comprehensively. The purpose is to automatically segment choroid for three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT) images. We present a novel choroid segmentation strategy for SD-OCT images by incorporating the enhanced depth imaging OCT (EDI-OCT) images. The down boundary of the choroid, namely choroid-sclera junction (CSJ), is almost invisible in SD-OCT images, while visible in EDI-OCT images. During the SD-OCT imaging, the EDI-OCT images can be generated for the same eye. Thus, we present an EDI-OCT-driven choroid segmentation method for SD-OCT images, where the choroid segmentation results of the EDI-OCT images are used to estimate the average choroidal thickness and to improve the construction of the CSJ feature space of the SD-OCT images. We also present a whole registration method between EDI-OCT and SD-OCT images based on retinal thickness and Bruch's Membrane (BM) position. The CSJ surface is obtained with a 3D graph search in the CSJ feature space. Experimental results with 768 images (6 cubes, 128 B-scan images for each cube) from 2 healthy persons, 2 age-related macular degeneration (AMD) and 2 diabetic retinopathy (DR) patients, and 210 B-scan images from other 8 healthy persons and 21 patients demonstrate that our method can achieve high segmentation accuracy. The mean choroid volume difference and overlap ratio for 6 cubes between our proposed method and outlines drawn by experts were -1.96µm3 and 88.56%, respectively. Our method is effective for the 3D choroid segmentation of SD-OCT images because the segmentation accuracy and stability are compared with the manual segmentation. Copyright © 2017. Published by Elsevier B.V.
Camouflaged target detection based on polarized spectral features
NASA Astrophysics Data System (ADS)
Tan, Jian; Zhang, Junping; Zou, Bin
2016-05-01
The polarized hyperspectral images (PHSI) include polarization, spectral, spatial and radiant features, which provide more information about objects and scenes than traditional intensity or spectrum ones. And polarization can suppress the background and highlight the object, leading to the high potential to improve camouflaged target detection. So polarized hyperspectral imaging technique has aroused extensive concern in the last few years. Nowadays, the detection methods are still not very mature, most of which are rooted in the detection of hyperspectral image. And before using these algorithms, Stokes vector is used to process the original four-dimensional polarized hyperspectral data firstly. However, when the data is large and complex, the amount of calculation and error will increase. In this paper, tensor is applied to reconstruct the original four-dimensional data into new three-dimensional data, then, the constraint energy minimization (CEM) is used to process the new data, which adds the polarization information to construct the polarized spectral filter operator and takes full advantages of spectral and polarized information. This way deals with the original data without extracting the Stokes vector, so as to reduce the computation and error greatly. The experimental results also show that the proposed method in this paper is more suitable for the target detection of the PHSI.
[Bone drilling simulation by three-dimensional imaging].
Suto, Y; Furuhata, K; Kojima, T; Kurokawa, T; Kobayashi, M
1989-06-01
The three-dimensional display technique has a wide range of medical applications. Pre-operative planning is one typical application: in orthopedic surgery, three-dimensional image processing has been used very successfully. We have employed this technique in pre-operative planning for orthopedic surgery, and have developed a simulation system for bone-drilling. Positive results were obtained by pre-operative rehearsal; when a region of interest is indicated by means of a mouse on the three-dimensional image displayed on the CRT, the corresponding region appears on the slice image which is displayed simultaneously. Consequently, the status of the bone-drilling is constantly monitored. In developing this system, we have placed emphasis on the quality of the reconstructed three-dimensional images, on fast processing, and on the easy operation of the surgical planning simulation.
The 3D morphology of the ejecta surrounding VY Canis Majoris
NASA Astrophysics Data System (ADS)
Jones, Terry Jay; Humphreys, Roberta M.; Helton, L. Andrew
2007-03-01
We use second epoch images taken with WFPC2 on the HST and imaging polarimetry taken with the HST/ACS/HRC to explore the three dimensional structure of the circumstellar dust distribution around the red supergiant VY Canis Majoris. Transverse motions, combined with radial velocities, provide a picture of the kinematics of the ejecta, including the total space motions. The fractional polarization and photometric colors provide an independent method of locating the physical position of the dust along the line-of-sight. Most of the individual arc-like features and clumps seen in the intensity image are also features in the fractional polarization map, and must be distinct geometric objects. The location of these features in the ejecta of VY CMa using kinematics and polarimetry agree well with each other, and strongly suggest they are the result of relatively massive ejections, probably associated with magnetic fields.
A novel method to acquire 3D data from serial 2D images of a dental cast
NASA Astrophysics Data System (ADS)
Yi, Yaxing; Li, Zhongke; Chen, Qi; Shao, Jun; Li, Xinshe; Liu, Zhiqin
2007-05-01
This paper introduced a newly developed method to acquire three-dimensional data from serial two-dimensional images of a dental cast. The system consists of a computer and a set of data acquiring device. The data acquiring device is used to take serial pictures of the a dental cast; an artificial neural network works to translate two-dimensional pictures to three-dimensional data; then three-dimensional image can reconstruct by the computer. The three-dimensional data acquiring of dental casts is the foundation of computer-aided diagnosis and treatment planning in orthodontics.
NASA Astrophysics Data System (ADS)
Funane, Tsukasa; Hou, Steven S.; Zoltowska, Katarzyna Marta; van Veluw, Susanne J.; Berezovska, Oksana; Kumar, Anand T. N.; Bacskai, Brian J.
2018-05-01
We have developed an imaging technique which combines selective plane illumination microscopy with time-domain fluorescence lifetime imaging microscopy (SPIM-FLIM) for three-dimensional volumetric imaging of cleared mouse brains with micro- to mesoscopic resolution. The main features of the microscope include a wavelength-adjustable pulsed laser source (Ti:sapphire) (near-infrared) laser, a BiBO frequency-doubling photonic crystal, a liquid chamber, an electrically focus-tunable lens, a cuvette based sample holder, and an air (dry) objective lens. The performance of the system was evaluated with a lifetime reference dye and micro-bead phantom measurements. Intensity and lifetime maps of three-dimensional human embryonic kidney (HEK) cell culture samples and cleared mouse brain samples expressing green fluorescent protein (GFP) (donor only) and green and red fluorescent protein [positive Förster (fluorescence) resonance energy transfer] were acquired. The results show that the SPIM-FLIM system can be used for sample sizes ranging from single cells to whole mouse organs and can serve as a powerful tool for medical and biological research.
Bhattacharya, Dipanjan; Singh, Vijay Raj; Zhi, Chen; So, Peter T. C.; Matsudaira, Paul; Barbastathis, George
2012-01-01
Laser sheet based microscopy has become widely accepted as an effective active illumination method for real time three-dimensional (3D) imaging of biological tissue samples. The light sheet geometry, where the camera is oriented perpendicular to the sheet itself, provides an effective method of eliminating some of the scattered light and minimizing the sample exposure to radiation. However, residual background noise still remains, limiting the contrast and visibility of potentially interesting features in the samples. In this article, we investigate additional structuring of the illumination for improved background rejection, and propose a new technique, “3D HiLo” where we combine two HiLo images processed from orthogonal directions to improve the condition of the 3D reconstruction. We present a comparative study of conventional structured illumination based demodulation methods, namely 3Phase and HiLo with a newly implemented 3D HiLo approach and demonstrate that the latter yields superior signal-to-background ratio in both lateral and axial dimensions, while simultaneously suppressing image processing artifacts. PMID:23262684
Bhattacharya, Dipanjan; Singh, Vijay Raj; Zhi, Chen; So, Peter T C; Matsudaira, Paul; Barbastathis, George
2012-12-03
Laser sheet based microscopy has become widely accepted as an effective active illumination method for real time three-dimensional (3D) imaging of biological tissue samples. The light sheet geometry, where the camera is oriented perpendicular to the sheet itself, provides an effective method of eliminating some of the scattered light and minimizing the sample exposure to radiation. However, residual background noise still remains, limiting the contrast and visibility of potentially interesting features in the samples. In this article, we investigate additional structuring of the illumination for improved background rejection, and propose a new technique, "3D HiLo" where we combine two HiLo images processed from orthogonal directions to improve the condition of the 3D reconstruction. We present a comparative study of conventional structured illumination based demodulation methods, namely 3Phase and HiLo with a newly implemented 3D HiLo approach and demonstrate that the latter yields superior signal-to-background ratio in both lateral and axial dimensions, while simultaneously suppressing image processing artifacts.
Content metamorphosis in synthetic holography
NASA Astrophysics Data System (ADS)
Desbiens, Jacques
2013-02-01
A synthetic hologram is an optical system made of hundreds of images amalgamated in a structure of holographic cells. Each of these images represents a point of view on a three-dimensional space which makes us consider synthetic holography as a multiple points of view perspective system. In the composition of a computer graphics scene for a synthetic hologram, the field of view of the holographic image can be divided into several viewing zones. We can attribute these divisions to any object or image feature independently and operate different transformations on image content. In computer generated holography, we tend to consider content variations as a continuous animation much like a short movie. However, by composing sequential variations of image features in relation with spatial divisions, we can build new narrative forms distinct from linear cinematographic narration. When observers move freely and change their viewing positions, they travel from one field of view division to another. In synthetic holography, metamorphoses of image content are within the observer's path. In all imaging Medias, the transformation of image features in synchronisation with the observer's position is a rare occurrence. However, this is a predominant characteristic of synthetic holography. This paper describes some of my experimental works in the development of metamorphic holographic images.
Computer-generated 3D ultrasound images of the carotid artery
NASA Technical Reports Server (NTRS)
Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.
1989-01-01
A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.
Computer-generated 3D ultrasound images of the carotid artery
NASA Astrophysics Data System (ADS)
Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.
A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.
Lai, Hei Ming; Liu, Alan King Lun; Ng, Wai-Lung; DeFelice, John; Lee, Wing Sang; Li, Heng; Li, Wen; Ng, Ho Man; Chang, Raymond Chuen-Chung; Lin, Bin; Wu, Wutian; Gentleman, Steve M.
2016-01-01
Three-dimensional visualization of intact tissues is now being achieved by turning tissues transparent. CLARITY is a unique tissue clearing technique, which features the use of detergents to remove lipids from fixed tissues to achieve optical transparency. To preserve tissue integrity, an acrylamide-based hydrogel has been proposed to embed the tissue. In this study, we examined the rationale behind the use of acrylamide in CLARITY, and presented evidence to suggest that the omission of acrylamide-hydrogel embedding in CLARITY does not alter the preservation of tissue morphology and molecular information in fixed tissues. We therefore propose a novel and simplified workflow for formaldehyde-fixed tissue clearing, which will facilitate the laboratory implementation of this technique. Furthermore, we have investigated the basic tissue clearing process in detail and have highlighted some areas for targeted improvement of technologies essential for the emerging subject of three-dimensional histology. PMID:27359336
Gu, Min; Bird, Damian
2003-05-01
The three-dimensional optical transfer function is derived for analyzing the imaging performance in fiber-optical two-photon fluorescence microscopy. Two types of fiber-optical geometry are considered: The first involves a single-mode fiber for delivering a laser beam for illumination, and the second is based on the use of a single-mode fiber coupler for both illumination delivery and signal collection. It is found that in the former case the transverse and axial cutoff spatial frequencies of the three-dimensional optical transfer function are the same as those in conventional two-photon fluorescence microscopy without the use of a pinhole.However, the transverse and axial cutoff spatial frequencies in the latter case are 1.7 times as large as those in the former case. Accordingly, this feature leads to an enhanced optical sectioning effect when a fiber coupler is used, which is consistent with our recent experimental observation.
NASA Astrophysics Data System (ADS)
Adabi, Saba; Conforto, Silvia; Hosseinzadeh, Matin; Noe, Shahryar; Daveluy, Steven; Mehregan, Darius; Nasiriavanaki, Mohammadreza
2017-02-01
Optical Coherence Tomography (OCT) offers real-time high-resolution three-dimensional images of tissue microstructures. In this study, we used OCT skin images acquired from ten volunteers, neither of whom had any skin conditions addressing the features of their anatomic location. OCT segmented images are analyzed based on their optical properties (attenuation coefficient) and textural image features e.g., contrast, correlation, homogeneity, energy, entropy, etc. Utilizing the information and referring to their clinical insight, we aim to make a comprehensive computational model for the healthy skin. The derived parameters represent the OCT microstructural morphology and might provide biological information for generating an atlas of normal skin from different anatomic sites of human skin and may allow for identification of cell microstructural changes in cancer patients. We then compared the parameters of healthy samples with those of abnormal skin and classified them using a linear Support Vector Machines (SVM) with 82% accuracy.
Li, Qiu-yang; Tang, Jie; He, En-hui; Li, Yan-mi; Zhou, Yun; Zhang, Xu; Chen, Guangfu
2012-11-01
The purpose of this study was to evaluate the effectiveness of three-dimensional contrast-enhanced ultrasound in differentiating invasive and noninvasive neoplasms of urinary bladder. A total of 60 lesions in 60 consecutive patients with bladder tumors received three dimensional ultrasonography, low acoustic power contrast enhanced ultrasonography and low acoustic power three-dimensional contrast-enhanced ultrasound examination. The IU22 ultrasound scanner and a volume transducer were used and the ultrasound contrast agent was SonoVue. The contrast-specific sonographic imaging modes were PI (pulse inversion) and PM (power modulation). The three dimensional ultrasonography, contrast enhanced ultrasonography, and three-dimensional contrast-enhanced ultrasound images were independently reviewed by two readers who were not in the images acquisition. Images were analyzed off-site. A level of confidence in the diagnosis of tumor invasion of the muscle layer was assigned on a 5° scale. Receiver operating characteristic analysis was used to assess overall confidence in the diagnosis of muscle invasion by tumor. Kappa values were used to assess inter-readers agreement. Histologic diagnosis was obtained for all patients. Final pathologic staging revealed 44 noninvasive tumors and 16 invasive tumors. Three-dimensional contrast-enhanced ultrasound depicted all 16 muscle-invasive tumors. The diagnostic performance of three-dimensional contrast-enhanced ultrasound was better than those of three dimensional ultrasonography and contrast enhanced ultrasonography. The receiver operating characteristic curves were 0.976 and 0.967 for three-dimensional contrast-enhanced ultrasound, those for three dimensional ultrasonography were 0.881 and 0.869, those for contrast enhanced ultrasonography were 0.927 and 0.929. The kappa values in the three dimensional ultrasonography, contrast enhanced ultrasonography and three-dimensional contrast-enhanced ultrasound for inter-reader agreements were 0.717, 0.794 and 0.914. Three-dimensional contrast-enhanced ultrasound imaging, with contrast-enhanced spatial visualization is clinical useful for differentiating invasive and noninvasive neoplasms of urinary bladder objectively. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-01-01
Among several palmprint feature extraction methods the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract the palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang’s method to segment only the region of interests. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used. PMID:29762519
Stemmer, A
1995-04-01
The design of a scanned-cantilever-type force microscope is presented which is fully integrated into an inverted high-resolution video-enhanced light microscope. This set-up allows us to acquire thin optical sections in differential interference contrast (DIC) or polarization while the force microscope is in place. Such a hybrid microscope provides a unique platform to study how cell surface properties determine, or are affected by, the three-dimensional dynamic organization inside the living cell. The hybrid microscope presented in this paper has proven reliable and versatile for biological applications. It is the only instrument that can image a specimen by force microscopy and high-power DIC without having either to translate the specimen or to remove the force microscope. Adaptation of the design features could greatly enhance the suitability of other force microscopes for biological work.
Three-dimensional representation of curved nanowires.
Huang, Z; Dikin, D A; Ding, W; Qiao, Y; Chen, X; Fridman, Y; Ruoff, R S
2004-12-01
Nanostructures, such as nanowires, nanotubes and nanocoils, can be described in many cases as quasi one-dimensional curved objects projecting in three-dimensional space. A parallax method to construct the correct three-dimensional geometry of such one-dimensional nanostructures is presented. A series of scanning electron microscope images was acquired at different view angles, thus providing a set of image pairs that were used to generate three-dimensional representations using a matlab program. An error analysis as a function of the view angle between the two images is presented and discussed. As an example application, the importance of knowing the true three-dimensional shape of boron nanowires is demonstrated; without the nanowire's correct length and diameter, mechanical resonance data cannot provide an accurate estimate of Young's modulus.
Three-dimensional echocardiography of congenital abnormalities of the left atrioventricular valve
Rice, Kathryn
2015-01-01
Congenital abnormalities of the left atrioventricular (AV) valve are a significant diagnostic challenge. Traditionally, reliance has been placed on two-dimensional echocardiographic (2DE) imaging to guide recognition of the specific morphological features. Real-time 3DE can provide unique views of the left AV valve with the potential to improve understanding of valve morphology and function to facilitate surgical planning. This review illustrates the features of congenital abnormalities of the left AV valve assessed by 3DE. The similarities and differences in morphology between different lesions are described, both with respect to the valve itself and supporting chordal apparatus. The potential advantages as well as limitations of this technique in clinical practice are outlined. PMID:26693328
Khotanlou, Hassan; Afrasiabi, Mahlagha
2012-10-01
This paper presents a new feature selection approach for automatically extracting multiple sclerosis (MS) lesions in three-dimensional (3D) magnetic resonance (MR) images. Presented method is applicable to different types of MS lesions. In this method, T1, T2, and fluid attenuated inversion recovery (FLAIR) images are firstly preprocessed. In the next phase, effective features to extract MS lesions are selected by using a genetic algorithm (GA). The fitness function of the GA is the Similarity Index (SI) of a support vector machine (SVM) classifier. The results obtained on different types of lesions have been evaluated by comparison with manual segmentations. This algorithm is evaluated on 15 real 3D MR images using several measures. As a result, the SI between MS regions determined by the proposed method and radiologists was 87% on average. Experiments and comparisons with other methods show the effectiveness and the efficiency of the proposed approach.
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions
Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan; ...
2017-04-24
Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment-based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projectionsmore » minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. Furthermore, a linear discriminant analysis classifier was trained using DASP generated features and was blind tested resulting in a greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Furthermore, we show that a rank reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.« less
A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan
Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment-based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projectionsmore » minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. Furthermore, a linear discriminant analysis classifier was trained using DASP generated features and was blind tested resulting in a greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Furthermore, we show that a rank reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.« less
Fetit, Ahmed E; Novak, Jan; Peet, Andrew C; Arvanitits, Theodoros N
2015-09-01
The aim of this study was to assess the efficacy of three-dimensional texture analysis (3D TA) of conventional MR images for the classification of childhood brain tumours in a quantitative manner. The dataset comprised pre-contrast T1 - and T2-weighted MRI series obtained from 48 children diagnosed with brain tumours (medulloblastoma, pilocytic astrocytoma and ependymoma). 3D and 2D TA were carried out on the images using first-, second- and higher order statistical methods. Six supervised classification algorithms were trained with the most influential 3D and 2D textural features, and their performances in the classification of tumour types, using the two feature sets, were compared. Model validation was carried out using the leave-one-out cross-validation (LOOCV) approach, as well as stratified 10-fold cross-validation, in order to provide additional reassurance. McNemar's test was used to test the statistical significance of any improvements demonstrated by 3D-trained classifiers. Supervised learning models trained with 3D textural features showed improved classification performances to those trained with conventional 2D features. For instance, a neural network classifier showed 12% improvement in area under the receiver operator characteristics curve (AUC) and 19% in overall classification accuracy. These improvements were statistically significant for four of the tested classifiers, as per McNemar's tests. This study shows that 3D textural features extracted from conventional T1 - and T2-weighted images can improve the diagnostic classification of childhood brain tumours. Long-term benefits of accurate, yet non-invasive, diagnostic aids include a reduction in surgical procedures, improvement in surgical and therapy planning, and support of discussions with patients' families. It remains necessary, however, to extend the analysis to a multicentre cohort in order to assess the scalability of the techniques used. Copyright © 2015 John Wiley & Sons, Ltd.
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long
2012-01-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
Imaging three-dimensional innervation zone distribution in muscles from M-wave recordings
NASA Astrophysics Data System (ADS)
Zhang, Chuan; Peng, Yun; Liu, Yang; Li, Sheng; Zhou, Ping; Zev Rymer, William; Zhang, Yingchun
2017-06-01
Objective. To localize neuromuscular junctions in skeletal muscles in vivo which is of great importance in understanding, diagnosing and managing of neuromuscular disorders. Approach. A three-dimensional global innervation zone imaging technique was developed to characterize the global distribution of innervation zones, as an indication of the location and features of neuromuscular junctions, using electrically evoked high-density surface electromyogram recordings. Main results. The performance of the technique was evaluated in the biceps brachii of six intact human subjects. The geometric centers of the distributions of the reconstructed innervation zones were determined with a mean distance of 9.4 ± 1.4 cm from the reference plane, situated at the medial epicondyle of the humerus. A mean depth was calculated as 1.5 ± 0.3 cm from the geometric centers to the closed points over the skin. The results are consistent with those reported in previous histology studies. It was also found that the volumes and distributions of the reconstructed innervation zones changed as the stimulation intensities increased until the supramaximal muscle response was achieved. Significance. Results have demonstrated the high performance of the proposed imaging technique in noninvasively imaging global distributions of the innervation zones in the three-dimensional muscle space in vivo, and the feasibility of its clinical applications, such as guiding botulinum toxin injections in spasticity management, or in early diagnosis of neurodegenerative progression of amyotrophic lateral sclerosis.
NASA Astrophysics Data System (ADS)
Takizawa, Kuniharu
A novel three-dimensional (3-D) projection display used with polarized eyeglasses is proposed. It consists of polymer-dispersed liquid crystal-light valves that modulate the illuminated light based on light scattering, a polarization beam splitter, and a Schlieren projection system. The features of the proposed display include a 3-D image display with a single projector, half size and half power consumption compared with a conventional 3-D projector with polarized glasses. Measured electro-optic characteristics of a polymer-dispersed liquid-crystal cell inserted between crossed polarizers suggests that the proposed display achieves small cross talk and high-extinction ratio.
Doan, Nhat Trung; van den Bogaard, Simon J A; Dumas, Eve M; Webb, Andrew G; van Buchem, Mark A; Roos, Raymund A C; van der Grond, Jeroen; Reiber, Johan H C; Milles, Julien
2014-03-01
To develop a framework for quantitative detection of between-group textural differences in ultrahigh field T2*-weighted MR images of the brain. MR images were acquired using a three-dimensional (3D) T2*-weighted gradient echo sequence on a 7 Tesla MRI system. The phase images were high-pass filtered to remove phase wraps. Thirteen textural features were computed for both the magnitude and phase images of a region of interest based on 3D Gray-Level Co-occurrence Matrix, and subsequently evaluated to detect between-group differences using a Mann-Whitney U-test. We applied the framework to study textural differences in subcortical structures between premanifest Huntington's disease (HD), manifest HD patients, and controls. In premanifest HD, four phase-based features showed a difference in the caudate nucleus. In manifest HD, 7 magnitude-based features showed a difference in the pallidum, 6 phase-based features in the caudate nucleus, and 10 phase-based features in the putamen. After multiple comparison correction, significant differences were shown in the putamen in manifest HD by two phase-based features (both adjusted P values=0.04). This study provides the first evidence of textural heterogeneity of subcortical structures in HD. Texture analysis of ultrahigh field T2*-weighted MR images can be useful for noninvasive monitoring of neurodegenerative diseases. Copyright © 2013 Wiley Periodicals, Inc.
Target recognition for ladar range image using slice image
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Wang, Liang
2015-12-01
A shape descriptor and a complete shape-based recognition system using slice images as geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by three-dimensional Hough transform and the corresponding mathematical transformation. The system consists of two processes, the model library construction and recognition. In the model library construction process, a series of range images are obtained after the model object is sampled at preset attitude angles. Then, all the range images are converted into slice images. The number of slice images is reduced by clustering analysis and finding a representation to reduce the size of the model library. In the recognition process, the slice image of the scene is compared with the slice image in the model library. The recognition results depend on the comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and comparison between the slice image representation method and moment invariants representation method is performed. The experimental results show that whether in conditions without noise or with ladar noise, the system has a high recognition rate and low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.
Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar
2017-01-01
This paper presents an automatic classification system for segregating pathological brain from normal brains in magnetic resonance imaging scanning. The proposed system employs contrast limited adaptive histogram equalization scheme to enhance the diseased region in brain MR images. Two-dimensional stationary wavelet transform is harnessed to extract features from the preprocessed images. The feature vector is constructed using the energy and entropy values, computed from the level- 2 SWT coefficients. Then, the relevant and uncorrelated features are selected using symmetric uncertainty ranking filter. Subsequently, the selected features are given input to the proposed AdaBoost with support vector machine classifier, where SVM is used as the base classifier of AdaBoost algorithm. To validate the proposed system, three standard MR image datasets, Dataset-66, Dataset-160, and Dataset- 255 have been utilized. The 5 runs of k-fold stratified cross validation results indicate the suggested scheme offers better performance than other existing schemes in terms of accuracy and number of features. The proposed system earns ideal classification over Dataset-66 and Dataset-160; whereas, for Dataset- 255, an accuracy of 99.45% is achieved. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Suenaga, Hideyuki; Hoang Tran, Huy; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Mori, Yoshiyuki; Takato, Tsuyoshi
2013-01-01
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye. PMID:23703710
CT Scans of Cores Metadata, Barrow, Alaska 2015
Katie McKnight; Tim Kneafsey; Craig Ulrich
2015-03-11
Individual ice cores were collected from Barrow Environmental Observatory in Barrow, Alaska, throughout 2013 and 2014. Cores were drilled along different transects to sample polygonal features (i.e. the trough, center and rim of high, transitional and low center polygons). Most cores were drilled around 1 meter in depth and a few deep cores were drilled around 3 meters in depth. Three-dimensional images of the frozen cores were constructed using a medical X-ray computed tomography (CT) scanner. TIFF files can be uploaded to ImageJ (an open-source imaging software) to examine soil structure and densities within each core.
Kim, Youngseop; Choi, Eun Seo; Kwak, Wooseop; Shin, Yongjin; Jung, Woonggyu; Ahn, Yeh-Chan; Chen, Zhongping
2008-06-01
We demonstrate the use of optical coherence tomography (OCT) as a non-destructive diagnostic tool for evaluating laser-processing performance by imaging the features of a pit and a rim. A pit formed on a material at different laser-processing conditions is imaged using both a conventional scanning electron microscope (SEM) and OCT. Then using corresponding images, the geometrical characteristics of the pit are analyzed and compared. From the results, we could verify the feasibility and the potential of the application of OCT to the monitoring of the laser-processing performance.
Kim, Youngseop; Choi, Eun Seo; Kwak, Wooseop; Shin, Yongjin; Jung, Woonggyu; Ahn, Yeh-Chan; Chen, Zhongping
2014-01-01
We demonstrate the use of optical coherence tomography (OCT) as a non-destructive diagnostic tool for evaluating laser-processing performance by imaging the features of a pit and a rim. A pit formed on a material at different laser-processing conditions is imaged using both a conventional scanning electron microscope (SEM) and OCT. Then using corresponding images, the geometrical characteristics of the pit are analyzed and compared. From the results, we could verify the feasibility and the potential of the application of OCT to the monitoring of the laser-processing performance. PMID:24932051
Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition
NASA Astrophysics Data System (ADS)
Hafizhelmi Kamaru Zaman, Fadhlan
2018-03-01
Dimensionality reduction is very important in face recognition since it ensures that high-dimensionality data can be mapped to lower dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has been previously used to serve this purpose, however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally-applied Local Orthogonal Least Squares (LOLS) model can be used as initial feature extraction before the application of LLE. By construction of least squares regression under orthogonal constraints we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we called LOLS images. LLE can then be applied on the LOLS images to maps its representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under Single Sample Per Person (SSPP) constraint demonstrates that our proposed method can reduce the time required to compute LLE features while delivering better accuracy when compared to when either LLE or OLS alone is used. Comparison against several other feature extraction methods and more recent feature-learning method such as state-of-the-art Convolutional Neural Networks (CNN) also reveal the superiority of the proposed method under SSPP constraint.
Baghaie, Ahmadreza; Pahlavan Tafti, Ahmad; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-01-01
Scanning Electron Microscope (SEM) as one of the major research and industrial equipment for imaging of micro-scale samples and surfaces has gained extensive attention from its emerge. However, the acquired micrographs still remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third-dimension by use of multi-view image acquisition of the microscopic samples combined with pre/post-processing steps including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of the highly complex microscopic samples.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
As the hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision,which has a broad application prospects in many computer vision fields,such as aerial mapping,vision navigation,motion analysis and industrial inspection etc.In this paper, a research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained by using the checkerboard lattice of zhang zhengyou the field of image feature extraction and stereo matching, adopted the SURF operator in the local feature operator and the SGBM algorithm in the global matching algorithm are used respectively, and the performance are compared. After completed the feature points matching, we can build the corresponding between matching points and the 3D object points using the camera parameters which are calibrated, which means the 3D information.
NASA Astrophysics Data System (ADS)
Shiraishi, Yuhki; Takeda, Fumiaki
In this research, we have developed a sorting system for fishes, which is comprised of a conveyance part, a capturing image part, and a sorting part. In the conveyance part, we have developed an independent conveyance system in order to separate one fish from an intertwined group of fishes. After the image of the separated fish is captured in the capturing part, a rotation invariant feature is extracted using two-dimensional fast Fourier transform, which is the mean value of the power spectrum with the same distance from the origin in the spectrum field. After that, the fishes are classified by three-layered feed-forward neural networks. The experimental results show that the developed system classifies three kinds of fishes captured in various angles with the classification ratio of 98.95% for 1044 captured images of five fishes. The other experimental results show the classification ratio of 90.7% for 300 fishes by 10-fold cross validation method.
Speckle interferometry of asteroids
NASA Technical Reports Server (NTRS)
Drummond, Jack
1988-01-01
By studying the image two-dimensional power spectra or autocorrelations projected by an asteroid as it rotates, it is possible to locate its rotational pole and derive its three axes dimensions through speckle interferometry under certain assumptions of uniform, geometric scattering, and triaxial ellipsoid shape. However, in cases where images can be reconstructed, the need for making the assumptions is obviated. Furthermore, the ultimate goal for speckle interferometry of image reconstruction will lead to mapping albedo features (if they exist) as impact areas or geological units. The first glimpses of the surface of an asteroid were obtained from images of 4 Vesta reconstructed from speckle interferometric observations. These images reveal that Vesta is quite Moon-like in having large hemispheric-scale albedo features. All of its lightcurves can be produced from a simple model developed from the images. Although undoubtedly more intricate than the model, Vesta's lightcurves can be matched by a model with three dark and four bright spots. The dark areas so dominate one hemisphere that a lightcurve minimum occurs when the maximum cross-section area is visible. The triaxial ellipsoid shape derived for Vesta is not consistent with the notion that the asteroid has an equilibrium shape in spite of its having apparently been differentiated.
NASA Astrophysics Data System (ADS)
Gao, Liang; Hammoudi, Ahmad A.; Li, Fuhai; Thrall, Michael J.; Cagle, Philip T.; Chen, Yuanxin; Yang, Jian; Xia, Xiaofeng; Fan, Yubo; Massoud, Yehia; Wang, Zhiyong; Wong, Stephen T. C.
2012-06-01
The advent of molecularly targeted therapies requires effective identification of the various cell types of non-small cell lung carcinomas (NSCLC). Currently, cell type diagnosis is performed using small biopsies or cytology specimens that are often insufficient for molecular testing after morphologic analysis. Thus, the ability to rapidly recognize different cancer cell types, with minimal tissue consumption, would accelerate diagnosis and preserve tissue samples for subsequent molecular testing in targeted therapy. We report a label-free molecular vibrational imaging framework enabling three-dimensional (3-D) image acquisition and quantitative analysis of cellular structures for identification of NSCLC cell types. This diagnostic imaging system employs superpixel-based 3-D nuclear segmentation for extracting such disease-related features as nuclear shape, volume, and cell-cell distance. These features are used to characterize cancer cell types using machine learning. Using fresh unstained tissue samples derived from cell lines grown in a mouse model, the platform showed greater than 97% accuracy for diagnosis of NSCLC cell types within a few minutes. As an adjunct to subsequent histology tests, our novel system would allow fast delineation of cancer cell types with minimum tissue consumption, potentially facilitating on-the-spot diagnosis, while preserving specimens for additional tests. Furthermore, 3-D measurements of cellular structure permit evaluation closer to the native state of cells, creating an alternative to traditional 2-D histology specimen evaluation, potentially increasing accuracy in diagnosing cell type of lung carcinomas.
NASA Astrophysics Data System (ADS)
Walker, Robin A.
2013-02-01
Hungarian physicist Dennis Gabor won the Pulitzer Prize for his 1947 introduction of basic holographic principles, but it was not until the invention of the laser in 1960 that research scientists, physicians, technologists and the general public began to seriously consider the interdisciplinary potentiality of holography. Questions around whether and when Three-Dimensional (3-D) images and systems would impact American entertainment and the arts would be answered before educators, instructional designers and students would discover how much Three-Dimensional Hologram Technology (3DHT) would affect teaching practices and learning environments. In the following International Symposium on Display Holograms (ISDH) poster presentation, the author features a traditional board game as well as a reflection hologram to illustrate conventional and evolving Three-Dimensional representations and technology for education. Using elements from the American children's toy Operation® (Hasbro, 2005) as well as a reflection hologram of a human brain (Ko, 1998), this poster design highlights the pedagogical effects of 3-D images, games and systems on learning science. As teaching agents, holograms can be considered substitutes for real objects, (human beings, organs, and animated characters) as well as agents (pedagogical, avatars, reflective) in various learning environments using many systems (direct, emergent, augmented reality) and electronic tools (cellphones, computers, tablets, television). In order to understand the particular importance of utilizing holography in school, clinical and public settings, the author identifies advantages and benefits of using 3-D images and technology as instructional tools.
Three-dimensional object recognition based on planar images
NASA Astrophysics Data System (ADS)
Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.
1993-01-01
This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedron objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information of the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In our overall performance evaluation based on a 600 recognition cycles test, the system demonstrated an accuracy of above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions which must be clinically controlled as in any industrial robotic vision system.
Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.
1999-01-01
This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ???103 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ???3 ?? 105 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
Rodriguez-Rivera, Veronica; Weidner, John W.; Yost, Michael J.
2016-01-01
Tissue scaffolds play a crucial role in the tissue regeneration process. The ideal scaffold must fulfill several requirements such as having proper composition, targeted modulus, and well-defined architectural features. Biomaterials that recapitulate the intrinsic architecture of in vivo tissue are vital for studying diseases as well as to facilitate the regeneration of lost and malformed soft tissue. A novel biofabrication technique was developed which combines state of the art imaging, three-dimensional (3D) printing, and selective enzymatic activity to create a new generation of biomaterials for research and clinical application. The developed material, Bovine Serum Albumin rubber, is reaction injected into a mold that upholds specific geometrical features. This sacrificial material allows the adequate transfer of architectural features to a natural scaffold material. The prototype consists of a 3D collagen scaffold with 4 and 3 mm channels that represent a branched architecture. This paper emphasizes the use of this biofabrication technique for the generation of natural constructs. This protocol utilizes a computer-aided software (CAD) to manufacture a solid mold which will be reaction injected with BSA rubber followed by the enzymatic digestion of the rubber, leaving its architectural features within the scaffold material. PMID:26967145
Rodriguez-Rivera, Veronica; Weidner, John W; Yost, Michael J
2016-02-12
Tissue scaffolds play a crucial role in the tissue regeneration process. The ideal scaffold must fulfill several requirements such as having proper composition, targeted modulus, and well-defined architectural features. Biomaterials that recapitulate the intrinsic architecture of in vivo tissue are vital for studying diseases as well as to facilitate the regeneration of lost and malformed soft tissue. A novel biofabrication technique was developed which combines state of the art imaging, three-dimensional (3D) printing, and selective enzymatic activity to create a new generation of biomaterials for research and clinical application. The developed material, Bovine Serum Albumin rubber, is reaction injected into a mold that upholds specific geometrical features. This sacrificial material allows the adequate transfer of architectural features to a natural scaffold material. The prototype consists of a 3D collagen scaffold with 4 and 3 mm channels that represent a branched architecture. This paper emphasizes the use of this biofabrication technique for the generation of natural constructs. This protocol utilizes a computer-aided software (CAD) to manufacture a solid mold which will be reaction injected with BSA rubber followed by the enzymatic digestion of the rubber, leaving its architectural features within the scaffold material.
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherently invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R(2)) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
User-guided segmentation for volumetric retinal optical coherence tomography images
Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.
2014-01-01
Abstract. Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962
User-guided segmentation for volumetric retinal optical coherence tomography images.
Yin, Xin; Chao, Jennifer R; Wang, Ruikang K
2014-08-01
Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method.
Feature-based three-dimensional registration for repetitive geometry in machine vision
Gong, Yuanzheng; Seibel, Eric J.
2016-01-01
As an important step in three-dimensional (3D) machine vision, 3D registration is a process of aligning two or multiple 3D point clouds that are collected from different perspectives together into a complete one. The most popular approach to register point clouds is to minimize the difference between these point clouds iteratively by Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align the point clouds that are generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds is to solve a rigid transformation. The comparison of our method and different ICP algorithms demonstrates that our proposed algorithm is more accurate, efficient and robust for repetitive geometry registration. Moreover, this method can also be used to solve high depth uncertainty problem caused by little camera baseline in vision-based 3D reconstruction. PMID:28286703
Three-dimensional imaging of the craniofacial complex.
Nguyen, Can X.; Nissanov, Jonathan; Öztürk, Cengizhan; Nuveen, Michiel J.; Tuncay, Orhan C.
2000-02-01
Orthodontic treatment requires the rearrangement of craniofacial complex elements in three planes of space, but oddly the diagnosis is done with two-dimensional images. Here we report on a three-dimensional (3D) imaging system that employs the stereoimaging method of structured light to capture the facial image. The images can be subsequently integrated with 3D cephalometric tracings derived from lateral and PA films (www.clinorthodres.com/cor-c-070). The accuracy of the reconstruction obtained with this inexpensive system is about 400 µ.
An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-01-01
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-07-07
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.
Report on New Mission Concept Study: Stereo X-Ray Corona Imager Mission
NASA Technical Reports Server (NTRS)
Liewer, Paulett C.; Davis, John M.; DeJong, E. M.; Gary, G. Allen; Klimchuk, James A.; Reinert, R. P.
1998-01-01
Studies of the three-dimensional structure and dynamics of the solar corona have been severely limited by the constraint of single viewpoint observations. The Stereo X-Ray Coronal Imager (SXCI) mission will send a single instrument, an X-ray telescope, into deep space expressly to record stereoscopic images of the solar corona. The SXCI spacecraft will be inserted into a approximately 1 AU heliocentric orbit leading Earth by approximately 25 deg at the end of nine months. The SXCI X-ray telescope forms one element of a stereo pair, the second element being an identical X-ray telescope in Earth orbit placed there as part of the NOAA GOES program. X-ray emission is a powerful diagnostic of the corona and its magnetic fields, and three dimensional information on the coronal magnetic structure would be obtained by combining the data from the two X-ray telescopes. This information can be used to address the major solar physics questions of (1) what causes explosive coronal events such as coronal mass ejections (CMEs), eruptive flares and prominence eruptions and (2) what causes the transient heating of coronal loops. Stereoscopic views of the optically thin corona will resolve some ambiguities inherent in single line-of-sight observations. Triangulation gives 3D solar coordinates of features which can be seen in the simultaneous images from both telescopes. As part of this study, tools were developed for determining the 3D geometry of coronal features using triangulation. Advanced technologies for visualization and analysis of stereo images were tested. Results of mission and spacecraft studies are also reported.
Generating Stereoscopic Television Images With One Camera
NASA Technical Reports Server (NTRS)
Coan, Paul P.
1996-01-01
Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possible for medical purposes.
Computer-aided teniae coli detection using height maps from computed tomographic colonography images
NASA Astrophysics Data System (ADS)
Wei, Zhuoshi; Yao, Jianhua; Wang, Shijun; Summers, Ronald M.
2011-03-01
Computed tomographic colonography (CTC) is a minimally invasive technique for colonic polyps and cancer screening. Teniae coli are three bands of longitudinal smooth muscle on the colon surface. They are parallel, equally distributed on the colon wall, and form a triple helix structure from the appendix to the sigmoid colon. Because of their characteristics, teniae coli are important anatomical meaningful landmarks on human colon. This paper proposes a novel method for teniae coli detection on CT colonography. We first unfold the three-dimensional (3D) colon using a reversible projection technique and compute the two-dimensional (2D) height map of the unfolded colon. The height map records the elevation of colon surface relative to the unfolding plane, where haustral folds corresponding to high elevation points and teniae to low elevation points. The teniae coli are detected on the height map and then projected back to the 3D colon. Since teniae are located where the haustral folds meet, we break down the problem by first detecting haustral folds. We apply 2D Gabor filter banks to extract fold features. The maximum response of the filter banks is then selected as the feature image. The fold centers are then identified based on piecewise thresholding on the feature image. Connecting the fold centers yields a path of the folds. Teniae coli are finally extracted as lines running between the fold paths. Experiments were carried out on 7 cases. The proposed method yielded a promising result with an average normalized RMSE of 5.66% and standard deviation of 4.79% of the circumference of the colon.
Learning discriminative features from RGB-D images for gender and ethnicity identification
NASA Astrophysics Data System (ADS)
Azzakhnini, Safaa; Ballihi, Lahoucine; Aboutajdine, Driss
2016-11-01
The development of sophisticated sensor technologies gave rise to an interesting variety of data. With the appearance of affordable devices, such as the Microsoft Kinect, depth-maps and three-dimensional data became easily accessible. This attracted many computer vision researchers seeking to exploit this information in classification and recognition tasks. In this work, the problem of face classification in the context of RGB images and depth information (RGB-D images) is addressed. The purpose of this paper is to study and compare some popular techniques for gender recognition and ethnicity classification to understand how much depth data can improve the quality of recognition. Furthermore, we investigate which combination of face descriptors, feature selection methods, and learning techniques is best suited to better exploit RGB-D images. The experimental results show that depth data improve the recognition accuracy for gender and ethnicity classification applications in many use cases.
Time-efficient high-resolution whole-brain three-dimensional macromolecular proton fraction mapping
Yarnykh, Vasily L.
2015-01-01
Purpose Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of a relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole-brain MPF mapping technique utilizing a minimal possible number of source images for scan time reduction. Methods The described technique is based on replacement of an actually acquired reference image without MT saturation by a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole-brain three-dimensional MPF mapping with isotropic 1.25×1.25×1.25 mm3 voxel size and scan time of 20 minutes. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from 8 healthy subjects. Results Mean MPF values in segmented white and gray matter appeared in close agreement with no significant bias and small within-subject coefficients of variation (<2%). High-resolution MPF maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details including gray matter structures with high iron content. Conclusions Synthetic reference method improves resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. PMID:26102097
Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena
2013-01-01
In this paper we present potential usage of the 3D vision system for registering features of the macerated cranial bones. Applied 3D vision system collects height profiles of the object surface and from that data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of the human calvaria which was used for testing the system. Performed reconstruction visualized the imprints of the dural vascular system, cranial sutures, and the three-layer structure of the cranial bones observed in the cross-section. We figure out that the 3D vision system may deliver data which can enhance estimation of sex from the osteological material.
1998-06-03
The view from NASA's Magellan spacecraft shows part of Galindo V40 quadrangle looking north; Nagavonyi Corona is in the foreground. Coronae are roughly circular, volcanic features believed to form over hot upwellings of magma within the Venusian mantle. http://photojournal.jpl.nasa.gov/catalog/PIA00095
Diffusion accessibility as a method for visualizing macromolecular surface geometry.
Tsai, Yingssu; Holton, Thomas; Yeates, Todd O
2015-10-01
Important three-dimensional spatial features such as depth and surface concavity can be difficult to convey clearly in the context of two-dimensional images. In the area of macromolecular visualization, the computer graphics technique of ray-tracing can be helpful, but further techniques for emphasizing surface concavity can give clearer perceptions of depth. The notion of diffusion accessibility is well-suited for emphasizing such features of macromolecular surfaces, but a method for calculating diffusion accessibility has not been made widely available. Here we make available a web-based platform that performs the necessary calculation by solving the Laplace equation for steady state diffusion, and produces scripts for visualization that emphasize surface depth by coloring according to diffusion accessibility. The URL is http://services.mbi.ucla.edu/DiffAcc/. © 2015 The Protein Society.
Hsieh, K S; Lin, C C; Liu, W S; Chen, F L
1996-01-01
Two-dimensional echocardiography had long been a standard diagnostic modality for congenital heart disease. Further attempts of three-dimensional reconstruction using two-dimensional echocardiographic images to visualize stereotypic structure of cardiac lesions have been successful only recently. So far only very few studies have been done to display three-dimensional anatomy of the heart through two-dimensional image acquisition because such complex procedures were involved. This study introduced a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in the Echo Laboratory here from about 3000 Echo examinations completed. Each image was acquired on-line with specially designed high resolution image grazmber with EKG and respiratory gating technique. Off-line image processing using a window-architectured interactive software package includes construction of 2-D ehcocardiographic pixel to 3-D "voxel" with conversion of orthogonal to rotatory axial system, interpolation, extraction of region of interest, segmentation, shading and, finally, 3D rendering. Three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" is recorded as vedio tape with video interface. The potential application of 3D display of the reconstruction from 2D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3D display was able to improve the diagnostic ability of echocardiography, and clear-cut display of the various congenital cardiac defects and vavular stenosis could be demonstrated. Reinforcement of current techniques will expand future application of 3D display of conventional 2D images.
Wu, Zhi-fang; Lei, Yong-hua; Li, Wen-jie; Liao, Sheng-hui; Zhao, Zi-jin
2013-02-01
To explore an effective method to construct and validate a finite element model of the unilateral cleft lip and palate(UCLP) craniomaxillary complex with sutures, which could be applied in further three-dimensional finite element analysis (FEA). One male patient aged 9 with left complete lip and palate cleft was selected and CT scan was taken at 0.75mm intervals on the skull. The CT data was saved in Dicom format, which was, afterwards, imported into Software Mimics 10.0 to generate a three-dimensional anatomic model. Then Software Geomagic Studio 12.0 was used to match, smoothen and transfer the anatomic model into a CAD model with NURBS patches. Then, 12 circum-maxillary sutures were integrated into the CAD model by Solidworks (2011 version). Finally meshing by E-feature Biomedical Modeler was done and a three-dimensional finite element model with sutures was obtained. A maxillary protraction force (500 g per side, 20° downward and forward from the occlusal plane) was applied. Displacement and stress distribution of some important craniofacial structures were measured and compared with the results of related researches in the literature. A three-dimensional finite element model of UCLP craniomaxillary complex with 12 sutures was established from the CT scan data. This simulation model consisted of 206 753 individual elements with 260 662 nodes, which was a more precise simulation and a better representation of human craniomaxillary complex than the formerly available FEA models. By comparison, this model was proved to be valid. It is an effective way to establish the three-dimensional finite element model of UCLP cranio-maxillary complex with sutures from CT images with the help of the following softwares: Mimics 10.0, Geomagic Studio 12.0, Solidworks and E-feature Biomedical Modeler.
Near-field three-dimensional radar imaging techniques and applications.
Sheen, David; McMakin, Douglas; Hall, Thomas
2010-07-01
Three-dimensional radio frequency imaging techniques have been developed for a variety of near-field applications, including radar cross-section imaging, concealed weapon detection, ground penetrating radar imaging, through-barrier imaging, and nondestructive evaluation. These methods employ active radar transceivers that operate at various frequency ranges covering a wide range, from less than 100 MHz to in excess of 350 GHz, with the frequency range customized for each application. Computational wavefront reconstruction imaging techniques have been developed that optimize the resolution and illumination quality of the images. In this paper, rectilinear and cylindrical three-dimensional imaging techniques are described along with several application results.
Three-dimensional T1rho-weighted MRI at 1.5 Tesla.
Borthakur, Arijitt; Wheaton, Andrew; Charagundla, Sridhar R; Shapiro, Erik M; Regatte, Ravinder R; Akella, Sarma V S; Kneeland, J Bruce; Reddy, Ravinder
2003-06-01
To design and implement a magnetic resonance imaging (MRI) pulse sequence capable of performing three-dimensional T(1rho)-weighted MRI on a 1.5-T clinical scanner, and determine the optimal sequence parameters, both theoretically and experimentally, so that the energy deposition by the radiofrequency pulses in the sequence, measured as the specific absorption rate (SAR), does not exceed safety guidelines for imaging human subjects. A three-pulse cluster was pre-encoded to a three-dimensional gradient-echo imaging sequence to create a three-dimensional, T(1rho)-weighted MRI pulse sequence. Imaging experiments were performed on a GE clinical scanner with a custom-built knee-coil. We validated the performance of this sequence by imaging articular cartilage of a bovine patella and comparing T(1rho) values measured by this sequence to those obtained with a previously tested two-dimensional imaging sequence. Using a previously developed model for SAR calculation, the imaging parameters were adjusted such that the energy deposition by the radiofrequency pulses in the sequence did not exceed safety guidelines for imaging human subjects. The actual temperature increase due to the sequence was measured in a phantom by a MRI-based temperature mapping technique. Following these experiments, the performance of this sequence was demonstrated in vivo by obtaining T(1rho)-weighted images of the knee joint of a healthy individual. Calculated T(1rho) of articular cartilage in the specimen was similar for both and three-dimensional and two-dimensional methods (84 +/- 2 msec and 80 +/- 3 msec, respectively). The temperature increase in the phantom resulting from the sequence was 0.015 degrees C, which is well below the established safety guidelines. Images of the human knee joint in vivo demonstrate a clear delineation of cartilage from surrounding tissues. We developed and implemented a three-dimensional T(1rho)-weighted pulse sequence on a 1.5-T clinical scanner. Copyright 2003 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Lau, Kristen C.; Lee, Hyo Min; Singh, Tanushriya; Maidment, Andrew D. A.
2015-03-01
Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania has an ongoing DE CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. Temporal subtraction of the post-contrast DE images from the pre-contrast DE image is performed to analyze iodine uptake. Our previous work investigated image registration methods to correct for patient motion, enhancing the evaluation of vascular kinetics. In this project we investigate a segmentation algorithm which identifies blood vessels in the breast from our temporal DE subtraction images. Anisotropic diffusion filtering, Gabor filtering, and morphological filtering are used for the enhancement of vessel features. Vessel labeling methods are then used to distinguish vessel and background features successfully. Statistical and clinical evaluations of segmentation accuracy in DE-CBT images are ongoing.
NASA Astrophysics Data System (ADS)
Dobner, Sven; Fallnich, Carsten
2014-02-01
We present the hyperspectral imaging capabilities of in-line interferometric femtosecond stimulated Raman scattering. The beneficial features of this method, namely, the improved signal-to-background ratio compared to other applicable broadband stimulated Raman scattering methods and the simple experimental implementation, allow for a rather fast acquisition of three-dimensional raster-scanned hyperspectral data-sets, which is shown for PMMA beads and a lipid droplet in water as a demonstration. A subsequent application of a principle component analysis displays the chemical selectivity of the method.
Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2014-01-01
Stereovision is an important intraoperative imaging technique that captures the exposed parenchymal surface noninvasively during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the reconstructed cortical surface immediately after dural opening in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained following spatial mapping inversion to produce the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases – 10 of which involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and 8 others of which 4 involved stereo image acquisitions at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7–2.1 mm relative to those determined with the tracked stylus probe. The agreement in feature displacement tracking was also comparable to tracked probe data (difference in displacement magnitude was <1 mm on average). The average magnitude of cortical surface displacement was 7.9 ± 5.7 mm (range 0.3–24.4 mm) in all patient cases with the displacement components along gravity being 5.2 ± 6.0 mm relative to the lateral movement of 2.4 ± 1.6 mm. Thus, our technique appears to be sufficiently accurate and computationally efficiency (typically ~15 s), for applications in the OR. PMID:25077845
Pérez-Beteta, Julián; Luque, Belén; Arregui, Elena; Calvo, Manuel; Borrás, José M; López, Carlos; Martino, Juan; Velasquez, Carlos; Asenjo, Beatriz; Benavides, Manuel; Herruzo, Ismael; Martínez-González, Alicia; Pérez-Romasanta, Luis; Arana, Estanislao; Pérez-García, Víctor M
2016-01-01
Objective: The main objective of this retrospective work was the study of three-dimensional (3D) heterogeneity measures of post-contrast pre-operative MR images acquired with T1 weighted sequences of patients with glioblastoma (GBM) as predictors of clinical outcome. Methods: 79 patients from 3 hospitals were included in the study. 16 3D textural heterogeneity measures were computed including run-length matrix (RLM) features (regional heterogeneity) and co-occurrence matrix (CM) features (local heterogeneity). The significance of the results was studied using Kaplan–Meier curves and Cox proportional hazards analysis. Correlation between the variables of the study was assessed using the Spearman's correlation coefficient. Results: Kaplan–Meyer survival analysis showed that 4 of the 11 RLM features and 4 of the 5 CM features considered were robust predictors of survival. The median survival differences in the most significant cases were of over 6 months. Conclusion: Heterogeneity measures computed on the post-contrast pre-operative T1 weighted MR images of patients with GBM are predictors of survival. Advances in knowledge: Texture analysis to assess tumour heterogeneity has been widely studied. However, most works develop a two-dimensional analysis, focusing only on one MRI slice to state tumour heterogeneity. The study of fully 3D heterogeneity textural features as predictors of clinical outcome is more robust and is not dependent on the selected slice of the tumour. PMID:27319577
Molina, David; Pérez-Beteta, Julián; Luque, Belén; Arregui, Elena; Calvo, Manuel; Borrás, José M; López, Carlos; Martino, Juan; Velasquez, Carlos; Asenjo, Beatriz; Benavides, Manuel; Herruzo, Ismael; Martínez-González, Alicia; Pérez-Romasanta, Luis; Arana, Estanislao; Pérez-García, Víctor M
2016-07-04
The main objective of this retrospective work was the study of three-dimensional (3D) heterogeneity measures of post-contrast pre-operative MR images acquired with T 1 weighted sequences of patients with glioblastoma (GBM) as predictors of clinical outcome. 79 patients from 3 hospitals were included in the study. 16 3D textural heterogeneity measures were computed including run-length matrix (RLM) features (regional heterogeneity) and co-occurrence matrix (CM) features (local heterogeneity). The significance of the results was studied using Kaplan-Meier curves and Cox proportional hazards analysis. Correlation between the variables of the study was assessed using the Spearman's correlation coefficient. Kaplan-Meyer survival analysis showed that 4 of the 11 RLM features and 4 of the 5 CM features considered were robust predictors of survival. The median survival differences in the most significant cases were of over 6 months. Heterogeneity measures computed on the post-contrast pre-operative T 1 weighted MR images of patients with GBM are predictors of survival. Texture analysis to assess tumour heterogeneity has been widely studied. However, most works develop a two-dimensional analysis, focusing only on one MRI slice to state tumour heterogeneity. The study of fully 3D heterogeneity textural features as predictors of clinical outcome is more robust and is not dependent on the selected slice of the tumour.
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
Automatic Reconstruction of Spacecraft 3D Shape from Imagery
NASA Astrophysics Data System (ADS)
Poelman, C.; Radtke, R.; Voorhees, H.
We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
Scene segmentation of natural images using texture measures and back-propagation
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Phatak, Anil; Chatterji, Gano
1993-01-01
Knowledge of the three-dimensional world is essential for many guidance and navigation applications. A sequence of images from an electro-optical sensor can be processed using optical flow algorithms to provide a sparse set of ranges as a function of azimuth and elevation. A natural way to enhance the range map is by interpolation. However, this should be undertaken with care since interpolation assumes continuity of range. The range is continuous in certain parts of the image and can jump at object boundaries. In such situations, the ability to detect homogeneous object regions by scene segmentation can be used to determine regions in the range map that can be enhanced by interpolation. The use of scalar features derived from the spatial gray-level dependence matrix for texture segmentation is explored. Thresholding of histograms of scalar texture features is done for several images to select scalar features which result in a meaningful segmentation of the images. Next, the selected scalar features are used with a neural net to automate the segmentation procedure. Back-propagation is used to train the feed forward neural network. The generalization of the network approach to subsequent images in the sequence is examined. It is shown that the use of multiple scalar features as input to the neural network result in a superior segmentation when compared with a single scalar feature. It is also shown that the scalar features, which are not useful individually, result in a good segmentation when used together. The methodology is applied to both indoor and outdoor images.
An anti-disturbing real time pose estimation method and system
NASA Astrophysics Data System (ADS)
Zhou, Jian; Zhang, Xiao-hu
2011-08-01
Pose estimation relating two-dimensional (2D) images to three-dimensional (3D) rigid object need some known features to track. In practice, there are many algorithms which perform this task in high accuracy, but all of these algorithms suffer from features lost. This paper investigated the pose estimation when numbers of known features or even all of them were invisible. Firstly, known features were tracked to calculate pose in the current and the next image. Secondly, some unknown but good features to track were automatically detected in the current and the next image. Thirdly, those unknown features which were on the rigid and could match each other in the two images were retained. Because of the motion characteristic of the rigid object, the 3D information of those unknown features on the rigid could be solved by the rigid object's pose at the two moment and their 2D information in the two images except only two case: the first one was that both camera and object have no relative motion and camera parameter such as focus length, principle point, and etc. have no change at the two moment; the second one was that there was no shared scene or no matched feature in the two image. Finally, because those unknown features at the first time were known now, pose estimation could go on in the followed images in spite of the missing of known features in the beginning by repeating the process mentioned above. The robustness of pose estimation by different features detection algorithms such as Kanade-Lucas-Tomasi (KLT) feature, Scale Invariant Feature Transform (SIFT) and Speed Up Robust Feature (SURF) were compared and the compact of the different relative motion between camera and the rigid object were discussed in this paper. Graphic Processing Unit (GPU) parallel computing was also used to extract and to match hundreds of features for real time pose estimation which was hard to work on Central Processing Unit (CPU). Compared with other pose estimation methods, this new method can estimate pose between camera and object when part even all known features are lost, and has a quick response time benefit from GPU parallel computing. The method present here can be used widely in vision-guide techniques to strengthen its intelligence and generalization, which can also play an important role in autonomous navigation and positioning, robots fields at unknown environment. The results of simulation and experiments demonstrate that proposed method could suppress noise effectively, extracted features robustly, and achieve the real time need. Theory analysis and experiment shows the method is reasonable and efficient.
NASA Astrophysics Data System (ADS)
Yamaguchi, Atsuko; Ohashi, Takeyoshi; Kawasaki, Takahiro; Inoue, Osamu; Kawada, Hiroki
2013-04-01
A new method for calculating critical dimension (CDs) at the top and bottom of three-dimensional (3D) pattern profiles from a critical-dimension scanning electron microscope (CD-SEM) image, called as "T-sigma method", is proposed and evaluated. Without preparing a library of database in advance, T-sigma can estimate a feature of a pattern sidewall. Furthermore, it supplies the optimum edge-definition (i.e., threshold level for determining edge position from a CDSEM signal) to detect the top and bottom of the pattern. This method consists of three steps. First, two components of line-edge roughness (LER); noise-induced bias (i.e., LER bias) and unbiased component (i.e., bias-free LER) are calculated with set threshold level. Second, these components are calculated with various threshold values, and the threshold-dependence of these two components, "T-sigma graph", is obtained. Finally, the optimum threshold value for the top and the bottom edge detection are given by the analysis of T-sigma graph. T-sigma was applied to CD-SEM images of three kinds of resist-pattern samples. In addition, reference metrology was performed with atomic force microscope (AFM) and scanning transmission electron microscope (STEM). Sensitivity of CD measured by T-sigma to the reference CD was higher than or equal to that measured by the conventional edge definition. Regarding the absolute measurement accuracy, T-sigma showed better results than the conventional definition. Furthermore, T-sigma graphs were calculated from CD-SEM images of two kinds of resist samples and compared with corresponding STEM observation results. Both bias-free LER and LER bias increased as the detected edge point moved from the bottom to the top of the pattern in the case that the pattern had a straight sidewall and a round top. On the other hand, they were almost constant in the case that the pattern had a re-entrant profile. T-sigma will be able to reveal a re-entrant feature. From these results, it is found that T-sigma method can provide rough cross-sectional pattern features and achieve quick, easy and accurate measurements of top and bottom CD.
NASA Astrophysics Data System (ADS)
Yi, Faliu; Moon, Inkyu; Lee, Yeon H.
2015-01-01
Counting morphologically normal cells in human red blood cells (RBCs) is extremely beneficial in the health care field. We propose a three-dimensional (3-D) classification method of automatically determining the morphologically normal RBCs in the phase image of multiple human RBCs that are obtained by off-axis digital holographic microscopy (DHM). The RBC holograms are first recorded by DHM, and then the phase images of multiple RBCs are reconstructed by a computational numerical algorithm. To design the classifier, the three typical RBC shapes, which are stomatocyte, discocyte, and echinocyte, are used for training and testing. Nonmain or abnormal RBC shapes different from the three normal shapes are defined as the fourth category. Ten features, including projected surface area, average phase value, mean corpuscular hemoglobin, perimeter, mean corpuscular hemoglobin surface density, circularity, mean phase of center part, sphericity coefficient, elongation, and pallor, are extracted from each RBC after segmenting the reconstructed phase images by using a watershed transform algorithm. Moreover, four additional properties, such as projected surface area, perimeter, average phase value, and elongation, are measured from the inner part of each cell, which can give significant information beyond the previous 10 features for the separation of the RBC groups; these are verified in the experiment by the statistical method of Hotelling's T-square test. We also apply the principal component analysis algorithm to reduce the dimension number of variables and establish the Gaussian mixture densities using the projected data with the first eight principal components. Consequently, the Gaussian mixtures are used to design the discriminant functions based on Bayesian decision theory. To improve the performance of the Bayes classifier and the accuracy of estimation of its error rate, the leaving-one-out technique is applied. Experimental results show that the proposed method can yield good results for calculating the percentage of each typical normal RBC shape in a reconstructed phase image of multiple RBCs that will be favorable to the analysis of RBC-related diseases. In addition, we show that the discrimination performance for the counting of normal shapes of RBCs can be improved by using 3-D features of an RBC.
NASA Astrophysics Data System (ADS)
Johnson, Kyle; Thurow, Brian; Kim, Taehoon; Blois, Gianluca; Christensen, Kenneth
2016-11-01
Three-dimensional, three-component (3D-3C) measurements were made using a plenoptic camera on the flow around a roughness element immersed in a turbulent boundary layer. A refractive index matched approach allowed whole-field optical access from a single camera to a measurement volume that includes transparent solid geometries. In particular, this experiment measures the flow over a single hemispherical roughness element made of acrylic and immersed in a working fluid consisting of Sodium Iodide solution. Our results demonstrate that plenoptic particle image velocimetry (PIV) is a viable technique to obtaining statistically-significant volumetric velocity measurements even in a complex separated flow. The boundary layer to roughness height-ratio of the flow was 4.97 and the Reynolds number (based on roughness height) was 4.57×103. Our measurements reveal key flow features such as spiraling legs of the shear layer, a recirculation region, and shed arch vortices. Proper orthogonal decomposition (POD) analysis was applied to the instantaneous velocity and vorticity data to extract these features. Supported by the National Science Foundation Grant No. 1235726.
Wang, Yilin; Kanchanawong, Pakorn
2016-12-01
Fluorescence microscopy enables direct visualization of specific biomolecules within cells. However, for conventional fluorescence microscopy, the spatial resolution is restricted by diffraction to ~ 200 nm within the image plane and > 500 nm along the optical axis. As a result, fluorescence microscopy has long been severely limited in the observation of ultrastructural features within cells. The recent development of super resolution microscopy methods has overcome this limitation. In particular, the advent of photoswitchable fluorophores enables localization-based super resolution microscopy, which provides resolving power approaching the molecular-length scale. Here, we describe the application of a three-dimensional super resolution microscopy method based on single-molecule localization microscopy and multiphase interferometry, called interferometric PhotoActivated Localization Microscopy (iPALM). This method provides nearly isotropic resolution on the order of 20 nm in all three dimensions. Protocols for visualizing the filamentous actin cytoskeleton, including specimen preparation and operation of the iPALM instrument, are described here. These protocols are also readily adaptable and instructive for the study of other ultrastructural features in cells.
Three-Dimensional Anatomic Evaluation of the Anterior Cruciate Ligament for Planning Reconstruction
Hoshino, Yuichi; Kim, Donghwi; Fu, Freddie H.
2012-01-01
Anatomic study related to the anterior cruciate ligament (ACL) reconstruction surgery has been developed in accordance with the progress of imaging technology. Advances in imaging techniques, especially the move from two-dimensional (2D) to three-dimensional (3D) image analysis, substantially contribute to anatomic understanding and its application to advanced ACL reconstruction surgery. This paper introduces previous research about image analysis of the ACL anatomy and its application to ACL reconstruction surgery. Crucial bony landmarks for the accurate placement of the ACL graft can be identified by 3D imaging technique. Additionally, 3D-CT analysis of the ACL insertion site anatomy provides better and more consistent evaluation than conventional “clock-face” reference and roentgenologic quadrant method. Since the human anatomy has a complex three-dimensional structure, further anatomic research using three-dimensional imaging analysis and its clinical application by navigation system or other technologies is warranted for the improvement of the ACL reconstruction. PMID:22567310
Who Needs 3D When the Universe Is Flat?
ERIC Educational Resources Information Center
Eriksson, Urban; Linder, Cedric; Airey, John; Redfors, Andreas
2014-01-01
An overlooked feature in astronomy education is the need for students to learn to extrapolate three-dimensionality and the challenges that this may involve. Discerning critical features in the night sky that are embedded in dimensionality is a long-term learning process. Several articles have addressed the usefulness of three-dimensional (3D)…
Biological imaging by soft X-ray diffraction microscopy
NASA Astrophysics Data System (ADS)
Shapiro, David
We have developed a microscope for soft x-ray diffraction imaging of dry or frozen hydrated biological specimens. This lensless imaging system does not suffer from the resolution or specimen thickness limitations that other short wavelength microscopes experience. The microscope, currently situated at beamline 9.0.1 of the Advanced Light Source, can collect diffraction data to 12 nm resolution with 750 eV photons and 17 nm resolution with 520 eV photons. The specimen can be rotated with a precision goniometer through an angle of 160 degrees allowing for the collection of nearly complete three-dimensional diffraction data. The microscope is fully computer controlled through a graphical user interface and a scripting language automates the collection of both two-dimensional and three-dimensional data. Diffraction data from a freeze-dried dwarf yeast cell, Saccharomyces cerevisiae carrying the CLN3-1 mutation, was collected to 12 run resolution from 8 specimen orientations spanning a total rotation of 8 degrees. The diffraction data was phased using the difference map algorithm and the reconstructions provide real space images of the cell to 30 nm resolution from each of the orientations. The agreement of the different reconstructions provides confidence in the recovered, and previously unknown, structure and indicates the three dimensionality of the cell. This work represents the first imaging of the natural complex refractive contrast from a whole unstained cell by the diffraction microscopy method and has achieved a resolution superior to lens based x-ray tomographic reconstructions of similar specimens. Studies of the effects of exposure to large radiation doses were also carried out. It was determined that the freeze-dried cell suffers from an initial collapse, which is followed by a uniform, but slow, shrinkage. This structural damage to the cell is not accompanied by a diminished ability to see small features in the specimen. Preliminary measurements on frozen-hydrated yeast indicate that the frozen specimens do not exhibit these changes even with doses as high as 5 x 109 Gray.
Multilevel image recognition using discriminative patches and kernel covariance descriptor
NASA Astrophysics Data System (ADS)
Lu, Le; Yao, Jianhua; Turkbey, Evrim; Summers, Ronald M.
2014-03-01
Computer-aided diagnosis of medical images has emerged as an important tool to objectively improve the performance, accuracy and consistency for clinical workflow. To computerize the medical image diagnostic recognition problem, there are three fundamental problems: where to look (i.e., where is the region of interest from the whole image/volume), image feature description/encoding, and similarity metrics for classification or matching. In this paper, we exploit the motivation, implementation and performance evaluation of task-driven iterative, discriminative image patch mining; covariance matrix based descriptor via intensity, gradient and spatial layout; and log-Euclidean distance kernel for support vector machine, to address these three aspects respectively. To cope with often visually ambiguous image patterns for the region of interest in medical diagnosis, discovery of multilabel selective discriminative patches is desired. Covariance of several image statistics summarizes their second order interactions within an image patch and is proved as an effective image descriptor, with low dimensionality compared with joint statistics and fast computation regardless of the patch size. We extensively evaluate two extended Gaussian kernels using affine-invariant Riemannian metric or log-Euclidean metric with support vector machines (SVM), on two medical image classification problems of degenerative disc disease (DDD) detection on cortical shell unwrapped CT maps and colitis detection on CT key images. The proposed approach is validated with promising quantitative results on these challenging tasks. Our experimental findings and discussion also unveil some interesting insights on the covariance feature composition with or without spatial layout for classification and retrieval, and different kernel constructions for SVM. This will also shed some light on future work using covariance feature and kernel classification for medical image analysis.
PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.
Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi
2017-08-01
The process of medical image fusion is combining two or more medical images such as Magnetic Resonance Image (MRI) and Positron Emission Tomography (PET) and mapping them to a single image as fused image. So purpose of our study is assisting physicians to diagnose and treat the diseases in the least of the time. We used Magnetic Resonance Image (MRI) and Positron Emission Tomography (PET) as input images, so fused them based on combination of two dimensional Hilbert transform (2-D HT) and Intensity Hue Saturation (IHS) method. Evaluation metrics that we apply are Discrepancy (D k ) as an assessing spectral features and Average Gradient (AG k ) as an evaluating spatial features and also Overall Performance (O.P) to verify properly of the proposed method. In this paper we used three common evaluation metrics like Average Gradient (AG k ) and the lowest Discrepancy (D k ) and Overall Performance (O.P) to evaluate the performance of our method. Simulated and numerical results represent the desired performance of proposed method. Since that the main purpose of medical image fusion is preserving both spatial and spectral features of input images, so based on numerical results of evaluation metrics such as Average Gradient (AG k ), Discrepancy (D k ) and Overall Performance (O.P) and also desired simulated results, it can be concluded that our proposed method can preserve both spatial and spectral features of input images. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin
2018-04-01
The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.
Hayashi, K; Hoeksema, J T; Liu, Y; Bobra, M G; Sun, X D; Norton, A A
Time-dependent three-dimensional magnetohydrodynamics (MHD) simulation modules are implemented at the Joint Science Operation Center (JSOC) of the Solar Dynamics Observatory (SDO). The modules regularly produce three-dimensional data of the time-relaxed minimum-energy state of the solar corona using global solar-surface magnetic-field maps created from Helioseismic and Magnetic Imager (HMI) full-disk magnetogram data. With the assumption of a polytropic gas with specific-heat ratio of 1.05, three types of simulation products are currently generated: i) simulation data with medium spatial resolution using the definitive calibrated synoptic map of the magnetic field with a cadence of one Carrington rotation, ii) data with low spatial resolution using the definitive version of the synchronic frame format of the magnetic field, with a cadence of one day, and iii) low-resolution data using near-real-time (NRT) synchronic format of the magnetic field on a daily basis. The MHD data available in the JSOC database are three-dimensional, covering heliocentric distances from 1.025 to 4.975 solar radii, and contain all eight MHD variables: the plasma density, temperature, and three components of motion velocity, and three components of the magnetic field. This article describes details of the MHD simulations as well as the production of the input magnetic-field maps, and details of the products available at the JSOC database interface. To assess the merits and limits of the model, we show the simulated data in early 2011 and compare with the actual coronal features observed by the Atmospheric Imaging Assembly (AIA) and the near-Earth in-situ data.
AstroBlend: An astrophysical visualization package for Blender
NASA Astrophysics Data System (ADS)
Naiman, J. P.
2016-04-01
The rapid growth in scale and complexity of both computational and observational astrophysics over the past decade necessitates efficient and intuitive methods for examining and visualizing large datasets. Here, I present AstroBlend, an open-source Python library for use within the three dimensional modeling software, Blender. While Blender has been a popular open-source software among animators and visual effects artists, in recent years it has also become a tool for visualizing astrophysical datasets. AstroBlend combines the three dimensional capabilities of Blender with the analysis tools of the widely used astrophysical toolset, yt, to afford both computational and observational astrophysicists the ability to simultaneously analyze their data and create informative and appealing visualizations. The introduction of this package includes a description of features, work flow, and various example visualizations. A website - www.astroblend.com - has been developed which includes tutorials, and a gallery of example images and movies, along with links to downloadable data, three dimensional artistic models, and various other resources.
Face recognition by applying wavelet subband representation and kernel associative memory.
Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam
2004-01-01
In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible training pair of faces sample and then mapped the high-dimensional feature space back to input space. Our scheme using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages has been proven. By associative memory, all the prototypical faces of one particular person are used to reconstruct themselves and the reconstruction error for a probe face image is used to decide if the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided and our proposed scheme offers better recognition accuracy on all of the face datasets.
Learning-based saliency model with depth information.
Ma, Chih-Yao; Hang, Hsueh-Ming
2015-01-01
Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
Spottiswoode, B S; van den Heever, D J; Chang, Y; Engelhardt, S; Du Plessis, S; Nicolls, F; Hartzenberg, H B; Gretschel, A
2013-01-01
Neurosurgeons regularly plan their surgery using magnetic resonance imaging (MRI) images, which may show a clear distinction between the area to be resected and the surrounding healthy brain tissue depending on the nature of the pathology. However, this distinction is often unclear with the naked eye during the surgical intervention, and it may be difficult to infer depth and an accurate volumetric interpretation from a series of MRI image slices. In this work, MRI data are used to create affordable patient-specific 3-dimensional (3D) scale models of the brain which clearly indicate the location and extent of a tumour relative to brain surface features and important adjacent structures. This is achieved using custom software and rapid prototyping. In addition, functionally eloquent areas identified using functional MRI are integrated into the 3D models. Preliminary in vivo results are presented for 2 patients. The accuracy of the technique was estimated both theoretically and by printing a geometrical phantom, with mean dimensional errors of less than 0.5 mm observed. This may provide a practical and cost-effective tool which can be used for training, and during neurosurgical planning and intervention. Copyright © 2013 S. Karger AG, Basel.
Three-dimensional coherent X-ray diffractive imaging of whole frozen-hydrated cells
Rodriguez, Jose A.; Xu, Rui; Chen, Chien-Chun; Huang, Zhifeng; Jiang, Huaidong; Chen, Allan L.; Raines, Kevin S.; Pryor Jr, Alan; Nam, Daewoong; Wiegart, Lutz; Song, Changyong; Madsen, Anders; Chushkin, Yuriy; Zontone, Federico; Bradley, Peter J.; Miao, Jianwei
2015-01-01
A structural understanding of whole cells in three dimensions at high spatial resolution remains a significant challenge and, in the case of X-rays, has been limited by radiation damage. By alleviating this limitation, cryogenic coherent diffractive imaging (cryo-CDI) can in principle be used to bridge the important resolution gap between optical and electron microscopy in bio-imaging. Here, the first experimental demonstration of cryo-CDI for quantitative three-dimensional imaging of whole frozen-hydrated cells using 8 keV X-rays is reported. As a proof of principle, a tilt series of 72 diffraction patterns was collected from a frozen-hydrated Neospora caninum cell and the three-dimensional mass density of the cell was reconstructed and quantified based on its natural contrast. This three-dimensional reconstruction reveals the surface and internal morphology of the cell, including its complex polarized sub-cellular structure. It is believed that this work represents an experimental milestone towards routine quantitative three-dimensional imaging of whole cells in their natural state with spatial resolutions in the tens of nanometres. PMID:26306199
Three-dimensional coherent X-ray diffractive imaging of whole frozen-hydrated cells
Rodriguez, Jose A.; Xu, Rui; Chen, Chien -Chun; ...
2015-09-01
Here, a structural understanding of whole cells in three dimensions at high spatial resolution remains a significant challenge and, in the case of X-rays, has been limited by radiation damage. By alleviating this limitation, cryogenic coherent diffractive imaging (cryo-CDI) can in principle be used to bridge the important resolution gap between optical and electron microscopy in bio-imaging. Here, the first experimental demonstration of cryo-CDI for quantitative three-dimensional imaging of whole frozen-hydrated cells using 8 Kev X-rays is reported. As a proof of principle, a tilt series of 72 diffraction patterns was collected from a frozen-hydrated Neospora caninum cell and themore » three-dimensional mass density of the cell was reconstructed and quantified based on its natural contrast. This three-dimensional reconstruction reveals the surface and internal morphology of the cell, including its complex polarized sub-cellular structure. Finally, it is believed that this work represents an experimental milestone towards routine quantitative three-dimensional imaging of whole cells in their natural state with spatial resolutions in the tens of nanometres.« less
Three-dimensional coherent X-ray diffractive imaging of whole frozen-hydrated cells.
Rodriguez, Jose A; Xu, Rui; Chen, Chien-Chun; Huang, Zhifeng; Jiang, Huaidong; Chen, Allan L; Raines, Kevin S; Pryor, Alan; Nam, Daewoong; Wiegart, Lutz; Song, Changyong; Madsen, Anders; Chushkin, Yuriy; Zontone, Federico; Bradley, Peter J; Miao, Jianwei
2015-09-01
A structural understanding of whole cells in three dimensions at high spatial resolution remains a significant challenge and, in the case of X-rays, has been limited by radiation damage. By alleviating this limitation, cryogenic coherent diffractive imaging (cryo-CDI) can in principle be used to bridge the important resolution gap between optical and electron microscopy in bio-imaging. Here, the first experimental demonstration of cryo-CDI for quantitative three-dimensional imaging of whole frozen-hydrated cells using 8 keV X-rays is reported. As a proof of principle, a tilt series of 72 diffraction patterns was collected from a frozen-hydrated Neospora caninum cell and the three-dimensional mass density of the cell was reconstructed and quantified based on its natural contrast. This three-dimensional reconstruction reveals the surface and internal morphology of the cell, including its complex polarized sub-cellular structure. It is believed that this work represents an experimental milestone towards routine quantitative three-dimensional imaging of whole cells in their natural state with spatial resolutions in the tens of nanometres.
Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.
Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi
2014-10-20
We propose a method to capture light ray field of three-dimensional scene using focal plane sweeping. Multiple images are captured using a usual camera at different focal distances, spanning the three-dimensional scene. The captured images are then back-projected to four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either using digital processing or optical reconstruction using various three-dimensional display techniques including integral imaging, layered display, and holography.
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)
1999-01-01
A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom The stereo imaging velocimetry technique of the present invention provides a full-field. quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
Shin, Kang-Jae; Gil, Young-Chun; Lee, Shin-Hyo; Kim, Jeong-Nam; Yoo, Ja-Young; Kim, Soon-Heum; Choi, Hyun-Gon; Shin, Hyun Jin; Koh, Ki-Seok; Song, Wu-Chul
2017-01-01
The aim of the present study was to assess normal eyeball protrusion from the orbital rim using two- and three-dimensional images and demonstrate the better suitability of CT images for assessment of exophthalmos. The facial computed tomographic (CT) images of Korean adults were acquired in sagittal and transverse views. The CT images were used in reconstructing three-dimensional volume of faces using computer software. The protrusion distances from orbital rims and the diameters of eyeballs were measured in the two views of the CT image and three-dimensional volume of the face. Relative exophthalmometry was calculated by the difference in protrusion distance between the right and left sides. The eyeball protrusion was 4.9 and 12.5 mm in sagittal and transverse views, respectively. The protrusion distances were 2.9 mm in the three-dimensional volume of face. There were no significant differences between right and left sides in the degree of protrusion, and the difference was within 2 mm in more than 90% of the subjects. The results of the present study will provide reliable criteria for precise diagnosis and postoperative monitoring using CT imaging of diseases such as thyroid-associated ophthalmopathy and orbital tumors.
Three-dimensional confocal microscopy of the living cornea and ocular lens
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1991-07-01
The three-dimensional reconstruction of the optic zone of the cornea and the ocular crystalline lens has been accomplished using confocal microscopy and volume rendering computer techniques. A laser scanning confocal microscope was used in the reflected light mode to obtain the two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon ion laser with a 488 nm wavelength. The microscope objective was a Leitz X25, NA 0.6 water immersion lens. The 400 micron thick cornea was optically sectioned into 133 three micron sections. The semi-transparent cornea and the in-situ ocular lens was visualized as high resolution, high contrast two-dimensional images. The structures observed in the cornea include: superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, basal lamina, nerve plexus, nerve fibers, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in- situ ocular lens include: lens capsule, lens epithelial cells, and individual lens fibers. The three-dimensional data sets of the cornea and the ocular lens were reconstructed in the computer using volume rendering techniques. Stereo pairs were also created of the two- dimensional ocular images for visualization. The stack of two-dimensional images was reconstructed into a three-dimensional object using volume rendering techniques. This demonstration of the three-dimensional visualization of the intact, enucleated eye provides an important step toward quantitative three-dimensional morphometry of the eye. The important aspects of three-dimensional reconstruction are discussed.
Three-dimensional imaging modalities in endodontics
Mao, Teresa
2014-01-01
Recent research in endodontics has highlighted the need for three-dimensional imaging in the clinical arena as well as in research. Three-dimensional imaging using computed tomography (CT) has been used in endodontics over the past decade. Three types of CT scans have been studied in endodontics, namely cone-beam CT, spiral CT, and peripheral quantitative CT. Contemporary endodontics places an emphasis on the use of cone-beam CT for an accurate diagnosis of parameters that cannot be visualized on a two-dimensional image. This review discusses the role of CT in endodontics, pertaining to its importance in the diagnosis of root canal anatomy, detection of peri-radicular lesions, diagnosis of trauma and resorption, presurgical assessment, and evaluation of the treatment outcome. PMID:25279337
The nature of (sub-)micrometre cometary dust particles detected with MIDAS
NASA Astrophysics Data System (ADS)
Mannel, T.; Bentley, M. S.; Torkar, K.; Jeszenszky, H.; Romstedt, J.; Schmied, R.
2015-10-01
The MIDAS Atomic Force Microscope (AFM) onboard Rosetta collects dust particles and produces three-dimensional images with nano- to micrometre resolution. To date, several tens of particles have been detected, allowing determination of their properties at the smallest scale. The key features will be presented, including the particle size, their fragile character, and their morphology. These findings will be compared with the results of other Rosetta dust experiments.
In Situ Imaging during Compression of Plastic Bonded Explosives for Damage Modeling
NASA Astrophysics Data System (ADS)
Yeager, John; Manner, Virginia; Patterson, Brian; Walters, David; Cordes, Nikolaus; Henderson, Kevin; Tappan, Bryce; Luscher, Darby
2017-06-01
The microstructure of plastic bonded explosives (PBXs) is known to influence behavior during insults such as deformation, heating or initiation to detonation. Obtaining three-dimensional microstructural data can be difficult due in part to fragility of the material and small feature size. X-ray computed tomography (CT) is an ideal characterization technique but the explosive crystals and binder in formulations such as PBX 9501 do not have sufficient x-ray contrast to differentiate between the components. Here, we have formulated several PBXs using octahydro-1,3,5,7-tetranitro-1,3,5,7- tetrazocine (HMX) crystals and low-density binder systems. The full three-dimensional microstructure of these samples has been characterized using microscale CT during uniaxial mechanical compression in an interrupted in situ modality. The rigidity of the binder was observed to significantly influence fracture, crystal-binder delamination, and material flow. Additionally, the segmented, 3D images were meshed for finite element simulation. Initial results of the mesoscale modeling exhibit qualitatively similar delamination. Los Alamos National Laboratory - LDRD.
Boulanger, Pierre; Flores-Mir, Carlos; Ramirez, Juan F; Mesa, Elizabeth; Branch, John W
2009-01-01
The measurements from registered images obtained from Cone Beam Computed Tomography (CBCT) and a photogrammetric sensor are used to track three-dimensional shape variations of orthodontic patients before and after their treatments. The methodology consists of five main steps: (1) the patient's bone and skin shapes are measured in 3D using the fusion of images from a CBCT and a photogrammetric sensor. (2) The bone shape is extracted from the CBCT data using a standard marching cube algorithm. (3) The bone and skin shape measurements are registered using titanium targets located on the head of the patient. (4) Using a manual segmentation technique the head and lower jaw geometry are extracted separately to deal with jaw motion at the different record visits. (5) Using natural features of the upper head the two datasets are then registered with each other and then compared to evaluate bone, teeth, and skin displacements before and after treatments. This procedure is now used at the University of Alberta orthodontic clinic.
Gong, Yuanzheng; Seibel, Eric J.
2017-01-01
Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities along with decreasing costs has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most of the current optical approaches are eligible for exterior instead of internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Seibel, Eric J.
2017-01-01
Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities along with decreasing costs has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most of the current optical approaches are eligible for exterior instead of internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection.
NASA Astrophysics Data System (ADS)
Huang, Xin; Chen, Huijun; Gong, Jianya
2018-01-01
Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed ADF features can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicated the superiority of the proposed ADFs in distinguishing between the spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
Model-based vision for space applications
NASA Technical Reports Server (NTRS)
Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald
1992-01-01
This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.
Advanced Interactive Display Formats for Terminal Area Traffic Control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Shaviv, G. E.
1999-01-01
This research project deals with an on-line dynamic method for automated viewing parameter management in perspective displays. Perspective images are optimized such that a human observer will perceive relevant spatial geometrical features with minimal errors. In order to compute the errors at which observers reconstruct spatial features from perspective images, a visual spatial-perception model was formulated. The model was employed as the basis of an optimization scheme aimed at seeking the optimal projection parameter setting. These ideas are implemented in the context of an air traffic control (ATC) application. A concept, referred to as an active display system, was developed. This system uses heuristic rules to identify relevant geometrical features of the three-dimensional air traffic situation. Agile, on-line optimization was achieved by a specially developed and custom-tailored genetic algorithm (GA), which was to deal with the multi-modal characteristics of the objective function and exploit its time-evolving nature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cover, Keith S.; Lagerwaard, Frank J.; Senan, Suresh
2006-03-01
Purpose: Four-dimensional computerized tomography scans (4DCT) enable intrafractional motion to be determined. Because more than 1500 images can be generated with each 4DCT study, tools for efficient data visualization and evaluation are needed. We describe the use of color intensity projections (CIP) for visualizing mobility. Methods: Four-dimensional computerized tomography images of each patient slice were combined into a CIP composite image. Pixels largely unchanged over the component images appear unchanged in the CIP image. However, pixels whose intensity changes over the phases of the 4DCT appear in the CIP image as colored pixels, and the hue encodes the percentage ofmore » time the tissue was in each location. CIPs of 18 patients were used to study tumor and surrogate markers, namely the diaphragm and an abdominal marker block. Results: Color intensity projections permitted mobility of high-contrast features to be quickly visualized and measured. In three selected expiratory phases ('gating phases') that were reviewed in the sagittal plane, gating would have reduced mean tumor mobility from 6.3 {+-} 2.0 mm to 1.4 {+-} 0.5 mm. Residual tumor mobility in gating phases better correlated with residual mobility of the marker block than that of the diaphragm. Conclusion: CIPs permit immediate visualization of mobility in 4DCT images and simplify the selection of appropriate surrogates for gated radiotherapy.« less
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image processing which has attracted a considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequences images dedicated to person identification whose face is partly occulted. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), it combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with neural network. We performed the feature extraction task on the eyes and the nose images separately then a Multi-Layers Perceptron classifier is used. Compared to the whole face, the results of simulation are in favor of the facial parts in terms of memory capacity and recognition (99.41% for the eyes part, 98.16% for the nose part and 97.25 % for the whole face).
Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data
NASA Astrophysics Data System (ADS)
Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun
2014-11-01
Ground-based LiDAR is one of the most effective city modeling tools at present, which has been widely used for three-dimensional reconstruction of outdoor objects. However, as for indoor objects, there are some technical bottlenecks due to lack of GPS signal. In this paper, based on the high-precision indoor point cloud data which was obtained by LiDAR, an international advanced indoor mobile measuring equipment, high -precision model was fulfilled for all indoor ancillary facilities. The point cloud data we employed also contain color feature, which is extracted by fusion with CCD images. Thus, it has both space geometric feature and spectral information which can be used for constructing objects' surface and restoring color and texture of the geometric model. Based on Autodesk CAD platform and with help of PointSence plug, three-dimensional reconstruction of indoor whole elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, then different types of indoor point cloud data was processed, including data format conversion, outline extracting and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor was completed. Experiment results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for indoor whole elements' 3-d reconstruction and that methods proposed in this paper can efficiently realize the 3 -d construction of indoor whole elements. Moreover, the modeling precision could be controlled within 5 cm, which was proved to be a satisfactory result.
NASA Astrophysics Data System (ADS)
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. Through the experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications: spectral unmixing and classification for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction.
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos
2014-01-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564
Crack Modelling for Radiography
NASA Astrophysics Data System (ADS)
Chady, T.; Napierała, L.
2010-02-01
In this paper, possibility of creation of three-dimensional crack models, both random type and based on real-life radiographic images is discussed. Method for storing cracks in a number of two-dimensional matrices, as well algorithm for their reconstruction into three-dimensional objects is presented. Also the possibility of using iterative algorithm for matching simulated images of cracks to real-life radiographic images is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Rui; Singh, Sudhanshu S.; Chawla, Nikhilesh
2016-08-15
We present a robust method for automating removal of “segregation artifacts” in segmented tomographic images of three-dimensional heterogeneous microstructures. The objective of this method is to accurately identify and separate discrete features in composite materials where limitations in imaging resolution lead to spurious connections near close contacts. The method utilizes betweenness centrality, a measure of the importance of a node in the connectivity of a graph network, to identify voxels that create artificial bridges between otherwise distinct geometric features. To facilitate automation of the algorithm, we develop a relative centrality metric to allow for the selection of a threshold criterionmore » that is not sensitive to inclusion size or shape. As a demonstration of the effectiveness of the algorithm, we report on the segmentation of a 3D reconstruction of a SiC particle reinforced aluminum alloy, imaged by X-ray synchrotron tomography.« less
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
NASA Astrophysics Data System (ADS)
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
According to the direct exposure measurements from flash radiographic image, a compressive sensing-based method for three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on a compressive sensing-based technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful capacity in graphic. (2) The forward projection matrix rather than Gauss matrix is constructed by the visualization tool generator. (3) Fourier transform and Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed and numerical results in pseudo-sine absorption problem, two-cube problem and two-cylinder problem when using compressive sensing-based solver agree well with the reference value.
NASA Astrophysics Data System (ADS)
Sheng, Yehua; Zhang, Ka; Ye, Chun; Liang, Cheng; Li, Jian
2008-04-01
Considering the problem of automatic traffic sign detection and recognition in stereo images captured under motion conditions, a new algorithm for traffic sign detection and recognition based on features and probabilistic neural networks (PNN) is proposed in this paper. Firstly, global statistical color features of left image are computed based on statistics theory. Then for red, yellow and blue traffic signs, left image is segmented to three binary images by self-adaptive color segmentation method. Secondly, gray-value projection and shape analysis are used to confirm traffic sign regions in left image. Then stereo image matching is used to locate the homonymy traffic signs in right image. Thirdly, self-adaptive image segmentation is used to extract binary inner core shapes of detected traffic signs. One-dimensional feature vectors of inner core shapes are computed by central projection transformation. Fourthly, these vectors are input to the trained probabilistic neural networks for traffic sign recognition. Lastly, recognition results in left image are compared with recognition results in right image. If results in stereo images are identical, these results are confirmed as final recognition results. The new algorithm is applied to 220 real images of natural scenes taken by the vehicle-borne mobile photogrammetry system in Nanjing at different time. Experimental results show a detection and recognition rate of over 92%. So the algorithm is not only simple, but also reliable and high-speed on real traffic sign detection and recognition. Furthermore, it can obtain geometrical information of traffic signs at the same time of recognizing their types.
Image intensifier-based volume tomographic angiography imaging system: system evaluation
NASA Astrophysics Data System (ADS)
Ning, Ruola; Wang, Xiaohui; Shen, Jianjun; Conover, David L.
1995-05-01
An image intensifier-based rotational volume tomographic angiography imaging system has been constructed. The system consists of an x-ray tube and an image intensifier that are separately mounted on a gantry. This system uses an image intensifier coupled to a TV camera as a two-dimensional detector so that a set of two-dimensional projections can be acquired for a direct three-dimensional reconstruction (3D). This system has been evaluated with two phantoms: a vascular phantom and a monkey head cadaver. One hundred eighty projections of each phantom were acquired with the system. A set of three-dimensional images were directly reconstructed from the projection data. The experimental results indicate that good imaging quality can be obtained with this system.
Real-time high dynamic range laser scanning microscopy
NASA Astrophysics Data System (ADS)
Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.
2016-04-01
In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.
Laser interference fringe tomography: a novel 3D imaging technique for pathology
NASA Astrophysics Data System (ADS)
Kazemzadeh, Farnoud; Haylock, Thomas M.; Chifman, Lev M.; Hajian, Arsen R.; Behr, Bradford B.; Cenko, Andrew T.; Meade, Jeff T.; Hendrikse, Jan
2011-03-01
Laser interference fringe tomography (LIFT) is within the class of optical imaging devices designed for in vivo and ex vivo medical imaging applications. LIFT is a very simple and cost-effective three-dimensional imaging device with performance rivaling some of the leading three-dimensional imaging devices used for histology. Like optical coherence tomography (OCT), it measures the reflectivity as a function of depth within a sample and is capable of producing three-dimensional images from optically scattering media. LIFT has the potential capability to produce high spectral resolution, full-color images. The optical design of LIFT along with the planned iterations for improvements and miniaturization are presented and discussed in addition to the theoretical concepts and preliminary imaging results of the device.
[Virtual endoscopy with a volumetric reconstruction technic: the technical aspects].
Pavone, P; Laghi, A; Panebianco, V; Catalano, C; Giura, R; Passariello, R
1998-06-01
We analyze the peculiar technical features of virtual endoscopy obtained with volume rendering. Our preliminary experience is based on virtual endoscopy images from volumetric data acquired with spiral CT (Siemens, Somatom Plus 4) using acquisition protocols standardized for different anatomic areas. Images are reformatted at the CT console, to obtain 1 mm thick contiguous slices, and transferred in DICOM format to an O2 workstation (Silicon Graphics, Mountain View CA, USA) with processor speed of 180 Mhz, 256 Mbyte RAM memory and 4.1 Gbyte hard disk. The software is Vitrea 1.0 (Vital Images, Fairfield, Iowa), running on a Unix platform. Image output is obtained through the Ethernet network to a Macintosh computer and a thermic printer (Kodak 8600 XLS). Diagnostic quality images were obtained in all the cases. Fly-through in the airways allowed correct evaluation of the main bronchi and of the origin of segmentary bronchi. In the vascular district, both carotid strictures and abdominal aortic aneurysms were depicted, with the same accuracy as with conventional reconstruction techniques. In the colon studies, polypoid lesions were correctly depicted in all the cases, with good correlation with endoscopic and double-contrast barium enema findings. In a case of lipoma of the ascending colon, virtual endoscopy allowed to study the colon both cranially and caudally to the lesion. The simultaneous evaluation of axial CT images permitted to characterize the lesion correctly on the basis of its density values. The peculiar feature of volume rendering is the use of the whole information inside the imaging volume to reconstruct three-dimensional images; no threshold values are used and no data are lost as opposite to conventional image reconstruction techniques. The different anatomic structures are visualized modifying the reciprocal opacities, showing the structures of no interest as translucent. The modulation of different opacities is obtained modifying the shape of the opacity curve, either using pre-set curves or in a completely independent way. Other technical features of volume rendering are the perspective evaluation of the objects, color and lighting. In conclusion, volume rendering is a promising technique to elaborate three-dimensional images, offering very realistic endoscopic views. At present, the main limitation is represented by the need of powerful and high-cost workstations.
Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis
Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana
2012-01-01
Objectives Use three-dimensional (3D) facial laser scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 Control) recruited from two study sites (Cape Town, South Africa and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features were identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153
A Higher-Order Neural Network Design for Improving Segmentation Performance in Medical Image Series
NASA Astrophysics Data System (ADS)
Selvi, Eşref; Selver, M. Alper; Güzeliş, Cüneyt; Dicle, Oǧuz
2014-03-01
Segmentation of anatomical structures from medical image series is an ongoing field of research. Although, organs of interest are three-dimensional in nature, slice-by-slice approaches are widely used in clinical applications because of their ease of integration with the current manual segmentation scheme. To be able to use slice-by-slice techniques effectively, adjacent slice information, which represents likelihood of a region to be the structure of interest, plays critical role. Recent studies focus on using distance transform directly as a feature or to increase the feature values at the vicinity of the search area. This study presents a novel approach by constructing a higher order neural network, the input layer of which receives features together with their multiplications with the distance transform. This allows higher-order interactions between features through the non-linearity introduced by the multiplication. The application of the proposed method to 9 CT datasets for segmentation of the liver shows higher performance than well-known higher order classification neural networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan Zhen; Zhang Qizhi; Sobel, Eric S.
Purpose: The aim of this study was to investigate the potential use of multimodality functional imaging techniques to identify the quantitative optical findings that can be used to distinguish between osteoarthritic and normal finger joints. Methods: Between 2006 and 2009, the distal interphalangeal finger joints from 40 female subjects including 22 patients and 18 healthy controls were examined clinically and scanned by a hybrid imaging system. This system integrated x-ray tomosynthetic setup with a diffuse optical imaging system. Optical absorption and scattering images were recovered based on a regularization-based hybrid reconstruction algorithm. A receiver operating characteristic curve was used tomore » calculate the statistical significance of specific optical features obtained from osteoarthritic and healthy joints groups. Results: The three-dimensional optical and x-ray images captured made it possible to quantify optical properties and joint space width of finger joints. Based on the recovered optical absorption and scattering parameters, the authors observed statistically significant differences between healthy and osteoarthritis finger joints. Conclusions: The statistical results revealed that sensitivity and specificity values up to 92% and 100%, respectively, can be achieved when optical properties of joint tissues were used as classifiers. This suggests that these optical imaging parameters are possible indicators for diagnosing osteoarthritis and monitoring its progression.« less
Lu, J; Wang, L; Zhang, Y C; Tang, H T; Xia, Z F
2017-10-20
Objective: To validate the clinical effect of three dimensional human body scanning system BurnCalc developed by our research team in the evaluation of burn wound area. Methods: A total of 48 burn patients treated in the outpatient department of our unit from January to June 2015, conforming to the study criteria, were enrolled in. For the first 12 patients, one wound on the limbs or torso was selected from each patient. The stability of the system was tested by 3 attending physicians using three dimensional human body scanning system BurnCalc to measure the area of wounds individually. For the following 36 patients, one wound was selected from each patient, including 12 wounds on limbs, front torso, and side torso, respectively. The area of wounds was measured by the same attending physician using transparency tracing method, National Institutes of Health (NIH) Image J method, and three dimensional human body scanning system BurnCalc, respectively. The time for getting information of 36 wounds by three methods was recorded by stopwatch. The stability among the testers was evaluated by the intra-class correlation coefficient (ICC). Data were processed with randomized blocks analysis of variance and Bonferroni test. Results: (1) Wound area of patients measured by three physicians using three dimensional human body scanning system BurnCalc was (122±95), (121±95), and (123±96) cm(2,) respectively, and there was no statistically significant difference among them ( F =1.55, P >0.05). The ICC among 3 physicians was 0.999. (2) The wound area of limbs of patients measured by transparency tracing method, NIH Image J method, and three dimensional human body scanning system BurnCalc was (84±50), (76±46), and (84±49) cm(2,) respectively. There was no statistically significant difference in the wound area of limbs of patients measured by transparency tracing method and three dimensional human body scanning system BurnCalc ( P >0.05). The wound area of limbs of patients measured by NIH Image J method was smaller than that measured by transparency tracing method and three dimensional human body scanning system BurnCalc (with P values below 0.05). There was no statistically significant difference in the wound area of front torso of patients measured by transparency tracing method, NIH Image J method, and three dimensional human body scanning system BurnCalc ( F =0.33, P >0.05). The wound area of side torso of patients measured by transparency tracing method, NIH Image J method, and three dimensional human body scanning system BurnCalc was (169±88), (150±80), and (169±86) cm(2,) respectively. There was no statistically significant difference in the wound area of side torso of patients measured by transparency tracing method and three dimensional human body scanning system BurnCalc ( P >0.05). The wound area of side torso of patients measured by NIH Image J method was smaller than that measured by transparency tracing method and three dimensional human body scanning system BurnCalc (with P values below 0.05). (3) The time for getting information of wounds of patients by transparency tracing method, NIH Image J method, and three dimensional human body scanning system BurnCalc was (77±14), (10±3), and (9±3) s, respectively. The time for getting information of wounds of patients by transparency tracing method was longer than that by NIH Image J method and three dimensional human body scanning system BurnCalc (with P values below 0.05). The time for getting information of wounds of patients by three dimensional human body scanning system BurnCalc was close to that by NIH Image J method ( P >0.05). Conclusions: The three dimensional human body scanning system BurnCalc is stable and can accurately evaluate the wound area on limbs and torso of burn patients.
An improved ASIFT algorithm for indoor panorama image matching
NASA Astrophysics Data System (ADS)
Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong
2017-07-01
The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly more common in such applications due to their advantages to capture the complete environment in one single image with large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm to implement these functions. Compared with the SIFT algorithm, more feature points can be generated and the matching accuracy of ASIFT algorithm is higher, even for the panoramic images with obvious distortions. However, the algorithm is really time-consuming because of complex operations and performs not very well for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation with the images to the only tilt affine transformation. Finally, the results are re-projected to the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature points extraction and matching, but also greatly reduce the computing time.
NASA Astrophysics Data System (ADS)
Lushnikov, D. S.; Zherdev, A. Y.; Odinokov, S. B.; Markin, V. V.; Smirnov, A. V.
2017-05-01
Visual security elements used in color holographic stereograms - three-dimensional colored security holograms - and methods their production is describes in this article. These visual security elements include color micro text, color-hidden image, the horizontal and vertical flip - flop effects by change color and image. The article also presents variants of optical systems that allow record the visual security elements as part of the holographic stereograms. The methods for solving of the optical problems arising in the recording visual security elements are presented. Also noted perception features of visual security elements for verification of security holograms by using these elements. The work was partially funded under the Agreement with the RF Ministry of Education and Science № 14.577.21.0197, grant RFMEFI57715X0197.
An Upgrade of the Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software
NASA Technical Reports Server (NTRS)
Mason, Michelle L.; Rufer, Shann J.
2015-01-01
The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) code is used at NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used to design thermal protection systems to mitigate the risks due to the aeroheating loads on hypersonic vehicles, such as re-entry vehicles during descent and landing procedures. This code was originally written in the PV-WAVE programming language to analyze phosphor thermography data from the two-color, relativeintensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the code was migrated to MATLAB syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to batch process all of the data from a wind tunnel run, to map the two-dimensional heating distribution to a three-dimensional computer-aided design model of the vehicle to be viewed in Tecplot, and to extract data from a segmented line that follows an interesting feature in the data. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy code to validate the program. The differences between the two codes were on the order of 10-5 to 10-7. IHEAT 4.0 replaces the PV-WAVE version as the production code for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.
Classification of brain MRI with big data and deep 3D convolutional neural networks
NASA Astrophysics Data System (ADS)
Wegmayr, Viktor; Aitharaju, Sai; Buhmann, Joachim
2018-02-01
Our ever-aging society faces the growing problem of neurodegenerative diseases, in particular dementia. Magnetic Resonance Imaging provides a unique tool for non-invasive investigation of these brain diseases. However, it is extremely difficult for neurologists to identify complex disease patterns from large amounts of three-dimensional images. In contrast, machine learning excels at automatic pattern recognition from large amounts of data. In particular, deep learning has achieved impressive results in image classification. Unfortunately, its application to medical image classification remains difficult. We consider two reasons for this difficulty: First, volumetric medical image data is considerably scarcer than natural images. Second, the complexity of 3D medical images is much higher compared to common 2D images. To address the problem of small data set size, we assemble the largest dataset ever used for training a deep 3D convolutional neural network to classify brain images as healthy (HC), mild cognitive impairment (MCI) or Alzheimers disease (AD). We use more than 20.000 images from subjects of these three classes, which is almost 9x the size of the previously largest data set. The problem of high dimensionality is addressed by using a deep 3D convolutional neural network, which is state-of-the-art in large-scale image classification. We exploit its ability to process the images directly, only with standard preprocessing, but without the need for elaborate feature engineering. Compared to other work, our workflow is considerably simpler, which increases clinical applicability. Accuracy is measured on the ADNI+AIBL data sets, and the independent CADDementia benchmark.
Two-photon confocal microscopy in wound healing
NASA Astrophysics Data System (ADS)
Navarro, Fernando A.; So, Peter T. C.; Driessen, Antoine; Kropf, Nina; Park, Christine S.; Huertas, Juan C.; Lee, Hoon B.; Orgill, Dennis P.
2001-04-01
Advances in histopathology and immunohistochemistry have allowed for precise microanatomic detail of tissues. Two Photon Confocal Microscopy (TPCM) is a new technology useful in non-destructive analysis of tissue. Laser light excites the natural florophores, NAD(P)H and NADP+ and the scattering patterns of the emitted light are analyzed to reconstruct microanatomic features. Guinea pig skin was studied using TPCM and skin preparation methods including chemical depilation and tape striping. Results of TPCM were compared with conventional hematoxylin and eosin microscopy. Two-dimensional images were rendered from the three dimensional reconstructions. Images of deeper layers including basal cells and the dermo-epidermal junction improved after removing the stratum corneum with chemical depilation or tape stripping. TCPM allows good resolution of corneocytes, basal cells and collagen fibers and shows promise as a non-destructive method to study wound healing.
Three-dimensional imaging of dislocation dynamics during the hydriding phase transformation
NASA Astrophysics Data System (ADS)
Ulvestad, A.; Welland, M. J.; Cha, W.; Liu, Y.; Kim, J. W.; Harder, R.; Maxey, E.; Clark, J. N.; Highland, M. J.; You, H.; Zapol, P.; Hruszkewycz, S. O.; Stephenson, G. B.
2017-05-01
Crystallographic imperfections significantly alter material properties and their response to external stimuli, including solute-induced phase transformations. Despite recent progress in imaging defects using electron and X-ray techniques, in situ three-dimensional imaging of defect dynamics remains challenging. Here, we use Bragg coherent diffractive imaging to image defects during the hydriding phase transformation of palladium nanocrystals. During constant-pressure experiments we observe that the phase transformation begins after dislocation nucleation close to the phase boundary in particles larger than 300 nm. The three-dimensional phase morphology suggests that the hydrogen-rich phase is more similar to a spherical cap on the hydrogen-poor phase than to the core-shell model commonly assumed. We substantiate this using three-dimensional phase field modelling, demonstrating how phase morphology affects the critical size for dislocation nucleation. Our results reveal how particle size and phase morphology affects transformations in the PdH system.
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas B.; Jain, Manu; Cordova, Miguel A.; Kose, Kivanc; Rajadhyaksha, Milind; Halpern, Allan C.; Farkas, Daniel L.
2017-02-01
Motivation and background: Melanoma, the fastest growing cancer worldwide, kills more than one person every hour in the United States. Determining the depth and distribution of dermal melanin and hemoglobin adds physio-morphologic information to the current diagnostic standard, cellular morphology, to further develop noninvasive methods to discriminate between melanoma and benign skin conditions. Purpose: To compare the performance of a multimode dermoscopy system (SkinSpect), which is designed to quantify and map in three dimensions, in vivo melanin and hemoglobin in skin, and to validate this with histopathology and three dimensional reflectance confocal microscopy (RCM) imaging. Methods: Sequentially capture SkinSpect and RCM images of suspect lesions and nearby normal skin and compare this with histopathology reports, RCM imaging allows noninvasive observation of nuclear, cellular and structural detail in 1-5 μm-thin optical sections in skin, and detection of pigmented skin lesions with sensitivity of 90-95% and specificity of 70-80%. The multimode imaging dermoscope combines polarization (cross and parallel), autofluorescence and hyperspectral imaging to noninvasively map the distribution of melanin, collagen and hemoglobin oxygenation in pigmented skin lesions. Results: We compared in vivo features of ten melanocytic lesions extracted by SkinSpect and RCM imaging, and correlated them to histopathologic results. We present results of two melanoma cases (in situ and invasive), and compare with in vivo features from eight benign lesions. Melanin distribution at different depths and hemodynamics, including abnormal vascularity, detected by both SkinSpect and RCM will be discussed. Conclusion: Diagnostic features such as dermal melanin and hemoglobin concentration provided in SkinSpect skin analysis for melanoma and normal pigmented lesions can be compared and validated using results from RCM and histopathology.
Palm vein recognition based on directional empirical mode decomposition
NASA Astrophysics Data System (ADS)
Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei
2014-04-01
Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for the processing of texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from finer to large scale. A DEMD-based 2 linear discriminant analysis (LDA) for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scale and orientation are extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.
Application of digital interferogram evaluation techniques to the measurement of 3-D flow fields
NASA Technical Reports Server (NTRS)
Becker, Friedhelm; Yu, Yung H.
1987-01-01
A system for digitally evaluating interferograms, based on an image processing system connected to a host computer, was implemented. The system supports one- and two-dimensional interferogram evaluations. Interferograms are digitized, enhanced, and then segmented. The fringe coordinates are extracted, and the fringes are represented as polygonal data structures. Fringe numbering and fringe interpolation modules are implemented. The system supports editing and interactive features, as well as graphic visualization. An application of the system to the evaluation of double exposure interferograms from the transonic flow field around a helicopter blade and the reconstruction of the three dimensional flow field is given.
NASA Astrophysics Data System (ADS)
Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.
2015-03-01
Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera
Qu, Yufu; Huang, Jianyu; Zhang, Xuan
2018-01-01
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by unmanned aerial vehicles’ camera (UAVs) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth–map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.
Qu, Yufu; Huang, Jianyu; Zhang, Xuan
2018-01-14
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by unmanned aerial vehicles' camera (UAVs) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.
An interactive framework for acquiring vision models of 3-D objects from 2-D images.
Motai, Yuichi; Kak, Avinash
2004-02-01
This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and object containing curved features.
2016-04-28
Single- shot , volumetrically illuminated, three- dimensional, tomographic laser-induced- fluorescence imaging in a gaseous free jet Benjamin R. Halls...us.af.mil Abstract: Single- shot , tomographic imaging of the three-dimensional concentration field is demonstrated in a turbulent gaseous free jet in co-flow...2001). 6. K. M. Tacina and W. J. A. Dahm, “Effects of heat release on turbulent shear flows, Part 1. A general equivalence principle for non-buoyant
A method for brain 3D surface reconstruction from MR images
NASA Astrophysics Data System (ADS)
Zhao, De-xin
2014-09-01
Due to the encephalic tissues are highly irregular, three-dimensional (3D) modeling of brain always leads to complicated computing. In this paper, we explore an efficient method for brain surface reconstruction from magnetic resonance (MR) images of head, which is helpful to surgery planning and tumor localization. A heuristic algorithm is proposed for surface triangle mesh generation with preserved features, and the diagonal length is regarded as the heuristic information to optimize the shape of triangle. The experimental results show that our approach not only reduces the computational complexity, but also completes 3D visualization with good quality.
Visualizing Vector Fields Using Line Integral Convolution and Dye Advection
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu
1996-01-01
We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
NASA Astrophysics Data System (ADS)
Brandl, Miriam B.; Beck, Dominik; Pham, Tuan D.
2011-06-01
The high dimensionality of image-based dataset can be a drawback for classification accuracy. In this study, we propose the application of fuzzy c-means clustering, cluster validity indices and the notation of a joint-feature-clustering matrix to find redundancies of image-features. The introduced matrix indicates how frequently features are grouped in a mutual cluster. The resulting information can be used to find data-derived feature prototypes with a common biological meaning, reduce data storage as well as computation times and improve the classification accuracy.
Edge detection and localization with edge pattern analysis and inflection characterization
NASA Astrophysics Data System (ADS)
Jiang, Bo
2012-05-01
In general edges are considered to be abrupt changes or discontinuities in two dimensional image signal intensity distributions. The accuracy of front-end edge detection methods in image processing impacts the eventual success of higher level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step function model to real distortions in natural images, research on one dimensional edge pattern analysis to improve the accuracy of edge detection and localization proposes an edge detection algorithm, which is composed by three basic edge patterns, such as ramp, impulse, and step. After mathematical analysis, general rules for edge representation based upon the classification of edge types into three categories-ramp, impulse, and step (RIS) are developed to reduce detection and localization errors, especially reducing "double edge" effect that is one important drawback to the derivative method. But, when applying one dimensional edge pattern in two dimensional image processing, a new issue is naturally raised that the edge detector should correct marking inflections or junctions of edges. Research on human visual perception of objects and information theory pointed out that a pattern lexicon of "inflection micro-patterns" has larger information than a straight line. Also, research on scene perception gave an idea that contours have larger information are more important factor to determine the success of scene categorization. Therefore, inflections or junctions are extremely useful features, whose accurate description and reconstruction are significant in solving correspondence problems in computer vision. Therefore, aside from adoption of edge pattern analysis, inflection or junction characterization is also utilized to extend traditional derivative edge detection algorithm. Experiments were conducted to test my propositions about edge detection and localization accuracy improvements. The results support the idea that these edge detection method improvements are effective in enhancing the accuracy of edge detection and localization.
Three-dimensional Bragg coherent diffraction imaging of an extended ZnO crystal.
Huang, Xiaojing; Harder, Ross; Leake, Steven; Clark, Jesse; Robinson, Ian
2012-08-01
A complex three-dimensional quantitative image of an extended zinc oxide (ZnO) crystal has been obtained using Bragg coherent diffraction imaging integrated with ptychography. By scanning a 2.5 µm-long arm of a ZnO tetrapod across a 1.3 µm X-ray beam with fine step sizes while measuring a three-dimensional diffraction pattern at each scan spot, the three-dimensional electron density and projected displacement field of the entire crystal were recovered. The simultaneously reconstructed complex wavefront of the illumination combined with its coherence properties determined by a partial coherence analysis implemented in the reconstruction process provide a comprehensive characterization of the incident X-ray beam.
Unsupervised universal steganalyzer for high-dimensional steganalytic features
NASA Astrophysics Data System (ADS)
Hou, Xiaodan; Zhang, Tao
2016-11-01
The research in developing steganalytic features has been highly successful. These features are extremely powerful when applied to supervised binary classification problems. However, they are incompatible with unsupervised universal steganalysis because the unsupervised method cannot distinguish embedding distortion from varying levels of noises caused by cover variation. This study attempts to alleviate the problem by introducing similarity retrieval of image statistical properties (SRISP), with the specific aim of mitigating the effect of cover variation on the existing steganalytic features. First, cover images with some statistical properties similar to those of a given test image are searched from a retrieval cover database to establish an aided sample set. Then, unsupervised outlier detection is performed on a test set composed of the given test image and its aided sample set to determine the type (cover or stego) of the given test image. Our proposed framework, called SRISP-aided unsupervised outlier detection, requires no training. Thus, it does not suffer from model mismatch mess. Compared with prior unsupervised outlier detectors that do not consider SRISP, the proposed framework not only retains the universality but also exhibits superior performance when applied to high-dimensional steganalytic features.
Zernike phase contrast cryo-electron tomography of whole bacterial cells
Guerrero-Ferreira, Ricardo C.; Wright, Elizabeth R.
2014-01-01
Cryo-electron tomography (cryo-ET) provides three-dimensional (3D) structural information of bacteria preserved in a native, frozen-hydrated state. The typical low contrast of tilt-series images, a result of both the need for a low electron dose and the use of conventional defocus phase-contrast imaging, is a challenge for high-quality tomograms. We show that Zernike phase-contrast imaging allows the electron dose to be reduced. This limits movement of gold fiducials during the tilt series, which leads to better alignment and a higher-resolution reconstruction. Contrast is also enhanced, improving visibility of weak features. The reduced electron dose also means that more images at more tilt angles could be recorded, further increasing resolution. PMID:24075950
Task-oriented lossy compression of magnetic resonance images
NASA Astrophysics Data System (ADS)
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
NASA Astrophysics Data System (ADS)
Davis, Benjamin L.; Berrier, Joel C.; Shields, Douglas W.; Kennefick, Julia; Kennefick, Daniel; Seigar, Marc S.; Lacy, Claud H. S.; Puerari, Ivânio
2012-04-01
A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing two-dimensional fast Fourier transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow comparison of spiral galaxy pitch angle to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Alonso-Farré, J M; Gonzalo-Orden, M; Barreiro-Vázquez, J D; Barreiro-Lois, A; André, M; Morell, M; Llarena-Reino, M; Monreal-Pawlowsky, T; Degollada, E
2015-02-01
Computed tomography (CT) and low-field magnetic resonance imaging (MRI) were used to scan seven by-caught dolphin cadavers, belonging to two species: four common dolphins (Delphinus delphis) and three striped dolphins (Stenella coeruleoalba). CT and MRI were obtained with the animals in ventral recumbency. After the imaging procedures, six dolphins were frozen at -20°C and sliced in the same position they were examined. Not only CT and MRI scans, but also cross sections of the heads were obtained in three body planes: transverse (slices of 1 cm thickness) in three dolphins, sagittal (5 cm thickness) in two dolphins and dorsal (5 cm thickness) in two dolphins. Relevant anatomical structures were identified and labelled on each cross section, obtaining a comprehensive bi-dimensional topographical anatomy guide of the main features of the common and the striped dolphin head. Furthermore, the anatomical cross sections were compared with their corresponding CT and MRI images, allowing an imaging identification of most of the anatomical features. CT scans produced an excellent definition of the bony and air-filled structures, while MRI allowed us to successfully identify most of the soft tissue structures in the dolphin's head. This paper provides a detailed anatomical description of the head structures of common and striped dolphins and compares anatomical cross sections with CT and MRI scans, becoming a reference guide for the interpretation of imaging studies. © 2014 Blackwell Verlag GmbH.
Coherent three-dimensional X-ray cryo-imaging.
Robinson, Ian
2015-09-01
The combination of cryogenic sample temperatures with three-dimensional coherent diffractive imaging for the case of whole frozen-hydrated cells is discussed in the light of theoretical predictions of the achievable resolution.
General tensor discriminant analysis and gabor features for gait recognition.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2007-10-01
The traditional image representations are not suited to conventional classification methods, such as the linear discriminant analysis (LDA), because of the under sample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of the two dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA compared with existing preprocessing methods, e.g., principal component analysis (PCA) and 2DLDA, include 1) the USP is reduced in subsequent classification by, for example, LDA; 2) the discriminative information in the training tensors is preserved; and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm to obtain a solution of GTDA converges, while that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. The averaged gait images are utilized for gait representation. Given the popularity of Gabor function based image decompositions for image understanding and object recognition, we develop three different Gabor function based image representations: 1) the GaborD representation is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS and GaborSD representations are applied to the problem of recognizing people from their averaged gait images.A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS or GaborSD image representation, then using GDTA to extract features and finally using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the USF HumanID Database. Experimental comparisons are made with nine state of the art classification methods in gait recognition.
Querying Patterns in High-Dimensional Heterogenous Datasets
ERIC Educational Resources Information Center
Singh, Vishwakarma
2012-01-01
The recent technological advancements have led to the availability of a plethora of heterogenous datasets, e.g., images tagged with geo-location and descriptive keywords. An object in these datasets is described by a set of high-dimensional feature vectors. For example, a keyword-tagged image is represented by a color-histogram and a…
2D/3D Visual Tracker for Rover Mast
NASA Technical Reports Server (NTRS)
Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria
2006-01-01
A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points, the 3D motions of which are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
Comparison of thyroid segmentation techniques for 3D ultrasound
NASA Astrophysics Data System (ADS)
Wunderling, T.; Golla, B.; Poudel, P.; Arens, C.; Friebe, M.; Hansen, C.
2017-02-01
The segmentation of the thyroid in ultrasound images is a field of active research. The thyroid is a gland of the endocrine system and regulates several body functions. Measuring the volume of the thyroid is regular practice of diagnosing pathological changes. In this work, we compare three approaches for semi-automatic thyroid segmentation in freehand-tracked three-dimensional ultrasound images. The approaches are based on level set, graph cut and feature classification. For validation, sixteen 3D ultrasound records were created with ground truth segmentations, which we make publicly available. The properties analyzed are the Dice coefficient when compared against the ground truth reference and the effort of required interaction. Our results show that in terms of Dice coefficient, all algorithms perform similarly. For interaction, however, each algorithm has advantages over the other. The graph cut-based approach gives the practitioner direct influence on the final segmentation. Level set and feature classifier require less interaction, but offer less control over the result. All three compared methods show promising results for future work and provide several possible extensions.
NASA Astrophysics Data System (ADS)
Guldner, Ian H.; Yang, Lin; Cowdrick, Kyle R.; Wang, Qingfei; Alvarez Barrios, Wendy V.; Zellmer, Victoria R.; Zhang, Yizhe; Host, Misha; Liu, Fang; Chen, Danny Z.; Zhang, Siyuan
2016-04-01
Metastatic microenvironments are spatially and compositionally heterogeneous. This seemingly stochastic heterogeneity provides researchers great challenges in elucidating factors that determine metastatic outgrowth. Herein, we develop and implement an integrative platform that will enable researchers to obtain novel insights from intricate metastatic landscapes. Our two-segment platform begins with whole tissue clearing, staining, and imaging to globally delineate metastatic landscape heterogeneity with spatial and molecular resolution. The second segment of our platform applies our custom-developed SMART 3D (Spatial filtering-based background removal and Multi-chAnnel forest classifiers-based 3D ReconsTruction), a multi-faceted image analysis pipeline, permitting quantitative interrogation of functional implications of heterogeneous metastatic landscape constituents, from subcellular features to multicellular structures, within our large three-dimensional (3D) image datasets. Coupling whole tissue imaging of brain metastasis animal models with SMART 3D, we demonstrate the capability of our integrative pipeline to reveal and quantify volumetric and spatial aspects of brain metastasis landscapes, including diverse tumor morphology, heterogeneous proliferative indices, metastasis-associated astrogliosis, and vasculature spatial distribution. Collectively, our study demonstrates the utility of our novel integrative platform to reveal and quantify the global spatial and volumetric characteristics of the 3D metastatic landscape with unparalleled accuracy, opening new opportunities for unbiased investigation of novel biological phenomena in situ.
NASA Astrophysics Data System (ADS)
Burton, Jason C.; Wang, Shang; Behringer, Richard R.; Larina, Irina V.
2016-03-01
Infertility is a known major health concern and is estimated to impact ~15% of couples in the U.S. The majority of failed pregnancies occur before or during implantation of the fertilized embryo into the uterus. Understanding the mechanisms regulating development by studying mouse reproductive organs could significantly contribute to an improved understanding of normal development of reproductive organs and developmental causes of infertility in humans. Towards this goal, we report a three-dimensional (3D) imaging study of the developing mouse reproductive organs (ovary, oviduct, and uterus) using optical coherence tomography (OCT). In our study, OCT was used for 3D imaging of reproductive organs without exogenous contrast agents and provides micro-scale spatial resolution. Experiments were conducted in vitro on mouse reproductive organs ranging from the embryonic day 14.5 to adult stages. Structural features of the ovary, oviduct, and uterus are presented. Additionally, a comparison with traditional histological analysis is illustrated. These results provide a basis for a wide range of infertility studies in mouse models. Through integration with traditional genetic and molecular biology approaches, this imaging method can improve understanding of ovary, oviduct, and uterus development and function, serving to further contribute to our understanding of fertility and infertility.
NASA Astrophysics Data System (ADS)
Xuan, Ruijiao; Zhao, Xinyan; Hu, Doudou; Jian, Jianbo; Wang, Tailing; Hu, Chunhong
2015-07-01
X-ray phase-contrast imaging (PCI) can substantially enhance contrast, and is particularly useful in differentiating biological soft tissues with small density differences. Combined with computed tomography (CT), PCI-CT enables the acquisition of accurate microstructures inside biological samples. In this study, liver microvasculature was visualized without contrast agents in vitro with PCI-CT using liver fibrosis samples induced by bile duct ligation (BDL) in rats. The histological section examination confirmed the correspondence of CT images with the microvascular morphology of the samples. By means of the PCI-CT and three-dimensional (3D) visualization technique, 3D microvascular structures in samples from different stages of liver fibrosis were clearly revealed. Different types of blood vessels, including portal veins and hepatic veins, in addition to ductular proliferation and bile ducts, could be distinguished with good sensitivity, excellent specificity and excellent accuracy. The study showed that PCI-CT could assess the morphological changes in liver microvasculature that result from fibrosis and allow characterization of the anatomical and pathological features of the microvasculature. With further development of PCI-CT technique, it may become a novel noninvasive imaging technique for the auxiliary analysis of liver fibrosis.
Three-dimensional device characterization by high-speed cinematography
NASA Astrophysics Data System (ADS)
Maier, Claus; Hofer, Eberhard P.
2001-10-01
Testing of micro-electro-mechanical systems (MEMS) for optimization purposes or reliability checks can be supported by device visualization whenever an optical access is available. The difficulty in such an investigation is the short time duration of dynamical phenomena in micro devices. This paper presents a test setup to visualize movements within MEMS in real-time and in two perpendicular directions. A three-dimensional view is achieved by the combination of a commercial high-speed camera system, which allows to take up to 8 images of the same process with a minimum interframe time of 10 ns for the first direction, with a second visualization system consisting of a highly sensitive CCD camera working with a multiple exposure LED illumination in the perpendicular direction. Well synchronized this provides 3-D information which is treated by digital image processing to correct image distortions and to perform the detection of object contours. Symmetric and asymmetric binary collisions of micro drops are chosen as test experiments, featuring coalescence and surface rupture. Another application shown here is the investigation of sprays produced by an atomizer. The second direction of view is a prerequisite for this measurement to select an intended plane of focus.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Leader, Joseph K.; Liu, Hong; Zheng, Bin
2015-03-01
We recently investigated a new mammographic image feature based risk factor to predict near-term breast cancer risk after a woman has a negative mammographic screening. We hypothesized that unlike the conventional epidemiology-based long-term (or lifetime) risk factors, the mammographic image feature based risk factor value will increase as the time lag between the negative and positive mammography screening decreases. The purpose of this study is to test this hypothesis. From a large and diverse full-field digital mammography (FFDM) image database with 1278 cases, we collected all available sequential FFDM examinations for each case including the "current" and 1 to 3 most recently "prior" examinations. All "prior" examinations were interpreted negative, and "current" ones were either malignant or recalled negative/benign. We computed 92 global mammographic texture and density based features, and included three clinical risk factors (woman's age, family history and subjective breast density BIRADS ratings). On this initial feature set, we applied a fast and accurate Sequential Forward Floating Selection (SFFS) feature selection algorithm to reduce feature dimensionality. The features computed on both mammographic views were individually/ separately trained using two artificial neural network (ANN) classifiers. The classification scores of the two ANNs were then merged with a sequential ANN. The results show that the maximum adjusted odds ratios were 5.59, 7.98, and 15.77 for using the 3rd, 2nd, and 1st "prior" FFDM examinations, respectively, which demonstrates a higher association of mammographic image feature change and an increasing risk trend of developing breast cancer in the near-term after a negative screening.
NASA Astrophysics Data System (ADS)
Hoffmann, A.; Zimmermann, F.; Scharr, H.; Krömker, S.; Schulz, C.
2005-01-01
A laser-based technique for measuring instantaneous three-dimensional species concentration distributions in turbulent flows is presented. The laser beam from a single laser is formed into two crossed light sheets that illuminate the area of interest. The laser-induced fluorescence (LIF) signal emitted from excited species within both planes is detected with a single camera via a mirror arrangement. Image processing enables the reconstruction of the three-dimensional data set in close proximity to the cutting line of the two light sheets. Three-dimensional intensity gradients are computed and compared to the two-dimensional projections obtained from the two directly observed planes. Volume visualization by digital image processing gives unique insight into the three-dimensional structures within the turbulent processes. We apply this technique to measurements of toluene-LIF in a turbulent, non-reactive mixing process of toluene and air and to hydroxyl (OH) LIF in a turbulent methane-air flame upon excitation at 248 nm with a tunable KrF excimer laser.
Dynamic Imaging of Mouse Embryos and Cardiodynamics in Static Culture.
Lopez, Andrew L; Larina, Irina V
2018-01-01
The heart is a dynamic organ that quickly undergoes morphological and mechanical changes through early embryonic development. Characterizing these early moments is important for our understanding of proper embryonic development and the treatment of heart disease. Traditionally, tomographic imaging modalities and fluorescence-based microscopy are excellent approaches to visualize structural features and gene expression patterns, respectively, and connect aberrant gene programs to pathological phenotypes. However, these approaches usually require static samples or fluorescent markers, which can limit how much information we can derive from the dynamic and mechanical changes that regulate heart development. Optical coherence tomography (OCT) is unique in this circumstance because it allows for the acquisition of three-dimensional structural and four-dimensional (3D + time) functional images of living mouse embryos without fixation or contrast reagents. In this chapter, we focus on how OCT can visualize heart morphology at different stages of development and provide cardiodynamic information to reveal mechanical properties of the developing heart.
Super Talbot effect in indefinite metamaterial.
Zhao, Wangshi; Huang, Xiaoyue; Lu, Zhaolin
2011-08-01
The Talbot effect (or the self-imaging effect) can be observed for a periodic object with a pitch larger than the diffraction limit of an imaging system, where the paraxial approximation is applied. In this paper, we show that the super Talbot effect can be achieved in an indefinite metamaterial even when the period is much smaller than the diffraction limit in both two-dimensional and three-dimensional numerical simulations, where the paraxial approximation is not applied. This is attributed to the evanescent waves, which carry the information about subwavelength features of the object, can be converted into propagating waves and then conveyed to far field by the metamaterial, where the permittivity in the propagation direction is negative while the transverse ones are positive. The indefinite metamaterial can be approximated by a system of thin, alternating multilayer metal and insulator (MMI) stack. As long as the loss of the metamaterial is small enough, deep subwavelength image size can be obtained in the super Talbot effect.
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
NASA Astrophysics Data System (ADS)
Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit
2016-03-01
We explore the combination of text metadata, such as patients' age and gender, with image-based features, for X-ray chest pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which performed by a comparison of the corresponding SVM classification probabilities. We show that retrieval results increase once the age and gender information combined with the features extracted from the last layers of the network, with best results using the classification-based scheme. Visualization of the X-ray data is presented by embedding the high dimensional deep learning features in a 2-D dimensional space while preserving the pairwise distances using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar to the query image and among themselves, which is a characteristic we do not see in a 1-D traditional ranking.
Multiparallel Three-Dimensional Optical Microscopy
NASA Technical Reports Server (NTRS)
Nguyen, Lam K.; Price, Jeffrey H.; Kellner, Albert L.; Bravo-Zanoquera, Miguel
2010-01-01
Multiparallel three-dimensional optical microscopy is a method of forming an approximate three-dimensional image of a microscope sample as a collection of images from different depths through the sample. The imaging apparatus includes a single microscope plus an assembly of beam splitters and mirrors that divide the output of the microscope into multiple channels. An imaging array of photodetectors in each channel is located at a different distance along the optical path from the microscope, corresponding to a focal plane at a different depth within the sample. The optical path leading to each photodetector array also includes lenses to compensate for the variation of magnification with distance so that the images ultimately formed on all the photodetector arrays are of the same magnification. The use of optical components common to multiple channels in a simple geometry makes it possible to obtain high light-transmission efficiency with an optically and mechanically simple assembly. In addition, because images can be read out simultaneously from all the photodetector arrays, the apparatus can support three-dimensional imaging at a high scanning rate.
NASA Astrophysics Data System (ADS)
Liu, Zexi; Cohen, Fernand
2017-11-01
We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses using gender and ethnicity specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D rendering of a difficult two-dimensional (2-D) face recognition problem into a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under a good illumination condition, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.
Gu, X; Fang, Z-M; Liu, Y; Lin, S-L; Han, B; Zhang, R; Chen, X
2014-01-01
Three-dimensional fluid-attenuated inversion recovery magnetic resonance imaging of the inner ear after intratympanic injection of gadolinium, together with magnetic resonance imaging scoring of the perilymphatic space, were used to investigate the positive identification rate of hydrops and determine the technique's diagnostic value for delayed endolymphatic hydrops. Twenty-five patients with delayed endolymphatic hydrops underwent pure tone audiometry, bithermal caloric testing, vestibular-evoked myogenic potential testing and three-dimensional magnetic resonance imaging of the inner ear after bilateral intratympanic injection of gadolinium. The perilymphatic space of the scanned images was analysed to investigate the positive identification rate of endolymphatic hydrops. According to the magnetic resonance imaging scoring of the perilymphatic space and the diagnostic standard, 84 per cent of the patients examined had endolymphatic hydrops. In comparison, the positive identification rates for vestibular-evoked myogenic potential and bithermal caloric testing were 52 per cent and 72 per cent respectively. Three-dimensional magnetic resonance imaging after intratympanic injection of gadolinium is valuable in the diagnosis of delayed endolymphatic hydrops and its classification. The perilymphatic space scoring system improved the diagnostic accuracy of magnetic resonance imaging.
von Braunmühl, T; Hartmann, D; Tietze, J K; Cekovic, D; Kunte, C; Ruzicka, T; Berking, C; Sattler, E C
2016-11-01
Optical coherence tomography (OCT) has become a valuable non-invasive tool in the in vivo diagnosis of non-melanoma skin cancer, especially of basal cell carcinoma (BCC). Due to an updated software-supported algorithm, a new en-face mode - similar to the horizontal en-face mode in high-definition OCT and reflectance confocal microscopy - surface-parallel imaging is possible which, in combination with the established slice mode of frequency domain (FD-)OCT, may offer additional information in the diagnosis of BCC. To define characteristic morphologic features of BCC using the new en-face mode in addition to the conventional cross-sectional imaging mode for three-dimensional imaging of BCC in FD-OCT. A total of 33 BCC were examined preoperatively by imaging in en-face mode as well as cross-sectional mode in FD-OCT. Characteristic features were evaluated and correlated with histopathology findings. Features established in the cross-sectional imaging mode as well as additional features were present in the en-face mode of FD-OCT: lobulated structures (100%), dark peritumoral rim (75%), bright peritumoral stroma (96%), branching vessels (90%), compressed fibrous bundles between lobulated nests ('star shaped') (78%), and intranodular small bright dots (51%). These features were also evaluated according to the histopathological subtype. In the en-face mode, the lobulated structures with compressed fibrous bundles of the BCC were more distinct than in the slice mode. FD-OCT with a new depiction for horizontal and vertical imaging modes offers additional information in the diagnosis of BCC, especially in nodular BCC, and enhances the possibility of the evaluation of morphologic tumour features. © 2016 European Academy of Dermatology and Venereology.
Image formation of thick three-dimensional objects in differential-interference-contrast microscopy.
Trattner, Sigal; Kashdan, Eugene; Feigin, Micha; Sochen, Nir
2014-05-01
The differential-interference-contrast (DIC) microscope is of widespread use in life sciences as it enables noninvasive visualization of transparent objects. The goal of this work is to model the image formation process of thick three-dimensional objects in DIC microscopy. The model is based on the principles of electromagnetic wave propagation and scattering. It simulates light propagation through the components of the DIC microscope to the image plane using a combined geometrical and physical optics approach and replicates the DIC image of the illuminated object. The model is evaluated by comparing simulated images of three-dimensional spherical objects with the recorded images of polystyrene microspheres. Our computer simulations confirm that the model captures the major DIC image characteristics of the simulated object, and it is sensitive to the defocusing effects.
A comparative study of new and current methods for dental micro-CT image denoising
Lashgari, Mojtaba; Qin, Jie; Swain, Michael
2016-01-01
Objectives: The aim of the current study was to evaluate the application of two advanced noise-reduction algorithms for dental micro-CT images and to implement a comparative analysis of the performance of new and current denoising algorithms. Methods: Denoising was performed using gaussian and median filters as the current filtering approaches and the block-matching and three-dimensional (BM3D) method and total variation method as the proposed new filtering techniques. The performance of the denoising methods was evaluated quantitatively using contrast-to-noise ratio (CNR), edge preserving index (EPI) and blurring indexes, as well as qualitatively using the double-stimulus continuous quality scale procedure. Results: The BM3D method had the best performance with regard to preservation of fine textural features (CNREdge), non-blurring of the whole image (blurring index), the clinical visual score in images with very fine features and the overall visual score for all types of images. On the other hand, the total variation method provided the best results with regard to smoothing of images in texture-free areas (CNRTex-free) and in preserving the edges and borders of image features (EPI). Conclusions: The BM3D method is the most reliable technique for denoising dental micro-CT images with very fine textural details, such as shallow enamel lesions, in which the preservation of the texture and fine features is of the greatest importance. On the other hand, the total variation method is the technique of choice for denoising images without very fine textural details in which the clinician or researcher is interested mainly in anatomical features and structural measurements. PMID:26764583
Robotic Vision-Based Localization in an Urban Environment
NASA Technical Reports Server (NTRS)
Mchenry, Michael; Cheng, Yang; Matthies
2007-01-01
A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions: Visual Odometry This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit moremore » descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed PCA-based approach with statistically higher prediction accuracy. In one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such low dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable to facilitate managing respiratory motion in image-guide radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.« less
Schmitz, Alexander; Fischer, Sabine C; Mattheyer, Christian; Pampaloni, Francesco; Stelzer, Ernst H K
2017-03-03
Three-dimensional multicellular aggregates such as spheroids provide reliable in vitro substitutes for tissues. Quantitative characterization of spheroids at the cellular level is fundamental. We present the first pipeline that provides three-dimensional, high-quality images of intact spheroids at cellular resolution and a comprehensive image analysis that completes traditional image segmentation by algorithms from other fields. The pipeline combines light sheet-based fluorescence microscopy of optically cleared spheroids with automated nuclei segmentation (F score: 0.88) and concepts from graph analysis and computational topology. Incorporating cell graphs and alpha shapes provided more than 30 features of individual nuclei, the cellular neighborhood and the spheroid morphology. The application of our pipeline to a set of breast carcinoma spheroids revealed two concentric layers of different cell density for more than 30,000 cells. The thickness of the outer cell layer depends on a spheroid's size and varies between 50% and 75% of its radius. In differently-sized spheroids, we detected patches of different cell densities ranging from 5 × 10 5 to 1 × 10 6 cells/mm 3 . Since cell density affects cell behavior in tissues, structural heterogeneities need to be incorporated into existing models. Our image analysis pipeline provides a multiscale approach to obtain the relevant data for a system-level understanding of tissue architecture.
Brain tissue analysis using texture features based on optical coherence tomography images
NASA Astrophysics Data System (ADS)
Lenz, Marcel; Krug, Robin; Dillmann, Christopher; Gerhardt, Nils C.; Welp, Hubert; Schmieder, Kirsten; Hofmann, Martin R.
2018-02-01
Brain tissue differentiation is highly demanded in neurosurgeries, i.e. tumor resection. Exact navigation during the surgery is essential in order to guarantee best life quality afterwards. So far, no suitable method has been found that perfectly covers this demands. With optical coherence tomography (OCT), fast three dimensional images can be obtained in vivo and contactless with a resolution of 1-15 μm. With these specifications OCT is a promising tool to support neurosurgeries. Here, we investigate ex vivo samples of meningioma, healthy white and healthy gray matter in a preliminary study towards in vivo brain tumor removal assistance. Raw OCT images already display structural variations for different tissue types, especially meningioma. But, in order to achieve neurosurgical guidance directly during resection, an automated differentiation approach is desired. For this reason, we employ different texture feature based algorithms, perform a Principal Component Analysis afterwards and then train a Support Vector Machine classifier. In the future we will try different combinations of texture features and perform in vivo measurements in order to validate our findings.
Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy
NASA Astrophysics Data System (ADS)
Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee
2016-04-01
Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.
An evaluation of three-dimensional sensors for the extravehicular activity helper/retreiver
NASA Technical Reports Server (NTRS)
Magee, Michael
1993-01-01
The Extravehicular Activity Retriever/Helper (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, accurate sensing of the operational environment and objects in the environment will therefore be critical. The qualitative and quantitative results of empirical studies of three sensors that are capable of providing three-dimensional information to the EVAHR, but using completely different hardware approaches are documented. The first of these devices is a phase shift laser with an effective operating range (ambiguity interval) of approximately 15 meters. The second sensor is a laser triangulation system designed to operate at much closer range and to provide higher resolution images. The third sensor is a dual camera stereo imaging system from which range images can also be obtained. The remainder of the report characterizes the strengths and weaknesses of each of these systems relative to quality of data extracted and how different object characteristics affect sensor operation.
Cerebral cortex three-dimensional profiling in human fetuses by magnetic resonance imaging
Sbarbati, Andrea; Pizzini, Francesca; Fabene, Paolo F; Nicolato, Elena; Marzola, Pasquina; Calderan, Laura; Simonati, Alessandro; Longo, Laura; Osculati, Antonio; Beltramello, Alberto
2004-01-01
Seven human fetuses of crown/rump length corresponding to gestational ages ranging from the 12th to the 16th week were studied using a paradigm based on three-dimensional reconstruction of the brain obtained by magnetic resonance imaging (MRI). The aim of the study was to evaluate brain morphology in situ and to describe developmental dynamics during an important period of fetal morphogenesis. Three-dimensional MRI showed the increasing degree of maturation of the brains; fronto-occipital distance, bitemporal distance and occipital angle were examined in all the fetuses. The data were interpreted by correlation with the internal structure as visualized using high-spatial-resolution MRI, acquired using a 4.7-T field intensity magnet with a gradient power of 20 G cm−1. The spatial resolution was sufficient for a detailed detection of five layers, and the contrast was optimized using sequences with different degrees of T1 and T2 weighting. Using the latter, it was possible to visualize the subplate and marginal zones. The cortical thickness was mapped on to the hemispheric surface, describing the thickness gradient from the insular cortex to the periphery of the hemispheres. The study demonstrates the utility of MRI for studying brain development. The method provides a quantitative profiling of the brain, which allows the calculation of important morphological parameters, and it provides informative regarding transient features of the developing brain. PMID:15198688
Quadratic trigonometric B-spline for image interpolation using GA
Abbas, Samreen; Irshad, Misbah
2017-01-01
In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems related to two dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques named as Genetic Algorithm (GA). The idea of GA has been formed to optimize the control parameters in the description of newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices along with traditional Peak Signal-to-Noise Ratio (PSNR) are employed as image quality metrics to analyze and compare the outcomes of approach offered in this work, with three of the present digital image interpolation schemes. The upshots show that the proposed scheme is better choice to deal with the problems associated to image interpolation. PMID:28640906
Quadratic trigonometric B-spline for image interpolation using GA.
Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah
2017-01-01
In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems related to two dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques named as Genetic Algorithm (GA). The idea of GA has been formed to optimize the control parameters in the description of newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices along with traditional Peak Signal-to-Noise Ratio (PSNR) are employed as image quality metrics to analyze and compare the outcomes of approach offered in this work, with three of the present digital image interpolation schemes. The upshots show that the proposed scheme is better choice to deal with the problems associated to image interpolation.
Real-time high dynamic range laser scanning microscopy
Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.
2016-01-01
In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging. PMID:27032979
NASA Technical Reports Server (NTRS)
Chamberlain, F. R. (Inventor)
1980-01-01
A system for generating, within a single frame of photographic film, a quadrified image including images of angularly (including orthogonally) related fields of view of a near field three dimensional object is described. It is characterized by three subsystems each of which includes a plurality of reflective surfaces for imaging a different field of view of the object at a different quadrant of the quadrified image. All of the subsystems have identical path lengths to the object photographed.
Microwave imaging by three-dimensional Born linearization of electromagnetic scattering
NASA Astrophysics Data System (ADS)
Caorsi, S.; Gragnani, G. L.; Pastorino, M.
1990-11-01
An approach to microwave imaging is proposed that uses a three-dimensional vectorial form of the Born approximation to linearize the equation of electromagnetic scattering. The inverse scattering problem is numerically solved for three-dimensional geometries by means of the moment method. A pseudoinversion algorithm is adopted to overcome ill conditioning. Results show that the method is well suited for qualitative imaging purposes, while its capability for exactly reconstructing the complex dielectric permittivity is affected by the limitations inherent in the Born approximation and in ill conditioning.
Three-dimensional ultrasound and image-directed surgery: implications for operating room personnel.
Macedonia, C
1997-04-01
The proliferation of new imaging technologies is having a profound impact on all surgical specialties. New means of surgical visualization are allowing more surgeries to be performed less invasively. Three-dimensional ultrasound is a technology that has potential as a diagnostic tool, as a presurgical planning simulator, and as an adjunct to image-directed surgery. This article describes how three-dimensional ultrasound is being used by the United States Department of Defense and how it may change the role of the perioperative nurse in the near future.
NASA Astrophysics Data System (ADS)
Bhardwaj, Kaushal; Patra, Swarnajyoti
2018-04-01
Inclusion of spatial information along with spectral features play a significant role in classification of remote sensing images. Attribute profiles have already proved their ability to represent spatial information. In order to incorporate proper spatial information, multiple attributes are required and for each attribute large profiles need to be constructed by varying the filter parameter values within a wide range. Thus, the constructed profiles that represent spectral-spatial information of an hyperspectral image have huge dimension which leads to Hughes phenomenon and increases computational burden. To mitigate these problems, this work presents an unsupervised feature selection technique that selects a subset of filtered image from the constructed high dimensional multi-attribute profile which are sufficiently informative to discriminate well among classes. In this regard the proposed technique exploits genetic algorithms (GAs). The fitness function of GAs are defined in an unsupervised way with the help of mutual information. The effectiveness of the proposed technique is assessed using one-against-all support vector machine classifier. The experiments conducted on three hyperspectral data sets show the robustness of the proposed method in terms of computation time and classification accuracy.
NASA Astrophysics Data System (ADS)
Saaid, Hicham; Segers, Patrick; Novara, Matteo; Claessens, Tom; Verdonck, Pascal
2018-03-01
The characterization of flow patterns in the left ventricle may help the development and interpretation of flow-based parameters of cardiac function and (patho-)physiology. Yet, in vivo visualization of highly dynamic three-dimensional flow patterns in an opaque and moving chamber is a challenging task. This has been shown in several recent multidisciplinary studies where in vivo imaging methods are often complemented by in silico solutions, or by in vitro methods. Because of its distinctive features, particle image velocimetry (PIV) has been extensively used to investigate flow dynamics in the cardiovascular field. However, full volumetric PIV data in a dynamically changing geometry such as the left ventricle remain extremely scarce, which justifies the present study. An investigation of the left ventricle flow making use of a customized cardiovascular simulator is presented; a multiplane scanning-stereoscopic PIV setup is used, which allows for the measurement of independent planes across the measurement volume. Due to the accuracy in traversing the illumination and imaging systems, the present setup allows to reconstruct the flow in a 3D volume performing only one single calibration. The effects of the orientation of a prosthetic mitral valve in anatomical and anti-anatomical configurations have been investigated during the diastolic filling time. The measurement is performed in a phase-locked manner; the mean velocity components are presented together with the vorticity and turbulent kinetic energy maps. The reconstructed 3D flow structures downstream the bileaflet mitral valve are shown, which provides additional insight of the highly three-dimensional flow.
Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho
2004-12-01
A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.
Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and visualizing them in three dimensions. It combines both automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that deliniate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose, enables users to quickly segment cellular processes in large volumes. PMID:22644867
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and visualizing them in three dimensions. It combines both automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated processmore » first uses machine learning and image processing techniques to identify neuron membranes that deliniate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose, enables users to quickly segment cellular processes in large volumes.« less
National Defense Center of Excellence for Industrial Metrology and 3D Imaging
2012-10-18
validation rather than mundane data-reduction/analysis tasks. Indeed, the new financial and technical resources being brought to bear by integrating CT...of extremely fast axial scanners. By replacing the single-spot detector by a detector array, a three-dimensional image is acquired by one depth scan...the number of acquired voxels per complete two-dimensional or three-dimensional image, the axial and lateral resolution, the depth range, the
NASA Astrophysics Data System (ADS)
Kirubanandham, A.; Lujan-Regalado, I.; Vallabhaneni, R.; Chawla, N.
2016-11-01
Decreasing pitch size in electronic packaging has resulted in a drastic decrease in solder volumes. The Sn grain crystallography and fraction of intermetallic compounds (IMCs) in small-scale solder joints evolve much differently at the smaller length scales. A cross-sectional study limits the morphological analysis of microstructural features to two dimensions. This study utilizes serial sectioning technique in conjunction with electron backscatter diffraction to investigate the crystallographic orientation of both Sn grains and Cu6Sn5 IMCs in Cu/Pure Sn/Cu solder joints in three dimensional (3D). Quantification of grain aspect ratio is affected by local cooling rate differences within the solder volume. Backscatter electron imaging and focused ion beam serial sectioning enabled the visualization of morphology of both nanosized Cu6Sn5 IMCs and the hollow hexagonal morphology type Cu6Sn5 IMCs in 3D. Quantification and visualization of microstructural features in 3D thus enable us to better understand the microstructure and deformation mechanics within these small scale solder joints.
Stereo imaging with spaceborne radars
NASA Technical Reports Server (NTRS)
Leberl, F.; Kobrick, M.
1983-01-01
Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space by presenting an overlapping image pair to an observer so that a three dimensional model is formed in the brain. Some of the observer's function is performed by machine correlation of the overlapping images - so called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques of generating three dimensional models of the surface from two sets of monocular image measurements is the topic of stereology.
The importance of spatial ability and mental models in learning anatomy
NASA Astrophysics Data System (ADS)
Chatterjee, Allison K.
As a foundational course in medical education, gross anatomy serves to orient medical and veterinary students to the complex three-dimensional nature of the structures within the body. Understanding such spatial relationships is both fundamental and crucial for achievement in gross anatomy courses, and is essential for success as a practicing professional. Many things contribute to learning spatial relationships; this project focuses on a few key elements: (1) the type of multimedia resources, particularly computer-aided instructional (CAI) resources, medical students used to study and learn; (2) the influence of spatial ability on medical and veterinary students' gross anatomy grades and their mental models; and (3) how medical and veterinary students think about anatomy and describe the features of their mental models to represent what they know about anatomical structures. The use of computer-aided instruction (CAI) by gross anatomy students at Indiana University School of Medicine (IUSM) was assessed through a questionnaire distributed to the regional centers of the IUSM. Students reported using internet browsing, PowerPoint presentation software, and email on a daily bases to study gross anatomy. This study reveals that first-year medical students at the IUSM make limited use of CAI to study gross anatomy. Such studies emphasize the importance of examining students' use of CAI to study gross anatomy prior to development and integration of electronic media into the curriculum and they may be important in future decisions regarding the development of alternative learning resources. In order to determine how students think about anatomical relationships and describe the features of their mental models, personal interviews were conducted with select students based on students' ROT scores. Five typologies of the characteristics of students' mental models were identified and described: spatial thinking, kinesthetic approach, identification of anatomical structures, problem solving strategies, and study methods. Students with different levels of spatial ability visualize and think about anatomy in qualitatively different ways, which is reflected by the features of their mental models. Low spatial ability students thought about and used two-dimensional images from the textbook. They possessed basic two-dimensional models of anatomical structures; they placed emphasis on diagrams and drawings in their studies; and they re-read anatomical problems many times before answering. High spatial ability students thought fully in three-dimensional and imagined rotation and movement of the structures; they made use of many types of images and text as they studied and solved problems. They possessed elaborate three-dimensional models of anatomical structures which they were able to manipulate to solve problems; and they integrated diagrams, drawings, and written text in their studies. Middle spatial ability students were a mix between both low and high spatial ability students. They imagined two-dimensional images popping out of the flat paper to become more three-dimensional, but still relied on drawings and diagrams. Additionally, high spatial ability students used a higher proportion of anatomical terminology than low spatial ability or middle spatial ability students. This provides additional support to the premise that high spatial students' mental models are a complex mixture of imagistic representations and propositional representations that incorporate correct anatomical terminology. Low spatial ability students focused on the function of structures and ways to group information primarily for the purpose of recall. This supports the theory that low spatial students' mental models will be characterized by more on imagistic representations that are general in nature. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Montejo, Ludguier D.; Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.
2013-03-01
We apply the Fourier Transform to absorption and scattering coefficient images of proximal interphalangeal (PIP) joints and evaluate the performance of these coefficients as classifiers using receiver operator characteristic (ROC) curve analysis. We find 25 features that yield a Youden index over 0.7, 3 features that yield a Youden index over 0.8, and 1 feature that yields a Youden index over 0.9 (90.0% sensitivity and 100% specificity). In general, scattering coefficient images yield better one-dimensional classifiers compared to absorption coefficient images. Using features derived from scattering coefficient images we obtain an average Youden index of 0.58 +/- 0.16, and an average Youden index of 0.45 +/- 0.15 when using features from absorption coefficient images.
Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition
Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao
2017-01-01
Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Then, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speed up robust feature to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, support vector machine, is utilized to realize fault diagnosis. Verification data were collected from Case Western Reserve University Bearing Data Center, and the experimental result indicates that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro
2013-05-16
Intracellular configuration is an important feature of cell status. Recent advances in microscopic imaging techniques allow us to easily obtain a large number of microscopic images of intracellular structures. In this circumstance, automated microscopic image recognition techniques are of extreme importance to future phenomics/visible screening approaches. However, there was no benchmark microscopic image dataset for intracellular organelles in a specified plant cell type. We previously established the Live Images of Plant Stomata (LIPS) database, a publicly available collection of optical-section images of various intracellular structures of plant guard cells, as a model system of environmental signal perception and transduction. Here we report recent updates to the LIPS database and the establishment of a database table, LIPService. We updated the LIPS dataset and established a new interface named LIPService to promote efficient inspection of intracellular structure configurations. Cell nuclei, microtubules, actin microfilaments, mitochondria, chloroplasts, endoplasmic reticulum, peroxisomes, endosomes, Golgi bodies, and vacuoles can be filtered using probe names or morphometric parameters such as stomatal aperture. In addition to the serial optical sectional images of the original LIPS database, new volume-rendering data for easy web browsing of three-dimensional intracellular structures have been released to allow easy inspection of their configurations or relationships with cell status/morphology. We also demonstrated the utility of the new LIPS image database for automated organelle recognition of images from another plant cell image database with image clustering analyses. The updated LIPS database provides a benchmark image dataset for representative intracellular structures in Arabidopsis guard cells. The newly released LIPService allows users to inspect the relationship between organellar three-dimensional configurations and morphometrical parameters.
3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform
NASA Astrophysics Data System (ADS)
Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul
2018-03-01
This paper describes the idea to realize three-dimensional surfaces of objects with cylinder-based shapes where the techniques adopted and the strategy developed for a non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography with the aid of performing several digital image processing algorithms on the two-dimensional images captured by a digital camera in thirty-six different projections and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions and each object is placed on a manually rotated turntable. The results shown that the proposed method has successfully reconstruct the three-dimensional surface of the objects and practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions of objects involved in the test are strengthened with the analysis where the maximum percent error obtained from the computation is approximately 1.4 % for the height whilst 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
The Design and Performance Characteristics of a Cellular Logic 3-D Image Classification Processor.
1981-04-01
34 AGARD Proc. No. 94 on Artificiel Intelligence , 217: 1-13 (1971) 7. Golay, Marcel J. E. "Hexagonal Parallel Pattern Transformations." IEEE Trans. on...nonrandom nature of the data and features must be understood in order to intelligently select a reasonable three-dimensional noise filter. This completes...tactical targets which are located hundreds of meters away and are controlled and disguised by equally intelligent human beings, the difficulty of the
NASA Astrophysics Data System (ADS)
Daily, W.; Ramirez, A.
1995-04-01
Electrical resistance tomography was used to monitor in-situ remediation processes for removal of volatile organic compounds from subsurface water and soil at the Savannah River Site near Aiken, South Carolina. This work was designed to test the feasibility of injecting a weak mixture of methane in air as a metabolic carbon source for natural microbial populations which are capable of trichloroethylene degradation. Electrical resistance tomograms were constructed of the subsurface during the test to provide detailed images of the process. These images were made using an iterative reconstruction algorithm based on a finite element forward model and Newton-type least-squares minimization. Changes in the subsurface resistivity distribution were imaged by a pixel-by-pixel subtraction of images taken before and during the process. This differential tomography removed all static features of formation resistivity but clearly delineated dynamic features induced by remediation processes. The air-methane mixture was injected into the saturated zone and the intrained air migration paths were tomographically imaged by the increased resistivity of the path as air displaced formation water. We found the flow paths to be confined to a complex three-dimensional network of channels, some of which extended as far as 30 m from the injection well. These channels were not entirely stable over a period of months since new channels appeared to form with time. Also, the resistivity of the air injection paths increased with time. In another series of tests, resistivity images of water infiltration from the surface support similar conclusions about the preferential permeability paths in the vadose zone. In this case, the water infiltration front is confined to narrow channels which have a three-dimensional structure. Here, similar to air injection in the saturated zone, the water flow is controlled by local variations in formation permeability. However, temporal changes in these channels are minor, indicating that the permeable paths do not seem to be modified by continued infiltration.
Analysis of autostereoscopic three-dimensional images using multiview wavelets.
Saveljev, Vladimir; Palchikova, Irina
2016-08-10
We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images.
NASA Astrophysics Data System (ADS)
Zhu, Weifang; Zhang, Li; Shi, Fei; Xiang, Dehui; Wang, Lirong; Guo, Jingyun; Yang, Xiaoling; Chen, Haoyu; Chen, Xinjian
2017-07-01
Cystoid macular edema (CME) and macular hole (MH) are the leading causes for visual loss in retinal diseases. The volume of the CMEs can be an accurate predictor for visual prognosis. This paper presents an automatic method to segment the CMEs from the abnormal retina with coexistence of MH in three-dimensional-optical coherence tomography images. The proposed framework consists of preprocessing and CMEs segmentation. The preprocessing part includes denoising, intraretinal layers segmentation and flattening, and MH and vessel silhouettes exclusion. In the CMEs segmentation, a three-step strategy is applied. First, an AdaBoost classifier trained with 57 features is employed to generate the initialization results. Second, an automated shape-constrained graph cut algorithm is applied to obtain the refined results. Finally, cyst area information is used to remove false positives (FPs). The method was evaluated on 19 eyes with coexistence of CMEs and MH from 18 subjects. The true positive volume fraction, FP volume fraction, dice similarity coefficient, and accuracy rate for CMEs segmentation were 81.0%±7.8%, 0.80%±0.63%, 80.9%±5.7%, and 99.7%±0.1%, respectively.
Parot, Vicente; Lim, Daryl; González, Germán; Traverso, Giovanni; Nishioka, Norman S; Vakoc, Benjamin J; Durr, Nicholas J
2013-07-01
While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique which allows high spatial frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging.
A three-dimensional quality-guided phase unwrapping method for MR elastography
NASA Astrophysics Data System (ADS)
Wang, Huifang; Weaver, John B.; Perreard, Irina I.; Doyley, Marvin M.; Paulsen, Keith D.
2011-07-01
Magnetic resonance elastography (MRE) uses accumulated phases that are acquired at multiple, uniformly spaced relative phase offsets, to estimate harmonic motion information. Heavily wrapped phase occurs when the motion is large and unwrapping procedures are necessary to estimate the displacements required by MRE. Two unwrapping methods were developed and compared in this paper. The first method is a sequentially applied approach. The three-dimensional MRE phase image block for each slice was processed by two-dimensional unwrapping followed by a one-dimensional phase unwrapping approach along the phase-offset direction. This unwrapping approach generally works well for low noise data. However, there are still cases where the two-dimensional unwrapping method fails when noise is high. In this case, the baseline of the corrupted regions within an unwrapped image will not be consistent. Instead of separating the two-dimensional and one-dimensional unwrapping in a sequential approach, an interleaved three-dimensional quality-guided unwrapping method was developed to combine both the two-dimensional phase image continuity and one-dimensional harmonic motion information. The quality of one-dimensional harmonic motion unwrapping was used to guide the three-dimensional unwrapping procedures and it resulted in stronger guidance than in the sequential method. In this work, in vivo results generated by the two methods were compared.
Fractal Dimensionality of Pore and Grain Volume of a Siliciclastic Marine Sand
NASA Astrophysics Data System (ADS)
Reed, A. H.; Pandey, R. B.; Lavoie, D. L.
Three-dimensional (3D) spatial distributions of pore and grain volumes were determined from high-resolution computer tomography (CT) images of resin-impregnated marine sands. Using a linear gradient extrapolation method, cubic three-dimensional samples were constructed from two-dimensional CT images. Image porosity (0.37) was found to be consistent with the estimate of porosity by water weight loss technique (0.36). Scaling of the pore volume (Vp) with the linear size (L), V~LD provides the fractal dimensionalities of the pore volume (D=2.74+/-0.02) and grain volume (D=2.90+/-0.02) typical for sedimentary materials.
Hyperspectral image classification based on local binary patterns and PCANet
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang
2018-04-01
Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high dimensional vectors. Next, the extracted features of a specified position are transformed to a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on real hyperspectral dataset demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.
1995-04-01
We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
Synthetic aperture radar target detection, feature extraction, and image formation techniques
NASA Technical Reports Server (NTRS)
Li, Jian
1994-01-01
This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.
Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou
2017-07-01
In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.
Three-dimensional head anthropometric analysis
NASA Astrophysics Data System (ADS)
Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James
2003-05-01
Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment and treatment of facial abnormalities in craniofacial care but are subject to errors because of perspective, projection, lack metric and 3-dimensional information. One can find in the literature a variety of methods to generate 3-dimensional facial images such as laser scans, stereo-photogrammetry, infrared imaging and even CT however each of these methods contain inherent limitations and as such no systems are in common clinical use. In this paper we will focus on development of indirect 3-dimensional landmark location and measurement of facial soft-tissue with light-based techniques. In this paper we will statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We will also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (or polygonal mesh) including linear distances currently used in anthropometry. The measurements will be tested against a validated 3-dimensional digitizer (MicroScribe 3DX).
2014-09-01
to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system that...research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging ...i) developed time-of- flight extraction algorithms to perform USCT, (ii) developing image reconstruction algorithms for USCT, (iii) developed
Large-field-of-view wide-spectrum artificial reflecting superposition compound eyes
NASA Astrophysics Data System (ADS)
Huang, Chi-Chieh
The study of the imaging principles of natural compound eyes has become an active area of research and has fueled the advancement of modern optics with many attractive design features beyond those available with conventional technologies. Most prominent among all compound eyes is the reflecting superposition compound eyes (RSCEs) found in some decapods. They are extraordinary imaging systems with numerous optical features such as minimum chromatic aberration, wide-angle field of view (FOV), high sensitivity to light and superb acuity to motion. Inspired by their remarkable visual system, we were able to implement the unique lens-free, reflection-based imaging mechanisms into a miniaturized, large-FOV optical imaging device operating at the wide visible spectrum to minimize chromatic aberration without any additional post-image processing. First, two micro-transfer printing methods, a multiple and a shear-assisted transfer printing technique, were studied and discussed to realize life-sized artificial RSCEs. The processes exploited the differential adhesive tendencies of the microstructures formed between a donor and a transfer substrate to accomplish an efficient release and transfer process. These techniques enabled conformal wrapping of three-dimensional (3-D) microstructures, initially fabricated in two-dimensional (2-D) layouts with standard fabrication technology onto a wide range of surfaces with complex and curvilinear shapes. Final part of this dissertation was focused on implementing the key operational features of the natural RSCEs into large-FOV, wide-spectrum artificial RSCEs as an optical imaging device suitable for the wide visible spectrum. Our devices can form real, clear images based on reflection rather than refraction, hence avoiding chromatic aberration due to dispersion by the optical materials. Compared to the performance of conventional refractive lenses of comparable size, our devices demonstrated minimum chromatic aberration, exceptional FOV up to 165o without distortion, modest spherical aberrations and comparable imaging quality without any post-image processing. Together with an augmenting cruciform pattern surrounding each focused image, our devices possessed enhanced, dynamic motion-tracking capability ideal for diverse applications in military, security, search and rescue, night navigation, medical imaging and astronomy. In the future, due to its reflection-based operating principles, it can be further extended into mid- and far-infrared for more demanding applications.
Next Generation Image-Based Phenotyping of Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Cheng, H.; Larson, B. G.; Craft, E. J.; Shaff, J. E.; Schneider, D. J.; Piñeros, M. A.; Kochian, L. V.
2016-12-01
The development of the Plant Root Imaging and Data Acquisition (PRIDA) hardware/software system enables researchers to collect digital images, along with all the relevant experimental details, of a range of hydroponically grown agricultural crop roots for 2D and 3D trait analysis. Previous efforts of image-based root phenotyping focused on young cereals, such as rice; however, there is a growing need to measure both older and larger root systems, such as those of maize and sorghum, to improve our understanding of the underlying genetics that control favorable rooting traits for plant breeding programs to combat the agricultural risks presented by climate change. Therefore, a larger imaging apparatus has been prototyped for capturing 3D root architecture with an adaptive control system and innovative plant root growth media that retains three-dimensional root architectural features. New publicly available multi-platform software has been released with considerations for both high throughput (e.g., 3D imaging of a single root system in under ten minutes) and high portability (e.g., support for the Raspberry Pi computer). The software features unified data collection, management, exploration and preservation for continued trait and genetics analysis of root system architecture. The new system makes data acquisition efficient and includes features that address the needs of researchers and technicians, such as reduced imaging time, semi-automated camera calibration with uncertainty characterization, and safe storage of the critical experimental data.
Automated detection of Schlemm's canal in spectral-domain optical coherence tomography
NASA Astrophysics Data System (ADS)
Tom, Manu; Ramakrishnan, Vignesh; van Oterendorp, Christian; Deserno, Thomas M.
2015-03-01
Recent advances in optical coherence tomography (OCT) technology allow in vivo imaging of the complex network of intra-scleral aqueous veins in the anterior segment of the eye. Pathological changes in this network, draining the aqueous humor from the eye, are considered to play a role in intraocular pressure elevation, which can lead to glaucoma, one of the major causes of blindness in the world. Through acquisition of OCT volume scans of the anterior eye segment, we aim at reconstructing the three dimensional network of aqueous veins in healthy and glaucomatous subjects. A novel algorithm for segmentation of the three-dimensional (3D) vessel system in human Schlemms canal is presented analyzing frames of spectral domain OCT (SD-OCT) of the eyes surface in either horizontal or vertical orientation. Distortions such as vertical stripes are caused by the superficial blood vessels in the conjunctiva and the episclera. They are removed in the discrete Fourier domain (DFT) masking particular frequencies. Feature-based rigid registration of these noise-filtered images is then performed using the scale invariant feature transform (SIFT). Segmentation of the vessels deep in the sclera originating at or in the vicinity of or having indirect connection to the Schlemm's canal is then performed with 3D region growing technique. The segmented vessels are visualized in 3D providing diagnostically relevant information to the physicians. A proof-of-concept study was performed on a healthy volunteer before and after a pharmaceutical narrowing of Schlemm's canal. A relative decreases 17% was measured based on manual ground truth and the image processing method.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Lu, Guolan; Sechopoulos, Ioannis; Fei, Baowei
2014-03-01
Digital breast tomosynthesis (DBT) is a pseudo-three-dimensional x-ray imaging modality proposed to decrease the effect of tissue superposition present in mammography, potentially resulting in an increase in clinical performance for the detection and diagnosis of breast cancer. Tissue classification in DBT images can be useful in risk assessment, computer-aided detection and radiation dosimetry, among other aspects. However, classifying breast tissue in DBT is a challenging problem because DBT images include complicated structures, image noise, and out-of-plane artifacts due to limited angular tomographic sampling. In this project, we propose an automatic method to classify fatty and glandular tissue in DBT images. First, the DBT images are pre-processed to enhance the tissue structures and to decrease image noise and artifacts. Second, a global smooth filter based on L0 gradient minimization is applied to eliminate detailed structures and enhance large-scale ones. Third, the similar structure regions are extracted and labeled by fuzzy C-means (FCM) classification. At the same time, the texture features are also calculated. Finally, each region is classified into different tissue types based on both intensity and texture features. The proposed method is validated using five patient DBT images using manual segmentation as the gold standard. The Dice scores and the confusion matrix are utilized to evaluate the classified results. The evaluation results demonstrated the feasibility of the proposed method for classifying breast glandular and fat tissue on DBT images.
NASA Astrophysics Data System (ADS)
Ray, L.; Jordan, M.; Arcone, S. A.; Kaluzienski, L. M.; Koons, P. O.; Lever, J.; Walker, B.; Hamilton, G. S.
2017-12-01
The McMurdo Shear Zone (MSZ) is a narrow, intensely crevassed strip tens of km long separating the Ross and McMurdo ice shelves (RIS and MIS) and an important pinning feature for the RIS. We derive local velocity fields within the MSZ from two consecutive annual ground penetrating radar (GPR) datasets that reveal complex firn and marine ice crevassing; no englacial features are evident. The datasets were acquired in 2014 and 2015 using robot-towed 400 MHz and 200 MHz GPR over a 5 km x 5.7 km grid. 100 west-to-east transects at 50 m spacing provide three-dimensional maps that reveal the length of many firn crevasses, and their year-to-year structural evolution. Hand labeling of crevasse cross sections near the MSZ western and eastern boundaries reveal matching firn and marine ice crevasses, and more complex and chaotic features between these boundaries. By matching crevasse features from year to year both on the eastern and western boundaries and within the chaotic region, marine ice crevasses along the western and eastern boundaries are shown to align directly with firn crevasses, and the local velocity field is estimated and compared with data from strain rate surveys and remote sensing. While remote sensing provides global velocity fields, crevasse matching indicates greater local complexity attributed to faulting, folding, and rotation.
75 FR 77885 - Government-Owned Inventions; Availability for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... three dimensional vascular networks from medical and basic research images. Deregulation of angiogenesis...
Lesson learned and dispelled myths: three-dimensional imaging of the human vagina.
Barnhart, Kurt T; Pretorius, E Scott; Malamud, Daniel
2004-05-01
Three-dimensional imaging of the human vagina demonstrates that the cross section can be a "W," rather than an "H," and that intravaginal gel can ascend into the endocervix and presumably into the endometrium.
Motion Detection in Ultrasound Image-Sequences Using Tensor Voting
NASA Astrophysics Data System (ADS)
Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka
2008-05-01
Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. In our method, frame rate of the system at distance of 150 mm reaches 5000 frame/s. Sparse array and short duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method which adopts both coded excitation and synthetic aperture techniques. In this study, the basis of applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated ultrasound three-dimensional image sequences.
A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy
NASA Astrophysics Data System (ADS)
Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.
1989-05-01
A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang
2017-12-01
Color is one of the most stable attributes of vehicles and often used as a valuable cue in some important applications. Various complex environmental factors, such as illumination, weather, noise and etc., result in the visual characteristics of the vehicle color being obvious diversity. Vehicle color recognition in complex environments has been a challenging task. The state-of-the-arts methods roughly take the whole image for color recognition, but many parts of the images such as car windows; wheels and background contain no color information, which will have negative impact on the recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features has been proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method has been proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on the recognition accuracy; (2) dual-orientational dimensionality reduction strategy has been designed to greatly reduce the dimensionality of deep features that are learnt from CNN, which will greatly mitigate the storage and computational burden of the subsequent processing, while improving the recognition accuracy. Furthermore, linear support vector machine is adopted as the classifier to train the dimensionality reduced features to obtain the recognition model. The experimental results on public dataset demonstrate that the proposed method can achieve superior recognition performance over the state-of-the-arts methods.
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D`Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three dimensional computer model and a single two dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any apriori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier to noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
NASA Astrophysics Data System (ADS)
Gries, Katharina Ines; Schlechtweg, Julian; Hille, Pascal; Schörmann, Jörg; Eickhoff, Martin; Volz, Kerstin
2017-10-01
Scanning transmission electron microscopy is an extremely useful method to image small features with a size in the range of a few nanometers and below. But it must be taken into account that such images are projections of the sample and do not necessarily represent the real three dimensional structure of the specimen. By applying electron tomography this problem can be overcome. In our work GaN nanowires including InGaN nanodisks were investigated. To reduce the effect of the missing wedge a single nanowire was removed from the underlying silicon substrate using a manipulator needle and attached to a tomography holder. Since this sample exhibits the same thickness of few tens of nanometers in all directions normal to the tilt axis, this procedure allows a sample tilt of ±90°. Reconstruction of the received data reveals a split of the InGaN nanodisks into a horizontal continuation of the (0 0 0 1 bar) central facet and a declined {1 0 1 bar l} facet (with l = -2 or -3).
Three-dimensional propagation in near-field tomographic X-ray phase retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruhlandt, Aike, E-mail: aruhlan@gwdg.de; Salditt, Tim
An extension of phase retrieval algorithms for near-field X-ray (propagation) imaging to three dimensions is presented, enhancing the quality of the reconstruction by exploiting previously unused three-dimensional consistency constraints. This paper presents an extension of phase retrieval algorithms for near-field X-ray (propagation) imaging to three dimensions, enhancing the quality of the reconstruction by exploiting previously unused three-dimensional consistency constraints. The approach is based on a novel three-dimensional propagator and is derived for the case of optically weak objects. It can be easily implemented in current phase retrieval architectures, is computationally efficient and reduces the need for restrictive prior assumptions, resultingmore » in superior reconstruction quality.« less
Sparks, Rachel; Madabhushi, Anant
2016-01-01
Content-based image retrieval (CBIR) retrieves database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, partial class label, into a ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score which enables discrimination between prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies, yielded an area under the precision recall curve (AUPRC) of 0.53 ± 0.03 comparatively a CBIR with Principal Component Analysis (PCA) to learn a low dimensional space yielded an AUPRC of 0.44 ± 0.01. PMID:27264985
Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform1[C][W][OA
Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.
2011-01-01
A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using a custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799
Spectral feature design in high dimensional multispectral data
NASA Technical Reports Server (NTRS)
Chen, Chih-Chien Thomas; Landgrebe, David A.
1988-01-01
The High resolution Imaging Spectrometer (HIRIS) is designed to acquire images simultaneously in 192 spectral bands in the 0.4 to 2.5 micrometers wavelength region. It will make possible the collection of essentially continuous reflectance spectra at a spectral resolution sufficient to extract significantly enhanced amounts of information from return signals as compared to existing systems. The advantages of such high dimensional data come at a cost of increased system and data complexity. For example, since the finer the spectral resolution, the higher the data rate, it becomes impractical to design the sensor to be operated continuously. It is essential to find new ways to preprocess the data which reduce the data rate while at the same time maintaining the information content of the high dimensional signal produced. Four spectral feature design techniques are developed from the Weighted Karhunen-Loeve Transforms: (1) non-overlapping band feature selection algorithm; (2) overlapping band feature selection algorithm; (3) Walsh function approach; and (4) infinite clipped optimal function approach. The infinite clipped optimal function approach is chosen since the features are easiest to find and their classification performance is the best. After the preprocessed data has been received at the ground station, canonical analysis is further used to find the best set of features under the criterion that maximal class separability is achieved. Both 100 dimensional vegetation data and 200 dimensional soil data were used to test the spectral feature design system. It was shown that the infinite clipped versions of the first 16 optimal features had excellent classification performance. The overall probability of correct classification is over 90 percent while providing for a reduced downlink data rate by a factor of 10.
Unbiased feature selection in learning random forests for high-dimensional data.
Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi
2015-01-01
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This makes RFs have poor accuracy when working with high-dimensional data. Besides that, RFs have bias in the feature selection process where multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in increasing the accuracy and the AUC measures.
Desrosiers, Christian; Hassan, Lama; Tanougast, Camel
2016-01-01
Objective: Predicting the survival outcome of patients with glioblastoma multiforme (GBM) is of key importance to clinicians for selecting the optimal course of treatment. The goal of this study was to evaluate the usefulness of geometric shape features, extracted from MR images, as a potential non-invasive way to characterize GBM tumours and predict the overall survival times of patients with GBM. Methods: The data of 40 patients with GBM were obtained from the Cancer Genome Atlas and Cancer Imaging Archive. The T1 weighted post-contrast and fluid-attenuated inversion-recovery volumes of patients were co-registered and segmented into delineate regions corresponding to three GBM phenotypes: necrosis, active tumour and oedema/invasion. A set of two-dimensional shape features were then extracted slicewise from each phenotype region and combined over slices to describe the three-dimensional shape of these phenotypes. Thereafter, a Kruskal–Wallis test was employed to identify shape features with significantly different distributions across phenotypes. Moreover, a Kaplan–Meier analysis was performed to find features strongly associated with GBM survival. Finally, a multivariate analysis based on the random forest model was used for predicting the survival group of patients with GBM. Results: Our analysis using the Kruskal–Wallis test showed that all but one shape feature had statistically significant differences across phenotypes, with p-value < 0.05, following Holm–Bonferroni correction, justifying the analysis of GBM tumour shapes on a per-phenotype basis. Furthermore, the survival analysis based on the Kaplan–Meier estimator identified three features derived from necrotic regions (i.e. Eccentricity, Extent and Solidity) that were significantly correlated with overall survival (corrected p-value < 0.05; hazard ratios between 1.68 and 1.87). In the multivariate analysis, features from necrotic regions gave the highest accuracy in predicting the survival group of patients, with a mean area under the receiver-operating characteristic curve (AUC) of 63.85%. Combining the features of all three phenotypes increased the mean AUC to 66.99%, suggesting that shape features from different phenotypes can be used in a synergic manner to predict GBM survival. Conclusion: Results show that shape features, in particular those extracted from necrotic regions, can be used effectively to characterize GBM tumours and predict the overall survival of patients with GBM. Advances in knowledge: Simple volumetric features have been largely used to characterize the different phenotypes of a GBM tumour (i.e. active tumour, oedema and necrosis). This study extends previous work by considering a wide range of shape features, extracted in different phenotypes, for the prediction of survival in patients with GBM. PMID:27781499
Gosnell, Jordan; Pietila, Todd; Samuel, Bennett P; Kurup, Harikrishnan K N; Haw, Marcus P; Vettukattil, Joseph J
2016-12-01
Three-dimensional (3D) printing is an emerging technology aiding diagnostics, education, and interventional, and surgical planning in congenital heart disease (CHD). Three-dimensional printing has been derived from computed tomography, cardiac magnetic resonance, and 3D echocardiography. However, individually the imaging modalities may not provide adequate visualization of complex CHD. The integration of the strengths of two or more imaging modalities has the potential to enhance visualization of cardiac pathomorphology. We describe the feasibility of hybrid 3D printing from two imaging modalities in a patient with congenitally corrected transposition of the great arteries (L-TGA). Hybrid 3D printing may be useful as an additional tool for cardiologists and cardiothoracic surgeons in planning interventions in children and adults with CHD.
Adaptation of an articulated fetal skeleton model to three-dimensional fetal image data
NASA Astrophysics Data System (ADS)
Klinder, Tobias; Wendland, Hannes; Wachter-Stehle, Irina; Roundhill, David; Lorenz, Cristian
2015-03-01
The automatic interpretation of three-dimensional fetal images poses specific challenges compared to other three-dimensional diagnostic data, especially since the orientation of the fetus in the uterus and the position of the extremities is highly variable. In this paper, we present a comprehensive articulated model of the fetal skeleton and the adaptation of the articulation for pose estimation in three-dimensional fetal images. The model is composed out of rigid bodies where the articulations are represented as rigid body transformations. Given a set of target landmarks, the model constellation can be estimated by optimization of the pose parameters. Experiments are carried out on 3D fetal MRI data yielding an average error per case of 12.03+/-3.36 mm between target and estimated landmark positions.
Non-invasive measurement of proppant pack deformation
Walsh, Stuart D. C.; Smith, Megan; Carroll, Susan A.; ...
2016-05-26
In this study, we describe a method to non-invasively study the movement of proppant packs at the sub-fracture scale by applying three-dimensional digital image correlation techniques to X-ray tomography data. Proppant movement is tracked in a fractured core of Marcellus shale placed under a series of increasing confining pressures up to 10,000 psi. The analysis reveals the sudden failure of a region of the proppant pack, accompanied by the large-scale rearrangement of grains across the entire fracture surface. The failure of the pack coincides with the appearance of vortex-like grain motions similar to features observed in biaxial compression of twomore » dimensional granular assemblies.« less
NASA Astrophysics Data System (ADS)
Yamauchi, Toyohiko; Iwai, Hidenao; Yamashita, Yutaka
2013-03-01
We succeeded in utilizing our low-coherent quantitative phase microscopy (LC-QPM) to achieve label-free and three-dimensional imaging of string-like structures bridging the free-space between live cells. In past studies, three dimensional morphology of the string-like structures between cells had been investigated by electron microscopies and fluorescence microscopies and these structures were called "membrane nanotubes" or "tunneling nanotubes." However, use of electron microscopy inevitably kills these cells and fluorescence microscopy is itself a potentially invasive method. To achieve noninvasive imaging of live cells, we applied our LC-QPM which is a reflection-type, phase resolved and full-field interference microscope employing a low-coherent light source. LC-QPM is able to visualize the three-dimensional morphology of live cells without labeling by means of low-coherence interferometry. The lateral (diffraction limit) and longitudinal (coherence-length) spatial resolution of LC-QPM were respectively 0.49 and 0.93 micrometers and the repeatability of the phase measurement was 0.02 radians (1.0 nm). We successfully obtained three-dimensional morphology of live cultured epithelial cells (cell type: HeLa, derived from cervix cancer) and were able to clearly observe the individual string-like structures interconnecting the cells. When we performed volumetric imaging, a 80 micrometer by 60 micrometer by 6.5 micrometer volume was scanned every 5.67 seconds and 70 frames of a three-dimensional movie were recorded for a duration of 397 seconds. Moreover, the optical phase images gave us detailed information about the three-dimensional morphology of the string-like structure at sub-wavelength resolution. We believe that our LC-QPM will be a useful tool for the study of three-dimensional morphology of live cells.
Laboratory-size three-dimensional water-window x-ray microscope with Wolter type I mirror optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohsuka, Shinji; The Graduate School for the Creation of New Photonics Industries, 1955-1 Kurematsu-cho, Nishi-ku, Hamamatsu-City, 431-1202; Ohba, Akira
2016-01-28
We constructed a laboratory-size three-dimensional water-window x-ray microscope that combines wide-field transmission x-ray microscopy with tomographic reconstruction techniques. It consists of an electron-impact x-ray source emitting oxygen Kα x-rays, Wolter type I grazing incidence mirror optics, and a back-illuminated CCD for x-ray imaging. A spatial resolution limit better than 1.0 line pairs per micrometer was obtained for two-dimensional transmission images, and 1-μm-scale three-dimensional fine structures were resolved.
Automated Coronal Loop Identification using Digital Image Processing Techniques
NASA Astrophysics Data System (ADS)
Lee, J. K.; Gary, G. A.; Newman, T. S.
2003-05-01
The results of a Master's thesis study of computer algorithms for automatic extraction and identification (i.e., collectively, "detection") of optically-thin, 3-dimensional, (solar) coronal-loop center "lines" from extreme ultraviolet and X-ray 2-dimensional images will be presented. The center lines, which can be considered to be splines, are proxies of magnetic field lines. Detecting the loops is challenging because there are no unique shapes, the loop edges are often indistinct, and because photon and detector noise heavily influence the images. Three techniques for detecting the projected magnetic field lines have been considered and will be described in the presentation. The three techniques used are (i) linear feature recognition of local patterns (related to the inertia-tensor concept), (ii) parametric space inferences via the Hough transform, and (iii) topological adaptive contours (snakes) that constrain curvature and continuity. Since coronal loop topology is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information that has also been incorporated into the detection process. Synthesized images have been generated to benchmark the suitability of the three techniques, and the performance of the three techniques on both synthesized and solar images will be presented and numerically evaluated in the presentation. The process of automatic detection of coronal loops is important in the reconstruction of the coronal magnetic field where the derived magnetic field lines provide a boundary condition for magnetic models ( cf. , Gary (2001, Solar Phys., 203, 71) and Wiegelmann & Neukirch (2002, Solar Phys., 208, 233)). . This work was supported by NASA's Office of Space Science - Solar and Heliospheric Physics Supporting Research and Technology Program.
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration processing, a random speckle pattern is projected onto the calibration board with different orientations to establish the correspondences between projector images and camera images. Thereby, dataset for projector calibration are generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experiment results confirm that the proposed method is accurate and reliable for projector calibration.
Zernike phase contrast cryo-electron tomography of whole bacterial cells.
Guerrero-Ferreira, Ricardo C; Wright, Elizabeth R
2014-01-01
Cryo-electron tomography (cryo-ET) provides three-dimensional (3D) structural information of bacteria preserved in a native, frozen-hydrated state. The typical low contrast of tilt-series images, a result of both the need for a low electron dose and the use of conventional defocus phase-contrast imaging, is a challenge for high-quality tomograms. We show that Zernike phase-contrast imaging allows the electron dose to be reduced. This limits movement of gold fiducials during the tilt series, which leads to better alignment and a higher-resolution reconstruction. Contrast is also enhanced, improving visibility of weak features. The reduced electron dose also means that more images at more tilt angles could be recorded, further increasing resolution. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kajiwara, K.; Shobu, T.; Toyokawa, H.; Sato, M.
2014-04-01
A technique for three-dimensional visualization of grain boundaries was developed at BL28B2 at SPring-8. The technique uses white X-ray microbeam diffraction and a rotating slit. Three-dimensional images of small silicon single crystals filled in a plastic tube were successfully obtained using this technique for demonstration purposes. The images were consistent with those obtained by X-ray computed tomography.
Dahdouh, Sonia; Andescavage, Nickie; Yewale, Sayali; Yarish, Alexa; Lanham, Diane; Bulas, Dorothy; du Plessis, Adre J; Limperopoulos, Catherine
2018-02-01
To investigate the ability of three-dimensional (3D) MRI placental shape and textural features to predict fetal growth restriction (FGR) and birth weight (BW) for both healthy and FGR fetuses. We recruited two groups of pregnant volunteers between 18 and 39 weeks of gestation; 46 healthy subjects and 34 FGR. Both groups underwent fetal MR imaging on a 1.5 Tesla GE scanner using an eight-channel receiver coil. We acquired T2-weighted images on either the coronal or the axial plane to obtain MR volumes with a slice thickness of either 4 or 8 mm covering the full placenta. Placental shape features (volume, thickness, elongation) were combined with textural features; first order textural features (mean, variance, kurtosis, and skewness of placental gray levels), as well as, textural features computed on the gray level co-occurrence and run-length matrices characterizing placental homogeneity, symmetry, and coarseness. The features were used in two machine learning frameworks to predict FGR and BW. The proposed machine-learning based method using shape and textural features identified FGR pregnancies with 86% accuracy, 77% precision and 86% recall. BW estimations were 0.3 ± 13.4% (mean percentage error ± standard error) for healthy fetuses and -2.6 ± 15.9% for FGR. The proposed FGR identification and BW estimation methods using in utero placental shape and textural features computed on 3D MR images demonstrated high accuracy in our healthy and high-risk cohorts. Future studies to assess the evolution of each feature with regard to placental development are currently underway. 2 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:449-458. © 2017 International Society for Magnetic Resonance in Medicine.
Neukamm, Christian; Try, Kirsti; Norgård, Gunnar; Brun, Henrik
2014-01-01
A technique that uses two-dimensional images to create a knowledge-based, three-dimensional model was tested and compared to magnetic resonance imaging. Measurement of right ventricular volumes and function is important in the follow-up of patients after pulmonary valve replacement. Magnetic resonance imaging is the gold standard for volumetric assessment. Echocardiographic methods have been validated and are attractive alternatives. Thirty patients with tetralogy of Fallot (25 ± 14 years) after pulmonary valve replacement were examined. Magnetic resonance imaging volumetric measurements and echocardiography-based three-dimensional reconstruction were performed. End-diastolic volume, end-systolic volume, and ejection fraction were measured, and the results were compared. Magnetic resonance imaging measurements gave coefficient of variation in the intraobserver study of 3.5, 4.6, and 5.3 and in the interobserver study of 3.6, 5.9, and 6.7 for end-diastolic volume, end-systolic volume, and ejection fraction, respectively. Echocardiographic three-dimensional reconstruction was highly feasible (97%). In the intraobserver study, the corresponding values were 6.0, 7.0, and 8.9 and in the interobserver study 7.4, 10.8, and 13.4. In comparison of the methods, correlations with magnetic resonance imaging were r = 0.91, 0.91, and 0.38, and the corresponding coefficient of variations were 9.4, 10.8, and 14.7. Echocardiography derived volumes (mL/m(2)) were significantly higher than magnetic resonance imaging volumes in end-diastolic volume 13.7 ± 25.6 and in end-systolic volume 9.1 ± 17.0 (both P < .05). The knowledge-based three-dimensional right ventricular volume method was highly feasible. Intra and interobserver variabilities were satisfactory. Agreement with magnetic resonance imaging measurements for volumes was reasonable but unsatisfactory for ejection fraction. Knowledge-based reconstruction may replace magnetic resonance imaging measurements for serial follow-up, whereas magnetic resonance imaging should be used for surgical decision making.
3D fluorescence anisotropy imaging using selective plane illumination microscopy.
Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico
2015-08-24
Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein.
Cartography of irregularly shaped satellites
NASA Technical Reports Server (NTRS)
Batson, R. M.; Edwards, Kathleen
1987-01-01
Irregularly shaped satellites, such as Phobos and Amalthea, do not lend themselves to mapping by conventional methods because mathematical projections of their surfaces fail to convey an accurate visual impression of the landforms, and because large and irregular scale changes make their features difficult to measure on maps. A digital mapping technique has therefore been developed by which maps are compiled from digital topographic and spacecraft image files. The digital file is geometrically transformed as desired for human viewing, either on video screens or on hard copy. Digital files of this kind consist of digital images superimposed on another digital file representing the three-dimensional form of a body.
Steganalysis using logistic regression
NASA Astrophysics Data System (ADS)
Lubenko, Ivans; Ker, Andrew D.
2011-02-01
We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-art 686-dimensional SPAM feature set, in three image sets.
Walton, Katherine D; Kolterud, Asa
2014-09-04
Most morphogenetic processes in the fetal intestine have been inferred from thin sections of fixed tissues, providing snapshots of changes over developmental stages. Three-dimensional information from thin serial sections can be challenging to interpret because of the difficulty of reconstructing serial sections perfectly and maintaining proper orientation of the tissue over serial sections. Recent findings by Grosse et al., 2011 highlight the importance of three- dimensional information in understanding morphogenesis of the developing villi of the intestine(1). Three-dimensional reconstruction of singly labeled intestinal cells demonstrated that the majority of the intestinal epithelial cells contact both the apical and basal surfaces. Furthermore, three-dimensional reconstruction of the actin cytoskeleton at the apical surface of the epithelium demonstrated that the intestinal lumen is continuous and that secondary lumens are an artifact of sectioning. Those two points, along with the demonstration of interkinetic nuclear migration in the intestinal epithelium, defined the developing intestinal epithelium as a pseudostratified epithelium and not stratified as previously thought(1). The ability to observe the epithelium three-dimensionally was seminal to demonstrating this point and redefining epithelial morphogenesis in the fetal intestine. With the evolution of multi-photon imaging technology and three-dimensional reconstruction software, the ability to visualize intact, developing organs is rapidly improving. Two-photon excitation allows less damaging penetration deeper into tissues with high resolution. Two-photon imaging and 3D reconstruction of the whole fetal mouse intestines in Walton et al., 2012 helped to define the pattern of villus outgrowth(2). Here we describe a whole organ culture system that allows ex vivo development of villi and extensions of that culture system to allow the intestines to be three-dimensionally imaged during their development.
Wei, Xu-Biao; Xu, Jie; Li, Nan; Yu, Ying; Shi, Jie; Guo, Wei-Xing; Cheng, Hong-Yan; Wu, Meng-Chao; Lau, Wan-Yee; Cheng, Shu-Qun
2016-03-01
Accurate assessment of characteristics of tumor and portal vein tumor thrombus is crucial in the management of hepatocellular carcinoma. Comparison of the three-dimensional imaging with multiple-slice computed tomography in the diagnosis and treatment of hepatocellular carcinoma with portal vein tumor thrombus. Patients eligible for surgical resection were divided into the three-dimensional imaging group or the multiple-slice computed tomography group according to the type of preoperative assessment. The clinical data were collected and compared. 74 patients were enrolled into this study. The weighted κ values for comparison between the thrombus type based on preoperative evaluation and intraoperative findings were 0.87 for the three-dimensional reconstruction group (n = 31) and 0.78 for the control group (n = 43). Three-dimensional reconstruction was significantly associated with a higher rate of en-bloc resection of tumor and thrombus (P = 0.025). Using three-dimensional reconstruction, significant correlation existed between the predicted and actual volumes of the resected specimens (r = 0.82, P < 0.01), as well as the predicted and actual resection margins (r = 0.97, P < 0.01). Preoperative three-dimensional reconstruction significantly decreased tumor recurrence and tumor-related death, with hazard ratios of 0.49 (95% confidential interval, 0.27-0.90) and 0.41 (95% confidential interval, 0.21-0.78), respectively. For hepatocellular carcinoma with portal vein tumor thrombus, three-dimensional imaging was efficient in facilitating surgical treatment and benefiting postoperative survivals. Copyright © 2015 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
An improved three-dimensional non-scanning laser imaging system based on digital micromirror device
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Lei, Jieyu; Zhai, Yu; Timofeev, Alexander N.
2018-01-01
Nowadays, there are two main methods to realize three-dimensional non-scanning laser imaging detection, which are detection method based on APD and detection method based on Streak Tube. However, the detection method based on APD possesses some disadvantages, such as small number of pixels, big pixel interval and complex supporting circuit. The detection method based on Streak Tube possesses some disadvantages, such as big volume, bad reliability and high cost. In order to resolve the above questions, this paper proposes an improved three-dimensional non-scanning laser imaging system based on Digital Micromirror Device. In this imaging system, accurate control of laser beams and compact design of imaging structure are realized by several quarter-wave plates and a polarizing beam splitter. The remapping fiber optics is used to sample the image plane of receiving optical lens, and transform the image into line light resource, which can realize the non-scanning imaging principle. The Digital Micromirror Device is used to convert laser pulses from temporal domain to spatial domain. The CCD with strong sensitivity is used to detect the final reflected laser pulses. In this paper, we also use an algorithm which is used to simulate this improved laser imaging system. In the last, the simulated imaging experiment demonstrates that this improved laser imaging system can realize three-dimensional non-scanning laser imaging detection.
Nano-Optics for Chemical and Materials Characterization
NASA Astrophysics Data System (ADS)
Beversluis, Michael; Stranick, Stephan
2007-03-01
Light microscopy can provide non-destructive, real-time, three-dimensional imaging with chemically-specific contrast, but diffraction frequently limits the resolution to roughly 200 nm. Recently, structured illumination techniques have allowed fluorescence imaging to reach 50 nm resolution [1]. Since these fluorescence techniques were developed for use in microbiology, a key challenge is to take the resolution-enhancing features and apply them to contrast mechanisms like vibrational spectroscopy (e.g., Raman and CARS microscopy) that provide morphological and chemically specific imaging.. We are developing a new hybrid technique that combines the resolution enhancement of structured illumination microscopy with scanning techniques that can record hyperspectral images with 100 nm spatial resolution. We will show such superresolving images of semiconductor nanostructures and discuss the advantages and requirements for this technique. Referenence: 1. M. G. L. Gustafsson, P. Natl. Acad. Sci. USA 102, 13081-13086 (2005).
Through thick and thin: a pictorial review of the endometrium.
Caserta, Melanie P; Bolan, Candice; Clingan, M Jennings
2016-12-01
The purpose of this pictorial review is to describe the normal appearance of the endometrium and to provide radiologists with an overview of endometrial pathology utilizing case examples. The normal appearance of the endometrium varies by age, menstrual phase, and hormonal status with differing degrees of acceptable endometrial thickness. Endometrial pathology most often manifests as either focal or diffuse endometrial thickening, and patients frequently present with abnormal vaginal bleeding. Endovaginal ultrasound (US) is the first-line modality for imaging the endometrium. This article will discuss the endometrial measurements used to direct management and workup of symptomatic patients and will discuss when additional imaging may be appropriate. Three-dimensional US is complementary to two-dimensional ultrasound and can be used as a problem-solving technique. Saline-infused sonohysterogram is a useful adjunct to delineate and detect focal intracavitary abnormalities, such as polyps and submucosal fibroids. Magnetic resonance imaging is the preferred imaging modality for staging endometrial cancer because it best depicts the depth of myometrial invasion and cervical stromal involvement. Unique imaging features and complications of endometrial ablation will be introduced. At the completion of this article, the reader will understand the spectrum of normal endometrial findings and will understand the workup of common endometrial pathology.
The use of computer imaging techniques to visualize cardiac muscle cells in three dimensions.
Marino, T A; Cook, P N; Cook, L T; Dwyer, S J
1980-11-01
Atrial muscle cells and atrioventricular bundle cells were reconstructed using a computer-assisted three-dimensional reconstruction system. This reconstruction technique permitted these cells to be viewed from any direction. The cell surfaces were approximated using triangular tiles, and this optimization technique for cell reconstruction allowed for the computation of cell surface area and cell volume. A transparent mode is described which enables the investigator to examine internal cellular features such as the shape and location of the nucleus. In addition, more than one cell can be displayed simultaneously, and, therefore, spatial relationships are preserved and intercellular relationships viewed directly. The use of computer imaging techniques allows for a more complete collection of quantitative morphological data and also the visualization of the morphological information gathered.
Ge, Jiajia; Zhu, Banghe; Regalado, Steven; Godavarty, Anuradha
2008-01-01
Hand-held based optical imaging systems are a recent development towards diagnostic imaging of breast cancer. To date, all the hand-held based optical imagers are used to perform only surface mapping and target localization, but are not capable of demonstrating tomographic imaging. Herein, a novel hand-held probe based optical imager is developed towards three-dimensional (3-D) optical tomography studies. The unique features of this optical imager, which primarily consists of a hand-held probe and an intensified charge coupled device detector, are its ability to; (i) image large tissue areas (5×10 sq. cm) in a single scan, (ii) perform simultaneous multiple point illumination and collection, thus reducing the overall imaging time; and (iii) adapt to varying tissue curvatures, from a flexible probe head design. Experimental studies are performed in the frequency domain on large slab phantoms (∼650 ml) using fluorescence target(s) under perfect uptake (1:0) contrast ratios, and varying target depths (1–2 cm) and X-Y locations. The effect of implementing simultaneous over sequential multiple point illumination towards 3-D tomography is experimentally demonstrated. The feasibility of 3-D optical tomography studies has been demonstrated for the first time using a hand-held based optical imager. Preliminary fluorescence-enhanced optical tomography studies are able to reconstruct 0.45 ml target(s) located at different target depths (1–2 cm). However, the depth recovery was limited as the actual target depth increased, since only reflectance measurements were acquired. Extensive tomography studies are currently carried out to determine the resolution and performance limits of the imager on flat and curved phantoms. PMID:18697559
Ge, Jiajia; Zhu, Banghe; Regalado, Steven; Godavarty, Anuradha
2008-07-01
Hand-held based optical imaging systems are a recent development towards diagnostic imaging of breast cancer. To date, all the hand-held based optical imagers are used to perform only surface mapping and target localization, but are not capable of demonstrating tomographic imaging. Herein, a novel hand-held probe based optical imager is developed towards three-dimensional (3-D) optical tomography studies. The unique features of this optical imager, which primarily consists of a hand-held probe and an intensified charge coupled device detector, are its ability to; (i) image large tissue areas (5 x 10 sq. cm) in a single scan, (ii) perform simultaneous multiple point illumination and collection, thus reducing the overall imaging time; and (iii) adapt to varying tissue curvatures, from a flexible probe head design. Experimental studies are performed in the frequency domain on large slab phantoms (approximately 650 ml) using fluorescence target(s) under perfect uptake (1:0) contrast ratios, and varying target depths (1-2 cm) and X-Y locations. The effect of implementing simultaneous over sequential multiple point illumination towards 3-D tomography is experimentally demonstrated. The feasibility of 3-D optical tomography studies has been demonstrated for the first time using a hand-held based optical imager. Preliminary fluorescence-enhanced optical tomography studies are able to reconstruct 0.45 ml target(s) located at different target depths (1-2 cm). However, the depth recovery was limited as the actual target depth increased, since only reflectance measurements were acquired. Extensive tomography studies are currently carried out to determine the resolution and performance limits of the imager on flat and curved phantoms.
ERIC Educational Resources Information Center
Cody, Jeremy A.; Craig, Paul A.; Loudermilk, Adam D.; Yacci, Paul M.; Frisco, Sarah L.; Milillo, Jennifer R.
2012-01-01
A novel stereochemistry lesson was prepared that incorporated both handheld molecular models and embedded virtual three-dimensional (3D) images. The images are fully interactive and eye-catching for the students; methods for preparing 3D molecular images in Adobe Acrobat are included. The lesson was designed and implemented to showcase the 3D…
Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.
Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling
2018-04-01
In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular, and outperform traditional non-deep methods. However, without label information, most state-of-the-art unsupervised deep hashing (DH) algorithms suffer from severe performance degradation for unsupervised scenarios. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework that has two main contributions: 1) we convert the unsupervised DH model into supervised by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximumly preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over the state-of-the-art unsupervised hashing algorithms.
Real time three dimensional sensing system
Gordon, S.J.
1996-12-31
The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.
Real time three dimensional sensing system
Gordon, Steven J.
1996-01-01
The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane.
Using Three-Dimensional Interactive Graphics To Teach Equipment Procedures.
ERIC Educational Resources Information Center
Hamel, Cheryl J.; Ryan-Jones, David L.
1997-01-01
Focuses on how three-dimensional graphical and interactive features of computer-based instruction can enhance learning and support human cognition during technical training of equipment procedures. Presents guidelines for using three-dimensional interactive graphics to teach equipment procedures based on studies of the effects of graphics, motion,…
Guppy-Coles, Kristyan B; Prasad, Sandhir B; Smith, Kym C; Hillier, Samuel; Lo, Ada; Atherton, John J
2015-06-01
We aimed to determine the feasibility of training cardiac nurses to evaluate left ventricular function utilising a semi-automated, workstation-based protocol on three dimensional echocardiography images. Assessment of left ventricular function by nurses is an attractive concept. Recent developments in three dimensional echocardiography coupled with border detection assistance have reduced inter- and intra-observer variability and analysis time. This could allow abbreviated training of nurses to assess cardiac function. A comparative, diagnostic accuracy study evaluating left ventricular ejection fraction assessment utilising a semi-automated, workstation-based protocol performed by echocardiography-naïve nurses on previously acquired three dimensional echocardiography images. Nine cardiac nurses underwent two brief lectures about cardiac anatomy, physiology and three dimensional left ventricular ejection fraction assessment, before a hands-on demonstration in 20 cases. We then selected 50 cases from our three dimensional echocardiography library based on optimal image quality with a broad range of left ventricular ejection fractions, which was quantified by two experienced sonographers and the average used as the comparator for the nurses. Nurses independently measured three dimensional left ventricular ejection fraction using the Auto lvq package with semi-automated border detection. The left ventricular ejection fraction range was 25-72% (70% with a left ventricular ejection fraction <55%). All nurses showed excellent agreement with the sonographers. Minimal intra-observer variability was noted on both short-term (same day) and long-term (>2 weeks later) retest. It is feasible to train nurses to measure left ventricular ejection fraction utilising a semi-automated, workstation-based protocol on previously acquired three dimensional echocardiography images. Further study is needed to determine the feasibility of training nurses to acquire three dimensional echocardiography images on real-world patients to measure left ventricular ejection fraction. Nurse-performed evaluation of left ventricular function could facilitate the broader application of echocardiography to allow cost-effective screening and monitoring for left ventricular dysfunction in high-risk populations. © 2014 John Wiley & Sons Ltd.
Development of AN Innovative Three-Dimensional Complete Body Screening Device - 3D-CBS
NASA Astrophysics Data System (ADS)
Crosetto, D. B.
2004-07-01
This article describes an innovative technological approach that increases the efficiency with which a large number of particles (photons) can be detected and analyzed. The three-dimensional complete body screening (3D-CBS) combines the functional imaging capability of the Positron Emission Tomography (PET) with those of the anatomical imaging capability of Computed Tomography (CT). The novel techniques provide better images in a shorter time with less radiation to the patient. A primary means of accomplishing this is the use of a larger solid angle, but this requires a new electronic technique capable of handling the increased data rate. This technique, combined with an improved and simplified detector assembly, enables executing complex real-time algorithms and allows more efficiently use of economical crystals. These are the principal features of this invention. A good synergy of advanced techniques in particle detection, together with technological progress in industry (latest FPGA technology) and simple, but cost-effective ideas provide a revolutionary invention. This technology enables over 400 times PET efficiency improvement at once compared to two to three times improvements achieved every five years during the past decades. Details of the electronics are provided, including an IBM PC board with a parallel-processing architecture implemented in FPGA, enabling the execution of a programmable complex real-time algorithm for best detection of photons.
NASA Astrophysics Data System (ADS)
Petrochenko, Andrey; Konyakhin, Igor
2017-06-01
In connection with the development of robotics have become increasingly popular variety of three-dimensional reconstruction of the system mapping and image-set received from the optical sensors. The main objective of technical and robot vision is the detection, tracking and classification of objects of the space in which these systems and robots operate [15,16,18]. Two-dimensional images sometimes don't contain sufficient information to address those or other problems: the construction of the map of the surrounding area for a route; object identification, tracking their relative position and movement; selection of objects and their attributes to complement the knowledge base. Three-dimensional reconstruction of the surrounding space allows you to obtain information on the relative positions of objects, their shape, surface texture. Systems, providing training on the basis of three-dimensional reconstruction of the results of the comparison can produce two-dimensional images of three-dimensional model that allows for the recognition of volume objects on flat images. The problem of the relative orientation of industrial robots with the ability to build threedimensional scenes of controlled surfaces is becoming actual nowadays.
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared-error. Our proposed method achieved consistent higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
A compact light-sheet microscope for the study of the mammalian central nervous system
Yang, Zhengyi; Haslehurst, Peter; Scott, Suzanne; Emptage, Nigel; Dholakia, Kishan
2016-01-01
Investigation of the transient processes integral to neuronal function demands rapid and high-resolution imaging techniques over a large field of view, which cannot be achieved with conventional scanning microscopes. Here we describe a compact light sheet fluorescence microscope, featuring a 45° inverted geometry and an integrated photolysis laser, that is optimized for applications in neuroscience, in particular fast imaging of sub-neuronal structures in mammalian brain slices. We demonstrate the utility of this design for three-dimensional morphological reconstruction, activation of a single synapse with localized photolysis, and fast imaging of neuronal Ca2+ signalling across a large field of view. The developed system opens up a host of novel applications for the neuroscience community. PMID:27215692
Pollitz, Fred; Mooney, Walter D.
2016-01-01
Seismic surface waves from the Transportable Array of EarthScope's USArray are used to estimate phase velocity structure of 18 to 125 s Rayleigh waves, then inverted to obtain three-dimensional crust and upper mantle structure of the Central and Eastern United States (CEUS) down to ∼200 km. The obtained lithosphere structure confirms previously imaged CEUS features, e.g., the low seismic-velocity signature of the Cambrian Reelfoot Rift and the very low velocity at >150 km depth below an Eocene volcanic center in northwestern Virginia. New features include high-velocity mantle stretching from the Archean Superior Craton well into the Proterozoic terranes and deep low-velocity zones in central Texas (associated with the late Cretaceous Travis and Uvalde volcanic fields) and beneath the South Georgia Rift (which contains Jurassic basalts). Hot spot tracks may be associated with several imaged low-velocity zones, particularly those close to the former rifted Laurentia margin.
Processing And Display Of Medical Three Dimensional Arrays Of Numerical Data Using Octree Encoding
NASA Astrophysics Data System (ADS)
Amans, Jean-Louis; Darier, Pierre
1986-05-01
imaging modalities such as X-Ray computerized Tomography (CT), Nuclear Medecine and Nuclear Magnetic Resonance can produce three-dimensional (3-D) arrays of numerical data of medical object internal structures. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging.
Magnetic Resonance Imaging of Three-Dimensional Cervical Anatomy in the Second and Third Trimester
HOUSE, Michael; BHADELIA, Rafeeque A.; MYERS, Kristin; SOCRATE, Simona
2009-01-01
OBJECTIVE Although a short cervix is known to be associated with preterm birth, the patterns of three-dimensional, anatomic changes leading to a short cervix are unknown. Our objective was to 1) construct three-dimensional anatomic models during normal pregnancy and 2) use the models to compare cervical anatomy in the second and third trimester. STUDY DESIGN A cross sectional study was performed in a population of patients referred to magnetic resonance imaging (MRI) for a fetal indication. Using magnetic resonance images for guidance, three-dimensional solid models of the following anatomic structures were constructed: amniotic cavity, uterine wall, cervical stroma, cervical mucosa and anterior vaginal wall. To compare cervical anatomy in the second and third trimester, models were matched according the size of the bony pelvis. RESULTS Fourteen patients were imaged and divided into two groups according to gestational age: 20 – 24 weeks (n=7)) and 31 – 36 weeks (n=7). Compared to the second trimester, the third trimester was associated with significant descent of the amniotic sac. (p=.02). Descent of the amniotic sac was associated with modified anatomy of the uterocervical junction. These 3-dimensional changes were associated with a cervix that appeared shorter in the third trimester. CONCLUSION We report a technique for constructing MRI-based, three-dimensional anatomic models during pregnancy. Compared to the second trimester, the third trimester is associated with three-dimensional changes in the cervix and lower uterine segment. PMID:19297070
[Localization of perforators in the lower leg by digital antomy imaging methods].
Wei, Peng; Ma, Liang-Liang; Fang, Ye-Dong; Xia, Wei-Zhi; Ding, Mao-Chao; Mei, Jin
2012-03-01
To offer both the accurate three-dimensional anatomical information and algorithmic morphology of perforators in the lower leg for perforator flaps design. The cadaver was injected with a modified lead oxide-gelatin mixture. Radiography was first performed and the images were analyzed using the software Photoshop and Scion Image. Then spiral CT scan was also performed and 3-dimensional images were reconstructed with MIMICS 10.01 software. There are (27 +/- 4) perforators whose outer diameter > or = 0.5 mm ( average, 0.8 +/- 0.2 mm). The average pedicle length within the superficial fascia is (37.3 +/- 18.6) mm. The average supplied area of each perforator is (49.5 +/- 25.5) cm2. The three-dimensional model displayed accurate morphology structure and three-dimensional distribution of the perforator-to- perforator and perforator-to-source artery. The 3D reconstruction model can clearly show the geometric, local details and three-dimensional distribution. It is a considerable method for the study of morphological characteristics of the individual perforators in human calf and preoperative planning of the perforator flap.
Laser electro-optic system for rapid three-dimensional /3-D/ topographic mapping of surfaces
NASA Technical Reports Server (NTRS)
Altschuler, M. D.; Altschuler, B. R.; Taboada, J.
1981-01-01
It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.
Registration of 3D spectral OCT volumes using 3D SIFT feature point matching
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan
2009-02-01
The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework1 to 3D.2 The SIFT feature extractor locates minima and maxima in the difference of Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096 element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0+/-3.3 voxels was observed. The accuracy was assessed as average voxel distance error in N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
NASA Astrophysics Data System (ADS)
Davis, Benjamin L.; Berrier, J. C.; Shields, D. W.; Kennefick, J.; Kennefick, D.; Seigar, M. S.; Lacy, C. H. S.; Puerari, I.
2012-01-01
A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing Two-Dimensional Fast Fourier Transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow the precise comparison of spiral galaxy evolution to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques. The authors gratefully acknowledge support for this work from NASA Grant NNX08AW03A.
Knowledge Driven Image Mining with Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj
2004-01-01
This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels; which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
Knowledge Driven Image Mining with Mixture Density Mercer Kernals
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj
2004-01-01
This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper we present the theory of Mercer Kernels; describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
2012-03-28
Scintillation 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) 5d. PROJECT NUMBER Comberiate, Joseph M. 5e. TASK NUMBER 5f. WORK...bubble climatology. A tomographic reconstruction technique was modified and applied to SSUSI data to reconstruct three-dimensional cubes of ionospheric... modified and applied to SSUSI data to reconstruct three-dimensional cubes of ionospheric electron density. These data cubes allowed for 3-D imaging of
Real-time Three-dimensional Echocardiography: From Diagnosis to Intervention.
Orvalho, João S
2017-09-01
Echocardiography is one of the most important diagnostic tools in veterinary cardiology, and one of the greatest recent developments is real-time three-dimensional imaging. Real-time three-dimensional echocardiography is a new ultrasonography modality that provides comprehensive views of the cardiac valves and congenital heart defects. The main advantages of this technique, particularly real-time three-dimensional transesophageal echocardiography, are the ability to visualize the catheters, and balloons or other devices, and the ability to image the structure that is undergoing intervention with unprecedented quality. This technique may become one of the main choices for the guidance of interventional cardiology procedures. Copyright © 2017 Elsevier Inc. All rights reserved.
Multifunctional, three-dimensional tomography for analysis of eletrectrohydrodynamic jetting
NASA Astrophysics Data System (ADS)
Nguyen, Xuan Hung; Gim, Yeonghyeon; Ko, Han Seo
2015-05-01
A three-dimensional optical tomography technique was developed to reconstruct three-dimensional objects using a set of two-dimensional shadowgraphic images and normal gray images. From three high-speed cameras, which were positioned at an offset angle of 45° between each other, number, size, and location of electrohydrodynamic jets with respect to the nozzle position were analyzed using shadowgraphic tomography employing multiplicative algebraic reconstruction technique (MART). Additionally, a flow field inside a cone-shaped liquid (Taylor cone) induced under an electric field was observed using a simultaneous multiplicative algebraic reconstruction technique (SMART), a tomographic method for reconstructing light intensities of particles, combined with three-dimensional cross-correlation. Various velocity fields of circulating flows inside the cone-shaped liquid caused by various physico-chemical properties of liquid were also investigated.
Seismic reflection images of the accretionary wedge of Costa Rica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shipley, T.H.; Stoffa, P.L.; McIntosh, K.
The large-scale structure of modern accretionary wedges is known almost entirely from seismic reflection investigations using single or grids of two-dimensional profiles. The authors will report on the first three-dimensional seismic reflection data volume collected of a wedge. This data set covers a 9-km-wide {times} 22-km-long {times} 6-km-thick volume of the accretionary wedge just arcward of the Middle America Trench off Costa Rica. The three-dimensional processing has improved the imaging ability of the multichannel data, and the data volume allows mapping of structures from a few hundred meters to kilometers in size. These data illustrate the relationships between the basement,more » the wedge shape, and overlying slope sedimentary deposits. Reflections from within the wedge define the gross structural features and tectonic processes active along this particular convergent margin. So far, the analysis shows that the subdued basement relief (horst and graben structures seldom have relief of more than a few hundred meters off Costa Rica) does affect the larger scale through going structural features within the wedge. The distribution of mud volcanoes and amplitude anomalies associated with the large-scale wedge structures suggests that efficient fluid migration paths may extend from the top of the downgoing slab at the shelf edge out into the lower and middle slope region at a distance of 50-100 km. Offscraping of the uppermost (about 45 m) sediment occurs within 4 km of the trench, creating a small pile of sediments near the trench lower slope. Underplating of parts of the 400-m-thick subducted sedimentary section begins at a very shallow structural level, 4-10 km arcward of the trench. Volumetrically, the most important accretionary process is underplating.« less
Ranjanomennahary, P; Ghalila, S Sevestre; Malouche, D; Marchadier, A; Rachidi, M; Benhamou, Cl; Chappard, C
2011-01-01
Hip fracture is a serious health problem and textural methods are being developed to assess bone quality. The authors aimed to perform textural analysis at femur on high-resolution digital radiographs compared to three-dimensional (3D) microarchitecture comparatively to bone mineral density. Sixteen cadaveric femurs were imaged with an x-ray device using a C-MOS sensor. One 17 mm square region of interest (ROI) was selected in the femoral head (FH) and one in the great trochanter (GT). Two-dimensional (2D) textural features from the co-occurrence matrices were extracted. Site-matched measurements of bone mineral density were performed. Inside each ROI, a 16 mm diameter core was extracted. Apparent density (Dapp) and bone volume proportion (BV/TV(Arch)) were measured from a defatted bone core using Archimedes' principle. Microcomputed tomography images of the entire length of the core were obtained (Skyscan 1072) at 19.8 microm of resolution and usual 3D morphometric parameters were computed on the binary volume after calibration from BV/TV(Arch). Then, bone surface/bone volume, trabecular thickness, trabecular separation, and trabecular number were obtained by direct methods without model assumption and the structure model index was calculated. In univariate analysis, the correlation coefficients between 2D textural features and 3D morphological parameters reached 0.83 at the FH and 0.79 at the GT. In multivariate canonical correlation analysis, coefficients of the first component reached 0.95 at the FH and 0.88 at the GT. Digital radiographs, widely available and economically viable, are an alternative method for evaluating bone microarchitectural structure.
Fine Metal Mask 3-Dimensional Measurement by using Scanning Digital Holographic Microscope
NASA Astrophysics Data System (ADS)
Shin, Sanghoon; Yu, Younghun
2018-04-01
For three-dimensional microscopy, fast and high axial resolution are very important. Extending the depth of field for digital holographic is necessary for three-dimensional measurements of thick samples. We propose an optical sectioning method for optical scanning digital holography that is performed in the frequency domain by spatial filtering of a reconstructed amplitude image. We established a scanning dual-wavelength off-axis digital holographic microscope to measure samples that exhibit a large amount of coherent noise and a thickness larger than the depth of focus of the objective lens. As a demonstration, we performed a three-dimensional measurement of a fine metal mask with a reconstructed sectional phase image and filtering with a reconstructed amplitude image.
Quantitative analysis and feature recognition in 3-D microstructural data sets
NASA Astrophysics Data System (ADS)
Lewis, A. C.; Suh, C.; Stukowski, M.; Geltmacher, A. B.; Spanos, G.; Rajan, K.
2006-12-01
A three-dimensional (3-D) reconstruction of an austenitic stainless-steel microstructure was used as input for an image-based finite-element model to simulate the anisotropic elastic mechanical response of the microstructure. The quantitative data-mining and data-warehousing techniques used to correlate regions of high stress with critical microstructural features are discussed. Initial analysis of elastic stresses near grain boundaries due to mechanical loading revealed low overall correlation with their location in the microstructure. However, the use of data-mining and feature-tracking techniques to identify high-stress outliers revealed that many of these high-stress points are generated near grain boundaries and grain edges (triple junctions). These techniques also allowed for the differentiation between high stresses due to boundary conditions of the finite volume reconstructed, and those due to 3-D microstructural features.
Zhu, S; Yang, Y; Khambay, B
2017-03-01
Clinicians are accustomed to viewing conventional two-dimensional (2D) photographs and assume that viewing three-dimensional (3D) images is similar. Facial images captured in 3D are not viewed in true 3D; this may alter clinical judgement. The aim of this study was to evaluate the reliability of using conventional photographs, 3D images, and stereoscopic projected 3D images to rate the severity of the deformity in pre-surgical class III patients. Forty adult patients were recruited. Eight raters assessed facial height, symmetry, and profile using the three different viewing media and a 100-mm visual analogue scale (VAS), and appraised the most informative viewing medium. Inter-rater consistency was above good for all three media. Intra-rater reliability was not significantly different for rating facial height using 2D (P=0.704), symmetry using 3D (P=0.056), and profile using projected 3D (P=0.749). Using projected 3D for rating profile and symmetry resulted in significantly lower median VAS scores than either 3D or 2D images (all P<0.05). For 75% of the raters, stereoscopic 3D projection was the preferred method for rating. The reliability of assessing specific characteristics was dependent on the viewing medium. Clinicians should be aware that the visual information provided when viewing 3D images is not the same as when viewing 2D photographs, especially for facial depth, and this may change the clinical impression. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Terahertz Imaging of Three-Dimensional Dehydrated Breast Cancer Tumors
NASA Astrophysics Data System (ADS)
Bowman, Tyler; Wu, Yuhao; Gauch, John; Campbell, Lucas K.; El-Shenawee, Magda
2017-06-01
This work presents the application of terahertz imaging to three-dimensional formalin-fixed, paraffin-embedded human breast cancer tumors. The results demonstrate the capability of terahertz for in-depth scanning to produce cross section images without the need to slice the tumor. Samples of tumors excised from women diagnosed with infiltrating ductal carcinoma and lobular carcinoma are investigated using a pulsed terahertz time domain imaging system. A time of flight estimation is used to obtain vertical and horizontal cross section images of tumor tissues embedded in paraffin block. Strong agreement is shown comparing the terahertz images obtained by electronically scanning the tumor in-depth in comparison with histopathology images. The detection of cancer tissue inside the block is found to be accurate to depths over 1 mm. Image processing techniques are applied to provide improved contrast and automation of the obtained terahertz images. In particular, unsharp masking and edge detection methods are found to be most effective for three-dimensional block imaging.
High-resolution ab initio three-dimensional x-ray diffraction microscopy
Chapman, Henry N.; Barty, Anton; Marchesini, Stefano; ...
2006-01-01
Coherent x-ray diffraction microscopy is a method of imaging nonperiodic isolated objects at resolutions limited, in principle, by only the wavelength and largest scattering angles recorded. We demonstrate x-ray diffraction imaging with high resolution in all three dimensions, as determined by a quantitative analysis of the reconstructed volume images. These images are retrieved from the three-dimensional diffraction data using no a priori knowledge about the shape or composition of the object, which has never before been demonstrated on a nonperiodic object. We also construct two-dimensional images of thick objects with greatly increased depth of focus (without loss of transverse spatialmore » resolution). These methods can be used to image biological and materials science samples at high resolution with x-ray undulator radiation and establishes the techniques to be used in atomic-resolution ultrafast imaging at x-ray free-electron laser sources.« less
Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.
Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet
2018-05-01
Three-part white blood cell differentials which are key to routine blood workups are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the trend of developing miniaturized blood analysis tool for point-of-need in order to accelerate turnaround times and move routine blood testing away from centralized facilities on the rise, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data, constitutes the final crucial step ensuring appropriate accuracy of the system. In this work, we implement reference holographic images of single white blood cells in suspension, in order to establish an accurate ground truth to increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images and advanced machine learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has been described of a semi-automatic segmentation process of fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, windows and Linux. This transition basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and merge these capabilities to the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, representing a cross-sectional slice of the object, be reassembled in a 3D space, and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This inter-active process becomes a powerful visual tool to study and understand microscopic objects.
Three-dimensional cardiac architecture determined by two-photon microtomy
NASA Astrophysics Data System (ADS)
Huang, Hayden; MacGillivray, Catherine; Kwon, Hyuk-Sang; Lammerding, Jan; Robbins, Jeffrey; Lee, Richard T.; So, Peter
2009-07-01
Cardiac architecture is inherently three-dimensional, yet most characterizations rely on two-dimensional histological slices or dissociated cells, which remove the native geometry of the heart. We previously developed a method for labeling intact heart sections without dissociation and imaging large volumes while preserving their three-dimensional structure. We further refine this method to permit quantitative analysis of imaged sections. After data acquisition, these sections are assembled using image-processing tools, and qualitative and quantitative information is extracted. By examining the reconstructed cardiac blocks, one can observe end-to-end adjacent cardiac myocytes (cardiac strands) changing cross-sectional geometries, merging and separating from other strands. Quantitatively, representative cross-sectional areas typically used for determining hypertrophy omit the three-dimensional component; we show that taking orientation into account can significantly alter the analysis. Using fast-Fourier transform analysis, we analyze the gross organization of cardiac strands in three dimensions. By characterizing cardiac structure in three dimensions, we are able to determine that the α crystallin mutation leads to hypertrophy with cross-sectional area increases, but not necessarily via changes in fiber orientation distribution.
Hori, Masatoshi; Suzuki, Kenji; Epstein, Mark L.; Baron, Richard L.
2011-01-01
The purpose was to evaluate a relationship between slice thickness and calculated volume on CT liver volumetry by comparing the results for images with various slice thicknesses including three-dimensional images. Twenty adult potential liver donors (12 men, 8 women; mean age, 39 years; range, 24–64) underwent CT with a 64-section multi-detector row CT scanner after intra-venous injection of contrast material. Four image sets with slice thicknesses of 0.625 mm, 2.5 mm, 5 mm, and 10 mm were used. First, a program developed in our laboratory for automated liver extraction was applied to CT images, and the liver boundary was obtained automatically. Then, an abdominal radiologist reviewed all images on which automatically extracted boundaries were superimposed, and edited the boundary on each slice to enhance the accuracy. Liver volumes were determined by counting of the voxels within the liver boundary. Mean whole liver volumes estimated with CT were 1322.5 cm3 on 0.625-mm, 1313.3 cm3 on 2.5-mm, 1310.3 cm3 on 5-mm, and 1268.2 cm3 on 10-mm images. Volumes calculated for three-dimensional (0.625-mm-thick) images were significantly larger than those for thicker images (P<.0001). Partial liver volumes of right lobe, left lobe, and lateral segment were also evaluated in a similar manner. Estimated maximum differences in calculated volumes of lateral segment was −10.9 cm3 (−4.6%) between 0.625-mm and 5-mm images. In conclusion, liver volumes calculated on 2.5-mm or thicker images were significantly smaller than volumes calculated on three-dimensional images. If a maximum error of 5% in the calculated graft volume is within the range of having an insignificant clinical impact, 5-mm thick images are acceptable for CT volumetry. If not, three-dimensional images could be essential. PMID:21850689
A novel weld seam detection method for space weld seam of narrow butt joint in laser welding
NASA Astrophysics Data System (ADS)
Shao, Wen Jun; Huang, Yu; Zhang, Yong
2018-02-01
Structured light measurement is widely used for weld seam detection owing to its high measurement precision and robust. However, there is nearly no geometrical deformation of the stripe projected onto weld face, whose seam width is less than 0.1 mm and without misalignment. So, it's very difficult to ensure an exact retrieval of the seam feature. This issue is raised as laser welding for butt joint of thin metal plate is widely applied. Moreover, measurement for the seam width, seam center and the normal vector of the weld face at the same time during welding process is of great importance to the welding quality but rarely reported. Consequently, a seam measurement method based on vision sensor for space weld seam of narrow butt joint is proposed in this article. Three laser stripes with different wave length are project on the weldment, in which two red laser stripes are designed and used to measure the three dimensional profile of the weld face by the principle of optical triangulation, and the third green laser stripe is used as light source to measure the edge and the centerline of the seam by the principle of passive vision sensor. The corresponding image process algorithm is proposed to extract the centerline of the red laser stripes as well as the seam feature. All these three laser stripes are captured and processed in a single image so that the three dimensional position of the space weld seam can be obtained simultaneously. Finally, the result of experiment reveals that the proposed method can meet the precision demand of space narrow butt joint.
NASA Astrophysics Data System (ADS)
Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing
2017-01-01
This paper outlines a low-cost, user-friendly photogrammetric technique with nonmetric cameras to obtain excavation site digital sequence images, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a certain number of global control points of the excavation site, to reconstruct the high precision of measured three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of little base-to-height ratio, high inclination, unstable altitudes, and significant ground elevation changes affecting image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method can maintain the same visual result and has advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes and has lower accuracy if it is a reconstructed 3-D model of a small scene at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces heritage site distribution of orthophotos providing a scientific basis for accurate location of cultural relics, archaeological excavations, investigation, and site protection planning. This proposed method has a comprehensive application value.
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
NASA Astrophysics Data System (ADS)
Pevný, Tomáš; Filler, Tomáš; Bas, Patrick
This paper presents a complete methodology for designing practical and highly-undetectable stegosystems for real digital media. The main design principle is to minimize a suitably-defined distortion by means of efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 107. The high dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be problem in steganalysis, we explain, why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images and we contrast its performance with LSB matching. On the BOWS2 image database and in contrast with LSB matching, HUGO allows the embedder to hide 7× longer message with the same level of security level.