Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas
2014-12-01
Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, area strain showed the best correlation with the LVEF. The numerical agreement of the 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area strain and standard 2D longitudinal strain. Regional wall-motion abnormalities were detected similarly by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed results similar to those of conventional 2D datasets. 2D and 3D speckle tracking detect normal and pathological wall-motion patterns similarly well. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.
Oechsner, Markus; Chizzali, Barbara; Devecka, Michal; Combs, Stephanie Elisabeth; Wilkens, Jan Jakob; Duma, Marciana Nona
2016-10-26
The aim of this study was to analyze differences in couch shifts (setup errors) resulting from image registration of different CT datasets with free-breathing cone beam CTs (FB-CBCT). Both automatic and manual image registrations were performed, and the registration results were correlated with tumor characteristics. FB-CBCT image registration was performed for 49 patients with lung lesions using slow planning CT (PCT), average intensity projection (AIP), maximum intensity projection (MIP) and mid-ventilation (MidV) CTs as reference images. Shift differences between the registered CT datasets were evaluated for automatic and manual registration, respectively. Furthermore, differences between automatic and manual registration were analyzed for the same CT datasets. The registration results were statistically analyzed and correlated with tumor characteristics (3D tumor motion, tumor volume, superior-inferior (SI) distance, tumor environment). Median 3D shift differences over all patients were between 0.5 mm (AIPvsMIP) and 1.9 mm (MIPvsPCT and MidVvsPCT) for automatic registration, and between 1.8 mm (AIPvsPCT) and 2.8 mm (MIPvsPCT and MidVvsPCT) for manual registration. For some patients, large shift differences (>5.0 mm) were found (maximum 10.5 mm, automatic registration). Comparing automatic vs manual registrations for the same reference CTs, ∆AIP achieved the smallest (1.1 mm) and ∆MIP the largest (1.9 mm) median 3D shift differences. The standard deviation (variability) of the 3D shift differences was also smallest for ∆AIP (1.1 mm). Significant correlations (p < 0.01) between the 3D shift difference and 3D tumor motion (AIPvsMIP, MIPvsMidV) and SI distance (AIPvsMIP) (automatic), and also for 3D tumor motion (∆PCT, ∆MidV; automatic vs manual), were found. Using different CT datasets for image registration with FB-CBCTs can result in different 3D couch shifts.
Manual registrations achieved partly different 3D shifts than automatic registrations. AIP CTs yielded the smallest shift differences and might be the most appropriate CT dataset for registration with 3D FB-CBCTs.
Schure, Mark R; Davis, Joe M
2017-11-10
Orthogonality metrics (OMs) for three and higher dimensional separations are proposed as extensions of previously developed OMs, which were used to evaluate the zone utilization of two-dimensional (2D) separations. These OMs include correlation coefficients, dimensionality, information theory metrics and convex-hull metrics. In a number of these cases, lower dimensional subspace metrics exist and can be readily calculated. The metrics are used to interpret previously generated experimental data. The experimental datasets are derived from Gilar's peptide data, now modified to be three-dimensional (3D), and a comprehensive 3D chromatogram from Moore and Jorgenson. The Moore and Jorgenson chromatogram, which has 25 identifiable 3D volume elements or peaks, displayed good orthogonality values over all dimensions. However, OMs based on discretization of the 3D space changed substantially with changes in binning parameters. This example highlights the importance in higher dimensions of having an abundant number of retention times as data points, especially for methods that use discretization. The Gilar data, which in a previous study produced 21 2D datasets by the pairing of 7 one-dimensional separations, was reinterpreted to produce 35 3D datasets. These datasets show a number of interesting properties, one of which is that geometric and harmonic means of lower dimensional subspace (i.e., 2D) OMs correlate well with the higher dimensional (i.e., 3D) OMs. The space utilization of the Gilar 3D datasets was ranked using OMs, with the retention times of the datasets having the largest and smallest OMs presented as graphs. A discussion concerning the orthogonality of higher dimensional techniques is given with emphasis on molecular diversity in chromatographic separations. In the information theory work, an inconsistency is found in previous studies of orthogonality using the 2D metric often identified as %O.
A new choice of metric is proposed, extended to higher dimensions, characterized by mixes of ordered and random retention times, and applied to the experimental datasets. In 2D, the new metric always equals or exceeds the original one. However, results from both the original and new methods are given. Copyright © 2017 Elsevier B.V. All rights reserved.
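As a rough illustration of the convex-hull family of orthogonality metrics discussed above, the sketch below (not the authors' implementation; the normalization and function names are assumptions) scores a set of retention-time coordinates by the fractional hypervolume of their convex hull in the normalized separation space:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_coverage(retention):
    """Convex-hull orthogonality metric: fraction of the normalized
    retention space covered by the hull of the peak coordinates."""
    pts = np.asarray(retention, dtype=float)
    # Normalize each dimension to [0, 1] so the full space has unit volume.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    norm = (pts - lo) / (hi - lo)
    return ConvexHull(norm).volume  # 'volume' is area in 2D, volume in 3D

rng = np.random.default_rng(0)
uniform_3d = rng.random((500, 3))  # well-spread peaks: coverage close to 1
# Correlated retentions hug the diagonal, so the hull is a thin tube:
correlated = np.repeat(rng.random((500, 1)), 3, axis=1) + 0.05 * rng.random((500, 3))
print(hull_coverage(uniform_3d), hull_coverage(correlated))
```

Correlated retention times collapse toward a line, shrinking the hull and the metric, which matches the intuition that orthogonal dimensions spread peaks over the available space.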
NASA Astrophysics Data System (ADS)
Wheatland, Jonathan; Bushby, Andy; Droppo, Ian; Carr, Simon; Spencer, Kate
2015-04-01
Suspended estuarine sediments form flocs that are compositionally complex, fragile and irregularly shaped. The fate and transport of suspended particulate matter (SPM) is determined by the size, shape, density, porosity and stability of these flocs, and prediction of SPM transport requires accurate measurements of these three-dimensional (3D) physical properties. However, the multi-scaled nature of flocs, in addition to their fragility, makes their characterisation in 3D problematic. Correlative microscopy is a strategy involving the spatial registration of information collected at different scales using several imaging modalities. Previously, conventional optical microscopy (COM) and transmission electron microscopy (TEM) have enabled 2-dimensional (2D) floc characterisation at the gross (> 1 µm) and sub-micron scales, respectively. Whilst this has proven insightful, there remains a critical spatial and dimensional gap preventing the accurate measurement of geometric properties and an understanding of how structures at different scales are related. Within the life sciences, volumetric imaging techniques such as 3D micro-computed tomography (3D µCT) and focused ion beam scanning electron microscopy [FIB-SEM (or FIB-tomography)] have been combined to characterise materials at the centimetre to micron scale. Combining these techniques with TEM enables an advanced correlative study, allowing material properties across multiple spatial and dimensional scales to be visualised. The aims of this study are: 1) to formulate an advanced correlative imaging strategy combining 3D µCT, FIB-tomography and TEM; 2) to acquire 3D datasets; 3) to produce a model allowing their co-visualisation; 4) to interpret 3D floc structure. To reduce the chance of structural alterations during analysis, samples were first 'fixed' in 2.5% glutaraldehyde/2% formaldehyde before being embedded in Durcupan resin.
Intermediate steps were implemented to improve contrast and remove pore water, achieved by the addition of heavy metal stains and washing samples in a series of ethanol solutions and acetone. Gross-scale characterisation involved scanning samples using a Nikon Metrology HM X 225 µCT. For micro-scale analysis, a working surface was revealed by microtoming the sample. Ultrathin sections were then collected and analysed using a JEOL 1200 Ex II TEM, and FIB-tomography datasets were obtained using an FEI Quanta 3D FIB-SEM. Finally, to locate the surface and relate the TEM and FIB-tomography datasets to the original floc, samples were rescanned using the µCT. Image processing was initially conducted in ImageJ. Following this, datasets were imported into Amira 5.5, where pixel intensity thresholding allowed particle-matrix boundaries to be defined. Using 'landmarks', datasets were then registered to enable their co-visualisation in 3D models. Analysis of registered datasets reveals the complex non-fractal nature of flocs, whose properties span several orders of magnitude. Primary particles are organised into discrete 'bundles', the arrangement of which directly influences their gross morphology. This strategy, which allows the co-visualisation of spatially registered multi-scale 3D datasets, provides unique insights into the true nature of flocs which would otherwise have been impossible.
NASA Astrophysics Data System (ADS)
Nikolakopoulos, Konstantinos G.
2017-09-01
A global digital surface model dataset named the ALOS Global Digital Surface Model (AW3D30), with a horizontal resolution of approximately 30 m (1 arcsec), has been released by the Japan Aerospace Exploration Agency (JAXA). The dataset was compiled from images acquired by the Advanced Land Observing Satellite "DAICHI" (ALOS) and is published based on the 5-m-mesh DSM of the "World 3D Topographic Data", the most precise global-scale elevation dataset currently available; its elevation precision is also at a world-leading level for a 30-m-mesh product. In this study the accuracy of ALOS AW3D30 was examined. For an area with complex geomorphological characteristics, DSMs were created from ALOS stereo pairs with classical photogrammetric techniques. These DSMs were compared with ALOS AW3D30. Points of certified elevation collected with DGPS were used to estimate the accuracy of the DSMs. The elevation difference between the two DSMs was calculated. The 2D RMSE, correlation and percentile values were also computed and the results are presented.
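The accuracy statistics mentioned above (RMSE, correlation and a percentile of the elevation differences) can be sketched for two co-registered DSMs as follows; this is a generic illustration on synthetic arrays, not the study's workflow, and the function name is an assumption:

```python
import numpy as np

def dsm_accuracy(dsm_test, dsm_ref):
    """Elevation-difference statistics between two co-registered DSMs
    (2D arrays of heights): RMSE, Pearson correlation, and the 90th
    percentile of absolute elevation differences."""
    diff = (dsm_test - dsm_ref).ravel()
    rmse = np.sqrt(np.mean(diff ** 2))
    r = np.corrcoef(dsm_test.ravel(), dsm_ref.ravel())[0, 1]
    p90 = np.percentile(np.abs(diff), 90)
    return rmse, r, p90

# Synthetic example: a sloping reference terrain plus ~2 m of noise.
rng = np.random.default_rng(1)
ref = np.cumsum(rng.random((100, 100)), axis=0)   # smooth-ish slope
test = ref + rng.normal(0.0, 2.0, ref.shape)
rmse, r, p90 = dsm_accuracy(test, ref)
print(f"RMSE={rmse:.2f} m  r={r:.3f}  90th pct |dz|={p90:.2f} m")
```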
Menéndez, Lumila Paula
2017-05-01
Intraobserver error (INTRA-OE) is the difference between repeated measurements of the same variable made by the same observer. The objective of this work was to evaluate INTRA-OE from 3D landmarks registered with a Microscribe, in different datasets: (A) the 3D coordinates, (B) linear measurements calculated from A, and (C) the first six principal component axes. INTRA-OE was analyzed by digitizing 42 landmarks from 23 skulls in three sessions, two weeks apart from each other. Systematic error was tested through repeated-measures ANOVA (ANOVA-RM), and random error through the intraclass correlation coefficient. Results showed that the largest differences between the three observations were found in the first dataset. Some anatomical points, such as nasion, ectoconchion, temporosphenoparietal, asterion, and temporomandibular, presented the highest INTRA-OE. In the second dataset, local distances had higher INTRA-OE than global distances, while the third dataset showed the lowest INTRA-OE. © 2016 American Academy of Forensic Sciences.
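A minimal sketch of the random-error statistic used above, the intraclass correlation coefficient (here ICC(3,1), two-way mixed, consistency; the exact ICC form used in the study is not specified, so this is an assumption), on synthetic repeated measurements:

```python
import numpy as np

def icc_3_1(x):
    """ICC(3,1), two-way mixed, consistency:
    x is an (n_subjects, k_sessions) array of repeated measurements."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    sse = np.sum((x - x.mean(axis=1, keepdims=True)
                    - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = sse / ((n - 1) * (k - 1))                            # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)

# 23 subjects measured in 3 sessions with small random error:
rng = np.random.default_rng(2)
true_vals = rng.normal(100.0, 10.0, size=(23, 1))
measurements = true_vals + rng.normal(0.0, 1.0, (23, 3))
icc = icc_3_1(measurements)
print(round(icc, 3))  # near 1: low random intraobserver error
```

High between-subject variance relative to the session-to-session error drives the ICC toward 1, which is the pattern the study reports for its coordinate data.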
3D shape recovery from image focus using gray level co-occurrence matrix
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid
2018-04-01
Recovering a precise and accurate 3-D shape of a target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose the gray-level co-occurrence matrix (GLCM), along with its statistical features, for computing the focus information of the image dataset. The GLCM quantifies the texture present in the image through statistical features computed from the joint probability distribution of gray-level pairs in the input image. Finally, we quantify the focus value of the input image using a Gaussian mixture model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise and accuracy, the proposed measure is a strong alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
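A minimal sketch of a GLCM-based focus measure of the kind described (simplified to a single horizontal offset and the contrast feature; the paper's full statistics and the Gaussian-mixture step are omitted, and the function name is an assumption):

```python
import numpy as np

def glcm_contrast(img, levels=16):
    """Focus measure from a gray-level co-occurrence matrix: quantize the
    image, count horizontally adjacent gray-level pairs, and return the
    GLCM contrast feature (high for sharp texture, low for blur)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                 # joint probability of level pairs
    i, j = np.indices(glcm.shape)
    return np.sum(glcm * (i - j) ** 2) # contrast: squared level distance

rng = np.random.default_rng(3)
sharp = rng.random((64, 64))           # high-frequency texture
# Crude horizontal defocus: average pixels in blocks of 4.
blurred = np.repeat(sharp.reshape(64, 16, 4).mean(2), 4, axis=1)
print(glcm_contrast(sharp) > glcm_contrast(blurred))  # True
```

In a shape-from-focus pipeline the measure would be evaluated per window across the focus stack, with the depth at each pixel taken from the frame that maximizes it.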
Multifield-graphs: an approach to visualizing correlations in multifield scalar data.
Sauber, Natascha; Theisel, Holger; Seidel, Hans-Peter
2006-01-01
We present an approach to visualizing correlations in 3D multifield scalar data. The core of our approach is the computation of correlation fields, which are scalar fields containing the local correlations of subsets of the multiple fields. While the visualization of the correlation fields can be done using standard 3D volume visualization techniques, their huge number makes selection and handling a challenge. We introduce the Multifield-Graph to give an overview of which multiple fields correlate and to show the strength of their correlation. This information guides the selection of informative correlation fields for visualization. We use our approach to visually analyze a number of real and synthetic multifield datasets.
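The correlation fields at the core of the approach can be sketched as a local Pearson correlation computed in a sliding neighborhood; this is a generic illustration under that definition, not the authors' code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correlation_field(a, b, size=5):
    """Local-correlation scalar field of two 3D fields: Pearson
    correlation of a and b inside a size^3 neighborhood of each voxel."""
    mean = lambda x: uniform_filter(x, size)
    ma, mb = mean(a), mean(b)
    cov = mean(a * b) - ma * mb
    var_a = mean(a * a) - ma ** 2
    var_b = mean(b * b) - mb ** 2
    return cov / np.sqrt(np.maximum(var_a * var_b, 1e-12))

rng = np.random.default_rng(4)
f1 = rng.random((32, 32, 32))
f2 = f1 + 0.1 * rng.random((32, 32, 32))   # strongly coupled to f1
f3 = rng.random((32, 32, 32))              # independent of f1
print(correlation_field(f1, f2).mean())    # close to 1
print(correlation_field(f1, f3).mean())    # close to 0
```

Each pair of input fields yields one such correlation field; the Multifield-Graph then summarizes which pairs are worth visualizing.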
Non-model-based correction of respiratory motion using beat-to-beat 3D spiral fat-selective imaging.
Keegan, Jennifer; Gatehouse, Peter D; Yang, Guang-Zhong; Firmin, David N
2007-09-01
To demonstrate the feasibility of retrospective beat-to-beat correction of respiratory motion, without the need for a respiratory motion model. A high-resolution three-dimensional (3D) spiral black-blood scan of the right coronary artery (RCA) of six healthy volunteers was acquired over 160 cardiac cycles without respiratory gating. One spiral interleaf was acquired per cardiac cycle, prior to each of which a complete low-resolution fat-selective 3D spiral dataset was acquired. The respiratory motion (3D translation) on each cardiac cycle was determined by cross-correlating a region of interest (ROI) in the fat around the artery in the low-resolution datasets with that on a reference end-expiratory dataset. The measured translations were used to correct the raw data of the high-resolution spiral interleaves. Beat-to-beat correction provided consistently good results, with the image quality being better than that obtained with a fixed superior-inferior tracking factor of 0.6 and better than (N = 5) or equal to (N = 1) that achieved using a subject-specific retrospective 3D translation motion model. Non-model-based correction of respiratory motion using 3D spiral fat-selective imaging is feasible, and in this small group of volunteers produced better-quality images than a subject-specific retrospective 3D translation motion model. (c) 2007 Wiley-Liss, Inc.
Ma, Ya-Jun; West, Justin; Nazaran, Amin; Cheng, Xin; Hoenecke, Heinz; Du, Jiang; Chang, Eric Y
2018-02-02
To utilize the 3D inversion recovery prepared ultrashort echo time with cones readout (IR-UTE-Cones) MRI technique for direct imaging of lamellar bone, with comparison to the gold standard of computed tomography (CT). CT and MRI were performed on 11 shoulder specimens and three patients. Five specimens had imaging performed before and after glenoid fracture (osteotomy). 2D and 3D volume-rendered CT images were reconstructed, and conventional T1-weighted and 3D IR-UTE-Cones MRI techniques were performed. Glenoid widths and defects were independently measured by two readers using the circle method. Measurements were compared with those made from 3D CT datasets. Paired-sample Student's t tests were performed and intraclass correlation coefficients (ICCs) computed. In addition, 2D CT and 3D IR-UTE-Cones MRI datasets were linearly registered, digitally overlaid, and compared in consensus by the two readers. Compared with the reference standard (3D CT), glenoid bone diameter measurements made on 2D CT and 3D IR-UTE-Cones images were not significantly different for either reader, whereas T1-weighted images underestimated the diameter (mean difference of 0.18 cm, p = 0.003 and 0.16 cm, p = 0.022 for readers 1 and 2, respectively). However, the mean margin of error for measuring glenoid bone loss was small for all modalities (range, 1.46-3.92%). All measured ICCs were near perfect. Digitally registered 2D CT and 3D IR-UTE-Cones MRI datasets yielded essentially perfect congruity between the two modalities. The 3D IR-UTE-Cones MRI technique selectively visualizes lamellar bone, produces contrast similar to 2D CT imaging, and compares favorably with measurements made using 2D and 3D CT.
An automatic approach for 3D registration of CT scans
NASA Astrophysics Data System (ADS)
Hu, Yang; Saber, Eli; Dianat, Sohail; Vantaram, Sreenath Rao; Abhyankar, Vishwas
2012-03-01
CT (Computed tomography) is a widely employed imaging modality in the medical field. Normally, a volume of CT scans is prescribed by a doctor when a specific region of the body (typically neck to groin) is suspected of being abnormal. The doctors are required to make professional diagnoses based upon the obtained datasets. In this paper, we propose an automatic registration algorithm that helps healthcare personnel to automatically align corresponding scans from 'Study' to 'Atlas'. The proposed algorithm is capable of aligning both 'Atlas' and 'Study' into the same resolution through 3D interpolation. After retrieving the scanned slice volume in the 'Study' and the corresponding volume in the original 'Atlas' dataset, a 3D cross correlation method is used to identify and register various body parts.
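The 3D cross-correlation step described above can be sketched with an FFT-based circular cross-correlation that recovers an integer translation between two volumes (a generic illustration; the paper's pipeline also includes 3D interpolation to a common resolution, which is omitted, and the function name is an assumption):

```python
import numpy as np

def find_shift_3d(study, atlas):
    """Estimate the integer 3D translation taking `atlas` to `study` by
    locating the peak of their circular cross-correlation (via FFT)."""
    f = np.conj(np.fft.fftn(atlas)) * np.fft.fftn(study)
    corr = np.fft.ifftn(f).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts.
    return tuple(p - s if p > s // 2 else p
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(5)
atlas = rng.random((32, 32, 32))
study = np.roll(atlas, shift=(3, -5, 2), axis=(0, 1, 2))  # known offset
print(find_shift_3d(study, atlas))  # (3, -5, 2)
```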
Jones, Nia W; Raine-Fenning, Nick J; Mousa, Hatem A; Bradley, Eileen; Bugg, George J
2011-03-01
Three-dimensional (3-D) power Doppler angiography (3-D-PDA) allows visualisation of Doppler signals within the placenta, and their quantification is possible through vascular indices generated by the 4-D View software programme. This study aimed to investigate the intra- and interobserver reproducibility of 3-D-PDA analysis of stored datasets at varying gestations, with the ultimate goal of developing a tool for predicting placental dysfunction. Women with an uncomplicated, viable singleton pregnancy were scanned at 12, 16 or 20 weeks' gestation. 3-D-PDA datasets acquired of the whole placenta were analysed using the VOCAL software processing tool. Each volume was analysed twice by three observers in the A plane. Intra- and interobserver reliability was assessed by intraclass correlation coefficients (ICCs) and Bland-Altman plots. In each gestational age group, 20 low-risk women were scanned, giving 60 datasets in total. The ICCs demonstrated a high level of measurement reliability at each gestation, with intraobserver values >0.90 and interobserver values >0.6 for the vascular indices. Bland-Altman plots also showed high levels of agreement. Systematic bias was seen at 20 weeks in the vascular indices obtained by different observers. This study demonstrates that 3-D-PDA data can be measured reliably by different observers from stored datasets up to 18 weeks gestation. Measurements become less reliable as gestation advances, with bias between observers evident at 20 weeks. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
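The Bland-Altman agreement analysis used above can be sketched as follows (synthetic numbers; the variable names and the simulated interobserver bias are illustrative assumptions, not the study's data):

```python
import numpy as np

def bland_altman(obs1, obs2):
    """Bland-Altman agreement statistics for two observers' measurements:
    mean bias and 95% limits of agreement (bias ± 1.96 SD of differences)."""
    d = np.asarray(obs1, float) - np.asarray(obs2, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(6)
truth = rng.normal(20.0, 5.0, 60)            # e.g. a vascular index, 60 datasets
obs1 = truth + rng.normal(0.0, 0.5, 60)
obs2 = truth + rng.normal(0.3, 0.5, 60)      # observer 2 reads systematically higher
bias, (lo, hi) = bland_altman(obs1, obs2)
print(f"bias={bias:.2f}, 95% LoA=({lo:.2f}, {hi:.2f})")
```

A non-zero bias with otherwise narrow limits of agreement is exactly the "systematic bias between observers" pattern the study reports at 20 weeks.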
Zeng, Rongping; Petrick, Nicholas; Gavrielides, Marios A; Myers, Kyle J
2011-10-07
Multi-slice computed tomography (MSCT) scanners have become popular volumetric imaging tools. Deterministic and random properties of the resulting CT scans have been studied in the literature. Due to the large number of voxels in the three-dimensional (3D) volumetric dataset, full characterization of the noise covariance in MSCT scans is difficult. However, as usage of such datasets for quantitative disease diagnosis grows, so does the importance of understanding the noise properties because of their effect on the accuracy of the clinical outcome. The goal of this work is to study noise covariance in the helical MSCT volumetric dataset. We explore possible approximations to the noise covariance matrix with reduced degrees of freedom, including voxel-based variance, one-dimensional (1D) correlation, two-dimensional (2D) in-plane correlation and the noise power spectrum (NPS). We further examine the effect of various noise covariance models on the accuracy of a prewhitening matched filter nodule size estimation strategy. Our simulation results suggest that the 1D longitudinal, 2D in-plane and NPS prewhitening approaches can improve the performance of nodule size estimation algorithms. When taking into account computational costs in determining noise characterizations, the NPS model may be the most efficient approximation to the MSCT noise covariance matrix.
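The NPS approximation discussed above is commonly estimated from an ensemble of mean-subtracted noise ROIs; a minimal 2D sketch under that standard definition (not the authors' code; names are assumptions) is:

```python
import numpy as np

def nps_2d(noise_rois, pixel_size=1.0):
    """2D noise power spectrum from an ensemble of noise ROIs:
    ensemble-averaged |DFT|^2 of mean-subtracted ROIs, scaled by
    pixel area over the number of samples per ROI."""
    rois = np.asarray(noise_rois, float)
    n_roi, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove DC/background
    ps = np.abs(np.fft.fft2(rois)) ** 2
    return ps.mean(axis=0) * pixel_size ** 2 / (nx * ny)

# White-noise sanity check: integrating the NPS recovers the voxel variance.
rng = np.random.default_rng(7)
rois = rng.normal(0.0, 10.0, size=(200, 64, 64))   # sigma^2 = 100 HU^2
nps = nps_2d(rois)
variance = nps.sum() / (64 * 64)
print(variance)  # close to 100
```

For correlated CT noise the NPS is no longer flat, and its shape is what the prewhitening filter in the paper exploits.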
van den Hoven, Allard T; Mc-Ghie, Jackie S; Chelu, Raluca G; Duijnhouwer, Anthonie L; Baggen, Vivan J M; Coenen, Adriaan; Vletter, Wim B; Dijkshoorn, Marcel L; van den Bosch, Annemien E; Roos-Hesselink, Jolien W
2017-12-01
Integration of volumetric heart chamber quantification by 3D echocardiography into clinical practice has been hampered by several factors which a new fully automated algorithm (Left Heart Model, LHM) may help overcome. This study therefore aims to evaluate the feasibility and accuracy of the LHM software in quantifying left atrial and left ventricular volumes and left ventricular ejection fraction in a cohort of patients with a bicuspid aortic valve. Patients with a bicuspid aortic valve were prospectively included. All patients underwent 2D and 3D transthoracic echocardiography and computed tomography. Left atrial and ventricular volumes were obtained using the automated program, which did not require manual contour detection. For comparison, manual and semi-automated measurements were performed using conventional 2D and 3D datasets. 53 patients were included; in four of them no 3D dataset could be acquired. Additionally, 12 patients were excluded based on poor imaging quality. Left ventricular end-diastolic and end-systolic volumes and ejection fraction calculated by the LHM correlated well with manual 2D and 3D measurements (Pearson's r between 0.43 and 0.97, p < 0.05). Left atrial volume (LAV) also correlated significantly, although the LHM did estimate larger LAV compared with both 2DE and 3DE (Pearson's r between 0.61 and 0.81, p < 0.01). The fully automated software works well in a real-world setting and helps to overcome some of the major hurdles in integrating 3D analysis into daily practice, as it is user-independent and highly reproducible in a group of patients with a clearly defined and well-studied valvular abnormality.
CheS-Mapper - Chemical Space Mapping and Visualization in 3D.
Gütlein, Martin; Karwath, Andreas; Kramer, Stefan
2012-03-17
Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult, to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. In this respect, visualization tools can help to better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and consequently arranges them in 3D space, such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments and quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, which aids the chemist in better understanding patterns and regularities and relating the observations to established scientific knowledge. As a final function, the tool can also be used to select and export specific subsets of a given dataset for further analysis.
Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D
2016-06-01
A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features from both while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse-graining technique to speed the registration of 2D histology sections to high-resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated with 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15 %). The technique also demonstrated the importance of the location of the histological section, showing that an offset of up to 30 % can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D printed titanium lattice implants.
A web-based instruction module for interpretation of craniofacial cone beam CT anatomy.
Hassan, B A; Jacobs, R; Scarfe, W C; Al-Rawi, W T
2007-09-01
To develop a web-based module for learner instruction in the interpretation and recognition of osseous anatomy on craniofacial cone-beam CT (CBCT) images. Volumetric datasets from three CBCT systems were acquired (i-CAT, NewTom 3G and AccuiTomo FPD) for various subjects using equipment-specific scanning protocols. The datasets were processed using multiple software packages to provide two-dimensional (2D) multiplanar reformatted (MPR) images (e.g. sagittal, coronal and axial) and three-dimensional (3D) visual representations (e.g. maximum intensity projection, minimum intensity projection, ray sum, surface and volume rendering). Distinct didactic modules which illustrate the principles of CBCT systems, guided navigation of the volumetric dataset, and anatomic correlation of 3D models and 2D MPR graphics were developed using a hybrid combination of web authoring and image analysis techniques. Interactive web multimedia instruction was facilitated by the use of dynamic highlighting and labelling, and rendered video illustrations, supplemented with didactic textual material. HTML coding and JavaScript were used extensively to assemble the educational modules. An interactive, multimedia educational tool for visualizing the morphology and interrelationships of osseous craniofacial anatomy, as depicted on CBCT MPR and 3D images, was designed and implemented. The present design of a web-based instruction module may assist radiologists and clinicians in learning how to recognize and interpret the craniofacial anatomy of CBCT-based images more efficiently.
Nehme, A; Zibara, K; Cerutti, C; Bricca, G
2015-06-01
The involvement of the renin-angiotensin-aldosterone system (RAAS) in atheroma development is well described. However, a complete view of the local RAAS in atheroma is still missing. In this study we aimed to reveal the organization of the RAAS in atheroma at the transcriptomic level and to identify the transcriptional regulators behind it. The extended RAAS (extRAAS) was defined as the set of 37 genes coding for classical and novel RAAS participants (Figure 1). Five microarray datasets containing 590 samples in total, representing carotid and peripheral atheroma, were downloaded from the GEO database. Correlation-based hierarchical clustering (R software) of extRAAS genes within each dataset allowed the identification of modules of co-expressed genes. Co-expression modules reproducible across datasets were then extracted. Transcription factors (TFs) with common binding sites (TFBSs) in the promoters of the coordinated genes were identified using the Genomatix database tools and analyzed for their correlation with extRAAS genes in the microarray datasets. The expression data revealed the expressed extRAAS components and their relative abundance, indicating the favored pathways in atheroma. Three co-expression modules with more than 80% reproducibility across datasets were extracted. Two of them (M1 and M2) contained genes coding for angiotensin-metabolizing enzymes involved in different pathways: M1 included ACE, MME, RNPEP, and DPP3, in addition to 7 other genes; M2 included CMA1, CTSG, and CPA3. The third module (M3) contained genes coding for receptors known to be implicated in atheroma (AGTR1, MR, GR, LNPEP, EGFR and GPER). M1 and M3 were negatively correlated in 3 of 5 datasets. We identified 19 TFs with enriched TFBSs in the promoters of the M1 genes, and two for M3, but none for M2.
Among the extracted TFs, ELF1, MAX, and IRF5 showed significant positive correlations with peptidase-coding genes from M1 and negative correlations with receptor-coding genes from M3 (p < 0.05). The identified co-expression modules display the transcriptional organization of the local extRAAS in human carotid atheroma. The identification of several TFs potentially associated with extRAAS genes may provide a framework for the discovery of atheroma-specific modulators of extRAAS activity.
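The correlation-based hierarchical clustering step described above (performed in R in the study) can be sketched in Python with 1 - Pearson correlation as the distance; the gene counts and module structure below are synthetic, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coexpression_modules(expr, n_modules=2):
    """Correlation-based hierarchical clustering: rows of `expr` are
    genes, columns are samples; distance = 1 - Pearson correlation."""
    corr = np.corrcoef(expr)
    dist = squareform(1.0 - corr, checks=False)  # condensed distances
    tree = linkage(dist, method="average")
    return fcluster(tree, t=n_modules, criterion="maxclust")

# Two latent expression programs drive two gene modules across 50 samples.
rng = np.random.default_rng(8)
prog = rng.normal(size=(2, 50))
genes = np.vstack([prog[0] + 0.3 * rng.normal(size=(5, 50)),
                   prog[1] + 0.3 * rng.normal(size=(5, 50))])
labels = coexpression_modules(genes, n_modules=2)
print(labels)  # first five genes share one label, last five the other
```

Cutting the dendrogram at different heights, and keeping only modules that reappear across independent datasets, corresponds to the reproducibility filtering the study applies.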
Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.
2014-01-01
Objective Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. The purpose of this study was to determine the accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Material & Methods Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; age 85 ± 10 yrs., mean ± SD). 3D-UTE MRI (TR = 50 ms, TE = 0.05 ms, 104 μm isotropic voxel) was performed using a 3-T MR scanner, and μCT (18 μm isotropic voxel) was performed. MR datasets were spatially registered with the μCT dataset. Two observers segmented bony contours of the condyles. Fibrocartilage was segmented on the MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, volume of segmented regions and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and observers (MRI vs. MRI) for Gaussian curvature, mean curvature and segmented bone volume was determined using intraclass correlation coefficient (ICC) analyses. Results Between MRI and μCT, the average deviation of surface coordinates was 0.19 ± 0.15 mm, slightly higher than the spatial resolution of MRI. The average deviation of the Gaussian curvature and volume of segmented regions, from MRI to μCT, was 5.7 ± 6.5% and 6.6 ± 6.2%, respectively. ICC coefficients (MRI vs. μCT) for Gaussian curvature, mean curvature and segmented volumes were 0.892, 0.893 and 0.972, respectively. Between observers (MRI vs. MRI), the ICC coefficients were 0.998, 0.999 and 0.997, respectively. Fibrocartilage thickness was 0.55 ± 0.11 mm, as previously described in the literature for grossly normal TMJ samples. Conclusion 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex vivo, including surface, curvature and segmented volume, shows high correlation against μCT and between observers.
In addition, UTE MRI allows quantitative evaluation of the fibrocartilaginous condylar component. PMID:24092237
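The agreement statistic used above, the intraclass correlation coefficient, can be computed directly from paired measurements. The abstract does not state which ICC form was used, so this sketch assumes ICC(2,1) (two-way random effects, absolute agreement, single measurement) with illustrative curvature-like data:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings has shape (n_subjects, k_raters).
    """
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: two observers measuring 20 specimens with small, independent error.
rng = np.random.default_rng(7)
truth = rng.normal(0.9, 0.1, size=20)
obs1 = truth + 0.01 * rng.normal(size=20)
obs2 = truth + 0.01 * rng.normal(size=20)
icc = icc_2_1(np.column_stack([obs1, obs2]))
```

When observer error is small relative to between-specimen variation, the ICC approaches 1, matching the near-unity inter-observer values reported above.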
Preliminary interpretation of high resolution 3D seismic data from offshore Mt. Etna, Italy
NASA Astrophysics Data System (ADS)
Gross, F.; Krastel, S.; Chiocci, F. L.; Ridente, D.; Cukur, D.; Bialas, J.; Papenberg, C. A.; Crutchley, G.; Koch, S.
2013-12-01
In order to gain knowledge about subsurface structures and their correlation with seafloor expressions, a hydro-acoustic dataset was collected during RV Meteor Cruise M86/2 (December 2011/January 2012) in the Messina Straits and offshore Mt. Etna. Offshore Mt. Etna in particular, the data reveal an obvious connection between subsurface structures and previously known morphological features at the seafloor. Therefore, a high-resolution 3D seismic dataset was acquired between Riposto Ridge and Catania Canyon, close to the shore of eastern Sicily. The study area is characterized by a major structural high, which hosts several ridge-like features at the seafloor. These features are connected to a SW-NE trending fault system. The ridges are bent in their NE-SW direction and host major escarpments at the seafloor. Furthermore, they are located directly next to a massive amphitheater structure offshore Mt. Etna with slope gradients of up to 35°, which is interpreted as the remnant of a massive submarine mass-wasting event off Sicily. The new 3D seismic dataset allows an in-depth analysis of the ongoing deformation of the east flank of Mt. Etna.
NASA Astrophysics Data System (ADS)
Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.
2015-08-01
The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels were used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground-truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels of the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, accurate detection of canopy, soil and other materials between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak or unhealthy plants and canopy.
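The relationship-building step above (regressing measured LAI against an image-derived canopy level and scoring with r²) can be sketched as follows. The canopy levels and LAI values are illustrative numbers, not the Nemea field measurements:

```python
import numpy as np

# Toy paired data: image-derived canopy level vs. measured LAI per vine.
canopy = np.array([0.12, 0.25, 0.31, 0.42, 0.55, 0.61, 0.70])
lai = np.array([0.5, 1.1, 1.3, 1.9, 2.4, 2.6, 3.1])

# Ordinary least-squares line and coefficient of determination r^2.
slope, intercept = np.polyfit(canopy, lai, 1)
pred = slope * canopy + intercept
r2 = 1 - np.sum((lai - pred) ** 2) / np.sum((lai - lai.mean()) ** 2)
```

A high r² on such a fit is the kind of agreement (r² > 73%) the abstract reports between canopy levels and ground-truth LAI.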
3D shape recovery from image focus using Gabor features
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Mahmood, Jawad; Zeb, Ayesha; Iqbal, Javaid
2018-04-01
Recovering an accurate and precise depth map from a set of 2-D images of a target object, each acquired with different focus settings, is the ultimate goal of 3-D shape recovery. The focus measure algorithm plays an important role in this architecture, as it converts the corresponding color value information into focus information, which is then utilized to recover the depth map. This article introduces Gabor features as a focus measure approach for recovering a depth map from a set of 2-D images. The frequency and orientation representation of Gabor filter features is similar to the human visual system and is normally applied for texture representation. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. This algorithm is thoroughly investigated on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
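The shape-from-focus idea above can be sketched end to end: filter each slice of the focus stack with a Gabor kernel, use the locally averaged response energy as the focus measure, and take the per-pixel argmax over slices as depth. This is a minimal sketch under stated assumptions: a single real Gabor kernel rather than the paper's feature bank, and a crude two-level blur model for the synthetic stack.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter
from scipy.signal import convolve2d

def gabor_kernel(size=9, sigma=2.0, freq=0.25, theta=0.0):
    """Real part of a Gabor filter: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def depth_from_focus(stack, win=7):
    """Per-pixel depth = index of the slice with the largest local Gabor energy."""
    k = gabor_kernel()
    fm = [uniform_filter(convolve2d(img, k, mode="same", boundary="symm") ** 2,
                         size=win) for img in stack]
    return np.stack(fm).argmax(axis=0)

# Synthetic focus stack: a noise-textured scene; each slice is sharp where the
# slice index matches the true depth and blurred elsewhere.
rng = np.random.default_rng(1)
scene = rng.normal(size=(64, 64))
true_depth = np.zeros((64, 64), int)
true_depth[:, 32:] = 3                      # right half is in focus at slice 3
blurred = gaussian_filter(scene, 1.5)
stack = [np.where(true_depth == z, scene, blurred) for z in range(5)]
depth = depth_from_focus(stack)
```

Away from the depth discontinuity, the recovered depth matches the slice index at which each region is sharp.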
NASA Astrophysics Data System (ADS)
Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.
2002-12-01
Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting of certain voxels. This transparency allows the user to peer into the data volume, enabling easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects in a particular scene grows, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel).
In this way, the co-variation between different datasets can be investigated, even if one data object lies behind another. Stereoscopic viewing is another powerful tool to investigate 3-D relationships between objects. This form of immersion is constructed through viewing two separate images that are interleaved--typically 48 frames per second, per eye--and synced through an emitter and a set of specialized polarizing eyeglasses. The polarizing lenses flicker at an equivalent rate, blanking the eye for which a particular image was not drawn, producing the desired stereo effect. Volumetric visualization of the ARAD 3-D seismic dataset will be presented. The effective use of transparency reveals detailed structure of the melt-lens beneath the 9°03'N overlapping spreading center (OSC) along the East Pacific Rise, including melt-filled fractures within the propagating rift-tip. In addition, range-gated images of seismic reflectivity will be co-registered to investigate the physical properties (melt versus mush) of the magma chamber at this locale. Surface visualization of a dense, 2-D grid of MCS seismic data beneath Axial seamount (Juan de Fuca Ridge) will also be highlighted, including relationships between the summit caldera and rift zones, and the underlying (and humongous) magma chamber. A selection of Quicktime movies will be shown. Popcorn will be served, really!
NASA Astrophysics Data System (ADS)
José González-Rojí, Santos; Sáenz, Jon; Ibarra-Berastegi, Gabriel
2017-04-01
The GLEAM dataset was presented a few years ago and has since been used only to validate evaporation in a few regions of the world (Australia and Africa). The Iberian Peninsula comprises different soil types and is affected by different weather regimes, with distinct climate regions. This makes it a very interesting zone for studying the meteorological cycle, including evaporation. For that purpose, a numerical downscaling exercise over the Iberian Peninsula was run by nesting the WRF model inside ERA Interim. Two model configurations were tested in two experiments spanning the period 2010-2014 after a one-year spin-up (2009). In the first experiment (N), boundary conditions drive the model. The second experiment (D) is configured the same way as N, but 3DVAR data assimilation is run every six hours (00Z, 06Z, 12Z and 18Z) using observations from the PREPBUFR dataset. For both the N and D runs and ERA Interim, the modelled evaporation was compared to the GLEAM v3.0b and v3.0c datasets over the Iberian Peninsula at both daily and monthly time scales. GLEAM v3.0a was not used for validation because its forcing uses radiation and air temperature data from ERA Interim. Results show that the experiment with data assimilation (D) improves on the results of the N experiment. Moreover, correlation values are comparable to those obtained with ERA Interim. However, some negative correlation values are observed at the Portuguese and Mediterranean coasts for both WRF runs. All of these problematic points are classified as urban sites by the NOAH land surface model, so the model is unable to simulate a correct evaporation value there. Even with these discrepancies, better results than ERA Interim are observed for seasonal biases and daily RMSEs over the Iberian Peninsula, with the best values inland. Minimal differences are observed between the two selected GLEAM datasets.
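The validation above reduces to three paired-series statistics: correlation, bias, and RMSE between modelled and reference evaporation. A minimal sketch with illustrative daily series (the numbers are not from the WRF/GLEAM comparison):

```python
import numpy as np

def validation_stats(model, ref):
    """Pearson correlation, mean bias, and RMSE between two paired series."""
    model, ref = np.asarray(model, float), np.asarray(ref, float)
    r = np.corrcoef(model, ref)[0, 1]
    bias = np.mean(model - ref)
    rmse = np.sqrt(np.mean((model - ref) ** 2))
    return r, bias, rmse

# Toy daily evaporation series (mm/day): model = reference + offset + noise.
rng = np.random.default_rng(2)
ref = 2.0 + np.sin(np.linspace(0, 4 * np.pi, 365))
model = ref + 0.3 + 0.1 * rng.normal(size=365)
r, bias, rmse = validation_stats(model, ref)
```

Note that RMSE always bounds the absolute bias from above, since it also absorbs the random component of the error.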
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhou, S; Cai, W; Hurwitz, M
2015-06-15
Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models based on 4DCBCT images acquired with existing clinical equipment, which are then used to generate time-varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase, and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by iteratively optimizing the resulting PCA coefficients through comparison of cone-beam projections simulating kV treatment imaging with digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground-truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13, respectively, in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78, respectively, in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings.
4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential to localize the tumor and other patient anatomical structures at treatment time even when inter-fractional changes occur. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA. The project was also supported, in part, by Award Number R21CA156068 from the National Cancer Institute.
Magunia, Harry; Schmid, Eckhard; Hilberath, Jan N; Häberle, Leo; Grasshoff, Christian; Schlensak, Christian; Rosenberger, Peter; Nowak-Machen, Martina
2017-04-01
The early diagnosis and treatment of right ventricular (RV) dysfunction are of critical importance in cardiac surgery patients and impact clinical outcome. Two-dimensional (2D) transesophageal echocardiography (TEE) can be used to evaluate RV function using surrogate parameters, owing to the complex RV geometry. The aim of this study was to evaluate whether the commonly used visual evaluation of RV function and size using 2D TEE correlated with calculated three-dimensional (3D) volumetric models of RV function. Retrospective study, single center, University Hospital. Seventy complete datasets were studied, consisting of 2D 4-chamber view loops (2-3 beats) and the corresponding 4-chamber view 3D full-volume loop of the right ventricle. RV function and RV size on the 2D loops were then assessed retrospectively, purely qualitatively, by each of 4 clinician echocardiographers certified in perioperative TEE. Corresponding 3D volumetric models calculating RV ejection fraction and RV end-diastolic volumes were then established and compared with the 2D assessments. 2D assessment of RV function correlated with 3D volumetric calculations (Spearman's rho -0.5; p < 0.0001). No correlation could be established between 2D estimates of RV size and actual 3D volumetric end-diastolic volumes (Spearman's rho 0.15; p = 0.25). The 2D assessment of right ventricular function based on visual estimation, as frequently used in clinical practice, appeared to be a reliable method of RV functional evaluation. However, 2D assessment of RV size seemed unreliable and should be used with caution. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.
2017-03-01
The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
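Matlab's Corr2 used above is a plain 2D Pearson correlation over all pixels, and a NumPy equivalent is short. The images here are synthetic stand-ins, not DRRs from the described library:

```python
import numpy as np

def corr2(a, b):
    """Matlab corr2-style 2D correlation coefficient between two images."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Toy reference image and a noisy, linearly related "clinical" image.
rng = np.random.default_rng(4)
drr = rng.random((128, 128))
fluoro = 0.8 * drr + 0.2 * rng.random((128, 128))

aligned = corr2(drr, fluoro)                       # well aligned: high score
shifted = corr2(drr, np.roll(fluoro, 20, axis=1))  # misaligned: low score
```

The drop in correlation under misalignment is exactly what lets the localization algorithm pick the best-matching library image.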
Tsareva, Daria A; Osolodkin, Dmitry I; Shulga, Dmitry A; Oliferenko, Alexander A; Pisarev, Sergey A; Palyulin, Vladimir A; Zefirov, Nikolay S
2011-03-14
Two fast empirical charge models, the Kirchhoff Charge Model (KCM) and Dynamic Electronegativity Relaxation (DENR), had been developed in our laboratory previously for widespread use in drug design research. Both models are based on the electronegativity relaxation principle (Adv. Quantum Chem. 2006, 51, 139-156) and parameterized against ab initio dipole/quadrupole moments and molecular electrostatic potentials, respectively. As 3D QSAR studies comprise one of the most important fields of applied molecular modeling, they naturally became the first topic on which to test our charges and thus, indirectly, the assumptions underlying the charge model theories in a case study. Here these charge models are used in the CoMFA and CoMSIA methods and tested on five glycogen synthase kinase 3 (GSK-3) inhibitor datasets, relevant to our current studies, and one steroid dataset. For comparison, eight other charge models, from ab initio through semiempirical and empirical, were tested on the same datasets. A complex analysis was carried out, including correlation and cross-validation, charge robustness and predictability, as well as visual interpretability of the generated 3D contour maps. As a result, our new electronegativity relaxation-based models both showed stable results, which, in conjunction with other benefits discussed, render them suitable for building reliable 3D QSAR models. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Susceptibility-based functional brain mapping by 3D deconvolution of an MR-phase activation map.
Chen, Zikuan; Liu, Jingyu; Calhoun, Vince D
2013-05-30
The underlying source of T2*-weighted magnetic resonance imaging (T2*MRI) for brain imaging is magnetic susceptibility (denoted by χ). T2*MRI outputs a complex-valued MR image consisting of magnitude and phase information. Recent research has shown that both the magnitude and the phase images are morphologically different from the source χ, primarily due to 3D convolution, and that the source χ can be reconstructed from complex MR images by computed inverse MRI (CIMRI). Thus, we can obtain a 4D χ dataset from a complex 4D MR dataset acquired in a brain functional MRI study by repeating CIMRI to reconstruct a 3D χ volume at each timepoint. Because the reconstructed χ is a more direct representation of neuronal activity than the MR image, we propose a method for χ-based functional brain mapping, which is numerically characterised by a temporal correlation map of χ responses to a stimulus task. Under the linear imaging conditions used for T2*MRI, we show that the χ activation map can be calculated from the MR phase map by CIMRI. We validate our approach using numerical simulations and Gd-phantom experiments. We also analyse real data from a finger-tapping visuomotor experiment and show that χ-based functional mapping provides additional activation details (in the form of positive and negative correlation patterns) beyond those generated by conventional MR-magnitude-based mapping. Copyright © 2013 Elsevier B.V. All rights reserved.
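The "temporal correlation map" above is a voxel-wise Pearson correlation between a 4D time series and a task regressor. A minimal sketch with a synthetic block-design task; the volume size, regressor, and responding voxels are illustrative:

```python
import numpy as np

def correlation_map(vols, regressor):
    """Voxel-wise Pearson correlation of a (t, x, y, z) series with a regressor."""
    t = vols.shape[0]
    v = vols.reshape(t, -1)
    v = v - v.mean(axis=0)
    r = regressor - regressor.mean()
    num = v.T @ r
    denom = np.linalg.norm(v, axis=0) * np.linalg.norm(r)
    with np.errstate(invalid="ignore", divide="ignore"):
        cmap = num / denom
    return np.nan_to_num(cmap).reshape(vols.shape[1:])

# Toy block-design task over 40 timepoints; one voxel responds positively,
# one negatively, the rest are noise only.
task = np.tile([0.0] * 5 + [1.0] * 5, 4)
rng = np.random.default_rng(5)
vols = 0.05 * rng.normal(size=(40, 4, 4, 4))
vols[:, 0, 0, 0] += task      # positive response
vols[:, 1, 1, 1] -= task      # negative response
cmap = correlation_map(vols, task)
```

The positive and negative extremes of the map correspond to the positive and negative correlation patterns the abstract describes.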
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth-dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post-processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D brain image datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by biological prerequisites, in this study we use a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images with an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
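The warping function above, a distance-weighted exponential with per-landmark weighting factors, can be sketched in 2D for brevity. The weighting factors are taken as given here (they are what the paper's evolution strategy optimizes), and the exact normalisation is an assumption of this sketch:

```python
import numpy as np

def warp_points(points, landmarks, displacements, weights, alpha=1.0):
    """Move each point by a normalised, distance-weighted sum of landmark
    displacements: w_i(p) ~ weights[i] * exp(-alpha * ||p - landmark_i||)."""
    out = []
    for p in points:
        d = np.linalg.norm(landmarks - p, axis=1)
        w = weights * np.exp(-alpha * d)
        w = w / w.sum()
        out.append(p + w @ displacements)
    return np.array(out)

landmarks = np.array([[0.0, 0.0], [10.0, 0.0]])
displacements = np.array([[1.0, 0.0], [0.0, 1.0]])  # per-landmark pull
weights = np.ones(2)                                 # landmark-specific factors

# A point sitting on a landmark essentially inherits that landmark's displacement.
moved = warp_points(np.array([[0.0, 0.0], [10.0, 0.0]]),
                    landmarks, displacements, weights)
```

Points between landmarks receive a blend of both displacements, which is the "weighted sums of displacement vectors" behaviour described above.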
Using 3D Visualization to Communicate Scientific Results to Non-scientists
NASA Astrophysics Data System (ADS)
Whipple, S.; Mellors, R. J.; Sale, J.; Kilb, D.
2002-12-01
If "a picture is worth a thousand words" then an animation is worth millions. 3D animations and visualizations are useful for geoscientists but are perhaps even more valuable for rapidly illustrating standard geoscience ideas and concepts (such as faults, seismicity patterns, and topography) to non-specialists. This is useful not only for purely educational needs but also for rapidly briefing decision makers when time may be critical. As a demonstration, we juxtapose large geophysical datasets (e.g., Southern California seismicity and topography) with other large societal datasets (such as highways and urban areas), which allows an instant understanding of the correlations. We intend to extend this methodology to other datasets, such as hospitals and bridges, on an ongoing basis. The 3D scenes we create from the separate datasets can be "flown" through, and individual snapshots that emphasize the concepts of interest are quickly rendered and converted to formats accessible to all. Viewing the snapshots and scenes greatly aids non-specialists' comprehension of the problems and tasks at hand. For example, seismicity clusters (such as aftershocks) and faults near urban areas are clearly visible. A simple "fly-by" through our Southern California scene demonstrates simple concepts such as the topographic features due to plate motion along faults, and the demarcation of the North American/Pacific Plate boundary by the complex fault system (e.g., Elsinore, San Jacinto and San Andreas faults) in Southern California.
Obokata, Masaru; Nagata, Yasufumi; Wu, Victor Chien-Chia; Kado, Yuichiro; Kurabayashi, Masahiko; Otsuji, Yutaka; Takeuchi, Masaaki
2016-05-01
Cardiac magnetic resonance (CMR) feature tracking (FT) with steady-state free precession (SSFP) has advantages over traditional myocardial tagging for analysing left ventricular (LV) strain. However, direct comparisons of CMRFT and 2D/3D echocardiographic speckle tracking (2/3DEST) for measurement of LV strain are limited. The aim of this study was to investigate the feasibility and reliability of CMRFT and 2D/3DEST for measurement of global LV strain. We enrolled 106 patients who agreed to undergo both CMR and 2D/3DE on the same day. SSFP images at multiple short-axis and three apical views were acquired. 2DE images at three short-axis levels and three apical views, as well as 3D full-volume datasets, were also acquired. Strain data were expressed as absolute values. Feasibility was highest for CMRFT, followed by 2DEST and 3DEST. Analysis time was shortest for 3DEST, followed by CMRFT and 2DEST. There was good global longitudinal strain (GLS) correlation between CMRFT and 2D/3DEST (r = 0.83 and 0.87, respectively), with limits of agreement (LOA) ranging from ±3.6 to ±4.9%. Excellent global circumferential strain (GCS) correlation between CMRFT and 2D/3DEST was observed (r = 0.90 and 0.88), with LOA of ±6.8-8.5%. Global radial strain showed fair correlations (r = 0.69 and 0.82, respectively), with LOA ranging from ±12.4 to ±16.3%. CMRFT GCS showed the least observer variability, with the highest intra-class correlation. Although not interchangeable, the high GLS and GCS correlation between CMRFT and 2D/3DEST makes CMRFT a useful modality for quantification of global LV strain in patients, especially those with suboptimal echo image quality. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
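The agreement analysis above pairs a correlation coefficient with Bland-Altman limits of agreement (LOA). A minimal sketch of both on illustrative paired strain measurements (the values are synthetic, not the study's data):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman analysis: mean difference and 95% limits (bias ± 1.96 SD)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Toy paired GLS measurements (%, absolute values) from two modalities.
rng = np.random.default_rng(6)
gls_cmr = 18 + 2 * rng.normal(size=50)
gls_echo = gls_cmr + 1.0 + 1.5 * rng.normal(size=50)

r = np.corrcoef(gls_cmr, gls_echo)[0, 1]
bias, lo, hi = limits_of_agreement(gls_echo, gls_cmr)
```

A high correlation can coexist with a non-zero bias and wide LOA, which is why the abstract reports both and concludes the methods are correlated but not interchangeable.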
Computational optical tomography using 3-D deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Nguyen, Thanh; Bui, Vy; Nehmetallah, George
2018-04-01
Deep convolutional neural networks (DCNNs) offer promising performance in many image processing areas, such as super-resolution, deconvolution, image classification, denoising, and segmentation, with outstanding results. Here, we develop for the first time, to our knowledge, a method to perform 3-D computational optical tomography using a 3-D DCNN. A simulated 3-D phantom dataset was first constructed and converted to a dataset of phase objects imaged on a spatial light modulator. For each phase image in the dataset, the corresponding diffracted intensity image was experimentally recorded on a CCD. We then experimentally demonstrate the ability of the developed 3-D DCNN algorithm to solve the inverse problem by reconstructing the 3-D index of refraction distributions of test phantoms in the dataset from their corresponding diffraction patterns.
Generation of the 30 M-Mesh Global Digital Surface Model by Alos Prism
NASA Astrophysics Data System (ADS)
Tadono, T.; Nagai, H.; Ishida, H.; Oda, F.; Naito, S.; Minakawa, K.; Iwamoto, H.
2016-06-01
Topographical information is fundamental to many geospatial applications on Earth. Remote sensing satellites have an advantage in such fields because they are capable of repeated global observation. Several satellite-based digital elevation datasets have been provided to examine global terrain at medium resolutions, e.g. the Shuttle Radar Topography Mission (SRTM) and the global digital elevation model from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER GDEM). A new global digital surface model (DSM) dataset using the archived data of the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) onboard the Advanced Land Observing Satellite (ALOS, nicknamed "Daichi") was completed in March 2016 by the Japan Aerospace Exploration Agency (JAXA) in collaboration with NTT DATA Corp. and the Remote Sensing Technology Center of Japan. This project is called "ALOS World 3D" (AW3D), and its dataset consists of a global DSM with 0.15 arcsec pixel spacing (approx. 5 m mesh) and ortho-rectified PRISM images with 2.5 m resolution. JAXA is also processing a global DSM with 1 arcsec spacing (approx. 30 m mesh) based on the AW3D DSM dataset, and partially releasing it free of charge as "ALOS World 3D 30 m mesh" (AW3D30). The global AW3D30 dataset will be released in May 2016. This paper describes the processing status, a preliminary validation result for the AW3D30 DSM dataset, and its public release status. As a summary of the preliminary validation of AW3D30, a height accuracy of 4.40 m (RMSE) was confirmed using 5,121 independent check points distributed around the world.
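The RMSE height-accuracy figure above is computed from DSM heights sampled at independent ground check points. A minimal sketch with two toy check points (the heights are illustrative, not AW3D30 values):

```python
import numpy as np

def height_rmse(dsm_heights, checkpoint_heights):
    """RMSE of DSM heights against independent ground check points (metres)."""
    err = np.asarray(dsm_heights, float) - np.asarray(checkpoint_heights, float)
    return float(np.sqrt(np.mean(err ** 2)))

# Toy example: two check points with height errors of +3 m and -4 m.
rmse = height_rmse([103.0, 96.0], [100.0, 100.0])  # sqrt((9 + 16) / 2)
```

With thousands of check points, as in the validation above, a single RMSE summarises the vertical error of the whole dataset.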
NASA Astrophysics Data System (ADS)
Mey, Antonia S. J. S.; Jiménez, Jordi Juárez; Michel, Julien
2018-01-01
The Drug Design Data Resource (D3R) consortium organises blinded challenges to address the latest advances in computational methods for ligand pose prediction, affinity ranking, and free energy calculations. Within the context of the second D3R Grand Challenge, several blinded binding free energy predictions were made for two congeneric series of Farnesoid X Receptor (FXR) inhibitors with a semi-automated alchemical free energy calculation workflow featuring the FESetup and SOMD software tools. Reasonable performance was observed in retrospective analyses of literature datasets. Nevertheless, blinded predictions on the full D3R datasets were poor owing to difficulties encountered with the ranking of compounds that vary in their net charge. Performance increased for predictions that were restricted to subsets of compounds carrying the same net charge. Disclosure of X-ray crystallography derived binding modes maintained or improved the correlation with experiment in subsequent rounds of predictions. The best performing protocols on D3R set1 and set2 were comparable or superior to predictions made on the basis of analysis of literature structure-activity relationships (SARs) only, and comparable or slightly inferior to the best submissions from other groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista, Julian E.; Busca, Nicolas G.; Bailey, Stephen
We describe mock datasets generated to simulate the high-redshift quasar sample in Data Release 11 (DR11) of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). The mock spectra contain Lyα forest correlations useful for studying the 3D correlation function, including Baryon Acoustic Oscillations (BAO). They also include astrophysical effects such as quasar continuum diversity and high-density absorbers, instrumental effects such as noise and spectral resolution, as well as imperfections introduced by the SDSS pipeline treatment of the raw data. The Lyα forest BAO analysis of the BOSS collaboration, described in Delubac et al. 2014, has used these mock datasets to develop and cross-check analysis procedures prior to performing the BAO analysis on real data, and for continued systematic cross-checks. Tests presented here show that the simulations reproduce sufficiently well the important characteristics of real spectra. These mock datasets will be made available together with the data at the time of Data Release 11.
Benchmark datasets for 3D MALDI- and DESI-imaging mass spectrometry.
Oetjen, Janina; Veselkov, Kirill; Watrous, Jeramie; McKenzie, James S; Becker, Michael; Hauberg-Lotte, Lena; Kobarg, Jan Hendrik; Strittmatter, Nicole; Mróz, Anna K; Hoffmann, Franziska; Trede, Dennis; Palmer, Andrew; Schiffler, Stefan; Steinhorst, Klaus; Aichler, Michaela; Goldin, Robert; Guntinas-Lichius, Orlando; von Eggeling, Ferdinand; Thiele, Herbert; Maedler, Kathrin; Walch, Axel; Maass, Peter; Dorrestein, Pieter C; Takats, Zoltan; Alexandrov, Theodore
2015-01-01
Three-dimensional (3D) imaging mass spectrometry (MS) is an analytical chemistry technique for the 3D molecular analysis of a tissue specimen, entire organ, or microbial colonies on an agar plate. 3D-imaging MS has unique advantages over existing 3D imaging techniques, offers novel perspectives for understanding the spatial organization of biological processes, and has growing potential to be introduced into routine use in both biology and medicine. Owing to the sheer quantity of data generated, the visualization, analysis, and interpretation of 3D imaging MS data remain a significant challenge. Bioinformatics research in this field is hampered by the lack of publicly available benchmark datasets needed to evaluate and compare algorithms. High-quality 3D imaging MS datasets from different biological systems were acquired at several labs, supplied with overview images and scripts demonstrating how to read them, and deposited into MetaboLights, an open repository for metabolomics data. 3D imaging MS data were collected from five samples using two types of 3D imaging MS. 3D matrix-assisted laser desorption/ionization (MALDI) imaging MS data were collected from murine pancreas, murine kidney, human oral squamous cell carcinoma, and interacting microbial colonies cultured in Petri dishes. 3D desorption electrospray ionization (DESI) imaging MS data were collected from a human colorectal adenocarcinoma. With the aim of stimulating computational research in the field of computational 3D imaging MS, selected high-quality 3D imaging MS datasets are provided that can be used by algorithm developers as benchmark datasets.
Developing 3D SEM in a broad biological context
Kremer, A; Lippens, S; Bartunkova, S; Asselbergh, B; Blanpain, C; Fendrych, M; Goossens, A; Holt, M; Janssens, S; Krols, M; Larsimont, J-C; Mc Guire, C; Nowack, MK; Saelens, X; Schertel, A; Schepens, B; Slezak, M; Timmerman, V; Theunis, C; Van Brempt, R; Visser, Y; Guérin, CJ
2015-01-01
When electron microscopy (EM) was introduced in the 1930s, it gave scientists their first look into the nanoworld of cells. Over the last 80 years EM has vastly increased our understanding of the complex cellular structures that underlie the diverse functions that cells need to maintain life. One drawback that has been difficult to overcome was the inherent lack of volume information, mainly due to the limit on the thickness of sections that could be viewed in a transmission electron microscope (TEM). For many years scientists struggled to achieve three-dimensional (3D) EM using serial section reconstructions, TEM tomography, and scanning EM (SEM) techniques such as freeze-fracture. Although each technique yielded some special information, they required a significant amount of time and specialist expertise to obtain even a very small 3D EM dataset. Almost 20 years ago scientists began to exploit SEMs to image blocks of embedded tissues and perform serial sectioning of these tissues inside the SEM chamber. Using first focused ion beams (FIB) and subsequently robotic ultramicrotomes (serial block-face, SBF-SEM), microscopists were able to collect large volumes of 3D EM information at resolutions that could address many important biological questions, and do so in an efficient manner. We present here some examples of 3D EM taken from the many diverse specimens that have been imaged in our core facility. We propose that the next major step forward will be to efficiently correlate functional information obtained using light microscopy (LM) with 3D EM datasets to more completely investigate the important links between cell structures and their functions. Lay Description: Life happens in three dimensions. For many years, first light microscopy and then EM struggled to image the smallest parts of cells in 3D. With recent advances in technology and corresponding improvements in computing, scientists can now see the 3D world of the cell at the nanoscale.
In this paper we present the results of high resolution 3D imaging in a number of diverse cells and tissues from multiple species. 3D reconstructions of cell structures often revealed them to be significantly more complex when compared to extrapolations made from 2D studies. Correlating functional 3D LM studies with 3D EM results opens up the possibility of making new strides in our understanding of how cell structure is connected to cell function. PMID:25623622
A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.
Langley, Jason; Zhao, Qun
2009-09-07
The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] is exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom, but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, which provided an additional burden to the phase unwrapping algorithm. Gaussian noise was then added to generate a low signal-to-noise ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev and Legendre implementations of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations compare well to PRELUDE 3D, a 3D phase unwrapping software package well recognized for functional MRI.
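The core idea above — that a smooth, low-order polynomial model can represent the unwrapped phase — can be illustrated in one dimension with Chebyshev polynomials, one of the two Gegenbauer families the authors used. This is a simplified sketch, not the paper's coefficient computation: it recovers the unwrapped phase from wrapped finite differences and then projects the result onto a low-order Chebyshev basis.

```python
import numpy as np

# 1-D toy: a smooth "field map" phase, wrapped into (-pi, pi].
x = np.linspace(-1.0, 1.0, 200)
true_phase = 6.0 * x**2 + 4.0 * x
wrapped = np.angle(np.exp(1j * true_phase))

# Itoh-style gradient trick: wrapping the finite differences recovers the
# true gradient wherever it changes by less than pi per sample.
grad = np.angle(np.exp(1j * np.diff(wrapped)))
unwrapped = wrapped[0] + np.concatenate(([0.0], np.cumsum(grad)))

# Model-based step: project the unwrapped estimate onto a low-order
# Chebyshev basis on [-1, 1], analogous to the separable polynomial model.
coeffs = np.polynomial.chebyshev.chebfit(x, unwrapped, deg=4)
model = np.polynomial.chebyshev.chebval(x, coeffs)

print(np.allclose(model, true_phase, atol=1e-6))  # → True
```

In the paper the coefficients are instead obtained from orthogonality relations of the polynomials and their derivatives, and the model is a product of three such 1-D expansions.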
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galster, Ulrich; Baumgartner, Frank; Mueller, Ulrich
2005-12-15
Dissociation of well-defined H₃ Rydberg states into three ground state hydrogen atoms reveals characteristic correlation patterns in the center-of-mass motion of the three fragments. We present an extensive experimental dataset of momentum correlation maps for all lower Rydberg states of H₃ and D₃. In particular, the states with principal quantum number n=2 feature simple correlation patterns with regular occurrence of mutual affinities. Energetically higher-lying states typically show more complex patterns which are unique for each state. Quantum-chemical calculations on adiabatic potential energy surfaces of H₃ Rydberg states are presented to illuminate the likely origin of these differences. We discuss the likely dissociation mechanisms and paths which are responsible for the observed continuum correlation.
A recipe for consistent 3D management of velocity data and time-depth conversion using Vel-IO 3D
NASA Astrophysics Data System (ADS)
Maesano, Francesco E.; D'Ambrogi, Chiara
2017-04-01
3D geological model production and related basin analyses require large, consistent seismic datasets and, ideally, well logs to support correlation and calibration; the workflow and tools used to manage and integrate different types of data control the soundness of the final 3D model. Even though seismic interpretation is a basic early step in such a workflow, the most critical step in obtaining a comprehensive 3D model useful for further analyses is the construction of an effective 3D velocity model and a well-constrained time-depth conversion. We present a complex workflow that includes comprehensive management of a large seismic dataset and velocity data, the construction of a 3D instantaneous multi-layer-cake velocity model, and the time-depth conversion of a highly heterogeneous geological framework, including both depositional and structural complexities. The core of the workflow is the construction of the 3D velocity model using the Vel-IO 3D tool (Maesano and D'Ambrogi, 2017; https://github.com/framae80/Vel-IO3D), which is composed of the following three scripts, written in Python 2.7.11 under the ArcGIS ArcPy environment: i) the 3D instantaneous velocity model builder creates a preliminary 3D instantaneous velocity model using key horizons in the time domain and velocity data obtained from the analysis of well and pseudo-well logs. The script applies spatial interpolation to the velocity parameters and calculates the depth of each point on each horizon bounding the layer-cake velocity model. ii) the velocity model optimizer improves the consistency of the velocity model by adding new velocity data indirectly derived from measured depths, thus reducing the geometrical uncertainties in areas located far from the original velocity data.
iii) the time-depth converter runs the time-depth conversion of any object located inside the 3D velocity model. The Vel-IO 3D tool allows one to create 3D geological models consistent with the primary geological constraints (e.g. depths of the markers on wells). The workflow and the Vel-IO 3D tool have been developed and tested for the construction of the 3D geological model of a flat region, 5700 km² in area, located in the central part of the Po Plain (Northern Italy), in the frame of the European funded project GeoMol. The study area was covered by a dense dataset of seismic lines (ca. 12000 km) and exploration wells (130 wells), mainly deriving from oil and gas exploration activities. The interpretation of the seismic dataset led to the construction of a 3D model in the time domain that was depth converted using Vel-IO 3D, with a 4-layer-cake 3D instantaneous velocity model. The resulting final 3D geological model, composed of 15 horizons and 150 faults, has been used for basin analysis at regional scale, for geothermal assessment, and for updating the seismotectonic knowledge of the Po Plain. Vel-IO 3D has been further used for the depth conversion of the accretionary prism of the Calabrian subduction (Southern Italy) and for a basin-scale analysis of the Plio-Pleistocene evolution of the Po Plain. Maesano F.E. and D'Ambrogi C. (2017), Computers and Geosciences, doi: 10.1016/j.cageo.2016.11.013. Vel-IO 3D is available at: https://github.com/framae80/Vel-IO3D
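As an illustration of the kind of computation a layer-cake instantaneous-velocity model performs, here is a minimal sketch of single-layer time-depth conversion under an assumed linear instantaneous velocity law V(z) = v0 + k·z. The velocity law and all parameter values are illustrative, not those used by Vel-IO 3D for the Po Plain model.

```python
import math

def depth_from_twt(twt_s, v0, k, z_top=0.0, t_top=0.0):
    """Depth-convert a two-way travel time within one layer of a layer-cake
    model, assuming a linear instantaneous velocity law V(z) = v0 + k*z
    (v0 in m/s at the layer top, k in 1/s). Hypothetical parameterisation:
    integrating dz/dt = V(z) over the one-way time gives
    z = z_top + (v0/k) * (exp(k * t_one_way) - 1)."""
    one_way = (twt_s - t_top) / 2.0
    return z_top + (v0 / k) * (math.exp(k * one_way) - 1.0)

# Example: v0 = 1800 m/s, k = 0.5 1/s, two-way time 2.0 s below the datum.
print(round(depth_from_twt(2.0, 1800.0, 0.5), 1))  # → 2335.4
```

A multi-layer conversion chains this per layer, taking z_top and t_top from the horizon above, which is essentially what the layer-cake structure of the velocity model encodes.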
Reducing 4D CT artifacts using optimized sorting based on anatomic similarity.
Johnston, Eric; Diehn, Maximilian; Murphy, James D; Loo, Billy W; Maxim, Peter G
2011-05-01
Four-dimensional (4D) computed tomography (CT) has been widely used as a tool to characterize respiratory motion in radiotherapy. The two most commonly used 4D CT algorithms sort images by the associated respiratory phase or displacement into a predefined number of bins, and are prone to image artifacts at transitions between bed positions. The purpose of this work is to demonstrate a method of reducing motion artifacts in 4D CT by incorporating anatomic similarity into phase or displacement based sorting protocols. Ten patient datasets were retrospectively sorted using both the displacement and phase based sorting algorithms. Conventional sorting methods allow selection of only the nearest-neighbor image in time or displacement within each bin. In our method, for each bed position either the displacement or the phase defines the center of a bin range about which several candidate images are selected. The two-dimensional correlation coefficients between slices bordering the interface between adjacent couch positions are then calculated for all candidate pairings. Two slices have a high correlation if they are anatomically similar. Candidates from each bin are then selected to maximize the slice correlation over the entire dataset using Dijkstra's shortest path algorithm. To assess the reduction of artifacts, two thoracic radiation oncologists independently compared the resorted 4D datasets pairwise with conventionally sorted datasets, blinded to the sorting method, to choose which had the least motion artifacts. Agreement between reviewers was evaluated using the weighted kappa score. Anatomically based image selection resulted in 4D CT datasets with significantly reduced motion artifacts with both displacement (P = 0.0063) and phase sorting (P = 0.00022). There was good agreement between the two reviewers, with complete agreement 34 times and complete disagreement 6 times.
Optimized sorting using anatomic similarity significantly reduces 4D CT motion artifacts compared to conventional phase or displacement based sorting. This improved sorting algorithm is a straightforward extension of the two most common 4D CT sorting algorithms.
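The candidate-selection step described above can be viewed as a shortest-path problem on a layered graph: one node per candidate image at each couch position, with edge cost 1 minus the correlation of the bordering slices, so that the cheapest path maximises total correlation. A minimal sketch with made-up correlation values (not patient data), using Dijkstra's algorithm as in the paper:

```python
import heapq

def select_images(corr):
    """corr[i][a][b]: correlation between bordering slices of candidate a at
    couch position i and candidate b at position i+1 (hypothetical values).
    Returns one candidate index per position maximising summed correlation,
    found as a Dijkstra shortest path with edge cost 1 - correlation."""
    n_layers = len(corr) + 1
    dist = {(0, a): 0.0 for a in range(len(corr[0]))}
    prev = {}
    heap = [(0.0, node) for node in dist]
    heapq.heapify(heap)
    done = set()
    while heap:
        d, (i, a) = heapq.heappop(heap)
        if (i, a) in done:
            continue
        done.add((i, a))
        if i == n_layers - 1:
            continue  # final layer: nothing to relax
        for b in range(len(corr[i][a])):
            nd = d + (1.0 - corr[i][a][b])
            if nd < dist.get((i + 1, b), float("inf")):
                dist[(i + 1, b)] = nd
                prev[(i + 1, b)] = a
                heapq.heappush(heap, (nd, (i + 1, b)))
    # Trace back from the best final-layer candidate.
    end = min(range(len(corr[-1][0])), key=lambda a: dist[(n_layers - 1, a)])
    path = [end]
    for i in range(n_layers - 1, 0, -1):
        path.append(prev[(i, path[-1])])
    return path[::-1]

# Three couch positions, two candidate images each; interface correlations.
corr = [
    [[0.95, 0.60], [0.70, 0.90]],  # position 0 -> 1
    [[0.50, 0.97], [0.92, 0.40]],  # position 1 -> 2
]
print(select_images(corr))  # → [0, 0, 1]
```

Because the graph is layered, the same answer could be obtained with Viterbi-style dynamic programming; Dijkstra is used here to mirror the paper's description.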
Device and methods for "gold standard" registration of clinical 3D and 2D cerebral angiograms
NASA Astrophysics Data System (ADS)
Madan, Hennadii; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2015-03-01
Translation of novel and existing 3D-2D image registration methods into clinical image-guidance systems is limited by the lack of objective validation on clinical image datasets. The main reason is that, besides the calibration of the 2D imaging system, a reference or "gold standard" registration is very difficult to obtain on clinical image datasets. In the context of cerebral endovascular image-guided interventions (EIGIs), we present a calibration device in the form of a headband with integrated fiducial markers and, secondly, propose an automated pipeline comprising 3D and 2D image processing, analysis and annotation steps, the result of which is a retrospective calibration of the 2D imaging system and an optimal, i.e., "gold standard" registration of 3D and 2D images. The device and methods were used to create the "gold standard" on 15 datasets of 3D and 2D cerebral angiograms, where each dataset was acquired on a patient undergoing EIGI for either aneurysm coiling or embolization of an arteriovenous malformation. The use of the device integrated seamlessly into the clinical workflow of EIGI, while the automated pipeline eliminated all manual input and interactive image processing, analysis or annotation. In this way, the time to obtain the "gold standard" was reduced from 30 minutes to less than one minute, and the "gold standard" of 3D-2D registration on all 15 datasets of cerebral angiograms was obtained with sub-0.1 mm accuracy.
Yang, Y X; Teo, S-K; Van Reeth, E; Tan, C H; Tham, I W K; Poh, C L
2015-08-01
Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advances in imaging modalities [e.g., computed tomography (CT) and MRI] have allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating a synthetic 4D-CT dataset for lung cancer patients by combining the continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT, using the authors' proposed approach. A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from a 4D-MRI dataset, creating a synthetic 4D-CT dataset. The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. The hybrid approach achieved a 40% error reduction (based on landmark assessment) over using DIR techniques alone. The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung detail, and is able to show movement of lung and lung tumor over multiple breathing cycles.
Burnett, T. L.; McDonald, S. A.; Gholinia, A.; Geurts, R.; Janus, M.; Slater, T.; Haigh, S. J.; Ornek, C.; Almuaili, F.; Engelberg, D. L.; Thompson, G. E.; Withers, P. J.
2014-01-01
Increasingly researchers are looking to bring together perspectives across multiple scales, or to combine insights from different techniques, for the same region of interest. To this end, correlative microscopy has already yielded substantial new insights in two dimensions (2D). Here we develop correlative tomography where the correlative task is somewhat more challenging because the volume of interest is typically hidden beneath the sample surface. We have threaded together x-ray computed tomography, serial section FIB-SEM tomography, electron backscatter diffraction and finally TEM elemental analysis all for the same 3D region. This has allowed observation of the competition between pitting corrosion and intergranular corrosion at multiple scales revealing the structural hierarchy, crystallography and chemistry of veiled corrosion pits in stainless steel. With automated correlative workflows and co-visualization of the multi-scale or multi-modal datasets the technique promises to provide insights across biological, geological and materials science that are impossible using either individual or multiple uncorrelated techniques. PMID:24736640
Boer, Annemarie; Dutmer, Alisa L; Schiphorst Preuper, Henrica R; van der Woude, Lucas H V; Stewart, Roy E; Deyo, Richard A; Reneman, Michiel F; Soer, Remko
2017-10-01
Validation study with cross-sectional and longitudinal measurements. To translate the US National Institutes of Health (NIH) minimal dataset for clinical research on chronic low back pain into the Dutch language and to test its validity and reliability among people with chronic low back pain. The NIH developed a minimal dataset to encourage more complete and consistent reporting of clinical research and to enable comparison of studies across countries in patients with low back pain. In the Netherlands, the NIH-minimal dataset had not been translated before and its measurement properties were unknown. Cross-cultural validity was tested by a formal forward-backward translation. Structural validity was tested with exploratory factor analyses (comparative fit index, Tucker-Lewis index, and root mean square error of approximation). Hypothesis testing was performed to compare subscales of the NIH dataset with the Pain Disability Index and the EuroQol-5D (Pearson correlation coefficients). Internal consistency was tested with Cronbach α, and test-retest reliability at 2 weeks was calculated in a subsample of patients with intraclass correlation coefficients and weighted kappa (κω). In total, 452 patients were included, of whom 52 took part in the test-retest study. Factor analysis for structural validity pointed in the direction of a seven-factor model (Cronbach α = 0.78). Factors and total score of the NIH-minimal dataset showed fair to good correlations with the Pain Disability Index (r = 0.43-0.70) and the EuroQol-5D (r = -0.41 to -0.64). Reliability: test-retest reliability per item showed substantial agreement (κω = 0.65); test-retest reliability per factor was moderate to good (intraclass correlation coefficient = 0.71). The measurement properties of the Dutch language version of the NIH-minimal dataset were satisfactory.
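For reference, the internal-consistency statistic reported above follows the classic Cronbach's α formula, α = k/(k−1) · (1 − Σ s²ᵢ / s²_total), where k is the number of items, s²ᵢ the item variances and s²_total the variance of the summed score. A minimal sketch with fabricated item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array-like, rows = respondents, columns = scale items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses of five patients to three questionnaire items.
scores = [[3, 4, 3],
          [2, 2, 3],
          [5, 5, 4],
          [1, 2, 1],
          [4, 3, 4]]
print(round(cronbach_alpha(scores), 2))  # → 0.92
```

Sample variances (ddof=1) are used throughout, matching the usual psychometric convention.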
Quantifying the tibiofemoral joint space using x-ray tomosynthesis.
Kalinosky, Benjamin; Sabol, John M; Piacsek, Kelly; Heckel, Beth; Gilat Schmidt, Taly
2011-12-01
Digital x-ray tomosynthesis (DTS) has the potential to provide 3D information about the knee joint in a load-bearing posture, which may improve diagnosis and monitoring of knee osteoarthritis compared with projection radiography, the current standard of care. Manually quantifying and visualizing the joint space width (JSW) from 3D tomosynthesis datasets may be challenging. This work developed a semiautomated algorithm for quantifying the 3D tibiofemoral JSW from reconstructed DTS images. The algorithm was validated through anthropomorphic phantom experiments and applied to three clinical datasets. A user-selected volume of interest within the reconstructed DTS volume was enhanced with 1D multiscale gradient kernels. The edge-enhanced volumes were divided by polarity into tibial and femoral edge maps and combined across kernel scales. A 2D connected components algorithm was performed to determine candidate tibial and femoral edges. A 2D joint space width map (JSW) was constructed to represent the 3D tibiofemoral joint space. To quantify the algorithm accuracy, an adjustable knee phantom was constructed, and eleven posterior-anterior (PA) and lateral DTS scans were acquired with the medial minimum JSW of the phantom set to 0-5 mm in 0.5 mm increments (VolumeRad™, GE Healthcare, Chalfont St. Giles, United Kingdom). The accuracy of the algorithm was quantified by comparing the minimum JSW in a region of interest in the medial compartment of the JSW map to the measured phantom setting for each trial. In addition, the algorithm was applied to DTS scans of a static knee phantom and the JSW map compared to values estimated from a manually segmented computed tomography (CT) dataset. The algorithm was also applied to three clinical DTS datasets of osteoarthritic patients. The algorithm segmented the JSW and generated a JSW map for all phantom and clinical datasets. 
For the adjustable phantom, the estimated minimum JSW values were plotted against the measured values for all trials. A linear fit estimated a slope of 0.887 (R² = 0.962) and a mean error across all trials of 0.34 mm for the PA phantom data. The estimated minimum JSW values for the lateral adjustable phantom acquisitions were found to have low correlation to the measured values (R² = 0.377), with a mean error of 2.13 mm. The error in the lateral adjustable-phantom datasets appeared to be caused by artifacts due to unrealistic features in the phantom bones. JSW maps generated by DTS and CT varied by a mean of 0.6 mm and 0.8 mm across the knee joint, for PA and lateral scans. The tibial and femoral edges were successfully segmented and JSW maps determined for PA and lateral clinical DTS datasets. A semiautomated method is presented for quantifying the 3D joint space in a 2D JSW map using tomosynthesis images. The proposed algorithm quantified the JSW across the knee joint to sub-millimeter accuracy for PA tomosynthesis acquisitions. Overall, the results suggest that x-ray tomosynthesis may be beneficial for diagnosing and monitoring disease progression or treatment of osteoarthritis by providing quantitative images of JSW in the load-bearing knee.
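Once the tibial and femoral edge surfaces are segmented, the 2D JSW map described above reduces to a per-location difference of edge heights, from which a minimum JSW can be read out in a compartmental region of interest. A minimal sketch with hypothetical edge surfaces on a small grid (all values in mm are invented):

```python
import numpy as np

# Hypothetical z-coordinates (mm) of the segmented femoral and tibial edges
# over a small in-plane grid; real maps come from the DTS edge segmentation.
femur_z = np.array([[10.0, 10.2, 10.1, 10.4, 10.6, 10.5],
                    [10.1, 10.0, 10.3, 10.5, 10.4, 10.6]])
tibia_z = np.array([[6.2, 6.5, 6.9, 6.0, 6.1, 6.2],
                    [6.3, 6.8, 6.6, 6.2, 6.0, 6.1]])

# The 2D joint space width map is the per-location edge separation.
jsw_map = femur_z - tibia_z

# Minimum JSW within a (hypothetical) medial-compartment region of interest,
# here taken as the first three columns of the grid.
medial_roi = jsw_map[:, :3]
print(round(float(medial_roi.min()), 1))  # → 3.2
```

This is the quantity compared against the phantom's set minimum JSW in the validation above.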
4D CT amplitude binning for the generation of a time-averaged 3D mid-position CT scan
NASA Astrophysics Data System (ADS)
Kruis, Matthijs F.; van de Kamer, Jeroen B.; Belderbos, José S. A.; Sonke, Jan-Jakob; van Herk, Marcel
2014-09-01
The purpose of this study was to develop a method to use amplitude-binned 4D-CT (A-4D-CT) data for the construction of mid-position CT data and to compare the results with data created from phase-binned 4D-CT (P-4D-CT) data. For the latter purpose we developed two measures that describe the regularity of the 4D data, and we correlated these measures with the regularity of the external respiration signal. 4D-CT data were acquired for 27 patients on a combined PET-CT scanner. The 4D data were reconstructed twice, using phase and amplitude binning. The 4D frames of each dataset were registered using a quadrature-based optical flow method. After registration the deformation vector field was repositioned to the mid-position. Since amplitude-binned 4D data do not provide temporal information, we corrected the mid-position for the occupancy of the bins. We quantified the differences between the two mid-position datasets in terms of tumour offset and amplitude differences. Furthermore, we measured the standard deviation of the image intensity over the respiration after registration (σ_registration) and the regularity of the deformation vector field (mean Δ|J|) to quantify the quality of the 4D-CT data. These measures were correlated to the regularity of the external respiration signal (σ_signal). The two irregularity measures, mean Δ|J| and σ_registration, were dependent on each other (p < 0.0001, R² = 0.80 for P-4D-CT, R² = 0.74 for A-4D-CT). For all datasets amplitude binning resulted in lower mean Δ|J| and σ_registration, and large decreases led to visible quality improvements in the mid-position data. The magnitude of the artefact decrease was correlated to the irregularity of the external respiratory signal. The average tumour offset between the phase- and amplitude-binned mid-position without occupancy correction was 0.42 mm in the caudal direction (10.6% of the amplitude).
After correction this was reduced to 0.16 mm in the caudal direction (4.1% of the amplitude). Similar relative offsets were found at the diaphragm. We have devised a method to use amplitude-binned 4D-CT to construct a motion model and generate a mid-position planning CT for radiotherapy treatment purposes. We compared the systematic offset of this mid-position model with that of a motion model derived from P-4D-CT. We found that A-4D-CT led to a decrease of local artefacts and that this decrease was correlated to the irregularity of the external respiration signal.
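The occupancy correction mentioned above compensates for the fact that amplitude bins carry no timing: a breathing trace dwells longest near the extremes, so each bin must be weighted by the fraction of time spent in it. A minimal sketch with a synthetic cosine breathing trace and an invented linear amplitude-to-displacement mapping (none of this is the patients' data):

```python
import numpy as np

# Synthetic respiration amplitude trace (a.u.), sampled uniformly in time
# over one breathing cycle.
t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
amplitude = np.cos(t)

# Amplitude binning: assign each sample to one of 5 equal-width bins.
edges = np.linspace(-1.0, 1.0, 6)
bins = np.clip(np.digitize(amplitude, edges) - 1, 0, 4)

# Occupancy of each bin = fraction of time spent there; a cosine trace
# dwells longest near the extremes (end-inhale / end-exhale).
occupancy = np.bincount(bins, minlength=5) / bins.size

# Occupancy-corrected mid-position of a structure whose displacement tracks
# the amplitude linearly (hypothetical 10 mm peak-to-peak motion).
bin_centres_mm = np.linspace(-5.0, 5.0, 5)
mid_position = float(np.sum(occupancy * bin_centres_mm))

print(int(occupancy.argmin()))  # → 2 (the central bin is least occupied)
```

A plain average over bins would weight all five positions equally; the occupancy-weighted mean is what makes the amplitude-binned mid-position time-representative.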
LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance
NASA Astrophysics Data System (ADS)
Ellul, C.; Altenbuchner, J.
2013-09-01
The increasing availability, size and detail of 3D City Model datasets have led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets at Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail, in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue for the test area of the city of Sheffield (UK). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.
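For context on where the LoD 1 triangle counts come from: extruding a 2D footprint yields two wall triangles per footprint edge plus a triangulated flat roof. A minimal sketch (pure Python, convex footprints only, fan-triangulated roof; a production pipeline for national mapping data is of course more involved):

```python
def extrude_lod1(footprint, height):
    """Extrude a 2-D building footprint (counter-clockwise list of (x, y))
    into LoD 1 wall and roof triangles: a minimal sketch of how flat-roof
    city models are produced from 2-D topographic data."""
    tris = []
    n = len(footprint)
    # Walls: two triangles per footprint edge.
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        a, b = (x1, y1, 0.0), (x2, y2, 0.0)
        c, d = (x2, y2, height), (x1, y1, height)
        tris.append((a, b, c))
        tris.append((a, c, d))
    # Flat roof: fan triangulation (valid for convex footprints).
    top = [(x, y, height) for x, y in footprint]
    for i in range(1, n - 1):
        tris.append((top[0], top[i], top[i + 1]))
    return tris

# A 10 m x 6 m rectangular footprint extruded to 8 m: 8 wall + 2 roof triangles.
tris = extrude_lod1([(0, 0), (10, 0), (10, 6), (0, 6)], 8.0)
print(len(tris))  # → 10
```

LoD 2 roof structures replace the flat roof fan with many more triangles per building, which is one source of the rendering-time difference measured above.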
The 3D Reference Earth Model: Status and Preliminary Results
NASA Astrophysics Data System (ADS)
Moulik, P.; Lekic, V.; Romanowicz, B. A.
2017-12-01
In the 20th century, seismologists constructed models of how average physical properties (e.g. density, rigidity, compressibility, anisotropy) vary with depth in the Earth's interior. These one-dimensional (1D) reference Earth models (e.g. PREM) have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, new datasets motivated more sophisticated efforts that yielded models of how properties vary both laterally and with depth in the Earth's interior. Though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and datasets upon which they are based have prevented the creation of 3D community reference models. As part of the REM-3D project, we are compiling and reconciling reference seismic datasets of body wave travel-time measurements, fundamental mode and overtone surface wave dispersion measurements, and normal mode frequencies and splitting functions. These reference datasets are being inverted for a long-wavelength 3D reference Earth model that describes the robust long-wavelength features of mantle heterogeneity. As a community reference model with fully quantified uncertainties and tradeoffs and an associated publicly available dataset, REM-3D will facilitate Earth imaging studies, earthquake characterization, inferences on temperature and composition in the deep interior, and be of improved utility to emerging scientific endeavors, such as neutrino geoscience. Here, we summarize progress made in the construction of the reference long period dataset and present a preliminary version of REM-3D in the upper mantle.
In order to determine the level of detail warranted for inclusion in REM-3D, we analyze the spectrum of discrepancies between models inverted with different subsets of the reference dataset. This procedure allows us to evaluate the extent of consistency in imaging heterogeneity at various depths and between spatial scales.
MiRNA-181d Expression Significantly Affects Treatment Responses to Carmustine Wafer Implantation.
Sippl, Christoph; Ketter, Ralf; Bohr, Lisa; Kim, Yoo Jin; List, Markus; Oertel, Joachim; Urbschat, Steffi
2018-05-26
Standard therapeutic protocols for glioblastoma, the most aggressive type of brain cancer, include surgery followed by chemoradiotherapy. Additionally, carmustine-eluting wafers can be implanted locally into the resection cavity. The aim was to evaluate microRNA (miRNA)-181d as a prognostic marker of responses to carmustine wafer implantation. A total of 80 glioblastoma patients (40/group) were included in a matched pair analysis. One group (carmustine wafer group) received concomitant chemoradiotherapy (Stupp protocol) with carmustine wafer implantation. The second group (control group) received concomitant chemoradiotherapy only. All tumor specimens were evaluated for miRNA-181d expression, and the results were correlated with individual clinical data. The Cancer Genome Atlas (TCGA) dataset of 149 patients was used as an independent cohort to validate the results. Patients in the carmustine wafer group with low miRNA-181d expression had significantly longer overall (hazard ratio [HR], 35.03, [95% confidence interval (CI): 3.50-350.23], P = .002) and progression-free survival (HR, 20.23, [95% CI: 2.19-186.86], P = .008) than patients of the same group with high miRNA-181d expression. These correlations were not observed in the control group. The nonsignificance in the control group was confirmed in the independent TCGA dataset. The carmustine wafer group patients with low miRNA-181d expression also had significantly longer progression-free (P = .049) and overall survival (OS) (P = .034) compared with control group patients. Gross total resection correlated significantly with longer OS (P = .023). MiRNA-181d expression significantly affects treatment responses to carmustine wafer implantation.
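The reported hazard ratios and 95% confidence intervals can be sanity-checked against the quoted p-values with standard Wald statistics: on the log scale, the CI half-width gives the standard error of log(HR). A minimal sketch in plain Python (not the authors' analysis code), using the values quoted above:

```python
import math

def wald_p_from_hr(hr, ci_low, ci_high):
    """Two-sided Wald p-value recovered from a hazard ratio and its 95% CI.

    On the log scale the 95% CI spans 2 * 1.96 standard errors, so
    SE(log HR) = (log(ci_high) - log(ci_low)) / (2 * 1.96).
    """
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.959964)
    z = math.log(hr) / se
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal tail probability

# Overall survival, carmustine wafer group: HR 35.03 (95% CI 3.50-350.23), reported P = .002
p_os = wald_p_from_hr(35.03, 3.50, 350.23)
# Progression-free survival: HR 20.23 (95% CI 2.19-186.86), reported P = .008
p_pfs = wald_p_from_hr(20.23, 2.19, 186.86)
```

Both recovered p-values round to the reported figures, which suggests the abstract's intervals and p-values come from the same Wald-type test.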
NASA Astrophysics Data System (ADS)
Gillespie, D.; La Pensée, A.; Cooper, M.
2013-07-01
Three dimensional (3D) laser scanning is an important documentation technique for cultural heritage. This technology has been adopted from the engineering and aeronautical industry and is an invaluable tool for the documentation of objects within museum collections (La Pensée, 2008). The datasets created via close range laser scanning are extremely accurate and allow a more detailed analysis than other documentation technologies such as photography. The dataset can be used for a range of different applications including: documentation; archiving; surface monitoring; replication; gallery interactives; educational sessions; conservation and visualization. However, the novel nature of a 3D dataset presents a unique challenge for sharing and dissemination. This is in part due to the need for specialised 3D software and a supported graphics card to display high resolution 3D models. This can be detrimental to one of the main goals of cultural institutions, which is to share knowledge and enable activities such as research, education and entertainment. It has limited the presentation of 3D models of cultural heritage objects mainly to images or videos. Yet with recent developments in computer graphics, increased internet speed and emerging technologies such as Adobe's Stage 3D (Adobe, 2013) and WebGL (Khronos, 2013), it is now possible to share a dataset directly within a webpage. This allows website visitors to interact with the 3D dataset, exploring every angle of the object and gaining an insight into its shape and nature. This can be very important considering that it is difficult to offer the same level of understanding of the object through traditional mediums such as photographs and videos. Yet this presents its own problems: the experience is novel, and very few people have engaged with 3D objects outside of 3D software packages or games. 
This paper presents results of research that aims to provide a methodology for museums and cultural institutions for prototyping a 3D viewer within a webpage, thereby not only allowing institutions to promote their collections via the internet but also providing a tool for users to engage in a meaningful way with cultural heritage datasets. The design process encompasses evaluation as the central part of the design methodology, focusing on how slight changes to navigation, object engagement and aesthetic appearance can influence the user's experience. The prototype used in this paper was created using WebGL with the Three.js (Three.JS, 2013) library, and datasets were loaded in the OpenCTM (Geelnard, 2010) file format. The overall design is centred on creating an easy-to-learn interface allowing non-skilled users to interact with the datasets, while also providing tools allowing skilled users to discover more about the cultural heritage object. User testing was carried out, allowing users to interact with 3D datasets within the interactive viewer. The results are analysed and the insights learned are discussed in relation to an interface designed for interacting with 3D content. The results will inform the design of interfaces for interacting with 3D objects, which allow both skilled and non-skilled users to engage with 3D cultural heritage objects in a meaningful way.
Phylo_dCor: distance correlation as a novel metric for phylogenetic profiling.
Sferra, Gabriella; Fratini, Federica; Ponzi, Marta; Pizzi, Elisabetta
2017-09-05
Elaboration of powerful methods to predict functional and/or physical protein-protein interactions from genome sequence is one of the main tasks of the post-genomic era. Phylogenetic profiling allows the prediction of protein-protein interactions at a whole-genome level in both Prokaryotes and Eukaryotes, and for this reason it is considered one of the most promising methods. Here, we propose an improvement of phylogenetic profiling that enables the handling of large genomic datasets and the inference of global protein-protein interactions. This method uses the distance correlation as a new measure of phylogenetic profile similarity. We constructed and assessed robust reference sets and developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation that makes it applicable to large genomic data. Using Saccharomyces cerevisiae and Escherichia coli genome datasets, we showed that Phylo-dCor outperforms phylogenetic profiling methods previously described that use mutual information and Pearson's correlation as measures of profile similarity. Two R scripts that can be run on a wide range of machines are available upon request.
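As a minimal illustration of the similarity measure itself (not the parallelized Phylo-dCor implementation, which is in R), Székely's distance correlation can be computed from double-centered pairwise distance matrices:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation (Székely) between two 1D variables."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)                                  # pairwise distance matrices
    b = np.abs(y - y.T)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                               # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else float(np.sqrt(max(dcov2 / denom, 0.0)))

x = np.linspace(-1.0, 1.0, 200)
dcor_linear = distance_correlation(x, 3.0 * x + 1.0)  # exactly 1 for a linear relation
dcor_quad = distance_correlation(x, x ** 2)           # nonzero despite zero Pearson correlation
```

Unlike Pearson's correlation, the distance correlation is zero only under independence, which is what makes it attractive for comparing phylogenetic profiles with nonlinear relationships.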
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y. X.; Van Reeth, E.; Poh, C. L., E-mail: clpoh@ntu.edu.sg
2015-08-15
Purpose: Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating a synthetic 4D-CT dataset for lung cancer patients by combining both the continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors' proposed approach. Methods: A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from the 4D-MRI dataset, creating a synthetic 4D-CT dataset. Results: The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmark assessment) over using only DIR techniques. Conclusions: The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.
Naveja, J. Jesús; Medina-Franco, José L.
2017-01-01
We present a novel approach called ChemMaps for visualizing chemical space based on the similarity matrix of compound datasets generated with molecular fingerprints' similarity. The method uses a 'satellites' approach, where satellites are, in principle, molecules whose similarity to the rest of the molecules in the database provides sufficient information for generating a visualization of the chemical space. Such an approach could help make chemical space visualizations more efficient. We hereby describe a proof-of-principle application of the method to various databases that have different diversity measures. Unsurprisingly, we found the method works better with databases that have low 2D diversity. 3D diversity played a secondary role, although it seems to be more relevant as 2D diversity increases. For less diverse datasets, taking as few as 25% satellites seems to be sufficient for a fair depiction of the chemical space. We propose to iteratively grow the satellite set in increments of 5% of the whole database, and stop when the new and the prior chemical space correlate highly. This Research Note represents a first exploratory step, prior to the full application of this method to several datasets. PMID:28794856
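The iterative satellite-selection loop described above can be sketched as follows. Everything here is a hypothetical stand-in (random binary fingerprints, random satellite ordering, a 0.99 stopping threshold, and pairwise distances in the similarity-to-satellites representation as the "chemical space"), not the authors' datasets or code:

```python
import numpy as np

rng = np.random.default_rng(7)
fps = rng.integers(0, 2, size=(200, 128))             # hypothetical binary fingerprints

def tanimoto(fp, ref_set):
    """Tanimoto similarity of one fingerprint against a set of fingerprints."""
    inter = (fp & ref_set).sum(1)
    union = fp.sum() + ref_set.sum(1) - inter
    return inter / union                              # union > 0 for non-empty fingerprints

def space_distances(sat_idx):
    """Pairwise distances in the 'similarity to satellites' representation."""
    rep = np.stack([tanimoto(fp, fps[sat_idx]) for fp in fps])
    diff = rep[:, None, :] - rep[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

n = len(fps)
iu = np.triu_indices(n, 1)
order = rng.permutation(n)                            # satellites chosen at random here
frac = 0.25                                           # start with 25% satellites
prev = space_distances(order[: int(frac * n)])
while frac < 1.0:
    frac = min(frac + 0.05, 1.0)                      # grow the satellite set by 5%
    cur = space_distances(order[: int(frac * n)])
    r = np.corrcoef(prev[iu], cur[iu])[0, 1]          # do the two spaces agree?
    if r > 0.99:
        break
    prev = cur
```

The stopping rule mirrors the abstract's criterion: stop adding satellites once the space built from the enlarged set correlates highly with the previous one.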
Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning
2015-08-27
This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. A visual saliency scheme utilizing both color and depth cues is proposed to direct the system's attention to unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from the MRF are further refined by merging labeled objects that are spatially connected and have highly correlated color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. Experiments in object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications.
Speeding up 3D speckle tracking using PatchMatch
NASA Astrophysics Data System (ADS)
Zontak, Maria; O'Donnell, Matthew
2016-03-01
Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4D (3D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successful matches among neighboring voxels. We show that: 1) inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand; 2) for typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
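A simplified sketch of the PatchMatch idea (random initialization, propagation of neighboring offsets along alternating scan orders, and an exponentially shrinking random search) is given below. This is a 2D, single-frame toy version; the method described above operates on 3D RF volumes, and all sizes and seeds here are invented:

```python
import numpy as np

def ssd(A, B, ay, ax, by, bx, P):
    """Sum of squared differences between the (2P+1)x(2P+1) patches
    centered at (ay, ax) in A and (by, bx) in B."""
    pa = A[ay - P:ay + P + 1, ax - P:ax + P + 1]
    pb = B[by - P:by + P + 1, bx - P:bx + P + 1]
    return float(((pa - pb) ** 2).sum())

def patchmatch(A, B, P=2, iters=8, seed=0):
    """Approximate nearest-neighbor offset field from A to B."""
    rng = np.random.default_rng(seed)
    H, W = A.shape
    off = np.zeros((H, W, 2), dtype=int)
    cost = np.full((H, W), np.inf)
    for y in range(P, H - P):                   # random initialization
        for x in range(P, W - P):
            by, bx = int(rng.integers(P, H - P)), int(rng.integers(P, W - P))
            off[y, x] = (by - y, bx - x)
            cost[y, x] = ssd(A, B, y, x, by, bx, P)
    for it in range(iters):
        fwd = it % 2 == 0                       # alternate scan direction
        ys = range(P, H - P) if fwd else range(H - P - 1, P - 1, -1)
        xs = range(P, W - P) if fwd else range(W - P - 1, P - 1, -1)
        d = -1 if fwd else 1
        for y in ys:
            for x in xs:
                # propagation: adopt an already-updated neighbor's offset if better
                for ny, nx in ((y + d, x), (y, x + d)):
                    if P <= ny < H - P and P <= nx < W - P:
                        dy, dx = off[ny, nx]
                        by, bx = y + dy, x + dx
                        if P <= by < H - P and P <= bx < W - P:
                            c = ssd(A, B, y, x, by, bx, P)
                            if c < cost[y, x]:
                                cost[y, x], off[y, x] = c, (dy, dx)
                # random search in an exponentially shrinking window
                w = max(H, W)
                while w >= 1:
                    dy, dx = off[y, x]
                    by = y + dy + int(rng.integers(-w, w + 1))
                    bx = x + dx + int(rng.integers(-w, w + 1))
                    if P <= by < H - P and P <= bx < W - P:
                        c = ssd(A, B, y, x, by, bx, P)
                        if c < cost[y, x]:
                            cost[y, x], off[y, x] = c, (by - y, bx - x)
                    w //= 2
    return off, cost

# Toy check: B is a smoothed random frame shifted by (2, 3)
base = np.random.default_rng(1).random((36, 36))
A = sum(np.roll(np.roll(base, dy, 0), dx, 1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
B = np.roll(A, (2, 3), axis=(0, 1))
off, cost = patchmatch(A, B)
```

The propagation step is what the abstract refers to as inherently smooth offset propagation: a good match found at one voxel is offered to its neighbors for free, without storing any correlation functions.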
Qin, Zijian; Wang, Maolin; Yan, Aixia
2017-07-01
In this study, quantitative structure-activity relationship (QSAR) models using various descriptor sets and training/test set selection methods were explored to predict the bioactivity of hepatitis C virus (HCV) NS3/4A protease inhibitors, using a multiple linear regression (MLR) and a support vector machine (SVM) method. 512 HCV NS3/4A protease inhibitors and their IC50 values, determined by the same FRET assay, were collected from the literature to build a dataset. All the inhibitors were represented with nine selected global and 12 2D property-weighted autocorrelation descriptors calculated by the program CORINA Symphony. The dataset was divided into a training set and a test set by a random and a Kohonen's self-organizing map (SOM) method. The correlation coefficients (r²) of the training and test sets were 0.75 and 0.72 for the best MLR model, and 0.87 and 0.85 for the best SVM model, respectively. In addition, a series of sub-dataset models were also developed. All the best sub-dataset models performed better than the corresponding whole-dataset models. We believe that the combination of the best sub- and whole-dataset SVM models can be used as a reliable lead-designing tool for new NS3/4A protease inhibitor scaffolds in a drug discovery pipeline.
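The MLR half of such a workflow can be sketched with plain least squares. The data below are synthetic (random descriptors and a random linear response); only the sample size, the 9 + 12 descriptor counts, and the random ~80/20 split mirror the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_desc = 21                                   # 9 global + 12 autocorrelation descriptors
X = rng.normal(size=(512, n_desc))            # synthetic descriptor matrix
w = rng.normal(size=n_desc)
w /= np.linalg.norm(w)
y = X @ w + rng.normal(scale=0.4, size=512)   # synthetic pIC50-like response

idx = rng.permutation(512)
train, test = idx[:410], idx[410:]            # ~80/20 random split

# Ordinary least squares with an intercept column
A = np.column_stack([X[train], np.ones(len(train))])
beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)

def r2(Xs, ys):
    """Coefficient of determination of the fitted MLR model."""
    pred = np.column_stack([Xs, np.ones(len(Xs))]) @ beta
    ss_res = ((ys - pred) ** 2).sum()
    ss_tot = ((ys - ys.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

r2_train, r2_test = r2(X[train], y[train]), r2(X[test], y[test])
```

The SOM-based split and the SVM model would replace the random split and the least-squares fit respectively; the r² bookkeeping stays the same.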
García-Jacas, César R; Contreras-Torres, Ernesto; Marrero-Ponce, Yovani; Pupo-Meriño, Mario; Barigye, Stephen J; Cabrera-Leyva, Lisset
2016-01-01
Recently, novel 3D alignment-free molecular descriptors (also known as QuBiLS-MIDAS) based on two-linear, three-linear and four-linear algebraic forms have been introduced. These descriptors codify chemical information for relations between two, three and four atoms by using several (dis-)similarity metrics and multi-metrics. Several studies aimed at assessing the quality of these novel descriptors have been performed. However, a deeper analysis of their performance is necessary. Therefore, in the present manuscript an assessment and statistical validation of the performance of these novel descriptors in QSAR studies is performed. To this end, eight molecular datasets (angiotensin converting enzyme, acetylcholinesterase inhibitors, benzodiazepine receptor, cyclooxygenase-2 inhibitors, dihydrofolate reductase inhibitors, glycogen phosphorylase b, thermolysin inhibitors, thrombin inhibitors) widely used as benchmarks in the evaluation of several procedures are utilized. Three- to nine-variable QSAR models based on Multiple Linear Regression are built for each chemical dataset according to the original division into training/test sets. Comparisons with respect to leave-one-out cross-validation correlation coefficients (q²) reveal that the models based on QuBiLS-MIDAS indices possess superior predictive ability in 7 of the 8 datasets analyzed, outperforming methodologies based on similar or more complex techniques such as Partial Least Squares, Neural Networks, Support Vector Machines and others. On the other hand, superior external correlation coefficients are attained in 6 of the 8 test sets considered, confirming the good predictive power of the obtained models. For the q² values, non-parametric statistical tests were performed, which demonstrated that the models based on QuBiLS-MIDAS indices have the best global performance and yield significantly better predictions in 11 of the 12 QSAR procedures used in the comparison. 
Lastly, the performance of the indices was studied across several conformer generation methods. This demonstrated that the quality of predictions of the QSAR models based on QuBiLS-MIDAS indices depends on the 3D structure generation method considered, although in this preliminary study the results achieved do not present significant statistical differences among them. In conclusion, the QuBiLS-MIDAS indices are suitable for extracting structural information from molecules and thus constitute a promising alternative for building models that contribute to the prediction of pharmacokinetic, pharmacodynamic and toxicological properties of novel compounds. Graphical abstract: Comparative graphical representation of the performance of the novel QuBiLS-MIDAS 3D-MDs with respect to other methodologies in QSAR modeling of eight chemical datasets.
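The leave-one-out cross-validated coefficient used to compare such models is the PRESS-based q². A small numpy sketch on synthetic data (not the QuBiLS-MIDAS models themselves) makes the computation explicit:

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated q2 for an MLR model (PRESS-based)."""
    n = len(y)
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        A = np.column_stack([X[keep], np.ones(n - 1)])      # refit without sample i
        beta, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        pred = np.append(X[i], 1.0) @ beta                  # predict the held-out sample
        press += (y[i] - pred) ** 2
    return 1.0 - press / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5))                                # synthetic descriptors
y = X @ rng.normal(size=5) + rng.normal(scale=0.3, size=60) # synthetic activity
q2 = q2_loo(X, y)
```

Unlike the training-set r², q² penalizes overfitting because every prediction is made by a model that never saw that sample.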
Deng, Lei; Fan, Chao; Zeng, Zhiwen
2017-12-28
Direct prediction of the three-dimensional (3D) structures of proteins from one-dimensional (1D) sequences is a challenging problem. Significant structural characteristics such as solvent accessibility and contact number are essential for deriving restraints in modeling protein folding and protein 3D structure. Thus, accurately predicting these features is a critical step for 3D protein structure building. In this study, we present DeepSacon, a computational method that can effectively predict protein solvent accessibility and contact number using a deep neural network built on stacked autoencoders and dropout. The results demonstrate that our proposed DeepSacon achieves a significant improvement in prediction quality compared with the state-of-the-art methods. We obtain 0.70 three-state accuracy for solvent accessibility, and 0.33 fifteen-state accuracy and 0.74 Pearson Correlation Coefficient (PCC) for the contact number, on the 5729 monomeric soluble globular protein dataset. We also evaluated performance on the CASP11 benchmark dataset, where DeepSacon achieves 0.68 three-state accuracy and 0.69 PCC for solvent accessibility and contact number, respectively. We have shown that DeepSacon can reliably predict solvent accessibility and contact number with a stacked sparse autoencoder and a dropout approach.
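The two metrics quoted above (three-state accuracy and PCC) can be sketched as follows. The discretization thresholds for buried/intermediate/exposed are hypothetical, since the abstract does not state the cutoffs, and the data are synthetic:

```python
import numpy as np

def three_state(rsa, lo=0.09, hi=0.36):
    """Discretize relative solvent accessibility into three classes.
    The thresholds here are hypothetical stand-ins, not the paper's cutoffs."""
    return np.digitize(rsa, [lo, hi])            # 0 = buried, 1 = intermediate, 2 = exposed

rng = np.random.default_rng(5)
true_rsa = rng.random(500)                       # synthetic ground-truth accessibility
pred_rsa = np.clip(true_rsa + rng.normal(scale=0.1, size=500), 0.0, 1.0)

# Three-state accuracy: fraction of residues assigned to the correct class
acc3 = float((three_state(true_rsa) == three_state(pred_rsa)).mean())
# Pearson Correlation Coefficient on the continuous values
r = float(np.corrcoef(true_rsa, pred_rsa)[0, 1])
```

Reporting both is informative: PCC rewards getting the continuous trend right, while three-state accuracy is sensitive to errors near the class boundaries.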
Relative Error Evaluation of Typical Open Global DEM Datasets in Shanxi Plateau of China
NASA Astrophysics Data System (ADS)
Zhao, S.; Zhang, S.; Cheng, W.
2018-04-01
Global DEM datasets, produced from radar data or stereo remote sensing image pairs, are one of the most important types of DEM data. Relative error reflects the quality of the surface represented by DEM data, and therefore matters for geomorphological and hydrological applications. Taking the Shanxi Plateau of China as the study area, this research evaluated the relative error of typical open global DEM datasets, including Shuttle Radar Topography Mission (SRTM) data at 1 arc second resolution (SRTM1), SRTM data at 3 arc second resolution (SRTM3), ASTER global DEM data in its second version (GDEM-v2) and ALOS World 3D-30m (AW3D) data. After processing and selection, more than 300,000 ICESat/GLA14 points were used as the GCP data, and the vertical error was computed and compared among the four typical global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were acquired using a distance threshold between 100 m and 500 m. The horizontal distance between every point pair was computed, and the relative error was derived as a slope value based on the vertical error difference and the horizontal distance of each point pair. Finally, the false slope ratio (FSR) index was computed by analyzing the difference between DEM and ICESat/GLA14 values for every point pair. Both the relative error and the FSR index were compared by category for the four DEM datasets under different slope classes. Research results show that, overall, AW3D has the lowest relative error values in mean error, mean absolute error, root mean square error and standard deviation; the SRTM1 values are slightly higher than those of AW3D; the SRTM3 and GDEM-v2 data have the highest relative error values, and the values for these two datasets are similar. 
Considering different slope conditions, all four DEM datasets perform better in flat areas and worse in sloping regions; AW3D has the best performance in all slope classes, slightly better than SRTM1; as slope increases, the relative error of the SRTM3 data increases faster than that of the other DEM datasets, so SRTM3 is better than GDEM-v2 in flat regions but worse in sloping regions. As for the FSR values, AW3D has the lowest, 4.37 %; then the SRTM1 data, 5.80 %, similar to AW3D; SRTM3 is higher, about 8.27 %; and the GDEM-v2 data has the highest FSR value, about 12.15 %. The FSR represents how correctly the earth surface is reproduced by DEM data. Hence, AW3D has the best performance, approximately equal to but slightly better than SRTM1. The performance of SRTM3 and GDEM-v2 is similar, much worse than that of AW3D and SRTM1, and the performance of GDEM-v2 is the worst of all. Derived from a DEM dataset with 5 m resolution, AW3D is regarded as the most precise open global DEM dataset to date, so it may exert more influence in topographic analysis and geographic research. Through analysis and comparison of the relative error of the four open global DEM datasets, this research provides a reference for open global DEM dataset selection and applications in geosciences and other relevant fields.
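Our reading of the relative-error and FSR computations can be sketched on synthetic points. The 100-500 m pairing window follows the description above; the coordinates, elevations, and error model are invented, and the FSR definition below (fraction of pairs whose DEM slope direction contradicts the reference) is our interpretation of the abstract:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1500
xy = rng.uniform(0, 5000, size=(n, 2))             # hypothetical point positions (m)
z_ref = rng.uniform(800, 1500, size=n)             # ICESat/GLA14-style reference elevations (m)
z_dem = z_ref + rng.normal(scale=3.0, size=n)      # DEM elevations with vertical error

# Point pairs whose horizontal distance falls in the 100-500 m window
diff = xy[:, None, :] - xy[None, :, :]
d = np.sqrt((diff ** 2).sum(-1))
iu = np.triu_indices(n, 1)
sel = (d[iu] > 100) & (d[iu] < 500)
i, j = iu[0][sel], iu[1][sel]

ve = z_dem - z_ref                                 # vertical error per point
rel_err = (ve[i] - ve[j]) / d[i, j]                # relative error: slope of the error surface

# False slope ratio: pairs where the DEM slope direction contradicts the reference
fsr = float((np.sign(z_dem[i] - z_dem[j]) != np.sign(z_ref[i] - z_ref[j])).mean())
```

With a small, spatially uncorrelated vertical error the FSR stays low; in real DEMs it grows in flat terrain, where elevation differences between nearby points are comparable to the error.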
NASA Astrophysics Data System (ADS)
Hazelaar, Colien; Dahele, Max; Mostafavi, Hassan; van der Weide, Lineke; Slotman, Ben; Verbakel, Wilko
2018-06-01
Lung tumors treated in breath-hold are subject to inter- and intra-breath-hold variations, which makes tumor position monitoring during each breath-hold important. A markerless technique is desirable, but limited tumor visibility on kV images makes this challenging. We evaluated if template matching + triangulation of kV projection images acquired during breath-hold stereotactic treatments could determine 3D tumor position. Band-pass filtering and/or digital tomosynthesis (DTS) were used as image pre-filtering/enhancement techniques. On-board kV images continuously acquired during volumetric modulated arc irradiation of (i) a 3D-printed anthropomorphic thorax phantom with three lung tumors (n = 6 stationary datasets, n = 2 gradually moving), and (ii) four patients (13 datasets) were analyzed. 2D reference templates (filtered DRRs) were created from planning CT data. Normalized cross-correlation was used for 2D matching between templates and pre-filtered/enhanced kV images. For 3D verification, each registration was triangulated with multiple previous registrations. Generally applicable image processing/algorithm settings for lung tumors in breath-hold were identified. For the stationary phantom, the interquartile range of the 3D position vector was on average 0.25 mm for 12° DTS + band-pass filtering (average detected positions in 2D = 99.7%, 3D = 96.1%, and 3D excluding first 12° due to triangulation angle = 99.9%) compared to 0.81 mm for band-pass filtering only (55.8/52.9/55.0%). For the moving phantom, RMS errors for the lateral/longitudinal/vertical direction after 12° DTS + band-pass filtering were 1.5/0.4/1.1 mm and 2.2/0.3/3.2 mm. For the clinical data, 2D position was determined for at least 93% of each dataset and 3D position excluding first 12° for at least 82% of each dataset using 12° DTS + band-pass filtering. Template matching + triangulation using DTS + band-pass filtered images could accurately determine the position of stationary lung tumors. 
However, triangulation was less accurate/reliable for targets with continuous, gradual displacement in the lateral and vertical directions. This technique is therefore currently most suited to detect/monitor offsets occurring between initial setup and the start of treatment, inter-breath-hold variations, and tumors with predominantly longitudinal motion.
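The core 2D matching step above (normalized cross-correlation between a filtered reference template and a kV image) can be sketched as an exhaustive search. This is a toy stand-in: the frame and template are random arrays, whereas the study matches DTS/band-pass-filtered kV images against filtered DRRs:

```python
import numpy as np

def ncc_match(image, tmpl):
    """Exhaustive normalized cross-correlation; returns the best (y, x) and score."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue                      # flat window: correlation undefined
            score = (w * t).sum() / denom     # NCC in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

rng = np.random.default_rng(2)
frame = rng.random((80, 80))                  # stand-in for a pre-filtered kV image
template = frame[30:42, 50:62].copy()         # stand-in for a filtered DRR template
pos, score = ncc_match(frame, template)       # recovers (30, 50) with score ~1.0
```

Because both the window and template are mean-subtracted and normalized, the score is invariant to local brightness and contrast, which is why NCC tolerates the intensity differences between DRRs and kV projections.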
NASA Astrophysics Data System (ADS)
Mickevicius, Nikolai J.; Paulson, Eric S.
2017-04-01
The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D-MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-guided radiation therapy. A 3D stack-of-stars, self-navigated 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high-performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D stack-of-stars sequence and parallelized reconstruction algorithms, run on computing hardware more advanced than that typically found on product MRI scanners, can result in acquisition and reconstruction of high-quality respiratory-correlated 4D-MRI images in less than five minutes.
Simultaneous tumor and surrogate motion tracking with dynamic MRI for radiation therapy planning
NASA Astrophysics Data System (ADS)
Park, Seyoun; Farah, Rana; Shea, Steven M.; Tryggestad, Erik; Hales, Russell; Lee, Junghoon
2018-01-01
Respiration-induced tumor motion is a major obstacle for achieving high-precision radiotherapy of cancers in the thoracic and abdominal regions. Surrogate-based estimation and tracking methods are commonly used in radiotherapy, but with limited understanding of quantified correlation to tumor motion. In this study, we propose a method to simultaneously track the lung tumor and external surrogates to evaluate their spatial correlation in a quantitative way using dynamic MRI, which allows real-time acquisition without ionizing radiation exposure. To capture the lung and whole tumor, four MRI-compatible fiducials are placed on the patient’s chest and upper abdomen. Two different types of acquisitions are performed in the sagittal orientation including multi-slice 2D cine MRIs to reconstruct 4D-MRI and two-slice 2D cine MRIs to simultaneously track the tumor and fiducials. A phase-binned 4D-MRI is first reconstructed from multi-slice MR images using body area as a respiratory surrogate and groupwise registration. The 4D-MRI provides 3D template volumes for different breathing phases. 3D tumor position is calculated by 3D-2D template matching in which 3D tumor templates in the 4D-MRI reconstruction and the 2D cine MRIs from the two-slice tracking dataset are registered. 3D trajectories of the external surrogates are derived via matching a 3D geometrical model of the fiducials to their segmentations on the 2D cine MRIs. We tested our method on ten lung cancer patients. Using a correlation analysis, the 3D tumor trajectory demonstrates a noticeable phase mismatch and significant cycle-to-cycle motion variation, while the external surrogate was not sensitive enough to capture such variations. Additionally, there was significant phase mismatch between surrogate signals obtained from the fiducials at different locations.
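The phase mismatch reported above can be quantified as the lag that maximizes the Pearson correlation between the tumor trajectory and the surrogate signal. A toy sketch with sinusoidal signals, an assumed 25 Hz cine frame rate, and an invented 0.4 s delay:

```python
import numpy as np

def best_lag(a, b, max_lag):
    """Lag (in samples) by which b is delayed relative to a, found by
    maximizing the Pearson correlation over candidate lags."""
    def corr_at(lag):
        if lag >= 0:
            u, v = a[:len(a) - lag], b[lag:]
        else:
            u, v = a[-lag:], b[:len(b) + lag]
        return np.corrcoef(u, v)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

fs = 25.0                                         # assumed cine frame rate (Hz)
t = np.arange(0, 60, 1 / fs)                      # one minute of ~0.25 Hz breathing
tumor = np.sin(2 * np.pi * 0.25 * t)              # internal (tumor) trajectory
surrogate = np.sin(2 * np.pi * 0.25 * (t - 0.4))  # external signal, delayed by 0.4 s

lag = best_lag(tumor, surrogate, max_lag=50)
phase_delay_s = lag / fs                          # recovers the injected 0.4 s delay
```

Real trajectories also show cycle-to-cycle variation, so in practice the lag would be estimated per breathing cycle rather than once over the whole recording.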
Evaluation of lung tumor motion management in radiation therapy with dynamic MRI
NASA Astrophysics Data System (ADS)
Park, Seyoun; Farah, Rana; Shea, Steven M.; Tryggestad, Erik; Hales, Russell; Lee, Junghoon
2017-03-01
Surrogate-based tumor motion estimation and tracking methods are commonly used in radiotherapy despite the lack of continuous real-time 3D tumor and surrogate data. In this study, we propose a method to simultaneously track the tumor and external surrogates with dynamic MRI, which allows us to evaluate their reproducible correlation. Four MRI-compatible fiducials are placed on the patient's chest and upper abdomen, and multi-slice 2D cine MRIs are acquired to capture the lung and whole tumor, followed by two-slice 2D cine MRIs to simultaneously track the tumor and fiducials, all in sagittal orientation. A phase-binned 4D-MRI is first reconstructed from multi-slice MR images using body area as a respiratory surrogate and group-wise registration. The 4D-MRI provides 3D template volumes for different breathing phases. 3D tumor position is calculated by 3D-2D template matching in which 3D tumor templates in the 4D-MRI reconstruction and the 2D cine MRIs from the two-slice tracking dataset are registered. 3D trajectories of the external surrogates are derived via matching a 3D geometrical model to the fiducial segmentations on the 2D cine MRIs. We tested our method on five lung cancer patients. The internal target volume from 4D-CT showed an average sensitivity of 86.5% compared to the actual tumor motion over 5 min. 3D tumor motion correlated with the external surrogate signal, but showed a noticeable phase mismatch. The 3D tumor trajectory showed significant cycle-to-cycle variation, while the external surrogate was not sensitive enough to capture such variations. Additionally, there was significant phase mismatch between surrogate signals obtained from fiducials at different locations.
Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm
NASA Astrophysics Data System (ADS)
Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter
2004-05-01
The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method generally performed much better in terms of computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would then be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
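Seeded region growing of the kind evaluated above can be sketched as a flood fill with an intensity tolerance relative to the seed. A minimal 2D sketch, assuming NumPy; the toy image and tolerance are invented, and a real implementation would add 3D connectivity options and adaptive criteria:

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tol):
    """Flood-fill style region growing: accept axis-neighbours whose
    intensity is within `tol` of the seed intensity."""
    seed_val = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        p = queue.popleft()
        for axis in range(volume.ndim):
            for step in (-1, 1):
                q = list(p)
                q[axis] += step
                q = tuple(q)
                if all(0 <= q[i] < volume.shape[i] for i in range(volume.ndim)) \
                        and not mask[q] and abs(volume[q] - seed_val) <= tol:
                    mask[q] = True
                    queue.append(q)
    return mask

# Toy 2D "dataset": a bright bone-like square on a dark background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 100.0
mask = region_grow(img, seed=(10, 10), tol=10.0)
```

The same code works unchanged on a 3D array, since neighbours are generated per axis.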
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans, and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema or age-related macular degeneration), which demonstrated its effectiveness.
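The composite-kernel idea, building one kernel per feature set and fusing them by a weighted sum before classification, can be sketched as follows. This sketch substitutes a simple mean-similarity (Parzen-style) classifier for the extreme learning machine, assumes NumPy, and uses synthetic toy feature sets; it illustrates the kernel fusion only, not the authors' pipeline:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between row-sample matrices X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(feats_X, feats_Y, gammas, weights):
    """Weighted sum of per-feature-set kernels: the 'composite kernel' idea."""
    return sum(w * rbf_kernel(fx, fy, g)
               for fx, fy, g, w in zip(feats_X, feats_Y, gammas, weights))

def parzen_predict(K_test_train, y_train):
    """Assign each sample to the class with the highest mean kernel similarity
    (a simple stand-in for the extreme learning machine)."""
    classes = np.unique(y_train)
    scores = np.stack([K_test_train[:, y_train == c].mean(1) for c in classes], 1)
    return classes[scores.argmax(1)]

# Toy data: two "feature sets" per sample (e.g. features from two B-scans),
# one informative and one pure noise. Prediction is on the training set,
# purely for illustration.
rng = np.random.default_rng(1)
y = np.array([0] * 20 + [1] * 20)
f1 = rng.normal(y[:, None] * 2.0, 0.5, (40, 3))
f2 = rng.normal(0.0, 1.0, (40, 3))
K = composite_kernel([f1, f2], [f1, f2], gammas=[0.5, 0.5], weights=[0.8, 0.2])
pred = parzen_predict(K, y)
acc = float((pred == y).mean())
```

A weighted sum of valid kernels is itself a valid kernel, which is why the fused matrix can be handed directly to any kernel classifier.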
NASA Astrophysics Data System (ADS)
Mandolesi, E.; Jones, A. G.; Roux, E.; Lebedev, S.
2009-12-01
Recently, several studies have examined the correlation between diverse geophysical datasets. Magnetotelluric (MT) data are used to map the electrical conductivity structure beneath the Earth's surface, but one problem with the MT method is its lack of resolution in mapping zones beneath a region of high conductivity. Joint inversion of different datasets in which a common structure is recognizable reduces non-uniqueness and may improve the quality of interpretation when the different datasets are sensitive to different physical properties that share an underlying common structure. A common structure is recognized if the changes in physical properties occur at the same spatial locations. Common structure may be recognized in 1D inversion of seismic and MT datasets, and numerous authors have shown that a 2D common structure may also lead to improved inversion quality when the datasets are jointly inverted. In this presentation, a tool to constrain 2D MT inversion with phase velocities of surface-wave (SW) seismic data is proposed; it is being developed and tested on synthetic data. The results obtained suggest that such a joint inversion scheme could be applied successfully along a profile for which the data are compatible with a 2D MT model.
Atlas Toolkit: Fast registration of 3D morphological datasets in the absence of landmarks
Grocott, Timothy; Thomas, Paul; Münsterberg, Andrea E.
2016-01-01
Image registration is a gateway technology for Developmental Systems Biology, enabling computational analysis of related datasets within a shared coordinate system. Many registration tools rely on landmarks to ensure that datasets are correctly aligned; yet suitable landmarks are not present in many datasets. Atlas Toolkit is a Fiji/ImageJ plugin collection offering elastic group-wise registration of 3D morphological datasets, guided by segmentation of the morphology of interest. We demonstrate the method by combinatorial mapping of cell signalling events in the developing eyes of chick embryos, and use the integrated datasets to predictively enumerate Gene Regulatory Network states. PMID:26864723
Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments.
Ionescu, Catalin; Papava, Dragos; Olaru, Vlad; Sminchisescu, Cristian
2014-07-01
We introduce a new dataset, Human3.6M, of 3.6 million accurate 3D human poses, acquired by recording the performance of 5 female and 6 male subjects from 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state of the art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time-of-flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed-reality evaluation scenarios where 3D human models are animated using motion capture and inserted, using correct 3D geometry, into complex real environments viewed with moving cameras and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset, illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher-capacity, more complex models with our large dataset is substantially greater and should stimulate future research. The dataset, together with code for the associated large-scale learning models, features, visualization tools, and the evaluation server, is available online at http://vision.imar.ro/human3.6m.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazelaar, Colien, E-mail: c.hazelaar@vumc.nl; Dahele, Max; Mostafavi, Hassan
Purpose: Spine stereotactic body radiation therapy (SBRT) requires highly accurate positioning. We report our experience with markerless template matching and triangulation of kilovoltage images routinely acquired during spine SBRT, to determine spine position. Methods and Materials: Kilovoltage images, continuously acquired at 7, 11 or 15 frames/s during volumetric modulated spine SBRT of 18 patients, comprising 93 fluoroscopy datasets (1 dataset/arc), were analyzed off-line. Four patients were immobilized in a head/neck mask; 14 had no immobilization. Two-dimensional (2D) templates were created for each gantry angle from planning computed tomography data and registered to prefiltered kilovoltage images to determine 2D shifts between actual and planned spine position. Registrations were considered valid if the normalized cross-correlation score was ≥0.15. Multiple registrations were triangulated to determine the 3D position. For each spine position dataset, the average positional offset and standard deviation were calculated. To verify the accuracy and precision of the technique, the mean positional offset and standard deviation for twenty stationary phantom datasets with different baseline shifts were measured. Results: For the phantom, average standard deviations were 0.18 mm for left-right (LR), 0.17 mm for superior-inferior (SI), and 0.23 mm for the anterior-posterior (AP) direction. The maximum difference between the average detected and applied shift was 0.09 mm. For the 93 clinical datasets, the percentage of validly matched frames was, on average, 90.7% (range: 49.9-96.1%) per dataset. Average standard deviations for all datasets were 0.28, 0.19, and 0.28 mm for LR, SI, and AP, respectively. Spine position offsets were, on average, −0.05 (range: −1.58 to 2.18), −0.04 (range: −3.56 to 0.82), and −0.03 mm (range: −1.16 to 1.51), respectively. The average positional deviation was <1 mm in all directions in 92% of the arcs.
Conclusions: Template matching and triangulation using kilovoltage images acquired during irradiation allows spine position detection with submillimeter accuracy at subsecond intervals. Although the majority of patients were not immobilized, most vertebrae were stable at the sub-mm level during spine SBRT delivery.
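The triangulation step, combining per-angle 2D registration offsets into a single 3D shift, can be posed as a linear least-squares problem: each kV view only observes the 3D offset projected onto its two detector axes. A minimal sketch assuming NumPy and a simplified, hypothetical imaging geometry (not the actual treatment-machine geometry):

```python
import numpy as np

def view_matrix(theta):
    """Rows: detector axes (orthogonal to the beam) for gantry angle theta.
    A kV image only sees the 3D shift projected onto these two axes.
    (Hypothetical geometry for illustration.)"""
    c, s = np.cos(theta), np.sin(theta)
    u = np.array([c, -s, 0.0])       # in-plane detector axis
    v = np.array([0.0, 0.0, 1.0])    # superior-inferior axis
    return np.vstack([u, v])

def triangulate(thetas, shifts_2d):
    """Least-squares 3D shift from per-angle 2D registration results."""
    A = np.vstack([view_matrix(t) for t in thetas])
    b = np.concatenate(shifts_2d)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

true_shift = np.array([1.2, -0.7, 0.4])   # mm; hypothetical LR/AP/SI offset
thetas = np.radians([0, 45, 90, 135])
shifts_2d = [view_matrix(t) @ true_shift for t in thetas]
est = triangulate(thetas, shifts_2d)
```

With registrations from several gantry angles the stacked system is overdetermined, so noisy or invalid single-view registrations are averaged out by the least-squares solve.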
Madrigal, Pedro
2017-03-01
Computational evaluation of variability across DNA or RNA sequencing datasets is a crucial step in genomic science, as it allows one both to evaluate the reproducibility of biological or technical replicates and to compare different datasets to identify their potential correlations. Here we present fCCAC, an application of functional canonical correlation analysis to assess covariance of nucleic acid sequencing datasets such as chromatin immunoprecipitation followed by deep sequencing (ChIP-seq). We show how this method differs from other measures of correlation, and exemplify how it can reveal shared covariance between histone modifications and DNA binding proteins, such as the relationship between the H3K4me3 chromatin mark and its epigenetic writers and readers. An R/Bioconductor package is available at http://bioconductor.org/packages/fCCAC/ .
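Classical (non-functional) canonical correlation analysis, the basis of fCCAC, finds maximally correlated linear combinations of two data blocks; the first canonical correlation equals the largest singular value between orthonormal bases of the centered blocks. A minimal sketch, assuming NumPy and synthetic toy data standing in for coverage profiles:

```python
import numpy as np

def first_canonical_correlation(X, Y, eps=1e-9):
    """First canonical correlation between column-feature matrices X and Y
    (classical CCA; fCCAC applies a functional variant to coverage profiles)."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    # Orthonormal bases of each block via economy SVD (acts as whitening).
    Ux, Sx, _ = np.linalg.svd(Xc, full_matrices=False)
    Uy, Sy, _ = np.linalg.svd(Yc, full_matrices=False)
    Ux = Ux[:, Sx > eps]
    Uy = Uy[:, Sy > eps]
    s = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return float(min(1.0, s[0]))

# Two blocks sharing one latent signal (e.g. a shared coverage pattern).
rng = np.random.default_rng(2)
shared = rng.normal(size=(200, 1))
X = np.hstack([shared + 0.1 * rng.normal(size=(200, 1)), rng.normal(size=(200, 2))])
Y = np.hstack([2 * shared + 0.1 * rng.normal(size=(200, 1)), rng.normal(size=(200, 2))])
rho = first_canonical_correlation(X, Y)
```

The singular values of `Ux.T @ Uy` are the cosines of the principal angles between the two column spaces, which is exactly the set of canonical correlations.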
bioWeb3D: an online webGL 3D data visualisation tool.
Pettit, Jean-Baptiste; Marioni, John C
2013-06-07
Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all the necessary functionalities to represent and manipulate biological 3D datasets, very few are browser-based, cross-platform, and accessible to non-expert users. An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js, written in Javascript, bioWeb3D allows the simultaneous visualisation of multiple large datasets inputted via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets.
REM-3D Reference Datasets: Reconciling large and diverse compilations of travel-time observations
NASA Astrophysics Data System (ADS)
Moulik, P.; Lekic, V.; Romanowicz, B. A.
2017-12-01
A three-dimensional Reference Earth model (REM-3D) should ideally represent the consensus view of long-wavelength heterogeneity in the Earth's mantle through the joint modeling of large and diverse seismological datasets. This requires reconciliation of datasets obtained using various methodologies and identification of consistent features. The goal of REM-3D datasets is to provide a quality-controlled and comprehensive set of seismic observations that would not only enable construction of REM-3D, but also allow identification of outliers and assist in more detailed studies of heterogeneity. The community response to data solicitation has been enthusiastic with several groups across the world contributing recent measurements of normal modes, (fundamental mode and overtone) surface waves, and body waves. We present results from ongoing work with body and surface wave datasets analyzed in consultation with a Reference Dataset Working Group. We have formulated procedures for reconciling travel-time datasets that include: (1) quality control for salvaging missing metadata; (2) identification of and reasons for discrepant measurements; (3) homogenization of coverage through the construction of summary rays; and (4) inversions of structure at various wavelengths to evaluate inter-dataset consistency. In consultation with the Reference Dataset Working Group, we retrieved the station and earthquake metadata in several legacy compilations and codified several guidelines that would facilitate easy storage and reproducibility. We find strong agreement between the dispersion measurements of fundamental-mode Rayleigh waves, particularly when made using supervised techniques. The agreement deteriorates substantially in surface-wave overtones, for which discrepancies vary with frequency and overtone number. A half-cycle band of discrepancies is attributed to reversed instrument polarities at a limited number of stations, which are not reflected in the instrument response history. 
By assessing inter-dataset consistency across similar paths, we quantify travel-time measurement errors for both surface and body waves. Finally, we discuss challenges associated with combining high-frequency (~1 Hz) and long-period (10-20 s) body-wave measurements into the REM-3D reference dataset.
NASA Astrophysics Data System (ADS)
Xiong, Qiufen; Hu, Jianglin
2013-05-01
The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines a 3D thin-plate spline scheme for the climatological mean and a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained with the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time-scales. The characteristics of the anomaly field tend to be only weakly related to elevation, so the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybrid interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N at 0.1° longitudinal and latitudinal resolution is obtained from daily Min/Max temperature data from three kinds of station observations (national reference climatological stations, basic meteorological observing stations and ordinary meteorological observing stations) in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C.
Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley with realistic, successive gridded data with 0.1° × 0.1° spatial resolution and daily temporal scale. The primary factors influencing the dataset precision are elevation and terrain complexity. In general, the gridded dataset has a relatively high precision in plains and flatlands and a relatively low precision in mountainous areas.
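The 2D Barnes scheme used above for the anomaly component is, in its single-pass form, a Gaussian-weighted average of station values at each grid node (operational versions add successive correction passes). A minimal sketch, assuming NumPy; the station layout, anomaly field, and length-scale parameter are invented for illustration:

```python
import numpy as np

def barnes(stations_xy, values, grid_xy, kappa):
    """Single-pass Barnes objective analysis: Gaussian-weighted average of
    station values at each grid node."""
    d2 = ((grid_xy[:, None, :] - stations_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / kappa)
    return (w * values).sum(1) / w.sum(1)

# Toy anomaly field that is linear in x, sampled at scattered "stations".
rng = np.random.default_rng(3)
stations = rng.uniform(0.0, 10.0, (100, 2))
vals = 0.5 * stations[:, 0]                 # anomaly = 0.5 * x
grid = np.array([[2.0, 5.0], [8.0, 5.0]])   # two grid nodes to analyze
est = barnes(stations, vals, grid, kappa=2.0)
```

The parameter `kappa` sets the smoothing length scale: larger values average over more stations and damp small-scale anomaly structure.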
Registration of 3D ultrasound computer tomography and MRI for evaluation of tissue correspondences
NASA Astrophysics Data System (ADS)
Hopp, T.; Dapp, R.; Zapf, M.; Kretzek, E.; Gemmeke, H.; Ruiter, N. V.
2015-03-01
3D Ultrasound Computer Tomography (USCT) is a new imaging method for breast cancer diagnosis. In the current state of development it is essential to correlate USCT with a known imaging modality like MRI to evaluate how different tissue types are depicted. Due to different imaging conditions, e.g. with the breast subject to buoyancy in USCT, a direct correlation is challenging. We present a 3D image registration method to reduce positioning differences and allow direct side-by-side comparison of USCT and MRI volumes. It is based on a two-step approach including a buoyancy simulation with a biomechanical model and free-form deformations using cubic B-splines for a surface refinement. Simulation parameters are optimized patient-specifically in a simulated annealing scheme. The method was evaluated with in-vivo datasets, resulting in an average registration error below 5 mm. Corresponding tissue structures can thereby be located in the same or nearby slices in both modalities, and three-dimensional non-linear deformations due to the buoyancy are reduced. Image fusion of MRI volumes and USCT sound speed volumes was performed for intuitive display. By applying the registration to data of our first in-vivo study with the KIT 3D USCT, we could correlate several tissue structures in MRI and USCT images and learn how connective tissue, carcinomas and breast implants observed in the MRI are depicted in the USCT imaging modes.
A fully automated non-external marker 4D-CT sorting algorithm using a serial cine scanning protocol.
Carnes, Greg; Gaede, Stewart; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim
2009-04-07
Current 4D-CT methods require external marker data to retrospectively sort image data and generate CT volumes. In this work we develop an automated 4D-CT sorting algorithm that performs without the aid of data collected from an external respiratory surrogate. The sorting algorithm requires an overlapping cine scan protocol, which provides a spatial link between couch positions. Beginning with a starting scan position, images from the adjacent scan position (which spatially match the starting scan position) are selected by maximizing the normalized cross correlation (NCC) of the images at the overlapping slice position. The process was continued by 'daisy chaining' all couch positions using the selected images until an entire 3D volume was produced. The algorithm produced 16 phase volumes to complete a 4D-CT dataset. Additional 4D-CT datasets were also produced using external marker amplitude and phase angle sorting methods. The image quality of the volumes produced by the different methods was quantified by calculating the mean difference of the sorted overlapping slices from adjacent couch positions. The NCC-sorted images showed a significant decrease in the mean difference (p < 0.01) for the five patients.
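The NCC daisy-chaining can be sketched as a greedy selection: at each couch position, pick the frame whose overlap slice best matches the frame already chosen at the previous position. A minimal sketch, assuming NumPy; the phase-encoded toy slices and data layout are invented for illustration:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized slices."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def daisy_chain(cine_stacks):
    """Pick one frame per couch position by maximizing NCC of the shared
    overlap slice with the frame chosen at the previous position.
    cine_stacks[k] is a list of (overlap_slice, frame_id) candidates."""
    chosen = [cine_stacks[0][0]]
    for candidates in cine_stacks[1:]:
        scores = [ncc(chosen[-1][0], cand[0]) for cand in candidates]
        chosen.append(candidates[int(np.argmax(scores))])
    return [frame_id for _, frame_id in chosen]

def phase_slice(phase):
    """Toy overlap slice whose blob position encodes the breathing phase."""
    img = np.zeros((8, 8))
    img[2 + phase, 2:6] = 1.0
    return img

# Three couch positions, each with candidate frames at phases 0..3;
# position 0 is seeded with the phase-2 frame.
stacks = [[(phase_slice(p), (k, p)) for p in range(4)] for k in range(3)]
stacks[0] = [(phase_slice(2), (0, 2))]
ids = daisy_chain(stacks)
```

Since the overlap slice images the same anatomy at both couch positions, frames from the same breathing phase score highest and the chain stays phase-consistent.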
3D object retrieval using salient views
Shapiro, Linda G.
2013-01-01
This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704
A framework for automatic creation of gold-standard rigid 3D-2D registration datasets.
Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga
2017-02-01
Advanced image-guided medical procedures incorporate 2D intra-interventional information into the pre-interventional 3D image and procedure plan through 3D/2D image registration (32R). To enter clinical use, and even for publication purposes, novel and existing 32R methods have to be rigorously validated. The performance of a 32R method can be estimated by comparing it to an accurate reference or gold standard method (usually based on fiducial markers) on the same set of images (a gold standard dataset). Objective validation and comparison of methods are possible only if the evaluation methodology is standardized and the gold standard dataset is made publicly available. Currently, very few such datasets exist, and only one contains images of multiple patients acquired during a procedure. To encourage the creation of gold standard 32R datasets, we propose an automatic framework. The framework is based on rigid registration of fiducial markers. The main novelty is the spatial grouping of fiducial markers on the carrier device, which enables automatic marker localization and identification across the 3D and 2D images. The proposed framework was demonstrated on clinical angiograms of 20 patients. Rigid 32R computed by the framework was more accurate than that obtained manually, with a target registration error below 0.027 mm, compared to 0.040 mm for manual registration. The framework is applicable for gold standard setup on any rigid anatomy, provided that the acquired images contain spatially grouped fiducial markers. The gold standard datasets and software will be made publicly available.
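The accuracy figures above are target registration errors (TRE) evaluated after a fiducial-based rigid registration. A minimal sketch of that machinery, least-squares rigid alignment (Kabsch) of marker sets followed by TRE at a target point, assuming NumPy; the marker layout, rotation, and noise level are hypothetical:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def target_registration_error(R, t, R_true, t_true, target):
    """TRE: distance between the target mapped by the estimated and the
    ground-truth transforms."""
    return float(np.linalg.norm((R @ target + t) - (R_true @ target + t_true)))

rng = np.random.default_rng(4)
fiducials = rng.uniform(-50.0, 50.0, (6, 3))   # marker positions, mm
theta = np.radians(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 2.0])
# Simulated marker localization noise of 0.01 mm per coordinate.
moved = fiducials @ R_true.T + t_true + 0.01 * rng.normal(size=(6, 3))
R_est, t_est = kabsch(fiducials, moved)
tre = target_registration_error(R_est, t_est, R_true, t_true,
                                np.array([10.0, 0.0, 0.0]))
```

TRE depends on where the target sits relative to the marker configuration, which is one reason the spatial grouping of markers on the carrier device matters for gold-standard quality.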
2013-01-01
Background In biomedical research, a huge variety of different techniques is currently available for the structural examination of small specimens, including conventional light microscopy (LM), transmission electron microscopy (TEM), confocal laser scanning microscopy (CLSM), microscopic X-ray computed tomography (microCT), and many others. Since every imaging method is physically limited by certain parameters, a correlative use of complementary methods often yields a significant broader range of information. Here we demonstrate the advantages of the correlative use of microCT, light microscopy, and transmission electron microscopy for the analysis of small biological samples. Results We used a small juvenile bivalve mollusc (Mytilus galloprovincialis, approximately 0.8 mm length) to demonstrate the workflow of a correlative examination by microCT, LM serial section analysis, and TEM-re-sectioning. Initially these three datasets were analyzed separately, and subsequently they were fused in one 3D scene. This workflow is very straightforward. The specimen was processed as usual for transmission electron microscopy including post-fixation in osmium tetroxide and embedding in epoxy resin. Subsequently it was imaged with microCT. Post-fixation in osmium tetroxide yielded sufficient X-ray contrast for microCT imaging, since the X-ray absorption of epoxy resin is low. Thereafter, the same specimen was serially sectioned for LM investigation. The serial section images were aligned and specific organ systems were reconstructed based on manual segmentation and surface rendering. According to the region of interest (ROI), specific LM sections were detached from the slides, re-mounted on resin blocks and re-sectioned (ultrathin) for TEM. For analysis, image data from the three different modalities was co-registered into a single 3D scene using the software AMIRA®. 
We were able to register both the LM section series volume and the TEM slices neatly to the microCT dataset, with small geometric deviations occurring only in the peripheral areas of the specimen. Based on the co-registered datasets, the excretory organs, which were chosen as the ROI for this study, could be investigated regarding both their ultrastructure and their position in the organism and spatial relationship to adjacent tissues. We found structures typical for mollusc excretory systems, including ultrafiltration sites at the pericardial wall, and ducts leading from the pericardium towards the kidneys, which exhibit a typical basal infolding system. Conclusions The presented approach allows a comprehensive analysis and presentation of small objects regarding both the overall organization as well as cellular and subcellular details. Although our protocol involves a variety of different equipment and procedures, we maintain that it offers savings in both effort and cost. Co-registration of datasets from different imaging modalities can be accomplished with high-end desktop computers and offers new opportunities for understanding and communicating structural relationships within organisms and tissues. In general, the correlative use of different microscopic imaging techniques will continue to become more widespread in morphological and structural research in zoology. Classical TEM serial section investigations are extremely time consuming, and modern methods for 3D analysis of ultrastructure such as SBF-SEM and FIB-SEM are limited to very small volumes for examination. Thus the re-sectioning of LM sections is suitable for speeding up TEM examination substantially, while microCT could become a key method for complementing ultrastructural examinations. PMID:23915384
Crustal Imaging of the Faroe Islands and North Sea Using Ambient Seismic Noise
NASA Astrophysics Data System (ADS)
Sammarco, C.; Rawlinson, N.; Cornwell, D. G.
2016-12-01
The recent development of ambient seismic noise imaging offers the potential for obtaining detailed seismic models of the crust. Cross-correlation of long-term recordings from station pairs reveals an empirical "Green's function" which is related to the impulse response of the medium between the two stations. Here, we present new results using two different broadband datasets: one that spans the Faroe Islands and another that spans the North Sea. The smaller scale Faroe Islands study was tackled first, because with only 12 stations, it was well suited for the development and testing of a new data processing and inversion workflow. In the Faroe Islands study cross-correlations with high signal-to-noise ratios were obtained by applying phase-weighted stacking, which is shown to be a significant improvement over conventional linear stacking. For example, coherent noise concentrated near the zero time lag of the linearly stacked cross-correlations appears to have an influence on the dispersion characteristics beyond 10 s period, but we have managed to minimize these effects with phase-weighted stacking. We obtain group velocity maps from 0.5 s to 15 s period by inverting inter-station travel times using an iterative non-linear inversion scheme. This reveals the presence of significant lateral heterogeneity in the mid-upper crust, including evidence of a low velocity zone in the upper crust, which may mark the base of the basalt layer. This is most clearly revealed by taking the average group velocity dispersion curve for all station pairs and inverting for 1-D shear wave velocity. The computation of a 3-D shear wave speed model both verifies and adds further detail to these results. Application to the North Sea dataset was challenging due to the highly attenuative nature of the crust in this region, which has previously been observed to dramatically reduce the signal-to-noise ratio of short period surface waves.
However, with the help of phase-weighted stacking, good-quality empirical Green's functions can be retrieved for this large dataset. Both group and phase velocity dispersion information are extracted from the cross-correlations, which are then inverted to produce period-dependent velocity maps. The next stage is to invert these maps for the 3-D shear wave velocity structure beneath the North Sea region.
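Phase-weighted stacking down-weights the linear stack wherever the instantaneous phases of the individual cross-correlations are incoherent. A minimal sketch in the style of that scheme, assuming NumPy (with an FFT-based analytic signal standing in for scipy.signal.hilbert); the toy traces are synthetic:

```python
import numpy as np

def analytic(x):
    """Analytic signal of a real trace via FFT (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_weighted_stack(traces, nu=2.0):
    """Linear stack modulated by instantaneous-phase coherence across traces."""
    traces = np.asarray(traces, float)
    phasors = []
    for tr in traces:
        a = analytic(tr)
        phasors.append(a / (np.abs(a) + 1e-12))   # unit instantaneous phasor
    coherence = np.abs(np.mean(phasors, 0)) ** nu
    return traces.mean(0) * coherence

# Toy cross-correlations: a coherent wavelet buried in incoherent noise.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 200)
wavelet = np.exp(-((t - 0.5) ** 2) / 0.002) * np.cos(2 * np.pi * 30 * (t - 0.5))
traces = [wavelet + rng.normal(0.0, 0.5, t.size) for _ in range(50)]
lin = np.mean(traces, 0)
pws = phase_weighted_stack(traces)
# Noise away from the wavelet should be suppressed more strongly in the PWS.
noise_lin = np.abs(lin[:60]).mean()
noise_pws = np.abs(pws[:60]).mean()
```

For N traces of incoherent noise the phase coherence scales roughly as N^(-1/2), so with nu = 2 the noise floor of the stack is reduced by about a factor of N relative to linear stacking.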
NASA Astrophysics Data System (ADS)
Pariser, O.; Calef, F.; Manning, E. M.; Ardulov, V.
2017-12-01
We will present the implementation and study of several use cases of Virtual Reality (VR) for immersive display, interaction and analysis of large and complex 3D datasets. These datasets have been acquired by instruments across several Earth, planetary and solar space robotics missions. First, we will describe the architecture of the common application framework that was developed to ingest data, interface with VR display devices and program input controllers in various computing environments. Tethered and portable VR technologies will be contrasted and the advantages of each highlighted. We will then present experimental immersive-analytics visual constructs that enable augmentation of 3D datasets with 2D ones such as images and statistical and abstract data. We will conclude by presenting a comparative analysis with traditional visualization applications and share the feedback provided by our users: scientists and engineers.
Inversion of quasi-3D DC resistivity imaging data using artificial neural networks
NASA Astrophysics Data System (ADS)
Neyamadpour, Ahmad; Wan Abdullah, W. A. T.; Taib, Samsudin
2010-02-01
The objective of this paper is to investigate the applicability of artificial neural networks to inverting quasi-3D DC resistivity imaging data. An electrical resistivity imaging survey was carried out along seven parallel lines using a dipole-dipole array to validate the results of an inversion using an artificial neural network technique. The model used to produce synthetic data to train the artificial neural network was a homogeneous medium of 100 Ωm resistivity with an embedded anomalous body of 1000 Ωm resistivity. The network was trained using 21 datasets (comprising 12159 data points) and tested on a further 11 synthetic datasets (comprising 6369 data points) and on real field data. Another 24 test datasets (comprising 13896 data points), consisting of different resistivities for the background and the anomalous bodies, were used to test the interpolation and extrapolation properties of the network. Different learning paradigms were tried in the training process of the neural network, with the resilient propagation paradigm being the most efficient. The number of nodes and hidden layers, and efficient values for the learning rate and momentum coefficient, were also studied. Although a significant correlation between the results of the neural network and the conventional robust inversion technique was found, the ANN results show more details of the subsurface structure, and the RMS misfits for the results of the neural network are smaller than those of conventional methods. The interpreted results show that the trained network was able to invert quasi-3D electrical resistivity imaging data obtained with a dipole-dipole configuration both rapidly and accurately.
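The core idea, training a network on synthetic forward-model outputs so that it learns the inverse mapping from measured data back to model parameters, can be sketched with a tiny one-hidden-layer network. This sketch uses plain gradient descent rather than the resilient propagation used in the paper, the one-parameter "sounding curve" forward model is entirely hypothetical, and only NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(6)

def forward(res):
    """Hypothetical forward model: an 8-point 'apparent resistivity' curve
    that depends on a single anomalous-body resistivity (stand-in for real
    2D/3D forward modelling)."""
    x = np.linspace(0.0, 1.0, 8)
    return 100.0 + (res - 100.0) * np.exp(-4.0 * x)

# Synthetic training set: curves (inputs) and body resistivities (targets),
# both scaled to order one for stable training.
train_res = rng.uniform(100.0, 1000.0, 200)
X = np.array([forward(r) for r in train_res]) / 1000.0
y = train_res[:, None] / 1000.0

# One-hidden-layer network trained by plain gradient descent.
W1 = rng.normal(0.0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
mse0 = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                        # dLoss/dpred (up to a factor of 2)
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1
mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

Once trained, the network inverts a new curve in a single forward pass, which is the speed advantage the paper reports over iterative inversion.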
bioWeb3D: an online webGL 3D data visualisation tool
2013-01-01
Background Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all the necessary functionality to represent and manipulate biological 3D datasets, very few packages are easily accessible (browser-based), cross-platform and usable by non-expert users. Results An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Built on the WebGL library Three.js, written in Javascript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via simple JSON, XML or CSV files, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets. PMID:23758781
NASA Astrophysics Data System (ADS)
Chang, Q.; Jiao, W.
2017-12-01
Phenology is a sensitive and critical feature of vegetation change that has been regarded as a good indicator in climate change studies. So far, a variety of remote sensing data sources and methods for extracting phenology from satellite datasets have been developed to study the spatio-temporal dynamics of vegetation phenology. However, the differences between vegetation phenology results caused by the various satellite datasets and phenology extraction methods are not clear, and the reliability of the different phenology results extracted from remote sensing datasets has not been verified and compared using ground observation data. Based on the three most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere for two long time series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods used in this research are the maximum increase method, the dynamic threshold method and the midpoint method. This study then used SOS calculated from NEE datasets (SOS_NEE) monitored at 48 eddy flux tower sites from the global flux website to validate the reliability of the six phenology results calculated from the remote sensing datasets. Results showed that both SOSg and SOS3g extracted by the maximum increase method are not correlated with ground-observed phenology metrics. SOSg and SOS3g extracted by the dynamic threshold method and the midpoint method are both significantly correlated with SOS_NEE. Compared with SOSg extracted by the dynamic threshold method, SOSg extracted by the midpoint method has a stronger correlation with SOS_NEE; the same holds for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset thus appears to be the most reliable result when validated against SOS_NEE. These results can be used as a reference for data and method selection in future phenology studies.
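Of the three extraction methods named above, the dynamic threshold and midpoint methods can be sketched in a few lines. The synthetic NDVI curve, its green-up date and the 20% threshold fraction below are illustrative assumptions, not values from the study.

```python
import numpy as np

def sos_midpoint(ndvi, doy):
    """First day of year the NDVI curve reaches the midpoint between its
    annual minimum and maximum (midpoint method)."""
    half = (ndvi.min() + ndvi.max()) / 2.0
    return doy[np.argmax(ndvi >= half)]

def sos_dynamic_threshold(ndvi, doy, frac=0.2):
    """First day NDVI exceeds a fraction `frac` of the seasonal amplitude
    above the annual minimum (dynamic threshold method)."""
    thr = ndvi.min() + frac * (ndvi.max() - ndvi.min())
    return doy[np.argmax(ndvi >= thr)]

# Synthetic annual NDVI curve sampled every 16 days (GIMMS-like composites),
# with green-up centred around day 130.
doy = np.arange(1, 366, 16)
ndvi = 0.2 + 0.5 / (1 + np.exp(-(doy - 130) / 15.0))

print(sos_dynamic_threshold(ndvi, doy), sos_midpoint(ndvi, doy))
```

The dynamic threshold fires earlier on the rising limb than the midpoint method, which is one source of the systematic differences between SOS products compared in the study.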
NASA Astrophysics Data System (ADS)
Barnoud, Anne; Coutant, Olivier; Bouligand, Claire; Massin, Frédérick; Stehly, Laurent
2015-04-01
We image the volcanic island of Basse-Terre, Guadeloupe, Lesser Antilles, using both earthquake travel times and noise correlations. (1) A new earthquake catalog was recently compiled for the Lesser Antilles by the CDSA/OVSG/IPGP (Massin et al., EGU General Assembly 2014) and allows us to perform classical travel time tomography to obtain smooth 3D body wave velocity models. The geometrical configuration of the volcanic arc controls the resolution of the model in our zone of interest. (2) Surface wave tomography using noise correlations was successfully applied to volcanoes (Brenguier et al., Geophys. Res. Lett. 2007). We use seismic noise recorded at 16 broad-band stations and 9 short-period stations on Basse-Terre over a period of six years (2007-2012). For each station pair, we extract a dispersion curve from the noise correlation to get surface wave velocity models. The inversion of the dispersion curves produces a 3D S-wave velocity model of the island. The spatial distribution of seismic stations across the island is highly heterogeneous, leading to higher resolution near the dome of the Soufrière of Guadeloupe volcano. The resulting velocity models are compared with densities obtained by 3D inversion of gravimetric data (Barnoud et al., AGU Fall Meeting 2013). Further work should include simultaneous inversion of the seismic and gravimetric datasets to overcome resolution limitations.
Abdullah, Kamarul A; McEntee, Mark F; Reed, Warren; Kench, Peter L
2018-04-30
An ideal organ-specific insert phantom should be able to simulate anatomical features with appropriate appearances in the resultant computed tomography (CT) images. This study investigated a 3D printing technology to develop a novel and cost-effective cardiac insert phantom derived from volumetric CT image datasets of an anthropomorphic chest phantom. Cardiac insert volumes were segmented from CT image datasets derived from an anthropomorphic chest phantom, the Lungman N-01 (Kyoto Kagaku, Japan). These segmented datasets were converted to a virtual 3D isosurface of a heart-shaped shell, and two other removable inserts were added using a computer-aided design (CAD) software program. This newly designed cardiac insert phantom was then printed via a fused deposition modelling (FDM) process on a Creatbot DM Plus 3D printer. Several selected filling materials, such as contrast media, oil, water and jelly, were then loaded into designated spaces in the 3D-printed phantom. The 3D-printed cardiac insert phantom was positioned within the anthropomorphic chest phantom and 30 repeated CT acquisitions were performed using a multi-detector scanner at 120-kVp tube potential. Attenuation (Hounsfield unit, HU) values were measured and compared to image datasets of a real patient and the Catphan® 500 phantom. The output of the 3D-printed cardiac insert phantom was a solid acrylic plastic material, which was strong, light in weight and cost-effective. HU values of the filling materials were comparable to the image datasets of the real patient and the Catphan® 500 phantom. A novel and cost-effective cardiac insert phantom for an anthropomorphic chest phantom was developed from volumetric CT image datasets with a 3D printer, suggesting that this printing methodology could be applied to generate other phantoms for CT imaging studies. © 2018 The Authors.
Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
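The HU comparison described above amounts to averaging Hounsfield values inside a region of interest over repeated acquisitions. A minimal sketch, with synthetic images whose HU levels and noise are assumptions rather than measured values:

```python
import numpy as np

rng = np.random.default_rng(42)

def roi_mean_hu(image, cx, cy, r):
    """Mean Hounsfield value inside a circular ROI of radius r pixels."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return float(image[mask].mean())

# Simulate 30 repeated acquisitions of a water-like filling (~0 HU) inside
# an acrylic background (~120 HU); the noise level (SD 10 HU) is an assumption.
means = []
for _ in range(30):
    img = np.full((64, 64), 120.0) + rng.normal(0, 10, (64, 64))
    yy, xx = np.ogrid[:64, :64]
    insert = (xx - 32) ** 2 + (yy - 32) ** 2 <= 10 ** 2
    img[insert] = rng.normal(0, 10, insert.sum())
    means.append(roi_mean_hu(img, 32, 32, 8))

mean_hu, sd_hu = float(np.mean(means)), float(np.std(means))
```

Comparing such ROI statistics across the phantom, a real patient and a calibration phantom is the kind of agreement check reported in the abstract.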
Validating Variational Bayes Linear Regression Method With Multi-Central Datasets.
Murata, Hiroshi; Zangwill, Linda M; Fujino, Yuri; Matsuura, Masato; Miki, Atsuya; Hirasawa, Kazunori; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Shoji, Nobuyuki; Asaoka, Ryo
2018-04-01
To validate the prediction accuracy of variational Bayes linear regression (VBLR) with two datasets external to the training dataset. The training dataset consisted of 7268 eyes of 4278 subjects from the University of Tokyo Hospital. The Japanese Archive of Multicentral Databases in Glaucoma (JAMDIG) dataset consisted of 271 eyes of 177 patients, and the Diagnostic Innovations in Glaucoma Study (DIGS) dataset of 248 eyes of 173 patients; both were used for validation. Prediction accuracy was compared between VBLR and ordinary least squares linear regression (OLSLR). First, OLSLR and VBLR were carried out using total deviation (TD) values at each of the 52 test points from the second to fourth visual fields (VFs) (VF2-4) up to the second to tenth VFs (VF2-10) of each patient in the JAMDIG and DIGS datasets, and the TD values of the 11th VF test were predicted each time. The predictive accuracy of each method was compared through the root mean squared error (RMSE) statistic. OLSLR RMSEs with the JAMDIG and DIGS datasets were between 31 and 4.3 dB, and between 19.5 and 3.9 dB. On the other hand, VBLR RMSEs with the JAMDIG and DIGS datasets were between 5.0 and 3.7 dB, and between 4.6 and 3.6 dB. There was a statistically significant difference between VBLR and OLSLR for both datasets at every series (VF2-4 to VF2-10) (P < 0.01 for all tests). However, there was no statistically significant difference in VBLR RMSEs between the JAMDIG and DIGS datasets at any series of VFs (P > 0.05). VBLR outperformed OLSLR in predicting future VF progression, and has the potential to be a helpful tool in clinical settings.
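The OLSLR baseline described above fits a straight line per test point over the visit series and extrapolates it to the next visit, scoring the prediction with RMSE. The sketch below uses synthetic total-deviation series; the progression rates and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_next_ols(td_series, times, t_next):
    """Fit TD ~ time by ordinary least squares for each test point, then
    extrapolate each fit to the next visit time."""
    preds = []
    for point in td_series.T:                  # one series per test point
        slope, intercept = np.polyfit(times, point, 1)
        preds.append(slope * t_next + intercept)
    return np.array(preds)

# Synthetic series: visits 2..10 of 52 total-deviation (TD) values each;
# per-point progression rates and measurement noise are assumptions.
times = np.arange(2, 11, dtype=float)
true_slope = rng.normal(-0.3, 0.1, 52)
td = true_slope[None, :] * times[:, None] + rng.normal(0, 1.0, (9, 52))

pred = predict_next_ols(td[:-1], times[:-1], times[-1])
rmse = float(np.sqrt(np.mean((pred - td[-1]) ** 2)))
```

Loosely, VBLR replaces each independent least-squares fit with a Bayesian estimate that regularizes noisy slopes, which is why its RMSE stays bounded on short series.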
Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein
2017-11-01
We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for OCT image classification. We tested the proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
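The composite-kernel idea can be illustrated compactly: compute one kernel per feature set, fuse them with a weighted sum, and plug the fused kernel into a kernel-based classifier. The sketch below uses toy data and a kernel ridge-style closed form in place of the paper's trained system; the kernel weight, bandwidth and regularization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-wise sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(Xa, Xb, Ya, Yb, mu=0.5):
    """Weighted-sum fusion of two per-feature-set kernels (mu is an assumption)."""
    return mu * rbf(Xa, Ya) + (1 - mu) * rbf(Xb, Yb)

# Toy two-class data with two feature sets per sample (standing in for
# features pooled from different B-scans of a 3-D OCT volume).
n = 40
Xa = np.vstack([rng.normal(0, 1, (n, 3)), rng.normal(2, 1, (n, 3))])
Xb = np.vstack([rng.normal(0, 1, (n, 3)), rng.normal(2, 1, (n, 3))])
y = np.array([-1.0] * n + [1.0] * n)

# Kernel classifier with the closed form f = K (I/C + K)^(-1) y, shared by
# kernel ridge regression and kernelized extreme learning machines.
K = composite_kernel(Xa, Xb, Xa, Xb)
alpha = np.linalg.solve(np.eye(2 * n) + K, y)    # C = 1
train_acc = float((np.sign(K @ alpha) == y).mean())
```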
Fayad, Hadi; Pan, Tinsu; Clément, Jean-François; Visvikis, Dimitris
2011-01-01
Purpose Current respiratory motion monitoring devices used for motion synchronization in medical imaging and radiotherapy provide either 1D respiratory signals over a specific region or 3D information based on a few external or internal markers. On the other hand, newer technology may offer the potential to monitor the entire external patient surface in real time. The main objective of this study was to assess the motion correlation between such an external patient surface and the motion of internal anatomical landmarks. Methods Four-dimensional computed tomography (4D CT) volumes of ten patients were used in this study. Anatomical landmarks were manually selected in the thoracic region across the 4D CT datasets by two experts. The landmarks included normal structures as well as the tumour location. In addition, a distance map representing the entire external patient surface, corresponding to surfaces acquired by a Time of Flight (ToF) camera or similar devices, was created by segmenting the skin in all 4D CT volumes using a thresholding algorithm. Finally, the correlation between the internal landmarks and external surface motion was evaluated for regions of different placement and size throughout a patient's surface. Results Significant variability was observed in the motion of the different parts of the external patient surface. The largest motion magnitudes were consistently measured in the central regions of the abdominal and thoracic areas for the different patient datasets considered. The highest correlation coefficients were observed between the motion of these external surface areas and internal landmarks such as the diaphragm and mediastinum structures as well as the tumour location landmarks (0.8 ± 0.18 and 0.72 ± 0.12 for the abdominal and thoracic regions, respectively). Weaker correlation was observed for landmarks not significantly influenced by respiratory motion, such as the apex and the sternum.
Discussion and conclusions There were large differences in the motion correlation observed between different regions of interest placed over a patient's external surface and internal anatomical landmarks. The positioning of current devices used for respiratory motion synchronization may reduce such correlation by averaging the motion over well correlated and poorly correlated external regions. The ability to capture in real time the motion of the complete external patient surface, and to choose the area of the surface that correlates best with the internal motion, should help reduce such variability and the associated errors in both respiratory motion synchronization and subsequent motion modeling processes. PMID:21815390
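The correlation analysis above reduces to computing Pearson coefficients between motion traces sampled at the 4D CT phases. A minimal sketch with synthetic sinusoidal traces; the amplitudes and phase lag are assumptions, not patient measurements.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient of two 1D motion traces."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Ten respiratory phases of a 4D CT: synthetic superior-inferior motion (mm).
phases = np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False)
diaphragm = 8.0 * np.sin(phases)              # strongly breathing-driven
abdominal_surface = 5.0 * np.sin(phases) + 0.3
sternum = 0.5 * np.sin(phases + 1.2)          # phase-shifted, weakly coupled

r_good = pearson(abdominal_surface, diaphragm)   # same driving signal
r_poor = pearson(abdominal_surface, sternum)     # degraded by the phase lag
```

Note that Pearson correlation is amplitude-invariant: the sternum's poor score here comes from its phase lag, not its small motion amplitude.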
A novel binary shape context for 3D local surface description
NASA Astrophysics Data System (ADS)
Dong, Zhen; Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Li, Bijun; Zang, Yufu
2017-08-01
3D local surface description is now at the core of many computer vision technologies, such as 3D object recognition, intelligent driving, and 3D model reconstruction. However, most existing 3D feature descriptors still suffer from low descriptiveness, weak robustness, and inefficiency in both time and memory. To overcome these challenges, this paper presents a robust and descriptive 3D Binary Shape Context (BSC) descriptor with high efficiency in both time and memory. First, a novel BSC descriptor is generated for 3D local surface description, and the performance of the BSC descriptor under different settings of its parameters is analyzed. Next, the descriptiveness, robustness, and time and memory efficiency of the BSC descriptor are evaluated and compared to those of several state-of-the-art 3D feature descriptors. Finally, the performance of the BSC descriptor for 3D object recognition is evaluated on a number of popular benchmark datasets and on an urban-scene dataset collected by a terrestrial laser scanner system. Comprehensive experiments demonstrate that the proposed BSC descriptor achieves high descriptiveness, strong robustness, and high efficiency in both time and memory, with recognition rates of 94.8%, 94.1% and 82.1% on the UWA, Queen, and WHU datasets, respectively.
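The practical payoff of a binary descriptor such as BSC is compact storage and fast Hamming-distance matching. The sketch below matches packed random bit strings standing in for real BSC descriptors; the descriptor length and corruption model are assumptions, and the BSC construction itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def hamming(a, b):
    """Hamming distance between packed uint8 descriptors (XOR + popcount)."""
    return int(np.unpackbits(a ^ b).sum())

# 100 scene descriptors of 256 bits each, stored packed as 32 bytes;
# random bit strings stand in for real BSC descriptors.
scene = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)

query = scene[42].copy()
query[0] ^= 0b00000101          # corrupt two bits to mimic measurement noise

dists = [hamming(query, d) for d in scene]
best = int(np.argmin(dists))
```

Binary descriptors keep matching cheap: 32 bytes per feature and an XOR-popcount per comparison, versus floating-point distance computations for real-valued descriptors.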
Oechsner, Markus; Odersky, Leonhard; Berndt, Johannes; Combs, Stephanie Elisabeth; Wilkens, Jan Jakob; Duma, Marciana Nona
2015-12-01
The purpose of this study was to assess the impact on dose to the planning target volume (PTV) and organs at risk (OAR) of using four differently generated CT datasets for dose calculation in stereotactic body radiotherapy (SBRT) of lung and liver tumors. Additionally, dose differences between 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans calculated on these CT datasets were determined. Twenty SBRT patients, ten lung cases and ten liver cases, were retrospectively selected for this study. Treatment plans were optimized on average intensity projection (AIP) CTs using 3D-CRT and VMAT. Afterwards, the plans were copied to the planning CT (PCT), maximum intensity projection (MIP) and mid-ventilation (MidV) CT datasets, and the dose was recalculated keeping all beam parameters and monitor units unchanged. Ipsilateral lung and liver volumes and dosimetric parameters for the PTV (Dmean, D2, D98, D95) and for the ipsilateral lung and liver (Dmean, V30, V20, V10) were determined and statistically analysed using the Wilcoxon test. Significant but small mean differences were found for PTV dose between the CTs (lung SBRT: ≤2.5 %; liver SBRT: ≤1.6 %). MIPs yielded the smallest lung and the largest liver volumes. OAR mean doses in MIP plans were distinctly smaller than in the other CT datasets. Furthermore, overlap of tumors with the diaphragm resulted in an underestimated ipsilateral lung dose in MIP plans. The best agreement was found between AIP and MidV (lung SBRT). Overall, differences in liver SBRT were smaller than in lung SBRT, and VMAT plans showed slightly smaller differences than 3D-CRT plans. Only small differences were found for PTV parameters between the four CT datasets. Larger differences occurred for the doses to organs at risk (ipsilateral lung, liver), especially for MIP plans. No relevant differences were observed between 3D-CRT and VMAT plans.
MIP CTs are not appropriate for OAR dose assessment. PCT, AIP and MidV resulted in similar doses. If a 4DCT is acquired PCT can be omitted using AIP or MidV for treatment planning.
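The statistical comparison above uses the paired Wilcoxon (signed-rank) test. A minimal implementation of the test statistic (no p-value, simple tie handling) on synthetic paired Dmean values, which are illustrative assumptions rather than study data:

```python
import numpy as np

def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples.
    Zero differences are dropped and tied absolute differences get average
    ranks; no p-value is computed."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]
    ranks = np.empty(len(d))
    ranks[np.abs(d).argsort()] = np.arange(1, len(d) + 1)
    for v in np.unique(np.abs(d)):          # average ranks over ties
        tie = np.abs(d) == v
        ranks[tie] = ranks[tie].mean()
    return float(min(ranks[d > 0].sum(), ranks[d < 0].sum()))

# Synthetic paired PTV Dmean values (Gy) on two CT datasets.
aip = np.array([54.1, 53.8, 55.0, 54.6, 53.9, 54.3, 54.8, 54.2])
mip = np.array([53.5, 53.9, 54.2, 54.0, 53.6, 54.1, 54.3, 53.8])
w = wilcoxon_w(aip, mip)
```

In practice one would use scipy.stats.wilcoxon, which also returns the p-value; a small W relative to its null distribution indicates a systematic paired difference.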
Facts and Misconceptions about 2D:4D, Social and Risk Preferences
Alonso, Judit; Di Paolo, Roberto; Ponti, Giovanni; Sartarelli, Marcello
2018-01-01
We study how the ratio between the length of the second and fourth digit (2D:4D) correlates with choices in social and risk preferences elicitation tasks by building a large dataset from five experimental projects with more than 800 subjects. Our results confirm the recent literature that downplays the link between 2D:4D and many domains of economic interest, such as social and risk preferences. As for the former, we find that social preferences are significantly lower when 2D:4D is above the median value only for subjects with low cognitive ability. As for the latter, we find that a high 2D:4D is not correlated with the frequency of subjects' risky choices. PMID:29487510
Reflectance, illumination, and appearance in color constancy
McCann, John J.; Parraman, Carinna; Rizzi, Alessandro
2013-01-01
We studied color constancy using a pair of identical 3-D Color Mondrian displays. We viewed one 3-D Mondrian in nearly uniform illumination, and the other in directional, nonuniform illumination. We used the three-dimensional structures to modulate the light falling on the painted surfaces. The 3-D structures in the displays were a matching set of wooden blocks. Across the Mondrian displays, each corresponding facet had the same paint on its surface. We used only 6 chromatic and 5 achromatic paints, applied to 104 block facets. The 3-D blocks add shadows and multiple reflections not found in flat Mondrians. Both 3-D Mondrians were viewed simultaneously, side by side. We used two techniques to measure the correlation of appearance with surface reflectance. First, observers made magnitude estimates of changes in the appearances of identical reflectances. Second, an author painted a watercolor of the 3-D Mondrians; the watercolor's reflectances quantified the changes in appearances. While constancy generalizations about illumination and reflectance hold for flat Mondrians, they do not for 3-D Mondrians. A constant paint does not exhibit perfect color constancy, but rather shows significant shifts in lightness, hue and chroma in response to the structure of the nonuniform illumination. Color appearance depends on the spatial information in both the illumination and the reflectances of objects. The spatial information of the quanta catch from the array of retinal receptors generates sensations that have variable correlation with surface reflectance. Models of appearance in humans need to calculate the departures from perfect constancy measured here. This article provides a dataset of measurements of color appearances for computational models of sensation. PMID:24478738
NASA Technical Reports Server (NTRS)
Salamuniccar, Goran; Loncaric, Sven; Mazarico, Erwan Matias
2012-01-01
For Mars, 57,633 craters from the manually assembled catalogues and 72,668 additional craters identified using several crater detection algorithms (CDAs) have been merged into the MA130301GT catalogue. By contrast, for the Moon the most complete previous catalogue contains only 14,923 craters. Two recent missions provided higher-quality digital elevation maps (DEMs): SELENE (in 1/16° resolution) and Lunar Reconnaissance Orbiter (we used up to 1/512°). This was the main motivation for work on the new Crater Shape-based interpolation module, which improves the previous CDA as follows: (1) it decreases the number of false detections for the required number of true detections; (2) it improves detection capabilities for very small craters; and (3) it provides more accurate automated measurements of crater properties. The results are: (1) LU60645GT, which is currently the most complete (up to D>=8 km) catalogue of Lunar craters; and (2) the MA132843GT catalogue of Martian craters, complete up to D>=2 km, which is an extension of the previous MA130301GT catalogue. As previously achieved for Mars, LU60645GT provides all the properties that were provided by the previous Lunar catalogues, plus: (1) correlation between morphological descriptors from the catalogues used; (2) correlation between manually assigned attributes and automated measurements; (3) average errors and their standard deviations for manually and automatically assigned attributes such as position coordinates, diameter, depth/diameter ratio, etc.; and (4) a review of the positional accuracy of the datasets used. Additionally, surface dating could potentially be improved with the exhaustiveness of this new catalogue. The accompanying results are: (1) the possibility of comparing a large number of Lunar and Martian craters in terms of, e.g., depth/diameter ratio and 2D profiles; (2) a method for re-projection of datasets and catalogues, which is very useful for craters very close to the poles; and (3) the extension of the previous framework for evaluation of CDAs with datasets and a ground-truth catalogue for the Moon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Benthem, Mark H.
2016-05-04
This software is employed for 3D visualization of X-ray diffraction (XRD) data, with functionality for slicing, reorienting, isolating and plotting of 2D color contour maps and 3D renderings of large datasets. The program makes use of the multidimensionality of textured XRD data, where diffracted intensity is not constant over a given set of angular positions (as dictated by the three defined dimensional angles phi, chi, and two-theta). Datasets are rendered in 3D with intensity as a scalar, represented on a rainbow color scale. A GUI interface and scrolling tools, along with interactive functions via the mouse, allow for fast manipulation of these large datasets so as to perform detailed analysis of diffraction results with full dimensionality of the diffraction space.
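The slicing and 2D contour-map functionality described above can be sketched with plain array operations: isolate a two-theta window from a (phi, chi, two-theta) intensity grid and collapse it to a 2D map. The synthetic single-peak texture below is an assumption standing in for real diffraction data, and plotting is omitted.

```python
import numpy as np

# Textured XRD intensities on a (phi, chi, two_theta) grid; one synthetic
# Gaussian "texture pole" stands in for real diffraction data.
phi, chi, tth = np.meshgrid(
    np.linspace(0, 360, 36, endpoint=False),   # phi, 10-degree steps
    np.linspace(0, 90, 19),                    # chi, 5-degree steps
    np.linspace(20, 80, 60),                   # two-theta
    indexing="ij",
)
intensity = np.exp(-(((phi - 180) / 40) ** 2
                     + ((chi - 45) / 15) ** 2
                     + ((tth - 43) / 2) ** 2))

# Isolate a two-theta window around a reflection and collapse it into a
# 2D (phi, chi) map, as the viewer's slicing tools would before plotting.
window = (tth[0, 0] >= 42) & (tth[0, 0] <= 44)
pole_map = intensity[:, :, window].sum(axis=2)
peak = np.unravel_index(pole_map.argmax(), pole_map.shape)
```

Rendering pole_map with a rainbow colormap reproduces the 2D colour contour maps the program produces; the 3D rendering treats the full array the same way, with intensity as the scalar.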
Multi-azimuth 3D Seismic Exploration and Processing in the Jeju Basin, the Northern East China Sea
NASA Astrophysics Data System (ADS)
Yoon, Youngho; Kang, Moohee; Kim, Jin-Ho; Kim, Kyong-O.
2015-04-01
Multi-azimuth (MAZ) 3D seismic exploration is one of the most advanced seismic survey methods for improving illumination and multiple attenuation for a better image of subsurface structures. 3D multi-channel seismic data were collected in two phases during 2012, 2013, and 2014 in the Jeju Basin, the northern part of the East China Sea Basin, where several oil and gas fields have been discovered. Phase 1 data, acquired at 135° and 315° azimuths in 2012 and 2013, comprised a full 3D marine seismic coverage of 160 km2. In 2014, phase 2 data were acquired at azimuths of 45° and 225°, perpendicular to those of phase 1. These two datasets were processed through the same processing workflow prior to velocity analysis and merged into one MAZ dataset. We performed velocity analysis on the MAZ dataset as well as on the two phases individually and then stacked these three datasets separately. We were able to pick more accurate velocities in the MAZ dataset compared to the phase 1 and 2 data. Consequently, the MAZ seismic volume provides better resolution and improved images, since the different shooting directions illuminate different parts of the structures and stratigraphic features.
Han, Seung Seog; Park, Gyeong Hun; Lim, Woohyung; Kim, Myoung Shin; Na, Jung Im; Park, Ilwoo; Chang, Sung Eun
2018-01-01
Although there have been reports of the successful diagnosis of skin disorders using deep learning, unrealistically large clinical image datasets are required for artificial intelligence (AI) training. We created datasets of standardized nail images using a region-based convolutional neural network (R-CNN) trained to distinguish the nail from the background. We used the R-CNN to generate training datasets of 49,567 images, which we then used to fine-tune the ResNet-152 and VGG-19 models. The validation datasets comprised 100 and 194 images from Inje University (B1 and B2 datasets, respectively), 125 images from Hallym University (C dataset), and 939 images from Seoul National University (D dataset). The AI (ensemble model: ResNet-152 + VGG-19 + feedforward neural networks) showed test sensitivity/specificity/area under the curve values of (96.0 / 94.7 / 0.98), (82.7 / 96.7 / 0.95), (92.3 / 79.3 / 0.93), and (87.7 / 69.3 / 0.82) for the B1, B2, C, and D datasets, respectively. With a combination of the B1 and C datasets, the AI's Youden index was significantly (p = 0.01) higher than that of 42 dermatologists performing the same assessment manually. For the B1+C and B2+D dataset combinations, almost none of the dermatologists performed as well as the AI. By training with a dataset comprising 49,567 images, we achieved a diagnostic accuracy for onychomycosis using deep learning that was superior to that of most of the dermatologists who participated in this study.
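Two quantities above are easy to make concrete: the ensemble prediction (averaging per-model probabilities) and the Youden index J = sensitivity + specificity - 1 used to compare the AI with the dermatologists. The model scores below are random stand-ins for the real CNN outputs, not reproductions of them.

```python
import numpy as np

def youden_index(y_true, y_pred):
    """J = sensitivity + specificity - 1 for binary labels and predictions."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    sens = (y_pred & y_true).sum() / y_true.sum()
    spec = (~y_pred & ~y_true).sum() / (~y_true).sum()
    return float(sens + spec - 1.0)

# Ensemble as in the abstract: average per-model probabilities, then
# threshold. Scores are synthetic stand-ins for the real CNN outputs.
rng = np.random.default_rng(9)
y_true = rng.integers(0, 2, 200).astype(bool)
p_resnet = np.clip(y_true + rng.normal(0, 0.35, 200), 0, 1)
p_vgg = np.clip(y_true + rng.normal(0, 0.35, 200), 0, 1)
p_ensemble = (p_resnet + p_vgg) / 2
j = youden_index(y_true, p_ensemble >= 0.5)
```

J ranges from -1 to 1; a perfect classifier scores 1, and chance-level performance scores 0, which makes it a convenient single number for the AI-versus-clinician comparison.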
3D Analysis of Human Embryos and Fetuses Using Digitized Datasets From the Kyoto Collection.
Takakuwa, Tetsuya
2018-06-01
Three-dimensional (3D) analysis of the human embryonic and early-fetal period has been performed using digitized datasets obtained from the Kyoto Collection, in which the digital datasets play a primary role in research. Datasets include magnetic resonance imaging (MRI) acquired with 1.5 T, 2.35 T, and 7 T magnet systems, phase-contrast X-ray computed tomography (CT), and digitized histological serial sections. Large, high-resolution datasets covering a broad range of developmental periods, obtained with various acquisition methods, are key elements of these studies. The digital data have great merits that enabled us to develop various analyses. Digital data analysis accelerated morphological observation with precise and improved methods, for example by providing a suitable plane for morphometric analysis of staged human embryos. Morphometric data are useful for quantitatively evaluating and demonstrating the features of development and for screening abnormal samples, which may be suggestive in the pathogenesis of congenital malformations. Morphometric data are also valuable for comparison with sonographic data in a process known as "sonoembryology." The 3D coordinates of anatomical landmarks may be useful tools for analyzing the positional change of landmarks of interest and their relationships during development. Several dynamic events could be explained by differential growth using 3D coordinates. Moreover, 3D coordinates can be utilized in mathematical as well as statistical analysis. The 3D analysis in our study may serve to provide accurate morphologic data, including the dynamics of embryonic structures related to developmental stages, which is required for insight into the dynamic and complex processes occurring during organogenesis. Anat Rec, 301:960-969, 2018. © 2018 Wiley Periodicals, Inc.
Thali, Michael J; Taubenreuther, Ulrike; Karolczak, Marek; Braun, Marcel; Brueschweiler, Walter; Kalender, Willi A; Dirnhofer, Richard
2003-11-01
When a knife is stabbed into bone, it leaves an impression in the bone. The characteristics of this impression (shape, size, etc.) may indicate the type of tool used to produce the patterned injury. Until now it has been impossible in the forensic sciences to document such damage precisely and non-destructively. Micro-computed tomography (Micro-CT) offers an opportunity to analyze patterned injuries of tool marks made in bone. Using high-resolution Micro-CT and computer software, detailed analysis of three-dimensional (3D) architecture has recently become feasible and allows microstructural 3D bone information to be collected. With adequate viewing software, data for a 2D slice of an arbitrary plane can be extracted from 3D datasets. Using such software as a "digital virtual knife," the examiner can interactively section and analyze the 3D sample. Analysis of the bone injury revealed that Micro-CT provides an opportunity to correlate a bone injury to the injury-causing instrument. Even broken knife tips can be graphically and non-destructively assigned to a suspect weapon.
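The "digital virtual knife" described above is, computationally, the extraction of a 2D slice along an arbitrary plane through a 3D volume. A minimal sketch with a synthetic volume and nearest-neighbour resampling; the plane orientation and volume contents are assumptions.

```python
import numpy as np

# Synthetic Micro-CT volume: a bright spherical "bone" in empty space.
z, y, x = np.mgrid[:64, :64, :64]
volume = (((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) <= 20 ** 2).astype(float)

# Axis-aligned slice: plain indexing.
axial = volume[32]                                # z = 32 plane

def oblique_slice(vol, origin, u, v, size=64):
    """Nearest-neighbour resampling of vol on the plane origin + s*u + t*v,
    with u and v orthonormal in-plane direction vectors."""
    s, t = np.mgrid[:size, :size] - size // 2
    pts = origin + s[..., None] * u + t[..., None] * v
    idx = np.clip(np.rint(pts).astype(int), 0, np.array(vol.shape) - 1)
    return vol[idx[..., 0], idx[..., 1], idx[..., 2]]

# A 45-degree cut through the centre; the orientation is arbitrary here.
u = np.array([np.sqrt(0.5), np.sqrt(0.5), 0.0])
v = np.array([0.0, 0.0, 1.0])
cut = oblique_slice(volume, np.array([32.0, 32.0, 32.0]), u, v)
```

A production viewer would use trilinear rather than nearest-neighbour interpolation, but the slicing geometry is the same.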
Lutomski, J E; van Exel, N J A; Kempen, G I J M; Moll van Charante, E P; den Elzen, W P J; Jansen, A P D; Krabbe, P F M; Steunenberg, B; Steyerberg, E W; Olde Rikkert, M G M; Melis, R J F
2015-05-01
Validity is a contextual aspect of a scale that may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features: sampling framework (general population vs. different care settings) and survey mode (interview vs. written questionnaire). Data were extracted from The Older Persons and Informal Caregivers Minimum DataSet (TOPICS-MDS, www.topics-mds.eu ), a pooled public-access data set with information on >3,000 informal caregivers throughout the Netherlands. Meta-correlations and linear mixed models relating the CarerQol's seven dimensions (CarerQol-7D) to caregivers' level of happiness (CarerQol-VAS) and self-rated burden (SRB) were performed. The CarerQol-7D dimensions were correlated with the CarerQol-VAS and SRB in the pooled data set and the subgroups. The strength of correlations between the CarerQol-7D dimensions and SRB was weaker among caregivers who were interviewed than among those who completed a written questionnaire. The directionality of associations between the CarerQol-VAS, SRB and the CarerQol-7D dimensions in the multivariate model supported the construct validity of the CarerQol in the pooled population. Significant interaction terms were observed in several dimensions of the CarerQol-7D across sampling frame and survey mode, suggesting meaningful differences in reporting levels. Although good scientific practice emphasises the importance of re-evaluating instrument properties in individual research studies, our findings support the validity and applicability of the CarerQol instrument in a variety of settings. Due to minor differential reporting, pooled CarerQol data collected using mixed administration modes should be interpreted with caution; for TOPICS-MDS, meta-analytic techniques may be warranted.
NASA Astrophysics Data System (ADS)
Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.
2018-04-01
In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within voxels at multiple resolutions, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained by various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds for different sensors of the same scene by looking up the labels of the 3D spaces in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
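The voting-based annotation step can be sketched with a flat hash-map voxel grid standing in for the paper's octree (a simplification): reference points vote for the label of their voxel, and new points inherit the majority label of the voxel they fall in. The coordinates, labels and resolution below are illustrative assumptions.

```python
from collections import Counter, defaultdict

def voxel_key(point, res):
    return tuple(int(c // res) for c in point)

def build_label_grid(points, labels, res):
    """Majority-vote semantic label for every occupied voxel."""
    votes = defaultdict(Counter)
    for p, lab in zip(points, labels):
        votes[voxel_key(p, res)][lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(points, grid, res, default="unlabeled"):
    """New points inherit the label of the voxel they fall in."""
    return [grid.get(voxel_key(p, res), default) for p in points]

# Reference cloud: a labeled ground patch plus a vertical "facade" cluster.
ref_pts = [(x * 0.3, y * 0.3, 0.1) for x in range(10) for y in range(10)]
ref_lab = ["ground"] * len(ref_pts)
ref_pts += [(5.0, 5.0, z * 0.3) for z in range(10)]
ref_lab += ["facade"] * 10

grid = build_label_grid(ref_pts, ref_lab, res=1.0)
new_scan = [(0.5, 0.5, 0.05), (5.1, 5.2, 2.0), (9.9, 9.9, 9.9)]
labels = annotate(new_scan, grid, res=1.0)
```

A real implementation would store several resolutions, as the octree in the paper does, and fall back to coarser voxels where the fine grid is empty.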
Sunspot Pattern Classification using PCA and Neural Networks (Poster)
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Thompson, D. E.; Slater, G. L.
2005-01-01
The sunspot classification scheme presented in this paper is treated as a 2-D classification problem on archived datasets, and is not a real-time system. As a first step, it mirrors the Zürich/McIntosh historical classification system and reproduces classification of sunspot patterns based on preprocessing and neural net training datasets. Ultimately, the project intends to move beyond such rudimentary schemes to develop spatial-temporal-spectral classes derived by correlating spatial and temporal variations in various wavelengths with the brightness fluctuation spectrum of the sun in those wavelengths. Once the approach is generalized, the focus will naturally move from a 2-D to an n-D classification, where "n" includes time and frequency. Here, the 2-D perspective refers both to the actual SOHO Michelson Doppler Imager (MDI) images that are processed and to the fact that a 2-D matrix is created from each image during preprocessing. The 2-D matrix is the result of running Principal Component Analysis (PCA) over the selected dataset images, and the resulting matrices and their eigenvalues are the objects that are stored in a database, classified, and compared. These matrices are indexed according to the standard McIntosh classification scheme.
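As a rough illustration of the preprocessing step, PCA over a set of images can be computed from an SVD of the mean-centred, flattened dataset; this is a generic sketch under that textbook formulation, not the authors' pipeline, and `pca_features` is a hypothetical name.

```python
import numpy as np

def pca_features(images, n_components):
    """Flatten each image, centre the dataset, and project onto the
    leading principal components found via SVD."""
    X = images.reshape(len(images), -1).astype(float)
    Xc = X - X.mean(axis=0)                  # centre each pixel column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = S ** 2 / (len(X) - 1)          # variance captured per component
    return Xc @ Vt[:n_components].T, eigvals[:n_components]
```

The per-image coefficient vectors and the eigenvalues are the kind of compact objects that can be stored in a database, indexed and compared, as the abstract describes.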
LIME: 3D visualisation and interpretation of virtual geoscience models
NASA Astrophysics Data System (ADS)
Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias
2017-04-01
Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, create novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain related to the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement is essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight and high performance 3D software for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results. 
The background of the software is described, and case studies from outcrop geology, hyperspectral mineral mapping and geophysical-geospatial data integration are used to showcase the novel methods developed.
A General Purpose Feature Extractor for Light Detection and Ranging Data
2010-11-17
Abstract: Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit … detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image … datasets, and the 3D MIT DARPA Urban Challenge dataset. Keywords: SLAM; LIDAR; feature detection; uncertainty estimates; descriptors.
Whole lung morphometry with 3D multiple b-value hyperpolarized gas MRI and compressed sensing.
Chan, Ho-Fung; Stewart, Neil J; Parra-Robles, Juan; Collier, Guilhem J; Wild, Jim M
2017-05-01
To demonstrate three-dimensional (3D) multiple b-value diffusion-weighted (DW) MRI of hyperpolarized 3He gas for whole lung morphometry with compressed sensing (CS). A fully-sampled, two b-value, 3D hyperpolarized 3He DW-MRI dataset was acquired from the lungs of a healthy volunteer and retrospectively undersampled in the ky and kz phase-encoding directions for CS simulations. Optimal k-space undersampling patterns were determined by minimizing the mean absolute error between reconstructed and fully-sampled 3He apparent diffusion coefficient (ADC) maps. Prospective three-fold undersampled 3D multiple b-value 3He DW-MRI datasets were acquired from five healthy volunteers and one chronic obstructive pulmonary disease (COPD) patient, and the mean values of maps of ADC and mean alveolar dimension (LmD) were validated against two-dimensional (2D) and 3D fully-sampled 3He DW-MRI experiments. Reconstructed undersampled datasets showed no visual artifacts and good preservation of the main image features and quantitative information. Good agreement between fully-sampled and prospective undersampled datasets was found, with mean differences of +3.4% and +5.1% observed in mean global ADC and LmD values, respectively. These differences were within the standard deviation range and consistent with values reported for healthy and COPD lungs. Accelerated CS acquisition has facilitated 3D multiple b-value 3He DW-MRI scans in a single breath-hold, enabling whole lung morphometry mapping. Magn Reson Med 77:1916-1925, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
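For orientation, the ADC map underlying a two-b-value DW-MRI experiment follows from the monoexponential signal model. The sketch below is a generic illustration, not the study's reconstruction code, and the default b-values are placeholders rather than the study protocol.

```python
import numpy as np

def adc_map(s_low, s_high, b_low=0.0, b_high=1.6):
    """Monoexponential model S(b) = S(b_low) * exp(-(b - b_low) * ADC),
    so ADC = ln(S(b_low) / S(b_high)) / (b_high - b_low).
    Default b-values (s/cm^2) are illustrative placeholders."""
    eps = 1e-12  # guard against log/division by zero in background voxels
    return np.log((s_low + eps) / (s_high + eps)) / (b_high - b_low)
```

Applied voxel-wise to the two diffusion-weighted volumes, this yields the ADC maps whose mean absolute error was used to select the undersampling pattern.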
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.
2008-10-20
One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three-dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome.
The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic value for both ER-positive and ER-negative breast cancer. The signature was selected using a novel biological approach and hence holds promise to represent the key biological processes of breast cancer.
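The survival probabilities quoted above come from the standard Kaplan-Meier product-limit estimator, which can be sketched as follows; this is a generic implementation, not the authors' analysis code.

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit survival estimate; event=1 for an observed outcome,
    0 for a censored observation."""
    order = np.argsort(time)
    t, e = np.asarray(time)[order], np.asarray(event)[order]
    at_risk, s = len(t), 1.0
    times, surv = [], []
    for ti in np.unique(t):
        d = int(np.sum((t == ti) & (e == 1)))  # events at ti
        n = int(np.sum(t == ti))               # subjects leaving the risk set
        if d > 0:
            s *= 1.0 - d / at_risk
            times.append(ti)
            surv.append(s)
        at_risk -= n
    return np.array(times), np.array(surv)
```

Reading the survival curve at 10 years for the good- and poor-prognosis groups gives exactly the kind of outcome probabilities reported in the abstract.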
Arlt, Stephan; Noser, Hansrudi; Wienke, Andreas; Radetzki, Florian; Hofmann, Gunther Olaf; Mendel, Thomas
2018-05-21
Acetabular fracture surgery is directed toward anatomical reduction and stable fixation to allow for the early functional rehabilitation of an injured hip joint. Recent biomechanical investigations have shown the superiority of using an additional screw in the infraacetabular (IA) region, thereby transfixing the separated columns to strengthen the construct by closing the periacetabular fixation frame. However, the inter-individual existence and variance concerning secure IA screw corridors are poorly understood. This computer-aided 3-D radiomorphometric study examined 124 CT Digital Imaging and Communications in Medicine (DICOM) datasets of intact human pelves (248 acetabula) to visualize the spatial IA corridors as the sum of all intraosseous screw positions. DICOM files were pre-processed using the Amira® 4.2 visualization software. Final corridor computation was accomplished using a custom-made software algorithm. The volumetric measurement data of each corridor were calculated for further statistical analyses. Correlations between the volumetric values and the biometric data were investigated. Furthermore, the influence of hip dysplasia on the IA corridor configuration was analyzed. The IA corridors consistently showed a double-cone shape with the isthmus located at the acetabular fovea. In 97% of male and 91% of female acetabula, a corridor for a 3.5-mm screw could be found. The number of IA corridors was significantly lower in females for screw diameters ≥ 4.5 mm. The mean 3.5-mm screw corridor volume was 16 cm³ in males and 9.2 cm³ in female pelves. Corridor volumes were significantly positively correlated with body height and weight and with the diameter of Köhler's teardrop on standard AP pelvic X-rays. No correlation was observed between hip dysplasia and the IA corridor extent. IA corridors are consistently smaller in females.
However, 3.5-mm small fragment screws may still be used as the standard implant because sex-specific differences are significant only with screw diameters ≥ 4.5 mm. Congenital hip dysplasia does not affect secure IA screw insertion. The described method allows 3-D shape analyses with highly reliable results. The visualization of secure IA corridors may support the spatial awareness of surgeons. Volumetric data allow the reliable assessment of individual IA corridors using standard AP X-ray views, which aids preoperative planning.
van Veelen, G A; Schweitzer, K J; van der Vaart, C H
2013-11-01
To evaluate the reliability of measurements of the levator hiatus and levator-urethra gap (LUG) using three/four-dimensional (3D/4D) transperineal ultrasound in women during their first pregnancy and 6 months postpartum, and to assess the learning process for these measurements. An inexperienced observer was taught to perform measurements of the levator hiatus and LUG by an experienced observer. After training, 3D/4D ultrasound volume datasets of 40 women in the first trimester were analyzed by these two observers. Another training session then took place and both observers repeated the analyses of the same volume datasets. Finally, analyses of 40 volume datasets of the women 6 months postpartum were performed by both observers. Intra- and interobserver reliability were determined by intraclass correlation coefficients (ICC) with 95% CIs. For levator hiatal measurements, in the women during their first pregnancy the interobserver reliability was substantial to almost perfect after both the first and second training session (ICC, 0.62-0.83 and 0.71-0.89, respectively, for anteroposterior diameter, transverse diameter and area at rest, on contraction and on Valsalva) and the intraobserver reliability was substantial to almost perfect for both observers. For these measurements performed once the women had delivered, interobserver reliability was moderate to almost perfect. For LUG measurements performed during pregnancy, interobserver reliability was slight to moderate after the first training session (ICC, 0.14-0.54), but improved after the second training session (ICC, 0.38-0.71), and intraobserver reliability was moderate to substantial for the experienced observer and slight to moderate for the inexperienced observer. For these measurements performed when the women had delivered, interobserver reliability was fair to moderate. The levator hiatus and LUG can be measured reliably using 3D/4D ultrasound in primigravid and primiparous women. 
The technique to measure dimensions of the levator hiatus requires limited teaching, but LUG measurements are more difficult and require more extensive training. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
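Intraclass correlation coefficients of the kind reported here are commonly computed from a two-way ANOVA decomposition. The sketch below implements the ICC(2,1) absolute-agreement, single-measure form, which may differ from the exact variant used in the study.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).
    `ratings` is an n_subjects x k_raters array."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand
    ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

With the two observers' measurements as the two columns, values near 1 correspond to the "almost perfect" interobserver agreement bands quoted in the abstract.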
Evaluation of regression-based 3-D shoulder rhythms.
Xu, Xu; Dickerson, Clark R; Lin, Jia-Hua; McGorry, Raymond W
2016-08-01
The movements of the humerus, the clavicle, and the scapula are not completely independent. The coupled pattern of movement of these bones is called the shoulder rhythm. To date, multiple studies have focused on providing regression-based 3-D shoulder rhythms, in which the orientations of the clavicle and the scapula are estimated from the orientation of the humerus. In this study, six existing regression-based shoulder rhythms were evaluated on an independent dataset in terms of their predictive accuracy. The dataset includes the measured orientations of the humerus, the clavicle, and the scapula of 14 participants over 118 different upper arm postures. The predicted orientations of the clavicle and the scapula were derived by applying those regression-based shoulder rhythms to the humerus orientation. The results indicated that none of the regression-based shoulder rhythms provides consistently more accurate results than the others. For all the joint angles and all the shoulder rhythms, the RMSEs are all greater than 5°. Among those shoulder rhythms, the scapula lateral/medial rotation has the strongest correlation between the predicted and the measured angles, while the other thoracoclavicular and thoracoscapular bone orientation angles showed only a weak to moderate correlation. Since the regression-based shoulder rhythm has been adopted in shoulder biomechanical models to estimate shoulder muscle activities and structural loads, further investigation is needed into how prediction error from the shoulder rhythm affects the output of such biomechanical models. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
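The RMSE and correlation comparisons described above reduce to two short formulas; a minimal generic sketch (function names are illustrative):

```python
import numpy as np

def rmse(pred, meas):
    """Root-mean-square error between predicted and measured angles."""
    p, m = np.asarray(pred, float), np.asarray(meas, float)
    return float(np.sqrt(np.mean((p - m) ** 2)))

def pearson_r(pred, meas):
    """Pearson correlation between predicted and measured angles."""
    p, m = np.asarray(pred, float), np.asarray(meas, float)
    p, m = p - p.mean(), m - m.mean()
    return float(np.sum(p * m) / np.sqrt(np.sum(p ** 2) * np.sum(m ** 2)))
```

Computed per joint angle over all postures and participants, these two quantities reproduce the evaluation reported in the abstract.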
Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.
2013-01-01
Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method. PMID:24000283
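The evaluation design (subsample a high-resolution volume, upsample it with different interpolators, score against the original) can be emulated with `scipy.ndimage.zoom`, which exposes spline orders 0 (nearest neighbour), 1 (trilinear) and higher. This toy example uses a smooth synthetic volume rather than MRI data, and PSNR as the single quality score.

```python
import numpy as np
from scipy.ndimage import zoom

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB, with the data range as peak."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    peak = float(ref.max() - ref.min())
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Emulate a low-resolution dataset by subsampling a smooth synthetic
# volume, then upsample with spline orders 0, 1, 3 and 5 and score each.
rng = np.random.default_rng(0)
vol = zoom(rng.random((8, 8, 8)), 4, order=3)   # smooth 32^3 "HR" volume
low = vol[::2, ::2, ::2]                        # subsampled in all directions
scores = {order: psnr(vol, zoom(low, 2, order=order)) for order in (0, 1, 3, 5)}
```

On smooth data, higher-order splines generally score above nearest-neighbour, which is the kind of ranking the study quantifies with MSE, PSNR and joint entropy.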
Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier
2016-08-01
The optimal method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing for creating virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65).
Three models were successfully printed on desktop 3D printers with good quality and accurate representation of the virtual 3D models. We recommend using BP segmentation with either MRA or bSSFP source datasets to create virtual 3D models for 3D printing. Desktop 3D printers can offer good quality printed models with accurate representation of anatomic detail.
Virtual probing system for medical volume data
NASA Astrophysics Data System (ADS)
Xiao, Yongfei; Fu, Yili; Wang, Shuguo
2007-12-01
Because of the heavy computation involved in 3D medical data visualization, exploring the interior of a dataset interactively remains a challenge. In this paper, we present a novel approach for exploring 3D medical datasets in real time by using a 3D widget to manipulate the scanning plane. With the help of 3D texture support in modern graphics cards, a virtual scanning probe is used to extract an oblique clipping plane of the medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. The system is a valuable tool for anatomy education and for understanding medical images in medical research.
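The core operation, resampling an oblique clipping plane from a volume, can be done on the CPU with trilinear interpolation via `scipy.ndimage.map_coordinates`. The paper uses 3D textures on the GPU instead, so this is only a functional sketch, with `oblique_slice` as a hypothetical name.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(vol, origin, u, v, size=64, step=1.0):
    """Resample a `size` x `size` oblique plane from `vol` by trilinear
    interpolation; `u` and `v` are orthonormal in-plane directions."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    i, j = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    pts = (np.asarray(origin, float)[:, None, None]
           + u[:, None, None] * i * step
           + v[:, None, None] * j * step)
    return map_coordinates(vol, pts, order=1, mode="nearest")
```

A 3D widget would then drive `origin`, `u` and `v` interactively, exactly as the virtual probe drives the hardware-accelerated texture lookup in the paper.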
Rashno, Abdolreza; Nazari, Behzad; Koozekanani, Dara D.; Drayna, Paul M.; Sadri, Saeed; Rabbani, Hossein
2017-01-01
A fully-automated method based on graph shortest path, graph cut and neutrosophic (NS) sets is presented for fluid segmentation in OCT volumes for exudative age related macular degeneration (EAMD) subjects. The proposed method includes three main steps: 1) The inner limiting membrane (ILM) and the retinal pigment epithelium (RPE) layers are segmented using proposed methods based on graph shortest path in the NS domain. A flattened RPE boundary is calculated such that all three types of fluid regions, intra-retinal, sub-retinal and sub-RPE, are located above it. 2) Seed points for fluid (object) and tissue (background) are initialized for graph cut by the proposed automated method. 3) A new cost function is proposed in kernel space, and is minimized with max-flow/min-cut algorithms, leading to a binary segmentation. Important properties of the proposed steps are proven and the quantitative performance of each step is analyzed separately. The proposed method is evaluated using a publicly available dataset referred to as Optima and a local dataset from the UMN clinic. For fluid segmentation in 2D individual slices, the proposed method outperforms the previously proposed methods by 18% and 21% with respect to the dice coefficient and sensitivity, respectively, on the Optima dataset, and by 16%, 11% and 12% with respect to the dice coefficient, sensitivity and precision, respectively, on the local UMN dataset. Finally, for 3D fluid volume segmentation, the proposed method achieves a true positive rate (TPR) and false positive rate (FPR) of 90% and 0.74%, respectively, with a correlation of 95% between automated and expert manual segmentations using linear regression analysis. PMID:29059257
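Step 3 rests on a max-flow/min-cut computation. A compact Edmonds-Karp implementation illustrates the principle on a toy graph; the paper's kernel-space cost function and automated seed construction are not reproduced here.

```python
from collections import defaultdict, deque

def add_edge(cap, u, v, c):
    """Directed edge u->v with capacity c plus a zero-capacity reverse edge."""
    cap[u][v] = c
    cap[v].setdefault(u, 0)

def min_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns (max_flow, source side of the cut)."""
    flow = defaultdict(int)

    def bfs():
        parent, q = {s: None}, deque([s])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    total = 0
    while (parent := bfs()) is not None:
        path, v = [], t
        while v != s:                       # walk back along the found path
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[(u, v)] for u, v in path)
        for u, v in path:                   # augment, updating residuals
            flow[(u, v)] += aug
            flow[(v, u)] -= aug
        total += aug
    seen, q = {s}, deque([s])               # nodes reachable in the residual
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in seen and cap[u][v] - flow[(u, v)] > 0:
                seen.add(v)
                q.append(v)
    return total, seen
```

In a graph-cut segmentation the terminals `s`/`t` play the role of the fluid and tissue seeds, every voxel is a node with neighbourhood edges, and `seen` is the fluid side after the cut.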
Jong, Victor L; Novianti, Putri W; Roes, Kit C B; Eijkemans, Marinus J C
2014-12-01
The literature shows that classifiers perform differently across datasets and that correlations within datasets affect the performance of classifiers. The question that arises is whether the correlation structure within datasets differs significantly across diseases. In this study, we evaluated the homogeneity of correlation structures within and between datasets of six etiological disease categories: inflammatory, immune, infectious, degenerative, hereditary and acute myeloid leukemia (AML). We also assessed the effect of two filtering methods, detection-call and variance filtering, on correlation structures. We downloaded microarray datasets from ArrayExpress for experiments meeting predefined criteria and ended up with 12 datasets for non-cancerous diseases and six for AML. The datasets were preprocessed by a common procedure incorporating platform-specific recommendations and the two filtering methods mentioned above. Homogeneity of correlation matrices between and within datasets of etiological diseases was assessed using Box's M statistic on permuted samples. We found that correlation structures significantly differ between datasets of the same and/or different etiological disease categories and that variance filtering eliminates more uncorrelated probesets than detection-call filtering and thus renders the data highly correlated.
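Box's M on permuted samples can be sketched directly from its definition; the group sizes, permutation count and data below are illustrative only, not the study's preprocessing or exact test.

```python
import numpy as np

def box_m(groups):
    """Box's M statistic for equality of covariance matrices across groups.
    `groups` is a list of (n_i x p) sample matrices."""
    covs = [np.cov(g, rowvar=False) for g in groups]
    ns = [len(g) for g in groups]
    N, k = sum(ns), len(groups)
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (N - k)
    m = (N - k) * np.linalg.slogdet(pooled)[1]
    for n, c in zip(ns, covs):
        m -= (n - 1) * np.linalg.slogdet(c)[1]
    return m

def box_m_permutation_p(groups, n_perm=200, seed=0):
    """Permutation p-value: reshuffle the sample-to-group assignment."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack(groups)
    sizes = [len(g) for g in groups]
    observed = box_m(groups)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += box_m(np.split(perm, np.cumsum(sizes)[:-1])) >= observed
    return (count + 1) / (n_perm + 1)
```

Permutation rather than the chi-square approximation avoids distributional assumptions, which matters for high-dimensional, non-normal expression data.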
The Most Common Geometric and Semantic Errors in CityGML Datasets
NASA Astrophysics Data System (ADS)
Biljecki, F.; Ledoux, H.; Du, X.; Stoter, J.; Soon, K. H.; Khoo, V. H. S.
2016-10-01
To be used as input in most simulation and modelling software, 3D city models should be geometrically and topologically valid, and semantically rich. We investigate in this paper what is the quality of currently available CityGML datasets, i.e. we validate the geometry/topology of the 3D primitives (Solid and MultiSurface), and we validate whether the semantics of the boundary surfaces of buildings is correct or not. We have analysed all the CityGML datasets we could find, both from portals of cities and on different websites, plus a few that were made available to us. We have thus validated 40M surfaces in 16M 3D primitives and 3.6M buildings found in 37 CityGML datasets originating from 9 countries, and produced by several companies with diverse software and acquisition techniques. The results indicate that CityGML datasets without errors are rare, and those that are nearly valid are mostly simple LOD1 models. We report on the most common errors we have found, and analyse them. One main observation is that many of these errors could be automatically fixed or prevented with simple modifications to the modelling software. Our principal aim is to highlight the most common errors so that these are not repeated in the future. We hope that our paper and the open-source software we have developed will help raise awareness for data quality among data providers and 3D GIS software producers.
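As an example of the kind of geometric check involved, the sketch below flags unclosed `gml:LinearRing` elements (first and last coordinates differing), one common CityGML error; real validation of Solids and semantics is far more involved, and `unclosed_rings` is a hypothetical helper, not the authors' tool.

```python
import xml.etree.ElementTree as ET

GML = "{http://www.opengis.net/gml}"

def unclosed_rings(citygml_text, tol=1e-9):
    """Indices of gml:LinearRing elements whose first and last 3D
    coordinates differ, i.e. rings that are not topologically closed."""
    root = ET.fromstring(citygml_text)
    bad = []
    for i, ring in enumerate(root.iter(GML + "LinearRing")):
        coords = [float(x) for x in ring.find(GML + "posList").text.split()]
        if any(abs(a - b) > tol for a, b in zip(coords[:3], coords[-3:])):
            bad.append(i)
    return bad
```

Checks of this kind are exactly what could be built into modelling software to prevent the most common errors at export time, as the paper suggests.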
Laplace-domain waveform modeling and inversion for the 3D acoustic-elastic coupled media
NASA Astrophysics Data System (ADS)
Shin, Jungkyun; Shin, Changsoo; Calandra, Henri
2016-06-01
Laplace-domain waveform inversion reconstructs long-wavelength subsurface models by using the zero-frequency component of damped seismic signals. Despite the computational advantages of Laplace-domain waveform inversion over conventional frequency-domain waveform inversion, an acoustic assumption and an iterative matrix solver have been used to invert 3D marine datasets to mitigate the intensive computing cost. In this study, we develop a Laplace-domain waveform modeling and inversion algorithm for 3D acoustic-elastic coupled media by using a parallel sparse direct solver library (MUltifrontal Massively Parallel Solver, MUMPS). We precisely simulate a real marine environment by coupling the 3D acoustic and elastic wave equations with the proper boundary condition at the fluid-solid interface. In addition, we can extract the elastic properties of the Earth below the sea bottom from the recorded acoustic pressure datasets. As a matrix solver, the parallel sparse direct solver is used to factorize the non-symmetric impedance matrix in a distributed memory architecture and rapidly solve the wave field for a number of shots by using the lower and upper matrix factors. Using both synthetic datasets and real datasets obtained by a 3D wide azimuth survey, the long-wavelength component of the P-wave and S-wave velocity models is reconstructed and the proposed modeling and inversion algorithm are verified. A cluster of 80 CPU cores is used for this study.
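The key computational pattern, factorizing the impedance matrix once with a direct solver and reusing the factors for many shots, can be illustrated with SciPy's sparse LU on a toy 1D Laplace-domain operator. MUMPS itself is not used here, and the operator below is a simple stand-in, not the paper's 3D acoustic-elastic coupled system.

```python
import numpy as np
from scipy.sparse import csc_matrix, diags, identity
from scipy.sparse.linalg import splu

# Toy 1D stand-in for the Laplace-domain impedance matrix:
# (s/c)^2 * I - Laplacian, which is real and positive definite.
n, s, c, h = 200, 2.0, 1.5, 0.01
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h ** 2
A = csc_matrix((s / c) ** 2 * identity(n) - lap)

lu = splu(A)                        # factorize once with a direct solver
shots = np.zeros((n, 3))            # one right-hand side per source position
for k, pos in enumerate((50, 100, 150)):
    shots[pos, k] = 1.0 / h
# reuse the L/U factors for every shot instead of refactorizing
fields = np.column_stack([lu.solve(shots[:, k]) for k in range(3)])
```

Because factorization dominates the cost, amortizing it over many shots is what makes the direct-solver approach attractive for surveys with thousands of sources.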
Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram
2006-04-01
The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures-VBS). Medical schools are combining these virtual training systems and classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses the National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize realistic 2D (using axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resultant virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress using the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, which make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.
Borumandi, Farzad; Hammer, Beat; Noser, Hansrudi; Kamer, Lukas
2013-05-01
Three-dimensional (3D) CT reconstruction of the bony orbit for accurate measurement and classification of the complex orbital morphology may not be suitable for daily practice. We present an easily measurable two-dimensional (2D) reference dataset of the bony orbit for study of individual orbital morphology prior to decompression surgery in Graves' orbitopathy. CT images of 70 European adults (140 orbits) with unaffected orbits were included. On axial views, the following orbital dimensions were assessed: orbital length (OL), globe length (GL), GL/OL ratio and cone angle. Postprocessed CT data were required to measure the corresponding 3D orbital parameters. The 2D and 3D orbital parameters were correlated. The 2D orbital parameters were significantly correlated to the corresponding 3D parameters (significant at the 0.01 level). The average GL was 25 mm (SD±1.0), the average OL was 42 mm (SD±2.0) and the average GL/OL ratio was 0.6 (SD±0.03). The posterior cone angle was, on average, 50.2° (SD±4.1). Three orbital sizes were classified: short (OL≤40 mm), medium (OL>40 to <45 mm) and large (OL≥45 mm). We present easily measurable reference data for the orbit that can be used for preoperative study and classification of individual orbital morphology. A short and shallow orbit may require a different decompression technique than a large and deep orbit. Prospective clinical trials are needed to demonstrate how individual orbital morphology affects the outcome of decompression surgery.
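The three-way size classification can be stated as a trivial function over the OL cut-offs given above (`classify_orbit` is a hypothetical name for illustration):

```python
def classify_orbit(ol_mm):
    """Orbital size class from orbital length (OL) using the cut-offs
    above: short (OL <= 40 mm), medium (40-45 mm), large (OL >= 45 mm)."""
    if ol_mm <= 40.0:
        return "short"
    if ol_mm < 45.0:
        return "medium"
    return "large"
```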
Welsh, A W; Hou, M; Meriki, N; Martins, W P
2012-10-01
Volumetric impedance indices derived from spatiotemporal image correlation (STIC) power Doppler ultrasound (PDU) might overcome the influence of machine settings and attenuation. We examined the feasibility of obtaining these indices from spherical samples of anterior placentas in healthy pregnancies, and assessed intraobserver reliability and correlation with conventional umbilical artery (UA) impedance indices. Uncomplicated singleton pregnancies with anterior placenta were included in the study. A single observer evaluated UA pulsatility index (PI), resistance index (RI) and systolic/diastolic ratio (S/D) and acquired three STIC-PDU datasets from the placenta just above the placental cord insertion. Another observer analyzed the STIC-PDU datasets using Virtual Organ Computer-aided AnaLysis (VOCAL) spherical samples from every frame to determine the vascularization index (VI) and vascularization flow index (VFI); maximum, minimum and average values were used to determine the three volumetric impedance indices (vPI, vRI, vS/D). Intraobserver reliability was examined by intraclass correlation coefficients (ICC) and association between volumetric indices from placenta, and UA Doppler indices were assessed by Pearson's correlation coefficient. A total of 25 pregnant women were evaluated but five were excluded because of artifacts observed during analysis. The reliability of measurement of volumetric indices of both VI and VFI from three STIC-PDU datasets was similar, with all ICCs ≥ 0.78. Pearson's r values showed a weak and non-significant correlation between UA pulsed-wave Doppler indices and their respective volumetric indices from spherical samples of placenta (all r ≥ 0.23). VOCAL indices from specific phases of the cardiac cycle showed good repeatability (ICC ≥ 0.92). Volumetric impedance indices determined from spherical samples of placenta are sufficiently reliable but do not correlate with UA Doppler indices in healthy pregnancies. Copyright © 2012 ISUOG. 
Published by John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Alexandrakis, C.; Calo, M.; Bouchaala, F.; Vavrycuk, V.
2012-04-01
The Novy Kostel region in West Bohemia is an area prone to periodic bursts of natural microseismic activity. In this study, we use 476 events from the October 2008 earthquake swarm recorded on the WEBNET seismic network. The foci occurred on the northern extension of the Marianske-Lazne Fault near the town of Novy Kostel in the Czech Republic. Initial source locations indicated a rupture zone extending approximately 3 km along the fault, with the sources spread over 4 km in depth, centered at 9 km. We use the double-difference tomography method to study the fault structure by relocating the sources and inverting for the P and S velocities in the rupture region. Events are first relocated with the HypoDD program (Waldhauser and Ellsworth, 2000), using both catalog and cross-correlated datasets. These datasets, along with the absolute time picks, are then used by the TomoDD program (Zhang and Thurber, 2003) to iteratively relocate the sources and invert for the 3D seismic structure. This dataset is well suited to the procedure, as the cluster is very compact and the WEBNET network offers ray coverage in all directions. The relocated events collapse onto a fault plane striking at 169 degrees NE. This fault plane has three sections with distinct dip angles. At the shallowest (up to 8 km) and deepest (10-11 km) parts of the fault, the dip is shallow, whereas the middle section has a steep dip angle. Most events occur at the deeper part of the middle section. The inverted velocities correspond well to results from regional seismic refraction surveys (e.g., CELEBRATION 2000). Here, more details of the 3D velocity structure are revealed. As expected, velocities to the east of the fault are overall higher, corresponding to the uplifted northern margin of the Eger Rift. Finer structures surrounding the source region are also resolved.
Three-dimensional reconstruction of Roman coins from photometric image sets
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay; Moitinho de Almeida, Vera; Hess, Mona
2017-01-01
A method is presented for increasing the spatial resolution of the three-dimensional (3-D) digital representation of coins by combining fine photometric detail derived from a set of photographic images with accurate geometric data from a 3-D laser scanner. 3-D reconstructions were made of the obverse and reverse sides of two ancient Roman denarii by processing sets of images captured under directional lighting in an illumination dome. Surface normal vectors were calculated by a "bounded regression" technique, excluding both shadow and specular components of reflection from the metallic surface. Because of the known difficulty in achieving geometric accuracy when integrating photometric normals to produce a digital elevation model, the low spatial frequencies were replaced by those derived from the point cloud produced by a 3-D laser scanner. The two datasets were scaled and registered by matching the outlines and correlating the surface gradients. The final result was a realistic rendering of the coins at a spatial resolution of 75 pixels/mm (13-μm spacing), in which the fine detail modulated the underlying geometric form of the surface relief. The method opens the way to obtain high quality 3-D representations of coins in collections to enable interactive online viewing.
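The core photometric step the paper builds on can be illustrated with a minimal sketch: under a Lambertian assumption, intensity measurements under three known light directions determine the surface normal and albedo by solving a small linear system. This is the textbook photometric-stereo principle, not the authors' exact "bounded regression" method (which additionally excludes shadow and specular samples); the light directions and intensities below are illustrative.

```python
import math

def solve_3x3(A, b):
    """Solve the 3x3 linear system A x = b by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]  # replace column i with the right-hand side
        xs.append(det3(Ai) / d)
    return xs

def photometric_normal(lights, intensities):
    """Surface normal from Lambertian intensities I = rho * (L . n).
    lights: three unit light-direction vectors (rows of L);
    intensities: the three observed pixel intensities.
    Solving L g = I gives g = rho * n, so |g| is the albedo rho."""
    g = solve_3x3(lights, intensities)
    rho = math.sqrt(sum(c * c for c in g))
    return [c / rho for c in g], rho
```

With more than three lights the same model is solved in a least-squares sense, which is where outlier (shadow/highlight) rejection such as the paper's bounded regression comes in.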
de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos
2018-07-01
The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). The imaging dataset for this study consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with a diagnosis of TMJOA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who had experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age- and sex-matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire was administered and blood and saliva samples were collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high-dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ, with 91% agreement between the clinician consensus and the SVA classifier. The DSCI also remotely ran a novel statistical analysis, the Multivariate Functional Shape Data Analysis, that computed high-dimensional correlations between 3D shape coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. 
The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.
A Critical Review of Automated Photogrammetric Processing of Large Datasets
NASA Astrophysics Data System (ADS)
Remondino, F.; Nocerino, E.; Toschi, I.; Menna, F.
2017-08-01
The paper reports some comparisons between commercial software packages able to automatically process image datasets for 3D reconstruction purposes. The main aspects investigated in the work are the capability to correctly orient large sets of images of complex environments, the metric quality of the results, replicability and redundancy. Different datasets are employed, each featuring a different number of images, ground sample distances (GSDs) at cm and mm resolutions, and ground truth information to perform statistical analyses of the 3D results. A glossary of photogrammetric terms is also included, to establish rigorous terms of reference for the comparisons and critical analyses.
The polyGeVero® software for fast and easy computation of 3D radiotherapy dosimetry data
NASA Astrophysics Data System (ADS)
Kozicki, Marek; Maras, Piotr
2015-01-01
The polyGeVero® software package was developed for calculations on 3D dosimetry data such as polymer gel dosimetry data. It comprises four workspaces designed for: i) calculating calibrations, ii) storing calibrations in a database, iii) calculating 3D dose distribution cubes, and iv) comparing two datasets, e.g. one measured with a 3D dosimeter against one calculated with a treatment planning system. To accomplish these calculations the software is equipped with a number of tools, such as a brachytherapy isotopes database, brachytherapy dose versus distance calculation based on the line approximation approach, automatic spatial alignment of two 3D dose cubes for comparison purposes, 3D gamma index, 3D gamma angle, 3D dose difference, Pearson's coefficient, histogram calculations, isodose superimposition for two datasets, and profile calculations in any desired direction. This communication briefly presents the main functions of the software and reports on the speed of calculations performed by polyGeVero®.
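The gamma-index comparison named among those tools can be sketched in 1D for clarity (polyGeVero® computes it in 3D): each reference point passes if some evaluated point is close in the combined dose-difference/distance-to-agreement sense. The 3%/3 mm criteria and profiles below are illustrative choices, not values from the paper.

```python
def gamma_index(ref, evalu, spacing, dose_tol=0.03, dist_tol=3.0):
    """Per-point gamma of an evaluated 1D dose profile against a reference.
    ref/evalu: dose samples on the same grid; spacing: grid step in mm;
    dose_tol: dose criterion as a fraction of the reference maximum;
    dist_tol: distance-to-agreement criterion in mm.
    gamma <= 1 means the point passes the combined test."""
    dmax = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evalu):
            dd = (de - dr) / (dose_tol * dmax)       # dose-difference term
            dx = (j - i) * spacing / dist_tol         # distance term
            best = min(best, (dd * dd + dx * dx) ** 0.5)
        gammas.append(best)
    return gammas
```

The 3D version searches a spatial neighbourhood instead of a 1D index range, but the per-point minimization is the same.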
The Role of Research Institutions in Building Visual Content for the Geowall
NASA Astrophysics Data System (ADS)
Newman, R. L.; Kilb, D.; Nayak, A.; Kent, G.
2003-12-01
The advent of the low-cost Geowall (http://www.geowall.org) allows researchers and students to study 3-D geophysical datasets in a collaborative setting. Although 3-D visual objects can aid the understanding of geological principles in the classroom, it is often difficult for staff to develop their own custom visual objects. This is a fundamentally important need that research institutions storing large (terabyte) geophysical datasets can address. At Scripps Institution of Oceanography (SIO) we regularly explore gigabyte 3-D visual objects in the SIO Visualization Center (http://siovizcenter.ucsd.edu). Exporting these datasets for use with the Geowall has become routine with current software applications such as IVS's Fledermaus and iView3D. We have developed visualizations that incorporate topographic, bathymetric, and 3-D volumetric crustal datasets to demonstrate fundamental principles of earth science including plate tectonics, seismology, sea-level change, and neotectonics. These visualizations are available for download either via FTP or a website, and have been incorporated into graduate and undergraduate classes at both SIO and the University of California, San Diego. Additionally, staff at the Visualization Center develop content for external schools and colleges such as the Preuss School, a local middle/high school, where a Geowall was installed in February 2003 and a curriculum was developed for 8th grade students. We have also developed custom visual objects for researchers and educators at diverse educational institutions across the globe. At SIO we encourage graduate students and researchers alike to develop visual objects of their datasets through innovative classes and competitions. This not only assists the researchers themselves in understanding their data but also increases the number of visual objects freely available to geoscience educators worldwide.
3D shape representation with spatial probabilistic distribution of intrinsic shape keypoints
NASA Astrophysics Data System (ADS)
Ghorpade, Vijaya K.; Checchin, Paul; Malaterre, Laurent; Trassoudaine, Laurent
2017-12-01
The accelerated advancement in modeling, digitizing, and visualizing techniques for 3D shapes has led to an increasing number of 3D models being created and used, thanks to 3D sensors that are readily available and easy to use. As a result, determining the similarity between 3D shapes has become consequential and is a fundamental task in shape-based recognition, retrieval, clustering, and classification. Several decades of research in Content-Based Information Retrieval (CBIR) have resulted in diverse techniques for 2D and 3D shape or object classification/retrieval and many benchmark datasets. In this article, a novel technique for 3D shape representation and object classification is proposed, based on analyses of the spatial, geometric distributions of 3D keypoints. These distributions capture the intrinsic geometric structure of 3D objects. The result of the approach is a probability distribution function (PDF) produced from the spatial disposition of 3D keypoints that are stable on the object surface and invariant to pose changes. Each class/instance of an object can be uniquely represented by a PDF. This shape representation is robust yet conceptually simple, easy to implement, and fast to compute. Both Euclidean and topological spaces on the object's surface are considered to build the PDFs. Topology-based geodesic distances between keypoints exploit the non-planar surface properties of the object. The performance of the novel shape signature is tested through object classification accuracy. The classification efficacy of the new shape analysis method is evaluated on a new dataset acquired with a Time-of-Flight camera, and a comparative evaluation against state-of-the-art methods on a standard benchmark dataset is also performed. Experimental results demonstrate superior classification performance of the new approach on the RGB-D dataset and depth data.
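The Euclidean half of such a signature can be sketched as a normalized histogram of pairwise keypoint distances. This is a hedged illustration of the spatial-distribution idea only: the bin count and keypoints are arbitrary, and the geodesic-distance variant the authors also use is not sketched here.

```python
import math
from itertools import combinations

def distance_pdf(keypoints, bins=8, max_dist=None):
    """Normalized histogram (a discrete PDF) of pairwise Euclidean
    distances between 3D keypoints. keypoints: list of (x, y, z) tuples."""
    dists = [math.dist(a, b) for a, b in combinations(keypoints, 2)]
    if max_dist is None:
        max_dist = max(dists)          # normalize by the object's extent
    hist = [0] * bins
    for d in dists:
        idx = min(int(d / max_dist * bins), bins - 1)
        hist[idx] += 1
    total = len(dists)
    return [h / total for h in hist]   # sums to 1: a discrete PDF
```

Two shapes can then be compared by any distance between their PDFs (e.g. histogram intersection or chi-squared), which is pose-invariant because only inter-keypoint distances enter.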
A 2D range Hausdorff approach to 3D facial recognition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin
2004-11-01
This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
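The idea of running Hausdorff matching on the range-image grid rather than an unordered 3D point set can be sketched as below. Restricting each point's search to a small local window (valid once the faces are roughly aligned) is our illustrative simplification of how the grid structure avoids an all-pairs search; it is not the authors' exact algorithm.

```python
def directed_hausdorff_range(probe, template, window=1):
    """Directed Hausdorff distance between two 2D range images (depth maps).
    probe/template: 2D lists of depth values, None where the sensor had no
    return. Each probe pixel searches only a (2*window+1)^2 neighbourhood,
    exploiting the grid and the assumed coarse pre-alignment."""
    worst = 0.0
    rows, cols = len(probe), len(probe[0])
    for r in range(rows):
        for c in range(cols):
            zp = probe[r][c]
            if zp is None:
                continue                     # skip dropouts/occlusions
            best = float("inf")
            for dr in range(-window, window + 1):
                for dc in range(-window, window + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        zt = template[rr][cc]
                        if zt is not None:
                            d = (dr * dr + dc * dc + (zp - zt) ** 2) ** 0.5
                            best = min(best, d)
            worst = max(worst, best)         # Hausdorff: worst best-match
    return worst
```

A symmetric Hausdorff distance is the maximum of the two directed distances; robust variants replace the max over pixels with a rank statistic to tolerate outliers.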
Milešević, Jelena; Samaniego, Lourdes; Kiely, Mairead; Glibetić, Maria; Roe, Mark; Finglas, Paul
2018-02-01
A review of national nutrition surveys from 2000 to date demonstrated a high prevalence of vitamin D intakes below the EFSA Adequate Intake (AI) of 15 μg/d in adults across Europe. Dietary assessment and modelling are required to monitor the efficacy and safety of ongoing strategic vitamin D fortification. To support these studies, a specialized vitamin D food composition dataset, based on EuroFIR standards, was compiled. The FoodEXplorer™ tool was used to retrieve well-documented analytical data for vitamin D and arrange the data into two datasets - European (8 European countries, 981 data values) and US (1836 data values). Data were classified using the LanguaL™, FoodEX2 and ODIN classification systems and ranked according to quality criteria. Significant differences in the content, quality of data values, missing data on vitamin D2 and 25(OH)D3, and documentation of analytical methods were observed. The dataset is available through the EuroFIR platform. Copyright © 2017 Elsevier Ltd. All rights reserved.
Volonghi, Paola; Tresoldi, Daniele; Cadioli, Marcello; Usuelli, Antonio M; Ponzini, Raffaele; Morbiducci, Umberto; Esposito, Antonio; Rizzo, Giovanna
2016-02-01
To propose and assess a new method that automatically extracts a three-dimensional (3D) geometric model of the thoracic aorta (TA) from 3D cine phase contrast MRI (PCMRI) acquisitions. The proposed method is composed of two steps: segmentation of the TA and creation of the 3D geometric model. The segmentation algorithm, based on the Level Set method, was tuned and applied to healthy subjects acquired in three different modalities (with and without SENSE reduction factors). Accuracy was evaluated using standard quality indices. The 3D model is characterized by the vessel surface mesh and its centerline; the models obtained from the three different datasets were also compared in terms of radius of curvature (RC) and average tortuosity (AT). In all datasets, the segmentation quality indices confirmed very good agreement between manual and automatic contours (average symmetric distance < 1.44 mm, DICE Similarity Coefficient > 0.88). The 3D models extracted from the three datasets were found to be comparable, with differences of less than 10% for RC and 11% for AT. Our method was found effective on PCMRI data in providing a 3D geometric model of the TA, to support morphometric and hemodynamic characterization of the aorta. © 2015 Wiley Periodicals, Inc.
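The DICE Similarity Coefficient reported above is a standard overlap measure between two binary segmentations, 2|A ∩ B| / (|A| + |B|). A minimal sketch with illustrative masks:

```python
def dice(mask_a, mask_b):
    """DICE = 2|A ∩ B| / (|A| + |B|) for two flattened binary masks
    (iterables of 0/1 values over the same voxel grid)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0  # two empty masks agree
```

A value above 0.88, as in the study, indicates that automatic and manual contours overlap in the great majority of their voxels.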
Analysis of 3d Building Models Accuracy Based on the Airborne Laser Scanning Point Clouds
NASA Astrophysics Data System (ADS)
Ostrowski, W.; Pilarska, M.; Charyton, J.; Bakuła, K.
2018-05-01
Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for inspecting datasets containing 3D models is presented. The proposed approach checks every building in a dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of normal heights between the reference point cloud and the tested planes, combined with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
Effective 2D-3D medical image registration using Support Vector Machine.
Qi, Wenyuan; Gu, Lixu; Zhao, Qiang
2008-01-01
Registration of a pre-operative 3D volume dataset with intra-operative 2D images is gradually becoming an important technique to assist radiologists in diagnosing complicated diseases easily and quickly. In this paper, we propose a novel 2D/3D registration framework based on the Support Vector Machine (SVM), to compensate for the disadvantage of generating a large number of digitally reconstructed radiograph (DRR) images intra-operatively. An estimated similarity-metric distribution is built from the relationship between transform parameters and sparse precomputed target metric values by means of support vector regression (SVR). Based on this distribution, globally optimal transform parameters are searched by an optimizer in order to align the 3D volume dataset with the intra-operative 2D image. Experiments reveal that our proposed registration method improves performance compared with conventional registration methods and provides a precise registration result efficiently.
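The underlying idea, learning a cheap surrogate of the similarity metric over transform parameters from sparse precomputed samples and then searching the surrogate instead of rendering new DRRs, can be sketched as follows. The paper uses support vector regression; here a simple RBF kernel (Nadaraya-Watson) regression stands in for it purely for illustration, and all samples are synthetic.

```python
import math

def rbf_surrogate(samples, gamma=1.0):
    """samples: list of (params_tuple, metric_value) pairs precomputed at
    sparse transform parameters. Returns a callable estimating the metric
    at new parameters by RBF-weighted averaging (a stand-in for SVR)."""
    def estimate(params):
        num = den = 0.0
        for p, v in samples:
            w = math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(params, p)))
            num += w * v
            den += w
        return num / den
    return estimate

def best_params(estimate, grid):
    """Coarse grid search over the surrogate, replacing repeated DRR
    rendering; assumes a lower metric value means a better match."""
    return min(grid, key=estimate)
```

In the real framework the optimizer would refine this coarse search, and the winning parameters would be verified against actual DRRs.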
Choi, Seong Hee; Zhang, Yu; Jiang, Jack J.; Bless, Diane M.; Welham, Nathan V.
2011-01-01
Objective The primary goal of this study was to evaluate a nonlinear dynamic approach to the acoustic analysis of dysphonia associated with vocal fold scar and sulcus vocalis. Study Design Case-control study. Methods Acoustic voice samples from scar/sulcus patients and age/sex-matched controls were analyzed using correlation dimension (D2) and phase plots, time-domain based perturbation indices (jitter, shimmer, signal-to-noise ratio [SNR]), and an auditory-perceptual rating scheme. Signal typing was performed to identify samples with bifurcations and aperiodicity. Results Type 2 and 3 acoustic signals were highly represented in the scar/sulcus patient group. When data were analyzed irrespective of signal type, all perceptual and acoustic indices successfully distinguished scar/sulcus patients from controls. Removal of type 2 and 3 signals eliminated the previously identified differences between experimental groups for all acoustic indices except D2. The strongest perceptual-acoustic correlation in our dataset was observed for SNR; the weakest correlation was observed for D2. Conclusions These findings suggest that D2 is inferior to time-domain based perturbation measures for the analysis of dysphonia associated with scar/sulcus; however, time-domain based algorithms are inherently susceptible to inflation under highly aperiodic (i.e., type 2 and 3) signal conditions. Auditory-perceptual analysis, unhindered by signal aperiodicity, is therefore a robust strategy for distinguishing scar/sulcus patient voices from normal voices. Future acoustic analysis research in this area should consider alternative (e.g., frequency- and quefrency-domain based) measures alongside additional nonlinear approaches. PMID:22516315
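The time-domain perturbation indices the study compares against D2 are simple period statistics; local jitter, for instance, is the mean absolute difference between consecutive glottal periods relative to the mean period. A minimal sketch with illustrative period values (a real analysis would first extract periods from the acoustic waveform):

```python
def jitter_percent(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, divided by the mean period. periods: cycle durations
    in ms (or any consistent unit)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    mean_period = sum(periods) / len(periods)
    return 100 * (sum(diffs) / len(diffs)) / mean_period
```

The study's caveat follows directly from this formula: under type 2 and 3 (aperiodic) signals, period extraction itself becomes unreliable, inflating the index.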
3D elastic full waveform inversion: case study from a land seismic survey
NASA Astrophysics Data System (ADS)
Kormann, Jean; Marti, David; Rodriguez, Juan-Esteban; Marzan, Ignacio; Ferrer, Miguel; Gutierrez, Natalia; Farres, Albert; Hanzich, Mauricio; de la Puente, Josep; Carbonell, Ramon
2016-04-01
Full Waveform Inversion (FWI) is one of the most advanced processing methods; it has recently reached a mature state after years of solving theoretical and technical issues such as the non-uniqueness of the solution and harnessing the huge computational power required by realistic scenarios. BSIT (Barcelona Subsurface Imaging Tools, www.bsc.es/bsit) includes an FWI algorithm that can tackle very complex problems involving large datasets. We present here the application of this system to a 3D dataset acquired to constrain the shallow subsurface. This is where the wavefield is most complicated, because most of the wavefield conversions take place in the shallow region and because the medium is much more laterally heterogeneous. With this in mind, an isotropic elastic approximation is, at a minimum, the appropriate kernel engine for FWI. The current study explores the possibility of applying elastic isotropic FWI using only the vertical component of the recorded seismograms. The survey covers an area of 500×500 m2 and consists of a 10 m×20 m receiver grid combined with a 250 kg accelerated weight-drop source on a displaced 20 m×20 m grid. One of the main challenges in this case study is the costly 3D modeling, which includes topography and substantial free-surface effects. FWI is applied to a data subset (shooting lines 4 to 12) for 3 frequencies ranging from 15 to 25 Hz. The starting models are obtained from travel-time tomography, and the whole computation is run on 75 nodes of the MareNostrum supercomputer for 3 days. The resulting models provide a higher resolution of the subsurface structures and show a good correlation with the available borehole measurements. FWI thus reliably extends this 1D borehole knowledge to 3D.
Ali, Abdirahman A; O'Neill, Christopher J; Thomson, Peter C; Kadarmideen, Haja N
2012-07-27
Infectious bovine keratoconjunctivitis (IBK) or 'pinkeye' is an economically important ocular disease that significantly impacts animal performance. Genetic parameters for IBK infection and its genetic and phenotypic correlations with cattle tick counts, number of helminth (unspecified species) eggs per gram of faeces and growth traits in Australian tropically adapted Bos taurus cattle were estimated. Animals were clinically examined for the presence of IBK infection before and after weaning, when the calves were 3 to 6 months and 15 to 18 months old, respectively, and were also recorded for tick counts, helminth egg counts as an indicator of intestinal parasites, and live weights at several ages including 18 months. Negative genetic correlations were estimated between IBK incidence and weight traits for animals in the pre-weaning and post-weaning datasets. Genetic correlations among weight measurements were positive, with moderate to high values. Genetic correlations of IBK incidence with tick counts were positive for the pre-weaning and negative for the post-weaning datasets, but negative with helminth egg counts for the pre-weaning dataset and slightly positive for the post-weaning dataset. Genetic correlations between tick and helminth egg counts were moderate and positive for both datasets. Phenotypic correlations of IBK incidence with helminth eggs per gram of faeces were moderate and positive for both datasets, but were close to zero for both datasets with tick counts. Our results suggest that genetic selection against IBK incidence in tropical cattle is feasible and that calves genetically prone to acquire IBK infection could also be genetically prone to slower growth. The positive genetic correlations among weight traits and between tick and helminth egg counts suggest that they are controlled by common genes (with pleiotropic effects). 
Genetic correlations between IBK incidence and tick and helminth egg counts were moderate and opposite between pre-weaning and post-weaning datasets, suggesting that the environmental and/or maternal effects differ between these two growth phases. This preliminary study provides estimated genetic parameters for IBK incidence, which could be used to design selection and breeding programs for tropical adaptation in beef cattle.
3D Printing of CT Dataset: Validation of an Open Source and Consumer-Available Workflow.
Bortolotto, Chandra; Eshja, Esmeralda; Peroni, Caterina; Orlandi, Matteo A; Bizzotto, Nicola; Poggi, Paolo
2016-02-01
The broad availability of cheap three-dimensional (3D) printing equipment has raised the need for a thorough analysis of its effects on clinical accuracy. Our aim is to determine whether the accuracy of the 3D printing process is affected by the use of a low-budget workflow based on open-source software and consumer-grade, commercially available 3D printers. A group of test objects was scanned with a 64-slice computed tomography (CT) scanner in order to build their 3D copies. CT datasets were processed using a software chain based on three free and open-source software packages. Objects were printed out with a commercially available 3D printer. Both the 3D copies and the test objects were measured using a professional digital caliper. Overall, the mean absolute difference between test objects and 3D copies is 0.23 mm and the mean relative difference amounts to 0.55%. Our results demonstrate that the accuracy of the 3D printing process remains high despite the use of a low-budget workflow.
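The two accuracy figures reported can be reproduced from paired caliper measurements as below; the measurement pairs are illustrative, not the study's data.

```python
def print_accuracy(originals_mm, copies_mm):
    """Mean absolute difference (mm) and mean relative difference (%)
    between caliper measurements of test objects and their 3D-printed
    copies, paired element-wise."""
    abs_diffs = [abs(o - c) for o, c in zip(originals_mm, copies_mm)]
    rel_diffs = [d / o for d, o in zip(abs_diffs, originals_mm)]
    n = len(abs_diffs)
    return sum(abs_diffs) / n, 100 * sum(rel_diffs) / n
```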
Interactive visualization and analysis of multimodal datasets for surgical applications.
Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James
2012-12-01
Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.
The 3D Reference Earth Model (REM-3D): Update and Outlook
NASA Astrophysics Data System (ADS)
Lekic, V.; Moulik, P.; Romanowicz, B. A.; Dziewonski, A. M.
2016-12-01
Elastic properties of the Earth's interior (e.g. density, rigidity, compressibility, anisotropy) vary spatially due to changes in temperature, pressure, composition, and flow. In the 20th century, seismologists constructed reference models of how these quantities vary with depth, notably the PREM model of Dziewonski and Anderson (1981). These 1D reference earth models have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, more sophisticated efforts by seismologists have yielded several generations of models of how properties vary not only with depth, but also laterally. Yet, though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and datasets upon which they are based have prevented the creation of 3D community reference models. We propose to overcome these challenges by compiling, reconciling, and distributing a long-period (>15 s) reference seismic dataset, from which we will construct a 3D seismic reference model (REM-3D) for the Earth's mantle, which will come in two flavors: a smoothly parameterized long-wavelength model and a set of regional profiles. Here, we summarize progress made in the construction of the reference long-period dataset, and present preliminary versions of REM-3D in order to illustrate the two flavors and their relative advantages and disadvantages. As a community reference model with fully quantified uncertainties and tradeoffs, REM-3D will facilitate Earth imaging studies, earthquake characterization, and inferences on temperature and composition in the deep interior, and will be of improved utility to emerging scientific endeavors, such as neutrino geoscience. 
In this presentation, we outline plans for setting up advisory community working groups and a community workshop that would assess progress, evaluate model and dataset performance, identify avenues for improvement, and recommend strategies for maximizing model adoption in, and utility for, the deep Earth community.
Junge relationships in measurement data for cyclic siloxanes in air.
MacLeod, Matthew; Kierkegaard, Amelie; Genualdi, Susie; Harner, Tom; Scheringer, Martin
2013-10-01
In 1974, Junge postulated a relationship between the variability of concentrations of gases in air at remote locations and their atmospheric residence time, and this Junge relationship has subsequently been observed empirically for a range of trace gases. Here, we analyze two previously published datasets of concentrations of cyclic volatile methyl siloxanes (cVMS) in air and find Junge relationships in both. The first dataset is a time series of concentrations of decamethylcyclopentasiloxane (D5) measured between January and June 2009 at a rural site in southern Sweden that shows a Junge relationship in the temporal variability of the measurements. The second dataset consists of measurements of hexamethylcyclotrisiloxane (D3), octamethylcyclotetrasiloxane (D4) and D5 made simultaneously at 12 sites in the Global Atmospheric Passive Sampling (GAPS) network, and shows a Junge relationship in the spatial variability of the three cVMS congeners. We use the Junge relationship for the GAPS dataset to estimate atmospheric lifetimes of dodecamethylcyclohexasiloxane (D6), 8:2-fluorotelomer alcohol and trichlorinated biphenyls that are within a factor of 3 of estimates based on degradation rate constants for reaction with hydroxyl radical determined in laboratory studies. Copyright © 2012 Elsevier Ltd. All rights reserved.
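The Junge relationship lends itself to a back-of-the-envelope lifetime estimate. A minimal sketch, assuming the classic form RSD ≈ b/τ with Junge's coefficient b ≈ 0.14 yr (an assumption; the paper fits the relationship to its own data, and the concentration series below are invented for illustration):

```python
import statistics

def junge_lifetime_years(concentrations, b=0.14):
    # Junge relationship: relative standard deviation RSD ~ b / tau,
    # so tau ~ b / RSD. b = 0.14 yr is Junge's empirical coefficient
    # (an assumption; the appropriate value depends on the dataset).
    rsd = statistics.stdev(concentrations) / statistics.mean(concentrations)
    return b / rsd

# Invented air-concentration series (ng m^-3): low variability implies
# a longer atmospheric residence time.
persistent = [5.0, 5.2, 4.9, 5.1, 5.0, 4.8]
reactive = [5.0, 9.0, 1.0, 7.5, 2.0, 6.0]
assert junge_lifetime_years(persistent) > junge_lifetime_years(reactive)
```

The factor-of-3 agreement reported above suggests the coefficient, not the inverse-variability form, is the main source of uncertainty.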
Antibody-protein interactions: benchmark datasets and prediction tools evaluation
Ponomarenko, Julia V; Bourne, Philip E
2007-01-01
Background: The ability to predict antibody binding sites (also known as antigenic determinants or B-cell epitopes) for a given protein is a precursor to new vaccine design and diagnostics. Among the various methods of B-cell epitope identification, X-ray crystallography is one of the most reliable. Using these experimental data, computational methods have been developed for B-cell epitope prediction. As the number of structures of antibody-protein complexes grows, further interest in prediction methods using 3D structure is anticipated. This work aims to establish a benchmark for 3D structure-based epitope prediction methods. Results: Two B-cell epitope benchmark datasets inferred from the 3D structures of antibody-protein complexes were defined. The first is a dataset of 62 representative 3D structures of protein antigens with inferred structural epitopes. The second is a dataset of 82 structures of antibody-protein complexes containing different structural epitopes. Using these datasets, eight web servers developed for antibody and protein binding site prediction were evaluated. No method exceeded 40% precision and 46% recall. The area under the receiver operating characteristic curve was about 0.6 for the ConSurf, DiscoTope, and PPI-PRED methods, and above 0.65 but not exceeding 0.70 for protein-protein docking methods when the best of the top ten models for the bound docking were considered; the remaining methods performed close to random. The benchmark datasets are included as a supplement to this paper. Conclusion: It may be possible to improve epitope prediction methods through training on datasets which include only immune epitopes and through utilizing more features characterizing epitopes, for example, the evolutionary conservation score. Notwithstanding, the overall poor performance may reflect the generality of antigenicity and hence the inability to decipher B-cell epitopes as an intrinsic feature of the protein. It remains an open question whether discriminatory features can ultimately be found. PMID:17910770
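The figures of merit quoted above (precision, recall, area under the ROC curve) can be computed from first principles. A sketch with invented per-residue labels and predictor scores; the rank-based AUC used here is mathematically equivalent to the area under the ROC curve:

```python
def precision_recall(y_true, y_pred):
    # Counts over binary labels: 1 = epitope residue, 0 = non-epitope.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def roc_auc(y_true, scores):
    # Rank statistic: probability a random positive outscores a random
    # negative (ties count 1/2); equals the area under the ROC curve.
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented per-residue labels and predictor scores.
y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.4, 0.35, 0.1, 0.8, 0.5]
auc = roc_auc(y, s)
```

An AUC near 0.5 is the "close to random" behaviour the evaluation reports for several servers.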
3D printing from MRI Data: Harnessing strengths and minimizing weaknesses.
Ripley, Beth; Levin, Dmitry; Kelil, Tatiana; Hermsen, Joshua L; Kim, Sooah; Maki, Jeffrey H; Wilson, Gregory J
2017-03-01
3D printing facilitates the creation of accurate physical models of patient-specific anatomy from medical imaging datasets. While the majority of models to date are created from computed tomography (CT) data, there is increasing interest in creating models from other datasets, such as ultrasound and magnetic resonance imaging (MRI). MRI, in particular, holds great potential for 3D printing, given its excellent tissue characterization and lack of ionizing radiation. There are, however, challenges to 3D printing from MRI data as well. Here we review the basics of 3D printing, explore the current strengths and weaknesses of printing from MRI data as they pertain to model accuracy, and discuss considerations in the design of MRI sequences for 3D printing. Finally, we explore the future of 3D printing and MRI, including creative applications and new materials. Level of Evidence: 5. J. Magn. Reson. Imaging 2017;45:635-645. © 2016 International Society for Magnetic Resonance in Medicine.
COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY
Villalon, Julio; Joshi, Anand A.; Toga, Arthur W.; Thompson, Paul M.
2015-01-01
Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic “Demons” algorithm. We performed an objective morphometric comparison, by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future. PMID:26925198
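One common way to quantify an "effect size" for a morphometric measure is Cohen's d with a pooled standard deviation; a sketch on invented group data (the abstract does not specify which effect-size statistic the study used, so this is illustrative):

```python
import math

def cohens_d(group1, group2):
    # Pooled-standard-deviation effect size between two groups of a
    # scalar measure (e.g. a regional volume).
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Invented regional-volume samples for two groups.
d = cohens_d([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

Comparing such per-region effect sizes across registration methods is one way to make the cortical-vs-subcortical contrast above concrete.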
Approximating the Generalized Voronoi Diagram of Closely Spaced Objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, John; Daniel, Eric; Pascucci, Valerio
2015-06-22
We present an algorithm to compute an approximation of the generalized Voronoi diagram (GVD) on arbitrary collections of 2D or 3D geometric objects. In particular, we focus on datasets with closely spaced objects; GVD approximation is expensive and sometimes intractable on these datasets using previous algorithms. With our approach, the GVD can be computed using commodity hardware even on datasets with many, extremely tightly packed objects. Our approach is to subdivide the space with an octree that is represented with an adjacency structure. We then use a novel adaptive distance transform to compute the distance function on octree vertices. The computed distance field is sampled more densely in areas of close object spacing, enabling robust and parallelizable GVD surface generation. We demonstrate our method on a variety of data and show example applications of the GVD in 2D and 3D.
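A brute-force grid version conveys the idea behind GVD approximation: label each cell with its nearest object, and read the diagram off label changes. The paper's octree and adaptive distance transform make this tractable on large, tightly packed datasets; this sketch is quadratic and purely illustrative:

```python
import itertools, math

def gvd_grid(shape, objects):
    # Brute-force nearest-object labelling on a regular 2D grid; cells
    # with a 4-neighbour of a different label approximate the GVD.
    # (This is O(cells * points); the octree version is far cheaper.)
    labels = {}
    for cell in itertools.product(range(shape[0]), range(shape[1])):
        labels[cell] = min(
            (min(math.dist(cell, p) for p in pts), k)
            for k, pts in enumerate(objects))[1]
    def neighbors(c):
        return ((c[0]+1, c[1]), (c[0]-1, c[1]), (c[0], c[1]+1), (c[0], c[1]-1))
    boundary = {c for c in labels
                if any(n in labels and labels[n] != labels[c]
                       for n in neighbors(c))}
    return labels, boundary

# Two point "objects" on a 7x7 grid: the GVD is the midline between them.
labels, gvd = gvd_grid((7, 7), [[(1, 3)], [(5, 3)]])
assert gvd == {(x, y) for x in (3, 4) for y in range(7)}
```

Closely spaced objects force a finer grid exactly where the labels change, which is the adaptivity the octree provides.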
Online C-arm calibration using a marked guide wire for 3D reconstruction of pulmonary arteries
NASA Astrophysics Data System (ADS)
Vachon, Étienne; Miró, Joaquim; Duong, Luc
2017-03-01
3D reconstruction of vessels from 2D X-ray angiography is highly relevant to improving the visualization and assessment of vascular structures such as the pulmonary arteries by interventional cardiologists. However, to ensure a robust and accurate reconstruction, C-arm gantry parameters must be properly calibrated to provide clinically acceptable results. Calibration procedures often rely on calibration objects and complex protocols that are not adapted to an interventional context. In this study, a novel C-arm gantry calibration algorithm is presented that uses interventional instrumentation such as catheters and guide wires. This ensures the availability of a minimum set of correspondences and implies minimal changes to the clinical workflow. The method was evaluated on simulated data and on retrospective patient datasets. Experimental results on simulated datasets demonstrate a calibration that allows a 3D reconstruction of the guide wire up to a geometric transformation. Experiments with patient datasets show a significant decrease of the 2D RMS retro-projection error, to 0.17 mm. Consequently, such a procedure might help identify any calibration drift during the intervention.
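The reported 2D RMS retro-projection error can be illustrated with a toy pinhole model; the projection matrix and point sets below are invented, not the paper's calibration model:

```python
import numpy as np

def rms_reprojection_error(P, points_3d, points_2d):
    # Project 3D points through a 3x4 projection matrix P and compare
    # with the detected 2D points; returns the 2D RMS error.
    X = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous
    proj = (P @ X.T).T
    proj = proj[:, :2] / proj[:, 2:3]                          # dehomogenize
    return float(np.sqrt(np.mean(np.sum((proj - points_2d) ** 2, axis=1))))

# Toy pinhole with unit intrinsics: a perfect calibration gives zero error.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
pts3 = np.array([[1.0, 2.0, 4.0], [0.5, -1.0, 2.0]])
pts2 = pts3[:, :2] / pts3[:, 2:3]
assert rms_reprojection_error(P, pts3, pts2) < 1e-12
```

Calibration adjusts the gantry parameters inside P until this error over the guide-wire correspondences is minimized.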
The effect of leverage and/or influential on structure-activity relationships.
Bolboacă, Sorana D; Jäntschi, Lorentz
2013-05-01
In the spirit of reporting valid and reliable Quantitative Structure-Activity Relationship (QSAR) models, the aim of our research was to assess how the leverage (analyzed with the Hat matrix, h(i)) and the influence (analyzed with Cook's distance, D(i)) of compounds in QSAR models may reflect the models' reliability and characteristics. The datasets included in this research were collected from previously published papers. Seven datasets that met the imposed inclusion criteria were analyzed. Three models were obtained for each dataset (full model, h(i)-model and D(i)-model) and several statistical validation criteria were applied to the models. In 5 out of 7 sets the correlation coefficient increased when compounds with either h(i) or D(i) above the threshold were removed. The number of withdrawn compounds varied from 2 to 4 for h(i)-models and from 1 to 13 for D(i)-models. Validation statistics showed that D(i)-models exhibit systematically better agreement than both full models and h(i)-models. Removal of influential compounds from the training set significantly improves the model and is recommended when developing quantitative structure-activity relationship models. The Cook's distance approach should be combined with hat matrix analysis in order to identify the compounds that are candidates for removal.
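Both diagnostics have closed forms for ordinary least squares. A sketch on invented descriptor/activity data; the cut-off thresholds in the comment are common conventions, not necessarily those the paper applied:

```python
import numpy as np

def leverage_and_cooks(X, y):
    # Hat-matrix leverages h_i and Cook's distances D_i for OLS.
    # X should include an intercept column.
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    p = X.shape[1]
    mse = resid @ resid / (len(y) - p)
    D = resid**2 / (p * mse) * h / (1 - h) ** 2
    return h, D

# Invented descriptor/activity data; the last compound is extreme in x.
x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([1.1, 1.9, 3.2, 3.9, 10.5])
h, D = leverage_and_cooks(X, y)
# Common cut-offs (conventions vary): h_i > 2p/n or 3p/n; D_i > 4/n.
```

Compounds exceeding the chosen thresholds are the removal candidates the study evaluates.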
High performance computing environment for multidimensional image analysis
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-01-01
Background: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results: We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256-megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion: Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large-scale experiments with massive datasets. PMID:17634099
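The segment decomposition with nearest-neighbour-only communication can be mimicked on a single CPU. This sketch splits the volume along one axis with a one-voxel halo, filters each slab independently (as one node would), and checks that stitching reproduces the whole-volume filter; the Blue Gene implementation partitions along all three torus axes:

```python
import numpy as np

def median3d(vol):
    # Naive 3x3x3 median filter; boundary voxels are left unchanged.
    out = vol.copy()
    for i in range(1, vol.shape[0] - 1):
        for j in range(1, vol.shape[1] - 1):
            for k in range(1, vol.shape[2] - 1):
                out[i, j, k] = np.median(vol[i-1:i+2, j-1:j+2, k-1:k+2])
    return out

def median3d_decomposed(vol, nseg):
    # Split along z into nseg slabs with a one-voxel halo, filter each
    # slab independently, then stitch the slab interiors together.
    edges = np.linspace(0, vol.shape[0], nseg + 1, dtype=int)
    out = vol.copy()
    for lo, hi in zip(edges[:-1], edges[1:]):
        a, b = max(lo - 1, 0), min(hi + 1, vol.shape[0])   # halo bounds
        filtered = median3d(vol[a:b])
        g0, g1 = max(lo, 1), min(hi, vol.shape[0] - 1)     # interior rows
        out[g0:g1] = filtered[g0 - a:g1 - a]
    return out

# Stitched slab results match the whole-volume filter exactly.
vol = np.random.default_rng(0).random((6, 5, 5))
assert np.allclose(median3d(vol), median3d_decomposed(vol, 2))
```

Because a 3×3×3 kernel only needs a one-voxel halo, each node exchanges data with its nearest neighbours only, which is what makes the torus mapping efficient.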
Razifar, Pasha; Lubberink, Mark; Schneider, Harald; Långström, Bengt; Bengtsson, Ewert; Bergström, Mats
2005-05-13
BACKGROUND: Positron emission tomography (PET) is a powerful imaging technique with the potential of obtaining functional or biochemical information by measuring distribution and kinetics of radiolabelled molecules in a biological system, both in vitro and in vivo. PET images can be used directly or after kinetic modelling to extract quantitative values of a desired physiological, biochemical or pharmacological entity. Because such images are generally noisy, it is essential to understand how noise affects the derived quantitative values. A pre-requisite for this understanding is that the properties of noise such as variance (magnitude) and texture (correlation) are known. METHODS: In this paper we explored the pattern of noise correlation in experimentally generated PET images, with emphasis on the angular dependence of correlation, using the autocorrelation function (ACF). Experimental PET data were acquired in 2D and 3D acquisition mode and reconstructed by analytical filtered back projection (FBP) and iterative ordered subsets expectation maximisation (OSEM) methods. The 3D data was rebinned to a 2D dataset using FOurier REbinning (FORE) followed by 2D reconstruction using either FBP or OSEM. In synthetic images we compared the ACF results with those from covariance matrix. The results were illustrated as 1D profiles and also visualized as 2D ACF images. RESULTS: We found that the autocorrelation images from PET data obtained after FBP were not fully rotationally symmetric or isotropic if the object deviated from a uniform cylindrical radioactivity distribution. In contrast, similar autocorrelation images obtained after OSEM reconstruction were isotropic even when the phantom was not circular. Simulations indicated that the noise autocorrelation is non-isotropic in images created by FBP when the level of noise in projections is angularly variable. 
Comparison between 1D cross profiles of autocorrelation images obtained by FBP reconstruction and covariance matrices produced almost identical results in a simulation study. CONCLUSION: With an asymmetric radioactivity distribution in PET, reconstruction using FBP, in contrast to OSEM, generates images in which the noise correlation is non-isotropic, because the noise magnitude is angularly dependent. In this respect, iterative reconstruction is superior, since it creates isotropic noise correlations in the images.
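The autocorrelation function of a noise image is conveniently computed via the Wiener-Khinchin theorem. A sketch (this is the circular/periodic ACF, which differs from a windowed estimate near the image edges, and the white-noise input is illustrative):

```python
import numpy as np

def acf2d(img):
    # Wiener-Khinchin: ACF = IFFT(|FFT(img - mean)|^2). Normalized to
    # 1 at zero lag and shifted so zero lag sits at the array centre.
    x = img - img.mean()
    power = np.abs(np.fft.fft2(x)) ** 2
    acf = np.real(np.fft.ifft2(power))
    acf /= acf.flat[0]
    return np.fft.fftshift(acf)

# White noise has near-zero correlation away from zero lag; FBP-style
# reconstruction noise would instead show structured, possibly
# anisotropic lobes in this image.
a = acf2d(np.random.default_rng(1).standard_normal((64, 64)))
```

Inspecting such 2D ACF images along different angles is how the angular (an)isotropy discussed above becomes visible.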
DOE Office of Scientific and Technical Information (OSTI.GOV)
Getirana, Augusto; Dutra, Emanuel; Guimberteau, Matthieu
Despite recent advances in modeling and remote sensing of land surfaces, estimates of the global water budget are still fairly uncertain. The objective of this study is to evaluate the water budget of the Amazon basin based on several state-of-the-art land surface model (LSM) outputs. Water budget variables [total water storage (TWS), evapotranspiration (ET), surface runoff (R) and baseflow (B)] are evaluated at the basin scale using both remote sensing and in situ data. Fourteen LSMs were run using meteorological forcings at a 3-hourly time step and 1-degree spatial resolution. Three experiments are performed using precipitation which has been rescaled to match the monthly global GPCP and GPCC datasets and the daily HYBAM dataset for the Amazon basin. R and B are used to force the Hydrological Modeling and Analysis Platform (HyMAP) river routing scheme and simulated discharges are compared against observations at 165 gauges. Simulated ET and TWS are compared against FLUXNET and MOD16A2 evapotranspiration, and GRACE TWS estimates in different catchments. At the basin scale, simulated ET ranges from 2.39 mm.d-1 to 3.26 mm.d-1, and a low spatial correlation between ET and P indicates that evapotranspiration does not depend on water availability over most of the basin. Results also show that the other simulated water budget variables vary significantly as a function of both the LSM and the precipitation used, but simulated TWS estimates generally agree at the basin scale. The best water budget simulations resulted from experiments using the HYBAM dataset, mostly explained by its denser rainfall gauge network and the daily rescaling.
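The budget being evaluated closes as P = ET + R + B + dTWS/dt. A trivial sketch with invented, merely illustrative basin-scale numbers:

```python
def water_budget_residual(P, ET, R, B, dTWS_dt):
    # All terms in mm/day; a residual near zero means the simulated
    # components balance the forcing precipitation.
    return P - (ET + R + B + dTWS_dt)

# Invented, illustrative numbers (mm/day), not values from the study.
res = water_budget_residual(P=6.0, ET=3.0, R=1.8, B=1.0, dTWS_dt=0.2)
```

Comparing each term of this identity against GRACE, FLUXNET/MOD16A2 and gauge data is exactly the evaluation strategy the study follows.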
Traversing and labeling interconnected vascular tree structures from 3D medical images
NASA Astrophysics Data System (ADS)
O'Dell, Walter G.; Govindarajan, Sindhuja Tirumalai; Salgia, Ankit; Hegde, Satyanarayan; Prabhakaran, Sreekala; Finol, Ender A.; White, R. James
2014-03-01
Purpose: Detailed characterization of pulmonary vascular anatomy has important applications for the diagnosis and management of a variety of vascular diseases. Prior efforts have emphasized using vessel segmentation to gather information on the number of branches, number of bifurcations, and branch length and volume, but accurate traversal of the vessel tree to identify and repair erroneous interconnections between adjacent branches and neighboring tree structures has not been carefully considered. In this study, we endeavor to develop and implement a successful approach to distinguishing and characterizing individual vascular trees from among a complex intermingling of trees. Methods: We developed strategies and parameters in which the algorithm identifies and repairs false branch inter-tree and intra-tree connections to traverse complicated vessel trees. A series of two-dimensional (2D) virtual datasets with a variety of interconnections were constructed for development, testing, and validation. To demonstrate the approach, a series of real 3D computed tomography (CT) lung datasets were obtained, including that of an anthropomorphic chest phantom; an adult human chest CT; a pediatric patient chest CT; and a micro-CT of an excised rat lung preparation. Results: Our method was correct in all 2D virtual test datasets. For each real 3D CT dataset, the resulting simulated vessel tree structures faithfully depicted the vessel tree structures that were originally extracted from the corresponding lung CT scans. Conclusion: We have developed a comprehensive strategy for traversing and labeling interconnected vascular trees and successfully implemented its application to pulmonary vessels observed using 3D CT images of the chest.
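Detecting erroneous inter-tree connections can be sketched as a labelled breadth-first traversal: a node reachable from two roots marks a junction to repair. The graph below is invented; the paper works on voxel skeletons extracted from CT:

```python
from collections import deque

def label_trees(adjacency, roots):
    # Breadth-first traversal from each root; a node first reached from
    # one tree and touched again from another marks a spurious
    # inter-tree connection that needs repair.
    label, conflicts = {}, set()
    for tree_id, root in enumerate(roots):
        queue = deque([root])
        while queue:
            node = queue.popleft()
            if node in label:
                if label[node] != tree_id:
                    conflicts.add(node)
                continue
            label[node] = tree_id
            queue.extend(adjacency.get(node, ()))
    return label, conflicts

# Two invented vessel trees that erroneously touch at node 'x'.
adj = {'a': ['b', 'c'], 'c': ['x'], 'p': ['q'], 'q': ['x']}
label, conflicts = label_trees(adj, ['a', 'p'])
assert conflicts == {'x'}
```

In the real pipeline the repair step then decides which tree the conflicting branch anatomically belongs to.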
Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd
2013-01-17
Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, such as Alzheimer's disease. The number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible. Instead, they can be predicted by computational methods. The 3D profile is a physicochemical method that has generated the largest dataset, ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides, using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was applied to training machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were Alternating Decision Tree and Multilayer Perceptron. Both methods obtained an area under the ROC curve of 0.96, accuracy 91%, true positive rate ca. 78%, and true negative rate 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce error with respect to the original method, while increasing computational efficiency. 
Our new dataset proved representative enough to allow testing amyloidogenicity with simple statistical methods based only on six-letter sequences. Statistical machine learning methods such as Alternating Decision Tree and Multilayer Perceptron can replace the energy-based classifier, with the advantage of very significantly reduced computational time and simplicity of analysis. Additionally, a decision tree provides a set of easily interpretable rules.
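The reported accuracy, true positive rate and true negative rate follow directly from a confusion matrix. A sketch with illustrative counts (not the paper's test set):

```python
def classification_rates(tp, fn, tn, fp):
    # Accuracy, true positive rate (sensitivity) and true negative
    # rate (specificity) from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return accuracy, tpr, tnr

# Invented counts for a hexapeptide test set (not the paper's data).
acc, tpr, tnr = classification_rates(tp=78, fn=22, tn=95, fp=5)
```

The imbalance between TPR and TNR above reflects that amyloidogenic segments are the rarer class in such datasets.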
Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward
2016-09-01
Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor of ~R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch-based collaborative filtering technique tested with acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch-based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of the percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal-to-noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall, using a 3D TV constraint and constrained SB methods. This corresponds to halving the scan time compared with currently used GRAPPA methods. 
Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods. Copyright © 2016 Elsevier Inc. All rights reserved.
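The workhorse of Split Bregman TV reconstruction is the soft-thresholding (shrinkage) operator, which solves the d-subproblem in closed form; a minimal sketch of that single step (the full SB iteration alternates this with a data-fidelity solve):

```python
import numpy as np

def shrink(x, gamma):
    # Soft-thresholding: the closed-form minimizer of
    # gamma*|d| + 0.5*(d - x)^2, applied element-wise.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

x = np.array([-2.0, -0.3, 0.0, 0.4, 3.0])
assert np.allclose(shrink(x, 0.5), [-1.5, 0.0, 0.0, 0.0, 2.5])
```

Because shrinkage is element-wise and the remaining subproblem is quadratic, each SB iteration is cheap, which underlies the >20× speed advantage over gradient descent quoted above.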
NASA Astrophysics Data System (ADS)
Pollastro, Pasquale; Rampone, Salvatore
The aim of this work is to describe a cleaning procedure for GenBank data, producing material to train and assess the prediction accuracy of computational approaches for gene characterization. A procedure (GenBank2HS3D) has been defined, producing a dataset (HS3D - Homo Sapiens Splice Sites Dataset) of Homo sapiens splice regions extracted from GenBank (Rel. 123 at this time). It selects, from the complete GenBank Primate Division, entries of human nuclear DNA according to several assessed criteria; it then extracts exons and introns from these entries (currently 4523 + 3802). Donor and acceptor sites are then extracted as windows of 140 nucleotides around each splice site (3799 + 3799). After discarding windows not including canonical GT-AG junctions (65 + 74), windows with insufficient data (not enough material for a 140-nucleotide window) (686 + 589), windows including non-ACGT bases (29 + 30), and redundant windows (218 + 226), the remaining windows (2796 + 2880) are reported in the dataset. Finally, windows of false splice sites are selected by searching for canonical GT-AG pairs at non-splicing positions (271 937 + 332 296). False sites within ±60 nucleotides of a true splice site are marked as proximal. HS3D, release 1.2 at this time, is available at the Web server of the University of Sannio: http://www.sci.unisannio.it/docenti/rampone/.
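The window-extraction and discard rules can be sketched for the donor side (acceptor/AG handling is analogous; the real pipeline additionally screens for redundancy and non-ACGT bases):

```python
def donor_window(seq, pos, width=140):
    # `pos` is the putative intron start. Mirrors the dataset's discard
    # rules: non-canonical junctions and windows with insufficient
    # flanking material return None.
    half = width // 2
    if seq[pos:pos + 2] != "GT":
        return None            # non-canonical junction
    if pos < half or pos + half > len(seq):
        return None            # not enough material for the window
    return seq[pos - half:pos + half]

# Invented toy sequence with a canonical GT at position 70.
seq = "A" * 70 + "GT" + "C" * 70
w = donor_window(seq, 70)
assert w is not None and len(w) == 140 and w[70:72] == "GT"
```

Applying the same GT test at non-splice positions is how the false-site windows are generated.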
NASA Astrophysics Data System (ADS)
Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.
2012-03-01
For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, so a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by 3 different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phases (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24-3.69°/στ = 0.045-0.048, σθ = 2.79°/στ = 0.031-0.038, σθ = 2.34°/στ = 0.023-0.026, and σθ = 1.89°/στ = 0.021-0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images, the advantage of NC is more pronounced.
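Normalized convolution interpolates irregular samples as conv(certainty·signal, g)/conv(certainty, g) with a Gaussian applicability function g. A 1D sketch with invented sample positions (the paper's setting is 2D, rotation angle × cardiac phase, with kernel widths σθ and στ):

```python
import numpy as np

def normalized_convolution(samples, positions, grid, sigma):
    # Interpolate irregular 1D samples onto `grid`:
    # conv(certainty * signal, g) / conv(certainty, g), with
    # certainty = 1 at sample positions and a Gaussian applicability g.
    diffs = grid[:, None] - positions[None, :]
    g = np.exp(-0.5 * (diffs / sigma) ** 2)
    return (g @ samples) / g.sum(axis=1)

# A constant signal is reproduced exactly, irrespective of sampling.
pos = np.array([0.1, 0.7, 1.3, 2.9, 3.5])
vals = np.full(5, 2.0)
out = normalized_convolution(vals, pos, np.linspace(0, 4, 9), sigma=0.5)
assert np.allclose(out, 2.0)
```

The denominator re-weights by local sample density, which is why NC degrades gracefully where the rotated scanplanes sample the (θ, τ) space sparsely.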
Yilmaz, E; Kayikcioglu, T; Kayipmaz, S
2017-07-01
In this article, we propose a decision support system for effective classification of dental periapical cyst and keratocystic odontogenic tumor (KCOT) lesions obtained via cone beam computed tomography (CBCT). CBCT has been used effectively in recent years for diagnosing dental pathologies and determining their boundaries and content. Unlike other imaging techniques, CBCT provides detailed and distinctive information about the pathologies by enabling a three-dimensional (3D) image of the region to be displayed. We employed 50 CBCT 3D image dataset files as the full dataset of our study. These datasets were identified by experts as periapical cyst and KCOT lesions according to clinical, radiographic and histopathologic features. Segmentation operations were performed on the CBCT images using viewer software that we developed. Using the tools of this software, we marked the lesional volume of interest and calculated and applied the order statistics and 3D gray-level co-occurrence matrix for each CBCT dataset. A feature vector of the lesional region, including 636 different feature items, was created from those statistics. Six classifiers were used for the classification experiments. The Support Vector Machine (SVM) classifier achieved the best classification performance, with 100% accuracy and 100% F-score (F1), in the experiments in which a ten-fold cross-validation method was used with a forward feature selection algorithm. SVM achieved the best classification performance, with 96.00% accuracy and 96.00% F1, in the experiments in which a split-sample validation method was used with a forward feature selection algorithm. SVM additionally achieved the best performance, with 94.00% accuracy and 93.88% F1, when a leave-one-out cross-validation (LOOCV) method was used with a forward feature selection algorithm. 
Based on the results, we determined that periapical cyst and KCOT lesions can be classified with high accuracy using the models that we built on the dataset selected for this study. The studies described in this article, along with the selected 3D dataset, the 3D statistics calculated from it, and the performance results of the different classifiers, constitute an important contribution to the field of computer-aided diagnosis of dental apical lesions. Copyright © 2017 Elsevier B.V. All rights reserved.
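A gray-level co-occurrence matrix simply counts gray-level pairs at a fixed voxel offset. A 2D sketch for brevity (the study computes 3D GLCMs, over multiple offsets, within the lesional volume of interest; the toy image is invented):

```python
import numpy as np

def glcm(img, offset=(0, 1), levels=4):
    # Gray-level co-occurrence matrix for one offset: m[i, j] counts
    # pixel pairs with value i at p and value j at p + offset.
    di, dj = offset
    m = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            i2, j2 = i + di, j + dj
            if 0 <= i2 < rows and 0 <= j2 < cols:
                m[img[i, j], img[i2, j2]] += 1
    return m

img = np.array([[0, 0, 1], [2, 3, 3]])
m = glcm(img)
assert m.sum() == 4 and m[0, 0] == 1 and m[0, 1] == 1 and m[3, 3] == 1
```

Texture features (contrast, homogeneity, entropy, and so on) are then derived from the normalized matrix and fed into the classifiers.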
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular the registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques, with special attention to 32-bit multiplies and the limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3, as a low-cost CBE-based platform, offers an efficient alternative to conventional hardware for solving computational problems in image processing and bioinformatics.
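A typical similarity measure in multimodal registration, and the kind of inner loop that profits from such parallelization, is mutual information over the joint gray-value histogram. A sketch (the abstract does not state the exact similarity metric used, so this is illustrative):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # MI from the joint gray-value histogram:
    # sum over occupied bins of p(i,j) * log(p(i,j) / (p(i) * p(j))).
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# An image is maximally informative about itself; shuffling one image
# destroys the spatial correspondence and the MI drops.
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = rng.permutation(a.ravel()).reshape(a.shape)
assert mutual_information(a, a) > mutual_information(a, b)
```

Histogram accumulation and the log-sum are data-parallel, which is what makes them amenable to the vectorisation and partitioning optimisations described above.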
Bizino, Maurice B; Tao, Qian; Amersfoort, Jacob; Siebelink, Hans-Marc J; van den Bogaard, Pieter J; van der Geest, Rob J; Lamb, Hildo J
2018-04-06
To compare breath-hold (BH) with navigated free-breathing (FB) 3D late gadolinium enhancement cardiac MRI (LGE-CMR). Fifty-one patients were retrospectively included (34 ischaemic cardiomyopathy, 14 non-ischaemic cardiomyopathy, three discarded). BH and FB 3D phase-sensitive inversion recovery sequences were performed at 3T. FB datasets were reformatted into normal resolution (FB-NR, 1.46×1.46×10 mm) and high resolution (FB-HR, isotropic 0.91-mm voxels). Scar mass, scar edge sharpness (SES), SNR and CNR were compared using the paired-samples t-test, Pearson correlation and Bland-Altman analysis. Scar mass was similar in BH and FB-NR (mean ± SD: 15.5±18.0 g vs. 15.5±16.9 g, p=0.997), with good correlation (r=0.953) and no bias (mean difference ± SD: 0.00±5.47 g). FB-NR significantly overestimated scar mass compared with FB-HR (15.5±16.9 g vs. 14.4±15.6 g; p=0.007). FB-NR and FB-HR correlated well (r=0.988), but Bland-Altman analysis demonstrated a systematic bias (1.15±2.84 g). SES was similar in BH and FB-NR (p=0.947), but significantly higher in FB-HR than FB-NR (p<0.01). SNR and CNR were lower in BH than FB-NR (p<0.01), and lower in FB-HR than FB-NR (p<0.01). Navigated free-breathing 3D LGE-CMR allows reliable scar mass quantification comparable to breath-hold. During free-breathing, spatial resolution can be increased, resulting in improved sharpness and reduced scar mass. • Navigated free-breathing 3D late gadolinium enhancement is reliable for myocardial scar quantification. • High-resolution 3D late gadolinium enhancement increases scar sharpness. • Ischaemic and non-ischaemic cardiomyopathy patients can be imaged using free-breathing LGE-CMR.
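Bland-Altman analysis, as used above, reduces to the mean difference (bias) and the 95% limits of agreement. A sketch with invented scar-mass pairs (not study data):

```python
import numpy as np

def bland_altman(a, b):
    # Bias (mean difference) and 95% limits of agreement
    # (bias +/- 1.96 * SD of the differences).
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented scar-mass pairs (g) from two protocols, not study data.
bh = [12.0, 30.5, 8.2, 44.1, 19.9]
fb = [11.5, 31.0, 8.0, 43.0, 20.5]
bias, (low, high) = bland_altman(bh, fb)
assert low < bias < high
```

A bias indistinguishable from zero with narrow limits, as reported for BH vs. FB-NR, indicates the two protocols are interchangeable for scar-mass quantification.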
Modeling 4D Pathological Changes by Leveraging Normative Models
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Saha, Avishek; Liu, Wei; Goh, S.Y. Matthew; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2016-01-01
With the increasing use of efficient multimodal 3D imaging, clinicians are able to access longitudinal imaging to stage pathological diseases, to monitor the efficacy of therapeutic interventions, or to assess and quantify rehabilitation efforts. Analysis of such four-dimensional (4D) image data presenting pathologies, including disappearing and newly appearing lesions, represents a significant challenge due to the presence of complex spatio-temporal changes. Image analysis methods for such 4D image data have to include not only a concept for joint segmentation of 3D datasets to account for inherent correlations of subject-specific repeated scans but also a mechanism to account for large deformations and the destruction and formation of lesions (e.g., edema, bleeding) due to underlying physiological processes associated with damage, intervention, and recovery. In this paper, we propose a novel joint segmentation-registration framework to tackle the inherent problem of image registration in the presence of objects not present in all images of the time series. Our methodology models 4D changes in pathological anatomy across time and also provides an explicit mapping of a healthy normative template to a subject’s image data with pathologies. Since atlas-moderated segmentation methods cannot explain the appearance and location of pathological structures that are not represented in the template atlas, the new framework provides different options for initialization via a supervised learning approach, iterative semisupervised active learning, and also transfer learning, which results in a fully automatic 4D segmentation method. We demonstrate the effectiveness of our novel approach with synthetic experiments and a 4D multimodal MRI dataset of severe traumatic brain injury (TBI), including validation via comparison to expert segmentations.
However, the proposed methodology is generic with regard to different clinical applications requiring quantitative analysis of 4D imaging representing spatio-temporal changes of pathologies. PMID:27818606
NASA Astrophysics Data System (ADS)
Christensen, Anders S.; Kromann, Jimmy C.; Jensen, Jan H.; Cui, Qiang
2017-10-01
To facilitate further development of approximate quantum mechanical methods for condensed phase applications, we present a new benchmark dataset of intermolecular interaction energies in the solution phase for a set of 15 dimers, each containing one charged monomer. The reference interaction energy in solution is computed via a thermodynamic cycle that integrates dimer binding energy in the gas phase at the coupled cluster level and solute-solvent interaction with density functional theory; the estimated uncertainty of such calculated interaction energy is ±1.5 kcal/mol. The dataset is used to benchmark the performance of a set of semi-empirical quantum mechanical (SQM) methods that include DFTB3-D3, DFTB3/CPE-D3, OM2-D3, PM6-D3, PM6-D3H+, and PM7 as well as the HF-3c method. We find that while all tested SQM methods tend to underestimate binding energies in the gas phase with a root-mean-squared error (RMSE) of 2-5 kcal/mol, they overestimate binding energies in the solution phase with an RMSE of 3-4 kcal/mol, with the exception of DFTB3/CPE-D3 and OM2-D3, for which the systematic deviation is less pronounced. In addition, we find that HF-3c systematically overestimates binding energies in both gas and solution phases. As most approximate QM methods are parametrized and evaluated using data measured or calculated in the gas phase, the dataset represents an important first step toward calibrating QM based methods for application in the condensed phase where polarization and exchange repulsion need to be treated in a balanced fashion.
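The thermodynamic cycle described above combines a gas-phase binding energy with solvation free energies of the dimer and its monomers; schematically (illustrative numbers in kcal/mol, not values from the dataset):

```python
def interaction_energy_solution(e_bind_gas, dg_solv_dimer,
                                dg_solv_mono1, dg_solv_mono2):
    """Solution-phase interaction energy via a thermodynamic cycle:

    E_sol = E_gas + dG_solv(dimer) - dG_solv(monomer 1) - dG_solv(monomer 2)
    """
    return e_bind_gas + dg_solv_dimer - dg_solv_mono1 - dg_solv_mono2
```

For example, a gas-phase binding energy of -20 kcal/mol with solvation free energies of -60, -5 and -50 kcal/mol yields a solution-phase interaction energy of -25 kcal/mol.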
Epidermal segmentation in high-definition optical coherence tomography.
Li, Annan; Cheng, Jun; Yow, Ai Ping; Wall, Carolin; Wong, Damon Wing Kee; Tey, Hong Liang; Liu, Jiang
2015-01-01
Epidermis segmentation is a crucial step in many dermatological applications. Recently, high-definition optical coherence tomography (HD-OCT) has been developed and applied to imaging subsurface skin tissues. In this paper, a novel epidermis segmentation method using HD-OCT is proposed in which the epidermis is segmented in three steps: weighted-least-squares-based pre-processing, graph-based skin surface detection, and local-integral-projection-based dermal-epidermal junction detection. Using a dataset of five 3D volumes, we found that this method correlates well with the conventional method of manually marking out the epidermis. This method can therefore serve to effectively and rapidly delineate the epidermis for study and clinical management of skin diseases.
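The skin-surface detection step can be illustrated with a crude per-A-scan gradient peak on a synthetic B-scan (a simplification of the paper's graph-based detection; the function name is ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_surface(bscan):
    """Crude OCT surface detection: depth of the strongest axial
    intensity increase in each A-scan (column) after smoothing."""
    g = gaussian_filter1d(bscan.astype(float), 2.0, axis=0)
    grad = np.diff(g, axis=0)          # axial intensity gradient
    return np.argmax(grad, axis=0)     # one depth index per A-scan
```

On a synthetic B-scan that is dark above the surface and bright below, the detected depth sits at the transition row in every column.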
NASA Astrophysics Data System (ADS)
Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.; Hamann, B.; Kellogg, L. H.; Kreylos, O.; Kronenberger, M.; Spero, H. J.; Streletz, G. J.; Weber, C.
2015-12-01
Geologic problems and datasets are often 3D or 4D in nature, yet projected onto a 2D surface such as a piece of paper or a projection screen. Reducing the dimensionality of data forces the reader to "fill in" that collapsed dimension in their minds, creating a cognitive challenge for the reader, especially new learners. Scientists and students can visualize and manipulate 3D datasets using the virtual reality software developed for the immersive, real-time interactive 3D environment at the KeckCAVES at UC Davis. The 3DVisualizer software (Billen et al., 2008) can also operate on a desktop machine to produce interactive 3D maps of earthquake epicenter locations and 3D bathymetric maps of the seafloor. With 3D projections of seafloor bathymetry and ocean circulation proxy datasets in a virtual reality environment, we can create visualizations of carbon isotope (δ13C) records for academic research and to aid in demonstrating thermohaline circulation in the classroom. Additionally, 3D visualization of seafloor bathymetry allows students to see features of the seafloor that most people cannot observe first-hand. To enhance lessons on mid-ocean ridges and ocean basin genesis, we have created movies of seafloor bathymetry for a large-enrollment undergraduate-level class, Introduction to Oceanography. In the past four quarters, students have enjoyed watching 3D movies, and in the fall quarter (2015), we will assess how well 3D movies enhance learning. The class will be split into two groups, one that learns about the Mid-Atlantic Ridge from diagrams and lecture, and the other that learns with a supplemental 3D visualization. Both groups will be asked "what does the seafloor look like?" before and after the Mid-Atlantic Ridge lesson. Then the whole class will watch the 3D movie and respond to an additional question, "did the 3D visualization enhance your understanding of the Mid-Atlantic Ridge?", with the opportunity to further elaborate on the effectiveness of the visualization.
NASA Astrophysics Data System (ADS)
Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.
2012-02-01
Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans and differences in acquisition setup. Furthermore, images acquired with the presence of an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split in two halves for development and testing. In addition, a second dataset of 29 MR exams from Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 +/- 0.33 mm and 3.10 +/- 0.43 mm for the first and second test datasets respectively. Moreover, the algorithm provided the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for a fully automated prostate segmentation.
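Normalized gradient fields make the similarity measure insensitive to intensity scaling, since only local gradient directions are compared; a 2D NumPy sketch of the idea (the published method operates in 3D with learned templates, and the function names here are ours):

```python
import numpy as np

def ngf(img, eps=1e-2):
    """Normalized gradient field: gradients scaled to (near-)unit length.

    The regularizer eps suppresses noise-only gradients in flat regions.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + eps**2)
    return gx / mag, gy / mag

def ngf_similarity(a, b, eps=1e-2):
    """Mean squared dot product of the two normalized gradient fields;
    1 where edges align perfectly, ~0 where they are unrelated."""
    ax, ay = ngf(a, eps)
    bx, by = ngf(b, eps)
    return float(np.mean((ax * bx + ay * by) ** 2))
```

Because only gradient directions enter, multiplying one image by a constant leaves the score essentially unchanged, which is the desired intensity invariance.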
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which comprises one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
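Once speckle correspondences between the board plane and the projector image are established, the board-to-projector mapping can be estimated; a minimal direct-linear-transform (DLT) homography sketch (a plane-to-plane simplification of the full intrinsic/extrinsic calibration the paper performs):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    (both planar, >= 4 correspondences) via the DLT and SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]             # fix the arbitrary scale
```

With noise-free correspondences the estimate reproduces the true mapping to machine precision; in practice many noisy speckle matches are averaged by the least-squares nature of the SVD solution.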
correlcalc: Two-point correlation function from redshift surveys
NASA Astrophysics Data System (ADS)
Rohin, Yeluripati
2017-11-01
correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys. It can be used for any assumed geometry or Cosmology model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelised code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA) and Declination (DEC) data of galaxies and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realisations of the (3D) anisotropic 2pCF. Optionally, it makes healpix maps of the survey providing visualization.
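The pair-counting core of a 2pCF code can be sketched with a spatial tree and the Landy-Szalay estimator; a toy Euclidean version using SciPy's cKDTree (standing in for the package's BallTree machinery, and ignoring survey geometry):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.random((500, 3))            # stand-in for galaxy positions
rand = rng.random((2000, 3))           # random catalog over the same volume
edges = np.linspace(0.05, 0.4, 8)      # separation bin edges

dt, rt = cKDTree(data), cKDTree(rand)
DD = np.diff(dt.count_neighbors(dt, edges))   # data-data pairs per bin
RR = np.diff(rt.count_neighbors(rt, edges))   # random-random pairs per bin
DR = np.diff(dt.count_neighbors(rt, edges))   # data-random pairs per bin

nd, nr = len(data), len(rand)
dd, dr, rr = DD / (nd * nd), DR / (nd * nr), RR / (nr * nr)
xi = (dd - 2 * dr + rr) / rr           # Landy-Szalay estimator
```

For a uniform (unclustered) catalog the estimator scatters around zero, which is the expected null result.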
Extracellular space preservation aids the connectomic analysis of neural circuits.
Pallotto, Marta; Watkins, Paul V; Fubara, Boma; Singer, Joshua H; Briggman, Kevin L
2015-12-09
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits.
Container weld identification using portable laser scanners
NASA Astrophysics Data System (ADS)
Taddei, Pierluigi; Boström, Gunnar; Puig, David; Kravtchenko, Victor; Sequeira, Vítor
2015-03-01
Identification and integrity verification of sealed containers for security applications can be obtained by employing noninvasive portable optical systems. We present a portable laser range imaging system capable of identifying welds, a byproduct of a container's physical sealing, with micrometer accuracy. It is based on the assumption that each weld has a unique three-dimensional (3-D) structure which cannot be copied or forged. We process the 3-D surface to generate a normalized depth map which is invariant to mechanical alignment errors and that is used to build compact signatures representing the weld. A weld is identified by performing cross correlations of its signature against a set of known signatures. The system has been tested on realistic datasets, containing hundreds of welds, yielding no false positives or false negatives and thus showing the robustness of the system and the validity of the chosen signature.
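Signature matching of this kind reduces to a zero-normalized cross-correlation score between the stored and the freshly acquired signature; a minimal sketch (illustrative, not the deployed matcher):

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two 1-D signatures;
    1.0 for identical shapes, near 0 for unrelated ones."""
    a = (np.asarray(a, float) - np.mean(a)) / np.std(a)
    b = (np.asarray(b, float) - np.mean(b)) / np.std(b)
    return float(np.mean(a * b))
```

A decision threshold between the self-match score (near 1) and the unrelated-weld score (near 0) then yields the reported zero false positive/negative behaviour on well-separated populations.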
The 3D widgets for exploratory scientific visualization
NASA Technical Reports Server (NTRS)
Herndon, Kenneth P.; Meyer, Tom
1995-01-01
Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.
Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy
NASA Astrophysics Data System (ADS)
Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc
2014-12-01
Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation with the ground truth target position without introducing artifacts even in the case of very few projections. In particular the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range.
The proposed volume-to-rawdata registration increases the robustness regarding sparse rawdata and provides more stable results than volume-to-volume approaches. By applying the proposed registration approach to low dose tomographic fluoroscopy it is possible to improve the temporal resolution and thus to increase the robustness of low dose tomographic fluoroscopy.
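The fluid-like regularization step described above amounts to smoothing each component of the displacement vector field with a Gaussian kernel between data-fidelity updates; a minimal sketch of that single step (the full alternating optimisation is beyond a few lines):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularize(dvf, sigma=2.0):
    """Fluid-type regularization: Gaussian-smooth each component of a
    displacement vector field of shape (ndim, *grid_shape)."""
    return np.stack([gaussian_filter(component, sigma) for component in dvf])
```

Smoothing suppresses high-frequency (physically implausible) deformation while preserving the large-scale motion that the rawdata-fidelity term drives.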
X-ray computed tomography library of shark anatomy and lower jaw surface models.
Kamminga, Pepijn; De Bruin, Paul W; Geleijns, Jacob; Brazeau, Martin D
2017-04-11
The cranial diversity of sharks reflects disparate biomechanical adaptations to feeding. In order to investigate and better understand the ecomorphology of extant shark feeding systems, we created an X-ray computed tomography (CT) library of shark cranial anatomy with three-dimensional (3D) lower jaw reconstructions. This is used to examine and quantify lower jaw disparity in extant shark species in a separate study. The library is divided into a dataset comprising medical CT scans of 122 sharks (Selachimorpha, Chondrichthyes) representing 73 extant species, including digitized morphology of entire shark specimens. This CT dataset and additional data provided by other researchers were used to reconstruct a second dataset containing 3D models of the left lower jaw for 153 individuals representing 94 extant shark species. These datasets form an extensive anatomical record of shark skeletal anatomy, necessary for comparative morphological, biomechanical, ecological and phylogenetic studies.
Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron
2014-04-01
We propose a novel global optimization-based approach to segmentation of 3-D prostate transrectal ultrasound (TRUS) and T2 weighted magnetic resonance (MR) images, enforcing inherent axial symmetry of prostate shapes to simultaneously adjust a series of 2-D slice-wise segmentations in a "global" 3-D sense. We show that the introduced challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we propose a novel coherent continuous max-flow model (CCMFM), which derives a new and efficient duality-based algorithm, leading to a GPU-based implementation to achieve high computational speeds. Experiments with 25 3-D TRUS images and 30 3-D T2w MR images from our dataset, and 50 3-D T2w MR images from a public dataset, demonstrate that the proposed approach can segment a 3-D prostate TRUS/MR image within 5-6 s including 4-5 s for initialization, yielding a mean Dice similarity coefficient of 93.2%±2.0% for 3-D TRUS images and 88.5%±3.5% for 3-D MR images. The proposed method also yields relatively low intra- and inter-observer variability introduced by user manual initialization, suggesting a high reproducibility, independent of observers.
A general purpose feature extractor for light detection and ranging data.
Li, Yangming; Olson, Edwin B
2010-01-01
Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.
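On a rasterised occupancy grid, the Kanade-Tomasi score is the smaller eigenvalue of the smoothed structure tensor, large only where gradients vary in two directions (a corner); a single-scale NumPy/SciPy sketch (our simplification of the paper's multi-scale detector):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kanade_tomasi_response(grid, sigma=1.5):
    """Kanade-Tomasi (min-eigenvalue) corner response on a 2D grid."""
    gy, gx = np.gradient(gaussian_filter(grid, 1.0))
    # structure tensor components, smoothed over a local window
    Jxx = gaussian_filter(gx * gx, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    tr = Jxx + Jyy
    det = Jxx * Jyy - Jxy ** 2
    # smaller eigenvalue of the 2x2 structure tensor at every cell
    return tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
```

A straight wall produces gradients in one direction only (one eigenvalue near zero), so only genuine corners survive thresholding of this response.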
Adjoint tomography of crust and upper-mantle structure beneath Continental China
NASA Astrophysics Data System (ADS)
Chen, M.; Niu, F.; Liu, Q.; Tromp, J.
2013-12-01
Four years of regional earthquake recordings from 1,869 seismic stations are used for high-resolution and high-fidelity seismic imaging of the crust and upper-mantle structure beneath Continental China. This unprecedented high-density dataset comprises seismograms recorded by the China Earthquake Administration Array (CEArray), NorthEast China Extended SeiSmic Array (NECESSArray), INDEPTH-IV Array, F-net and other global and regional seismic networks, and involves 1,326,384 frequency-dependent phase measurements. Adjoint tomography is applied to this dataset, aiming to resolve detailed 3D maps of compressional and shear wavespeeds, and radial anisotropy. In contrast to traditional ray-theory-based tomography, adjoint tomography takes into account full 3D wave propagation effects and off-ray-path sensitivity. In our implementation, it utilizes a spectral-element method for precise wave propagation simulations. The tomographic method starts with a 3D initial model that combines the smooth radially anisotropic mantle model S362ANI and the 3D crustal model Crust2.0. Traveltime and amplitude misfits are minimized iteratively based on a conjugate gradient method, harnessing 3D finite-frequency kernels computed for each updated 3D model. After 17 iterations, our inversion reveals strong correlations of 3D wavespeed heterogeneities in the crust and upper mantle with surface tectonic units, such as the Himalaya Block, the Tibetan Plateau, the Tarim Basin, the Ordos Block, and the South China Block. Narrow slab features emerge from the smooth initial model above the transition zone beneath the Japan, Ryukyu, Philippine, Izu-Bonin, Mariana and Andaman arcs. 3D wavespeed variations appear comparable to or much sharper than in high-frequency P- and S-wave models from previous studies.
Moreover, our results include new information, such as 3D variations of radial anisotropy and the Vp/Vs ratio, which are expected to shed new light on the composition, thermal state, and flow or fabric structure in the crust and upper mantle, as well as the related dynamical processes. We intend to use these seismic images to answer important tectonic questions, namely: 1) what controls the strength of the lithosphere; 2) how does the lithosphere deform during the formation of orogens, basins and plateaus; 3) how pervasive is lithospheric delamination or partial removal beneath orogens and plateaus; and 4) whether (and how) slab segmentation and penetration into the lower mantle are linked to upwellings associated with widespread magmatism in East Asia.
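The iterative misfit minimization follows a nonlinear conjugate gradient scheme; a minimal Fletcher-Reeves sketch with a fixed step length (illustrative only; the actual inversion computes gradients from spectral-element adjoint kernels and uses proper line searches):

```python
import numpy as np

def nonlinear_cg(grad, x0, iters=50, step=0.1):
    """Fletcher-Reeves nonlinear conjugate gradient with a fixed step.

    grad: callable returning the misfit gradient at a model vector x.
    """
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g                                  # first search direction
    for _ in range(iters):
        x = x + step * d                    # model update along d
        g_new = grad(x)
        gg = g @ g
        if gg == 0.0:
            break
        beta = (g_new @ g_new) / gg         # Fletcher-Reeves coefficient
        d = -g_new + beta * d               # new conjugate direction
        g = g_new
    return x
```

On a simple quadratic misfit the iterates contract geometrically towards the minimizer, illustrating the per-iteration model updates of the tomographic inversion.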
Observations of ELM stabilization during neutral beam injection in DIII-D
NASA Astrophysics Data System (ADS)
Bortolon, Alessandro; Kramer, Gerrit; Diallo, Ahmed; Knolker, Matthias; Maingi, Rajesh; Nazikian, Raffi; Degrassie, John; Osborne, Thomas
2017-10-01
Edge localized modes (ELMs) are generally interpreted as peeling-ballooning instabilities, driven by the pedestal current and pressure gradient, with other subdominant effects possibly relevant close to marginal stability. We report observations of transient stabilization of type-I ELMs during neutral beam injection (NBI), emerging from a combined dataset of DIII-D ELMy H-mode plasmas with moderate heating obtained through pulsed NBI waveforms. Statistical analysis of ELM onset times indicates that, in the selected dataset, the likelihood of onset of an ELM lowers significantly during NBI modulation pulses, with the strongest correlation found for counter-current NBI. The effect is also found in rf-heated H-modes, where ELMs appear inhibited when isolated diagnostic beam pulses are applied. Coherent average analysis is used to determine how plasma density, temperature, rotation as well as beam ion quantities evolve during a NB modulation cycle, finding relatively small changes (∼3%) of pedestal Te and ne and toroidal and poloidal rotation variations of up to 5 km/s. The effect of these changes on pedestal stability will be discussed. Work supported by US DOE under DE-FC02-04ER54698, DE-AC02-09CH11466.
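Coherent average analysis stacks signal segments aligned to the modulation pulses so that uncorrelated fluctuations average out while the pulse-synchronous response survives; a minimal sketch (function name ours):

```python
import numpy as np

def coherent_average(signal, triggers, window):
    """Average fixed-length signal segments aligned to trigger indices.

    Uncorrelated noise is suppressed by roughly 1/sqrt(n_segments).
    """
    segs = [signal[i:i + window] for i in triggers
            if i + window <= len(signal)]
    return np.mean(segs, axis=0)
```

Averaging over many modulation cycles is what makes percent-level changes of pedestal quantities resolvable above the fluctuation background.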
Bose-Einstein correlations in pp and PbPb collisions with ALICE at the LHC
Kisiel, Adam
2018-05-14
We report on the results of identical pion femtoscopy at the LHC. The Bose-Einstein correlation analysis was performed on the large-statistics ALICE p+p datasets at √s = 0.9 TeV and 7 TeV collected during 2010 LHC running and the first Pb+Pb dataset at √s_NN = 2.76 TeV. Detailed pion femtoscopy studies in heavy-ion collisions have shown that emission region sizes ("HBT radii") decrease with increasing pair momentum, which is understood as a manifestation of the collective behavior of matter. 3D radii were also found to universally scale with event multiplicity. In p+p collisions at 7 TeV one measures multiplicities which are comparable with those registered in peripheral Au+Au and Cu+Cu collisions at RHIC, so direct comparisons and tests of scaling laws are now possible. We show the results of a double-differential 3D pion HBT analysis, as a function of multiplicity and pair momentum. The results for two collision energies are compared to results obtained in heavy-ion collisions at similar multiplicity and p+p collisions at lower energy. We identify the relevant scaling variables for the femtoscopic radii and discuss the similarities and differences to results from heavy ions. The observed trends give insight into the soft particle production mechanism in p+p collisions and suggest that a self-interacting collective system may be created in sufficiently high multiplicity events. First results for the central Pb+Pb collisions are also shown. A significant increase of the reaction zone volume and lifetime in comparison to RHIC is observed. Signatures of collective hydrodynamics-like behavior of the system are also apparent, and are compared to model predictions.
How to estimate the 3D power spectrum of the Lyman-α forest
NASA Astrophysics Data System (ADS)
Font-Ribera, Andreu; McDonald, Patrick; Slosar, Anže
2018-01-01
We derive and numerically implement an algorithm for estimating the 3D power spectrum of the Lyman-α (Lyα) forest flux fluctuations. The algorithm exploits the unique geometry of Lyα forest data to efficiently measure the cross-spectrum between lines of sight as a function of parallel wavenumber, transverse separation and redshift. We start by approximating the global covariance matrix as block-diagonal, where only pixels from the same spectrum are correlated. We then compute the eigenvectors of the derivative of the signal covariance with respect to cross-spectrum parameters, and project the inverse-covariance-weighted spectra onto them. This acts much like a radial Fourier transform over redshift windows. The resulting cross-spectrum inference is then converted into our final product, an approximation of the likelihood for the 3D power spectrum expressed as second order Taylor expansion around a fiducial model. We demonstrate the accuracy and scalability of the algorithm and comment on possible extensions. Our algorithm will allow efficient analysis of the upcoming Dark Energy Spectroscopic Instrument dataset.
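The central quantity, the cross-spectrum between lines of sight as a function of parallel wavenumber, can be sketched for a single pair of sightlines with an FFT (a toy version of the paper's inverse-covariance-weighted estimator, ignoring weighting, windows and redshift binning):

```python
import numpy as np

def cross_spectrum(delta1, delta2, dz=1.0):
    """Cross-spectrum of two flux-fluctuation sightlines vs parallel
    wavenumber k_par, for pixels spaced by dz along the line of sight."""
    n = len(delta1)
    f1 = np.fft.rfft(delta1) * dz
    f2 = np.fft.rfft(delta2) * dz
    k_par = 2 * np.pi * np.fft.rfftfreq(n, d=dz)
    return k_par, (f1 * np.conj(f2)).real / (n * dz)
```

A sinusoidal fluctuation shows up as a peak at its own wavenumber, and averaging such cross-spectra over sightline pairs binned in transverse separation builds up the 3D power spectrum estimate.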
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-04-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. 
By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
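The inter-station correlation that underlies the above, computed in the frequency domain from long noise records, can be sketched as follows. The records are synthetic (a common white-noise signal seen with a time delay at the second station), so this illustrates only the correlation step, not the source-distribution modelling the abstract is concerned with:

```python
import numpy as np

# Sketch of an inter-station noise cross-correlation via FFT. Station 2
# records the same (toy) noise wavefield as station 1, delayed by 37 samples;
# the correlation peak recovers that travel-time-like lag.

rng = np.random.default_rng(5)
n = 4096
src = rng.normal(size=n)
lag_true = 37
rec1 = src
rec2 = np.roll(src, lag_true)      # station 2 sees the signal 37 samples later

# cross-correlation C[tau] = sum_t rec1[t] * rec2[t + tau] via the FFT
spec = np.conj(np.fft.rfft(rec1)) * np.fft.rfft(rec2)
xcorr = np.fft.irfft(spec, n)
lag = int(np.argmax(xcorr))
print(lag)  # 37
```

Under diffuse-wavefield assumptions this correlation approximates the inter-station Green function; the abstract's point is precisely that real noise sources violate those assumptions, motivating joint source-structure inversion.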
Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks
NASA Astrophysics Data System (ADS)
Roth, Holger; Oda, Masahiro; Shimizu, Natsuki; Oda, Hirohisa; Hayashi, Yuichiro; Kitasaka, Takayuki; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku
2018-03-01
Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-built 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture: one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast-enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 +/- 3.8 (range [79.8, 94.8])% in testing, achieving new state-of-the-art performance in pancreas segmentation on this dataset.
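The Dice score used as the evaluation metric above can be computed as twice the overlap divided by the total foreground of the two masks; a minimal sketch on toy 2D masks (the formula is identical for 3D volumes):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# toy masks: 4 foreground pixels vs 6, with 4 overlapping
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(dice_score(a, b))  # 0.8
```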
Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M
2018-06-01
A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
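The evaluation protocol described above (randomly mask known results, impute them, score with a normalized root-mean-square error) can be sketched as follows. The imputer here is a trivial column-mean fill, a placeholder for MICE/GP/3D-MICE, and normalizing the RMSE by the standard deviation of the masked truth is one common convention, assumed rather than taken from the paper:

```python
import numpy as np

# Sketch of masked-imputation evaluation: hide 10% of a synthetic 13-analyte
# lab matrix, impute with column means (stand-in imputer), score with nRMSE.

rng = np.random.default_rng(0)
X = rng.normal(loc=100.0, scale=15.0, size=(200, 13))   # 13 analytes, 200 draws

mask = rng.random(X.shape) < 0.1          # mask 10% of entries
X_obs = X.copy()
X_obs[mask] = np.nan

col_means = np.nanmean(X_obs, axis=0)     # placeholder imputer
X_imp = np.where(mask, col_means, X_obs)

truth, pred = X[mask], X_imp[mask]
nrmse = np.sqrt(np.mean((pred - truth) ** 2)) / np.std(truth)
print(round(nrmse, 3))
```

Mean imputation lands near nRMSE ≈ 1 by construction; the paper's point is that exploiting cross-sectional and longitudinal structure (as 3D-MICE does) pushes this error well below that baseline.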
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets
2010-01-01
Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
Artan, G.A.; Verdin, J.P.; Lietzow, R.
2013-01-01
We illustrate the ability to monitor the status of snowpack over large areas by using a spatially distributed snow accumulation and ablation model in the Upper Colorado Basin. The model was forced with precipitation fields from the National Weather Service (NWS) Multi-sensor Precipitation Estimator (MPE) and the Tropical Rainfall Measuring Mission (TRMM) datasets; the remaining meteorological model input data were from NOAA's Global Forecast System (GFS) model output fields. The simulated snow water equivalent (SWE) was compared to SWEs from the Snow Data Assimilation System (SNODAS) and SNOwpack TELemetry system (SNOTEL) over a region of the Western United States that covers parts of the Upper Colorado Basin. We also compared the SWE product estimated from the Special Sensor Microwave Imager (SSM/I) and Scanning Multichannel Microwave Radiometer (SMMR) to the SNODAS and SNOTEL SWE datasets. Agreement between the spatial distribution of the simulated SWE with both SNODAS and SNOTEL was high for the two model runs for the entire snow accumulation period. Model-simulated SWEs, both with MPE and TRMM, were significantly correlated spatially on average with the SNODAS SWE (r = 0.81 and r = 0.54; d.f. = 543) and the SNOTEL SWE (r = 0.85 and r = 0.55; d.f. = 543); for monthly basinwide simulated average SWE the correlation was also highly significant (r = 0.95 and r = 0.73; d.f. = 12). The SWE estimated from the passive microwave imagery was correlated with neither the SNODAS SWE (r = 0.14, d.f. = 7) nor the SNOTEL-reported SWE values (r = 0.08, d.f. = 7). The agreement between modeled SWE and the SWE recorded by SNODAS and SNOTEL weakened during the snowmelt period due to an underestimation bias in the air temperature used as model input forcing.
Triplet correlation functions in liquid water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in; Singh, Murari
Triplet correlations have been shown to play a crucial role in the transformation of simple liquids to anomalous tetrahedral fluids [M. Singh, D. Dhabal, A. H. Nguyen, V. Molinero, and C. Chakravarty, Phys. Rev. Lett. 112, 147801 (2014)]. Here we examine triplet correlation functions for water, arguably the most important tetrahedral liquid, under ambient conditions, using configurational ensembles derived from molecular dynamics (MD) simulations and reverse Monte Carlo (RMC) datasets fitted to experimental scattering data. Four different RMC data sets with widely varying hydrogen-bond topologies fitted to neutron and x-ray scattering data are considered [K. T. Wikfeldt, M. Leetmaa, M. P. Ljungberg, A. Nilsson, and L. G. M. Pettersson, J. Phys. Chem. B 113, 6246 (2009)]. Molecular dynamics simulations are performed for two rigid-body effective pair potentials (SPC/E and TIP4P/2005) and the monatomic water (mW) model. Triplet correlation functions are compared with other structural measures for tetrahedrality, such as the O–O–O angular distribution function and the local tetrahedral order distributions. In contrast to the pair correlation functions, which are identical for all the RMC ensembles, the O–O–O triplet correlation function can discriminate between ensembles with different degrees of tetrahedral network formation with the maximally symmetric, tetrahedral SYM dataset displaying distinct signatures of tetrahedrality similar to those obtained from atomistic simulations of the SPC/E model. Triplet correlations from the RMC datasets conform closely to the Kirkwood superposition approximation, while those from MD simulations show deviations within the first two neighbour shells. The possibilities for experimental estimation of triplet correlations of water and other tetrahedral liquids are discussed.
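The Kirkwood superposition approximation mentioned above factorizes the triplet correlation function into a product of pair correlations, g3(r, s, t) ≈ g2(r) g2(s) g2(t). A minimal numerical sketch, using a toy tabulated g2 (not water data) on a radial grid:

```python
import numpy as np

# Sketch of the Kirkwood superposition approximation for triplet correlations.
# g2 here is a toy pair correlation with a single first-shell peak at r = 0.3
# (arbitrary units); real analyses would use g2 measured from simulation.

r_grid = np.linspace(0.2, 1.5, 14)
g2 = 1.0 + 0.5 * np.exp(-((r_grid - 0.3) ** 2) / 0.01)

def g2_at(r):
    return np.interp(r, r_grid, g2)

def g3_kirkwood(r12, r13, r23):
    """Kirkwood superposition: g3 ~ g2(r12) * g2(r13) * g2(r23)."""
    return g2_at(r12) * g2_at(r13) * g2_at(r23)

# an equilateral triplet at the first-peak distance
print(round(g3_kirkwood(0.3, 0.3, 0.3), 3))  # 3.375
```

Deviations of the measured g3 from this product (as the MD results above show within the first two shells) are exactly what carries the extra, genuinely three-body structural information.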
4D very high-resolution topography monitoring of surface deformation using UAV-SfM framework.
NASA Astrophysics Data System (ADS)
Clapuyt, François; Vanacker, Veerle; Schlunegger, Fritz; Van Oost, Kristof
2016-04-01
During the last years, exploratory research has shown that UAV-based image acquisition is suitable for environmental remote sensing and monitoring. Image acquisition with cameras mounted on a UAV can be performed at very high spatial resolution and high temporal frequency in the most dynamic environments. Combined with the Structure-from-Motion algorithm, the UAV-SfM framework is capable of providing digital surface models (DSM) which are highly accurate when compared to other very-high-resolution topographic datasets and highly reproducible for repeated measurements over the same study area. In this study, we aim to assess (1) differential movement of the Earth's surface and (2) the sediment budget of a complex earthflow located in the Central Swiss Alps, based on three topographic datasets acquired over a period of 2 years. For three time steps, we acquired aerial photographs with a standard reflex camera mounted on a low-cost and lightweight UAV. Image datasets were then processed with the Structure-from-Motion algorithm in order to reconstruct a 3D dense point cloud representing the topography. Georeferencing of the outputs was achieved with ground control points (GCPs) surveyed in the field with an RTK GPS. Finally, a digital elevation model of differences (DOD) was computed to assess the topographic changes between the three acquisition dates, while surface displacements were quantified using image correlation techniques. Our results show that the digital elevation model of topographic differences is able to capture surface deformation at cm-scale resolution. The mean annual displacement of the earthflow is about 3.6 m, while the forefront of the landslide has advanced by ca. 30 meters over a period of 18 months. The 4D analysis permits identification of the direction and velocity of Earth movement.
Stable topographic ridges condition the direction of the flow with highest downslope movement on steep slopes, and diffuse movement due to lateral sediment flux in the central part of the earthflow.
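The DEM-of-Difference (DoD) step described above amounts to differencing two gridded surface models, masking changes below a level of detection, and converting the remaining elevation change into a volume budget. A minimal sketch with illustrative cell size and threshold (both assumptions, not the study's values):

```python
import numpy as np

# Sketch of a DEM-of-Difference volume budget. dsm_t0/dsm_t1 are toy surface
# models; a 20 cm deposition patch is added to the later epoch. Cell size
# (5 cm) and level of detection (5 cm) are illustrative assumptions.

cell = 0.05                        # grid resolution in metres
dsm_t0 = np.zeros((100, 100))
dsm_t1 = np.zeros((100, 100))
dsm_t1[40:60, 40:60] += 0.2        # 20 cm of deposition on a 20x20-cell patch

dod = dsm_t1 - dsm_t0
lod = 0.05                         # ignore |dz| below the level of detection
significant = np.abs(dod) >= lod

volume_change = np.sum(dod[significant]) * cell ** 2   # m^3
print(round(volume_change, 3))  # 0.2
```

Positive and negative sums of the thresholded DoD give deposition and erosion volumes separately, which is how a sediment budget for the earthflow is closed.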
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Q; Devpura, S; Feghali, K
2016-06-15
Purpose: To investigate the correlation of normal lung CT density changes with dose accuracy and outcome after SBRT for patients with early stage lung cancer. Methods: Dose distributions for patients originally planned and treated using a 1-D pencil beam-based (PB-1D) dose algorithm were retrospectively recomputed using the 3-D pencil beam (PB-3D) algorithm and the model-based methods AAA, Acuros XB (AXB), and Monte Carlo (MC). Prescription dose was 12 Gy × 4 fractions. Planning CT images were rigidly registered to the followup CT datasets at 6–9 months after treatment. Corresponding dose distributions were mapped from the planning to followup CT images. Following the method of Palma et al. (1–2), Hounsfield Unit (HU) changes in lung density in individual, 5 Gy, dose bins from 5–45 Gy were assessed in the peri-tumor region, defined as a uniform, 3 cm expansion around the ITV (1). Results: There is a 10–15% displacement of the high dose region (40–45 Gy) with the model-based algorithms, relative to the PB method, due to the electron scattering of dose away from the tumor into normal lung tissue (Fig. 1). Consequently, the high-dose lung region falls within the 40–45 Gy dose range, causing an increase in HU change in this region, as predicted by model-based algorithms (Fig. 2). The patient with the highest HU change (∼110) had mild radiation pneumonitis, and the patient with HU change of ∼80–90 had shortness of breath. No evidence of pneumonitis was observed for the 3 patients with smaller CT density changes (<50 HU). Changes in CT densities, and dose-response correlation, as computed with model-based algorithms, are in excellent agreement with the findings of Palma et al. (1–2). Conclusion: Dose computed with PB (1D or 3D) algorithms was poorly correlated with clinically relevant CT density changes, as opposed to model-based algorithms. A larger cohort of patients is needed to confirm these results.
This work was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.
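The dose-binned analysis above (mean HU change in individual 5 Gy bins from 5–45 Gy) can be sketched as follows. The dose and HU-change arrays are synthetic stand-ins for the mapped planning dose and the follow-up CT difference, and the linear dose-response slope is an illustrative assumption:

```python
import numpy as np

# Sketch of binning follow-up HU change by mapped planning dose in 5 Gy bins.
# Both arrays are synthetic; real inputs would come from rigidly registered
# planning and follow-up CTs restricted to the peri-tumor region.

rng = np.random.default_rng(1)
dose = rng.uniform(0.0, 50.0, size=10000)             # mapped dose per voxel (Gy)
hu_change = 2.0 * dose + rng.normal(0.0, 5.0, 10000)  # toy dose-response (HU)

edges = np.arange(5.0, 50.0, 5.0)       # bin edges 5, 10, ..., 45 Gy
idx = np.digitize(dose, edges)          # 0 = below 5 Gy, 9 = 45 Gy and above
mean_hu = [hu_change[idx == k].mean() for k in range(1, len(edges))]
for lo, m in zip(edges[:-1], mean_hu):
    print(f"{lo:.0f}-{lo + 5:.0f} Gy: mean HU change {m:.1f}")
```

Plotting mean HU change per dose bin for each dose algorithm is what exposes the displaced high-dose region the abstract describes.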
To generate a finite element model of human thorax using the VCH dataset
NASA Astrophysics Data System (ADS)
Shi, Hui; Liu, Qian
2009-10-01
Purpose: To generate a three-dimensional (3D) finite element (FE) model of the human thorax which may provide the basis of biomechanics simulation for studying the design and mechanism of safety belts in vehicle collisions. Methods: Using manual or semi-manual segmentation, the region of interest was segmented from the VCH (Visible Chinese Human) dataset. The 3D surface model of the thorax was visualized using VTK (Visualization Toolkit) and further translated into STL (Stereo Lithography) format, which approximates the geometry of the solid model by representing its boundaries with triangular facets. The STL data were then normalized into NURBS surfaces and IGES format using software such as Geomagic Studio to provide an archetype for reverse engineering. The 3D FE model was established using Ansys software. Results: The generated 3D FE model was an integrated thorax model that reproduces the body's complicated structural morphology, including the clavicle, ribs, spine and sternum. It consisted of 1 044 179 elements in total. Conclusions: Compared with previous thorax models, this FE model clearly enhances the authenticity and precision of the analysis results and can provide a sound basis for biomechanical research on the human thorax. Furthermore, the same method can be used to establish 3D FE models of other organs and tissues from the VCH dataset.
The Index cohesive effect on stock market correlations
NASA Astrophysics Data System (ADS)
Shapira, Y.; Kenett, D. Y.; Ben-Jacob, E.
2009-12-01
We present empirical examination and reassessment of the functional role of the market Index, using datasets of stock returns for eight years, by analyzing and comparing the results for two very different markets: 1) the New York Stock Exchange (NYSE), representing a large, mature market, and 2) the Tel Aviv Stock Exchange (TASE), representing a small, young market. Our method includes special collective (holographic) analysis of stock-Index correlations, of nested stock correlations (including the Index as an additional ghost stock) and of bare stock correlations (after subtraction of the Index return from the stocks returns). Our findings verify and strongly substantiate the assumed functional role of the index in the financial system as a cohesive force between stocks, i.e., the correlations between stocks are largely due to the strong correlation between each stock and the Index (the adhesive effect), rather than inter-stock dependencies. The Index adhesive and cohesive effects on the market correlations in the two markets are presented and compared in a reduced 3-D principal component space of the correlation matrices (holographic presentation). The results provide new insights into the interplay between an index and its constituent stocks in TASE-like versus NYSE-like markets.
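The "bare" stock correlations described above (correlations after the index return is subtracted out) can be sketched as a residual correlation. The data here are synthetic, with a common index factor deliberately driving most of the raw correlation:

```python
import numpy as np

# Sketch of index-subtracted ("bare") stock correlations: regress the index
# return out of each stock's return series, then correlate the residuals.
# Returns are synthetic; loadings and noise levels are illustrative.

rng = np.random.default_rng(2)
n = 2000
index = rng.normal(size=n)
stock_a = 0.9 * index + 0.3 * rng.normal(size=n)
stock_b = 0.9 * index + 0.3 * rng.normal(size=n)

def residual(y, x):
    beta = np.dot(x, y) / np.dot(x, x)   # OLS slope through the origin
    return y - beta * x

raw = np.corrcoef(stock_a, stock_b)[0, 1]
bare = np.corrcoef(residual(stock_a, index), residual(stock_b, index))[0, 1]
print(round(raw, 2), round(bare, 2))
```

When the raw inter-stock correlation is high but the bare correlation collapses toward zero, the index is acting as the cohesive force, which is the effect the abstract reports for the NYSE and TASE.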
Three-dimensional scanning transmission electron microscopy of biological specimens.
de Jonge, Niels; Sougrat, Rachid; Northan, Brian M; Pennycook, Stephen J
2010-02-01
A three-dimensional (3D) reconstruction of the cytoskeleton and a clathrin-coated pit in mammalian cells has been achieved from a focal-series of images recorded in an aberration-corrected scanning transmission electron microscope (STEM). The specimen was a metallic replica of the biological structure comprising Pt nanoparticles 2-3 nm in diameter, with a high stability under electron beam radiation. The 3D dataset was processed by an automated deconvolution procedure. The lateral resolution was 1.1 nm, set by pixel size. Particles differing by only 10 nm in vertical position were identified as separate objects with greater than 20% dip in contrast between them. We refer to this value as the axial resolution of the deconvolution or reconstruction, the ability to recognize two objects, which were unresolved in the original dataset. The resolution of the reconstruction is comparable to that achieved by tilt-series transmission electron microscopy. However, the focal-series method does not require mechanical tilting and is therefore much faster. 3D STEM images were also recorded of the Golgi ribbon in conventional thin sections containing 3T3 cells with a comparable axial resolution in the deconvolved dataset.
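The axial-resolution criterion quoted above (two objects counted as resolved when the intensity between their peaks dips by more than 20%) can be sketched with toy Gaussian particle profiles along the z axis. The widths and separations are arbitrary illustrative values, not the instrument's point-spread function:

```python
import numpy as np

# Sketch of the >20% dip criterion for axial resolution: two Gaussian
# particle profiles are "resolved" if the intensity midway between them
# dips by more than 20% of the peak intensity.

z = np.linspace(-30.0, 30.0, 601)

def two_particle_profile(separation, width=3.0):
    g1 = np.exp(-((z + separation / 2) ** 2) / (2 * width ** 2))
    g2 = np.exp(-((z - separation / 2) ** 2) / (2 * width ** 2))
    return g1 + g2

def dip_fraction(profile):
    mid = profile[len(profile) // 2]     # intensity midway between the peaks
    return (profile.max() - mid) / profile.max()

results = {sep: bool(dip_fraction(two_particle_profile(sep)) > 0.2)
           for sep in (5.0, 10.0, 20.0)}
print(results)
```

With these toy widths, the 5-unit pair merges into a single unimodal profile (dip = 0) while the wider separations pass the 20% criterion, mirroring how the criterion separates resolved from unresolved particle pairs.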
NASA Astrophysics Data System (ADS)
Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.
2015-02-01
Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data over Trento for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and the geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.
Validation of 3D multimodality roadmapping in interventional neuroradiology
NASA Astrophysics Data System (ADS)
Ruijters, Daniel; Homan, Robert; Mielekamp, Peter; van de Haar, Peter; Babic, Drazenko
2011-08-01
Three-dimensional multimodality roadmapping is entering clinical routine utilization for neuro-vascular treatment. Its purpose is to navigate intra-arterial and intra-venous endovascular devices through complex vascular anatomy by fusing pre-operative computed tomography (CT) or magnetic resonance (MR) with the live fluoroscopy image. The fused image presents the real-time position of the intra-vascular devices together with the patient's 3D vascular morphology and its soft-tissue context. This paper investigates the effectiveness, accuracy, robustness and computation times of the described methods in order to assess their suitability for the intended clinical purpose: accurate interventional navigation. The mutual information-based 3D-3D registration proved to be of sub-voxel accuracy and yielded an average registration error of 0.515 mm and the live machine-based 2D-3D registration delivered an average error of less than 0.2 mm. The capture range of the image-based 3D-3D registration was investigated to characterize its robustness, and yielded an extent of 35 mm and 25° for >80% of the datasets for registration of 3D rotational angiography (3DRA) with CT, and 15 mm and 20° for >80% of the datasets for registration of 3DRA with MR data. The image-based 3D-3D registration could be computed within 8 s, while applying the machine-based 2D-3D registration only took 1.5 µs, which makes them very suitable for interventional use.
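The intensity-based 3D-3D registration above relies on mutual information as the similarity measure; a minimal histogram-based sketch (the images are synthetic, and 32 bins is an arbitrary choice):

```python
import numpy as np

# Sketch of mutual information (MI) from a joint intensity histogram, the
# similarity measure behind intensity-based multimodal registration.
# MI is maximal when the two images are perfectly aligned.

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)    # marginal of image a
    py = p.sum(axis=0, keepdims=True)    # marginal of image b
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(img, 5, axis=0)        # a misaligned copy of the same image
print(mutual_information(img, img) > mutual_information(img, shifted))  # True
```

A registration optimiser varies the rigid-body transform to maximise this quantity; because MI needs no functional relation between the two intensity scales, it works across modalities such as 3DRA, CT and MR.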
BiPACE 2D--graph-based multiple alignment for comprehensive 2D gas chromatography-mass spectrometry.
Hoffmann, Nils; Wilhelm, Mathias; Doebbe, Anja; Niehaus, Karsten; Stoye, Jens
2014-04-01
Comprehensive 2D gas chromatography-mass spectrometry is an established method for the analysis of complex mixtures in analytical chemistry and metabolomics. It produces large amounts of data that require semiautomatic, but preferably automatic handling. This involves the location of significant signals (peaks) and their matching and alignment across different measurements. To date, there exist only a few openly available algorithms for the retention time alignment of peaks originating from such experiments that scale well with increasing sample and peak numbers, while providing reliable alignment results. We describe BiPACE 2D, an automated algorithm for retention time alignment of peaks from 2D gas chromatography-mass spectrometry experiments and evaluate it on three previously published datasets against the mSPA, SWPA and Guineu algorithms. We also provide a fourth dataset from an experiment studying the H2 production of two different strains of Chlamydomonas reinhardtii that is available from the MetaboLights database together with the experimental protocol, peak-detection results and manually curated multiple peak alignment for future comparability with newly developed algorithms. BiPACE 2D is contained in the freely available Maltcms framework, version 1.3, hosted at http://maltcms.sf.net, under the terms of the L-GPL v3 or Eclipse Open Source licenses. The software used for the evaluation along with the underlying datasets is available at the same location. The C. reinhardtii dataset is freely available at http://www.ebi.ac.uk/metabolights/MTBLS37.
The 6dF Galaxy Survey: Mass and Motions in the Local Universe
NASA Astrophysics Data System (ADS)
Colless, M.; Jones, H.; Campbell, L.; Burkey, D.; Taylor, A.; Saunders, W.
2005-01-01
The 6dF Galaxy Survey will provide 167000 redshifts and about 15000 peculiar velocities for galaxies over most of the southern sky out to about cz = 30000 km/s. The survey is currently almost half complete, with the final observations due in mid-2005. An initial data release was made public in December 2002; the first third of the dataset will be released at the end of 2003, with the remaining thirds being released at the end of 2004 and 2005. The status of the survey, the survey database and other relevant information can be obtained from the 6dFGS web site at http://www.mso.anu.edu.au/6dFGS. In terms of constraining cosmological parameters, combining the 6dFGS redshift and peculiar velocity surveys will allow us to: (1) break the degeneracy between the redshift-space distortion parameter beta = Omega_m^0.6/b and the galaxy-mass correlation parameter r_g; (2) measure the four parameters A_g, Gamma, beta and r_g with precisions of between 1% and 3%; (3) measure the variation of r_g and b with scale to within a few percent over a wide range of scales.
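A hedged numerical illustration of the redshift-space distortion parameter beta = Omega_m^0.6 / b quoted above, for assumed fiducial values of the matter density and galaxy bias (neither taken from the survey):

```python
# Illustrative evaluation of beta = Omega_m^0.6 / b. The values of Omega_m
# and the bias b below are assumed fiducials, not 6dFGS measurements.
omega_m = 0.3   # assumed matter density parameter
b = 1.1         # assumed linear galaxy bias
beta = omega_m ** 0.6 / b
print(round(beta, 3))
```

The degeneracy the survey aims to break is visible here: a higher Omega_m with a proportionally higher bias yields the same beta, which is why the peculiar velocity data are needed as an independent constraint.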
Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations
Zhao, Liya; Jia, Kebin
2015-01-01
This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In preregistration, the directly calculated affine matrix, which is obtained from limited pixel information, may misregister when large biases exist, misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. Compared with state-of-the-art registration frameworks, our method gains more accurate registration results on both synthetic and real datasets. Besides, principal component analysis (PCA) is combined with correlation measures like Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experimental results also show faster convergence speed.
Al-Kadi, Omar S; Chung, Daniel Y F; Carlisle, Robert C; Coussios, Constantin C; Noble, J Alison
2015-04-01
Intensity variations in image texture can provide powerful quantitative information about physical properties of biological tissue. However, tissue patterns can vary according to the utilized imaging system and are intrinsically correlated to the scale of analysis. In the case of ultrasound, the Nakagami distribution is a general model of the ultrasonic backscattering envelope under various scattering conditions and densities where it can be employed for characterizing image texture, but the subtle intra-heterogeneities within a given mass are difficult to capture via this model as it works at a single spatial scale. This paper proposes a locally adaptive 3D multi-resolution Nakagami-based fractal feature descriptor that extends Nakagami-based texture analysis to accommodate subtle speckle spatial frequency tissue intensity variability in volumetric scans. Local textural fractal descriptors - which are invariant to affine intensity changes - are extracted from volumetric patches at different spatial resolutions from voxel lattice-based generated shape and scale Nakagami parameters. Using ultrasound radio-frequency datasets we found that after applying an adaptive fractal decomposition label transfer approach on top of the generated Nakagami voxels, tissue characterization results were superior to the state of the art. Experimental results on real 3D ultrasonic pre-clinical and clinical datasets suggest that describing tumor intra-heterogeneity via this descriptor may facilitate improved prediction of therapy response and disease characterization.
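The shape and scale Nakagami parameters mentioned above are commonly estimated by the method of moments from the backscatter envelope: the scale Omega is the mean power E[x²], and the shape m is Omega² divided by the variance of x². A minimal sketch on synthetic samples, drawn via the standard relation that x follows a Nakagami(m, Omega) law exactly when x² follows a Gamma(m, Omega/m) law:

```python
import numpy as np

# Sketch of moment-based Nakagami parameter estimation from an envelope.
# Samples are synthetic, generated through the Gamma link: x^2 ~ Gamma(m, Omega/m).

def nakagami_moments(x):
    """Method-of-moments estimates: Omega = E[x^2], m = Omega^2 / Var(x^2)."""
    x2 = x ** 2
    omega = x2.mean()               # scale: mean backscattered power
    m = omega ** 2 / x2.var()       # shape: inverse normalized power variance
    return m, omega

rng = np.random.default_rng(4)
m_true, omega_true = 2.0, 1.5
x = np.sqrt(rng.gamma(shape=m_true, scale=omega_true / m_true, size=200000))
m_hat, omega_hat = nakagami_moments(x)
print(round(m_hat, 2), round(omega_hat, 2))
```

Computing (m, Omega) per voxel neighbourhood yields the parameter maps from which the paper's multi-resolution fractal descriptors are then derived.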
NASA Astrophysics Data System (ADS)
Salamunićcar, Goran; Lončarić, Sven; Pina, Pedro; Bandeira, Lourenço; Saraiva, José
2011-01-01
Recently, all the craters from the major currently available manually assembled catalogues have been merged into the catalogue with 57 633 known Martian impact craters (MA57633GT). In addition, the work on crater detection algorithm (CDA), developed to search for still uncatalogued impact craters using 1/128° MOLA data, resulted in MA115225GT. In parallel with this work another CDA has been developed which resulted in the Stepinski catalogue containing 75 919 craters (MA75919T). The new MA130301GT catalogue presented in this paper is the result of: (1) overall merger of MA115225GT and MA75919T; (2) 2042 additional craters found using Shen-Castan based CDA from the previous work and 1/128° MOLA data; and (3) 3129 additional craters found using CDA for optical images from the previous work and selected regions of 1/256° MDIM, 1/256° THEMIS-DIR, and 1/256° MOC datasets. All craters from MA130301GT are manually aligned with all used datasets. For all the craters that originate from the used catalogues (Barlow, Rodionova, Boyce, Kuzmin, Stepinski) we integrated all the attributes available in these catalogues. With such an approach MA130301GT provides everything that was included in these catalogues, plus: (1) the correlation between various morphological descriptors from used catalogues; (2) the correlation between manually assigned attributes and automated depth/diameter measurements from MA75919T and our CDA; (3) surface dating which has been improved in resolution globally; (4) average errors and their standard deviations for manually and automatically assigned attributes such as position coordinates, diameter, depth/diameter ratio, etc.; and (5) positional accuracy of features in the used datasets according to the defined coordinate system referred to as MDIM 2.1, which incorporates 1232 globally distributed ground control points, while our catalogue contains 130 301 cross-references between each of the used datasets. 
Global completeness of MA130301GT is up to D ≥ ~2 km (it contains 85 783 such craters, while the smallest D is 0.924 km). This is a considerable improvement in comparison with the completeness of the Rodionova (~10 km), Barlow (~5 km) and Stepinski (~3 km) catalogues. An accompanying result of the new catalogue is a contribution to the evaluation of CDAs, for which the following methods have been developed: (1) a new context-aware method for the advanced automated registration of craters with GT catalogues; (2) a new method for the manual registration of newly found craters into GT catalogues; and (3) additional new accompanying methods for the objective evaluation of CDAs using different datasets, including optical images.
Joint inversions of two VTEM surveys using quasi-3D TDEM and 3D magnetic inversion algorithms
NASA Astrophysics Data System (ADS)
Kaminski, Vlad; Di Massa, Domenico; Viezzoli, Andrea
2016-05-01
In the current paper, we present results of a joint quasi-three-dimensional (quasi-3D) inversion of two versatile time domain electromagnetic (VTEM) datasets, as well as a joint 3D inversion of associated aeromagnetic datasets, from two surveys flown six years apart from one another (2007 and 2013) over a volcanogenic massive sulphide gold (VMS-Au) prospect in northern Ontario, Canada. The time domain electromagnetic (TDEM) data were inverted jointly using the spatially constrained inversion (SCI) approach. In order to increase the coherency in the model space, a calibration parameter was added. This was followed by a joint inversion of the total magnetic intensity (TMI) data extracted from the two surveys. The results of the inversions have been studied and matched with the known geology, adding some new valuable information to the ongoing mineral exploration initiative.
Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery
NASA Astrophysics Data System (ADS)
Metcalf, Jeremy P.; Olsen, Richard C.
2016-05-01
Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.
Fast 3D shape screening of large chemical databases through alignment-recycling
Fontaine, Fabien; Bolton, Evan; Borodina, Yulia; Bryant, Stephen H
2007-01-01
Background: Large chemical databases require fast, efficient, and simple ways of looking for similar structures. Although such tasks are now fairly well resolved for graph-based similarity queries, they remain an issue for 3D approaches, particularly for those based on 3D shape overlays. Inspired by a recent technique developed to compare molecular shapes, we designed a hybrid methodology, alignment-recycling, that enables efficient retrieval and alignment of structures with similar 3D shapes. Results: Using a dataset of more than one million PubChem compounds of limited size (< 28 heavy atoms) and flexibility (< 6 rotatable bonds), we obtained a set of a few thousand diverse structures covering the entire 3D shape space of the conformers of the dataset. Transformation matrices gathered from the overlays between these diverse structures and the 3D conformer dataset allowed us to drastically (100-fold) reduce the CPU time required for shape overlay. The alignment-recycling heuristic produces results consistent with de novo alignment calculation, with better than 80% hit-list overlap on average. Conclusion: Overlay-based 3D methods are computationally demanding when searching large databases. Alignment-recycling reduces the CPU time to perform shape similarity searches by breaking the alignment problem into three steps: selection of diverse shapes to describe the database shape-space; overlay of the database conformers to the diverse shapes; and non-optimized overlay of query and database conformers using common reference shapes. The precomputation required by the first two steps is a significant cost of the method; however, once performed, querying is two orders of magnitude faster. Extensions and variations of this methodology, for example to handle larger and more flexible small molecules, are discussed. PMID:17880744
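The recycling step rests on composing stored overlays rather than recomputing them: if both a query conformer and a database conformer have been overlaid onto the same reference shape, the query-to-database alignment follows by matrix composition. A minimal sketch with 4×4 homogeneous transforms (the function name is mine):

```python
import numpy as np

def recycle_alignment(t_query_to_ref, t_db_to_ref):
    """Compose a query->database alignment from two stored overlays
    onto a common reference shape (4x4 homogeneous matrices),
    avoiding a fresh pairwise shape overlay."""
    return np.linalg.inv(t_db_to_ref) @ t_query_to_ref
```

Because the per-conformer overlays onto the diverse reference shapes are precomputed, each query needs only one optimized overlay (to the references); every database alignment is then a cheap matrix product.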
Dawood, Faten A; Rahmat, Rahmita W; Kadiman, Suhaini B; Abdullah, Lili N; Zamrin, Mohd D
2014-01-01
This paper presents a hybrid method to extract the endocardial contour of the right ventricle (RV) in 4 slices from a 3D echocardiography dataset. The overall framework comprises four processing phases. In Phase I, the region of interest (ROI) is identified by estimating the cavity boundary. Speckle noise reduction and contrast enhancement were implemented in Phase II as preprocessing tasks. In Phase III, the RV cavity region was segmented by generating an intensity threshold, which was used once for all frames. Finally, Phase IV extracts the RV endocardial contour over a complete cardiac cycle using a combination of shape-based contour detection and an improved radial search algorithm. The proposed method was applied to 16 datasets of 3D echocardiography encompassing the RV in long-axis view. The accuracy of the experimental results was evaluated qualitatively and quantitatively by comparing the segmentation results of the RV cavity, based on endocardial contour extraction, with the ground truth. The comparative analysis shows that the proposed method performs efficiently in all datasets, with an overall performance of 95%; the root mean square distance (RMSD) for RV endocardial contours was 2.21 ± 0.35 mm (mean ± SD).
Estimating Building Age with 3D GIS
NASA Astrophysics Data System (ADS)
Biljecki, F.; Sindram, M.
2017-10-01
Building datasets (e.g. footprints in OpenStreetMap and 3D city models) are becoming increasingly available worldwide. However, the thematic (attribute) aspect is not always given attention, as many of such datasets are lacking in completeness of attributes. A prominent attribute of buildings is the year of construction, which is useful for some applications, but its availability may be scarce. This paper explores the potential of estimating the year of construction (or age) of buildings from other attributes using random forest regression. The developed method has a two-fold benefit: enriching datasets and quality control (verification of existing attributes). Experiments are carried out on a semantically rich LOD1 dataset of Rotterdam in the Netherlands using 9 attributes. The results are mixed: the accuracy in the estimation of building age depends on the available information used in the regression model. In the best scenario we have achieved predictions with an RMSE of 11 years, but in more realistic situations with limited knowledge about buildings the error is much larger (RMSE = 26 years). Hence the main conclusion of the paper is that inferring building age with 3D city models is possible to a certain extent because it reveals the approximate period of construction, but precise estimations remain a difficult task.
de Boer, Bouke A; Soufan, Alexandre T; Hagoort, Jaco; Mohun, Timothy J; van den Hoff, Maurice J B; Hasman, Arie; Voorbraak, Frans P J M; Moorman, Antoon F M; Ruijter, Jan M
2011-01-01
Interpretation of the results of anatomical and embryological studies relies heavily on proper visualization of complex morphogenetic processes and patterns of gene expression in a three-dimensional (3D) context. However, reconstruction of complete 3D datasets is time consuming and often researchers study only a few sections. To help in understanding the resulting 2D data we developed a program (TRACTS) that places such arbitrary histological sections into a high-resolution 3D model of the developing heart. The program places sections correctly, robustly and as precisely as the best of the fits achieved by five morphology experts. Dissemination of 3D data is severely hampered by the 2D medium of print publication. Many insights gained from studying the 3D object are very hard to convey using 2D images and are consequently lost or cannot be verified independently. It is possible to embed 3D objects into a pdf document, which is a format widely used for the distribution of scientific papers. Using the freeware program Adobe Reader to interact with these 3D objects is reasonably straightforward; creating such objects is not. We have developed a protocol that describes, step by step, how 3D objects can be embedded into a pdf document. Both the use of TRACTS and the inclusion of 3D objects in pdf documents can help in the interpretation of 2D and 3D data, and will thus optimize communication on morphological issues in developmental biology.
Accuracy and Specific Value of Cardiovascular 3D-Models in Pediatric CT-Angiography.
Hammon, Matthias; Rompel, Oliver; Seuss, Hannes; Dittrich, Sven; Uder, Michael; Rüffer, Andrè; Cesnjevar, Robert; Ehret, Nicole; Glöckler, Martin
2017-12-01
Computed tomography (CT)-angiography is routinely performed prior to catheter-based and surgical treatment in congenital heart disease. To date, little is known about the accuracy and advantages of different 3D reconstructions of CT data, although exact anatomical information is crucial. We analyzed 35 consecutive CT-angiographies of infants with congenital heart disease. All datasets were reconstructed three-dimensionally using volume rendering technique (VRT) and threshold-based segmentation (stereolithographic model, STL); additionally, two-dimensional maximum intensity projections (MIP) were reconstructed. In each resulting image, the diameters of four different vessels were measured and compared to the reference standard, measured via multiplanar reformation (MPR). The measurements obtained via the STL, MIP and VRT images differed significantly (p < 0.05) from the reference standard. The mean difference was 0.0 mm for STL images, -0.1 mm for MIP images, and -0.3 mm for VRT images. The range of the differences was -0.7 to 1.0 mm for STL images, -0.6 to 0.5 mm for MIP images and -1.1 to 0.7 mm for VRT images. There was an excellent correlation between the STL, MIP and VRT measurements and the reference standard, and inter-reader reliability was excellent (p < 0.01). STL models of cardiovascular structures are more accurate than the traditional VRT models; additionally, they can be standardized and are reproducible.
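The bias and range statistics reported for each reconstruction can be reproduced with a small helper; a sketch (names are mine), assuming paired diameter measurements against the MPR reference:

```python
import numpy as np

def agreement_stats(measured, reference):
    """Bias (mean difference) and range of differences between a
    reconstruction's diameter measurements and a reference standard."""
    d = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    return d.mean(), d.min(), d.max()
```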
NASA Astrophysics Data System (ADS)
Nguyen, D. T.; Bertholet, J.; Kim, J.-H.; O'Brien, R.; Booth, J. T.; Poulsen, P. R.; Keall, P. J.
2018-01-01
Increasing evidence suggests that intrafraction tumour motion monitoring needs to include both 3D translations and 3D rotations. Presently, methods to estimate the rotation motion require the 3D translation of the target to be known first. However, ideally, translation and rotation should be estimated concurrently. We present the first method to directly estimate six-degree-of-freedom (6DoF) motion from the target’s projection on a single rotating x-ray imager in real-time. This novel method is based on the linear correlations between the superior-inferior translations and the motion in the other five degrees-of-freedom. The accuracy of the method was evaluated in silico with 81 liver tumour motion traces from 19 patients with three implanted markers. The ground-truth motion was estimated using the current gold standard method where each marker’s 3D position was first estimated using a Gaussian probability method, and the 6DoF motion was then estimated from the 3D positions using an iterative method. The 3D position of each marker was projected onto a gantry-mounted imager with an imaging rate of 11 Hz. After an initial 110° gantry rotation (200 images), a correlation model between the superior-inferior translations and the five other DoFs was built using a least square method. The correlation model was then updated after each subsequent frame to estimate 6DoF motion in real-time. The proposed algorithm had an accuracy (±precision) of -0.03 ± 0.32 mm, -0.01 ± 0.13 mm and 0.03 ± 0.52 mm for translations in the left-right (LR), superior-inferior (SI) and anterior-posterior (AP) directions respectively; and, 0.07 ± 1.18°, 0.07 ± 1.00° and 0.06 ± 1.32° for rotations around the LR, SI and AP axes respectively on the dataset. The first method to directly estimate real-time 6DoF target motion from segmented marker positions on a 2D imager was devised. 
The algorithm was evaluated using 81 motion traces from 19 liver patients and was found to have sub-mm and sub-degree accuracy.
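The core of the method, fitting each non-SI degree of freedom as a linear function of the superior-inferior translation, can be sketched with an ordinary least-squares design matrix. This is a simplified batch version (the published method updates the model frame by frame):

```python
import numpy as np

def fit_correlation_model(si, other_dofs):
    """Least-squares linear fit (intercept + slope) of each non-SI
    degree of freedom against the superior-inferior translation.
    si: shape (n,); other_dofs: shape (n, n_dofs)."""
    design = np.column_stack([np.ones_like(si), si])   # [1, SI]
    coeffs, *_ = np.linalg.lstsq(design, other_dofs, rcond=None)
    return coeffs                                       # shape (2, n_dofs)

def predict_dofs(si, coeffs):
    """Estimate the other DoFs from new SI translations."""
    return np.column_stack([np.ones_like(si), si]) @ coeffs
```

In the real-time setting, `fit_correlation_model` would be re-run (or updated recursively) after every new projection image once the initial 110° arc has been acquired.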
Berthod, L; Whitley, D C; Roberts, G; Sharpe, A; Greenwood, R; Mills, G A
2017-02-01
Understanding the sorption of pharmaceuticals to sewage sludge during waste water treatment processes is important for understanding their environmental fate and in risk assessments. The degree of sorption is defined by the sludge/water partition coefficient (Kd). Experimental Kd values (n = 297) for active pharmaceutical ingredients (n = 148) in primary and activated sludge were collected from the literature. The compounds were classified by their charge at pH 7.4 (44 uncharged, 60 positively and 28 negatively charged, and 16 zwitterions). Univariate models relating log Kd to log Kow for each charge class showed weak correlations (maximum R² = 0.51 for positively charged) with no overall correlation for the combined dataset (R² = 0.04). Weaker correlations were found when relating log Kd to log Dow. Three sets of molecular descriptors (Molecular Operating Environment, VolSurf and ParaSurf) encoding a range of physico-chemical properties were used to derive multivariate models using stepwise regression, partial least squares and Bayesian artificial neural networks (ANN). The best predictive performance was obtained with ANN, with R² = 0.62-0.69 for these descriptors using the complete dataset. Use of the more complex VolSurf and ParaSurf descriptors showed little improvement over the Molecular Operating Environment descriptors. The most influential descriptors in the ANN models, identified by automatic relevance determination, highlighted the importance of hydrophobicity, charge and molecular shape effects in these sorbate-sorbent interactions. The heterogeneous nature of the different sewage sludges used to measure Kd limited the predictability of sorption from physico-chemical properties of the pharmaceuticals alone. Standardization of test materials for the measurement of Kd would improve comparability of data from different studies, in the long term leading to better quality environmental risk assessments. Copyright © 2016 British Geological Survey, NERC.
Published by Elsevier B.V. All rights reserved.
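The univariate correlations reported above (e.g. log Kd against log Kow) amount to the coefficient of determination of a one-variable least-squares fit; a minimal sketch:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a univariate least-squares
    fit of y on x (e.g. log Kd against log Kow)."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```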
Cosmic sculpture: a new way to visualise the cosmic microwave background
NASA Astrophysics Data System (ADS)
Clements, D. L.; Sato, S.; Portela Fonseca, A.
2017-01-01
3D printing presents an attractive alternative to visual representation of physical datasets such as astronomical images that can be used for research, outreach or teaching purposes, and is especially relevant to people with a visual disability. We here report the use of 3D printing technology to produce a representation of the all-sky cosmic microwave background (CMB) intensity anisotropy maps produced by the Planck mission. The success of this work in representing key features of the CMB is discussed as is the potential of this approach for representing other astrophysical data sets. 3D printing such datasets represents a highly complementary approach to the usual 2D projections used in teaching and outreach work, and can also form the basis of undergraduate projects. The CAD files used to produce the models discussed in this paper are made available.
Establishing a threshold for the number of missing days using 7 d pedometer data.
Kang, Minsoo; Hart, Peter D; Kim, Youngdeok
2012-11-01
The purpose of this study was to examine the threshold for the number of missing days that can be recovered using the individual information (II)-centered approach. Data for this study came from 86 participants, aged 17 to 79 years, who had 7 consecutive days of complete pedometer (Yamax SW-200) wear. Missing datasets (1 d through 5 d missing) were created by a SAS random process 10,000 times each. All missing values were replaced using the II-centered approach. A 7 d average was calculated for each dataset, including the complete dataset. Repeated-measures ANOVA was used to determine the differences between the 1 d through 5 d missing datasets and the complete dataset. Mean absolute percentage error (MAPE) was also computed. The mean (SD) daily step count for the complete 7 d dataset was 7979 (3084). Mean (SD) values for the 1 d through 5 d missing datasets were 8072 (3218), 8066 (3109), 7968 (3273), 7741 (3050) and 8314 (3529), respectively (p > 0.05). The lowest MAPEs were estimated for 1 d missing (5.2%, 95% confidence interval (CI) 4.4-6.0) and 2 d missing (8.4%, 95% CI 7.0-9.8), while all others were greater than 10%. The results of this study show that the 1 d through 5 d missing datasets, with replaced values, were not significantly different from the complete dataset. Based on the MAPE results, it is not recommended to replace more than two days of missing step counts.
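One simple reading of the II-centered replacement, filling each missing day with that individual's mean over the observed days, can be sketched as follows (the published procedure may differ in detail):

```python
def replace_missing_days(steps):
    """Fill missing daily step counts (None) with the individual's
    mean over the observed days -- one plausible reading of the
    individual-information-centred approach."""
    observed = [s for s in steps if s is not None]
    individual_mean = sum(observed) / len(observed)
    return [individual_mean if s is None else s for s in steps]

def mape(estimate, truth):
    """Mean absolute percentage error of a weekly average estimate."""
    return abs(estimate - truth) / truth * 100.0
```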
3D Partition-Based Clustering for Supply Chain Data Management
NASA Astrophysics Data System (ADS)
Suhaibah, A.; Uznir, U.; Anton, F.; Mioc, D.; Rahman, A. A.
2015-10-01
Supply Chain Management (SCM) is the management of the flow of products and goods from their point of origin to the point of consumption. During the SCM process, the information and datasets gathered for this application are massive and complex, owing to its several processes such as procurement, product development and commercialization, physical distribution, outsourcing and partnerships. For practical application, SCM datasets need to be managed and maintained to provide better service to their three main categories: distributors, customers and suppliers. To manage these datasets, a data constellation structure is used to accommodate the data in a spatial database. However, this situation creates several problems in the geospatial database; for example, database performance deteriorates, especially during query operations. We strongly believe that a more practical hierarchical tree structure is required for efficient SCM processing. Moreover, a three-dimensional approach is required for the management of SCM datasets, since they involve multi-level locations such as shop lots and residential apartments. The 3D R-Tree has been increasingly used for 3D geospatial database management due to its simplicity and extendibility; however, it suffers from serious overlaps between nodes. In this paper, we propose partition-based clustering for the construction of a hierarchical tree structure. Several datasets were tested using the proposed method, and the percentage of overlapping nodes and the volume coverage were computed and compared with the original 3D R-Tree and other practical approaches. The experiments presented in this paper substantiate that the hierarchical structure of the proposed partition-based clustering is capable of preserving minimal overlap and coverage. The query performance was tested using 300,000 points of a SCM dataset, and the results are presented in this paper. This paper also discusses the outlook of the structure for future reference.
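The overlap that the partition-based clustering tries to minimize can be quantified per node pair as the intersection volume of axis-aligned 3D bounding boxes; a minimal sketch:

```python
def box_overlap_volume(a, b):
    """Overlap volume of two axis-aligned 3D boxes, each given as
    (xmin, ymin, zmin, xmax, ymax, zmax) -- the quantity an R-Tree
    packing strategy tries to minimise between sibling nodes."""
    volume = 1.0
    for axis in range(3):
        lo = max(a[axis], b[axis])
        hi = min(a[axis + 3], b[axis + 3])
        if hi <= lo:          # disjoint along this axis
            return 0.0
        volume *= hi - lo
    return volume
```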
On-line 3D motion estimation using low resolution MRI
NASA Astrophysics Data System (ADS)
Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.
2015-08-01
Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible, and MRI provides the tissue signal needed for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimates. Potentially, spatially undersampled images yield comparable motion estimates, while their acquisition times would be greatly reduced due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations were extracted consistently using the same motion estimation algorithm for each downsampled dataset. Errors between the original and the respectively downsampled versions of the dataset were then evaluated. Compared to ground truth, the results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In an MRI therapy guidance scenario, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.
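The gradual spatial downsampling used in the experiment can be mimicked by block averaging a 3D volume; a sketch (not the authors' resampling code):

```python
import numpy as np

def downsample(volume, factor):
    """Spatially downsample a 3D volume by block averaging, mimicking
    the coarser voxel grids used to test motion-estimation robustness."""
    f = factor
    # Crop each axis to a multiple of the factor before reshaping.
    x, y, z = (s - s % f for s in volume.shape)
    v = volume[:x, :y, :z]
    return v.reshape(x // f, f, y // f, f, z // f, f).mean(axis=(1, 3, 5))
```

A factor of 2 applied to a (2.5 mm)³ grid yields the (5 mm)³ grid discussed above, with an eightfold reduction in voxel count.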
Clinical applications of three-dimensional tortuosity metrics
NASA Astrophysics Data System (ADS)
Dougherty, Geoff; Johnson, Michael J.
2007-03-01
The measurement of abnormal vascular tortuosity is important in the diagnosis of many diseases. Metrics based on three-dimensional (3-D) curvature, using approximate polynomial spline-fitting to "data balls" centered along the mid-line of the vessel, minimize digitization errors and give tortuosity values largely independent of the resolution of the imaging system. In order to establish their clinical validity we applied them to a number of clinical vascular systems, using both 2-D (standard angiograms and retinal images) and 3-D datasets (from computed tomography angiography (CTA) and magnetic resonance angiography (MRA)). Using the abdominal aortograms, we found that the metrics correlated well with the ranking of an expert panel of three vascular surgeons. Both the mean curvature and the root-mean-square curvature provided good discrimination between vessels of different tortuosity, and using a data-ball size of one-quarter of the local vessel radius in the spline fitting gave consistent results. Tortuous retinal vessels resulting from retinitis or diabetes, but not from vasculitis, could be distinguished from normal vessels. Tortuosity metrics based on 3-D datasets gave higher values than their 2-D projections and could easily be implemented in automatic measurements. They produced values sufficiently discriminating to assess the relative utility of arteries for endoluminal repair of aneurysms.
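A discrete stand-in for these curvature-based metrics is the pointwise curvature |r′ × r″| / |r′|³ along a sampled centreline, averaged over the vessel. This sketch uses finite differences rather than the paper's data-ball spline fitting:

```python
import numpy as np

def mean_curvature(points):
    """Mean pointwise curvature |r' x r''| / |r'|^3 of a 3D centreline
    sampled as an (n, 3) array. The formula is invariant to the
    (uniform) parametrisation speed, so index spacing suffices."""
    d1 = np.gradient(points, axis=0)        # first derivative per sample
    d2 = np.gradient(d1, axis=0)            # second derivative per sample
    cross = np.cross(d1, d2)
    kappa = np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3
    return kappa.mean()
```

A circle of radius R has constant curvature 1/R, which makes a convenient check; a straight line has curvature zero.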
Automatic co-segmentation of lung tumor based on random forest in PET-CT images
NASA Astrophysics Data System (ADS)
Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian
2016-03-01
In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed in the CT images and initial connected regions are obtained by thresholding-based segmentation of the PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures which have standardized uptake values (SUV) similar to the lung tumor but do not satisfy a monotonic property in the PET images; and (3) fine segmentation, in which the random forest method is applied to accurately segment the lung tumor by extracting effective features from the PET and CT images simultaneously. We validated our algorithm on a dataset of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. Correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.
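The TPVF/FPVF figures quoted above can be computed from binary masks as follows; note that FPVF conventions vary between papers, and this sketch normalizes false positives by the background volume:

```python
import numpy as np

def overlap_metrics(seg, truth):
    """True/false positive volume fractions (percent) of a binary
    segmentation against a ground-truth mask."""
    seg = np.asarray(seg, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tpvf = 100.0 * np.logical_and(seg, truth).sum() / truth.sum()
    fpvf = 100.0 * np.logical_and(seg, ~truth).sum() / (~truth).sum()
    return tpvf, fpvf
```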
NASA Astrophysics Data System (ADS)
Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.
2008-06-01
In previous studies we developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FPs derived from the ground-state (GMAX) structures of the molecules were used to build those QSAR models. In this study we generated 4D-FP descriptors from the first-excited-state (EMAX) structures of the molecules. The GMAX, EMAX and combined ground- and excited-state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least squares coupled logistic regression (PLS-CLR), found to be effective model-building methods for the LLNA skin-sensitization measures in our previous studies, were used again here. This also permitted comparison of the prior ground-state models to those involving first-excited-state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences exist among the LR 2-state models constructed for the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GEMAX categorical models are also more significant and predictive than the corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.
Multibeam 3D Underwater SLAM with Probabilistic Registration.
Palomer, Albert; Ridao, Pere; Ribas, David
2016-04-20
This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) framework that uses a multibeam echosounder to produce highly consistent underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead-reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) algorithm with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided into two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and the potential of falling into local minima during registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real-world datasets: first, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan-and-tilt unit.
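Inside each point-to-point ICP iteration sits a closed-form least-squares rigid alignment of the associated point pairs. Below is a standard SVD-based (Kabsch) sketch of that update step, without the probabilistic weighting used in the paper:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping matched
    source points onto destination points (dst ~ R @ src + t),
    via the SVD of the cross-covariance matrix."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:               # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = cd - r @ cs
    return r, t
```

A full ICP loop alternates this solve with re-association of nearest neighbours until the registration error stops decreasing.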
von Knobelsdorff-Brenkenhoff, Florian; Gruettner, Henriette; Trauzeddel, Ralf F; Greiser, Andreas; Schulz-Menger, Jeanette
2014-06-01
To omit risks of contrast agent administration, native magnetic resonance angiography (MRA) is desired for assessing the thoracic aorta. The aim was to evaluate a native steady-state free precession (SSFP) three-dimensional (3D) MRA in comparison with contrast-enhanced MRA as the gold standard. Seventy-six prospective patients with known or suspicion of thoracic aortic disease underwent MRA at 1.5 T using (i) native 3D SSFP MRA with ECG and navigator gating and high isotropic spatial resolution (1.3 × 1.3 × 1.3 mm(3)) and (ii) conventional contrast-enhanced ECG-gated gradient-echo 3D MRA (1.3 × 0.8 × 1.8 mm(3)). Datasets were compared at nine aortic levels regarding image quality (score 0-3: 0 = poor, 3 = excellent) and aortic diameters, as well as observer dependency and final diagnosis. Statistical tests included paired t-test, correlation analysis, and Bland-Altman analysis. Native 3D MRA was acquired successfully in 70 of 76 subjects (mean acquisition time 8.6 ± 2.7 min), while irregular breathing excluded 6 of 76 subjects. Aortic diameters agreed close between both methods at all aortic levels (r = 0.99; bias ± SD -0.12 ± 1.2 mm) with low intra- and inter-observer dependency (intraclass correlation coefficient 0.99). Native MRA studies resulted in the same final diagnosis as the contrast-enhanced MRA. The mean image quality score was superior with native compared with contrast-enhanced MRA (2.4 ± 0.6 vs. 1.6 ± 0.5; P < 0.001). Accuracy of aortic size measurements, certainty in defining the diagnosis and benefits in image quality at the aortic root, underscore the use of the tested high-resolution native 3D SSFP MRA as an appropriate alternative to contrast-enhanced MRA to assess the thoracic aorta. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Marcott, Curtis; Lo, Michael; Hu, Qichi; Kjoller, Kevin; Boskey, Adele; Noda, Isao
2014-07-01
The recent combination of atomic force microscopy and infrared spectroscopy (AFM-IR) has made it possible to obtain IR spectra with nanoscale spatial resolution, nearly two orders of magnitude better than conventional Fourier transform infrared (FT-IR) microspectroscopy. This advanced methodology can yield significantly sharper spectral features than are typically seen in conventional IR spectra of inhomogeneous materials, where a wider range of molecular environments is co-averaged by the larger sample cross section being probed. In this work, two-dimensional (2D) correlation analysis is used to examine position-sensitive spectral variations in datasets of closely spaced AFM-IR spectra. This analysis can reveal new key insights, providing a better understanding of spectral information that was previously hidden under broader overlapped spectral features. Two examples of the utility of this new approach are presented. In the first, 2D correlation analysis was applied to a set of AFM-IR spectra collected at 200-nm increments along a line through a nucleation site generated by remelting a small spot on a thin film of poly(3-hydroxybutyrate-co-3-hydroxyhexanoate). Two different crystalline carbonyl band components near 1720 cm⁻¹ sequentially disappear before a band at 1740 cm⁻¹, due to more disordered material, appears. In the second example, 2D correlation analysis was applied to a series of AFM-IR spectra spaced every 1 μm along a thin cross section of a bone sample, measured outward from an osteon center of bone growth. The many changes in the amide I and phosphate band contours suggest that changes in bone structure occur as the bone matures.
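The generalized 2D correlation maps referred to here (Noda's synchronous and asynchronous spectra) have a compact matrix form; a minimal sketch, with random data standing in for a real AFM-IR line scan:

```python
import numpy as np

def noda_2d_correlation(spectra):
    """Generalized 2D correlation (Noda) of a perturbation-ordered spectral set.
    spectra: (m, n) array of m spectra (here, positions along a line scan)
    sampled at n wavenumbers. Returns the synchronous and asynchronous maps."""
    Y = spectra - spectra.mean(axis=0)            # dynamic spectra
    m = Y.shape[0]
    sync = Y.T @ Y / (m - 1)                      # synchronous correlation map
    j, k = np.indices((m, m))
    # Hilbert-Noda transformation matrix (antisymmetric)
    N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = Y.T @ N @ Y / (m - 1)                  # asynchronous correlation map
    return sync, asyn

rng = np.random.default_rng(0)
sync, asyn = noda_2d_correlation(rng.normal(size=(8, 5)))
```

The asynchronous map is what resolves the sequential order of band changes (e.g. which carbonyl component disappears first), since it is antisymmetric in the two wavenumber axes.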
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dooley, James J.
2010-11-01
This paper presents two distinct datasets that describe investments in energy research and development (R&D) by the US private sector since the mid-1970s, which is when the US government began to systematically collect these data. The first dataset is based upon a broad survey of more than 20,000 firms' industrial R&D activities. This broad survey of US industry is coordinated by the US National Science Foundation. The second dataset discussed here is a much narrower accounting of the energy R&D activities of the approximately two dozen largest US oil and gas companies conducted by the US Department of Energy's Energy Information Agency. Even given the large disparity in the breadth and scope of these two surveys of the private sector's support for energy R&D, both datasets tell the same story in terms of the broad outlines of the private sector's investments in energy R&D since the mid-1970s: (1) in the immediate aftermath of the Arab Oil Embargo of 1973, there was a large surge in US private sector investments in energy R&D, which peaked in the period between 1980 and 1982 at approximately $3.7 billion to $6.7 billion per year (in inflation-adjusted 2010 US dollars), depending upon which survey is used; (2) private sector investments in energy R&D declined from this peak until bottoming out at approximately $1 billion to $1.8 billion per year in 1999; (3) US private sector support for energy R&D has recovered somewhat over the past decade and stands at $2.2 billion to $3.4 billion. Both datasets indicate that the US private sector's support for energy R&D has been and remains dominated by fossil energy R&D, and in particular R&D related to the needs of the oil and gas industry.
3D Survey in Complex Archaeological Environments: An Approach by Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Ebolese, D.; Dardanelli, G.; Lo Brutto, M.; Sciortino, R.
2018-05-01
The survey of archaeological sites by appropriate geomatics technologies is an important research topic. In particular, 3D survey by terrestrial laser scanning has become common practice for 3D archaeological data collection. Even though terrestrial laser scanning is quite well established, the complexity of most archaeological contexts can raise many issues and make the survey more difficult. The aim of this work is to describe the methodology chosen for a terrestrial laser scanning survey in a complex archaeological environment, taking into account the issues related to the particular structure of the site. The developed approach was used for the terrestrial laser scanning survey and documentation of part of the archaeological site of Elaiussa Sebaste in Turkey. The proposed technical solutions provided an accurate and detailed 3D dataset of the study area, from which further products useful for archaeological analysis were also obtained.
An application of cascaded 3D fully convolutional networks for medical image segmentation.
Roth, Holger R; Oda, Hirohisa; Zhou, Xiangrong; Shimizu, Natsuki; Yang, Ying; Hayashi, Yuichiro; Oda, Masahiro; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku
2018-06-01
Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ∼10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5 to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Copyright © 2018 Elsevier Ltd. All rights reserved.
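The coarse-to-fine candidate-region step can be sketched as a padded bounding-box crop around the first-stage mask before the fine-stage network runs; the sizes and margin below are illustrative, not the paper's:

```python
import numpy as np

def candidate_crop(volume, coarse_mask, margin=8):
    """Coarse-to-fine cropping: bound the candidate region found by a
    first-stage model, pad it by a safety margin, clip to the volume, and
    return the sub-volume (plus its slices) for fine segmentation."""
    idx = np.argwhere(coarse_mask)                     # voxel coordinates of the mask
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    sl = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[sl], sl

vol = np.zeros((64, 64, 64))
mask = np.zeros(vol.shape, dtype=bool)
mask[20:30, 24:40, 10:12] = True                       # toy coarse prediction
sub, sl = candidate_crop(vol, mask, margin=4)
```

In the cascaded setup the second FCN then only classifies the voxels of `sub`, which is how the ∼90% reduction in classified voxels arises.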
Application Perspective of 2D+SCALE Dimension
NASA Astrophysics Data System (ADS)
Karim, H.; Rahman, A. Abdul
2016-09-01
Different applications and users need different abstractions of spatial models, dimensionalities and dataset specifications, owing to variations in the required analyses and outputs. Various approaches, data models and data structures are now available to support most current application models in Geographic Information Systems (GIS). One focus of the GIS multi-dimensional research community is the implementation of a scale dimension with spatial datasets to suit the needs of applications at various scales. In this paper, 2D spatial datasets that have been scaled up to form a third dimension are referred to as 2D+scale (or 3D-scale) datasets. Various data structures, data models, approaches, schemas and formats have been proposed as the best means to support the variety of applications and dimensionalities in 3D topology; however, only a few of them consider scale as their targeted dimension. Where the scale dimension is concerned, the implementation approach can be either multi-scale or vario-scale (with any available data structure and format), depending on application requirements (topology, semantics and function). This paper discusses current and potential new applications that could benefit from the 3D-scale dimension approach. Previous and current work on the scale dimension, the requirements to be preserved for any given application, implementation issues and future potential applications form the major discussion of this paper.
Morariu, Cosmin Adrian; Terheiden, Tobias; Dohle, Daniel Sebastian; Tsagakis, Konstantinos; Pauli, Josef
2016-02-01
Our goal is to provide precise measurements of the aortic dimensions in dissection pathologies. Quantification of surface lengths and aortic radii/diameters, together with visualization of the dissection membrane, is a crucial prerequisite for enabling minimally invasive treatment of type A dissections, which always also involve the ascending aorta. We seek a measure invariant to luminance and contrast for aortic outer wall segmentation. Therefore, we propose a 2D graph-based approach using phase congruency combined with additional features. Phase congruency is extended to 3D by designing a novel conic directional filter and adding a lowpass component to the 3D Log-Gabor filterbank for extracting the fine dissection membrane, which separates the true lumen from the false one within the aorta. The result of the outer wall segmentation is compared with manually annotated axial slices belonging to 11 CTA datasets. Quantitative assessment of our novel 2D/3D membrane extraction algorithms has been obtained for 10 datasets and reveals subvoxel accuracy in all cases. Aortic inner and outer surface lengths, determined within 2 cadaveric CT datasets, are validated against manual measurements performed by a vascular surgeon on the excised aortas of the body donors. This contribution proposes a complete pipeline for segmentation and quantification of aortic dissections. Validation of the 3D contour length quantification against ground truth represents a significant step toward custom-designed stent-grafts.
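The radial part of a Log-Gabor filterbank like the one mentioned above has a standard closed form; a minimal 1D sketch (the paper's 3D version additionally applies directional, e.g. conic, weighting and a lowpass component, which are not reproduced here):

```python
import numpy as np

def log_gabor(freqs, f0, sigma_ratio=0.65):
    """Radial log-Gabor transfer function
    G(f) = exp(-ln(f/f0)^2 / (2 ln(sigma_ratio)^2)).
    By construction G -> 0 as f -> 0, so the filter has zero DC response."""
    f = np.asarray(freqs, float)
    out = np.zeros_like(f)
    nz = f > 0
    out[nz] = np.exp(-np.log(f[nz] / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    return out

# transfer function sampled on normalized frequencies, peak at f0 = 0.1
g = log_gabor(np.linspace(0, 0.5, 101), f0=0.1)
```

The zero-DC property is what makes log-Gabor responses, and hence phase congruency, insensitive to overall luminance, the invariance the authors are after.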
A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures.
DeCost, Brian L; Holm, Elizabeth A
2016-12-01
This data article presents a data set comprising 2048 synthetic scanning electron microscope (SEM) images of powder materials and descriptions of the corresponding 3D structures that they represent. These images were created using open-source rendering software, and the generating scripts are included with the data set. Eight particle size distributions (PSDs) are represented, with 256 independent images from each. The particle size distributions are relatively similar to each other, so the dataset offers a useful benchmark for assessing the fidelity of image analysis techniques. The characteristics of the PSDs and the resulting images are described and analyzed in more detail in the research article "Characterizing powder materials using keypoint-based computer vision methods" (B.L. DeCost, E.A. Holm, 2016) [1]. These data are freely available in a Mendeley Data archive, "A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures" (B.L. DeCost, E.A. Holm, 2016), located at http://dx.doi.org/10.17632/tj4syyj9mr.1 [2] for any academic, educational, or research purposes.
Compression and accelerated rendering of volume data using DWT
NASA Astrophysics Data System (ADS)
Kamath, Preyas; Akleman, Ergun; Chan, Andrew K.
1998-09-01
2D images cannot convey information on object depth and location relative to surfaces. The medical community is increasingly using 3D visualization techniques to view data from CT scans, MRI, etc. 3D images provide more information on depth and location in the spatial domain, helping surgeons make better diagnoses. 3D images can be constructed from 2D images using 3D scalar algorithms. With recent advances in communication techniques, it is possible for doctors to diagnose and plan treatment of a patient at a remote location by transmitting the relevant patient data via telephone lines. If this information is to be reconstructed in 3D, then 2D images must be transmitted. However, 2D dataset storage occupies a lot of memory, and visualization algorithms are slow. We describe in this paper a scheme that reduces the data transfer time by transmitting only the information that the doctor wants. Compression is achieved by reducing the amount of data transferred, using the 3D wavelet transform applied to 3D datasets. Since the wavelet transform is localized in the frequency and spatial domains, we transmit detail only in the region where the doctor needs it. Since only the ROI (region of interest) is reconstructed in detail, we need to render only the ROI in detail, and can thus reduce the rendering time.
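The scheme rests on a separable 3D DWT whose subbands are spatially localized, so detail coefficients outside the ROI can be withheld. A minimal one-level 3D Haar sketch (the paper does not specify its wavelet; Haar is used here for simplicity):

```python
import numpy as np

def haar_1d(x, axis):
    """One-level orthonormal Haar transform along one (even-length) axis:
    returns the lowpass (average) and highpass (detail) half-length bands."""
    a = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def haar_3d(vol):
    """One-level separable 3D Haar DWT: returns the 8 subbands keyed
    'LLL'..'HHH' (L = lowpass, H = highpass, one letter per axis)."""
    bands = {'': vol}
    for axis in range(3):
        new = {}
        for key, data in bands.items():
            lo, hi = haar_1d(data, axis)
            new[key + 'L'] = lo
            new[key + 'H'] = hi
        bands = new
    return bands

vol = np.arange(4 ** 3, dtype=float).reshape(4, 4, 4)
bands = haar_3d(vol)
```

Transmitting only 'LLL' gives a coarse volume everywhere; the seven detail bands need only be sent (and rendered) for the ROI's spatial extent.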
2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.
Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter
2014-01-01
3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that might otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline.
The scanned images combine information from the spatial (MRI) and mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object, performed by image registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology driven, i.e. a high-resolution digital scan of the histologically stained slices. After fusion of the reconstructed scan images and the MRI, the slice-related coordinates of the mass spectra can be propagated into 3D space. After image registration of the scan images and the histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. The described pipeline thus yields a set of three-dimensional images representing the same anatomies: the reconstructed slice scans, the spectral images with corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI, providing anatomical details, improves the interpretation of 3D MALDI images. The ability to relate mass spectrometry derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.
Automatic 3D liver location and segmentation via convolutional neural network and graph cut.
Lu, Fang; Wu, Fa; Hu, Peijun; Peng, Zhiyi; Kong, Dexing
2017-02-01
Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneous liver detection and probabilistic segmentation using a 3D convolutional neural network; (ii) refinement of the initial segmentation with graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases, MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean values of volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9%, 2.7%, 0.91 mm, 1.88 mm and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean values of VOE, RVD, ASD, RMSD and MSD are 9.36%, 0.97%, 1.89 mm, 4.15 mm and 33.14 mm, respectively. The proposed method is fully automatic, without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.
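The VOE and RVD figures quoted above follow the standard definitions used in the Sliver07 evaluation; a minimal sketch on toy masks (the mask shapes are chosen purely for illustration):

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Volumetric overlap error (VOE) and relative volume difference (RVD),
    both in percent, between a segmentation and a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    voe = 100.0 * (1.0 - inter / union)            # 0% = perfect overlap
    rvd = 100.0 * (seg.sum() - ref.sum()) / ref.sum()  # signed volume error
    return voe, rvd

ref = np.zeros((10, 10, 10), bool); ref[2:8, 2:8, 2:8] = True   # 216 voxels
seg = np.zeros(ref.shape, bool);    seg[3:8, 2:8, 2:8] = True   # 180 voxels
voe, rvd = overlap_metrics(seg, ref)
```

Note that RVD is signed, so a 0% RVD does not imply a correct segmentation; it is reported together with VOE and the surface distances for that reason.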
Pb Speciation Data to Estimate Lead Bioavailability to Quail
Linear combination fitting data for lead speciation of soil samples evaluated through an in-vivo/in-vitro correlation for quail exposure. This dataset is associated with the following publication: Beyer, W.N., N. Basta, R. Chaney, P. Henry, D. Mosby, B. Rattner, K. Scheckel, D. Sprague, and J. Weber. Bioaccessibility tests accurately estimate bioavailability of lead to quail. Environmental Toxicology and Chemistry (G.A. Burton, Jr., and C.H. Ward, eds.), Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 35(9): 2311-2319, (2016).
Characterizing Time Series Data Diversity for Wind Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong
Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve forecasting accuracy. However, it is challenging to accurately compare the true forecasting performance of different methods and forecasters due to the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of the 1D number axis, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested with five datasets with various degrees of diversity.
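The pipeline described (characteristic indices → PCA → convex-hull volume) can be sketched as follows; the six indices here are random stand-ins for the real time-series characteristics, and SVD is used as the PCA step:

```python
import numpy as np
from scipy.spatial import ConvexHull

def diversity_volume(indices, n_components=3):
    """Project per-series characteristic indices onto their first principal
    components (via SVD of the centered matrix) and quantify dataset
    diversity as the volume of the convex hull in PC space."""
    X = indices - indices.mean(axis=0)           # center each index
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:n_components].T                # scores in PC space
    # ConvexHull.volume is hull volume in 3D (area in 2D)
    return ConvexHull(pcs).volume

# hypothetical: 50 wind time series, each summarized by 6 characteristic indices
rng = np.random.default_rng(1)
vol3d = diversity_volume(rng.normal(size=(50, 6)))
```

A larger hull volume means the test set spans a wider region of characteristic space, i.e. a more diverse benchmark.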
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2016-12-01
Apoptosis, or programmed cell death, plays a central role in the development and homeostasis of an organism. Obtaining information on the subcellular location of apoptosis proteins is very helpful for understanding the apoptosis mechanism. The prediction of the subcellular localization of an apoptosis protein is still a challenging task, and existing methods are mainly based on protein primary sequences. In this paper, we introduce a new position-specific scoring matrix (PSSM)-based method using the detrended cross-correlation (DCCA) coefficient of non-overlapping windows. A 190-dimensional (190D) feature vector is constructed on two widely used datasets, CL317 and ZD98, and a support vector machine is adopted as the classifier. To evaluate the proposed method, objective and rigorous jackknife cross-validation tests are performed on the two datasets. The results show that our approach offers a novel and reliable PSSM-based tool for the prediction of apoptosis protein subcellular localization. Copyright © 2016 Elsevier Inc. All rights reserved.
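The DCCA coefficient over non-overlapping windows can be sketched as follows; applied pairwise to PSSM columns it would yield features of the kind described (the window length and series here are arbitrary choices, not the paper's):

```python
import numpy as np

def dcca_coefficient(x, y, win=10):
    """Detrended cross-correlation (DCCA) coefficient over non-overlapping
    windows: integrate both series, linearly detrend each window, then take
    the ratio of the detrended covariance to the product of the detrended
    standard deviations. Returns a value in [-1, 1]."""
    X = np.cumsum(x - np.mean(x))                 # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(win)
    cov = vx = vy = 0.0
    for s in range(0, len(X) - win + 1, win):     # non-overlapping windows
        xs, ys = X[s:s + win], Y[s:s + win]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)  # detrended residuals
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        cov += (rx * ry).mean()
        vx += (rx * rx).mean()
        vy += (ry * ry).mean()
    return cov / np.sqrt(vx * vy)

rng = np.random.default_rng(2)
s = rng.normal(size=100)
rho_self = dcca_coefficient(s, s)                 # self-correlation -> 1
```

Computing this coefficient for each pair of the 20 PSSM columns gives 20·19/2 = 190 values, matching the 190D feature vector described above.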
Wang, X; Henzler, T; Gawlitza, J; Diehl, S; Wilhelm, T; Schoenberg, S O; Jin, Z Y; Xue, H D; Smakic, A
2016-11-01
Dynamic volume perfusion CT (dVPCT) provides valuable information on tissue perfusion in patients with hepatocellular carcinoma (HCC) and pancreatic cancer. However, dVPCT is currently often performed in addition to conventional CT acquisitions due to the limited morphologic image quality of dose-optimized dVPCT protocols. The aim of this study was to prospectively compare objective and subjective image quality, lesion detectability and radiation dose between mean temporal arterial (mTA) and mean temporal portal venous (mTPV) images calculated from low-dose dVPCT datasets and linearly blended 120-kVp arterial and portal venous datasets in patients with HCC and pancreatic cancer. All patients gave written informed consent for this institutional review board-approved, HIPAA-compliant study. 27 consecutive patients (18 men, 9 women, mean age 69.1 ± 9.4 years) with histologically proven HCC or suspected pancreatic cancer were prospectively enrolled. The study CT protocol included a dVPCT protocol performed with 70 or 80 kVp tube voltage (18 spiral acquisitions, 71.2 s total acquisition time) and a standard dual-energy (90/150 kVp Sn) arterial and portal venous acquisition performed 25 min after the dVPCT. The mTA and mTPV images were manually reconstructed from the 3 to 5 visually best single arterial and 3 to 5 best single portal venous phases of the dVPCT dataset. The linearly blended 120-kVp images were calculated from dual-energy CT (DECT) raw data. Image noise, SNR, and CNR of the liver, abdominal aorta (AA) and main portal vein (PV) were compared between the mTA/mTPV and the linearly blended 120-kVp dual-energy arterial and portal venous datasets, respectively. Subjective image quality was evaluated by two radiologists regarding subjective image noise, sharpness and overall diagnostic image quality using a 5-point Likert scale.
In addition, liver lesion detectability was assessed for each liver segment by the two radiologists using the linearly blended 120-kVp arterial and portal venous datasets as the reference standard. Image noise, SNR and CNR values of the mTA and mTPV were significantly higher when compared to the corresponding linearly blended arterial and portal venous 120-kVp datasets (all p<0.001), except for image noise within the PV in the portal venous phases (p=0.136). Image quality of mTA and mTPV was rated significantly better when compared to the linearly blended 120-kVp arterial and portal venous datasets. Both readers were able to detect all liver lesions found on the linearly blended 120-kVp arterial and portal venous datasets using the mTA and mTPV datasets. The effective radiation dose of the dVPCT was 27.6 mSv for the 80 kVp protocol and 14.5 mSv for the 70 kVp protocol. The mean effective radiation dose for the linearly blended 120-kVp arterial and portal venous CT protocol of the upper abdomen was 5.60 mSv ± 1.48 mSv. Our preliminary data suggest that the subjective and objective image quality of mTA and mTPV datasets calculated from low-kVp dVPCT datasets is non-inferior to linearly blended 120-kVp arterial and portal venous acquisitions in patients with HCC and pancreatic cancer. Thus, dVPCT could be used as a stand-alone imaging technique without additionally performed conventional arterial and portal venous CT acquisitions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tscherning, Carl Christian; Herceg, Matija
2014-05-01
The methods of Least-Squares Collocation (LSC) and the Reduced Point Mass method (RPM) both use radial basis functions for the representation of the anomalous gravity potential (T). LSC uses as many basis functions as the number of observations, while the RPM method uses as many as deemed necessary. Both methods have been evaluated, and for some tests compared, in two areas (Central Europe and the South-East Pacific). For both areas, test data had been generated using EGM2008. As observational data, (a) ground gravity disturbances, (b) airborne gravity disturbances, (c) GOCE-like second-order radial derivatives and (d) GRACE along-track potential differences were available. The use of these data for the computation of values of (e) T in a grid was the target of the evaluation and comparison. Because T can in principle only be computed using global data, the remove-restore procedure was used, with EGM2008 subtracted (and later added to T) up to degree 240 for datasets (a) and (b) and up to degree 36 for datasets (c) and (d). The estimated coefficient error was accounted for when using LSC and in the calculation of error estimates. The main result is that T was estimated with the following errors (computed minus control data (e), from which EGM2008 to degree 240 or 36 had been subtracted; LSC used; all values in mgal):

Area Europe          (e)-240  (a)    (b)     (e)-36   (c)     (d)
Mean                 -0.0     0.0    -0.1    -0.1     -0.3    -1.8
Standard deviation   4.1      0.8    2.7     32.6     6.0     19.2
Max. difference      19.9     10.4   16.9    69.9     31.3    47.0
Min. difference      -16.2    -3.7   -15.5   -92.1    -27.8   -65.5

Area Pacific         (e)-240  (a)    (b)     (e)-36   (c)     (d)
Mean                 -0.1     -0.1   -0.1    4.6      -0.2    0.2
Standard deviation   4.8      0.2    1.9     49.1     6.7     18.6
Max. difference      22.2     1.8    13.4    115.5    26.9    26.5
Min. difference      -28.7    -3.1   -15.7   -106.4   -33.6   22.1

The results using RPM with datasets (a), (b) and (c) were comparable. The use of (d) with the RPM method is being implemented.
Tests were also done computing dataset (a) from the other datasets. The results here may serve as a benchmark for other radial basis-function implementations for computing approximations to T. Improvements are certainly possible, e.g. by taking the topography and bathymetry into account.
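The LSC prediction step itself is compact; a minimal sketch with a generic Gaussian covariance standing in for the true degree-variance covariance model of T (the points, field and parameters are synthetic, and the remove-restore step is assumed to have already been applied to the observations):

```python
import numpy as np

def lsc_predict(obs_pts, obs_vals, pred_pts, noise_var=1e-4, scale=1.0):
    """Least-squares collocation sketch: s_hat = C_sx (C_xx + D)^-1 x,
    where C is an isotropic covariance model (Gaussian here, as a stand-in)
    and D is the diagonal observation-noise covariance."""
    def cov(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * scale ** 2))
    Cxx = cov(obs_pts, obs_pts) + noise_var * np.eye(len(obs_pts))
    Csx = cov(pred_pts, obs_pts)
    return Csx @ np.linalg.solve(Cxx, obs_vals)

# hypothetical: recover a smooth residual field on a grid point from scattered samples
rng = np.random.default_rng(3)
pts = rng.uniform(0, 4, size=(60, 2))
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
pred = lsc_predict(pts, vals, np.array([[2.0, 2.0]]))
```

With one basis function per observation, the cost of the solve grows as the cube of the observation count, which is exactly the motivation for RPM-style methods that use fewer basis functions.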
Metric Evaluation Pipeline for 3D Modeling of Urban Scenes
NASA Astrophysics Data System (ADS)
Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.
2017-05-01
Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
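Volumetric completeness and correctness, as used in the pipeline, reduce to recall and precision over occupancy grids; a minimal sketch (the grids below are toy examples, not benchmark data):

```python
import numpy as np

def completeness_correctness(model, truth):
    """Volumetric completeness (recall: fraction of ground-truth volume the
    model recovers) and correctness (precision: fraction of the model's
    volume that is real) between two occupancy grids."""
    model, truth = model.astype(bool), truth.astype(bool)
    tp = np.logical_and(model, truth).sum()     # true-positive voxels
    completeness = tp / truth.sum()
    correctness = tp / model.sum()
    return completeness, correctness

truth = np.zeros((20, 20, 20), bool); truth[5:15, 5:15, 5:15] = True  # 1000 voxels
model = np.zeros(truth.shape, bool);  model[5:15, 5:15, 7:17] = True  # shifted copy
comp, corr = completeness_correctness(model, truth)
```

Reporting both numbers matters: a model covering the whole scene is complete but not correct, and an overly conservative model is correct but not complete.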
Wood, Bradley M; Jia, Guang; Carmichael, Owen; McKlveen, Kevin; Homberger, Dominique G
2018-05-12
3D imaging techniques enable the non-destructive analysis and modeling of complex structures. Among these, MRI exhibits good soft tissue contrast, but is currently less commonly used for non-clinical research than x-ray CT, even though the latter requires contrast-staining that shrinks and distorts soft tissues. When the objective is the creation of a realistic and complete 3D model of soft tissue structures, MRI data are more demanding to acquire and visualize and require extensive post-processing because they comprise non-cubic voxels with dimensions that represent a trade-off between tissue contrast and image resolution. Therefore, thin soft tissue structures with complex spatial configurations are not always visible in a single MRI dataset, so that standard segmentation techniques are not sufficient for their complete visualization. By using the example of the thin and spatially complex connective tissue myosepta in lampreys, we developed a workflow protocol for the selection of the appropriate parameters for the acquisition of MRI data and for the visualization and 3D modeling of soft tissue structures. This protocol includes a novel recursive segmentation technique for supplementing missing data in one dataset with data from another dataset to produce realistic and complete 3D models. Such 3D models are needed for the modeling of dynamic processes, such as the biomechanics of fish locomotion. However, our methodology is applicable to the visualization of any thin soft tissue structures with complex spatial configurations, such as fasciae, aponeuroses, and small blood vessels and nerves, for clinical research and the further exploration of tensegrity. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
Li, Bo; Li, Hao; Dong, Li; Huang, Guofu
2017-11-01
In this study, we sought to investigate the feasibility of fast carotid artery MR angiography (MRA) by combining three-dimensional time-of-flight (3D TOF) with a compressed sensing method (CS-3D TOF). A pseudo-sequential phase encoding order was developed for CS-3D TOF to generate hyper-intense vessels and suppress background tissues in under-sampled 3D k-space. Seven healthy volunteers and one patient with carotid artery stenosis were recruited for this study. Five sequential CS-3D TOF scans were implemented at 1, 2, 3, 4 and 5-fold acceleration factors for carotid artery MRA. Blood signal-to-tissue ratio (BTR) values for the fully-sampled and under-sampled acquisitions were calculated and compared in seven subjects. Blood area (BA) was measured and compared between the fully sampled acquisition and each under-sampled one. There were no significant differences in BTR between the fully-sampled dataset and any under-sampled one (P>0.05 for all comparisons). The carotid vessel BAs measured from the images of CS-3D TOF sequences with 2, 3, 4 and 5-fold acceleration were all highly correlated with that of the fully-sampled acquisition. The contrast between blood vessels and background tissues of the images at 2 to 5-fold acceleration is comparable to that of the fully sampled images, and the images at 2× to 5× exhibit lumen definition comparable to the corresponding images at 1×. By combining the pseudo-sequential phase encoding order, CS reconstruction, and the 3D TOF sequence, this technique provides excellent visualization of the carotid vessels and calcifications in a short scan time. It has the potential to be integrated into current multiple blood contrast imaging protocols. Copyright © 2017. Published by Elsevier Inc.
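Retrospective undersampling of a fully-sampled acquisition, as in the 2- to 5-fold comparisons above, can be sketched as follows; this keeps a random subset of phase-encode lines plus the k-space centre and zero-fills rather than running a true CS reconstruction, and the phantom is synthetic:

```python
import numpy as np

def undersample_recon(image, accel=3, seed=0):
    """Retrospective k-space undersampling sketch: keep roughly 1/accel of the
    phase-encode lines at random (centre lines always kept), zero-fill the
    rest, and return the magnitude reconstruction. A real CS pipeline would
    replace the zero-filled inverse FFT with a sparsity-regularized recovery."""
    k = np.fft.fftshift(np.fft.fft2(image))        # centred k-space
    ny = image.shape[0]
    rng = np.random.default_rng(seed)
    keep = rng.random(ny) < 1.0 / accel            # random phase-encode lines
    keep[ny // 2 - 4: ny // 2 + 4] = True          # fully sample the centre
    mask = np.zeros_like(k)
    mask[keep, :] = 1
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

img = np.zeros((64, 64)); img[28:36, 28:36] = 1.0  # bright "vessel" block
recon = undersample_recon(img)
```

Comparing a BTR-like statistic between `img` and `recon` at each acceleration factor mirrors the study's fully-sampled vs under-sampled comparison.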
Accession numbers for microarray datasets used in Oshida et al., Chemical and Hormonal Effects on STAT5b-Dependent Sexual Dimorphism of the Liver Transcriptome, PLoS One. 2016 Mar 9;11(3):e0150284. This dataset is associated with the following publication: Oshida, K., D. Waxman, and C. Corton. Chemical and Hormonal Effects on STAT5b-Dependent Sexual Dimorphism of the Liver Transcriptome. PLoS ONE. Public Library of Science, San Francisco, CA, USA, 11(3): NA, (2016).
Ameliorating slice gaps in multislice magnetic resonance images: an interpolation scheme.
Kashou, Nasser H; Smith, Mark A; Roberts, Cynthia J
2015-01-01
Standard two-dimensional (2D) magnetic resonance imaging (MRI) clinical acquisition protocols utilize orthogonal-plane images which contain slice gaps (SG). The purpose of this work is to introduce a novel interpolation method for these orthogonal-plane MRI 2D datasets. Three goals can be achieved: (1) increasing the resolution based on a priori knowledge of the scanning protocol, (2) ameliorating the loss of data as a result of SG and (3) reconstructing a three-dimensional (3D) dataset from 2D images. MRI data were collected using a 3T GE scanner and simulated using Matlab. The procedure for validating the MRI data combination algorithm was performed using a Shepp-Logan and a Gaussian phantom in both 2D and 3D of varying matrix sizes (64-512), as well as on one MRI dataset of a human brain and on an American College of Radiology magnetic resonance accreditation phantom. The squared error and mean squared error were computed in comparing this scheme to common interpolating functions employed in MR consoles and workstations. The mean structure similarity matrix was computed in 2D as a means of qualitative image assessment. Additionally, MRI scans were used for qualitative assessment of the method. This new scheme was consistently more accurate than upsampling each orientation separately and averaging the upsampled data. An efficient new interpolation approach to resolve SG was developed. This scheme effectively fills in the missing data points by using orthogonal-plane images. To date, there have been few attempts to combine the information of three MRI plane orientations using brain images. This has specific applications for clinical MRI, functional MRI, diffusion-weighted imaging/diffusion tensor imaging and MR angiography, where 2D slice acquisitions are used. In these cases, the 2D data can be combined using our method in order to obtain a 3D volume.
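The core idea of filling one orientation's slice gaps from an orthogonal acquisition can be sketched in toy form. This is not the authors' interpolation scheme, only a simplified stand-in: assume both stacks have already been resampled onto a common 3D grid, with NaN marking slice-gap voxels:

```python
import numpy as np

def combine_orthogonal_stacks(axial, coronal):
    """Merge two gappy orthogonal multislice stacks on a common 3D grid.
    Where only one stack has data, take it; where both do, average.
    Voxels missing from both stacks remain NaN (a real scheme would
    interpolate them)."""
    out = np.where(np.isnan(axial), coronal, axial)
    both = ~np.isnan(axial) & ~np.isnan(coronal)
    out[both] = 0.5 * (axial[both] + coronal[both])
    return out

# toy volume with a slice gap along a different axis in each acquisition
truth = np.arange(27, dtype=float).reshape(3, 3, 3)
axial = truth.copy();   axial[1, :, :] = np.nan    # gap between axial slices
coronal = truth.copy(); coronal[:, 1, :] = np.nan  # gap between coronal slices
merged = combine_orthogonal_stacks(axial, coronal)
```

Most gap voxels are recovered exactly from the orthogonal stack; only voxels that fall in a gap of every orientation stay unresolved, which is why the paper combines three plane orientations rather than two.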
Extracellular space preservation aids the connectomic analysis of neural circuits
Pallotto, Marta; Watkins, Paul V; Fubara, Boma; Singer, Joshua H; Briggman, Kevin L
2015-01-01
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits. DOI: http://dx.doi.org/10.7554/eLife.08206.001 PMID:26650352
Double Ramp Loss Based Reject Option Classifier
2015-05-22
choose 10% of these points uniformly at random and flip their labels. 2. Ionosphere Dataset [2]: This dataset describes the problem of discriminating good versus bad radar returns based on whether they send some useful information about the ionosphere. There are 34 variables and 351 observations. 3. [Table residue: Ionosphere dataset results (nonlinear classifiers using RBF kernel for both approaches); columns d, LDR (C = 2, γ = 0.125), LDH (C = 16, γ = 0.125), Risk, RR, Acc(unrej...)]
3D landslide motion from a UAV-derived time-series of morphological attributes
NASA Astrophysics Data System (ADS)
Valasia Peppa, Maria; Mills, Jon Philip; Moore, Philip; Miller, Pauline; Chambers, Jon
2017-04-01
Landslides are recognised as dynamic and significantly hazardous phenomena. Time-series observations can improve the understanding of a landslide's complex behaviour and aid assessment of its geometry and kinematics. Conventional quantification of landslide motion involves the installation of survey markers into the ground at discrete locations and periodic observations over time. However, such surveying is labour intensive, provides limited spatial resolution, is occasionally hazardous for steep terrain, or even impossible for inaccessible mountainous areas. The emergence of mini unmanned aerial vehicles (UAVs) equipped with off-the-shelf compact cameras, alongside the structure-from-motion (SfM) photogrammetric pipeline and modern pixel-based matching approaches, has expedited the automatic generation of high resolution digital elevation models (DEMs). Moreover, cross-correlation functions applied to finely co-registered consecutive orthomosaics and/or DEMs have been widely used to determine the displacement of moving features in an automated way, resulting in high spatial resolution motion vectors. This research focuses on estimating the 3D displacement field of an active slow-moving earth-slide earth-flow landslide located in Lias mudrocks of North Yorkshire, UK, with the ultimate aim of assessing landslide deformation patterns. The landslide extends approximately 290 m E-W and 230 m N-S, with an average slope of 12° and 50 m of elevation difference from N to S. Cross-correlation functions were applied to an eighteen-month duration, UAV-derived, time-series of morphological attributes in order to determine motion vectors for subsequent landslide analysis. A self-calibrating bundle adjustment was firstly incorporated into the SfM pipeline and utilised to process imagery acquired using a Panasonic Lumix DMC-LX5 compact camera from a mini fixed-wing Quest 300 UAV, with 2 m wingspan and maximum 5 kg payload. 
Data from six field campaigns were used to generate a DEM time-series at 6 cm spatial resolution. DEMs were georeferenced into a common reference frame using control information from surveyed ground control points. The accuracy of the co-registration was estimated from planimetric and vertical RMS errors at independent checkpoints as 4 cm and 3 cm respectively. Afterwards, various morphological attributes, including shaded relief, curvature and openness, were calculated from the UAV-derived DEMs. These attributes are indicative of the local structures of discernible geomorphological features (e.g. scarps, ridges, cracks, etc.), the motion of which can be monitored using the cross-correlation algorithm. Multiple experiments were conducted to test the performance of the cross-correlation function implemented on successive epochs. Two benchmark datasets were used for validation of the cross-correlation results: a) the motion vectors generated from the surveyed 3D position of installed markers; b) the calculated displacements of features, manually tracked from successive UAV-derived orthomosaics. Both benchmark datasets detected a maximum planimetric displacement of approximately 1 m at the foot of the landslide, with a dominant N-S orientation, between December 2014 and May 2016. Preliminary cross-correlation results illustrated a similar planimetric motion in both magnitude and orientation; however, user intervention was required to filter spurious displacement vectors.
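The cross-correlation step described above can be sketched as a normalized-cross-correlation template match between two epochs of a morphological-attribute raster. The function names, patch size and search radius below are illustrative, not the authors' implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_offset(ref, cur, top, left, size, search):
    """Take a patch from the reference epoch, scan it over a search
    window in the later epoch, and return the (dy, dx) pixel offset of
    the best NCC match."""
    tpl = ref[top:top + size, left:left + size]
    best, best_off = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                continue
            score = ncc(tpl, cur[y:y + size, x:x + size])
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off

# toy: a textured surface shifted 2 pixels "south" between epochs
rng = np.random.default_rng(0)
epoch1 = rng.normal(size=(40, 40))
epoch2 = np.roll(epoch1, 2, axis=0)
dy, dx = match_offset(epoch1, epoch2, 10, 10, 8, 4)
# at the 6 cm GSD quoted in the text, displacement = (dy*0.06, dx*0.06) m
```

Offsets with low peak correlation are the "spurious displacement vectors" the text mentions; filtering them by a correlation threshold is the usual remedy.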
LiDAR Vegetation Investigation and Signature Analysis System (LVISA)
NASA Astrophysics Data System (ADS)
Höfle, Bernhard; Koenig, Kristina; Griesbaum, Luisa; Kiefer, Andreas; Hämmerle, Martin; Eitel, Jan; Koma, Zsófia
2015-04-01
Our physical environment undergoes constant changes in space and time with strongly varying triggers, frequencies, and magnitudes. Monitoring these environmental changes is crucial to improve our scientific understanding of complex human-environmental interactions and helps us to respond to environmental change by adaptation or mitigation. The three-dimensional (3D) description of Earth surface features and the detailed monitoring of surface processes using 3D spatial data have gained increasing attention within the last decades, such as in climate change research (e.g., glacier retreat), carbon sequestration (e.g., forest biomass monitoring), precision agriculture and natural hazard management. In all those areas, 3D data have helped to improve our process understanding by allowing quantification of the structural properties of earth surface features and their changes over time. This advancement has been fostered by technological developments and increased availability of 3D sensing systems. In particular, LiDAR (light detection and ranging) technology, also referred to as laser scanning, has made significant progress and has evolved into an operational tool in environmental research and geosciences. The main result of LiDAR measurements is a highly spatially resolved 3D point cloud. Each point within the LiDAR point cloud has an XYZ coordinate associated with it and often additional information such as the strength of the returned backscatter. The point cloud provided by LiDAR contains rich geospatial, structural, and potentially biochemical information about the surveyed objects. To deal with the inherently unorganized structure and large data volume (frequently millions of XYZ coordinates) of LiDAR datasets, a multitude of algorithms for automatic 3D object detection (e.g., of single trees) and physical surface description (e.g., biomass) have been developed. 
However, so far the exchange of datasets and approaches (i.e., extraction algorithms) among LiDAR users lags behind. We propose a novel concept, the LiDAR Vegetation Investigation and Signature Analysis System (LVISA), which shall enhance sharing of i) reference datasets of single vegetation objects with rich reference data (e.g., plant species, basic plant morphometric information) and ii) approaches for information extraction (e.g., single tree detection, tree species classification based on waveform LiDAR features). We will build an extensive LiDAR data repository for supporting the development and benchmarking of LiDAR-based object information extraction. LVISA uses international web service standards (Open Geospatial Consortium, OGC) for geospatial data access and also analysis (e.g., OGC Web Processing Services). This will allow the research community to identify plant-object-specific vegetation features from LiDAR data, while accounting for differences in LiDAR systems (e.g., beam divergence), settings (e.g., point spacing), and calibration techniques. It is the goal of LVISA to develop generic 3D information extraction approaches, which can be seamlessly transferred to other datasets, timestamps and also extraction tasks. The current prototype of LVISA can be visited and tested online via http://uni-heidelberg.de/lvisa. Video tutorials provide a quick overview and entry into the functionality of LVISA. We will present the current advances of LVISA and highlight future research and extensions, such as integrating low-cost LiDAR data and datasets acquired by high-frequency temporal scanning of vegetation (e.g., continuous measurements). Everybody is invited to join the LVISA development and share datasets and analysis approaches in an interoperable way via the web-based LVISA geoportal.
The cost of a small membrane bioreactor.
Lo, C H; McAdam, E; Judd, S
2015-01-01
The individual cost contributions to the mechanical components of a small membrane bioreactor (MBR) (100-2,500 m3/d flow capacity) are itemised and collated to generate overall capital and operating costs (CAPEX and OPEX) as a function of size. The outcomes are compared to those from previously published detailed cost studies provided for both very small containerised plants (<40 m3/day capacity) and larger municipal plants (2,200-19,000 m3/d). Cost curves, as a function of flow capacity, determined for OPEX, CAPEX and net present value (NPV) based on the heuristic data used indicate a logarithmic function for OPEX and a power-based one for the CAPEX. OPEX correlations were in good quantitative agreement with those reported in the literature. Disparities in the calculated CAPEX trend compared with reported data were attributed to differences in assumptions concerning cost contributions. More reasonable agreement was obtained with the reported membrane separation component CAPEX data from published studies. The heuristic approach taken appears appropriate for small-scale MBRs with minimal costs associated with installation. An overall relationship of NPV = (a·t^b)·Q^(−c·ln t + d) was determined for the net present value, where a = 1.265, b = 0.44, c = 0.00385 and d = 0.868 according to the dataset employed for the analysis.
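Reading the reported relationship as NPV(Q, t) = (a·t^b)·Q^(−c·ln t + d) with the quoted coefficients, it can be evaluated directly. The abstract does not state the units of t (presumably plant life in years) or of the result, so this is a shape illustration only:

```python
import math

# Coefficients reported in the abstract for the dataset analysed.
A, B, C, D = 1.265, 0.44, 0.00385, 0.868

def npv(flow_m3_per_day, t):
    """Overall small-MBR cost relationship:
    NPV(Q, t) = (a * t**b) * Q**(-c*ln(t) + d).
    Units of t and of the returned value are assumptions."""
    return (A * t**B) * flow_m3_per_day**(-C * math.log(t) + D)
```

Since −c·ln t + d stays positive for realistic t, NPV grows sub-linearly with flow capacity Q, consistent with the economy-of-scale behaviour the cost curves describe.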
Comparison of recent SnIa datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, J.C. Bueno; Perivolaropoulos, L.; Nesseris, S., E-mail: jbueno@cc.uoi.gr, E-mail: nesseris@nbi.ku.dk, E-mail: leandros@uoi.gr
2009-11-01
We rank the six latest Type Ia supernova (SnIa) datasets (Constitution (C), Union (U), ESSENCE (Davis) (E), Gold06 (G), SNLS 1yr (S) and SDSS-II (D)) in the context of the Chevallier-Polarski-Linder (CPL) parametrization w(a) = w0 + w1(1−a), according to their Figure of Merit (FoM), their consistency with the cosmological constant (ΛCDM), their consistency with standard rulers (Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO)) and their mutual consistency. We find a significant improvement of the FoM (defined as the inverse area of the 95.4% parameter contour) with the number of SnIa in these datasets ((C) highest FoM, (U), (G), (D), (E), (S) lowest FoM). Standard rulers (CMB+BAO) have a better FoM by about a factor of 3, compared to the highest-FoM SnIa dataset (C). We also find that the ranking sequence based on consistency with ΛCDM is identical with the corresponding ranking based on consistency with standard rulers ((S) most consistent, (D), (C), (E), (U), (G) least consistent). The ranking sequence of the datasets however changes when we consider the consistency with an expansion history corresponding to evolving dark energy (w0, w1) = (−1.4, 2) crossing the phantom divide line w = −1 (it is practically reversed to (G), (U), (E), (S), (D), (C)). The SALT2 and MLCS2k2 fitters are also compared and some peculiar features of the SDSS-II dataset when standardized with the MLCS2k2 fitter are pointed out. Finally, we construct a statistic to estimate the internal consistency of a collection of SnIa datasets. We find that even though there is good consistency among most samples taken from the above datasets, this consistency decreases significantly when the Gold06 (G) dataset is included in the sample.
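The FoM defined above (inverse area of the 95.4% parameter contour) can be sketched for an approximately Gaussian posterior in (w0, w1): the Δχ² = 6.17 ellipse (95.4% for 2 parameters) has area π·Δχ²·√(det C), where C is the parameter covariance. The Gaussian assumption and the function names are illustrative, not the paper's pipeline:

```python
import math

DELTA_CHI2_954 = 6.17  # Δχ² enclosing 95.4% probability for 2 parameters

def figure_of_merit(c00, c01, c11):
    """FoM = 1 / (area of the 95.4% (w0, w1) contour) for a Gaussian
    posterior with covariance [[c00, c01], [c01, c11]]: the ellipse
    x^T C^{-1} x = Δχ² has area π·Δχ²·sqrt(det C)."""
    det = c00 * c11 - c01**2
    return 1.0 / (math.pi * DELTA_CHI2_954 * math.sqrt(det))
```

Tighter constraints (smaller covariance determinant) give a larger FoM, which is why the FoM improves with the number of SnIa in a dataset.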
NASA Astrophysics Data System (ADS)
Brinkmann, Benjamin H.; O'Brien, Terence J.; Robb, Richard A.; Sharbrough, Frank W.
1997-05-01
Advances in neuroimaging have enhanced the clinician's ability to localize the epileptogenic zone in focal epilepsy, but 20-50 percent of these cases still remain unlocalized. Many sophisticated modalities have been used to study epilepsy, but scalp-electrode-recorded electroencephalography is particularly useful due to its noninvasive nature and excellent temporal resolution. This study is aimed at localizing the specific positions of scalp EEG electrodes for correlation with anatomical structures in the brain. 3D position localizing devices commonly used in virtual reality systems are used to digitize the coordinates of scalp electrodes in a standard clinical configuration. The electrode coordinates are registered with a high-resolution MRI dataset using a robust surface matching algorithm. Volume rendering can then be used to visualize the electrodes and electrode potentials interpolated over the scalp. The accuracy of the coordinate registration is assessed quantitatively with a realistic head phantom.
NASA Astrophysics Data System (ADS)
Nedimovic, M. R.; Mountain, G. S.; Austin, J. A., Jr.; Fulthorpe, C.; Aali, M.; Baldwin, K.; Bhatnagar, T.; Johnson, C.; Küçük, H. M.; Newton, A.; Stanley, J.
2015-12-01
In June-July 2015, we acquired the first 3D/2D hybrid (short/long streamer) multichannel seismic (MCS) reflection dataset. These data were collected simultaneously across IODP Exp. 313 drillsites, off New Jersey, using R/V Langseth, and cover ~95% of the planned 12x50 km box. Despite the large survey area, the lateral and vertical resolution for the 3D dataset is almost an order of magnitude higher than for data gathered for standard petroleum exploration. Such high resolution was made possible by collection of common midpoint (CMP) lines whose combined length is ~3 times the Earth's circumference (~120,000 profile km) and a source rich in high frequencies. We present details on the data acquisition, ongoing data analysis, and preliminary results. The science driving this project is presented by Mountain et al. The 3D component of this innovative survey used an athwartship cross cable, extended laterally by 2 barovanes roughly 357.5 m apart and trailed by 24 50-m P-Cable streamers spaced ~12.5 m apart with near-trace offset of 53 m. Each P-Cable streamer had 8 single-hydrophone groups spaced at 6.25 m for a total of 192 channels. Record length was 4 s and sample rate 0.5 ms, with no low cut and an 824 Hz high cut filter. We ran 77 sail lines spaced ~150 m. Receiver locations were determined using 2 GPS receivers mounted on floats and 2 compasses and depth sensors per streamer. Streamer depths varied from 2.1 to 3.7 m. The 2D component used a single 3 km streamer, with 240 9-hydrophone groups spaced at 12.5 m, towed astern with near-trace offset of 229 m. The record length was 4 s and sample rate 0.5 ms, with low cut filter at 2 Hz and high cut at 412 Hz. Receiver locations were recorded using GPS at the head float and tail buoy, combined with 12 bird compasses spaced ~300 m. Nominal streamer depth was 4.5 m. The source for both systems was a 700 in3 linear array of 4 Bolt air guns suspended at 4.5 m towing depth, 271.5 m behind the ship's stern. Shot spacing was 12.5 m. 
Data analysis to prestack time migration is being carried out by Absolute Imaging, a commercial company. The shipboard QC analysis and brute stacks indicate that the final product will be superb. Key advantages of the hybrid 3D/2D dataset are: (1) Velocity control from the 2D long-streamer data combined with the ultra-high resolution of the P-Cable 3D dataset; (2) Opportunity for prestack and poststack attribute analysis.
BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models
Bilgin, Cemal Cagatay; Fontenay, Gerald; Cheng, Qingsu; Chang, Hang; Han, Ju; Parvin, Bahram
2016-01-01
BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation. PMID:26978075
Evaluating Soil Moisture Retrievals from ESA's SMOS and NASA's SMAP Brightness Temperature Datasets
NASA Technical Reports Server (NTRS)
Al-Yaari, A.; Wigneron, J.-P.; Kerr, Y.; Rodriguez-Fernandez, N.; O'Neill, P. E.; Jackson, T. J.; De Lannoy, G. J. M.; Al Bitar, A.; Mialon, A.; Richaume, P.;
2017-01-01
Two satellites are currently monitoring surface soil moisture (SM) using L-band observations: SMOS (Soil Moisture and Ocean Salinity), a joint ESA (European Space Agency), CNES (Centre national d'études spatiales), and CDTI (the Spanish government agency with responsibility for space) satellite launched on November 2, 2009 and SMAP (Soil Moisture Active Passive), a National Aeronautics and Space Administration (NASA) satellite successfully launched in January 2015. In this study, we used a multilinear regression approach to retrieve SM from SMAP data to create a global dataset of SM, which is consistent with SM data retrieved from SMOS. This was achieved by calibrating coefficients of the regression model using the CATDS (Centre Aval de Traitement des Données) SMOS Level 3 SM and the horizontally and vertically polarized brightness temperatures (TB) at 40° incidence angle, over the 2013 - 2014 period. Next, this model was applied to SMAP L3 TB data from Apr 2015 to Jul 2016. The retrieved SM from SMAP (referred to here as SMAP_Reg) was compared to: (i) the operational SMAP L3 SM (SMAP_SCA), retrieved using the baseline Single Channel retrieval Algorithm (SCA); and (ii) the operational SMOSL3 SM, derived from the multiangular inversion of the L-MEB model (L-MEB algorithm) (SMOSL3). This inter-comparison was made against in situ soil moisture measurements from more than 400 sites spread over the globe, which are used here as a reference soil moisture dataset. The in situ observations were obtained from the International Soil Moisture Network (ISMN; https://ismn.geo.tuwien.ac.at/) in North America (PBO_H2O, SCAN, SNOTEL, iRON, and USCRN), in Australia (Oznet), Africa (DAHRA), and in Europe (REMEDHUS, SMOSMANIA, FMI, and RSMN). The agreement was analyzed in terms of four classical statistical criteria: Root Mean Squared Error (RMSE), Bias, Unbiased RMSE (UnbRMSE), and correlation coefficient (R). 
Results of the comparison of these various products with in situ observations show that the performance of both SMAP products, i.e. SMAP_SCA and SMAP_Reg, is similar and marginally better than that of the SMOSL3 product, particularly over the PBO_H2O, SCAN, and USCRN sites. However, SMOSL3 SM was closer to the in situ observations over the DAHRA and Oznet sites. We found that the correlation between all three datasets and in situ measurements is best (R > 0.80) over the Oznet sites and worst (R = 0.58) over the SNOTEL sites for SMAP_SCA and over the DAHRA and SMOSMANIA sites (R = 0.51 and R = 0.45 for SMAP_Reg and SMOSL3, respectively). The Bias values showed that all products are generally dry, except over RSMN, DAHRA, and Oznet (and FMI for SMAP_SCA). Finally, our analysis provided interesting insights that can be useful to improve the consistency between SMAP and SMOS datasets.
Evaluating soil moisture retrievals from ESA's SMOS and NASA's SMAP brightness temperature datasets.
Al-Yaari, A; Wigneron, J-P; Kerr, Y; Rodriguez-Fernandez, N; O'Neill, P E; Jackson, T J; De Lannoy, G J M; Al Bitar, A; Mialon, A; Richaume, P; Walker, J P; Mahmoodi, A; Yueh, S
2017-05-01
Two satellites are currently monitoring surface soil moisture (SM) using L-band observations: SMOS (Soil Moisture and Ocean Salinity), a joint ESA (European Space Agency), CNES (Centre national d'études spatiales), and CDTI (the Spanish government agency with responsibility for space) satellite launched on November 2, 2009 and SMAP (Soil Moisture Active Passive), a National Aeronautics and Space Administration (NASA) satellite successfully launched in January 2015. In this study, we used a multilinear regression approach to retrieve SM from SMAP data to create a global dataset of SM, which is consistent with SM data retrieved from SMOS. This was achieved by calibrating coefficients of the regression model using the CATDS (Centre Aval de Traitement des Données) SMOS Level 3 SM and the horizontally and vertically polarized brightness temperatures (TB) at 40° incidence angle, over the 2013 - 2014 period. Next, this model was applied to SMAP L3 TB data from Apr 2015 to Jul 2016. The retrieved SM from SMAP (referred to here as SMAP_Reg) was compared to: (i) the operational SMAP L3 SM (SMAP_SCA), retrieved using the baseline Single Channel retrieval Algorithm (SCA); and (ii) the operational SMOSL3 SM, derived from the multiangular inversion of the L-MEB model (L-MEB algorithm) (SMOSL3). This inter-comparison was made against in situ soil moisture measurements from more than 400 sites spread over the globe, which are used here as a reference soil moisture dataset. The in situ observations were obtained from the International Soil Moisture Network (ISMN; https://ismn.geo.tuwien.ac.at/) in North America (PBO_H2O, SCAN, SNOTEL, iRON, and USCRN), in Australia (Oznet), Africa (DAHRA), and in Europe (REMEDHUS, SMOSMANIA, FMI, and RSMN). The agreement was analyzed in terms of four classical statistical criteria: Root Mean Squared Error (RMSE), Bias, Unbiased RMSE (UnbRMSE), and correlation coefficient (R). 
Results of the comparison of these various products with in situ observations show that the performance of both SMAP products, i.e. SMAP_SCA and SMAP_Reg, is similar and marginally better than that of the SMOSL3 product, particularly over the PBO_H2O, SCAN, and USCRN sites. However, SMOSL3 SM was closer to the in situ observations over the DAHRA and Oznet sites. We found that the correlation between all three datasets and in situ measurements is best (R > 0.80) over the Oznet sites and worst (R = 0.58) over the SNOTEL sites for SMAP_SCA and over the DAHRA and SMOSMANIA sites (R = 0.51 and R = 0.45 for SMAP_Reg and SMOSL3, respectively). The Bias values showed that all products are generally dry, except over RSMN, DAHRA, and Oznet (and FMI for SMAP_SCA). Finally, our analysis provided interesting insights that can be useful to improve the consistency between SMAP and SMOS datasets.
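The four criteria used in this inter-comparison can be computed directly. The bias sign convention (satellite minus in situ) is an assumption, consistent with "generally dry" products having negative bias; note that ubRMSE² = RMSE² − Bias²:

```python
import numpy as np

def skill_scores(sat, insitu):
    """RMSE, Bias (sat minus in situ), unbiased RMSE, and Pearson R,
    the four classical criteria named in the text."""
    sat = np.asarray(sat, dtype=float)
    insitu = np.asarray(insitu, dtype=float)
    diff = sat - insitu
    rmse = np.sqrt(np.mean(diff**2))
    bias = np.mean(diff)
    ubrmse = np.sqrt(np.mean((diff - bias)**2))  # removes the mean offset
    r = np.corrcoef(sat, insitu)[0, 1]
    return rmse, bias, ubrmse, r

# toy check: a product wet-biased by a constant 0.1 m3/m3
rmse, bias, ubrmse, r = skill_scores([0.2, 0.3, 0.4, 0.5],
                                     [0.1, 0.2, 0.3, 0.4])
```

A constant offset shows up entirely in Bias and RMSE while ubRMSE goes to zero, which is why ubRMSE and R are the usual measures of a retrieval's dynamics independent of its mean level.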
3-D QSARS FOR RANKING AND PRIORITIZATION OF LARGE CHEMICAL DATASETS: AN EDC CASE STUDY
The COmmon REactivity Pattern (COREPA) approach is a three-dimensional structure activity (3-D QSAR) technique that permits identification and quantification of specific global and local steroelectronic characteristics associated with a chemical's biological activity. It goes bey...
NASA Astrophysics Data System (ADS)
Bera, D.; Raghunathan, S. B.; Chen, C.; Chen, Z.; Pertijs, M. A. P.; Verweij, M. D.; Daeichin, V.; Vos, H. J.; van der Steen, A. F. W.; de Jong, N.; Bosch, J. G.
2018-04-01
Until now, no matrix transducer has been realized for 3D transesophageal echocardiography (TEE) in pediatric patients. In 3D TEE with a matrix transducer, the biggest challenges are to connect a large number of elements to a standard ultrasound system, and to achieve a high volume rate (>200 Hz). To address these issues, we have recently developed a prototype miniaturized matrix transducer for pediatric patients with micro-beamforming and a small central transmitter. In this paper we propose two multiline parallel 3D beamforming techniques (µBF25 and µBF169) using the micro-beamformed datasets from 25 and 169 transmit events to achieve volume rates of 300 Hz and 44 Hz, respectively. Both the realizations use angle-weighted combination of the neighboring overlapping sub-volumes to avoid artifacts due to sharp intensity changes introduced by parallel beamforming. In simulation, the image quality in terms of the width of the point spread function (PSF), lateral shift invariance and mean clutter level for volumes produced by µBF25 and µBF169 are similar to the idealized beamforming using a conventional single-line acquisition with a fully-sampled matrix transducer (FS4k, 4225 transmit events). For completeness, we also investigated a 9 transmit-scheme (3 × 3) that allows even higher frame rates but found worse B-mode image quality with our probe. The simulations were experimentally verified by acquiring the µBF datasets from the prototype using a Verasonics V1 research ultrasound system. For both µBF169 and µBF25, the experimental PSFs were similar to the simulated PSFs, but in the experimental PSFs, the clutter level was ~10 dB higher. Results indicate that the proposed multiline 3D beamforming techniques with the prototype matrix transducer are promising candidates for real-time pediatric 3D TEE.
Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models
NASA Astrophysics Data System (ADS)
Wong, K.; Ellul, C.
2016-10-01
Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
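A few of the proposed metrics can be sketched for triangle meshes. Representing each building as a triangle mesh is a simplifying assumption (CityGML geometries are typically polygonal), and the function name is illustrative:

```python
import numpy as np

def geometry_metrics(buildings):
    """Dataset-level metrics from the text: mean vertex count per
    building, mean face count per building, and the vertex/face ratio.
    Each building is a (vertices, faces) pair: an (n, 3) float array of
    coordinates and an (m, 3) int array of triangle vertex indices."""
    v_counts = [len(v) for v, _ in buildings]
    f_counts = [len(f) for _, f in buildings]
    mean_v = sum(v_counts) / len(buildings)
    mean_f = sum(f_counts) / len(buildings)
    return mean_v, mean_f, mean_v / mean_f

# toy: two "buildings", each a tetrahedron (4 vertices, 4 faces)
tet_v = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tet_f = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
metrics = geometry_metrics([(tet_v, tet_f), (tet_v + 5.0, tet_f)])
```

Because these counts need no semantic information, they can be computed automatically across a whole city model, which is the point of the paper's approach.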
Mendenhall, Jeffrey; Meiler, Jens
2016-02-01
Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46 % over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
NASA Astrophysics Data System (ADS)
Berezowski, T.; Szcześniak, M.; Kardel, I.; Michałowski, R.; Okruszko, T.; Mezghani, A.; Piniewski, M.
2015-12-01
The CHASE-PL Forcing Data-Gridded Daily Precipitation and Temperature Dataset-5 km (CPLFD-GDPT5) consists of 1951-2013 daily minimum and maximum air temperatures and precipitation totals interpolated onto a 5 km grid, based on daily meteorological observations from the Institute of Meteorology and Water Management (IMGW-PIB; Polish stations), Deutscher Wetterdienst (DWD; German and Czech stations), and ECAD and NOAA-NCDC (Slovak, Ukrainian and Belarusian stations). The main motivation for constructing this product was the need for long-term areal precipitation and temperature data for earth-system modelling, especially hydrological modelling. The spatial coverage is the union of the Vistula and Odra basins and Polish territory. The number of available meteorological stations varies over time, from about 100 for temperature and 300 for precipitation in 1950 up to about 180 for temperature and 700 for precipitation in 1990. The precipitation dataset was corrected for snowfall and rainfall under-catch with the Richter method. The interpolation methods were kriging with elevation as external drift for temperatures, and indicator kriging combined with universal kriging for precipitation. The kriging cross-validation revealed low root mean squared errors expressed as a fraction of the standard deviation (SD): 0.54 and 0.47 for minimum and maximum temperature, respectively, and 0.79 for precipitation. The correlation scores were 0.84 for minimum temperature, 0.88 for maximum temperature and 0.65 for precipitation. The CPLFD-GDPT5 product is consistent with 1971-2000 climatic data published by IMGW-PIB. We also confirm the good skill of the product for hydrological modelling by applying the Soil and Water Assessment Tool (SWAT) in the Vistula and Odra basins. Link to the dataset: http://data.3tu.nl/repository/uuid:e939aec0-bdd1-440f-bd1e-c49ff10d0a07
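The cross-validation skill score reported above (RMSE expressed as a fraction of the observations' standard deviation) is easy to reproduce from leave-one-out residuals. A generic sketch with made-up numbers, not the authors' code:

```python
import numpy as np

def rmse_over_sd(observed, predicted):
    """Root-mean-squared error of interpolated values expressed as a
    fraction of the standard deviation of the observations.
    Lower is better; 1.0 means no skill beyond the station mean."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / np.std(observed)

# Toy cross-validation residuals for four held-out stations
obs = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.1, 1.9, 3.2, 3.8])
print(round(rmse_over_sd(obs, pred), 3))  # 0.141
```

A score of 0.54 for minimum temperature, as reported, thus means the kriging error is roughly half the natural station-to-station spread.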
Live dynamic analysis of the developing cardiovascular system in mice
NASA Astrophysics Data System (ADS)
Lopez, Andrew L.; Wang, Shang; Larin, Kirill V.; Larina, Irina V.
2017-02-01
The study of the developing cardiovascular system in mice is important for understanding human cardiogenesis and congenital heart defects. Our research focuses on imaging early development in the mouse embryo, specifically to understand cardiovascular development under the regulation of dynamic factors such as contractile force and blood flow, using optical coherence tomography (OCT). We have previously developed an OCT-based approach that combines static embryo culture and advanced image processing with computational modeling to live-image mouse embryos and obtain 4D (3D+time) cardiodynamic datasets. Here we present live 4D dynamic blood flow imaging of the early embryonic mouse heart in correlation with heart wall movement. We are using this approach to understand how specific mutations impact heart wall dynamics, and how this influences flow patterns and cardiogenesis. We perform studies in mutant embryos with cardiac phenotypes, such as myosin regulatory light chain 2, atrial isoform (Mlc2a) mutants. This work brings us closer to understanding the connections between dynamic mechanical factors and gene programs responsible for early cardiovascular development.
von Spiczak, Jochen; Mannil, Manoj; Kozerke, Sebastian; Alkadhi, Hatem; Manka, Robert
2018-03-30
Since patients with myocardial hypoperfusion due to coronary artery disease (CAD) with preserved viability are known to benefit from revascularization, accurate differentiation of hypoperfusion from scar is desirable. To develop a framework for 3D fusion of whole-heart dynamic cardiac MR perfusion and late gadolinium enhancement (LGE) to delineate stress-induced myocardial hypoperfusion and scar. Prospective feasibility study. Sixteen patients (61 ± 14 years, two females) with known/suspected CAD. 1.5T (nine patients); 3.0T (seven patients); whole-heart dynamic 3D cardiac MR perfusion (3D-PERF, under adenosine stress); 3D LGE inversion recovery sequences (3D-SCAR). A software framework was developed for 3D fusion of 3D-PERF and 3D-SCAR. Computation steps included: 1) segmentation of the left ventricle in 3D-PERF and 3D-SCAR; 2) semiautomatic thresholding of perfusion/scar data; 3) automatic calculation of ischemic/scar burden (ie, pathologic relative to total myocardium); 4) projection of perfusion/scar values onto artificial template of the left ventricle; 5) semiautomatic coregistration to an exemplary heart contour easing 3D orientation; and 6) 3D rendering of the combined datasets using automatically defined color tables. All tasks were performed by two independent, blinded readers (J.S. and R.M.). Intraclass correlation coefficients (ICC) for determining interreader agreement. Image acquisition, postprocessing, and 3D fusion were feasible in all cases. In all, 10/16 patients showed stress-induced hypoperfusion in 3D-PERF; 8/16 patients showed LGE in 3D-SCAR. For 3D-PERF, semiautomatic thresholding was possible in all patients. For 3D-SCAR, automatic thresholding was feasible where applicable. Average ischemic burden was 11 ± 7% (J.S.) and 12 ± 7% (R.M.). Average scar burden was 8 ± 5% (J.S.) and 7 ± 4% (R.M.). Interreader agreement was excellent (ICC for 3D-PERF = 0.993, for 3D-SCAR = 0.99). 
3D fusion of 3D-PERF and 3D-SCAR facilitates intuitive delineation of stress-induced myocardial hypoperfusion and scar. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018.
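Step 3 of the framework described above (ischemic/scar burden as the pathologic fraction of the total myocardium) reduces to a voxel count once a threshold has been chosen. The numpy sketch below is a hedged illustration with invented arrays and an invented threshold, not the authors' software:

```python
import numpy as np

def burden_percent(values, myo_mask, threshold):
    """Percentage of myocardial voxels classified as pathologic, i.e.
    voxels inside the myocardium mask whose perfusion/enhancement value
    exceeds a (semi-automatically chosen) threshold."""
    patho = (values >= threshold) & myo_mask
    return 100.0 * patho.sum() / myo_mask.sum()

# Toy 3D volume: 1000 myocardial voxels, 110 of them above threshold
vals = np.zeros((10, 10, 10))
vals.flat[:110] = 2.0
myo = np.ones_like(vals, dtype=bool)
print(burden_percent(vals, myo, threshold=1.0))  # 11.0 (% burden)
```

With real data, `values` would be the segmented 3D-PERF or 3D-SCAR volume and `myo_mask` the left-ventricle segmentation from step 1.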
Includes 1) the list of genes in the STAT5b biomarker and 2) the list of accession numbers for the microarray datasets used in the study. This dataset is associated with the following publication: Oshida, K., N. Vasani, D. Waxman, and C. Corton. Disruption of STAT5b-Regulated Sexual Dimorphism of the Liver Transcriptome by Diverse Factors Is a Common Event. PLoS ONE. Public Library of Science, San Francisco, CA, USA, 11(3): NA, (2016).
Anisotropy effects on 3D waveform inversion
NASA Astrophysics Data System (ADS)
Stekl, I.; Warner, M.; Umpleby, A.
2010-12-01
In recent years 3D waveform inversion has become an achievable procedure for seismic data processing. A number of datasets have been inverted and presented using isotropic 3D waveform inversion (Warner et al. 2008; Ben Hadj Ali et al. 2008; Sirgue et al. 2010). However, the question arises whether the results are affected by the isotropic assumption. Full-wavefield inversion techniques seek to match field data, wiggle-for-wiggle, to synthetic data generated by a high-resolution model of the sub-surface. In this endeavour, correctly matching the travel times of the principal arrivals is a necessary minimal requirement. In many, perhaps most, long-offset and wide-azimuth datasets, it is necessary to introduce some form of P-wave velocity anisotropy to match the travel times successfully. If this anisotropy is not also incorporated into the wavefield inversion, then results from the inversion will necessarily be compromised. We have incorporated anisotropy into our 3D wavefield tomography codes, characterised as spatially varying transverse isotropy with a tilted axis of symmetry (TTI anisotropy). This enhancement approximately doubles both the run time and the memory requirements of the code. We show that neglect of anisotropy can lead to significant artefacts in the recovered velocity models. We present the results of inverting an anisotropic 3D dataset under the assumption of an isotropic earth, and compare them with the anisotropic inversion result. As a test case, the Marmousi model extended to 3D, with no velocity variation in the third direction and with added spatially varying anisotropy, is used. An OBC acquisition geometry is assumed, with sources and receivers everywhere at the surface. We attempted inversion using both 2D and full 3D acquisition for this dataset. Results show that, if no anisotropy is taken into account, most features are mispositioned in depth and space even though the image looks plausible, and this holds even for relatively low anisotropy, leading to an incorrect result.
This may lead to misinterpretation of results. However, if the correct physics is used, the results agree with the correct model. Our algorithm is relatively affordable and runs on standard PC clusters in acceptable time. References: H. Ben Hadj Ali, S. Operto and J. Virieux, Velocity model building by 3D frequency-domain, full-waveform inversion of wide-aperture seismic data, Geophysics (Special issue: Velocity Model Building), 73(6), VE101-VE117 (2008). L. Sirgue, O.I. Barkved, J. Dellinger, J. Etgen, U. Albertin and J.H. Kommedal, Full waveform inversion: the next leap forward in imaging at Valhall, First Break, 28(4) (2010). M. Warner, I. Stekl and A. Umpleby, Efficient and Effective 3D Wavefield Tomography, 70th EAGE Conference & Exhibition (2008).
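The travel-time argument can be illustrated with Thomsen's standard weak-anisotropy approximation for the P-wave phase velocity, v(θ) ≈ v₀(1 + δ sin²θ cos²θ + ε sin⁴θ): even mild anisotropy shifts wide-angle arrival times by far more than a wavelet period, which is what mispositions reflectors when an isotropic model is assumed. The numbers below are illustrative, and this is a generic sketch rather than the authors' code:

```python
import numpy as np

def vp_weak_tti(v0, theta, delta, epsilon):
    """Thomsen weak-anisotropy P-wave phase velocity at angle `theta`
    (radians) from the symmetry axis."""
    s2 = np.sin(theta) ** 2
    return v0 * (1.0 + delta * s2 * (1.0 - s2) + epsilon * s2 ** 2)

v0, eps, dlt = 3000.0, 0.1, 0.05        # m/s; mild anisotropy (assumed values)
theta = np.deg2rad(60.0)                # wide-angle propagation
path = 5000.0                           # 5 km ray path
t_iso = path / v0                       # travel time if anisotropy is ignored
t_ani = path / vp_weak_tti(v0, theta, dlt, eps)
print(f"travel-time error: {1000 * (t_iso - t_ani):.1f} ms")  # ~100 ms
```

A ~100 ms mismatch is many wavelet cycles at exploration frequencies, so an isotropic inversion must distort the velocity model to absorb it.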
Wang, Zheng; Zhang, Chuanbao; Sun, Lihua; Liang, Jingshan; Liu, Xing; Li, Guanzhang; Yao, Kun; Zhang, Wei; Jiang, Tao
2016-12-20
Activation of receptor tyrosine kinases is common in malignancies. FGFR3 fusion with TACC3 has been reported to have transforming effects in primary glioblastoma and to display oncogenic activity in vitro and in vivo. We set out to investigate the role of FGFR3 in glioma through transcriptomic analysis. FGFR3 expression was consistently increased in the Classical and Neural subtypes in both the CGGA and TCGA cohorts. Similar patterns of FGFR3 distribution across subtypes were observed in CGGA and TCGA samples. Gene ontology analysis was performed with genes that were significantly correlated with FGFR3 expression. We found that biological processes positively associated with FGFR3 were focused on differentiated cellular functions and neuronal activities, while negatively correlated biological processes focused on mitosis and cell cycle phases. Clinical investigation showed that higher FGFR3 expression predicted improved survival for glioma patients, especially in the Proneural subtype. Moreover, FGFR3 showed very limited relevance to other receptor tyrosine kinases in glioma at the transcriptome level. FGFR3 expression data for glioma were obtained from the Chinese Glioma Genome Atlas (CGGA) and The Cancer Genome Atlas (TCGA). In total, RNA sequencing data of 325 glioma samples and mRNA microarray data of 301 samples from the CGGA dataset were enrolled in this study. To consolidate the findings revealed in the CGGA dataset, RNA-seq data of 672 glioma samples from the TCGA dataset were used as a validation cohort. The R language was used as the main tool for statistical analysis and graphical work. In conclusion, FGFR3 expression increased in the Classical and Neural subtypes and was associated with differentiated cellular functions; FGFR3 showed very limited correlation with other common receptor tyrosine kinases, and predicted improved survival for glioma patients.
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well-tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues), but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit when comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses
NASA Astrophysics Data System (ADS)
Rutkowska, Eva; Baker, Colin; Nahum, Alan
2010-04-01
A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when the tissue is 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates, beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical for serial organs than with those typical for parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical for serial and parallel organs, and it may help investigators interpret the results of clinical studies.
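The model described can be sketched as a small Monte Carlo: stem cells in each FSU survive according to the LQ model, an FSU dies when all its stem cells die, and a complication fires when the surviving-FSU density inside the critical functioning volume drops below a threshold. All parameter values below are illustrative assumptions, and this is a toy sketch, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(42)

def complication(dose, n_fsu=1000, cells_per_fsu=100,
                 alpha=0.3, beta=0.03, cfv_fraction=1.0,
                 fsu_density_threshold=0.5):
    """dose: per-FSU total dose (Gy), delivered in 2 Gy fractions.
    Each stem cell survives with the LQ probability exp(-n(ad + bd^2));
    an FSU is killed when all of its stem cells are killed; a complication
    occurs if the surviving-FSU fraction in the critical functioning
    volume (here a random subset of FSUs) falls below the threshold."""
    d = 2.0
    n_frac = np.asarray(dose, float) / d
    surv_prob = np.exp(-n_frac * (alpha * d + beta * d * d))   # per stem cell
    cells_left = rng.binomial(cells_per_fsu, surv_prob, size=n_fsu)
    fsu_alive = cells_left > 0
    cfv = rng.choice(n_fsu, size=int(cfv_fraction * n_fsu), replace=False)
    return fsu_alive[cfv].mean() < fsu_density_threshold

# Uniform doses: a low dose should spare the organ, a high dose should not
print(complication(np.full(1000, 10.0)))   # expect no complication
print(complication(np.full(1000, 80.0)))   # expect complication
```

Shrinking `cfv_fraction` makes the organ behave more serially, since a small pocket of dead FSUs then suffices to trigger a complication.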
Street curb recognition in 3D point cloud data using morphological operations
NASA Astrophysics Data System (ADS)
Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino
2015-04-01
Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible, with the aim of obtaining accurate results with the least possible human intervention. Automated curb detection is an important issue in the road maintenance, 3D urban modeling, and autonomous navigation fields. This paper focuses on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane was carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm operating on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections, and is applicable to both laser scanner and stereo vision 3D data because it is independent of the scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn.
That point cloud comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
An automated method for accurate vessel segmentation.
Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting Tim
2017-05-07
Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance for diabetic retinopathy, quantification of cerebral aneurysm growth, and guiding surgery in neurosurgical procedures. Despite technological advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions of our system are: (1) a progressive contrast enhancement method to adaptively enhance the contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware regions-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves performance superior to state-of-the-art methods, including a vesselness-based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision).
An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008 European Conf. on Computer Vision; Law and Chung 2009 IEEE Trans. Image Process. 18 596-612; Wang 2015 J. Neurosci. Methods 241 30-6) with manually optimized parameters. Our system has also been applied clinically for cerebral aneurysm development analysis. Experimental results on 10 patients' data, with two 3D CT scans per patient, show that our system's automatic diagnosis outcomes are consistent with clinicians' manual measurements.
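The "average accuracy with respect to the manual segmentation reference" reported above can be made concrete with the usual mask-comparison metrics: voxel-wise agreement, plus the Dice overlap commonly used alongside it. This is a generic illustration of the metrics, not the paper's evaluation code:

```python
import numpy as np

def seg_accuracy(pred, ref):
    """Voxel-wise accuracy of a binary segmentation against a manual
    reference: the fraction of voxels on which the two masks agree."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    return (pred == ref).mean()

def dice(pred, ref):
    """Dice coefficient, the usual overlap measure for vessel masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    return 2.0 * (pred & ref).sum() / (pred.sum() + ref.sum())

ref = np.zeros((10, 10, 10), bool)
ref[4:6, :, :] = True                 # toy "vessel" slab in the reference
pred = np.zeros_like(ref)
pred[4:6, :, 1:] = True               # near miss: one slice of voxels lost
print(seg_accuracy(pred, ref), round(dice(pred, ref), 3))
```

Note that on sparse structures like vessels, voxel-wise accuracy is dominated by background voxels, which is why overlap measures such as Dice are usually reported as well.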
Explore Efficient Local Features from RGB-D Data for One-Shot Learning Gesture Recognition.
Wan, Jun; Guo, Guodong; Li, Stan Z
2016-08-01
The availability of handy RGB-D sensors has brought about a surge of gesture recognition research and applications. Among various approaches, the one-shot learning approach is advantageous because it requires a minimum amount of data. Here, we provide a thorough review of one-shot learning gesture recognition from RGB-D data and propose a novel spatiotemporal feature extracted from RGB-D data, namely mixed features around sparse keypoints (MFSK). In the review, we analyze the challenges that we are facing, and point out some future research directions which may enlighten researchers in this field. The proposed MFSK feature is robust and invariant to scale, rotation and partial occlusions. To alleviate the insufficiency of one-shot training samples, we augment the training samples by artificially synthesizing versions at various temporal scales, which is beneficial for coping with gestures performed at varying speeds. We evaluate the proposed method on the ChaLearn gesture dataset (CGD). The results show that our approach outperforms all currently published approaches on the challenging data of CGD, such as the translated, scaled and occluded subsets. When applied to RGB-D datasets that are not one-shot (e.g., the Cornell Activity Dataset-60 and the MSR Daily Activity 3D dataset), the proposed feature also produces very promising results under leave-one-out cross-validation or one-shot learning.
3D geometric split-merge segmentation of brain MRI datasets.
Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis
2014-05-01
In this paper, a novel method for MRI volume segmentation based on region-adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume into a number of large homogeneous 3D regions, while at the same time defining small homogeneous regions within the volume more clearly, in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results where each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, and it demonstrates improved segmentation performance on noisy brain MRI datasets when compared to state-of-the-art methods.
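The split-and-merge idea can be illustrated in one dimension: recursively split a segment while it is inhomogeneous, then merge adjacent segments whose pooled statistics stay homogeneous. This is a toy sketch with an assumed homogeneity criterion (standard deviation against a tolerance); AGSM itself chooses among several 3D splitting strategies along the maximal-homogeneity axis:

```python
import numpy as np

def split_merge_1d(signal, tol=1.0, min_len=2):
    """Toy split-and-merge on a 1D intensity profile. Returns a list of
    (start, end) index pairs covering the signal, each homogeneous in
    the sense std <= tol (except segments at the minimum length)."""
    def split(lo, hi):
        seg = signal[lo:hi]
        if hi - lo <= min_len or seg.std() <= tol:
            return [(lo, hi)]
        mid = (lo + hi) // 2
        return split(lo, mid) + split(mid, hi)

    segs = split(0, len(signal))
    merged = [segs[0]]
    for lo, hi in segs[1:]:
        plo, _ = merged[-1]
        if signal[plo:hi].std() <= tol:
            merged[-1] = (plo, hi)        # pooled region still homogeneous
        else:
            merged.append((lo, hi))
    return merged

profile = np.array([0, 0, 0, 0, 10, 10, 10, 10], float)
print(split_merge_1d(profile))  # two homogeneous regions: [(0, 4), (4, 8)]
```

In 3D the same recursion operates on sub-volumes, and the merging step additionally has to consider spatial adjacency of regions rather than a simple left-to-right scan.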
Three-dimensional Electromagnetic Modeling of the Hawaiian Swell
NASA Astrophysics Data System (ADS)
Avdeev, D.; Utada, H.; Kuvshinov, A.; Koyama, T.
2004-12-01
An anomalous behavior of the geomagnetic deep sounding (GDS) responses at the Honolulu geomagnetic observatory has been reported by many researchers. Kuvshinov et al. (2004) found that the predicted GDS Dst C-response does not match the experimental data: a 10-20% disagreement occurs at all periods from 2 to 30 days, qualitatively implying a more resistive, rather than conductive, structure beneath the Hawaiian Islands. Simpson et al. (2000) found that the GDS Sq C-response at the Honolulu observatory is about 4 times larger than that at a Hawaii island site, again suggesting a structure beneath the observatory more resistive than its surroundings. Constable and Heinson (2004, http://mahi.ucsd.edu/Steve/swell.pdf), presenting a 2-D interpretation of the magnetotelluric (MT) and GDS responses recently obtained at 7 seafloor sites to the south of the Hawaiian Islands, concluded that the dataset requires the presence of a narrow conducting plume just beneath the islands. The main motivation of our work is to reveal the reason for the anomalous behavior of the Honolulu response. The cause may be due to heterogeneity of either the conductivity or the source field. We examine this problem in some detail with reference to Constable and Heinson's seafloor dataset, as well as the available dataset from the Honolulu observatory. To address the problem we apply numerical modeling using the three-dimensional (3-D) forward modeling code of Avdeev et al. (1997, 2002). With this code we simulate various regional 3-D conductivity models that may produce EM responses that better fit the experimental datasets, at least qualitatively. Also, to explain some features of the experimental long-period GDS responses, we numerically studied a possible effect on the responses caused by the equatorial electrojet.
Our 3-D modeling results show that, in particular: (1) The GDS responses are better explained by models with a resistive lithosphere whereas the MT data are better fit by models without one; (2) A conductive plume under the Hawaiian Islands may not be required by the MT and GDS datasets considered; (3) An equatorial electrojet might affect the imaginary part of the GDS responses at periods of 2 h and more; (4) The anomalous large value of 0.4 observed in the real part of the seafloor GDS responses still cannot be explained by the 3-D models considered. It seems to require more complicated models.
Bayesian correlated clustering to integrate multiple datasets
Kirk, Paul; Griffin, Jim E.; Savage, Richard S.; Ghahramani, Zoubin; Wild, David L.
2012-01-01
Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods. Availability: A Matlab implementation of MDI is available from http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. Contact: D.L.Wild@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047558
NASA Astrophysics Data System (ADS)
Slynko, Inna; Da Silva, Franck; Bret, Guillaume; Rognan, Didier
2016-09-01
High affinity ligands for a given target tend to share key molecular interactions with important anchoring amino acids and therefore often present quite conserved interaction patterns. This simple concept was formalized in a topological knowledge-based scoring function (GRIM) for selecting the most appropriate docking poses from previously X-rayed interaction patterns. GRIM first converts protein-ligand atomic coordinates (docking poses) into a simple 3D graph describing the corresponding interaction pattern. In a second step, the proposed graphs are compared to those found in template structures from the Protein Data Bank. Last, all docking poses are rescored according to an empirical score (GRIMscore) accounting for the overlap of maximum common subgraphs. Taking advantage of the public D3R Grand Challenge 2015, GRIM was used to rescore docking poses for 36 ligands (6 HSP90α inhibitors, 30 MAP4K4 inhibitors) prior to the release of the corresponding protein-ligand X-ray structures. When applied to the HSP90α dataset, for which many protein-ligand X-ray structures are already available, GRIM provided very high quality solutions (mean rmsd = 1.06 Å, n = 6) as top-ranked poses, and significantly outperformed a state-of-the-art scoring function. In the case of the MAP4K4 inhibitors, for which preexisting 3D knowledge is scarce and chemical diversity is much larger, the accuracy of GRIM poses decreases (mean rmsd = 3.18 Å, n = 30), although GRIM still outperforms an energy-based scoring function. GRIM rescoring appears to be quite robust in comparison with the other approaches competing in the same challenge (42 submissions for the HSP90 dataset, 27 for the MAP4K4 dataset), as it ranked 3rd and 2nd, respectively, for the two investigated datasets. The rescoring method is quite simple to implement, independent of the docking engine, and applicable to any target for which at least one holo X-ray structure is available.
Challenges in Extracting Information From Large Hydrogeophysical-monitoring Datasets
NASA Astrophysics Data System (ADS)
Day-Lewis, F. D.; Slater, L. D.; Johnson, T.
2012-12-01
Over the last decade, new automated geophysical data-acquisition systems have enabled collection of increasingly large and information-rich geophysical datasets. Concurrent advances in field instrumentation, web services, and high-performance computing have made real-time processing, inversion, and visualization of large three-dimensional tomographic datasets practical. Geophysical-monitoring datasets have provided high-resolution insights into diverse hydrologic processes including groundwater/surface-water exchange, infiltration, solute transport, and bioremediation. Despite the high information content of such datasets, extraction of quantitative or diagnostic hydrologic information is challenging. Visual inspection and interpretation for specific hydrologic processes is difficult for datasets that are large, complex, and (or) affected by forcings (e.g., seasonal variations) unrelated to the target hydrologic process. New strategies are needed to identify salient features in spatially distributed time-series data and to relate temporal changes in geophysical properties to hydrologic processes of interest while effectively filtering unrelated changes. Here, we review recent work using time-series and digital-signal-processing approaches in hydrogeophysics. Examples include applications of cross-correlation, spectral, and time-frequency (e.g., wavelet and Stockwell transforms) approaches to (1) identify salient features in large geophysical time series; (2) examine correlation or coherence between geophysical and hydrologic signals, even in the presence of non-stationarity; and (3) condense large datasets while preserving information of interest. Examples demonstrate analysis of large time-lapse electrical tomography and fiber-optic temperature datasets to extract information about groundwater/surface-water exchange and contaminant transport.
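As a minimal illustration of the cross-correlation approach reviewed above, the sketch below recovers the lag between a hydrologic forcing and a delayed geophysical response. All data and names are synthetic stand-ins for the field time series discussed in the abstract.

```python
import numpy as np

# Estimate the lag between two coupled time series with normalized
# cross-correlation (synthetic example; not the authors' field data).

rng = np.random.default_rng(0)
n, true_lag = 200, 5
forcing = rng.standard_normal(n)       # e.g. a river-stage time series
response = np.roll(forcing, true_lag)  # geophysical series, delayed copy

f = (forcing - forcing.mean()) / forcing.std()
r = (response - response.mean()) / response.std()
xcorr = np.correlate(r, f, mode="full") / n   # lags -(n-1) .. n-1
lags = np.arange(-(n - 1), n)
best_lag = lags[np.argmax(xcorr)]
# best_lag recovers the imposed delay of 5 samples
```

The same machinery generalizes to the coherence and wavelet analyses mentioned in the abstract, which additionally localize the correlation in frequency or time.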
Automated liver elasticity calculation for 3D MRE
NASA Astrophysics Data System (ADS)
Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.
2017-03-01
Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult, as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue and edge contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, an extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was -0.8% +/- 9.45%, better than the discrepancy with the same reader for 2D MRE (-3.2% +/- 10.43%) and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and easily performed on large 3D MRE datasets.
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on the 1D velocity models routinely assumed to derive finite-fault slip models. The conventional approach to including known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to iteratively improve the model with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities.
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.
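The iterative gradient-based minimization at the heart of such an inversion can be sketched on a toy problem. Here the adjoint-derived gradient is replaced by the explicit adjoint (transpose) of a small synthetic linear forward operator G, a stand-in for the wave-propagation operator; this is an illustration of the optimization loop, not the authors' solver.

```python
import numpy as np

# Gradient descent on the least-squares data misfit J(m) = ||G m - d||^2 / 2.
# G, m_true and d are synthetic; in the paper the gradient G^T r is obtained
# by an adjoint wavefield simulation rather than an explicit matrix.

rng = np.random.default_rng(1)
G = rng.standard_normal((40, 10))        # forward operator (stand-in)
m_true = rng.standard_normal(10)         # "true" slip model
d = G @ m_true                           # observed data

m = np.zeros(10)
step = 1.0 / np.linalg.norm(G.T @ G, 2)  # stable step size (1 / Lipschitz)
for _ in range(2000):
    grad = G.T @ (G @ m - d)             # adjoint of G applied to residual
    m -= step * grad

misfit = np.linalg.norm(G @ m - d)
# misfit shrinks toward zero and m approaches m_true
```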
Spatio-temporal interpolation of soil moisture in 3D+T using automated sensor network data
NASA Astrophysics Data System (ADS)
Gasch, C.; Hengl, T.; Magney, T. S.; Brown, D. J.; Gräler, B.
2014-12-01
Soil sensor networks provide frequent in situ measurements of dynamic soil properties at fixed locations, producing data in 2- or 3-dimensions and through time (2D+T and 3D+T). Spatio-temporal interpolation of 3D+T point data produces continuous estimates that can then be used for prediction at unsampled times and locations, as input for process models, and can simply aid in visualization of properties through space and time. Regression-kriging with 3D and 2D+T data has successfully been implemented, but currently the field of geostatistics lacks an analytical framework for modeling 3D+T data. Our objective is to develop robust 3D+T models for mapping dynamic soil data that has been collected with high spatial and temporal resolution. For this analysis, we use data collected from a sensor network installed on the R.J. Cook Agronomy Farm (CAF), a 37-ha Long-Term Agro-Ecosystem Research (LTAR) site in Pullman, WA. For five years, the sensors have collected hourly measurements of soil volumetric water content at 42 locations and five depths. The CAF dataset also includes a digital elevation model and derivatives, a soil unit description map, crop rotations, electromagnetic induction surveys, daily meteorological data, and seasonal satellite imagery. The soil-water sensor data, combined with the spatial and temporal covariates, provide an ideal dataset for developing 3D+T models. The presentation will include preliminary results and address main implementation strategies.
Speksnijder, L; Rousian, M; Steegers, E A P; Van Der Spek, P J; Koning, A H J; Steensma, A B
2012-07-01
Virtual reality is a novel method of visualizing ultrasound data with the perception of depth and offers possibilities for measuring non-planar structures. The levator ani hiatus has both convex and concave aspects. The aim of this study was to compare levator ani hiatus volume measurements obtained with conventional three-dimensional (3D) ultrasound and with a virtual reality measurement technique and to establish their reliability and agreement. 100 symptomatic patients visiting a tertiary pelvic floor clinic with a normal intact levator ani muscle diagnosed on translabial ultrasound were selected. Datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm at the level of minimal hiatal dimensions during contraction. The levator area (in cm²) was measured and multiplied by 1.5 to get the levator ani hiatus volume in conventional 3D ultrasound (in cm³). Levator ani hiatus volume measurements were then measured semi-automatically in virtual reality (cm³) using a segmentation algorithm. An intra- and interobserver analysis of reliability and agreement was performed in 20 randomly chosen patients. The mean difference between levator ani hiatus volume measurements performed using conventional 3D ultrasound and virtual reality was 0.10 (95% CI, -0.15 to 0.35) cm³. The intraclass correlation coefficient (ICC) comparing conventional 3D ultrasound with virtual reality measurements was > 0.96. Intra- and interobserver ICCs for conventional 3D ultrasound measurements were > 0.94 and for virtual reality measurements were > 0.97, indicating good reliability for both. Levator ani hiatus volume measurements performed using virtual reality were reliable and the results were similar to those obtained with conventional 3D ultrasonography. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
Groth, M; Forkert, N D; Buhk, J H; Schoenfeld, M; Goebell, E; Fiehler, J
2013-02-01
To compare intra- and inter-observer reliability of aneurysm measurements obtained by a 3D computer-aided technique with standard manual aneurysm measurements in different imaging modalities. A total of 21 patients with 29 cerebral aneurysms were studied. All patients underwent digital subtraction angiography (DSA), contrast-enhanced (CE-MRA) and time-of-flight magnetic resonance angiography (TOF-MRA). Aneurysm neck and depth diameters were manually measured by two observers in each modality. Additionally, semi-automatic computer-aided diameter measurements were performed using 3D vessel surface models derived from CE- (CE-com) and TOF-MRA (TOF-com) datasets. Bland-Altman analysis (BA) and intra-class correlation coefficient (ICC) were used to evaluate intra- and inter-observer agreement. BA revealed the narrowest relative limits of intra- and inter-observer agreement for aneurysm neck and depth diameters obtained by TOF-com (ranging between ±5.3 % and ±28.3 %) and CE-com (ranging between ±23.3 % and ±38.1 %). Direct measurements in DSA, TOF-MRA and CE-MRA showed considerably wider limits of agreement. The highest ICCs were observed for TOF-com and CE-com (ICC values, 0.92 or higher for intra- as well as inter-observer reliability). Computer-aided aneurysm measurement in 3D offers improved intra- and inter-observer reliability and a reproducible parameter extraction, which may be used in clinical routine and as objective surrogate end-points in clinical trials.
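The Bland-Altman limits of agreement used in this abstract are straightforward to compute; the sketch below uses invented paired measurements, not the study's data.

```python
import numpy as np

# Bland-Altman analysis: bias (mean difference) and 95% limits of
# agreement (bias ± 1.96 SD) for two measurement methods.

def bland_altman_limits(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

obs1 = [4.1, 5.0, 3.8, 6.2, 5.5]   # e.g. manual neck diameters (mm), invented
obs2 = [4.0, 5.2, 3.9, 6.0, 5.6]   # e.g. computer-aided diameters (mm)
bias, lo, hi = bland_altman_limits(obs1, obs2)
# a narrow (lo, hi) interval indicates good agreement between the methods
```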
Froeling, Martijn; Tax, Chantal M W; Vos, Sjoerd B; Luijten, Peter R; Leemans, Alexander
2017-05-01
In this work, we present the MASSIVE (Multiple Acquisitions for Standardization of Structural Imaging Validation and Evaluation) brain dataset of a single healthy subject, which is intended to facilitate diffusion MRI (dMRI) modeling and methodology development. MRI data of one healthy subject (female, 25 years) were acquired on a clinical 3 Tesla system (Philips Achieva) with an eight-channel head coil. In total, the subject was scanned on 18 different occasions with a total acquisition time of 22.5 h. The dMRI data were acquired with an isotropic resolution of 2.5 mm³ and distributed over five shells with b-values up to 4000 s/mm² and two Cartesian grids with b-values up to 9000 s/mm². The final dataset consists of 8000 dMRI volumes, corresponding B0 field maps and noise maps for subsets of the dMRI scans, and ten three-dimensional FLAIR, T1-, and T2-weighted scans. The average signal-to-noise ratio of the non-diffusion-weighted images was roughly 35. This unique set of in vivo MRI data will provide a robust framework to evaluate novel diffusion processing techniques and to reliably compare different approaches for diffusion modeling. The MASSIVE dataset is made publicly available (both unprocessed and processed) on www.massive-data.org. Magn Reson Med 77:1797-1809, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
3-D interactive visualisation tools for HI spectral line imaging
NASA Astrophysics Data System (ADS)
van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.
2017-06-01
Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.
Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure
NASA Astrophysics Data System (ADS)
Liu, Chun; Li, Zhengning; Zhou, Yuan
2016-06-01
We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure are important to safety, yet nontrivial, since most underground infrastructures have poor lighting conditions and featureless structure. To overcome these difficulties, we use a parallel system, which is more efficient than the EKF-based SLAM approach since it divides motion estimation and 3D mapping into separate threads, eliminating the data-association problem that is a significant issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm, which functions well under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used it to collect a dataset in an underground garage. The parallel system was evaluated on this dataset. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm. Off-line processing reduced the position error to 2 cm. This evaluation shows that our system is capable of robust motion estimation and accurate 3D mapping in poorly illuminated and featureless underground environments.
Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S
2016-01-01
High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
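A minimal stand-in for the supervised step described above is sketched below: a linear SVM trained by sub-gradient descent on the hinge loss. The four "quality features" and the labels are synthetic substitutes for the in-house metrics and investigator ratings; this toy is not the authors' pipeline.

```python
import numpy as np

# Linear SVM via batch sub-gradient descent on the regularized hinge loss.
# Feature vectors and labels are synthetic (separable by construction).

rng = np.random.default_rng(2)
n = 200
X = rng.standard_normal((n, 4))                    # quality features per volume
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)   # +1 "usable", -1 "artifact"

w, b, lam, lr = np.zeros(4), 0.0, 0.01, 0.1
for _ in range(2000):
    margins = y * (X @ w + b)
    mask = margins < 1                             # margin violators
    gw = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
    gb = -y[mask].sum() / n
    w -= lr * gw
    b -= lr * gb

pred = np.where(X @ w + b > 0, 1, -1)
accuracy = (pred == y).mean()
# accuracy is high on this separable toy problem
```

In practice the accuracy figure quoted in the abstract would be computed on held-out volumes, not the training set as in this toy.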
Three Dimensional Optical Coherence Tomography Imaging: Advantages and Advances
Gabriele, Michelle L; Wollstein, Gadi; Ishikawa, Hiroshi; Xu, Juan; Kim, Jongsick; Kagemann, Larry; Folio, Lindsey S; Schuman, Joel S.
2010-01-01
Three dimensional (3D) ophthalmic imaging using optical coherence tomography (OCT) has revolutionized assessment of the eye, the retina in particular. Recent technological improvements have made the acquisition of 3D-OCT datasets feasible. However, while volumetric data can improve disease diagnosis and follow-up, novel image analysis techniques are now necessary in order to process the dense 3D-OCT dataset. Fundamental software improvements include methods for correcting subject eye motion, segmenting structures or volumes of interest, extracting relevant data post hoc and signal averaging to improve delineation of retinal layers. In addition, innovative methods for image display, such as C-mode sectioning, provide a unique viewing perspective and may improve interpretation of OCT images of pathologic structures. While all of these methods are being developed, most remain in an immature state. This review describes the current status of 3D-OCT scanning and interpretation, and discusses the need for standardization of clinical protocols as well as the potential benefits of 3D-OCT scanning that could come when software methods for fully exploiting these rich data sets are available clinically. The implications of new image analysis approaches include improved reproducibility of measurements garnered from 3D-OCT, which may then help improve disease discrimination and progression detection. In addition, 3D-OCT offers the potential for preoperative surgical planning and intraoperative surgical guidance. PMID:20542136
Noormohammadpour, Pardis; Tavana, Bahareh; Mansournia, Mohammad Ali; Zeinalizadeh, Mehdi; Mirzashahi, Babak; Rostami, Mohsen; Kordi, Ramin
2018-05-01
Translation and cultural adaptation of the National Institutes of Health (NIH) Task Force's minimal dataset. The purpose of this study was to evaluate the validity and reliability of the Farsi version of the NIH Task Force's recommended multidimensional minimal dataset for research on chronic low back pain (CLBP). Considering the high treatment cost of CLBP and its increasing prevalence, the NIH Pain Consortium developed research standards (including recommendations for definitions, a minimum dataset, and outcomes' report) for studies regarding CLBP. Application of these recommendations could standardize research and improve comparability of different studies in CLBP. This study had three phases: translation of the dataset into Farsi and its cultural adaptation, assessment of the pre-final version of the dataset's comprehensibility via a pilot study, and investigation of the reliability and validity of the final version of the translated dataset. Subjects were 250 patients with CLBP. Test-retest reliability, content validity, and convergent validity (correlations among different dimensions of the dataset and Farsi versions of the Oswestry Disability Index, Roland-Morris Disability Questionnaire, Fear-Avoidance Beliefs Questionnaire, and Beck Depression Inventory-II) were assessed. The Farsi version demonstrated good/excellent convergent validity (the correlation coefficient between the impact dimension and the ODI was r = 0.75 [P < 0.001], between the impact dimension and the Roland-Morris Disability Questionnaire was r = 0.80 [P < 0.001], and between the psychological dimension and the BDI was r = 0.62 [P < 0.001]). The test-retest reliability was also strong (intraclass correlation coefficient values ranged between 0.70 and 0.95) and the internal consistency was good/excellent (Cronbach's alpha coefficients for the two main dimensions, the impact dimension and the psychological dimension, were 0.91 and 0.82 [P < 0.001], respectively). In addition, its face validity and content validity were acceptable.
The Farsi version of the minimal dataset for research on CLBP is a reliable and valid instrument for data gathering in patients with CLBP. This minimum dataset can be a step toward standardization of research regarding CLBP. Level of Evidence: 3.
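The internal-consistency statistic reported above, Cronbach's alpha, takes only a few lines to compute; the item-response matrix below is invented for illustration.

```python
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals),
# computed from a small synthetic matrix (rows = respondents, cols = items).

def cronbach_alpha(scores):
    scores = np.asarray(scores, float)
    k = scores.shape[1]                        # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

scores = [[3, 4, 3], [2, 2, 3], [5, 5, 4], [1, 2, 1], [4, 4, 5]]
alpha = cronbach_alpha(scores)
# alpha close to 1 indicates the items measure the same construct
```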
3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.
Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail
2018-04-15
This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully CNNs. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
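The small-kernel building block the abstract relies on can be shown directly. The sketch below implements a single 3x3x3 valid-padding 3-D convolution in NumPy; stacking many such small kernels is what enables the deep, memory-frugal architectures described, and this is not the authors' network.

```python
import numpy as np

# One 3-D convolution layer (really cross-correlation, as in CNNs) with a
# small kernel, written naively for clarity rather than speed.

def conv3d(volume, kernel):
    """Valid 3-D cross-correlation of a volume with a small kernel."""
    kz, ky, kx = kernel.shape
    z, y, x = volume.shape
    out = np.zeros((z - kz + 1, y - ky + 1, x - kx + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+kz, j:j+ky, k:k+kx] * kernel)
    return out

vol = np.arange(5 ** 3, dtype=float).reshape(5, 5, 5)
kern = np.zeros((3, 3, 3))
kern[1, 1, 1] = 1.0                 # identity kernel for a sanity check
out = conv3d(vol, kern)
# the identity kernel just crops the interior: out == vol[1:4, 1:4, 1:4]
```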
Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data
NASA Astrophysics Data System (ADS)
Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.
2017-12-01
The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
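The least-cost-path idea behind the mapping tool can be sketched with Dijkstra's algorithm routing a trace between two user picks across a cost raster. In the paper the cost functions are specially tailored to follow structural features; here the cost is simply the cell value, and the grid is invented.

```python
import heapq

# 4-connected Dijkstra least-cost path on a 2-D cost grid (illustrative).

def least_cost_path(cost, start, goal):
    """Return the minimum-cost cell path from start to goal."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, heap = {}, [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                       # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

cost = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
path = least_cost_path(cost, (0, 0), (0, 2))
# the trace detours around the high-cost column: down, across, back up
```

The detour through low-cost cells mirrors how the tool interpolates a fracture trace between manually defined control points.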
Computer 3D site model generation based on aerial images
NASA Astrophysics Data System (ADS)
Zheltov, Sergey Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiriakov, Alexandre V.
1997-07-01
The technology for 3D model design of real-world scenes and their photorealistic rendering is a current topic of investigation. Such technology is attractive for a wide variety of applications: military mission planning, crew training, civil engineering, architecture, and virtual reality entertainment, to name just a few. 3D photorealistic models of urban areas are now often discussed as an upgrade to existing 2D geographic information systems. The possibility of generating site models with fine detail depends on two main factors: the available source dataset and computing resources. In this paper a PC-based technology is presented, with which scenes of medium resolution (scale 1:1000) can be constructed. The datasets are gray-level aerial stereo pairs of photographs (scale 1:14000) and true-color ground photographs of buildings (scale ca. 1:1000). True-color terrestrial photographs are also necessary for photorealistic rendering, which greatly improves human perception of the scene.
Caresio, Cristina; Caballo, Marco; Deandrea, Maurilio; Garberoglio, Roberto; Mormile, Alberto; Rossetto, Ruth; Limone, Paolo; Molinari, Filippo
2018-05-15
To perform a comparative quantitative analysis of power Doppler ultrasound (PDUS) and contrast-enhanced ultrasound (CEUS) for the quantification of thyroid nodule vascularity patterns, with the goal of identifying biomarkers correlated with nodule malignancy for both imaging techniques. We propose a novel method to reconstruct the vascular architecture from 3-D PDUS and CEUS images of thyroid nodules, and to automatically extract seven quantitative features related to the morphology and distribution of the vascular network. Features include three tortuosity metrics, the number of vascular trees and branches, the vascular volume density, and the main spatial vascularity pattern. Feature extraction was performed on 20 thyroid lesions (ten benign and ten malignant), of which we acquired both PDUS and CEUS. MANOVA (multivariate analysis of variance) was used to differentiate benign and malignant lesions based on the most significant features. The analysis of the extracted features showed a significant difference between the benign and malignant nodules for both PDUS and CEUS techniques for all the features. Furthermore, by using a linear classifier on the significant features identified by the MANOVA, benign nodules could be entirely separated from the malignant ones. Our early results confirm the correlation between the morphology and distribution of blood vessels and the malignancy of the lesion, and also show (at least for the dataset used in this study) a considerable similarity in the findings of PDUS and CEUS imaging for thyroid nodule diagnosis and classification. © 2018 American Association of Physicists in Medicine.
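One of the tortuosity metrics mentioned above can be illustrated with the classic distance metric: a vessel branch's centerline path length divided by the straight-line distance between its endpoints. The 3-D points below are hypothetical, and the paper's feature set is richer than this single measure.

```python
import math

# Distance-metric tortuosity for a polyline vessel centerline: >= 1,
# equal to 1 for a perfectly straight branch.

def path_length(points):
    """Sum of segment lengths along the centerline."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def tortuosity(points):
    """Path length divided by endpoint-to-endpoint chord length."""
    return path_length(points) / math.dist(points[0], points[-1])

straight = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
wiggly = [(0, 0, 0), (1, 1, 0), (2, 0, 0)]
# tortuosity(straight) == 1.0; tortuosity(wiggly) == sqrt(2)
```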
Three-dimensional transesophageal echocardiography: Principles and clinical applications.
Vegas, Annette
2016-10-01
A basic understanding of evolving 3D technology enables the echocardiographer to master the new skills necessary to acquire, manipulate, and interpret 3D datasets. Single-button activation of specific 3D imaging modes for both TEE and transthoracic echocardiography (TTE) matrix-array probes includes (a) live, (b) zoom, (c) full volume (FV), and (d) color Doppler FV. Evaluation of regional LV wall motion by RT 3D TEE is based on the change in LV chamber subvolume over time resulting from altered segmental myocardial contractility. Unlike standard 2D TEE, there is no direct measurement of myocardial thickening or displacement of individual segments.
Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.
Holub, Joseph; Winer, Eliot
2017-12-01
Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has shifted medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Combined with the rapid adoption of mobile devices for everyday work, this creates a need to visualize fMRI data on tablets or smartphones. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrate that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
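The core of the raycasting being benchmarked is a per-ray compositing loop. As a minimal sketch (not the authors' GPU implementation, and with a made-up transfer function), front-to-back alpha compositing with early ray termination looks like:

```python
def raycast_front_to_back(samples, transfer):
    """Composite scalar samples along one ray, front to back.

    `samples` are scalar values at successive sample positions along the
    ray; `transfer` maps a scalar to (color, opacity). The loop exits
    early once the ray is nearly opaque, a standard raycasting
    optimization.
    """
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination
            break
    return color, alpha

# Hypothetical transfer function: intensity serves as both color and opacity.
tf = lambda s: (s, min(1.0, s))
color, alpha = raycast_front_to_back([0.1, 0.2, 0.4], tf)
print(color, alpha)
```

Early termination once accumulated opacity nears 1 is one of the tricks that keeps frame rates usable on constrained hardware such as a tablet GPU, where every skipped texture fetch counts.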
Correlation between Cognition and Function across the Spectrum of Alzheimer's Disease.
Liu-Seifert, H; Siemers, E; Selzler, K; Sundell, K; Aisen, P; Cummings, J; Raskin, J; Mohs, R
2016-01-01
Both cognitive and functional deterioration are characteristic of the clinical progression of Alzheimer's disease (AD). To systematically assess correlations between widely used measures of cognition and function across the spectrum of AD. Spearman rank correlations were calculated for cognitive and functional measures across datasets from various AD patient populations. Post-hoc analysis from existing databases. Pooled data from placebo-treated patients with mild (MMSE score ≥20 and ≤26) and moderate (MMSE score ≥16 and ≤19) AD dementia from two Phase 3 solanezumab (EXPEDITION/2) and two semagacestat (IDENTITY/2) studies and normal, late mild cognitive impairment (LMCI) and mild AD patients from the Alzheimer's Disease Neuroimaging Initiative 2-Grand Opportunity (ADNI-2/GO). Intervention (if any): Placebo (EXPEDITION/2 and IDENTITY/2 subjects). Cognitive and functional abilities were measured in all datasets. Data were collected at baseline and every three months for 18 months in the EXPEDITION and IDENTITY studies, and at baseline, 6, 12, and 24 months in the ADNI dataset. The relationship of cognition and function became stronger over time as AD patients progressed from preclinical to moderate dementia disease stages, with the magnitude of correlations dependent on disease stage and the complexity of the functional task. The correlations were minimal in the normal control population, but became stronger with disease progression. This analysis found that measures of cognition and function become more strongly correlated with disease progression from preclinical to moderate dementia across multiple datasets. These findings improve the understanding of the relationship between cognitive and functional clinical measures during the course of AD progression and how cognition and function measures relate to each other in AD clinical trials.
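The analysis above rests on Spearman rank correlation: Pearson correlation applied to ranks, with ties given average ranks. A minimal sketch on hypothetical cognitive and functional scores (the variable names and values are illustrative, not from the study):

```python
def ranks(values):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical declining cognitive and functional scores for 5 patients.
cog = [20, 18, 15, 11, 9]
fun = [30, 28, 21, 14, 10]
print(spearman(cog, fun))  # 1.0: perfectly concordant ranks
```

Because only ranks enter the computation, the statistic captures any monotone cognition-function relationship without assuming the two scales are linearly related.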
Subbotin, Sergei A; Ragsdale, Erik J; Mullens, Teresa; Roberts, Philip A; Mundo-Ocampo, Manuel; Baldwin, James G
2008-08-01
The root lesion nematodes of the genus Pratylenchus Filipjev, 1936 are migratory endoparasites of plant roots, considered among the most widespread and important nematode parasites of a variety of crops. We obtained gene sequences for the D2 and D3 expansion segments of 28S rRNA and partial 18S rRNA from 31 populations belonging to 11 valid and two unidentified species of root lesion nematodes and five outgroup taxa. These datasets were analyzed using maximum parsimony and Bayesian inference. The alignments were generated using secondary structure models for these molecules and analyzed with Bayesian inference under both standard models and a complex model, treating helices under the doublet model and loops and bulges under the general time reversible model. The phylogenetic informativeness of morphological characters was tested by reconstructing their histories on rRNA-based trees using parallel parsimony and Bayesian approaches. Phylogenetic and sequence analyses of the 28S D2-D3 dataset, with 145 accessions for 28 species, and the 18S dataset, with 68 accessions for 15 species, confirmed across a large number of geographically diverse isolates that most classical morphospecies are monophyletic. Phylogenetic analyses revealed at least six distinct major clades of the examined Pratylenchus species, and these clades are generally congruent with those defined by characters derived from lip patterns, numbers of lip annules, and spermatheca shape. The morphological results suggest the need for sophisticated character discovery and analysis for morphology-based phylogenetics in nematodes.
Evaluating new SMAP soil moisture for drought monitoring in the rangelands of the US High Plains
Velpuri, Naga Manohar; Senay, Gabriel B.; Morisette, Jeffrey T.
2016-01-01
Level 3 soil moisture datasets from the recently launched Soil Moisture Active Passive (SMAP) satellite are evaluated for drought monitoring in rangelands. Validation of SMAP soil moisture (SSM) against in situ and modeled estimates showed a high level of agreement. SSM showed the highest correlation with surface soil moisture (0-5 cm) and a strong correlation to depths up to 20 cm. SSM showed a reliable and expected response in capturing seasonal dynamics in relation to precipitation, land surface temperature, and evapotranspiration. Further evaluation using multi-year SMAP datasets is necessary to quantify the full benefits and limitations for drought monitoring in rangelands.
Comparative analysis of 2D and 3D distance measurements to study spatial genome organization.
Finn, Elizabeth H; Pegoraro, Gianluca; Shachar, Sigal; Misteli, Tom
2017-07-01
The spatial organization of genomes is non-random, cell-type specific, and has been linked to cellular function. The investigation of spatial organization has traditionally relied extensively on fluorescence microscopy. The validity of the imaging methods used to probe spatial genome organization often depends on the accuracy and precision of distance measurements. Imaging-based measurements may use either 2-dimensional (2D) datasets or 3D datasets that include z-axis information from image stacks. Here we compare the suitability of 2D vs 3D distance measurements in the analysis of various features of spatial genome organization. We find in general good agreement between 2D and 3D analysis, with higher convergence of measurements as the interrogated distance increases, especially in flat cells. Overall, 3D distance measurements are more accurate than 2D distances, but are also more susceptible to noise. In particular, z-stacks are prone to error due to imaging properties such as limited resolution along the z-axis and optical aberrations, and we also find significant deviations from unimodal distance distributions caused by low sampling frequency in z. These deviations are ameliorated by significantly higher sampling frequency in the z-direction. We conclude that 2D distances are preferred for comparative analyses between cells, but 3D distances are preferred when comparing to theoretical models in large samples of cells. In general, and for practical purposes, 2D distance measurements are preferable for many applications of analysis of spatial genome organization. Published by Elsevier Inc.
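The 2D-vs-3D comparison can be illustrated numerically: dropping the z-coordinate can only shorten a Euclidean distance, and in flat cells (small z-extent) the shortfall is minor. A sketch with simulated locus pairs (all coordinates hypothetical, in arbitrary micron-like units):

```python
import math
import random

def pairwise_distances(loci_a, loci_b, use_z=True):
    """Euclidean distance per locus pair; with use_z=False the z-axis is
    dropped, mimicking a 2D (projected) measurement."""
    dims = 3 if use_z else 2
    return [math.sqrt(sum((a[d] - b[d]) ** 2 for d in range(dims)))
            for a, b in zip(loci_a, loci_b)]

random.seed(0)
# Simulated paired spots in a "flat" nucleus: z spread much smaller
# than the x/y spread.
a = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 1))
     for _ in range(1000)]
b = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 1))
     for _ in range(1000)]

d3 = pairwise_distances(a, b, use_z=True)
d2 = pairwise_distances(a, b, use_z=False)
# 2D distances never exceed their 3D counterparts ...
assert all(x <= y for x, y in zip(d2, d3))
# ... and in flat cells the mean shortfall is small relative to the
# typical interrogated distance.
print(sum(y - x for x, y in zip(d2, d3)) / len(d2))
```

This mirrors the paper's observation that 2D and 3D measurements converge as the interrogated distance grows and as cells flatten, since the z-component then contributes a vanishing fraction of the total separation.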
NASA Astrophysics Data System (ADS)
Wawerzinek, Britta; Buness, Hermann; Lüschen, Ewald; Thomas, Rüdiger
2017-04-01
To establish a dense area-wide network of geothermal facilities, the Stadtwerke München initiated the joint research project GRAME together with the Leibniz Institute for Applied Geophysics (GeoParaMoL*). As a database for the project, a 3D seismic survey was acquired from November 2015 to March 2016, covering 170 km2 of the southern part of Munich. 3D seismic exploration is a well-established method for exploring geothermal reservoirs, and its value for reservoir characterization of the Malm has been proven by several projects. A particular challenge is often the determination of geophysical parameters for facies interpretation when no borehole information is available for calibration. A new approach to facilitate a reliable interpretation is to include shear waves in the interpretation workflow, which helps to narrow the range of lithological and petrophysical parameters. Shear-wave measurements were conducted during the regular 3D seismic survey in Munich. In a passive experiment, the survey was additionally recorded on 467 single 3-component (3C) digital receivers deployed along one main line (15 km length) and two crosslines (4 km length). In this way, both an additional 3D P-wave dataset and a 3D shear-wave dataset were acquired. In the active shear-wave experiment, the SHOVER technique (Edelmann, 1981) was applied to directly excite shear waves using standard vertical vibrators. The 3C recordings of both datasets show, in addition to the P-wave reflections on the vertical component, clear shear-wave signals on the horizontal components. The structural image of the P-waves recorded on the vertical component of the 3C receivers displays clear reflectors within the Molasse Basin down to the Malm and correlates well with the structural image of the regular survey. Taking into account a travel-time ratio of 1.6, the reflection patterns of the horizontal and vertical components approximately coincide.
This indicates that Molasse sediments and the Malm can also be imaged by shear waves. Further processing steps will derive geophysical parameters (e.g. vp/vs) and clarify the amount of converted waves. GeoParaMoL (FKZ 0325787B) is funded by the Federal Ministry for Economic Affairs and Energy (BMWi). Edelmann, H.A.K. (1981): SHOVER shear-wave generation by vibration orthogonal to the polarization. Geophysical Prospecting 29, 541-549. * http://www.liag-hannover.de/en/fsp/ge/geoparamol.html
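The travel-time argument above is simple arithmetic: at near-vertical incidence, the two-way S travel time of a reflector scales with its two-way P travel time by the vp/vs ratio. A sketch with hypothetical picks (the 1.6 ratio is the value quoted in the abstract):

```python
def vp_vs_from_picks(tp, ts):
    """vp/vs from matched two-way travel-time picks of the same reflector
    (valid at vertical incidence, where ts/tp = vp/vs)."""
    return ts / tp

def expected_s_time(tp, vp_vs=1.6):
    """Predict where a P-wave reflector should appear on the S-wave section."""
    return tp * vp_vs

# A reflector picked at 1.5 s on the P section should appear near 2.4 s
# on the S section for vp/vs = 1.6 (picks are hypothetical).
print(expected_s_time(1.5))
print(vp_vs_from_picks(tp=1.5, ts=2.4))
```

Matching reflection patterns this way is how the coinciding horizontal- and vertical-component images support the planned derivation of vp/vs as a lithology-sensitive parameter.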
NASA Astrophysics Data System (ADS)
Snidero, M.; Amilibia, A.; Gratacos, O.; Muñoz, J. A.
2009-04-01
This work presents a methodological workflow for the 3D reconstruction of geological surfaces at regional scale, based on remote sensing data and geological maps. The workflow has been tested on the reconstruction of the Anaran anticline, located at the mountain front of the Zagros fold-and-thrust belt. The remote sensing dataset used is a combination of Aster and Spot images together with a high-resolution digital elevation model. Consistent spatial positioning of the complete dataset in a 3D environment is necessary to obtain satisfactory results during the reconstruction. The Aster images have been processed with the Optimum Index Factor (OIF) technique in order to facilitate geological mapping. By pansharpening the resulting Aster image with the SPOT panchromatic image, we obtain the final high-resolution image used during the 3D mapping. Structural (dip) data have been acquired through analysis of the 3D-mapped geological traces. Structural analysis of the resulting dataset allows us to divide the structure into different cylindrical domains. The orientation of the related plunge lines has been used to project data along the structure, covering areas with little or no information. Once a satisfactory dataset had been acquired, we reconstructed a selected horizon following the dip-domain concept. By manual editing, the obtained surfaces have been adjusted to the mapped geological limits as well as to the modeled faults. With the implementation of the Discrete Smooth Interpolation (DSI) algorithm, the final surfaces have been reconstructed along the anticline. To date, the results demonstrate that the proposed methodology is a powerful tool for the 3D reconstruction of geological surfaces when working with remote sensing data in very inaccessible areas (e.g., Iran, China, Africa). It is especially useful in semiarid regions where the structure strongly controls the topography.
The reconstructed surfaces clearly show the geometry in the different sectors of the structure: presence of a back thrust affecting the back limb in the southern part of the anticline, the geometry of the grabens located along the anticline crest, the crosscutting relationship in the north-south faulted zone with the main thrust, the northern dome periclinal closure.
Options in virtual 3D, optical-impression-based planning of dental implants.
Reich, Sven; Kern, Thomas; Ritter, Lutz
2014-01-01
If a 3D radiograph, which in today's dentistry often consists of a CBCT dataset, is available for computerized implant planning, the 3D planning should also consider functional prosthetic aspects. In a conventional workflow, the CBCT is acquired with a specially produced radiopaque prosthetic setup that makes the desired prosthetic situation visible during virtual implant planning. If an exclusively digital workflow is chosen, intraoral digital impressions are taken instead. On these digital models, the desired prosthetic suprastructures are designed. The datasets are then virtually superimposed by a "registration" process onto the corresponding structures (teeth) in the CBCT. Thus, both the osseous and the prosthetic structures are visible in a single 3D application, making it possible to consider surgical and prosthetic aspects together. After the implant positions have been determined on the computer screen, a drilling template is designed digitally. According to this design (CAD), the template is printed or milled in a CAM process. This template is the first physically extant product in the entire workflow. The article discusses the options and limitations of this workflow.
Exploring 3D Human Action Recognition: from Offline to Online.
Liu, Zhenyu; Li, Rui; Tan, Jianrong
2018-02-20
With the introduction of cost-effective depth sensors, a tremendous amount of research has been devoted to studying human action recognition using 3D motion data. However, most existing methods work in an offline fashion, i.e., they operate on a segmented sequence. There are a few methods specifically designed for online action recognition, which continually predicts action labels as a stream sequence proceeds. In view of this fact, we propose a question: can we draw inspirations and borrow techniques or descriptors from existing offline methods, and then apply these to online action recognition? Note that extending offline techniques or descriptors to online applications is not straightforward, since at least two problems-including real-time performance and sequence segmentation-are usually not considered in offline action recognition. In this paper, we give a positive answer to the question. To develop applicable online action recognition methods, we carefully explore feature extraction, sequence segmentation, computational costs, and classifier selection. The effectiveness of the developed methods is validated on the MSR 3D Online Action dataset and the MSR Daily Activity 3D dataset.
NASA Astrophysics Data System (ADS)
Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar
2018-01-01
We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded-up robust features (SURF) algorithm to the shape index map of the depth representation, under real-world conditions, using only a single gallery sample for each subject. First, the 3D scans are preprocessed; then SURF is applied on the shape index map to find interest points and their descriptors. Each 3D face scan is represented by its keypoint descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, the descriptors of a probe face scan are sparsely represented by the dictionary. Multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach for face identification/authentication is assessed through an experimental investigation conducted on the Bosphorus, University of Milano Bicocca, and CASIA 3D datasets, on which it achieves overall rank-one recognition rates of 97.75%, 80.85%, and 95.12%, respectively.
Mitrović, Uroš; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2018-02-01
Image guidance for minimally invasive surgery is based on spatial co-registration and fusion of 3D pre-interventional images and treatment plans with the 2D live intra-interventional images. The spatial co-registration or 3D-2D registration is the key enabling technology; however, the performance of state-of-the-art automated methods is rather unclear as they have not been assessed under the same test conditions. Herein we perform a quantitative and comparative evaluation of ten state-of-the-art methods for 3D-2D registration on a public dataset of clinical angiograms. Image database consisted of 3D and 2D angiograms of 25 patients undergoing treatment for cerebral aneurysms or arteriovenous malformations. On each of the datasets, highly accurate "gold-standard" registrations of 3D and 2D images were established based on patient-attached fiducial markers. The database was used to rigorously evaluate ten state-of-the-art 3D-2D registration methods, namely two intensity-, two gradient-, three feature-based and three hybrid methods, both for registration of 3D pre-interventional image to monoplane or biplane 2D images. Intensity-based methods were most accurate in all tests (0.3 mm). One of the hybrid methods was most robust with 98.75% of successful registrations (SR) and capture range of 18 mm for registrations of 3D to biplane 2D angiograms. In general, registration accuracy was similar whether registration of 3D image was performed onto mono- or biplanar 2D images; however, the SR was substantially lower in case of 3D to monoplane 2D registration. Two feature-based and two hybrid methods had clinically feasible execution times in the order of a second. 
Performance of methods seems to fall below expectations in terms of robustness in case of registration of 3D to monoplane 2D images, while translation into clinical image guidance systems seems readily feasible for methods that perform registration of the 3D pre-interventional image onto biplanar intra-interventional 2D images.
Molecular docking performance evaluated on the D3R Grand Challenge 2015 drug-like ligand datasets
NASA Astrophysics Data System (ADS)
Selwa, Edithe; Martiny, Virginie Y.; Iorga, Bogdan I.
2016-09-01
The D3R Grand Challenge 2015 was focused on two protein targets: Heat Shock Protein 90 (HSP90) and Mitogen-Activated Protein Kinase Kinase Kinase Kinase 4 (MAP4K4). We used a protocol involving a preliminary analysis of the available data in the PDB and PubChem BioAssay, followed by a docking/scoring step using more computationally demanding parameters, which were required to provide more reliable predictions. We showed that different docking software and scoring functions can behave differently on individual ligand datasets, and that the flexibility of specific binding-site residues is a crucial element for good predictions.
Automated Liver Elasticity Calculation for 3D MRE
Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.
2017-01-01
Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue- and edge-contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was −0.8% ± 9.45% and was better than discrepancy with the same reader for 2D MRE (−3.2% ± 10.43%), and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and to be easily performed on the large 3D MRE datasets. PMID:29033488
SU-E-T-20: A Correlation Study of 2D and 3D Gamma Passing Rates for Prostate IMRT Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, D; Sun Yat-sen University Cancer Center, Guangzhou, Guangdong; Wang, B
2015-06-15
Purpose: To investigate the correlation between the two-dimensional gamma passing rate (2D %GP) and the three-dimensional gamma passing rate (3D %GP) in prostate IMRT quality assurance. Methods: Eleven prostate IMRT plans were randomly selected from the clinical database and used to obtain dose distributions in the phantom and patient. Three types of delivery errors (MLC bank sag errors, central MLC errors and monitor unit errors) were intentionally introduced to modify the clinical plans through an in-house Matlab program, resulting in 187 modified plans. The 2D %GP and 3D %GP were analyzed using different dose-difference and distance-to-agreement criteria (1%/1mm, 2%/2mm and 3%/3mm) and a 20% dose threshold. The 2D %GP and 3D %GP were then compared not only for the whole region, but also for the PTVs and critical structures, using Pearson's correlation coefficient (γ). Results: For the different delivery errors, the average comparisons of 2D %GP and 3D %GP led to different conclusions. The correlation coefficients between 2D %GP and 3D %GP for the whole dose distribution showed that, except for the 3%/3mm criterion, the 1%/1mm and 2%/2mm criteria yielded strong correlations (Pearson's γ value >0.8). Compared with the whole region, the correlations of 2D %GP and 3D %GP for the PTV were better (the γ values for the 1%/1mm, 2%/2mm and 3%/3mm criteria were 0.959, 0.931 and 0.855, respectively). However, for the rectum, there was no correlation between 2D %GP and 3D %GP. Conclusion: For prostate IMRT, the correlation between 2D %GP and 3D %GP for the PTV is better than that for normal structures. A lower dose-difference and DTA criterion shows less difference between 2D %GP and 3D %GP. Other factors such as dosimeter characteristics and TPS algorithm bias may also influence the correlation between 2D %GP and 3D %GP.
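The %GP figures come from the gamma index: for each reference point, one minimizes a combined dose-difference/distance-to-agreement measure over the evaluated distribution, and the passing rate is the fraction of points with gamma ≤ 1. A minimal 1D sketch with hypothetical dose profiles (the clinical computation is 2D/3D and also applies the 20% dose threshold, which is omitted here):

```python
import math

def gamma_index_1d(ref_dose, eval_dose, spacing_mm, dose_crit, dta_mm):
    """Per-point 1D gamma: for each reference point, minimize
    sqrt((dose diff / dose_crit)^2 + (position diff / dta_mm)^2)
    over all evaluated points. A point passes when gamma <= 1."""
    gammas = []
    for i, dr in enumerate(ref_dose):
        best = min(
            math.sqrt(((de - dr) / dose_crit) ** 2 +
                      ((j - i) * spacing_mm / dta_mm) ** 2)
            for j, de in enumerate(eval_dose))
        gammas.append(best)
    return gammas

# Hypothetical dose profiles (Gy) on a 1 mm grid; a 3%/3mm criterion
# with dose_crit taken as 3% of the 2 Gy maximum.
ref = [1.0, 1.5, 2.0, 1.5, 1.0]
ev = [1.0, 1.55, 2.0, 1.45, 1.0]
g = gamma_index_1d(ref, ev, spacing_mm=1.0, dose_crit=0.06, dta_mm=3.0)
passing_rate = 100.0 * sum(gi <= 1.0 for gi in g) / len(g)
print(passing_rate)
```

Tightening the criterion (e.g. 1%/1mm) shrinks both tolerances in the denominator, which raises every gamma value and lowers the passing rate; this is the lever behind the criterion-dependent correlations reported above.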
A machine learning pipeline for automated registration and classification of 3D lidar data
NASA Astrophysics Data System (ADS)
Rajagopal, Abhejit; Chellappan, Karthik; Chandrasekaran, Shivkumar; Brown, Andrew P.
2017-05-01
Despite the large availability of geospatial data, registration and exploitation of these datasets remains a persistent challenge in geoinformatics. Popular signal processing and machine learning algorithms, such as non-linear SVMs and neural networks, rely on well-formatted input models as well as reliable output labels, which are not always immediately available. In this paper we outline a pipeline for gathering, registering, and classifying initially unlabeled wide-area geospatial data. As an illustrative example, we demonstrate the training and testing of a convolutional neural network to recognize 3D models in the OGRIP 2007 LiDAR dataset using fuzzy labels derived from OpenStreetMap as well as other datasets available on OpenTopography.org. When auxiliary label information is required, various text and natural language processing filters are used to extract and cluster keywords useful for identifying potential target classes. A subset of these keywords are subsequently used to form multi-class labels, with no assumption of independence. Finally, we employ class-dependent geometry extraction routines to identify candidates from both training and testing datasets. Our regression networks are able to identify the presence of 6 structural classes, including roads, walls, and buildings, in volumes as big as 8000 m3 in as little as 1.2 seconds on a commodity 4-core Intel CPU. The presented framework is neither dataset nor sensor-modality limited due to the registration process, and is capable of multi-sensor data-fusion.
Nie, Zhi; Vairavan, Srinivasan; Narayan, Vaibhav A; Ye, Jieping; Li, Qingqin S
2018-01-01
Identification of risk factors of treatment resistance may be useful to guide treatment selection, avoid inefficient trial-and-error, and improve major depressive disorder (MDD) care. We extended the work in predictive modeling of treatment resistant depression (TRD) via partition of the data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) cohort into a training and a testing dataset. We also included data from a small yet completely independent cohort RIS-INT-93 as an external test dataset. We used features from enrollment and level 1 treatment (up to week 2 response only) of STAR*D to explore the feature space comprehensively and applied machine learning methods to model TRD outcome at level 2. For TRD defined using QIDS-C16 remission criteria, multiple machine learning models were internally cross-validated in the STAR*D training dataset and externally validated in both the STAR*D testing dataset and RIS-INT-93 independent dataset with an area under the receiver operating characteristic curve (AUC) of 0.70-0.78 and 0.72-0.77, respectively. The upper bound for the AUC achievable with the full set of features could be as high as 0.78 in the STAR*D testing dataset. Model developed using top 30 features identified using feature selection technique (k-means clustering followed by χ2 test) achieved an AUC of 0.77 in the STAR*D testing dataset. In addition, the model developed using overlapping features between STAR*D and RIS-INT-93, achieved an AUC of > 0.70 in both the STAR*D testing and RIS-INT-93 datasets. Among all the features explored in STAR*D and RIS-INT-93 datasets, the most important feature was early or initial treatment response or symptom severity at week 2. These results indicate that prediction of TRD prior to undergoing a second round of antidepressant treatment could be feasible even in the absence of biomarker data.
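The χ² feature-ranking step can be sketched in isolation. Below, a plain 2×2 chi-square statistic (no continuity correction) ranks hypothetical binary predictors against a binary TRD label; the feature names and data are invented for illustration, and the study's full pipeline (k-means clustering followed by the χ² test on STAR*D features) is not reproduced:

```python
def chi2_stat(feature, outcome):
    """Chi-square statistic for a binary feature vs a binary outcome
    (2x2 contingency table, no continuity correction)."""
    obs = [[0, 0], [0, 0]]
    for f, o in zip(feature, outcome):
        obs[f][o] += 1
    n = len(feature)
    stat = 0.0
    for i in (0, 1):
        for j in (0, 1):
            row = obs[i][0] + obs[i][1]
            col = obs[0][j] + obs[1][j]
            exp = row * col / n  # expected count under independence
            if exp > 0:
                stat += (obs[i][j] - exp) ** 2 / exp
    return stat

# Rank hypothetical binary features (e.g. "no response by week 2") by
# association with the TRD label; a pipeline would keep the top k.
features = {
    "early_nonresponse": [1, 1, 1, 0, 1, 0, 1, 0],
    "noise":             [0, 1, 0, 1, 1, 0, 0, 1],
}
trd = [1, 1, 1, 0, 1, 0, 0, 0]
ranked = sorted(features, key=lambda f: chi2_stat(features[f], trd),
                reverse=True)
print(ranked)
```

Consistent with the abstract's finding, a feature encoding early treatment response dominates the ranking here because it tracks the outcome almost perfectly, while an unrelated feature scores near zero.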
Numericware i: Identical by State Matrix Calculator
Kim, Bongsong; Beavis, William D
2017-01-01
We introduce software, Numericware i, to compute an identical-by-state (IBS) matrix from genotypic data. Calculating an IBS matrix from a large dataset requires large computer memory and lengthy processing time. Numericware i addresses these challenges with 2 algorithmic methods: multithreading and forward chopping. Multithreading allows computational routines to run concurrently on multiple central processing unit (CPU) processors. Forward chopping addresses the memory limitation by dividing a dataset into appropriately sized subsets. Numericware i thus allows calculation of the IBS matrix for a large genotypic dataset on a laptop or desktop computer. For comparison with other software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL on the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values, including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at .9972, whereas SPAGeDi showed low correlation with both Numericware i (.0505) and TASSEL (.0587). With a high-dimensional dataset of 500 entities by 10 000 000 SNPs, Numericware i took 382 minutes using 19 CPU threads and 64 GB memory by dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under a CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db. PMID:28469375
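The quantity Numericware i computes is easy to state: with biallelic genotypes coded 0/1/2 (allele counts), a common IBS coefficient for a pair of individuals is the per-SNP mean of 2 − |gi − gj|, which is why its values lie in [0, 2]. A naive single-threaded sketch (the tool's multithreading and forward chopping are not reproduced, and its exact kernel may differ):

```python
def ibs_matrix(genotypes):
    """Pairwise identical-by-state coefficients from biallelic genotypes
    coded 0/1/2. Each coefficient is the mean of 2 - |gi - gj| over
    SNPs, so it lies in [0, 2], with 2 meaning identical genotypes."""
    n = len(genotypes)
    m = len(genotypes[0])
    ibs = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = sum(2 - abs(a - b)
                    for a, b in zip(genotypes[i], genotypes[j]))
            ibs[i][j] = ibs[j][i] = s / m  # matrix is symmetric
    return ibs

# Three hypothetical individuals at five SNPs.
g = [[0, 1, 2, 0, 2],
     [0, 1, 2, 0, 2],   # identical to the first individual
     [2, 1, 0, 2, 0]]   # opposite homozygotes at four SNPs
m = ibs_matrix(g)
print(m[0][1])  # 2.0: identical genotypes
print(m[0][2])
```

Forward chopping then amounts to running this inner sum over column blocks of the genotype matrix and accumulating, so only one block needs to be resident in memory at a time.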
Divergent homologs of the predicted small RNA BpCand697 in Burkholderia spp.
NASA Astrophysics Data System (ADS)
Damiri, Nadzirah; Mohd-Padil, Hirzahida; Firdaus-Raih, Mohd
2015-09-01
The small RNA (sRNA) gene candidate BpCand697 was previously reported to be unique to Burkholderia spp. and is encoded in the 3' non-coding region of a putative AraC-family transcription regulator gene. This study demonstrates the conservation of the BpCand697 sequence across 32 Burkholderia spp., including B. pseudomallei, B. mallei, B. thailandensis and Burkholderia sp., by integrating both sequence homology and secondary structural analyses of BpCand697 within the dataset. The divergent sequence of BpCand697 also provided discriminatory power for clustering the dataset according to the potential virulence of Burkholderia spp., showing that B. thailandensis was clearly separated from the virulent cluster of B. pseudomallei and B. mallei. Finally, differential expression of the co-transcript of BpCand697 and its flanking gene, bpsl2391, was detected in Burkholderia pseudomallei D286 after growth under two different culture conditions, using nutrient-rich and minimal media. It is hypothesized that the differential expression of the BpCand697-bpsl2391 co-transcript between the two standard media might correlate with nutrient availability, suggesting that the physical co-localization of BpCand697 in B. pseudomallei D286 might be directly or indirectly involved in the regulation of the bpsl2391 transcript under the selected in vitro culture conditions.
Sun, Yu; Collinson, Simon L; Suckling, John; Sim, Kang
2018-06-07
Emerging evidence suggests that schizophrenia is associated with brain dysconnectivity. Nonetheless, the implicit assumption of stationary functional connectivity (FC) adopted in most previous resting-state functional magnetic resonance imaging (fMRI) studies leaves open the question of schizophrenia-related aberrations in the dynamic properties of resting-state FC. This study introduces an empirical method to examine dynamic functional dysconnectivity in patients with schizophrenia. Temporal brain networks were estimated from resting-state fMRI of 2 independent datasets (patients/controls = 18/19 and 53/57 for a self-recorded dataset and a publicly available replication dataset, respectively) by correlating sliding time-windowed time courses among regions of a predefined atlas. Through the newly introduced temporal efficiency approach and temporal random network models, we examined, for the first time, the 3D spatiotemporal architecture of the temporal brain network. We found that although prominent temporal small-world properties were revealed in both groups, the temporal brain networks of patients with schizophrenia in both datasets showed significantly higher temporal global efficiency, which cannot simply be attributed to head motion or sampling error. Specifically, we found localized changes of temporal nodal properties in the left frontal, right medial parietal, and subcortical areas that were associated with clinical features of schizophrenia. Our findings demonstrate that altered dynamic FC may underlie the abnormal brain function and clinical symptoms observed in schizophrenia. Moreover, we provide new evidence to extend the dysconnectivity hypothesis in schizophrenia from static to dynamic brain networks and highlight the potential of aberrant brain dynamic FC in unraveling the pathophysiologic mechanisms of the disease.
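The sliding-window estimation of time-resolved FC that underlies such temporal networks can be illustrated with a minimal NumPy sketch (window length and step are illustrative values, not the study's actual parameters):

```python
import numpy as np

def dynamic_fc(ts, win=30, step=5):
    """Sliding-window functional connectivity.
    ts: (timepoints x regions) array of regional time courses;
    returns a (windows x regions x regions) stack of correlation matrices."""
    T, _ = ts.shape
    starts = range(0, T - win + 1, step)
    return np.stack([np.corrcoef(ts[s:s + win], rowvar=False) for s in starts])

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 10))  # 120 timepoints, 10 atlas regions
fc = dynamic_fc(ts)                  # one 10x10 correlation matrix per window
```

Thresholding each windowed matrix then yields the time-varying graph on which temporal efficiency measures are computed.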
Shah, Keneil K; Oleske, James M; Gomez, Hernan F; Davidow, Amy L; Bogden, John D
2017-06-01
To determine whether there are substantial differences by state between 2 large datasets in the proportion of children with elevated blood lead levels (BLLs); to identify states in which the percentage of elevated BLLs is high in either or both datasets; and to compare the percentage of elevated BLLs in individual states with those of children living in Flint, Michigan, during the months when these children were exposed to lead-contaminated drinking water. Tables of BLLs for individual states from the Quest Diagnostics and the Centers for Disease Control and Prevention datasets for 2014-2015, containing more than 3 million BLLs of young children ≤6 years old, were constructed to compare the Quest Diagnostics and Centers for Disease Control and Prevention data with one another and with BLLs available for Flint children for 2014-2015. For some states, the percentages of BLLs ≥5.0 µg/dL are similar in the 2 datasets, whereas for other states, the datasets differ substantially in the percentage of BLLs ≥5.0 µg/dL. The percentage of BLLs ≥5.0 µg/dL is greater in some states in both datasets than observed in Flint when children were exposed to contaminated water. The data presented in this study can be a resource for pediatricians and public health professionals involved in the design of state programs to reduce lead exposure (primary prevention) and identify children with elevated BLLs (secondary prevention). Published by Elsevier Inc.
Tsybovskii, I S; Veremeichik, V M; Kotova, S A; Kritskaya, S V; Evmenenko, S A; Udina, I G
2017-02-01
For the Republic of Belarus, the development of a forensic reference database based on 18 autosomal microsatellites (STR) is described, using a population dataset (N = 1040), a “familial” genotypic dataset (N = 2550) obtained from paternity testing expertise, and a dataset of genotypes from a criminal registration database (N = 8756). The population samples studied consist of 80% ethnic Belarusians and 20% individuals of other nationality or of mixed origin (by questionnaire data). The sample includes genotypes of 12346 inhabitants of the Republic of Belarus from 118 regional samples studied at 18 autosomal microsatellites: 16 tetranucleotide STR (D2S1338, TPOX, D3S1358, CSF1PO, D5S818, D8S1179, D7S820, TH01, vWA, D13S317, D16S539, D18S51, D19S433, D21S11, F13B, and FGA) and two pentanucleotide STR (Penta D and Penta E). The samples studied are in Hardy–Weinberg equilibrium with respect to the distribution of genotypes at the 18 STR. No significant differences were detected between discrete populations or between samples from various historical ethnographic regions of the Republic of Belarus (Western and Eastern Polesie, Podneprovye, Ponemanye, Poozerye, and the Center), indicating the absence of prominent genetic differentiation. Statistically significant differences between the studied genotypic datasets were also not detected, which made it possible to combine the datasets and treat the total sample as a unified forensic reference database for the 18 “criminalistic” STR loci. No differences in the distribution of autosomal STR alleles were detected between the reference database of the Republic of Belarus and those of Russians and Ukrainians, consistent with the close genetic relationship of the three Eastern Slavic peoples mediated by common origin and intense mutual migration. Significant differences at individual STR loci were observed between the reference database of the Republic of Belarus and populations of Southern and Western Slavs.
The necessity of using an original reference database to support forensic expertise practice in the Republic of Belarus was thus demonstrated.
DMirNet: Inferring direct microRNA-mRNA association networks.
Lee, Minsu; Lee, HyungJune
2016-12-05
MicroRNAs (miRNAs) play important regulatory roles in a wide range of biological processes by inducing target mRNA degradation or translational repression. Based on the correlation between the expression profiles of a miRNA and its target mRNA, various computational methods have previously been proposed to identify miRNA-mRNA association networks by incorporating matched miRNA and mRNA expression profiles. However, three major issues remain to be resolved in the conventional computational approaches for inferring miRNA-mRNA association networks from expression profiles. 1) Correlations inferred from the observed expression profiles using conventional correlation-based methods include numerous erroneous links or overestimated edge weights, owing to transitive information flow among direct associations. 2) Due to the high-dimension-low-sample-size nature of microarray datasets, it is difficult to obtain an accurate and reliable estimate of the empirical correlations between all pairs of expression profiles. 3) Because previously proposed computational methods usually suffer from varying performance across different datasets, a more reliable model that guarantees optimal or suboptimal performance across datasets is highly needed. In this paper, we present DMirNet, a new framework for identifying direct miRNA-mRNA association networks. To tackle the aforementioned issues, DMirNet incorporates 1) three direct correlation estimation methods (namely Corpcor, SPACE, and Network deconvolution) to infer direct miRNA-mRNA association networks, 2) bootstrapping to fully utilize insufficient training expression profiles, and 3) rank-based ensemble aggregation to build a reliable and robust model across different datasets. Our empirical experiments on three datasets demonstrate the combinatorial effects of the necessary components in DMirNet.
Additional performance comparison experiments show that DMirNet outperforms the state-of-the-art ensemble-based model [1], which had previously shown the best performance across the same three datasets, by a factor of up to 1.29. Further, we identify 43 putative novel multi-cancer-related miRNA-mRNA association relationships from an inferred top-1000 direct miRNA-mRNA association network. We believe that DMirNet is a promising method to identify novel direct miRNA-mRNA relations and to elucidate direct miRNA-mRNA association networks. Since DMirNet infers direct relationships from the observed data, it can contribute to reconstructing various direct regulatory pathways, including, but not limited to, direct miRNA-mRNA association networks.
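The rank-based ensemble aggregation step can be sketched as follows (a generic illustration, assuming each estimator emits one score per candidate miRNA-mRNA edge; the variable names and scores are hypothetical, not DMirNet's exact scheme):

```python
import numpy as np

def rank_aggregate(score_lists):
    """Average the per-estimator ranks of each candidate edge;
    a higher aggregated rank means a stronger consensus association."""
    ranks = [s.argsort().argsort() for s in score_lists]  # rank 0 = weakest edge
    return np.mean(ranks, axis=0)

# hypothetical edge scores from three direct-correlation estimators
corpcor   = np.array([0.9, 0.1, 0.5])
space     = np.array([0.8, 0.2, 0.7])
netdeconv = np.array([0.7, 0.3, 0.6])
consensus = rank_aggregate([corpcor, space, netdeconv])  # edge 0 ranks highest
```

Working on ranks rather than raw scores makes the aggregation insensitive to the different value ranges the individual estimators produce.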
NASA Astrophysics Data System (ADS)
Xin, H.; Thurber, C. H.; Zhang, H.; Wang, F.
2014-12-01
A number of geophysical studies have been carried out along the San Andreas Fault (SAF) in the Northern Gabilan Range (NGR) with the purpose of characterizing in detail the fault zone structure. Previous seismic research has revealed the complex structure of the crustal volume in the NGR region in two dimensions (Thurber et al., 1996, 1997), and there has been some work on the three-dimensional (3D) structure at a coarser scale (Lin and Roecker, 1997). In our study we use earthquake body-wave arrival times and differential times (P and S) and explosion arrival times (P only) to image the 3D P- and S-wave velocity structure of the upper crust along the SAF in the NGR using double-difference (DD) tomography. The earthquake and explosion data types have complementary strengths: the earthquake data have good resolution at depth and resolve both Vp and Vs structure, although only where there are sufficient seismic rays between hypocenters and stations, whereas the explosions contribute very good near-surface resolution, but for P waves only. The original dataset analyzed by Thurber et al. (1996, 1997) included data from 77 local earthquakes and 8 explosions. We enlarge the dataset with 114 more earthquakes that occurred in the study area, obtain improved S-wave picks using an automated picker, and include absolute and cross-correlation differential times. The inversion code we use is the algorithm tomoDD (Zhang and Thurber, 2003). We assess how the P and S velocity models and earthquake locations vary as we alter the inversion parameters and the inversion grid. The new inversion results clearly show the fine-scale structure of the SAF at depth in 3D, sharpening the image of the velocity contrast from the southwest side to the northeast side.
Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano
2013-01-01
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs, as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli. 
PMID:24194828
Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin
2017-11-01
Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm (of complexity O(D)) based on a large ensemble of CFB trackers. The ensemble (of size O(2^D)) is organized over a binary tree (depth D), and learns the target appearance subspaces such that each constituent tracker becomes an expert on a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters, and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.
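At the heart of any CFB tracker is a single filter learned in the Fourier domain. A minimal MOSSE-style sketch of that building block is shown below (one filter only, with hypothetical helper names; it is not the paper's O(2^D) tree ensemble or its spatial windowing):

```python
import numpy as np

def train_filter(patch, target, lam=1e-2):
    """Closed-form correlation filter in the Fourier domain:
    H = (G . conj(F)) / (F . conj(F) + lam), lam regularizes."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlation response map; its peak locates the target."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

rng = np.random.default_rng(1)
patch = rng.standard_normal((32, 32))     # stand-in for a target appearance patch
target = np.zeros((32, 32))
target[0, 0] = 1.0                        # desired response: a peak at the origin
H = train_filter(patch, target)
resp = respond(H, patch)                  # peak at (0, 0): filter recovers the trained location
```

The ensemble approach of the paper keeps many such filters, one per learned appearance, and consults only the relevant experts at each frame.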
Population of 224 realistic human subject-based computational breast phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, David W.; Wells, Jered R., E-mail: jered.wells@duke.edu; Sturgeon, Gregory M.
Purpose: To create a database of highly realistic and anatomically variable 3D virtual breast phantoms based on dedicated breast computed tomography (bCT) data. Methods: A tissue classification and segmentation algorithm was used to create realistic and detailed 3D computational breast phantoms based on 230+ dedicated bCT datasets from normal human subjects. The breast volume was identified using a coarse three-class fuzzy C-means segmentation algorithm which accounted for and removed motion blur at the breast periphery. Noise in the bCT data was reduced through application of a postreconstruction 3D bilateral filter. A 3D adipose nonuniformity (bias field) correction was then applied followed by glandular segmentation using a 3D bias-corrected fuzzy C-means algorithm. Multiple tissue classes were defined including skin, adipose, and several fractional glandular densities. Following segmentation, a skin mask was produced which preserved the interdigitated skin, adipose, and glandular boundaries of the skin interior. Finally, surface modeling was used to produce digital phantoms with methods complementary to the XCAT suite of digital human phantoms. Results: After rejecting some datasets due to artifacts, 224 virtual breast phantoms were created which emulate the complex breast parenchyma of actual human subjects. The volume breast density (with skin) ranged from 5.5% to 66.3% with a mean value of 25.3% ± 13.2%. Breast volumes ranged from 25.0 to 2099.6 ml with a mean value of 716.3 ± 386.5 ml. Three breast phantoms were selected for imaging with digital compression (using finite element modeling) and simple ray-tracing, and the results show promise in their potential to produce realistic simulated mammograms. Conclusions: This work provides a new population of 224 breast phantoms based on in vivo bCT data for imaging research.
Compared to previous studies based on only a few prototype cases, this dataset provides a rich source of new cases spanning a wide range of breast types, volumes, densities, and parenchymal patterns.
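The fuzzy C-means step used for tissue classification can be illustrated on 1D intensities (a textbook FCM sketch under the assumption of Euclidean distance and quantile initialization; the paper's coarse three-class and bias-corrected variants add considerably more machinery):

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    """Textbook fuzzy C-means on 1D intensities: returns the
    (n x c) membership matrix and the c cluster centers."""
    centers = np.quantile(x, np.linspace(0.05, 0.95, c))  # spread the initial centers
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # guard against d == 0
        u = d ** (-2.0 / (m - 1.0))                       # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u, centers

# three synthetic "tissue" intensity groups
x = np.concatenate([np.full(50, 0.1), np.full(50, 0.5), np.full(50, 0.9)])
u, centers = fuzzy_cmeans(x)  # centers converge near 0.1, 0.5, 0.9
```

The soft memberships in `u` are what allow fractional glandular densities to be assigned, rather than forcing each voxel into a single hard class.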
Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier
2017-07-15
In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be highly sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n ≤ 35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with several recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the other 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still ranking near the top (3rd position) when using only T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy of WM lesion segmentation compared with the other evaluated methods, and also correlates highly (r ≥ 0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
Schoof, Rosalind A; Johnson, Dina L; Handziuk, Emma R; Landingham, Cynthia Van; Feldpausch, Alma M; Gallagher, Alexa E; Dell, Linda D; Kephart, Amy
2016-10-01
Lead exposure and blood lead levels (BLLs) in the United States have declined dramatically since the 1970s as many widespread lead uses have been discontinued. Large scale mining and mineral processing represents an additional localized source of potential lead exposure in many historical mining communities, such as Butte, Montana. After 25 years of ongoing remediation efforts and a residential metals abatement program that includes blood lead monitoring of Butte children, examination of blood lead trends offers a unique opportunity to assess the effectiveness of Butte's lead source and exposure reduction measures. This study examined BLL trends in Butte children ages 1-5 (n = 2796) from 2003-2010 as compared to a reference dataset matched for similar demographic characteristics over the same period. Blood lead differences across Butte during the same period are also examined. Findings are interpreted with respect to effectiveness of remediation and other factors potentially contributing to ongoing exposure concerns. BLLs from Butte were compared with a reference dataset (n = 2937) derived from the National Health and Nutrition Examination Survey. The reference dataset was initially matched for child age and sample dates. Additional demographic factors associated with higher BLLs were then evaluated. Weights were applied to make the reference dataset more consistent with the Butte dataset for the three factors that were most disparate (poverty-to-income ratio, house age, and race/ethnicity). A weighted linear mixed regression model showed Butte geometric mean BLLs were higher than reference BLLs for 2003-2004 (3.48 vs. 2.05 µg/dL), 2005-2006 (2.65 vs. 1.80 µg/dL), and 2007-2008 (2.2 vs. 1.72 µg/dL), but comparable for 2009-2010 (1.53 vs. 1.51 µg/dL). This trend suggests that, over time, the impact of other factors that may be associated with Butte BLLs has been reduced.
Neighborhood differences were examined by dividing the Butte dataset into the older area called "Uptown", located at higher elevation atop historical mine workings, and "the Flats", at lower elevation and more recently developed. Significant declines in BLLs were observed over time in both areas, though Uptown had slightly higher BLLs than the Flats (2003-2004: 3.57 vs. 3.45 µg/dL, p=0.7; 2005-2006: 2.84 vs. 2.52 µg/dL, p=0.1; 2007-2008: 2.58 vs. 1.99 µg/dL, p=0.001; 2009-2010: 1.71 vs. 1.44 µg/dL, p=0.02). BLLs were higher when tested in summer/fall than in winter/spring for both neighborhoods, and statistically higher BLLs were found for children in Uptown living in properties built before 1940. Neighborhood differences and the persistence of a greater percentage of high BLLs (>5 µg/dL) in Butte vs. the reference dataset support continuation of the home lead abatement program. Butte BLL declines likely reflect the cumulative effectiveness of screening efforts, community-wide remediation, and the ongoing metals abatement program in Butte in addition to other factors not accounted for by this study. As evidenced in Butte, abatement programs that include home evaluations and assistance in addressing multiple sources of lead exposure can be an important complement to community-wide soil remediation activities. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Gongwen; Ma, Zhenbo; Li, Ruixi; Song, Yaowu; Qu, Jianan; Zhang, Shouting; Yan, Changhai; Han, Jiangwei
2017-04-01
In this paper, multi-source (geophysical, geochemical, geological and remote sensing) datasets were used to construct multi-scale (district-, deposit-, and orebody-scale) 3D geological models and to extract 3D exploration criteria for subsurface Mo-polymetallic exploration targeting in the Luanchuan district, China. The results indicate that (i) a series of region- to district-scale NW-trending thrusts, formed during the regional Indosinian Qinling orogenic events, controlled the main Mo-polymetallic mineralization; the secondary NW-trending district-scale folds, the NE-trending faults, and the intrusive stock structure developed on this thrust structure during the Caledonian-Indosinian orogenic events and constitute the ore-bearing zones and ore-forming structures; (ii) in 3D space, the NW-trending district-scale and NE-trending deposit-scale normal faults are crossed and controlled by the Jurassic granite stocks; they are associated with the magma-skarn Mo-polymetallic mineralization (the 3D buffer distance of the ore-forming granite stocks is 600 m) and with the NW-trending hydrothermal Pb-Zn deposits, which are surrounded by the Jurassic granite stocks and constrained by NW- or NE-trending faults (the 3D buffer distance of the ore-forming faults is 700 m); and (iii) nine Mo-polymetallic and four Pb-Zn targets were identified in the subsurface of the Luanchuan district.
Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI.
Valvano, Giuseppe; Martini, Nicola; Landini, Luigi; Santarelli, Maria Filomena
2016-07-01
To develop a 3D sampling strategy based on a stack of variable density spirals for compressive sensing MRI. A random sampling pattern was obtained by rotating each spiral by a random angle and by delaying the gradient waveforms of the different interleaves by a few time steps. A three-dimensional (3D) variable sampling density was obtained by designing different variable density spirals for each slice encoding. The proposed approach was tested with phantom simulations up to a five-fold undersampling factor. Fully sampled 3D datasets of a human knee and of a human brain were obtained from a healthy volunteer. The proposed approach was tested with off-line reconstructions of the knee dataset up to a four-fold acceleration and compared with other noncoherent trajectories. The proposed approach outperformed the standard stack of spirals for various undersampling factors. The level of coherence and the reconstruction quality of the proposed approach were similar to those of other trajectories that, however, require 3D gridding for the reconstruction. The variable density randomized stack of spirals (VDR-SoS) is an easily implementable trajectory that could represent a valid sampling strategy for 3D compressive sensing MRI. It guarantees low levels of coherence without requiring 3D gridding. Magn Reson Med 76:59-69, 2016. © 2015 Wiley Periodicals, Inc.
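The sampling idea, one variable-density spiral per slice encoding with a random per-slice rotation, can be sketched as follows (a toy trajectory generator with made-up parameters; it ignores gradient hardware constraints and the interleave-delay randomization described above):

```python
import numpy as np

def vdr_sos(n_slices=8, n_samples=512, turns=16, density=2.0, seed=0):
    """Toy variable-density randomized stack of spirals.
    Returns (n_slices x n_samples x 2) normalized k-space coordinates."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_samples)
    slices = []
    for _ in range(n_slices):
        r = t ** density                       # slow radial growth -> denser k-space center
        phase = rng.uniform(0.0, 2.0 * np.pi)  # random rotation decorrelates aliasing across slices
        theta = 2.0 * np.pi * turns * t + phase
        slices.append(np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1))
    return np.stack(slices)

k = vdr_sos()  # most samples cluster near the k-space center
```

Raising `density` above 1 concentrates samples at low spatial frequencies, which is where most MR image energy lies, while the random rotations keep the residual aliasing incoherent, as compressive sensing reconstruction requires.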
Real-time 3D ultrasound imaging of infant tongue movements during breast-feeding.
Burton, Pat; Deng, Jing; McDonald, Daren; Fewtrell, Mary S
2013-09-01
Whether infants use suction or peristaltic tongue movements or a combination to extract milk during breast-feeding is controversial. The aims of this pilot study were 1) to evaluate the feasibility of using 3D ultrasound scanning to visualise infant tongue movements; and 2) to ascertain whether peristaltic tongue movements could be demonstrated during breast-feeding. 15 healthy term infants, aged 2 weeks to 4 months, were scanned during breast-feeding using a real-time 3D ultrasound system with a 7 MHz transducer placed submentally. 1) The method proved feasible, with 72% of bi-plane datasets and 56% of real-time 3D datasets providing adequate coverage (>75%) of the infant tongue. 2) Peristaltic tongue movement was observed in 13 of 15 infants (87%) from real-time or reformatted truly mid-sagittal views under 3D guidance. This is the first study to demonstrate the feasibility of using 3D ultrasound to visualise infant tongue movements during breast-feeding. Peristaltic infant tongue movement was present in the majority of infants when the image plane was truly mid-sagittal, but was not apparent if the image was slightly off the mid-sagittal plane. This should be considered in studies investigating the relative importance of vacuum and peristalsis for milk transfer. Copyright © 2013 Elsevier Ltd. All rights reserved.
3D visualization of numeric planetary data using JMARS
NASA Astrophysics Data System (ADS)
Dickenshied, S.; Christensen, P. R.; Anwar, S.; Carter, S.; Hagee, W.; Noss, D.
2013-12-01
JMARS (Java Mission-planning and Analysis for Remote Sensing) is a free geospatial application developed by the Mars Space Flight Facility at Arizona State University. Originally written as a mission planning tool for the THEMIS instrument on board the Mars Odyssey spacecraft, it was released as an analysis tool to the general public in 2003. Since then it has expanded to be used for mission planning and scientific data analysis by additional NASA missions to Mars, the Moon, and Vesta, and it has come to be used by scientists, researchers and students of all ages from more than 40 countries around the world. The public version of JMARS now also includes remote sensing data for Mercury, Venus, Earth, the Moon, Mars, and a number of the moons of Jupiter and Saturn. Additional datasets for asteroids and other smaller bodies are being added as they become available and time permits. In addition to visualizing multiple datasets in context with one another, significant effort has been put into on-the-fly projection of georegistered data over surface topography. This functionality allows a user to easily create and modify 3D visualizations of any regional scene where elevation data is available in JMARS. This can be accomplished through the use of global topographic maps or regional numeric data such as HiRISE or HRSC DTMs. Users can also upload their own regional or global topographic dataset and use it as an elevation source for 3D rendering of their scene. The 3D Layer in JMARS allows the user to exaggerate the z-scale of any elevation source to emphasize the vertical variance throughout a scene. In addition, the user can rotate, tilt, and zoom the scene to any desired angle and then illuminate it with an artificial light source. This scene can be easily overlain with additional JMARS datasets such as maps, images, shapefiles, contour lines, or scale bars, and the scene can be easily saved as a graphic image for use in presentations or publications.
TeraStitcher - A tool for fast automatic 3D-stitching of teravoxel-sized microscopy images
2012-01-01
Background Further advances in modern microscopy are leading to teravoxel-sized tiled 3D images at high resolution, increasing the dimension of the stitching problem by at least two orders of magnitude. Existing software solutions do not seem adequate for the additional requirements arising from these datasets, such as the minimization of memory usage and the need to process only a small portion of data at a time. Results We propose a free and fully automated 3D stitching tool designed to meet the special requirements of teravoxel-sized tiled microscopy images and able to stitch them in a reasonable time even on workstations with limited resources. The tool was tested on teravoxel-sized whole mouse brain images with micrometer resolution and was also compared with state-of-the-art stitching tools on megavoxel-sized publicly available datasets. This comparison confirmed that the solutions we adopted are suited for stitching very large images and also perform well on datasets with different characteristics. Indeed, some of the algorithms embedded in other stitching tools could easily be integrated in our framework if they turned out to be more effective on other classes of images. To this purpose, we designed a software architecture which separates the strategies for efficient use of memory resources from the algorithms which may depend on the characteristics of the acquired images. Conclusions TeraStitcher is a free tool that enables the stitching of teravoxel-sized tiled microscopy images even on workstations with relatively limited memory (<8 GB) and processing power. It exploits the knowledge of approximate tile positions and uses ad hoc strategies and algorithms designed for such very large datasets. The produced images can be saved in a multiresolution representation to be efficiently retrieved and processed. We provide TeraStitcher both as a standalone application and as a plugin of the free software Vaa3D. PMID:23181553
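The core alignment step such a stitcher automates can be conveyed with a toy sketch: refining a known approximate offset between two tiles by maximizing normalized cross-correlation over a small search window, so that only the overlap strips are ever read. This is an illustrative 1-D simplification under assumed tile geometry, not TeraStitcher's actual implementation:

```python
# Illustrative sketch (not TeraStitcher code): refine the approximate offset
# between two overlapping tiles by maximizing normalized cross-correlation,
# touching only the small overlap region of each tile.
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sqrt(sum((x - ma) ** 2 for x in a))
    db = sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def refine_offset(tile_a, tile_b, approx, search=3, overlap=8):
    """Search offsets around `approx`; return the one with the highest NCC
    between a trailing strip of tile_a and the leading strip of tile_b."""
    best = (float("-inf"), approx)
    for off in range(approx - search, approx + search + 1):
        strip_a = tile_a[off:off + overlap]
        strip_b = tile_b[:overlap]
        if len(strip_a) == overlap:
            best = max(best, (ncc(strip_a, strip_b), off))
    return best[1]

# Two 1-D "tiles" cut from one signal, overlapping with true offset 10
signal = [(i * 7) % 13 for i in range(30)]
tile_a = signal[:20]
tile_b = signal[10:]
print(refine_offset(tile_a, tile_b, approx=9))  # recovers 10
```

In the real 2D/3D problem the same idea applies per tile pair, and the coarse offsets come from the microscope stage metadata, which is why only small overlap volumes ever need to be loaded into memory.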
NASA Astrophysics Data System (ADS)
Nesbit, P. R.; Hugenholtz, C.; Durkin, P.; Hubbard, S. M.; Kucharczyk, M.; Barchyn, T.
2016-12-01
Remote sensing and digital mapping have begun to revolutionize geologic mapping in recent years, owing to their potential to provide high-resolution 3D models of outcrops that assist with interpretation, visualization, and accurate measurement of inaccessible areas. However, in stratigraphic mapping applications in complex terrain, it is difficult to acquire sufficiently detailed information over wide spatial coverage with conventional techniques. We demonstrate the potential of a UAV and Structure from Motion (SfM) photogrammetric approach for improving 3D stratigraphic mapping within a complex badland topography. Our case study is performed in Dinosaur Provincial Park (Alberta, Canada), mapping Late Cretaceous fluvial meander belt deposits of the Dinosaur Park Formation amidst a succession of steeply sloping hills and abundant drainages - creating a challenge for stratigraphic mapping. The UAV-SfM dataset (2 cm spatial resolution) is compared directly with a combined satellite and aerial LiDAR dataset (30 cm spatial resolution) to reveal the advantages and limitations of each before presenting a unique workflow that utilizes the dense point cloud from the UAV-SfM dataset for analysis. The UAV-SfM dense point cloud minimizes distortion, preserves 3D structure, and records an RGB attribute - adding potential value in future studies. The proposed UAV-SfM workflow allows for high spatial resolution remote sensing of stratigraphy in complex topographic environments. This extended capability can add value to field observations and has the potential to be integrated with subsurface petroleum models.
3D Visualizations of Abstract DataSets
2010-08-01
Subject terms: 3D displays, 2.5D displays, abstract network visualizations, depth perception, human factors. Surviving fragments indicate the report contrasts depth cues (no shadows, drop shadows, drop lines) for altitude and heading perception in airspace management and route planning, and asks whether cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract visualizations.
Single-shot diffraction data from the Mimivirus particle using an X-ray free-electron laser.
Ekeberg, Tomas; Svenda, Martin; Seibert, M Marvin; Abergel, Chantal; Maia, Filipe R N C; Seltzer, Virginie; DePonte, Daniel P; Aquila, Andrew; Andreasson, Jakob; Iwan, Bianca; Jönsson, Olof; Westphal, Daniel; Odić, Duško; Andersson, Inger; Barty, Anton; Liang, Meng; Martin, Andrew V; Gumprecht, Lars; Fleckenstein, Holger; Bajt, Saša; Barthelmess, Miriam; Coppola, Nicola; Claverie, Jean-Michel; Loh, N Duane; Bostedt, Christoph; Bozek, John D; Krzywinski, Jacek; Messerschmidt, Marc; Bogan, Michael J; Hampton, Christina Y; Sierra, Raymond G; Frank, Matthias; Shoeman, Robert L; Lomb, Lukas; Foucar, Lutz; Epp, Sascha W; Rolles, Daniel; Rudenko, Artem; Hartmann, Robert; Hartmann, Andreas; Kimmel, Nils; Holl, Peter; Weidenspointner, Georg; Rudek, Benedikt; Erk, Benjamin; Kassemeyer, Stephan; Schlichting, Ilme; Strüder, Lothar; Ullrich, Joachim; Schmidt, Carlo; Krasniqi, Faton; Hauser, Günter; Reich, Christian; Soltau, Heike; Schorb, Sebastian; Hirsemann, Helmut; Wunderer, Cornelia; Graafsma, Heinz; Chapman, Henry; Hajdu, Janos
2016-08-01
Free-electron lasers (FELs) hold the potential to revolutionize structural biology by producing X-ray pulses short enough to outrun radiation damage, thus allowing imaging of biological samples without this limitation. A major part of the scientific case for the first FELs was therefore three-dimensional (3D) reconstruction of non-crystalline biological objects. In a recent publication we demonstrated the first 3D reconstruction of a biological object from an X-ray FEL using this technique. The sample was the giant Mimivirus, one of the largest known viruses, with a diameter of 450 nm. Here we present the dataset used for this successful reconstruction. Data-analysis methods for single-particle imaging at FELs are undergoing heavy development, but data collection relies on very limited beam time awarded through a highly competitive proposal process. This dataset provides experimental data to the entire community; it could boost algorithm development and serve as a benchmark for new algorithms.
The coming paradigm shift: A transition from manual to automated microscopy.
Farahani, Navid; Monteith, Corey E
2016-01-01
The field of pathology has used light microscopy (LM) extensively since the mid-19th century for the examination of histological tissue preparations. This technology has remained the foremost tool of pathologists even as new technologies have transformed other fields in recent years. However, as new microscopy techniques are perfected and made available, this reliance on standard LM will likely begin to change. Advanced imaging involving both diffraction-limited and subdiffraction techniques is bringing nondestructive, high-resolution, molecular-level imaging to pathology. Some of these technologies can produce three-dimensional (3D) datasets from sampled tissues. In addition, block-face/tissue-sectioning techniques are already providing automated, large-scale 3D datasets of whole specimens. These datasets allow pathologists to see an entire sample with all of its spatial information intact, and furthermore allow image analyses such as detection, segmentation, and classification, which are impossible in standard LM. It is likely that these technologies herald a major paradigm shift in the field of pathology.
Hybrid 3D printing: a game-changer in personalized cardiac medicine?
Kurup, Harikrishnan K N; Samuel, Bennett P; Vettukattil, Joseph J
2015-12-01
Three-dimensional (3D) printing in congenital heart disease has the potential to increase procedural efficiency and patient safety by improving interventional and surgical planning and reducing radiation exposure. Cardiac magnetic resonance imaging and computed tomography are usually the source datasets to derive 3D printing. More recently, 3D echocardiography has been demonstrated to derive 3D-printed models. The integration of multiple imaging modalities for hybrid 3D printing has also been shown to create accurate printed heart models, which may prove to be beneficial for interventional cardiologists, cardiothoracic surgeons, and as an educational tool. Further advancements in the integration of different imaging modalities into a single platform for hybrid 3D printing and virtual 3D models will drive the future of personalized cardiac medicine.
Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo
2018-01-01
Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66–96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges’ Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). 
These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard. PMID:29513690
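The discretization step described above can be sketched as follows. This is an illustrative reimplementation (Sturges' rule plus a plain Lloyd-style 1-D k-means), not the study's code, and the rounding convention in `sturges_k` is an assumption chosen so that n = 3,162 yields twelve classes:

```python
# Illustrative sketch: discretize a 1-D biometric into k subpopulations,
# with k from Sturges' formula, via Lloyd's algorithm in one dimension.
import math
import random

def sturges_k(n):
    # Sturges' rule; rounding conventions vary (floor here gives 12 for n=3162)
    return 1 + math.floor(math.log2(n))

def kmeans_1d(values, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    for _ in range(iters):
        # Assign each value to its nearest center
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        # Move each center to its cluster mean (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

k = sturges_k(3162)  # 12 under this rounding convention
centers = kmeans_1d([1.0, 1.1, 1.2, 5.0, 5.1, 9.0, 9.2], 3)
```

Each resulting center defines one subpopulation against which the NTRA and standard CT parameters can then be regressed.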
Automatic detection of lift-off and touch-down of a pick-up walker using 3D kinematics.
Grootveld, L; Thies, S B; Ogden, D; Howard, D; Kenney, L P J
2014-02-01
Walking aids have been associated with falls and it is believed that incorrect use limits their usefulness. Measures are therefore needed that characterize their stable use and the classification of key events in walking aid movement is the first step in their development. This study presents an automated algorithm for detection of lift-off (LO) and touch-down (TD) events of a pick-up walker. For algorithm design and initial testing, a single user performed trials for which the four individual walker feet lifted off the ground and touched down again in various sequences, and for different amounts of frame loading (Dataset_1). For further validation, ten healthy young subjects walked with the pick-up walker on flat ground (Dataset_2a) and on a narrow beam (Dataset_2b), to challenge balance. One 88-year-old walking frame user was also assessed. Kinematic data were collected with a 3D optoelectronic camera system. The algorithm detected over 93% of events (Dataset_1), and 95% and 92% in Dataset_2a and b, respectively. Of the various LO/TD sequences, those associated with natural progression resulted in up to 100% correctly identified events. For the 88-year-old walking frame user, 96% of LO events and 93% of TD events were detected, demonstrating the potential of the approach. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
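A minimal sketch of threshold-based event detection with hysteresis conveys the flavor of such an algorithm; the thresholds and the function `detect_events` are hypothetical and do not reproduce the published criteria:

```python
# Hedged sketch (not the published algorithm): a walker foot is flagged "off"
# once its marker height rises above a lift threshold and "on" again once it
# drops below a lower touch-down threshold; the two-threshold hysteresis
# avoids spurious events from noise near ground level.
def detect_events(heights, lift=0.02, touch=0.01):
    events, off = [], False
    for i, h in enumerate(heights):
        if not off and h > lift:
            events.append(("LO", i))   # lift-off detected
            off = True
        elif off and h < touch:
            events.append(("TD", i))   # touch-down detected
            off = False
    return events

# Simulated marker height (metres) for one lift-and-place cycle
z = [0.0, 0.005, 0.03, 0.06, 0.04, 0.008, 0.0]
print(detect_events(z))  # [('LO', 2), ('TD', 5)]
```

In practice each of the four walker feet would be tracked separately from the 3D marker trajectories, and the LO/TD sequence across feet characterizes how the frame is being used.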
Comparison of surgically induced astigmatism following different glaucoma operations.
Tanito, Masaki; Matsuzaki, Yukari; Ikeda, Yoshifumi; Fujihara, Etsuko
2017-01-01
To compare surgically induced astigmatism (SIA) among glaucomatous eyes treated with trabeculectomy (LEC), EX-PRESS® shunt (EXP), ab externo trabeculotomy (exLOT), or microhook ab interno trabeculotomy (μLOT). Eighty right eyes of 80 subjects who underwent LEC (n=20), EXP (n=20), exLOT (n=20), or μLOT (n=20) were included. The dataset including the best-corrected visual acuity (BCVA), intraocular pressure (IOP), and keratometry recordings preoperatively and 3 months postoperatively was collected by chart review. The means of the vector magnitude, vector meridian, and arithmetic magnitude of the preoperative and postoperative astigmatism and SIA were calculated. The correlations among the SIA magnitude, postoperative BCVA, and IOP were assessed. The mean astigmatic arithmetic magnitudes did not differ significantly (P=0.0732) preoperatively among the four groups, but the magnitude was significantly (P=0.0002) greater in the LEC group than the other groups postoperatively. The mean SIA vectors were calculated to be 1.01 D at 56°, 0.62 D at 74°, 0.23 D at 112°, and 0.12 D at 97° for the LEC, EXP, exLOT, and μLOT groups, respectively. The mean SIA arithmetic magnitudes were significantly (P<0.0001) greater in the LEC group than the other groups. Three months postoperatively, the SIA magnitude was correlated positively with the logarithm of the minimum angle of resolution (logMAR) BCVA (r=0.3538) and negatively with the IOP (r=−0.3265); the logMAR BCVA was correlated negatively with the IOP (r=−0.3105). EXP, exLOT, and μLOT induce less corneal astigmatism than LEC in the early postoperative period.
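SIA vector means like those above are conventionally computed with doubled-angle vector analysis, in which astigmatism of magnitude M at axis θ maps to the Cartesian vector (M cos 2θ, M sin 2θ). The sketch below assumes this standard method; the paper's exact vector procedure may differ:

```python
# Doubled-angle vector analysis of surgically induced astigmatism (a standard
# approach; assumed here, not necessarily the study's exact method).
import math

def to_xy(mag, axis_deg):
    a = math.radians(2 * axis_deg)   # double the axis angle
    return (mag * math.cos(a), mag * math.sin(a))

def sia(pre_mag, pre_axis, post_mag, post_axis):
    """SIA = postoperative minus preoperative astigmatism, as vectors."""
    px, py = to_xy(pre_mag, pre_axis)
    qx, qy = to_xy(post_mag, post_axis)
    dx, dy = qx - px, qy - py
    mag = math.hypot(dx, dy)
    axis = math.degrees(math.atan2(dy, dx)) / 2 % 180  # back to 0-180 deg
    return mag, axis

# Example: 1.0 D at 90 deg preop, 1.5 D at 90 deg postop
m, ax = sia(1.0, 90, 1.5, 90)  # 0.5 D of induced cylinder at 90 deg
```

Doubling the angle is what makes 0° and 180° (the same meridian) coincide, so averaging SIA vectors across eyes is meaningful.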
Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle
2009-10-19
Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high-energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and based on the information derived performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception
2017-12-01
computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650's Intel Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead…
Babitha, Pallikkara Pulikkal; Sahila, Mohammed Marunnan; Bandaru, Srinivas; Nayarisseri, Anuraj; Sureshkumar, Sivanpillai
2015-01-01
The present AChE inhibitors have been successful in the treatment of Alzheimer's disease but suffer from serious side effects. In this view, the present study sought to identify compounds with an appreciable pharmacological profile targeting AChE. Analogues of the Rivastigmine-Fluoxetine hybrid synthesized by Toda et al., 2003 (dataset 1), and Coumarin-Tacrine hybrids synthesized by Qi Sun et al. (dataset 2) formed the test compounds for the present pharmacological evaluation. The p-chlorophenyl-substituted Rivastigmine-Fluoxetine hybrid compound (26d) from dataset 1 and the -OCH3-substituted Coumarin-Tacrine hybrid (1h) from dataset 2 demonstrated superior pharmacological profiles. 26d was superior to all compounds in either dataset owing to its better electrostatic interactions and hydrogen-bonding patterns. In order to identify compounds with still better pharmacological profiles than 26d and 1h, virtual screening was performed. The best-docked compound (PubChem CID: 68874404) showed better affinity than its parent 26d, but a poor ADME profile and AMES toxicity. CHEMBL2391475 (PubChem CID: 71699632), similar to 1h, had reduced affinity in comparison to its parent compound 1h. From our extensive analysis involving binding affinity, ADMET property predictions and pharmacophoric mappings, we report the p-chlorophenyl-substituted Rivastigmine-Fluoxetine hybrid (26d) to be a potential candidate for AChE inhibition, which in addition can overcome the narrow therapeutic window of present AChE inhibitors in the clinical treatment of Alzheimer's disease. AD - Alzheimer's Disease; AChE - Acetylcholinesterase; OPLS - Optimized Potentials for Liquid Simulations; PDB - Protein Data Bank.
2007-06-01
Recoverable figure titles from the damaged abstract: the Los Angeles Area; Illustration of the Life Cycle of the Japanese Beetle; Knowledge Wall; Structure of World Trade in 1992; 3D Panorama Cinema at CAVI in Use; Holobench at CAVI; Virtual Studio. A surviving fragment of Section A.6, "Steps in Establishing a Theory of Effective Visualisation," breaks off mid-sentence.
A Robust Post-Processing Workflow for Datasets with Motion Artifacts in Diffusion Kurtosis Imaging
Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X.; Wan, Mingxi
2014-01-01
Purpose The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). Materials and methods The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifacts rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts with information of gradient directions and b values for the parameter estimation was investigated by using mean square error (MSE). The variance of noise was used as the criterion for MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). Results The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05). It indicated that LPCC was more sensitive in detecting motion artifacts. MSEs of all derived parameters from the reserved data after the artifacts rejection were smaller than the variance of the noise. It suggested that influence of rejected artifacts was less than influence of noise on the precision of derived parameters. The proposed workflow improved the image quality and reduced the measurement biases significantly on motion-corrupted datasets (p<0.05). 
Conclusion The proposed post-processing workflow was reliable to improve the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provided an effective post-processing method for clinical applications of DKI in subjects with involuntary movements. PMID:24727862
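The artifact-rejection idea can be sketched with a plain Pearson coefficient. Note that the paper's LPCC is computed locally per region, whereas this illustrative simplification scores each volume with a single global coefficient:

```python
# Simplified sketch of correlation-based artifact screening (the paper's LPCC
# is a local, region-wise coefficient; here one global Pearson coefficient per
# image illustrates the principle of flagging motion-corrupted volumes).
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def reject_artifacts(volumes, reference, threshold=0.9):
    """Keep only volumes whose correlation with the reference exceeds threshold."""
    return [v for v in volumes if pearson(v, reference) > threshold]

ref = [0, 1, 2, 3, 4, 5, 6, 7]
clean = [v + 0.01 for v in ref]        # nearly identical volume: kept
corrupt = [7, 0, 5, 1, 6, 2, 4, 3]     # values scrambled by motion: rejected
kept = reject_artifacts([clean, corrupt], ref)
```

Rejected volumes would then be excluded, along with their gradient directions and b values, before tensor estimation, which is why the paper checks that the resulting parameter error stays below the noise variance.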
2012-01-01
Background ChIP-seq provides new opportunities to study allele-specific protein-DNA binding (ASB). However, detecting allelic imbalance from a single ChIP-seq dataset often has low statistical power since only sequence reads mapped to heterozygote SNPs are informative for discriminating two alleles. Results We develop a new method iASeq to address this issue by jointly analyzing multiple ChIP-seq datasets. iASeq uses a Bayesian hierarchical mixture model to learn correlation patterns of allele-specificity among multiple proteins. Using the discovered correlation patterns, the model allows one to borrow information across datasets to improve detection of allelic imbalance. Application of iASeq to 77 ChIP-seq samples from 40 ENCODE datasets and 1 genomic DNA sample in GM12878 cells reveals that allele-specificity of multiple proteins are highly correlated, and demonstrates the ability of iASeq to improve allelic inference compared to analyzing each individual dataset separately. Conclusions iASeq illustrates the value of integrating multiple datasets in the allele-specificity inference and offers a new tool to better analyze ASB. PMID:23194258
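For contrast with iASeq's joint model, the single-dataset approach it improves upon is essentially a per-SNP test for allelic imbalance on the reads mapped to a heterozygous SNP. The sketch below is a minimal exact-binomial baseline, an illustration of that single-dataset test, not iASeq itself:

```python
# Baseline illustration (not iASeq): exact two-sided binomial test for
# allelic imbalance at one heterozygous SNP in a single ChIP-seq dataset.
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Two-sided p-value: sum of outcome probabilities no larger than P(k)."""
    pk = comb(n, k) * p**k * (1 - p)**(n - k)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1)
               if comb(n, i) * p**i * (1 - p)**(n - i) <= pk + 1e-12)

# 18 reference-allele reads out of 20 at a het SNP: strong imbalance
pval = binom_two_sided_p(18, 20)
print(pval < 0.05)
```

With only a handful of informative reads per SNP this test has low power, which is exactly the motivation for pooling evidence across datasets as iASeq does.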
NASA Astrophysics Data System (ADS)
Spica, Z. J.; Perton, M.; Calo, M.; Cordoba-Montiel, F.; Legrand, D.; Iglesias, A.
2015-12-01
Standard application of seismic ambient noise tomography requires synchronous records at stations for Green's function retrieval. More recent theoretical and experimental work showed the possibility of applying correlation of the coda of noise correlations (C3) to obtain Green's functions between stations of asynchronous seismic networks, making it possible to dramatically increase databases for imaging the Earth's interior. However, this possibility has not been fully exploited yet, and C3 data are not currently included in tomographic inversions to refine seismic structures. Here we show for the first time how to incorporate C1 and C3 data to calculate Rayleigh-wave dispersion maps in the period range of 10-120 s, and how merging these datasets improves the resolution of the structures imaged. Tomographic images are obtained for an area covering Mexico, the Gulf of Mexico and the southern U.S. We show dispersion maps calculated using both the C1 data alone and the complete dataset (C1+C3). The latter provides new details of the seismic structure of the region, allowing a better understanding of its role in the geodynamics of the study area. The resolving power obtained in our study is several times higher than in previous ambient-noise studies. This demonstrates the new possibilities for imaging the Earth's crust and upper mantle using this enlarged database.
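The C1 building block, cross-correlating noise records so the correlation peaks at the inter-station travel time, can be sketched minimally; C3 then repeats this correlation on the codas of C1 functions from common third stations. A toy illustration, not the authors' processing chain:

```python
# Minimal sketch of noise cross-correlation (C1): time-averaged correlation
# of diffuse noise at two stations converges toward the inter-station
# Green's function, peaking at the travel-time lag.
def xcorr(a, b, max_lag):
    n = len(a)
    out = []
    for lag in range(-max_lag, max_lag + 1):
        s = sum(a[i] * b[i + lag]
                for i in range(n) if 0 <= i + lag < n)
        out.append(s)
    return out

# A wavelet and a copy delayed by 3 samples: correlation peaks at lag 3
sta1 = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
sta2 = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
c1 = xcorr(sta1, sta2, max_lag=5)
lag = c1.index(max(c1)) - 5   # recovered travel-time lag
```

In the asynchronous case, two stations that never recorded simultaneously can still be linked by correlating the codas of their respective C1 correlations with a common third station, which is what enlarges the usable dataset.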
Chen, Zikuan; Calhoun, Vince D
2016-03-01
Conventionally, independent component analysis (ICA) is performed on an fMRI magnitude dataset to analyze brain functional mapping (AICA). By solving the inverse problem of fMRI, we can reconstruct brain magnetic susceptibility (χ) functional states. Upon the reconstructed χ dataspace, we propose an ICA-based brain functional χ mapping method (χICA) to extract task-evoked brain functional maps. A complex division algorithm is applied to a timeseries of fMRI phase images to extract temporal phase changes (relative to an OFF-state snapshot). A computed inverse MRI (CIMRI) model is used to reconstruct a 4D brain χ response dataset. χICA is implemented by applying a spatial InfoMax ICA algorithm to the reconstructed 4D χ dataspace. In finger-tapping experiments on a 7T system, the χICA-extracted χ-depicted functional map is similar to the SPM-inferred functional χ map, with a spatial correlation of 0.67 ± 0.05. In comparison, the AICA-extracted magnitude-depicted map is correlated with the SPM magnitude map by 0.81 ± 0.05. Understanding why χICA is inferior to AICA for task-evoked functional mapping is an ongoing research topic. For task-evoked brain functional mapping, we compare the data-driven ICA method with the task-correlated SPM method. In particular, we compare χICA with AICA for extracting task-correlated timecourses and functional maps. χICA can extract a χ-depicted task-evoked brain functional map from a reconstructed χ dataspace without knowledge of the brain's hemodynamic responses. The χICA-extracted brain functional χ map reveals a bidirectional BOLD response pattern that is unavailable from (or differs from) AICA. Copyright © 2016 Elsevier B.V. All rights reserved.
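The complex-division step can be illustrated with a minimal sketch: dividing an ON-state complex image by an OFF-state snapshot cancels the static background phase and magnitude, isolating the temporal phase change. The image size and the phase increment below are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8)
mag = 1.0 + rng.random(shape)                    # arbitrary magnitude image
phi_off = rng.uniform(-np.pi, np.pi, shape)      # OFF-state phase snapshot
dphi_true = 0.05                                 # small task-evoked phase change

s_off = mag * np.exp(1j * phi_off)               # complex OFF-state image
s_on = mag * np.exp(1j * (phi_off + dphi_true))  # complex ON-state image

# Complex division cancels the static background phase (and the magnitude),
# leaving only the temporal phase change; no spatial unwrapping is needed
# as long as the change stays within (-pi, pi).
dphi = np.angle(s_on / s_off)
```

This recovers the uniform 0.05 rad change everywhere, even though the raw OFF-state phase itself is wrapped and spatially random.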
NASA Astrophysics Data System (ADS)
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet all instruments and methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying Earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of the uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates made directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
Statistical link between external climate forcings and modes of ocean variability
NASA Astrophysics Data System (ADS)
Malik, Abdul; Brönnimann, Stefan; Perona, Paolo
2017-07-01
In this study we investigate the statistical link between external climate forcings and modes of ocean variability on inter-annual (3-year) to centennial (100-year) timescales using a de-trended semi-partial cross-correlation analysis technique. To investigate this link we employ observations (AD 1854-1999), climate proxies (AD 1600-1999), and coupled Atmosphere-Ocean-Chemistry Climate Model simulations with SOCOL-MPIOM (AD 1600-1999). We find robust statistical evidence that the Atlantic multi-decadal oscillation (AMO) has an intrinsic positive correlation with solar activity in all datasets employed. The strength of the relationship between the AMO and solar activity is modulated by volcanic eruptions and by complex interactions among modes of ocean variability. The observational dataset reveals that the El Niño-Southern Oscillation (ENSO) has a statistically significant negative intrinsic correlation with solar activity on decadal to multi-decadal timescales (16-27-year), whereas there is no evidence of a link on a typical ENSO timescale (2-7-year). In the observational dataset, volcanic eruptions have no link with the AMO on a typical AMO timescale (55-80-year); however, the long-term datasets (proxies and SOCOL-MPIOM output) show that volcanic eruptions have an intrinsic negative correlation with the AMO on inter-annual to multi-decadal timescales. The Pacific decadal oscillation has no link with solar activity; however, it has a positive intrinsic correlation with volcanic eruptions on multi-decadal timescales (47-54-year) in the reconstruction and on decadal to multi-decadal timescales (16-32-year) in the climate model simulations. We also find evidence of a link between volcanic eruptions and ENSO; however, the sign of the relationship is not consistent between observations/proxies and climate model simulations.
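The de-trended semi-partial correlation idea can be sketched on synthetic series (this is a schematic of the general statistic, not the authors' exact implementation): a linear trend is removed from each series, one covariate is regressed out of the response only, and the residual is correlated with the forcing. All series and coefficients below are made up.

```python
import numpy as np

def detrend(x):
    """Remove a linear trend by least squares."""
    t = np.arange(len(x))
    a, b = np.polyfit(t, x, 1)
    return x - (a * t + b)

def semipartial_corr(x, y, z):
    """Correlate x with the part of y not explained by z
    (z is regressed out of y only, not of x)."""
    z = (z - z.mean()) / z.std()
    beta = np.dot(y - y.mean(), z) / np.dot(z, z)
    y_resid = y - y.mean() - beta * z
    return np.corrcoef(x, y_resid)[0, 1]

rng = np.random.default_rng(2)
n = 400
solar = rng.standard_normal(n)     # stand-in "solar forcing" series
volc = rng.standard_normal(n)      # stand-in "volcanic forcing" series
# Toy "AMO" driven by both forcings, plus noise and a slow trend.
amo = 0.6 * solar + 0.4 * volc + 0.5 * rng.standard_normal(n) + 0.01 * np.arange(n)

# Intrinsic solar-AMO link after removing trends and the volcanic influence on AMO.
r_sp = semipartial_corr(detrend(solar), detrend(amo), detrend(volc))
```

By construction the toy AMO keeps a positive link to the solar series once the volcanic contribution is partialled out, mirroring the kind of "intrinsic correlation" the analysis isolates.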
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product-moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
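The normal-data comparison can be reproduced in miniature with a Monte Carlo sketch (the sample size, correlation strength, and replication count below are arbitrary choices, not the paper's design): for bivariate normal data with a strong correlation, r_s comes out more variable than r_p.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho, n, reps = 0.8, 50, 2000          # toy settings: strong correlation, N = 50
cov = [[1.0, rho], [rho, 1.0]]

rp, rs = [], []
for _ in range(reps):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    rp.append(stats.pearsonr(x, y)[0])     # Pearson r_p
    rs.append(stats.spearmanr(x, y)[0])    # Spearman r_s

sd_rp, sd_rs = np.std(rp), np.std(rs)      # sampling variability of each estimator
```

Under these normal-data conditions the simulated standard deviation of r_s exceeds that of r_p, in line with the abstract's first finding; swapping the normal generator for a heavy-tailed one would be expected to reverse the ordering.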
GRASS GIS: The first Open Source Temporal GIS
NASA Astrophysics Data System (ADS)
Gebbert, Sören; Leppelt, Thomas
2015-04-01
GRASS GIS is a full-featured, general purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support [1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities [2]. New spatio-temporal data types were introduced in GRASS GIS version 7 to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets simplify the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameters in temporal modules. The handling of space time datasets is therefore identical to the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as on their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, which has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set.
The functionality ranges from space time dataset and time stamped map layer management, through temporal aggregation, temporal accumulation, spatio-temporal statistics, spatio-temporal sampling, temporal algebra, temporal topology analysis, time series animation and temporal topology visualization, to time series import and export capabilities with support for the NetCDF and VTK data formats. We will present several temporal modules that support parallel processing of raster and 3D raster time series. [1] GRASS GIS Open Source Approaches in Spatial Data Handling In Open Source Approaches in Spatial Data Handling, Vol. 2 (2008), pp. 171-199, doi:10.1007/978-3-540-74831-19 by M. Neteler, D. Beaudette, P. Cavallini, L. Lami, J. Cepicky edited by G. Brent Hall, Michael G. Leahy [2] Gebbert, S., Pebesma, E., 2014. A temporal GIS for field based environmental modeling. Environ. Model. Softw. 53, 1-12. [3] Zambelli, P., Gebbert, S., Ciolli, M., 2013. Pygrass: An Object Oriented Python Application Programming Interface (API) for Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). ISPRS Intl Journal of Geo-Information 2, 201-219. [4] Löwe, P., Klump, J., Thaler, J. (2012): The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster, (Geophysical Research Abstracts Vol. 14, EGU2012-4491, 2012), General Assembly European Geosciences Union (Vienna, Austria 2012). [5] Akhter, S., Aida, K., Chemin, Y., 2010. "GRASS GIS on High Performance Computing with MPI, OpenMP and Ninf-G Programming Framework". ISPRS Conference, Kyoto, 9-12 August 2010
CT-based manual segmentation and evaluation of paranasal sinuses.
Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G
2009-04-01
Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed to define the robot's workspace. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices, with 24 landmarks being set. Three different segmentation colors represent diverse risk areas. Extension and volumetric measurements were performed. A three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h per CT dataset. The mean volumes were: right maxillary sinus 17.4 cm(3), left side 17.9 cm(3); right frontal sinus 4.2 cm(3), left side 4.0 cm(3), total frontal sinuses 7.9 cm(3); sphenoid sinus right side 5.3 cm(3), left side 5.5 cm(3), total sphenoid sinus volume 11.2 cm(3). Our manually segmented 3D models present the patient's individual anatomy, with a special focus on endangered structures marked by the diversely colored risk areas. For safe robot assistance, the high-accuracy models represent an average of the population with regard to anatomical variation, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods described so far provides risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.
NASA Astrophysics Data System (ADS)
Miao, W.; Li, G.; Niu, F.
2016-12-01
Knowledge of the 3D sediment structure beneath the Gulf of Mexico passive margin is not only important for exploring the oil and gas resources of the area, but also essential for deciphering the deep crust and mantle structure beneath the margin with teleseismic data. In this study, we conduct a joint inversion of Rayleigh wave ellipticity and phase velocity at 6-40 s to construct a 3-D S wave velocity model in a rectangular area spanning 100°-87° west and 28°-37° north. We use ambient noise data from a total of 215 stations of the Transportable Array deployed under the Earthscope project. Rayleigh wave ellipticity, or the Rayleigh wave Z/H (vertical-to-horizontal) amplitude ratio, is mostly sensitive to shallow sediment structure, while the dispersion data are expected to have reasonably good resolution down to uppermost mantle depths. The Z/H ratios measured at stations inside the Gulf Coastal Plain are distinctly lower than those measured at the inland stations. We also measured phase velocity dispersion from the same ambient noise dataset. Our preliminary 3-D model features strong low-velocity anomalies at shallow depth, which are spatially well correlated with the Gulf Coast, East Texas, and Lower Mississippi basins. We will discuss other features of the 3-D model once it is finalized.
Two-dimensional turbulence cross-correlation functions in the edge of NSTX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zweben, S. J.; Stotler, D. P.; Scotti, F.
The 2D radial vs. poloidal cross-correlation functions of edge plasma turbulence were measured near the outer midplane using a gas puff imaging (GPI) diagnostic on NSTX. These correlation functions were evaluated at radii r = 0 cm, ±3 cm, and ±6 cm from the separatrix and poloidal locations p = 0 cm and ±7.5 cm from the GPI poloidal center line for 20 different shots. The ellipticity ε and tilt angle φ of the positive cross-correlation regions and the minimum negative cross-correlation “cmin” and total negative over positive values “neg/pos” were evaluated for each of these cases. The average results over this dataset were ε = 2.2 ± 0.9, φ = 87° ± 34° (i.e., poloidally oriented), cmin = -0.30 ± 0.15, and neg/pos = 0.25 ± 0.24. Thus, there was a significant variation in these correlation results within this database, with dependences on the location within the image, the magnetic geometry, and the plasma parameters. In conclusion, possible causes for this variation are discussed, including the misalignment of the GPI view with the local B field line, the magnetic shear of field lines at the edge, the poloidal flow shear of the turbulence, blob-hole correlations, and the neutral density 'shadowing' effect in GPI.
Two-dimensional turbulence cross-correlation functions in the edge of NSTX
Zweben, S. J.; Stotler, D. P.; Scotti, F.; ...
2017-09-26
The 2D radial vs. poloidal cross-correlation functions of edge plasma turbulence were measured near the outer midplane using a gas puff imaging (GPI) diagnostic on NSTX. These correlation functions were evaluated at radii r = 0 cm, ±3 cm, and ±6 cm from the separatrix and poloidal locations p = 0 cm and ±7.5 cm from the GPI poloidal center line for 20 different shots. The ellipticity ε and tilt angle φ of the positive cross-correlation regions and the minimum negative cross-correlation “cmin” and total negative over positive values “neg/pos” were evaluated for each of these cases. The average results over this dataset were ε = 2.2 ± 0.9, φ = 87° ± 34° (i.e., poloidally oriented), cmin = -0.30 ± 0.15, and neg/pos = 0.25 ± 0.24. Thus, there was a significant variation in these correlation results within this database, with dependences on the location within the image, the magnetic geometry, and the plasma parameters. In conclusion, possible causes for this variation are discussed, including the misalignment of the GPI view with the local B field line, the magnetic shear of field lines at the edge, the poloidal flow shear of the turbulence, blob-hole correlations, and the neutral density 'shadowing' effect in GPI.
Data-adaptive harmonic analysis and prediction of sea level change in North Atlantic region
NASA Astrophysics Data System (ADS)
Kondrashov, D. A.; Chekroun, M.
2017-12-01
This study aims to characterize North Atlantic sea level variability across temporal and spatial scales. We apply the recently developed data-adaptive Harmonic Decomposition (DAH) and Multilayer Stuart-Landau Model (MSLM) stochastic modeling techniques [Chekroun and Kondrashov, 2017] to the monthly 1993-2017 dataset of combined TOPEX/Poseidon, Jason-1 and Jason-2/OSTM altimetry fields over the North Atlantic region. The key numerical feature of the DAH is the eigendecomposition of a matrix constructed from time-lagged spatial cross-correlations. In particular, the eigenmodes form an orthogonal set of oscillating data-adaptive harmonic modes (DAHMs) that come in pairs and in exact phase quadrature for a given temporal frequency. Furthermore, the pairs of data-adaptive harmonic coefficients (DAHCs), obtained by projecting the dataset onto the associated DAHMs, can be very efficiently modeled by a universal parametric family of simple nonlinear stochastic models: coupled Stuart-Landau oscillators stacked per frequency and synchronized across different frequencies by the stochastic forcing. Despite the short altimetry record, the developed DAH-MSLM model provides skillful prediction of key dynamical and statistical features of sea level variability. References M. D. Chekroun and D. Kondrashov, Data-adaptive harmonic spectra and multilayer Stuart-Landau models. HAL preprint, 2017, https://hal.archives-ouvertes.fr/hal-01537797
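The core numerical ingredient, an eigendecomposition of a matrix built from time-lagged spatial cross-correlations, can be sketched on a toy field. This is a schematic reduction, not the exact DAH construction of Chekroun and Kondrashov; the field, grid size, and lag count are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n_t, n_x, max_lag = 500, 6, 20

# Toy "altimetry" field: one oscillatory mode over n_x grid points plus noise.
t = np.arange(n_t)
spatial = rng.standard_normal(n_x)
field = np.outer(np.sin(2 * np.pi * t / 25), spatial) \
    + 0.3 * rng.standard_normal((n_t, n_x))
field -= field.mean(axis=0)

# Stack time-lagged spatial cross-correlation blocks (schematic DAH-like matrix).
blocks = []
for lag in range(max_lag):
    a, b = field[: n_t - lag], field[lag:]
    blocks.append(a.T @ b / (n_t - lag))
C = np.concatenate(blocks, axis=0)      # (max_lag * n_x, n_x)

# Eigendecomposition of the symmetrized matrix gives orthogonal spatial modes;
# projecting the data onto a mode yields its harmonic coefficient series.
M = C.T @ C                             # symmetric (n_x, n_x)
eigval, eigvec = np.linalg.eigh(M)      # ascending eigenvalues
coeffs = field @ eigvec[:, -1]          # leading-mode coefficient time series
```

Because the toy field is dominated by a single oscillatory mode, the leading eigenvalue stands well clear of the rest, and the projected coefficient series carries the 25-step oscillation that a Stuart-Landau oscillator would then model.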
Slynko, Inna; Da Silva, Franck; Bret, Guillaume; Rognan, Didier
2016-09-01
High affinity ligands for a given target tend to share key molecular interactions with important anchoring amino acids and therefore often present quite conserved interaction patterns. This simple concept was formalized in a topological knowledge-based scoring function (GRIM) that selects the most appropriate docking poses from previously X-rayed interaction patterns. GRIM first converts protein-ligand atomic coordinates (docking poses) into a simple 3D graph describing the corresponding interaction pattern. In a second step, the proposed graphs are compared to those found in template structures from the Protein Data Bank. Last, all docking poses are rescored according to an empirical score (GRIMscore) accounting for the overlap of maximum common subgraphs. Taking advantage of the public D3R Grand Challenge 2015, GRIM was used to rescore docking poses for 36 ligands (6 HSP90α inhibitors, 30 MAP4K4 inhibitors) prior to the release of the corresponding protein-ligand X-ray structures. When applied to the HSP90α dataset, for which many protein-ligand X-ray structures are already available, GRIM provided very high quality solutions (mean rmsd = 1.06 Å, n = 6) as top-ranked poses, and significantly outperformed a state-of-the-art scoring function. In the case of the MAP4K4 inhibitors, for which preexisting 3D knowledge is scarce and chemical diversity is much larger, the accuracy of GRIM poses decays (mean rmsd = 3.18 Å, n = 30), although GRIM still outperforms an energy-based scoring function. GRIM rescoring appears to be quite robust in comparison with the other approaches competing in the same challenge (42 submissions for the HSP90 dataset, 27 for the MAP4K4 dataset), as it ranked 3rd and 2nd, respectively, for the two investigated datasets. The rescoring method is quite simple to implement, independent of the docking engine, and applicable to any target for which at least one holo X-ray structure is available.
NASA Astrophysics Data System (ADS)
Fang, Li; Xu, Yusheng; Yao, Wei; Stilla, Uwe
2016-11-01
For monitoring glacier surface motion in polar and alpine areas, radar remote sensing is becoming a popular technology owing to its specific advantages of being independent of weather conditions and sunlight. In this paper we propose a method for glacier surface motion monitoring using phase correlation (PC) based on point-like features (PLFs). We carry out experiments using repeat-pass TerraSAR X-band (TSX) and Sentinel-1 C-band (S1C) intensity images of the Taku glacier in the Juneau icefield, located in southeast Alaska. The intensity imagery is first filtered by an improved adaptive refined Lee filter, while the effect of topographic relief is removed via the SRTM-X DEM. Then, a robust phase correlation algorithm based on singular value decomposition (SVD) and an improved random sample consensus (RANSAC) algorithm is applied to sequential PLF pairs generated by correlation with a 2D sinc function template. The approach is validated using both simulated SAR data and real SAR data from the two satellites. The results obtained from these three test datasets confirm the superiority of the proposed approach over standard correlation-like methods. By the use of the proposed adaptive refined Lee filter, we achieve a good balance between the suppression of noise and the preservation of local image textures. The presented phase correlation algorithm shows an accuracy of better than 0.25 pixels in matching tests using simulated SAR intensity images with strong noise. Quantitative 3D motions and velocities of the investigated Taku glacier during a repeat-pass period are obtained, allowing a comprehensive and reliable analysis of large-scale glacier surface dynamics.
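The basic phase correlation principle behind such offset tracking can be sketched in a few lines (this is the textbook FFT formulation, not the authors' SVD/RANSAC variant): the normalized cross-power spectrum of two image patches has a sharp inverse-FFT peak at their relative displacement. The patch and shift values below are simulated.

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Estimate the integer (dy, dx) shift taking img1 to img2 from the
    phase of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep only the phase information
    corr = np.fft.ifft2(cross).real       # sharp peak at the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img1.shape
    if dy > h // 2:                       # map wrap-around to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(5)
scene = rng.random((64, 64))                         # stand-in intensity patch
moved = np.roll(scene, shift=(3, -5), axis=(0, 1))   # simulated displacement
shift = phase_correlation_shift(scene, moved)
```

Subpixel accuracy of the kind quoted in the abstract (better than 0.25 pixels) is typically obtained by fitting or upsampling around this integer peak rather than taking the argmax directly.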
Improving stability of prediction models based on correlated omics data by using network approaches.
Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar
2018-01-01
Building prediction models from complex omics datasets such as transcriptomics, proteomics and metabolomics data remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell line pharmacogenomics dataset.
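The three-step pipeline can be sketched end to end on synthetic data. This is a simplified stand-in for the authors' methods: the "network" is a plain correlation-distance matrix, modules come from hierarchical clustering, and each module is summarized by its mean before fitting a penalized (ridge) model; the data dimensions and noise levels are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(6)
n, p = 120, 30
# Toy "omics" matrix with two highly correlated blocks of features.
base = rng.standard_normal((n, 2))
X = np.repeat(base, p // 2, axis=1) + 0.5 * rng.standard_normal((n, p))
y = base[:, 0] + 0.1 * rng.standard_normal(n)   # outcome driven by block 1

# Steps 1-2: correlation "network" -> hierarchical clustering into modules.
dist = 1.0 - np.abs(np.corrcoef(X.T))           # (p, p) distance matrix
condensed = dist[np.triu_indices(p, k=1)]       # condensed form for linkage
Z = linkage(condensed, method="average")
modules = fcluster(Z, t=2, criterion="maxclust")

# Step 3: summarize each module (here, by its mean) and fit a penalized model.
M = np.column_stack([X[:, modules == m].mean(axis=1)
                     for m in np.unique(modules)])
model = RidgeCV().fit(M, y)
r2 = model.score(M, y)
```

Collapsing correlated features into module summaries is one way to stabilize the selection step, since the penalized model no longer has to arbitrate between near-duplicate predictors.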
Predictors of mitral annulus enlargement? A real-time three-dimensional transesophageal study.
Boilève, V; Dreyfus, J; Attias, D; Scheuble, A; Codogno, I; Brochet, E; Vahanian, A; Messika-Zeitoun, D
2018-06-05
Mitral annulus (MA) enlargement can be observed in various cardiac conditions, but the respective influence of left atrial (LA) and left ventricular (LV) size remains unclear. In 120 patients who underwent clinically indicated 3D transesophageal echocardiography (30 with atrial fibrillation (AF), 30 with secondary mitral regurgitation (SMR), 30 with primary myxomatous mitral regurgitation (PMR) and 30 with mitral stenosis (MS)), we evaluated the association between MA area (MA-area) and LA volume (LAvol), measured using the biplane area-length method, and end-diastolic (LVEDV) and end-systolic (LVESV) volumes, measured using the biplane Simpson method. MA-area was measured on the 3D datasets using QLab10. MA-area was correlated with LVEDV (r = 0.42, p < 0.0001) and LVESV (r = 0.29, p = 0.001), but more markedly with LAvol (r = 0.62, p < 0.0001). The correlation between MA-area and LAvol held in all subsets, whereas MA-area was not correlated with LVEDV and LVESV in patients with SMR and with PMR (all p > 0.10). In multivariate analysis, the main predictors of MA-area were LAvol (p < 0.0001) and myxomatous etiology of MR (p = 0.0003), followed by LVEDV (p = 0.006) and LVESV (p = 0.02). In a population of patients with a wide range of LA/LV sizes related to various conditions, LA volume and myxomatous MR etiology appeared as the main predictors of MA size, whereas LV size had a more modest influence. Copyright © 2017 Elsevier B.V. All rights reserved.
Absolute Geostrophic Velocity Inverted from World Ocean Atlas 2013 (WOAV13) with the P-Vector Method
2015-11-01
The WOAV13 dataset comprises 3D global gridded climatological fields of absolute geostrophic velocity inverted...from World Ocean Atlas-2013 (WOA13) temperature and salinity fields using the P-vector method. It provides a climatological velocity field that is... climatology Dataset Identifier: gov.noaa.nodc:0121576 Creator: NOAP Lab, Department of Oceanography, Naval Postgraduate School, Monterey, CA Title
NASA Astrophysics Data System (ADS)
Hüsami Afşar, M.; Bulut, B.; Yilmaz, M. T.
2017-12-01
Soil moisture is one of the fundamental parameters of the environment, playing a major role in the carbon, energy, and water cycles. The spatial distribution and temporal change of soil moisture is an important component of climatic, ecological and natural hazard studies at global, regional and local scales. Therefore, retrieval of soil moisture datasets is of great importance in these studies. Given that soil moisture can be retrieved through different platforms (i.e., in-situ measurements, numerical modeling, and remote sensing) for the same location and time period, it is often desirable to evaluate these different datasets to identify the most accurate estimates for different purposes. During the last decades, efforts have been made to evaluate different soil moisture products based on various statistical analyses of the soil moisture time series (i.e., comparison of correlation, bias, and error standard deviation). On the other hand, there is still a need to compare soil moisture products in a drought analysis context. In this study, LPRM and NOAH Land Surface Model soil moisture datasets are investigated in a drought analysis context using station-based watershed-average datasets obtained over four USDA ARS watersheds as ground truth. Here, the drought analysis is performed using standardized soil moisture datasets (i.e., zero mean and unit standard deviation), with droughts defined as consecutive negative anomalies below -1 lasting longer than 3 months. Accordingly, the drought characteristics (duration and severity) and the false alarm and hit/miss ratios of the LPRM and NOAH datasets are validated using the station-based datasets as ground truth. Results showed that although the NOAH soil moisture products have better correlations, LPRM-based soil moisture retrievals show better consistency in the drought analysis context. This project is supported by TUBITAK Project number 114Y676.
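The drought definition used here (standardized anomalies below -1 for longer than 3 months) is simple to operationalize. The sketch below is one possible implementation on a made-up monthly series, with an artificial six-month dry spell implanted for illustration.

```python
import numpy as np

def drought_events(sm, z_thresh=-1.0, min_months=3):
    """List (start, duration, severity) for runs where the standardized
    anomaly stays below z_thresh for longer than min_months."""
    z = (sm - sm.mean()) / sm.std()
    events, start = [], None
    for i, below in enumerate(np.append(z < z_thresh, False)):  # sentinel flush
        if below and start is None:
            start = i
        elif not below and start is not None:
            if i - start > min_months:
                events.append((start, i - start, float(z[start:i].sum())))
            start = None
    return events

rng = np.random.default_rng(7)
sm = rng.normal(0.25, 0.02, 120)    # ten years of monthly soil moisture (toy units)
sm[40:46] -= 0.15                   # implant a six-month dry spell
events = drought_events(sm)
```

The severity here is the cumulative standardized anomaly over the event, so drier and longer spells both push it further negative; hit/miss and false-alarm statistics between two products follow by comparing their event lists.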
Bansal, Ravi; Peterson, Bradley S
2018-06-01
Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control of false positive findings across these multiple hypothesis tests. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis.
Nonparametric methods, in contrast, estimated distributions from those large clusters and therefore, by construction, rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true positive findings without further analyses, including assessing whether the fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods in detecting true findings compared with parametric methods, which would have detected most true findings that are essential for making valid biological inferences in MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a Cluster Defining Threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true positive findings, we conclude that the modest reduction in false positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.
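The phenomenon of occasional very large supra-threshold clusters in smooth random fields can be sketched by simulation. This is a simplified illustration, not the paper's analysis: white noise is Gaussian-filtered to approximate a smooth 2D GRF, thresholded at a cluster-defining threshold, and the largest cluster per realization is recorded; the field size, smoothness, and replication count are arbitrary.

```python
import numpy as np
from scipy import ndimage

def smooth_grf(shape, sigma, rng):
    """Approximate a smooth Gaussian random field by Gaussian-filtering
    white noise, then standardizing to zero mean and unit variance."""
    f = ndimage.gaussian_filter(rng.standard_normal(shape), sigma)
    return (f - f.mean()) / f.std()

rng = np.random.default_rng(8)
cdt = 2.5                            # cluster-defining threshold (z units)
max_sizes = []
for _ in range(200):
    z = smooth_grf((128, 128), 3.0, rng)
    labels, n = ndimage.label(z > cdt)       # connected supra-threshold clusters
    sizes = np.bincount(labels.ravel())[1:]  # voxel count per cluster
    max_sizes.append(sizes.max() if sizes.size else 0)

# The null distribution of the largest cluster is heavy-tailed: the most
# extreme realization dwarfs the typical one.
ratio = max(max_sizes) / np.median(max_sizes)
```

The heavy right tail of `max_sizes` is the simulation-level counterpart of the rare, very large clusters that inflate empirical FWERs when each one is declared significant.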
Parallel Rendering of Large Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Garbutt, Alexander E.
2005-01-01
Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. And last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.
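Whether done in hardware via 3D textures or on the CPU, volume rendering ultimately evaluates the compositing integral along each viewing ray. Below is a minimal front-to-back compositing sketch for a single ray; the linear transfer function is an illustrative assumption, not ParVox's.

```python
import numpy as np

def composite_ray(densities, transfer):
    """Front-to-back alpha compositing of one ray through a volume.

    densities: 1D array of scalar samples along the ray.
    transfer:  maps a scalar sample to (color, opacity).
    """
    color_acc, alpha_acc = 0.0, 0.0
    for d in densities:
        c, a = transfer(d)
        color_acc += (1.0 - alpha_acc) * a * c
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc >= 0.99:  # early ray termination
            break
    return color_acc

# illustrative linear transfer function: brighter and more opaque with density
tf = lambda d: (d, min(1.0, 0.1 * d))
ray = np.linspace(0.0, 1.0, 32)
result = composite_ray(ray, tf)
print(result)
```

Front-to-back order permits early ray termination once accumulated opacity saturates, which is one reason it is favoured for interactive rates.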
A New Femtosecond Laser-Based Three-Dimensional Tomography Technique
NASA Astrophysics Data System (ADS)
Echlin, McLean P.
2011-12-01
Tomographic imaging has dramatically changed science, most notably in the fields of medicine and biology, by producing 3D views of structures which are too complex to understand in any other way. Current tomographic techniques require extensive time both for post-processing and data collection. Femtosecond laser-based tomographic techniques have been developed in both standard atmosphere (the femtosecond laser-based serial sectioning technique, FSLSS) and in vacuum (the Tri-Beam system) for the fast collection (10^5 μm³/s) of mm³-sized 3D datasets. Both techniques use femtosecond laser pulses to selectively remove layer-by-layer areas of material with low collateral damage and a negligible heat-affected zone. To the author's knowledge, femtosecond lasers had not previously been used for serial sectioning, and these techniques were developed entirely by the author and his collaborators at the University of Michigan and the University of California Santa Barbara. The FSLSS was applied to measure the 3D distribution of TiN particles in a 4330 steel. Single-pulse ablation morphologies and rates were measured and collected from the literature. Simultaneous two-phase ablation of TiN and the steel matrix was shown to occur at fluences of 0.9-2 J/cm². Laser scanning protocols were developed that minimized surface roughness to 0.1-0.4 μm for laser-based sectioning. The FSLSS technique was used to section and 3D-reconstruct titanium nitride (TiN) containing 4330 steel. Statistical analysis of 3D TiN particle sizes, distribution parameters, and particle density was performed. A methodology was developed to use the 3D datasets to produce statistical volume elements (SVEs) for toughness modeling. Six FSLSS TiN datasets were sub-sampled into 48 SVEs for statistical analysis and toughness modeling using the Rice-Tracey and Garrison-Moody models. A two-parameter Weibull analysis was performed, and the variability in the toughness data agreed well with the bulk toughness measurements of Ruggieri et al.
The Tri-Beam system combines the benefits of laser based material removal (speed, low-damage, automated) with detectors that collect chemical, structural, and topological information. Multi-modal sectioning information was collected after many laser scanning passes demonstrating the capability of the Tri-Beam system.
2002-05-01
[Truncated table fragment: Wave 2 and Wave 3 foreign/domestic DoD and Coast Guard Remail mailing counts by date; the surrounding record is missing from this extract.]
An Approach to Develop 3d Geo-Dbms Topological Operators by Re-Using Existing 2d Operators
NASA Astrophysics Data System (ADS)
Xu, D.; Zlatanova, S.
2013-09-01
Database systems are continuously extending their capabilities to store, process and analyse 3D data. Topological relationships, which describe the interaction of objects in space, are one of the important spatial issues. However, spatial operators for 3D objects are still insufficient. In this paper we present the development of a new 3D topological function to distinguish intersections of 3D planar polygons. The development uses existing 2D functions in the DBMS and two geometric transformations (rotation and projection). This function is tested on a real dataset to detect overlapping 3D city objects. The paper presents the algorithms and analyses the challenges. Suggestions for improving the current algorithm, as well as possible extensions to handle more 3D topological cases, are discussed at the end.
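The rotate-then-project idea can be sketched outside the DBMS: rotate a planar 3D polygon so its plane becomes horizontal, then drop the z-coordinate, after which any existing 2D topological operator applies. `project_to_plane_2d` is a hypothetical helper, not the authors' database function.

```python
import numpy as np

def project_to_plane_2d(points):
    """Rotate a planar 3D polygon so its plane is horizontal, then drop z.

    Returns 2D coordinates ready for existing 2D topological operators.
    """
    pts = np.asarray(points, dtype=float)
    # plane normal from two edge vectors
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    n /= np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    c = np.dot(n, z)
    if np.linalg.norm(v) < 1e-12:          # already horizontal (n ~ +/-z)
        R = np.eye(3)
    else:
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])  # Rodrigues' rotation formula
        R = np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))
    return (pts @ R.T)[:, :2]

square = [(0, 0, 0), (1, 0, 1), (1, 1, 1), (0, 1, 0)]  # tilted planar quad
p2d = project_to_plane_2d(square)
print(p2d)
```

Because the rotation is rigid and all rotated points share one z-value, in-plane distances and intersections are preserved in the 2D projection.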
Data assimilation and model evaluation experiment datasets
NASA Technical Reports Server (NTRS)
Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.
1994-01-01
The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort went into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, and the structure of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of the DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, an analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested these data was incorporated into their refinement. Suggested uses of the DAMEE data include (1) ocean modeling and data assimilation studies, (2) diagnostic and theoretical studies, and (3) comparisons with locally detailed observations.
Maertz, Josef; Kolb, Jan Philip; Klein, Thomas; Mohler, Kathrin J; Eibl, Matthias; Wieser, Wolfgang; Huber, Robert; Priglinger, Siegfried; Wolf, Armin
2018-02-01
To demonstrate papillary imaging of eyes with optic disc pits (ODP) or optic disc pit associated maculopathy (ODP-M) with ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s. To generate 3D-renderings of the papillary area with 3D volume-reconstructions of the ODP and highly resolved en face images from a single densely-sampled megahertz-OCT (MHz-OCT) dataset for investigation of ODP-characteristics. A 1.68 MHz-prototype SS-MHz-OCT system at 1050 nm based on a Fourier-domain mode-locked laser was employed to acquire high-definition, 3D datasets with a dense sampling of 1600 × 1600 A-scans over a 45° field of view. Six eyes with ODPs and two further eyes with glaucomatous alteration or without ocular pathology are presented. 3D-rendering of the deep papillary structures, virtual 3D-reconstructions of the ODPs and depth-resolved isotropic en face images were generated using semiautomatic segmentation. 3D-rendering and en face imaging of the optic disc, ODPs and ODP associated pathologies showed a broad spectrum of ODP characteristics. Between individuals the shape of the ODP and the appending pathologies varied considerably. MHz-OCT en face imaging generates distinct top-view images of ODPs and ODP-M. MHz-OCT generates high resolution images of retinal pathologies associated with ODP-M and allows visualizing ODPs with depths of up to 2.7 mm. Different patterns of ODPs can be visualized in patients for the first time using 3D-reconstructions and co-registered high-definition en face images extracted from a single densely sampled 1050 nm megahertz-OCT (MHz-OCT) dataset. As the immediate vicinity of the SAS and the site of intrapapillary proliferation is located at the bottom of the ODP, it is crucial to image the complete structure and the whole depth of ODPs.
Especially in very deep pits, where non-swept-source OCT fails to reach the bottom, conventional swept-source devices and the MHz-OCT alike are feasible and beneficial methods for examining deep details of optic disc pathologies, while the MHz-OCT has the advantage of a substantially faster imaging process.
3D reconstruction from multi-view VHR-satellite images in MicMac
NASA Astrophysics Data System (ADS)
Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur
2018-05-01
This work addresses the generation of high quality digital surface models by fusing multiple depth maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in the form of ground control points are required for its operation. Prior to the fusion procedures, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution)-satellite image datasets (Pléiades, WorldView-3), revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.
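The core fusion idea of combining several registered depth maps per pixel while rejecting outliers can be sketched very simply. This toy version uses a per-pixel median gate and quality-weighted averaging; it deliberately omits the multi-directional dynamic programming regularization of the actual MicMac algorithm, and the threshold `max_dev` is an illustrative assumption.

```python
import numpy as np

def fuse_depth_maps(depth_maps, quality=None, max_dev=2.0):
    """Per-pixel robust fusion of co-registered depth maps.

    Samples further than `max_dev` from the per-pixel median are treated
    as outliers; the rest are averaged, optionally quality-weighted.
    NaN marks non-correlated (e.g. occluded) pixels.
    """
    stack = np.stack(depth_maps)                      # (n_maps, H, W)
    w = np.ones_like(stack) if quality is None else np.stack(quality)
    med = np.nanmedian(stack, axis=0)
    inlier = np.abs(stack - med) <= max_dev           # NaNs compare False
    w = np.where(inlier, w, 0.0)
    s = np.where(inlier, stack, 0.0)
    wsum = w.sum(axis=0)
    fused = np.where(wsum > 0,
                     (w * s).sum(axis=0) / np.maximum(wsum, 1e-12),
                     np.nan)
    return fused

# three maps agreeing except one gross outlier
maps = [np.full((2, 2), 10.0), np.full((2, 2), 10.5), np.full((2, 2), 30.0)]
fused = fuse_depth_maps(maps)
print(fused)  # 10.25 everywhere: the 30.0 outlier map is rejected
```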
NASA Astrophysics Data System (ADS)
Shibahara, A.; Tsukamoto, H.; Kazahaya, K.; Morikawa, N.; Takahashi, M.; Takahashi, H.; Yasuhara, M.; Ohwada, M.; Oyama, Y.; Inamura, A.; Handa, H.; Nakama, J.
2008-12-01
Kobe city is located on the northern side of the Osaka sedimentary basin, Japan, which contains 1,000-2,000 m thick Quaternary sediments. After the Hanshin-Awaji Earthquake (January 17, 1995), a number of geological and geophysical surveys were conducted in this region. A high-temperature anomaly of groundwater accompanied by high Cl concentration was then detected along fault systems in this area. In addition, dissolved He in groundwater showed a nearly upper-mantle-like 3He/4He ratio, although there were no Quaternary volcanic activities in this region. Some recent studies have assumed that these groundwater profiles are related to geological structure, because some faults and joints can function as pathways for groundwater flow, and mantle-derived water can upwell through the fault system to the ground surface. To verify these hypotheses, we established a 3D geological and hydrological model around the Osaka sedimentary basin. Our primary goal is to analyze the spatial relationship between geological structure and groundwater profiles. In the study region, a number of geological and hydrological datasets, such as boring log data, seismic profiling data, and groundwater chemical profiles, have been reported. We converted these datasets to meshed data on the GIS and plotted them in three-dimensional space to visualize their spatial distribution. Furthermore, we projected the seismic profiling data into three-dimensional space and calculated the distance between faults and sampling points using Visual Basic for Applications (VBA) scripts. All 3D models are converted into VRML format and can be used as a versatile dataset on a personal computer. This research project has been conducted under a research contract with the Japan Nuclear Energy Safety Organization (JNES).
Joint Blind Source Separation by Multi-set Canonical Correlation Analysis
Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D
2009-01-01
In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319
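M-CCA extracts groups of sources by maximizing correlation across datasets; its two-dataset special case is classical CCA. The sketch below computes canonical correlations via the QR-based formulation (singular values of Qx^T Qy); the synthetic shared-source example is an illustrative assumption, not the paper's fMRI data.

```python
import numpy as np

def cca_corrs(X, Y):
    """Canonical correlations between two datasets (rows = samples).

    Two-dataset special case of the M-CCA scheme: linear transforms of X
    and Y are implicitly chosen so the extracted sources are maximally
    correlated; the singular values of Qx^T Qy are those correlations.
    """
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    corrs = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(corrs, 0.0, 1.0)

# two 2-channel mixtures sharing one latent source
rng = np.random.default_rng(0)
s = rng.standard_normal(500)
X = np.column_stack([s + 0.1 * rng.standard_normal(500),
                     rng.standard_normal(500)])
Y = np.column_stack([rng.standard_normal(500),
                     0.5 * s + 0.1 * rng.standard_normal(500)])
corrs = cca_corrs(X, Y)
print(corrs)  # first canonical correlation close to 1, second near 0
```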
NASA Astrophysics Data System (ADS)
Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart
2015-02-01
This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
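The accuracy figures above are percent volumetric overlaps between auto-segmented and ground-truth volumes. A minimal sketch, assuming "volumetric overlap" denotes the Dice coefficient expressed as a percentage (the paper does not spell out the exact formula in this abstract):

```python
import numpy as np

def volumetric_overlap(auto_mask, gt_mask):
    """Percent volumetric overlap (Dice x 100) between an auto-segmented
    volume and a ground-truth estimate, both boolean 3D voxel masks."""
    a = np.asarray(auto_mask, bool)
    g = np.asarray(gt_mask, bool)
    inter = np.logical_and(a, g).sum()
    return 200.0 * inter / (a.sum() + g.sum())

gt = np.zeros((20, 20, 20), bool)
gt[5:15, 5:15, 5:15] = True          # 1000-voxel ground-truth volume
auto = np.zeros_like(gt)
auto[6:15, 5:15, 5:15] = True        # 900-voxel auto-segmentation, fully inside
print(volumetric_overlap(auto, gt))  # 2*900/(1000+900)*100 ~ 94.7
```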
ERIC Educational Resources Information Center
Sander, Ian M.; McGoldrick, Matthew T.; Helms, My N.; Betts, Aislinn; van Avermaete, Anthony; Owers, Elizabeth; Doney, Evan; Liepert, Taimi; Niebur, Glen; Liepert, Douglas; Leevy, W. Matthew
2017-01-01
Advances in three-dimensional (3D) printing allow for digital files to be turned into a "printed" physical product. For example, complex anatomical models derived from clinical or pre-clinical X-ray computed tomography (CT) data of patients or research specimens can be constructed using various printable materials. Although 3D printing…
Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT
NASA Technical Reports Server (NTRS)
Maxwell, Thomas
2012-01-01
Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UV-CDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.
McCarron, David A; Kazaks, Alexandra G; Geerling, Joel C; Stern, Judith S; Graudal, Niels A
2013-10-01
The recommendation to restrict dietary sodium for management of hypertensive cardiovascular disease assumes that sodium intake exceeds physiologic need, that it can be significantly reduced, and that the reduction can be maintained over time. In contrast, neuroscientists have identified neural circuits in vertebrate animals that regulate sodium appetite within a narrow physiologic range. This study further validates our previous report that, consistent with the neuroscience, sodium intake tracks within a narrow range that is consistent over time and across cultures. Peer-reviewed publications reporting 24-hour urinary sodium excretion (UNaV) in a defined population that were not included in our 2009 publication were identified from the medical literature. These datasets were combined with those in our previous report of worldwide dietary sodium consumption. The new data included 129 surveys, representing 50,060 participants. The mean value and range of 24-hour UNaV in each of these datasets were within 1 SD of our previous estimate. The combined mean and normal range of sodium intake of the 129 datasets were nearly identical to those we previously reported (mean = 158.3±22.5 vs. 162.4±22.4 mmol/d). Merging the previous and new datasets (n = 190) yielded sodium consumption of 159.4±22.3 mmol/d (range = 114-210 mmol/d; 2,622-4,830 mg/d). Human sodium intake, as defined by 24-hour UNaV, is characterized by a narrow range that is remarkably reproducible over at least 5 decades and across 45 countries. As documented here, this range is determined by physiologic needs rather than environmental factors. Future guidelines should be based on this biologically determined range.
Evaluation of deformable image registration and a motion model in CT images with limited features.
Liu, F; Hu, Y; Zhang, Q; Kincaid, R; Goodman, K A; Mageras, G S
2012-05-07
Deformable image registration (DIR) is increasingly used in radiotherapy applications and provides the basis for a previously described model of patient-specific respiratory motion. We examine the accuracy of a DIR algorithm and a motion model using respiration-correlated CT (RCCT) images of a software phantom with known displacement fields, a physical deformable abdominal phantom with implanted fiducials in the liver, and small liver structures in patient images. The motion model is derived from a principal component analysis that relates volumetric deformations to the motion of the diaphragm or fiducials in the RCCT. Patient data analysis compares DIR with rigid registration as ground truth: the mean ± standard deviation 3D discrepancy of liver structure centroid positions is 2.0 ± 2.2 mm. DIR discrepancy in the software phantom is 3.8 ± 2.0 mm in lung and 3.7 ± 1.8 mm in abdomen; discrepancies near the chest wall are larger than indicated by image feature matching. The markers' 3D discrepancy in the physical phantom is 3.6 ± 2.8 mm. The results indicate that visible features in the images are important for guiding the DIR algorithm. Motion model accuracy is comparable to DIR, indicating that two principal components are sufficient to describe DIR-derived deformation in these datasets.
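The "mean ± SD 3D discrepancy" quoted above is the Euclidean distance between corresponding points (e.g. DIR-mapped centroids vs ground truth), summarized over all points. A minimal sketch with made-up coordinates:

```python
import numpy as np

def discrepancy_stats(pred, ref):
    """Mean and SD of the 3D Euclidean distance between corresponding
    points, e.g. DIR-mapped structure centroids vs a reference registration."""
    d = np.linalg.norm(np.asarray(pred) - np.asarray(ref), axis=1)
    return d.mean(), d.std()

# illustrative example: three predicted centroids vs the origin
pred = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
ref = np.zeros((3, 3))
mean, sd = discrepancy_stats(pred, ref)
print(mean, sd)  # distances 1, 2, 2 -> mean 5/3
```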
NASA Astrophysics Data System (ADS)
Latorre, Diana; Lupattelli, Andrea; Mirabella, Francesco; Trippetta, Fabio; Valoroso, Luisa; Lomax, Anthony; Di Stefano, Raffaele; Collettini, Cristiano; Chiaraluce, Lauro
2014-05-01
Accurate hypocenter location at the crustal scale strongly depends on our knowledge of the 3D velocity structure. The integration of geological and geophysical data, when available, should contribute to a reliable seismic velocity model in order to guarantee high quality earthquake locations as well as their consistency with the geological structure. Here we present a 3D, P- and S-wave velocity model of the Upper Tiber valley region (Northern Apennines) retrieved by combining an extremely robust dataset of surface and sub-surface geological data (seismic reflection profiles and boreholes), in situ and laboratory velocity measurements, and earthquake data. The study area is a portion of the Apennine belt undergoing active extension, where a set of high-angle normal faults is detached on the Altotiberina low-angle normal fault (ATF). Since 2010, this area has hosted a scientific infrastructure (the Alto Tiberina Near Fault Observatory, TABOO; http://taboo.rm.ingv.it/), consisting of a dense array of multi-sensor stations devoted to studying the earthquake preparatory phase and the deformation processes along the ATF fault system. The proposed 3D velocity model is a layered model in which irregularly shaped surfaces mark the boundaries between the main lithological units. The model has been constructed by interpolating depth-converted seismic horizons interpreted along 40 seismic reflection profiles (down to 4 s two-way travel time) that have been calibrated with 6 deep boreholes (down to 5 km depth) and constrained by detailed geological maps and structural survey data. The layers of the model are characterized by similar rock types and seismic velocity properties. The P- and S-wave velocities for each layer have been derived from velocity measurements coming from both boreholes (sonic logs) and the laboratory, where measurements have been performed on analogue natural samples at increasing confining pressure in order to simulate crustal conditions.
In order to test the 3D velocity model, we located a selected dataset of the 2010-2013 TABOO catalogue, which is composed of about 30,000 micro-earthquakes (see Valoroso et al., same session). Earthquake location was performed by applying the global-search earthquake location method NonLinLoc, which is able to manage strong velocity contrasts such as those observed in the study area. The model volume is 65 km x 55 km x 20 km and is parameterized by constant-velocity cubic cells of side 100 m. For comparison, we applied the same inversion code using the best 1D model of the area obtained with earthquake data. The results show a significant quality improvement with the 3D model, both in terms of location parameters and in the correlation between seismicity distribution and known geological structures.
A 4-D dataset for validation of crystal growth in a complex three-phase material, ice cream
NASA Astrophysics Data System (ADS)
Rockett, P.; Karagadde, S.; Guo, E.; Bent, J.; Hazekamp, J.; Kingsley, M.; Vila-Comamala, J.; Lee, P. D.
2015-06-01
Four dimensional (4D, or 3D plus time) X-ray tomographic imaging of phase changes in materials is quickly becoming an accepted tool for quantifying the development of microstructures to both inform and validate models. However, most of the systems studied have been relatively simple binary compositions with only two phases. In this study we present a quantitative dataset of the phase evolution in a complex three-phase material, ice cream. The microstructure of ice cream is an important parameter in terms of sensorial perception, and therefore quantification and modelling of the evolution of the microstructure with time and temperature is key to understanding its fabrication and storage. The microstructure consists of three phases: air cells, ice crystals, and unfrozen matrix. We perform in situ synchrotron X-ray imaging of ice cream samples using in-line phase contrast tomography, housed within a purpose-built cold stage (-40 to +20 °C) with finely controlled variation in specimen temperature. The size and distribution of ice crystals and air cells during programmed temperature cycling are determined using 3D quantification. The microstructural evolution of three-phase materials has many other important applications, ranging from biological to structural and functional materials, hence this dataset can act as a validation case for numerical investigations of faceted and non-faceted crystal growth in a range of materials.
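A basic building block of the "3D quantification" step is labelling connected voxels of one segmented phase and converting each object's voxel count to a size measure. The sketch below assumes a binary ice-phase segmentation and uses the equivalent-sphere diameter, one common convention; neither the segmentation nor the exact size metric of the study is specified in the abstract.

```python
import numpy as np
from scipy import ndimage

def crystal_sizes(volume, voxel_um=1.0):
    """Label connected crystal voxels in a segmented 3D volume and return
    the equivalent-sphere diameter of each crystal (units of voxel_um)."""
    labeled, n = ndimage.label(volume)          # 6-connected components
    voxels = np.bincount(labeled.ravel())[1:]   # skip background label 0
    vol = voxels * voxel_um ** 3
    return (6.0 * vol / np.pi) ** (1.0 / 3.0)   # d = (6V/pi)^(1/3)

seg = np.zeros((10, 10, 10), int)
seg[1:3, 1:3, 1:3] = 1     # 8-voxel "crystal"
seg[6:8, 6, 6] = 1         # 2-voxel "crystal"
sizes = crystal_sizes(seg)
print(sizes)
```

Tracking these size distributions across tomograms acquired during temperature cycling gives the coarsening curves such a dataset is meant to validate.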
Siuly; Yin, Xiaoxia; Hadjiloucas, Sillas; Zhang, Yanchun
2016-04-01
This work provides a performance comparison of four different machine learning classifiers as applied to terahertz (THz) transient time-domain sequences associated with pixelated images of different powder samples: a multinomial logistic regression with ridge estimators (MLR) classifier, k-nearest neighbours (KNN), a support vector machine (SVM) and naïve Bayes (NB). Although the six substances considered have similar optical properties, their complex insertion loss at the THz part of the spectrum differs significantly because of differences in their frequency-dependent THz extinction coefficients as well as in their refractive indices and scattering properties. As scattering can be unquantifiable in many spectroscopic experiments, classification based solely on differences in complex insertion loss can be inconclusive. The problem is addressed using two-dimensional (2-D) cross-correlations between background and sample interferograms; these ensure good noise suppression of the datasets and provide a range of statistical features that are subsequently used as inputs to the above classifiers. A cross-validation procedure is adopted to assess the performance of the classifiers. First the measurements related to samples with thicknesses of 2 mm were classified, then samples with thicknesses of 4 mm, and after that 3 mm, and the success rate and consistency of each classifier were recorded. In addition, mixtures having thicknesses of 2 and 4 mm as well as mixtures of 2, 3 and 4 mm were presented simultaneously to all classifiers. This approach provided further cross-validation of the classification consistency of each algorithm. The results confirm the superiority in classification accuracy and robustness of the MLR (least accuracy 88.24%) and KNN (least accuracy 90.19%) algorithms, which consistently outperformed the SVM (least accuracy 74.51%) and NB (least accuracy 56.86%) classifiers for the same number of feature vectors across all studies.
The work establishes a general methodology for assessing the performance of other hyperspectral dataset classifiers on the basis of 2-D cross-correlations in far-infrared spectroscopy or other parts of the electromagnetic spectrum. It also advances the wider proliferation of automated THz imaging systems across new application areas e.g., biomedical imaging, industrial processing and quality control where interpretation of hyperspectral images is still under development. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
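The 2-D cross-correlation feature-extraction stage can be sketched as follows; which statistics feed the classifiers in the actual study is not detailed in the abstract, so the five features below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def xcorr_features(sample, background):
    """Statistical features of the 2D cross-correlation between a sample
    interferogram and a background interferogram; such feature vectors
    could feed classifiers like MLR, KNN, SVM or NB."""
    xc = correlate2d(sample - sample.mean(),
                     background - background.mean(), mode='full')
    return np.array([xc.max(), xc.min(), xc.mean(), xc.std(),
                     float(np.argmax(xc))])   # peak lag index as a feature

rng = np.random.default_rng(1)
bg = rng.standard_normal((16, 16))                       # background frame
sample = bg + 0.05 * rng.standard_normal((16, 16))       # correlated sample
feats = xcorr_features(sample, bg)
print(feats)
```

Cross-correlating against the background suppresses noise common to both frames, which is why the paper uses these features rather than the raw insertion loss.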
NASA Astrophysics Data System (ADS)
Martínez-García, Juan; Melgosa, Manuel; Gómez-Robledo, Luis; Li, Changjun; Huang, Min; Liu, Haoxue; Cui, Guihua; Luo, M. Ronnier; Dauser, Thomas
2013-11-01
Colour-difference formulas are tools employed in colour industries for objective pass/fail decisions on manufactured products. These objective decisions are based on instrumental colour measurements, which must reliably predict the subjective colour-difference evaluations performed by observer panels. In a previous paper we tested the performance of different colour-difference formulas using the datasets employed in the development of the last CIE-recommended colour-difference formula, CIEDE2000, and we found that the AUDI2000 colour-difference formula for solid (homogeneous) colours performed reasonably well, even though the colour pairs in these datasets were not similar to those typically employed in the automotive industry (CIE Publication x038:2013, 465-469). Here we have tested again AUDI2000, together with 11 advanced colour-difference formulas (CIELUV, CIELAB, CMC, BFD, CIE94, CIEDE2000, CAM02-UCS, CAM02-SCD, DIN99d, DIN99b, OSA-GP-Euclidean), on three visual datasets we consider particularly useful to the automotive industry for different reasons: 1) 828 metallic colour pairs used to develop the highly reliable RIT-DuPont dataset (Color Res. Appl. 35, 274-283, 2010); 2) printed samples comprising 893 colour pairs with threshold colour differences (J. Opt. Soc. Am. A 29, 883-891, 2012); 3) 150 colour pairs in a tolerance dataset proposed by AUDI. To measure the relative merits of the different tested colour-difference formulas, we employed the STRESS index (J. Opt. Soc. Am. A 24, 1823-1829, 2007), assuming a 95% confidence level. For datasets 1) and 2), AUDI2000 was in the group of the best colour-difference formulas, with no significant differences with respect to the CIE94, CIEDE2000, CAM02-UCS, DIN99b and DIN99d formulas. For dataset 3) AUDI2000 provided the best results, being statistically significantly better than all other tested colour-difference formulas.
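The STRESS index used above compares computed colour differences ΔE against visual differences ΔV after an optimal scaling; 0 means perfect agreement and larger values mean worse agreement. A minimal sketch of the standard formulation (STRESS = 100·[Σ(ΔE − FΔV)² / ΣF²ΔV²]^½ with F = ΣΔEΔV / ΣΔV²):

```python
import numpy as np

def stress(dE, dV):
    """STRESS index (0-100) between computed colour differences dE and
    visual colour differences dV; lower means better formula performance."""
    dE = np.asarray(dE, float)
    dV = np.asarray(dV, float)
    F = (dE * dV).sum() / (dV ** 2).sum()   # optimal scaling factor
    num = ((dE - F * dV) ** 2).sum()
    den = (F ** 2) * (dV ** 2).sum()
    return 100.0 * np.sqrt(num / den)

print(stress([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # perfect agreement -> 0.0
print(stress([1.0, 2.0, 3.0], [3.0, 1.0, 2.0]))  # poor agreement -> large
```

Because STRESS is invariant to an overall scale factor between ΔE and ΔV, formulas reporting differences in different units can still be compared fairly.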
Aswehlee, Amel M; Elbashti, Mahmoud E; Hattori, Mariko; Sumita, Yuka I; Taniguchi, Hisashi
The purpose of this study was to geometrically evaluate the effect of prosthetic rehabilitation on the facial appearance of mandibulectomy patients. Facial scans (with and without prostheses) were performed for 16 mandibulectomy patients using a noncontact three-dimensional (3D) digitizer, and 3D images were reconstructed with the corresponding software. The 3D datasets were geometrically evaluated and compared using 3D evaluation software. The mean difference in absolute 3D deviations for full face scans was 382.2 μm. This method may be useful in evaluating the effect of conventional prostheses on the facial appearance of individuals with mandibulectomy defects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, D; Pollock, S; Keall, P
Purpose: External respiratory surrogates are often used to predict internal lung tumor motion for beam gating, but the assumption of correlation between external and internal surrogates is not always verified, resulting in amplitude mismatches and time shifts. To test the hypothesis that audiovisual (AV) biofeedback improves the correlation between internal and external respiratory motion, in order to improve the accuracy of respiratory-gated treatments for lung cancer radiotherapy. Methods: In nine lung cancer patients, 2D coronal and sagittal cine-MR images were acquired across two MRI sessions (pre- and mid-treatment) with (1) free breathing (FB) and (2) AV biofeedback. External anterior-posterior (AP) respiratory motions of (a) the chest and (b) the abdomen were simultaneously acquired with a physiological measurement unit (PMU, 3T Skyra, Siemens Healthcare, Erlangen, Germany) and the real-time position management (RPM) system (Varian, Palo Alto, USA), respectively. Internal superior-inferior (SI) respiratory motions of (c) the lung tumor (i.e. the centroid of the auto-segmented lung tumor) and (d) the diaphragm (i.e. the upper liver dome) were measured from individual cine-MR images across 32 datasets. The four respiratory motions were then synchronized with the cine-MR image acquisition time. Correlation coefficients were calculated for the time variation of two nominated respiratory motions: (1) chest-abdomen, (2) abdomen-diaphragm and (3) diaphragm-lung tumor. The three combinations were compared between FB and AV biofeedback. Results: Compared to FB, AV biofeedback improved chest-abdomen correlation by 17% (p=0.005) from 0.75±0.23 to 0.90±0.05 and abdomen-diaphragm correlation by 4% (p=0.058) from 0.91±0.11 to 0.95±0.05. Compared to FB, AV biofeedback improved diaphragm-lung tumor correlation by 12% (p=0.023) from 0.65±0.21 to 0.74±0.16.
Conclusions: Our results demonstrated that AV biofeedback significantly improved the correlation of internal and external respiratory motion, suggesting the need for AV biofeedback in respiratory-gated treatments.
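The correlation coefficients reported above are plain Pearson correlations of synchronized motion traces; a toy sketch with simulated chest/abdomen signals (the signal shapes are illustrative, not patient data):

```python
import numpy as np

def motion_correlation(a, b):
    """Pearson correlation between two synchronized respiratory traces."""
    return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]

t = np.linspace(0, 10, 200)                      # 10 s at 20 Hz
chest   = np.sin(2 * np.pi * 0.25 * t)           # ~4 s breathing period
abdomen = np.sin(2 * np.pi * 0.25 * t - 0.3)     # same rhythm, slight phase lag
print(round(float(motion_correlation(chest, abdomen)), 2))
```

Even a small phase lag between surrogate and target lowers the correlation, which is the mechanism the regularized breathing under AV biofeedback mitigates.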
Jin, X; Yan, H; Han, C; Zhou, Y; Yi, J; Xie, C
2015-03-01
To investigate comparatively the percentage gamma passing rate (%GP) of two-dimensional (2D) and three-dimensional (3D) pre-treatment volumetric modulated arc therapy (VMAT) dosimetric verification and their correlation and sensitivity with percentage dosimetric errors (%DE). %GP of 2D and 3D pre-treatment VMAT quality assurance (QA) with different acceptance criteria was obtained by ArcCHECK® (Sun Nuclear Corporation, Melbourne, FL) for 20 patients with nasopharyngeal cancer (NPC) and 20 patients with oesophageal cancer. %DE were calculated from planned dose-volume histogram (DVH) and patients' predicted DVH calculated by 3DVH® software (Sun Nuclear Corporation). Correlation and sensitivity between %GP and %DE were investigated using Pearson's correlation coefficient (r) and receiver operating characteristics (ROCs). Relatively higher %DE on some DVH-based metrics were observed for both patients with NPC and oesophageal cancer. Except for 2%/2 mm criterion, the average %GPs for all patients undergoing VMAT were acceptable with average rates of 97.11% ± 1.54% and 97.39% ± 1.37% for 2D and 3D 3%/3 mm criteria, respectively. The number of correlations for 3D was higher than that for 2D (21 vs 8). However, the general correlation was still poor for all the analysed metrics (9 out of 26 for 3D 3%/3 mm criterion). The average area under the curve (AUC) of ROCs was 0.66 ± 0.12 and 0.71 ± 0.21 for 2D and 3D evaluations, respectively. There is a lack of correlation between %GP and %DE for both 2D and 3D pre-treatment VMAT dosimetric evaluation. DVH-based dose metrics evaluation obtained from 3DVH will provide more useful analysis. Correlation and sensitivity of %GP with %DE for VMAT QA were studied for the first time.
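The %GP metric above can be illustrated with a simplified 1D global gamma analysis (the real ArcCHECK evaluation is 2D/3D; `dose_tol` and `dta_mm` mirror the 3%/3 mm criterion, and the dose profile below is hypothetical):

```python
import numpy as np

def gamma_pass_rate(ref, meas, x, dose_tol=0.03, dta_mm=3.0):
    """Simplified 1D global gamma analysis (illustration of the principle).

    For each measured point, gamma is the minimum over all reference points of
    sqrt((distance/DTA)^2 + (dose difference/tolerance)^2); the point passes
    if gamma <= 1. The dose tolerance is global, i.e. 3% of the maximum."""
    ref, meas, x = (np.asarray(v, float) for v in (ref, meas, x))
    d_max = ref.max()
    passed = 0
    for xi, di in zip(x, meas):
        dist = (x - xi) / dta_mm
        ddiff = (ref - di) / (dose_tol * d_max)
        passed += np.sqrt(dist**2 + ddiff**2).min() <= 1.0
    return 100.0 * passed / len(meas)

x = np.linspace(0, 50, 101)                 # detector positions in mm
ref = np.exp(-((x - 25) / 10) ** 2)         # reference dose profile
print(gamma_pass_rate(ref, ref * 1.02, x))  # 2% global offset → 100.0
```

A uniform 2% error passes everywhere under 3%/3 mm, which hints at why high %GP can coexist with non-negligible DVH-based dosimetric errors.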
Thermal Texture Generation and 3D Model Reconstruction Using SfM and GAN
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Mizginov, V. A.
2018-05-01
Realistic 3D models with textures representing the thermal emission of an object are widely used in fields such as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have proved that they can perform complex image-to-image transformations such as day-to-night conversion and generation of imagery in a different spectral range. In this paper, we propose a novel method for the generation of realistic 3D models with thermal textures using the SfM pipeline and a GAN. The proposed method uses visible-range images as input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN network and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures showed that they are similar to the ground truth model in both thermal emissivity and geometrical shape.
Goebel, L; Zurakowski, D; Müller, A; Pape, D; Cucchiarini, M; Madry, H
2014-10-01
To compare the 2D and 3D MOCART systems obtained with 9.4 T high-field magnetic resonance imaging (MRI) for the ex vivo analysis of osteochondral repair in a translational model, and to correlate the data with semiquantitative histological analysis. Osteochondral samples representing all levels of repair (sheep medial femoral condyles; n = 38) were scanned in a 9.4 T high-field MRI. The 2D and adapted 3D MOCART systems were used for grading after point allocation to each category. Scores were then correlated between corresponding reconstructions of the two MOCART systems. Data were next correlated with corresponding categories of an elementary (Wakitani) and a complex (Sellers) histological scoring system as gold standards. Correlations between most 2D and 3D MOCART score categories were high, while, based on a Bland-Altman analysis, mean total point values of the 3D MOCART score tended to be 15.8-16.1 points higher than those of the 2D MOCART score. "Defect fill" and "total points" of both MOCART scores correlated with corresponding categories of the Wakitani and Sellers scores (all P ≤ 0.05). "Subchondral bone plate" also correlated between the 3D MOCART and Sellers scores (P < 0.001). Most categories of the 2D and 3D MOCART systems correlate, while total scores were generally higher using the 3D MOCART system. The structural categories "total points" and "defect fill" can reliably be assessed by 9.4 T MRI evaluation using either system, and "subchondral bone plate" using the 3D MOCART score. High-field MRI is valuable for objectively evaluating osteochondral repair in translational settings. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
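The Bland-Altman comparison mentioned above reduces to a bias and 95% limits of agreement on paired score differences; a sketch with hypothetical paired scores (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired scores: the "3D" reading sits ~16 points above the "2D" one
scores_2d = np.array([40.0, 55.0, 60.0, 72.0, 80.0])
scores_3d = scores_2d + 16.0 + np.array([-1.0, 0.5, 1.0, -0.5, 0.0])
bias, lo, hi = bland_altman(scores_3d, scores_2d)
print(round(float(bias), 1))  # → 16.0
```

A systematic offset like this does not contradict a high correlation between the two systems; it only means their totals are not interchangeable without correction.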
NASA Astrophysics Data System (ADS)
Amadori, Chiara; Toscani, Giovanni; Ghielmi, Manlio; Maesano, Francesco Emanuele; D'Ambrogi, Chiara; Lombardi, Stefano; Milanesi, Riccardo; Panara, Yuri; Di Giulio, Andrea
2017-04-01
The Pliocene-Pleistocene tectonic and sedimentary evolution of the eastern Po Plain and northern Adriatic Foreland Basin (PPAF) (extending over ca. 35,000 km2) was the consequence of severe Northern Apennine compressional activity and climate-driven eustatic changes. Following the 2D seismic interpretation, facies analysis and sequence stratigraphy approach of Ghielmi et al. (2013 and references therein), these tectono-eustatic phases generated six basin-scale unconformities referred to as Base Pliocene (PL1), Intra-Zanclean (PL2), Intra-Piacenzian (PL3), Gelasian (PL4), Base Calabrian (PS1) and Late Calabrian (PS2). We present a basin-wide detailed 3D model of the PPAF region, derived from the interpretation of these unconformities in a dense network of seismic lines (ca. 6,000 km) correlated with more than 200 well stratigraphies (courtesy of ENI E&P). The initial 3D time model was time-to-depth converted using the 3D velocity model created with Vel-IO 3D, a tool for 3D depth conversions, and then validated and integrated with depth-domain datasets from the literature and well logs. The resultant isobath and isopach maps allow the basin's paleogeographic evolution to be inspected step by step; it occurred through alternating stages of simple and fragmented foredeeps. Changes in the basin geometry through time, from the inner sector located in the Emilia-Romagna Apennines to the outermost region (Veneto and the northern Adriatic Sea), were marked by repeated phases of outward migration of two large, deep depocenters located in front of the Emilia arcs on the west and in front of the Ferrara-Romagna thrusts on the east. During the late Pliocene-early Pleistocene, the inner side of the Emilia-Romagna arcs evolved into an elongated, deep thrust-top basin due to strong foredeep fragmentation; moreover, overall tectono-stratigraphic analysis shows a decreasing trend in Northern Apennine tectonic intensity from the Pleistocene to the present.
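The time-to-depth conversion step can be illustrated with a bare-bones 1D layer-cake version (Vel-IO 3D operates on a full 3D velocity model; the times and velocities below are hypothetical):

```python
def time_to_depth(twt_s, interval_velocity_mps):
    """Depths of horizons from two-way travel times (TWT) and layer interval
    velocities: each layer adds v * (delta_TWT / 2), i.e. the one-way time in
    the layer times that layer's velocity. A crude 1D stand-in for a 3D model."""
    depths, z, prev_t = [], 0.0, 0.0
    for t, v in zip(twt_s, interval_velocity_mps):
        z += v * (t - prev_t) / 2.0
        depths.append(z)
        prev_t = t
    return depths

# Hypothetical horizons at 0.5 s and 1.5 s TWT; 2000 and 3000 m/s layers
print(time_to_depth([0.5, 1.5], [2000.0, 3000.0]))  # → [500.0, 2000.0]
```

Errors in the velocity model propagate directly into the isobath maps, which is why the depth-converted model is validated against well logs and published depth data.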
A shape prior-based MRF model for 3D masseter muscle segmentation
NASA Astrophysics Data System (ADS)
Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe
2012-02-01
Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors and adjacent anatomical structures having an intensity profile similar to the target structure. In this paper we propose a novel approach to segmenting the masseter muscle in CT datasets using graph-cut with additional 3D shape priors, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both unary and pairwise potentials of the Markov Random Field (MRF). The segmentation task is cast as a Maximum-A-Posteriori (MAP) estimation of the MRF. Graph-cut is then used to obtain the global minimum, which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by, e.g., the very common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results for the challenging problem of muscle segmentation, and we further provide a quantitative and qualitative comparison with other methods. We statistically show that adding an additional shape prior into both unary and pairwise potentials can increase the robustness of the proposed method on noisy datasets.
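Casting segmentation as MAP estimation of an MRF can be shown on a toy binary chain, where the unary term carries the data and (hypothetical) shape-prior costs and a Potts pairwise term enforces smoothness; exhaustive search stands in for graph-cut at this scale:

```python
import numpy as np
from itertools import product

def mrf_energy(labels, unary, pairwise_weight):
    """Energy of a binary labelling on a 1D chain MRF: unary[i, l] carries the
    data + shape-prior cost; a Potts term penalises neighbour disagreement."""
    e = sum(unary[i, l] for i, l in enumerate(labels))
    return e + pairwise_weight * sum(a != b for a, b in zip(labels, labels[1:]))

def map_labelling(unary, pairwise_weight):
    """Exhaustive MAP estimate; graph-cut finds the same global optimum for
    this kind of submodular binary energy but scales to real 3D volumes."""
    n = len(unary)
    return min(product((0, 1), repeat=n),
               key=lambda L: mrf_energy(L, unary, pairwise_weight))

# 5 sites; the data term is ambiguous at site 2, where a shape prior tilts the
# unary cost slightly towards foreground (label 1)
unary = np.array([[0.0, 2.0], [0.0, 2.0], [1.0, 0.8], [2.0, 0.0], [2.0, 0.0]])
print(map_labelling(unary, pairwise_weight=0.5))  # → (0, 0, 1, 1, 1)
```

The ambiguous site is resolved jointly by the prior and the smoothness term, which is the mechanism that rescues weakly contrasted boundaries in noisy CT.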
NASA Astrophysics Data System (ADS)
Kroenke, Samantha E.
In June 2009, a 2.2 square mile 3-D high-resolution seismic reflection survey was shot in southeastern Illinois in the Phillipstown Consolidated oilfield. A well was drilled in the 3-D survey area to tie the seismic to the geological data with a synthetic seismogram from the sonic log. The objectives of the 3-D seismic survey were three-fold: (1) To image and interpret faulting of the Herald-Phillipstown Fault (HPF) using drillhole-based geological and seismic cross-sections and structural contour maps created from the drillhole data and seismic reflection data, (2) To test the effectiveness of imaging the faults by selected seismic attributes, and (3) To compare spectral decomposition amplitude maps with an isochron map and an isopach map of a selected geologic interval (VTG interval). Drillhole and seismic reflection data show that various formation offsets increase near the main Herald-Phillipstown fault, and that the fault and its large-offset subsidiary faults penetrate the Precambrian crystalline basement. A broad, northeast-trending, 10,000-foot-wide graben is consistently observed in the drillhole data. Both shallow and deep formations in the geological cross-sections reveal small horst and graben features within the broad graben, possibly created in response to fault reactivations. The HPF faults have been interpreted as originally Precambrian-age, high-angle normal faults reactivated with various amounts and types of offset. Evidence for strike-slip movement is also clear on several faults. Changes in the seismic attribute values in the selected interval and along various time slices throughout the whole dataset correlate with the Herald-Phillipstown faults. Overall, seismic attributes could provide a means of mapping large-offset faults in areas with limited or absent drillhole data.
Results of the spectral decomposition suggest that if the interval velocity is known for a particular formation or interval, high-resolution 3-D seismic reflection surveys could utilize these amplitudes as an alternative seismic interpretation method for estimating formation thicknesses. A VTG isopach map was compared with an isochron map and a spectral decomposition amplitude map. The results reveal that the isochron map strongly correlates with the isopach map as well as with the spectral decomposition map. It was also found that thicker areas in the isopach map correlated with higher amplitude values in the spectral decomposition amplitude map. Offsets along the faults appear sharper in the amplitude and isochron maps than in the isopach map, possibly as a result of increased spatial sampling.
NASA Astrophysics Data System (ADS)
Carton, H. D.; Carbotte, S. M.; Mutter, J. C.; Canales, J.; Nedimovic, M. R.; Marjanovic, M.; Aghaei, O.; Xu, M.; Han, S.; Stowe, L.
2009-12-01
In the summer of 2008, a large 3D multi-channel seismic dataset (expedition MGL0812) was collected over the 9°50’N Integrated Study Site at the East Pacific Rise, providing insight into the architecture of the magmatic system and its relationship with hydrothermal activity and volcanic/dyking events associated with the 2005-06 eruption. The main area of 3D coverage is located between 9°42’N and 9°57’N, spanning ~28 km along-axis, and was acquired along 94 (1 partial) prime lines shot across-axis, each ~24 km long. Pre-processing of the data acquired in this area is now well under way, with significant efforts targeted at amplitude spike removal. Current work focuses on setting up the 3D processing sequence up to the stack stage for a small group of inlines (axis-perpendicular grid lines spaced 37.5 m apart) located over the “bull’s eye” site at 9°50’N, a sequence that will subsequently be applied to the whole dataset. At the meeting we will present stacked and migrated sections - inlines, crosslines, time slices - obtained through 3D processing. We will discuss results focusing on the characteristics of the axial magma body, whose detailed structure and along-axis segmentation will be resolved by the 3D data.
Deep learning and face recognition: the state of the art
NASA Astrophysics Data System (ADS)
Balaban, Stephen
2015-05-01
Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks including image classification, speech recognition, and face recognition [1-3]. Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset [3-6]. In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state-of-the-art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing as opposed to standard convolutional layers [6]. Deep learning techniques combined with large datasets have allowed research groups to surpass human level performance on the LFW dataset [3, 5]. The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques [3, 5]. There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices.
Such an "ImageNet for Face Recognition" would likely receive a warm welcome from researchers and practitioners alike.
DCS-SVM: a novel semi-automated method for human brain MR image segmentation.
Ahmadvand, Ali; Daliri, Mohammad Reza; Hajiali, Mohammadtaghi
2017-11-27
In this paper, a novel method is proposed which appropriately segments magnetic resonance (MR) brain images into three main tissues. This paper proposes an extension of our previous work, in which we suggested a combination of multiple classifiers (CMC)-based method named dynamic classifier selection-dynamic local training local Tanimoto index (DCS-DLTLTI) for MR brain image segmentation into three main cerebral tissues. This idea is used here, and a novel method is developed that tries to use more complex and accurate classifiers, such as the support vector machine (SVM), in the ensemble. This work is challenging because CMC-based methods are time-consuming, especially on huge datasets like three-dimensional (3D) brain MR images. Moreover, SVM is a powerful method for modeling datasets with complex feature spaces, but it also has a huge computational cost for big datasets, especially those with strong interclass variability and more than two classes, such as 3D brain images; therefore, we cannot use SVM directly in DCS-DLTLTI. Hence, we propose a novel approach named "DCS-SVM" to use SVM in DCS-DLTLTI to improve the accuracy of segmentation results. The proposed method is applied to well-known datasets of the Internet Brain Segmentation Repository (IBSR), and promising results are obtained.
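The dynamic classifier selection idea underlying DCS-SVM can be sketched on toy data: estimate each classifier's competence in the query's neighborhood and let the locally best one decide. (The paper's actual competence measure is the local Tanimoto index; plain local accuracy is used here for brevity, and the data and rules are hypothetical.)

```python
import numpy as np

def dcs_predict(x, classifiers, val_X, val_y, k=3):
    """Dynamic classifier selection (toy version): estimate each classifier's
    accuracy on the k validation samples nearest to the query, then let the
    locally most competent classifier predict."""
    nn = np.argsort(np.linalg.norm(val_X - x, axis=1))[:k]
    accs = [np.mean(np.array([clf(v) for v in val_X[nn]]) == val_y[nn])
            for clf in classifiers]
    best = int(np.argmax(accs))
    return classifiers[best](x), best

val_X = np.array([[-2.0, 1.0], [-1.0, -1.0], [1.0, 2.0],
                  [2.0, -1.0], [-1.5, 0.5], [1.5, 0.5]])
val_y = (val_X[:, 0] > 0).astype(int)       # ground truth: sign of feature 0
classifiers = [lambda v: int(v[0] > 0),     # informative rule
               lambda v: int(v[1] > 0)]     # locally unreliable rule
pred, chosen = dcs_predict(np.array([1.0, -2.0]), classifiers, val_X, val_y)
print(pred, chosen)  # → 1 0
```

Selection happens per query, so an expensive classifier such as an SVM only needs to be consulted where it is locally the most competent, which is the efficiency argument the abstract makes.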
Deep Learning for Low-Textured Image Matching
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.
2018-05-01
Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress the discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
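The matching stage rests on nearest-neighbour search over descriptor codes; a sketch using random codes and Lowe's ratio test (a common stand-in for the paper's codebook and modified voting algorithm):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching of descriptor codes with Lowe's ratio test:
    keep a pair (i, j) only if the best match in desc_b is clearly closer
    than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(0)
codes_a = rng.normal(size=(5, 16))                       # codes for image A
perm = [3, 1, 4, 0, 2]                                   # ground-truth pairing
codes_b = codes_a[perm] + 0.01 * rng.normal(size=(5, 16))  # noisy codes, image B
print(match_descriptors(codes_a, codes_b))  # → [(0, 3), (1, 1), (2, 4), (3, 0), (4, 2)]
```

The ratio test is exactly what fails on low-textured surfaces: when many patches look alike, the second-best distance approaches the best one, and ambiguous matches are discarded.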
Three-dimensional segmentation of pulmonary artery volume from thoracic computed tomography imaging
NASA Astrophysics Data System (ADS)
Lindenmaier, Tamas J.; Sheikh, Khadija; Bluemke, Emma; Gyacskov, Igor; Mura, Marco; Licskai, Christopher; Mielniczuk, Lisa; Fenster, Aaron; Cunningham, Ian A.; Parraga, Grace
2015-03-01
Chronic obstructive pulmonary disease (COPD) is a major contributor to hospitalization and healthcare costs in North America. While the hallmark of COPD is airflow limitation, it is also associated with abnormalities of the cardiovascular system. Enlargement of the pulmonary artery (PA) is a morphological marker of pulmonary hypertension, and was previously shown to predict acute exacerbations using a one-dimensional diameter measurement of the main PA. We hypothesized that a three-dimensional (3D) quantification of PA size would be more sensitive than 1D methods and encompass morphological changes along the entire central pulmonary artery. Hence, we developed a 3D measurement of the main (MPA), left (LPA) and right (RPA) pulmonary arteries as well as total PA volume (TPAV) from thoracic CT images. This approach incorporates segmentation of the pulmonary vessels in cross-section for the MPA, LPA and RPA to provide an estimate of their volumes. Three observers performed five repeated measurements for 15 ex-smokers with ≥10 pack-years, randomly identified from a larger dataset of 199 patients. There was strong agreement (r2=0.76) between PA volume and PA diameter measurements, the latter serving as the gold standard. Observer measurements were strongly correlated, and coefficients of variation for observer 1 (MPA:2%, LPA:3%, RPA:2%, TPAV:2%) were not significantly different from observer 2 and 3 results. In conclusion, we generated manual 3D pulmonary artery volume measurements from thoracic CT images that can be performed with high reproducibility. Future work will involve automation for implementation in clinical workflows.
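The reported coefficients of variation are simply 100·SD/mean over repeated measurements; a sketch with hypothetical repeats (the values below are illustrative, not the study's data):

```python
import numpy as np

def coefficient_of_variation(measurements):
    """CV (%) of repeated measurements: 100 * sample SD / mean."""
    m = np.asarray(measurements, float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Hypothetical five repeated MPA volume measurements (cm^3) by one observer
repeats = [31.0, 31.5, 30.8, 31.2, 31.0]
print(round(float(coefficient_of_variation(repeats)), 1))  # → 0.9
```

Because CV is scale-free, it lets the observers' reproducibility be compared across vessels (MPA, LPA, RPA) of different absolute sizes.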
Big Data solution for CTBT monitoring: CEA-IDC joint global cross correlation project
NASA Astrophysics Data System (ADS)
Bobrov, Dmitry; Bell, Randy; Brachet, Nicolas; Gaillard, Pierre; Kitov, Ivan; Rozhkov, Mikhail
2014-05-01
Waveform cross-correlation, when applied to historical datasets of seismic records, provides dramatic improvements in detection, location, and magnitude estimation of natural and manmade seismic events. With correlation techniques, the amplitude threshold of signal detection can be reduced globally by a factor of 2 to 3 relative to the currently standard beamforming and STA/LTA detectors. The gain in sensitivity corresponds to a body-wave magnitude reduction of 0.3 to 0.4 units and doubles the number of events meeting high quality requirements (e.g., detected by three or more seismic stations of the International Monitoring System (IMS)). This gain is crucial for seismic monitoring under the Comprehensive Nuclear-Test-Ban Treaty. The International Data Centre (IDC) dataset includes more than 450,000 seismic events, tens of millions of raw detections and continuous seismic data from the primary IMS stations since 2000. This high-quality dataset is a natural candidate for an extensive cross-correlation study and the basis of further enhancements in monitoring capabilities. Without this historical dataset recorded by the permanent IMS Seismic Network, such improvements would not be feasible. However, due to the mismatch between the volume of data and the performance of a standard information technology infrastructure, it is impossible to process all the data within a tolerable elapsed time. To tackle this problem, known as "Big Data", the CEA/DASE is part of the French project "DataScale". One objective is to reanalyze 10 years of waveform data from the IMS network with the cross-correlation technique thanks to a dedicated High Performance Computing (HPC) infrastructure operated by the Centre de Calcul Recherche et Technologie (CCRT) at the CEA of Bruyères-le-Châtel.
Within 2 years we plan to enhance detection and phase association algorithms (also using machine learning and automatic classification) and to process about 30 terabytes of data provided by the IDC to update the world seismicity map. From the new events and those in the IDC Reviewed Event Bulletin, we will automatically create various sets of master-event templates that will be used for global event location by the CTBTO and CEA.
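The core detector is a sliding normalized cross-correlation of a master-event template against continuous data; a sketch on synthetic noise with one buried event (signal shapes and noise levels are illustrative only):

```python
import numpy as np

def normalized_cc(template, trace):
    """Sliding normalized cross-correlation of a master-event template against
    a continuous trace; values near 1 flag candidate detections."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(trace) - m + 1)
    for k in range(len(cc)):
        w = trace[k:k + m]
        w = (w - w.mean()) / w.std()
        cc[k] = np.dot(t, w) / m
    return cc

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 6 * np.pi, 60)) * np.hanning(60)
trace = 0.3 * rng.normal(size=500)     # background noise
trace[300:360] += template             # event buried in the noise at sample 300
cc = normalized_cc(template, trace)
print(int(cc.argmax()))                # correlation peak marks the detection
```

Because the statistic is normalized, a weak repeat of a known event still produces a sharp peak well above the noise floor, which is what lowers the detection threshold relative to energy-based STA/LTA detectors.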
Landschoff, Jannes; Du Plessis, Anton; Griffiths, Charles L
2018-04-01
Along with the conventional deposition of physical types at natural history museums, the deposition of 3-dimensional (3D) image data has been proposed for rare and valuable museum specimens, such as irreplaceable type material. Micro computed tomography (μCT) scan data of 5 hermit crab species from South Africa, including rare specimens and type material, depicted the main identification characteristics of calcified body parts. However, low image contrast, especially in larger (>50 mm total length) specimens, did not allow sufficient 3D reconstruction of weakly calcified and fine characteristics, such as soft tissue of the pleon, mouthparts, gills, and setation. Reconstructions of soft tissue were sometimes possible, depending on individual sample and scanning characteristics. The raw data of seven scans are publicly available for download from the GigaDB repository. Calcified body parts visualized from μCT data can aid taxonomic validation and provide additional, virtual deposition of rare specimens. The use of a nondestructive, nonstaining μCT approach for taxonomy and for reconstructions of soft tissue structures, microscopic spines, and setae depends on species characteristics. Within these limitations, the presented dataset can be used for future morphological studies. Moreover, our virtual specimens will be most valuable to taxonomists, who can download a digital avatar for 3D examination. Simultaneously, in the event of physical damage to or loss of the original physical specimen, this dataset serves as a vital insurance policy.
NASA Astrophysics Data System (ADS)
Newman, R. L.
2002-12-01
How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets.
In the near future, with the rapid growth in networking speeds in the US, it will be possible for Earth Science departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost GeoWall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.
Mutual-information-based registration for ultrasound and CT datasets
NASA Astrophysics Data System (ADS)
Firle, Evelyn A.; Wesarg, Stefan; Dold, Christian
2004-05-01
In many applications for minimally invasive surgery, the acquisition of intra-operative medical images is helpful if not absolutely necessary. Especially for brachytherapy, imaging is critically important to the safe delivery of the therapy. Modern computed tomography (CT) and magnetic resonance (MR) scanners allow minimally invasive procedures to be performed under direct imaging guidance. However, conventional scanners do not have real-time imaging capability and are expensive technologies requiring a special facility. Ultrasound (U/S) is much cheaper and one of the most flexible imaging modalities. It can be moved to the application room as required, and the physician sees what is happening as it occurs. Nevertheless, these 3D intra-operative U/S images may be easier to interpret if they are used in combination with less noisy preoperative data such as CT. The purpose of our current investigation is to develop a registration tool for automatically combining pre-operative CT volumes with intra-operatively acquired 3D U/S datasets. The applied alignment procedure is based on the information-theoretic approach of maximizing the mutual information of two arbitrary datasets from different modalities. Since the CT datasets include a much bigger field of view, we introduced a bounding box to narrow down the region of interest within the CT dataset. We conducted a phantom experiment using a CIRS Model 53 U/S Prostate Training Phantom to evaluate the feasibility and accuracy of the proposed method.
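The registration criterion, mutual information, can be estimated from the joint intensity histogram of the two datasets; a sketch showing that misalignment lowers MI (synthetic stand-in images, not CT/U/S data):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) of two images, estimated from the joint
    intensity histogram; higher means the intensities co-vary more tightly."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)          # marginal of image a
    py = p.sum(axis=0, keepdims=True)          # marginal of image b
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
ct = rng.random((64, 64))                 # stand-in "CT" image
us = np.sqrt(ct)                          # nonlinear but deterministic mapping
shifted = np.roll(us, 5, axis=0)          # misaligned version of the same image
print(mutual_information(ct, us) > mutual_information(ct, shifted))  # → True
```

Because MI only asks that intensities correspond, not that they be equal, the criterion tolerates the very different contrast mechanisms of CT and ultrasound, which is why it is the standard choice for multimodal registration.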
NASA Astrophysics Data System (ADS)
Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-10-01
Timely and accurate information on the condition and structural changes of urban trees helps decision makers better appreciate urban ecosystems and their numerous values, which is critical to building strategies for sustainable development. The conventional techniques for extracting tree features, such as ground surveying and interpretation of aerial photography, are constrained by labour-intensive field work, high cost, and the influence of weather conditions and topographical cover; these constraints can be overcome by integrating airborne LiDAR with very high resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne LiDAR and multispectral digital image datasets over Istanbul, Turkey. The scheme detects and extracts shadow-free vegetation features based on the spectral properties of the digital images, using shadow index and NDVI techniques, and then automatically extracts 3D information about those vegetation features from the integrated processing of the shadow-free vegetation image and the LiDAR point cloud. The developed algorithms show promising results as an automated and cost-effective approach to estimating and delineating 3D information on urban trees. The research also shows that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.
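The NDVI step mentioned above has a standard closed form, NDVI = (NIR - Red) / (NIR + Red), which separates vegetation (strong near-infrared reflectance) from bare surfaces. A minimal per-pixel sketch; the reflectance values are invented for illustration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.
    Returns a value in [-1, 1]; healthy vegetation scores high."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on empty pixels
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances: vegetation reflects strongly in NIR.
print(round(ndvi(0.50, 0.08), 2))  # vegetated pixel -> 0.72
print(round(ndvi(0.20, 0.18), 2))  # bare soil -> 0.05
```

Thresholding such a per-pixel index yields the vegetation mask that is then intersected with the LiDAR point cloud.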
Locality-Constrained Discriminative Learning and Coding
2015-06-12
female Caucasian subjects (shown in Fig. 3(d)). There are four makeup states: (a) no makeup; (b) lipstick only; (c) eye makeup only; and (d) full makeup, including lipstick, foundation, blush and eye makeup. Hence, the assembled dataset contains a total of 204 images, four images per subject. We randomly
Evaluation of Meteorology Data for MOPITT Operational Processing
NASA Astrophysics Data System (ADS)
Ziskin, D.; Deeter, M. N.; Worden, H. M.; Mao, D.; Dean, V.
2015-12-01
Measurements Of Pollution In The Troposphere[1] (MOPITT) is an instrument flying aboard NASA's Terra satellite[2]. It measures CO using correlated spectroscopy[3]. As part of its processing it uses surface temperature, an atmospheric temperature profile and a water vapor profile from analysis. Since there are many analysis products available (e.g., GMAO, NCEP, ECMWF) that meet MOPITT's operational requirements, the question arises as to which product is most apt. There is a collection of "validation data" against which MOPITT compares its CO retrievals[4]. The validation dataset was acquired from in situ air samples taken by aircraft at a series of altitudes. We can run our processing system in "validation mode", which processes the satellite data only for the days on which validation data exist and for a spatial subset corresponding to the region where the validation data were collected. We will run the MOPITT retrievals in validation mode separately using each variety of analysis data. We will create a cost function that provides a scalar estimate of the retrieved CO profile error relative to the validation dataset, which is assumed to be "the truth". The retrieval errors for each of the input datasets will be compared to each other to provide insight into the best choice for use in operational MOPITT processing. [1] Drummond, J.R., "Measurements of Pollution in the Troposphere (MOPITT)," in The Use of EOS for Studies of Atmospheric Physics, J. C. Gille, G. Visconti, eds. (North Holland, Amsterdam), pp. 77-101, 1992. [2] 1999 EOS Reference Handbook: A Guide to NASA's Earth Science Enterprise and the Earth Observing System; Eds. Michael D. King and Reynold Greenstone; NASA, Greenbelt, MD, 1999. [3] Drummond, J.R., G. P. Brasseur, G. R. Davis, J. C. Gille, J. C. McConnell, G. D. Pesket, H. G. Reichle, N. Roulet, MOPITT Mission Description Document (Department of Physics, University of Toronto, Toronto, Ontario, Canada M5S 1A7), 1993. [4] Deeter, M.
N., Martínez-Alonso, S., Edwards, D. P., Emmons, L. K., Gille, J. C., Worden, H. M., Sweeney, C., Pittman, J. V., Daube, B. C., and Wofsy, S. C.: The MOPITT Version 6 product: algorithm enhancements and validation, Atmos. Meas. Tech., 7, 3623-3632, doi:10.5194/amt-7-3623-2014, 2014.
Lae, Marick; Moarii, Matahi; Sadacca, Benjamin; Pinheiro, Alice; Galliot, Marion; Abecassis, Judith; Laurent, Cecile; Reyal, Fabien
2016-01-01
Introduction HER2-positive breast cancer (BC) is a heterogeneous group of aggressive breast cancers, the prognosis of which has greatly improved since the introduction of treatments targeting HER2. However, these tumors may display intrinsic or acquired resistance to treatment, and classifiers of HER2-positive tumors are required to improve the prediction of prognosis and to develop novel therapeutic interventions. Methods We analyzed 2893 primary human breast cancer samples from 21 publicly available datasets and developed a six-metagene signature on a training set of 448 HER2-positive BC. We then used external public datasets to assess the ability of these metagenes to predict the response to chemotherapy (Ignatiadis dataset), and prognosis (METABRIC dataset). Results We identified a six-metagene signature (138 genes) containing metagenes enriched in different gene ontologies. The gene clusters were named as follows: Immunity, Tumor suppressors/proliferation, Interferon, Signal transduction, Hormone/survival and Matrix clusters. In all datasets, the Immunity metagene was less strongly expressed in ER-positive than in ER-negative tumors, and was inversely correlated with the Hormone/survival metagene. Within the signature, multivariate analyses showed that strong expression of the “Immunity” metagene was associated with higher pCR rates after NAC (OR = 3.71 [1.28–11.91], p = 0.019) than weak expression, and with a better prognosis in HER2-positive/ER-negative breast cancers (HR = 0.58 [0.36–0.94], p = 0.026). Immunity metagene expression was associated with the presence of tumor-infiltrating lymphocytes (TILs). Conclusion The identification of a predictive and prognostic immune module in HER2-positive BC confirms the need for clinical testing of immune checkpoint modulators and vaccines for this specific subtype. The inverse correlation between the Immunity and hormone pathways opens research perspectives and deserves further investigation. PMID:28005906
Statistical analysis of co-occurrence patterns in microbial presence-absence datasets.
Mainali, Kumar P; Bewick, Sharon; Thielen, Peter; Mehoke, Thomas; Breitwieser, Florian P; Paudel, Shishir; Adhikari, Arjun; Wolfe, Joshua; Slud, Eric V; Karig, David; Fagan, William F
2017-01-01
Drawing on a long history in macroecology, correlation analysis of microbiome datasets is becoming a common practice for identifying relationships or shared ecological niches among bacterial taxa. However, many of the statistical issues that plague such analyses in macroscale communities remain unresolved for microbial communities. Here, we discuss problems in the analysis of microbial species correlations based on presence-absence data. We focus on presence-absence data because this information is more readily obtainable from sequencing studies, especially for whole-genome sequencing, where abundance estimation is still in its infancy. First, we show how Pearson's correlation coefficient (r) and Jaccard's index (J)-two of the most common metrics for correlation analysis of presence-absence data-can contradict each other when applied to a typical microbiome dataset. In our dataset, for example, 14% of species-pairs predicted to be significantly correlated by r were not predicted to be significantly correlated using J, while 37.4% of species-pairs predicted to be significantly correlated by J were not predicted to be significantly correlated using r. Mismatch was particularly common among species-pairs with at least one rare species (<10% prevalence), explaining why r and J might differ more strongly in microbiome datasets, where there are large numbers of rare taxa. Indeed 74% of all species-pairs in our study had at least one rare species. Next, we show how Pearson's correlation coefficient can result in artificial inflation of positive taxon relationships and how this is a particular problem for microbiome studies. We then illustrate how Jaccard's index of similarity (J) can yield improvements over Pearson's correlation coefficient. However, the standard null model for Jaccard's index is flawed, and thus introduces its own set of spurious conclusions. 
We thus identify a better null model based on a hypergeometric distribution, which appropriately corrects for species prevalence. This model is available from recent statistics literature, and can be used for evaluating the significance of any value of an empirically observed Jaccard's index. The resulting simple, yet effective method for handling correlation analysis of microbial presence-absence datasets provides a robust means of testing and finding relationships and/or shared environmental responses among microbial taxa.
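The three quantities the abstract compares are straightforward to compute for a pair of presence-absence vectors. A minimal sketch, with an invented ten-site example; the hypergeometric tail probability conditions on both species' prevalences, in the spirit of the null model proposed above:

```python
import math

def pearson_binary(x, y):
    """Pearson's r for two 0/1 presence-absence vectors (the phi coefficient)."""
    n = len(x)
    n1, m1 = sum(x), sum(y)
    n11 = sum(a & b for a, b in zip(x, y))  # joint presences
    denom = math.sqrt(n1 * (n - n1) * m1 * (n - m1))
    return (n * n11 - n1 * m1) / denom if denom else 0.0

def jaccard(x, y):
    """Jaccard's index: joint presences over the union of presences."""
    n11 = sum(a & b for a, b in zip(x, y))
    union = sum(a | b for a, b in zip(x, y))
    return n11 / union if union else 0.0

def hypergeom_pvalue(n, n1, m1, n11):
    """P(co-occurrences >= n11) under a hypergeometric null that fixes
    both species' prevalences (n1 and m1 presences among n sites)."""
    total = math.comb(n, m1)
    return sum(math.comb(n1, k) * math.comb(n - n1, m1 - k)
               for k in range(n11, min(n1, m1) + 1)) / total

# Two taxa observed across 10 sites, co-occurring in 4 of them:
x = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
y = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(round(pearson_binary(x, y), 3),
      round(jaccard(x, y), 3),
      round(hypergeom_pvalue(10, sum(x), sum(y), 4), 3))  # 0.6 0.667 0.103
```

Running both metrics with the hypergeometric p-value on every species pair makes the r-versus-J mismatches described above directly visible.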
Skeleton-based human action recognition using multiple sequence alignment
NASA Astrophysics Data System (ADS)
Ding, Wenwen; Liu, Kai; Cheng, Fei; Zhang, Jin; Li, YunSong
2015-05-01
Human action recognition and analysis has been an active research topic in computer vision for many years. This paper presents a method to represent human actions based on trajectories consisting of 3D joint positions. The method first decomposes an action into a sequence of meaningful atomic actions (actionlets), and then labels the actionlets with letters of the English alphabet according to their Davies-Bouldin index values. An action can therefore be represented as a sequence of actionlet symbols, which preserves the temporal order in which the actionlets occur. Finally, we classify actions by sequence comparison using a string-matching algorithm (Needleman-Wunsch). The effectiveness of the proposed method is evaluated on datasets captured with commodity depth cameras. Experiments on three challenging 3D action datasets show promising results.
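As a sketch of the string-matching step (not the authors' code), here is a textbook Needleman-Wunsch global alignment over actionlet symbol strings; the match/mismatch/gap scores and the example sequences are arbitrary:

```python
def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two symbol strings via the
    standard Needleman-Wunsch dynamic program."""
    rows, cols = len(s) + 1, len(t) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):            # aligning a prefix against gaps
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            score = match if s[i - 1] == t[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + score,  # (mis)match
                           dp[i - 1][j] + gap,        # gap in t
                           dp[i][j - 1] + gap)        # gap in s
    return dp[-1][-1]

# Hypothetical actionlet sequences: an action, a corrupted version of it,
# and a version with one inserted spurious actionlet.
print(needleman_wunsch("ABCD", "ABCD"))   # 4
print(needleman_wunsch("ABCD", "ABXD"))   # 2
print(needleman_wunsch("ABCD", "AXBCD"))  # 3
```

A nearest-neighbor classifier over these alignment scores then assigns each test sequence to the action class of its best-aligned training sequence.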
Three-dimensional changes in nose and upper lip volume after orthognathic surgery.
van Loon, B; van Heerbeek, N; Bierenbroodspot, F; Verhamme, L; Xi, T; de Koning, M J J; Ingels, K J A O; Bergé, S J; Maal, T J J
2015-01-01
Orthognathic surgery aims to improve both the function and facial appearance of the patient. Translation of the maxillomandibular complex for correction of malocclusion is always followed by changes to the covering soft tissues, especially the nose and lips. The purpose of this study was to evaluate the changes in the nasal region and upper lip due to orthognathic surgery using combined cone beam computed tomography (CBCT) and three-dimensional (3D) stereophotogrammetry datasets. Patients who underwent a Le Fort I osteotomy, with or without a bilateral sagittal split osteotomy, were included in this study. Pre- and postoperative documentation consisted of 3D stereophotogrammetry and CBCT scans. 3D measurements were performed on the combined datasets and analyzed. Anterior translation and clockwise pitching of the maxilla led to a significant volume increase in the lip. Cranial translation of the maxilla led to an increase in the alar width. The combination of CBCT DICOM data and 3D stereophotogrammetry proved to be useful in the 3D analysis of the maxillary hard tissue changes, as well as changes in the soft tissues. Measurements could be acquired and compared to investigate the influence of maxillary movement on the soft tissues of the nose and the upper lip. Copyright © 2014 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Cross-Domain Multi-View Object Retrieval via Multi-Scale Topic Models.
Hong, Richang; Hu, Zhenzhen; Wang, Ruxin; Wang, Meng; Tao, Dacheng
2016-09-27
The increasing number of 3D objects in various applications has increased the requirement for effective and efficient 3D object retrieval methods, which has attracted extensive research effort in recent years. Existing works mainly focus on how to extract features and conduct object matching. As applications grow, 3D objects increasingly come from different domains, and in such circumstances cross-domain object retrieval becomes more important. To address this issue, we propose a multi-view object retrieval method using multi-scale topic models in this paper. In our method, multiple views are first extracted from each object, and dense visual features are then extracted to represent each view. To represent the 3D object, multi-scale topic models are employed to extract the hidden relationships among these features with respect to varied topic numbers in the topic model. In this way, each object can be represented by a set of bag-of-topics vectors. To compare objects, we first conduct topic clustering for the basic topics from the two datasets, and then generate a common topic dictionary for the new representation. The two objects can then be aligned to the same common feature space for comparison. To evaluate the performance of the proposed method, experiments are conducted on two datasets. The 3D object retrieval experimental results and comparison with existing methods demonstrate the effectiveness of the proposed method.
Octree-based indexing for 3D pointclouds within an Oracle Spatial DBMS
NASA Astrophysics Data System (ADS)
Schön, Bianca; Mosa, Abu Saleh Mohammad; Laefer, Debra F.; Bertolotto, Michela
2013-02-01
A large proportion of today's digital datasets have a spatial component, the effective storage and management of which pose particular challenges, especially with light detection and ranging (LiDAR), where datasets of even small geographic areas may contain several hundred million points. While in the last decade 2.5-dimensional data were prevalent, true 3-dimensional data are increasingly commonplace via LiDAR. They have gained particular popularity for urban applications including generation of city-scale maps, baseline data for disaster management, and utility planning. Additionally, LiDAR is commonly used for flood plain identification, coastal-erosion tracking, and forest biomass mapping. Despite growing data availability, current spatial information systems do not provide suitable full support for the data's true 3D nature. Consequently, one system is needed to store the data and another for its processing, thereby necessitating format transformations. The work presented herein aims at a more cost-effective way of managing 3D LiDAR data that allows for storage and manipulation within a single system by enabling a new index within existing spatial database management technology. An implementation of an octree index for 3D LiDAR data atop Oracle Spatial 11g is presented, along with an evaluation showing up to an eight-fold improvement compared to the native Oracle R-tree index.
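A point octree of the kind indexed here can be sketched compactly. This toy version (unrelated to the Oracle Spatial implementation) splits a cubic cell into eight octants once it holds more than `cap` points; the capacity and sample points are arbitrary:

```python
class Octree:
    """Minimal point octree: a node stores points until it exceeds `cap`,
    then splits its cubic cell into eight child octants."""
    def __init__(self, center, half, cap=4):
        self.center, self.half, self.cap = center, half, cap
        self.points, self.children = [], None

    def _octant(self, p):
        # One bit per axis: set when the coordinate is >= the cell center.
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _split(self):
        h = self.half / 2.0
        cx, cy, cz = self.center
        self.children = [Octree((cx + (h if i & 1 else -h),
                                 cy + (h if i & 2 else -h),
                                 cz + (h if i & 4 else -h)), h, self.cap)
                         for i in range(8)]
        pts, self.points = self.points, []
        for p in pts:            # push stored points down to the children
            self.insert(p)

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.cap:
                self._split()
        else:
            self.children[self._octant(p)].insert(p)

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)

# Toy usage: a 20x20x20 cell around the origin, splitting after 2 points.
tree = Octree((0.0, 0.0, 0.0), 10.0, cap=2)
for p in [(1, 1, 1), (-3, 2, 0.5), (4, -4, 4), (1.5, 1.2, 0.9), (-1, -1, -1)]:
    tree.insert(p)
print(tree.count())  # 5
```

Range and nearest-neighbor queries then recurse only into the octants whose cells intersect the query region, which is what yields the speed-up over a flat scan.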
Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig
2018-05-11
Epicardial adipose tissue (EAT) volume derived from contrast-enhanced (CE) computed tomography (CT) scans is not well validated. We aim to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield unit (HU) cutpoints to equalize the two EAT volume estimates. The gold-standard threshold (-190 HU, -30 HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds: (-190 HU, -30 HU), (-190 HU, -15 HU), (-175 HU, -15 HU), and were analyzed with semi-automated 3D fat analysis software. Subsequently, we applied a threshold correction to (-190 HU, -30 HU) based on the mean difference in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on the EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated the NC-dataset standard by 8.2%-19.1%. Using our corrected threshold (-190 HU, -3 HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm³, Δ = 0.6 cm³, 0.3%, p = 0.374). Estimating EAT volume from contrast-enhanced CT scans using a corrected threshold of (-190 HU, -3 HU) provided excellent agreement with EAT volume from non-contrast CT scans using a standard threshold of (-190 HU, -30 HU). Copyright © 2018. Published by Elsevier B.V.
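The thresholding itself reduces to counting voxels whose attenuation falls inside the fat window and multiplying by the voxel volume. A minimal sketch (not the study's software; the ROI values and voxel size are invented):

```python
def eat_volume_cm3(voxels_hu, voxel_mm3, lo=-190, hi=-3):
    """Sum the volume of voxels inside the fat attenuation window.
    (-190, -3) HU is the contrast-corrected window from the study;
    pass hi=-30 for the non-contrast standard window."""
    n_fat = sum(1 for v in voxels_hu if lo <= v <= hi)
    return n_fat * voxel_mm3 / 1000.0  # mm^3 -> cm^3

# Toy ROI: 7 voxels of 2 mm^3 each, mixing fat, air and soft tissue.
roi = [-250, -100, -50, -20, -5, 0, 40]
print(eat_volume_cm3(roi, 2.0))          # corrected CE window -> 0.008
print(eat_volume_cm3(roi, 2.0, hi=-30))  # NC standard window  -> 0.004
```

The example shows the study's central point numerically: widening the upper cutoff from -30 HU to -3 HU admits the contrast-shifted fat voxels that the standard window misses.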
A seismic network to investigate the sedimentary hosted hydrothermal Lusi system
NASA Astrophysics Data System (ADS)
Javad Fallahi, Mohammad; Mazzini, Adriano; Lupi, Matteo; Obermann, Anne; Karyono, Karyono
2016-04-01
The 29th of May 2006 marked the beginning of the sedimentary-hosted hydrothermal Lusi system. During the last 10 years we have witnessed numerous alterations of the Lusi system's behavior that coincide with the frequent seismic and volcanic activity occurring in the region. In order to monitor the effect that the seismicity and the activity of the volcanic arc have on Lusi, we deployed an ad hoc seismic network. This temporary network consists of 10 broadband and 21 short-period stations and has been operating since January 2015 around the Arjuno-Welirang volcanic complex, along the Watukosek fault system and around Lusi, in the East Java basin. We exploit this dataset to investigate the surface wave and shear wave velocity structure of the upper crust beneath the Arjuno-Welirang-Lusi complex in the framework of the Lusi Lab project (ERC grant n° 308126). Rayleigh and Love waves travelling between each station pair are extracted by cross-correlating long time series of ambient noise data recorded at the stations. Group and phase velocity dispersion curves are obtained by time-frequency analysis of the cross-correlation functions, and are tomographically inverted to provide 2D velocity maps corresponding to different sampling depths. The 3D shear wave velocity structure is then obtained by inverting the group velocity maps.
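The ambient-noise step rests on plain cross-correlation: the lag of the correlation peak between two stations' traces estimates the inter-station wave travel time. A toy sketch with an idealized pulse, not the project's processing chain:

```python
def cross_correlate(a, b):
    """Full discrete cross-correlation of two equal-length traces.
    Returns the lags and the correlation value at each lag."""
    n = len(a)
    lags = list(range(-(n - 1), n))
    cc = [sum(a[i] * b[i + lag]
              for i in range(max(0, -lag), min(n, n - lag)))
          for lag in lags]
    return lags, cc

# Idealized example: a pulse seen at station A arrives 3 samples later at B.
a = [0, 0, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 0, 0]
lags, cc = cross_correlate(a, b)
print(lags[cc.index(max(cc))])  # peak at lag 3
```

In practice months of noise are correlated and stacked so that the coherent surface waves emerge from the random wavefield; the peak lag divided by the inter-station distance then gives the velocity fed into the dispersion analysis.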
NASA Astrophysics Data System (ADS)
Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra
2018-03-01
The present research proposes a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans, separating patients with an abnormal macula from normal candidates. The proposed method does not require denoising, segmentation, or retinal alignment to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, consisting of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets were used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured with a Topcon device. The second, publicly available set consists of 45 subjects, with 15 patients each in the age-related macular degeneration, DME, and normal classes, from a Heidelberg device. With the application of the algorithm on whole OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset1 as a two-class classification problem and 98.67% on dataset2 as a three-class classification task.
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, whether of a living being or a machine, to interpret the different visual information derived from two eyes/cameras for depth perception. From this perspective, ground-truth information about three-dimensional visual space, which is rarely available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
Rosskopf, Johannes; Müller, Hans-Peter; Dreyhaupt, Jens; Gorges, Martin; Ludolph, Albert C; Kassubek, Jan
2015-03-01
Diffusion tensor imaging (DTI) for assessing ALS-associated white matter alterations has still not reached the level of a neuroimaging biomarker. Since large-scale multicentre DTI studies in ALS may be hampered by differences in scanning protocols, an approach for pooling DTI data acquired with different protocols was investigated. Three hundred and nine datasets from 170 ALS patients and 139 controls were collected ex post facto from a monocentric database reflecting different scanning protocols. A 3D correction algorithm was introduced for a combined analysis of DTI metrics despite different acquisition protocols, with the focus on the CST as the tract correlate of ALS neuropathological stage 1. A homogeneous set of data was obtained by application of 3D correction matrices. Results showed that a fractional anisotropy (FA) threshold of 0.41 could be defined to discriminate ALS patients from controls (sensitivity/specificity, 74%/72%). For the remaining test sample, sensitivity/specificity values of 68%/74% were obtained. In conclusion, the objective was to merge data recorded with different DTI protocols via 3D correction matrices for analyses at the group level. These post-processing tools might facilitate analysis of large study samples in a multicentre setting for DTI analysis at the group level, to aid in establishing DTI as a non-invasive biomarker for ALS.
Araujo Júnior, Edward; Martinez, Luis Henrique; Simioni, Christiane; Martins, Wellington P; Nardozza, Luciano M; Moron, Antonio F
2012-09-01
To assess the fetal lumbosacral spine by three-dimensional (3D) ultrasonography using the volume contrast imaging (VCI) omni view method, and to compare the reproducibility and agreement of three different measurement techniques: standard mouse, high-definition mouse and pen-tablet. A comparative, prospective study of 40 pregnant women between 20 and 34+6 weeks of gestation was performed. 3D volume datasets of the fetal spine were acquired using a convex transabdominal transducer. The starting scan plane was the coronal section of the fetal lumbosacral spine using the VCI-C function. Omni view manual trace was selected, and a plane parallel to the fetal spine was drawn to include the region of interest. The intraclass correlation coefficient (ICC) was used for reproducibility analysis. The relative differences between the three techniques were compared by chi-square and Fisher tests. The pen-tablet showed the best reliability (ICC = 0.987). The relative proportion of differences was significantly higher for the pen-tablet (82.14%; p<0.01). In paired comparison, the relative difference was significantly greater for the pen-tablet (p<0.01). The pen-tablet proved to be the most reproducible and concordant method for measuring the vertebral body area of the fetal lumbosacral spine by 3D ultrasonography using VCI.
Yamada, Yuzo; Toritsuka, Yukiyoshi; Nakamura, Norimasa; Horibe, Shuji; Sugamoto, Kazuomi; Yoshikawa, Hideki; Shino, Konsei
2017-11-01
The concepts of lateral deviation and lateral inclination of the patella, characterized as shift and tilt, have been applied in combination to evaluate patellar malalignment in patients with patellar dislocation. It is not reasonable, however, to describe the 3-dimensional (3D) positional relation between the patella and the femur according to measurements made on 2-dimensional (2D) images. The current study sought to clarify the relation between lateral deviation and inclination of the patella in patients with recurrent dislocation of the patella (RDP) by redefining them via 3D computer models as 3D shift and 3D tilt. Descriptive laboratory study. Altogether, 60 knees from 56 patients with RDP and 15 knees from 10 healthy volunteers were evaluated. 3D shift and tilt of the patella were analyzed with 3D computer models created by magnetic resonance imaging scans obtained at 10° intervals of knee flexion (0°-50°). 3D shift was defined as the spatial distance between the patellar reference point and the midsagittal plane of the femur; it is expressed as a percentage of the interepicondylar width. 3D tilt was defined as the spatial angle between the patellar reference plane and the transepicondylar axis. Correlations between the 2 parameters were assessed with the Pearson correlation coefficient. The patients' mean Pearson correlation coefficient was 0.895 ± 0.186 (range, -0.073 to 0.997; median, 0.965). In all, 56 knees (93%) had coefficients >0.7 (strong correlation); 1 knee (2%), >0.4 (moderate correlation); 2 knees (3%), >0.2 (weak correlation); and 1 knee (2%), <0.2 (no correlation). The mean correlation coefficient of the healthy volunteers was 0.645 ± 0.448 (range, -0.445 to 0.982; median, 0.834). A statistically significant difference was found in the distribution of the correlation coefficients between the patients and the healthy volunteers ( P = .0034). 
When distribution of the correlation coefficients obtained by the 3D analyses was compared with that by the 2D (conventional) analyses, based on the bisect offset index and patellar tilt angle, the 3D analyses showed statistically higher correlations between the lateral deviation and inclination of the patella ( P < .01). 3D shift and 3D tilt of the patella were moderately or strongly correlated in 95% of patients with RDP at 0° to 50° of knee flexion. It is not always necessary to use both parameters when evaluating patellar alignment, at least for knees with RDP at 0° to 50° of flexion. Such a description may enable surgeons to describe patellar alignment more simply, leading to a better, easier understanding of the characteristics of each patient with RDP.
Deep 3D convolution neural network for CT brain hemorrhage classification
NASA Astrophysics Data System (ADS)
Jnawali, Kamal; Arbabshirani, Mohammad R.; Rao, Navalgund; Patel, Alpen A.
2018-02-01
Intracranial hemorrhage is a critical condition with a high mortality rate that is typically diagnosed based on head computed tomography (CT) images. Deep learning algorithms, in particular convolutional neural networks (CNN), are becoming the methodology of choice in medical image analysis for a variety of applications such as computer-aided diagnosis and segmentation. In this study, we propose a fully automated deep learning framework which learns to detect brain hemorrhage based on cross-sectional CT images. The dataset for this work consists of 40,367 3D head CT studies (over 1.5 million 2D images) acquired retrospectively over a decade from multiple radiology facilities at Geisinger Health System. The proposed algorithm first extracts features using a 3D CNN and then detects brain hemorrhage using the logistic function as the last layer of the network. Finally, we created an ensemble of three different 3D CNN architectures to improve the classification accuracy. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve of the ensemble of the three architectures was 0.87. These results are very promising considering that the head CT studies were not controlled for slice thickness, scanner type, study protocol or any other settings. Moreover, the proposed algorithm reliably detected various types of hemorrhage within the skull. This work is one of the first applications of 3D CNNs trained on a large dataset of cross-sectional medical images for detection of a critical radiological condition.
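The ensembling and AUC evaluation can be illustrated independently of any CNN: average the per-study probabilities of several models, then score the averages with the rank-sum form of the AUC. The labels and scores below are invented for illustration:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble(*model_scores):
    """Average per-study probabilities across several models."""
    return [sum(s) / len(s) for s in zip(*model_scores)]

labels = [1, 1, 0, 0, 1, 0]           # 1 = hemorrhage present
m1 = [0.9, 0.30, 0.4, 0.3, 0.7, 0.5]  # each model's score per study
m2 = [0.8, 0.30, 0.5, 0.2, 0.9, 0.3]
m3 = [0.7, 0.45, 0.3, 0.4, 0.6, 0.4]
print(round(auc(labels, ensemble(m1, m2, m3)), 3))  # 0.778
```

Simple probability averaging is one of several ways to combine the three architectures; the abstract does not specify which combination rule was used.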
Magnetotelluric characterization of the northern margin of the Yilgarn Craton (Western Australia)
NASA Astrophysics Data System (ADS)
Piña-Varas, Perla; Dentith, Michael
2017-04-01
The northern margin of the Yilgarn Craton (Western Australia) was deformed during the convergence and collision with the Pilbara Craton and the intervening Glenburgh Terrane that created the Capricorn Orogen. The Yilgarn Craton is one of the most intensively mineralised areas of continental crust, with world-class deposits of gold and nickel. However, the region to its north has surprisingly few deposits. Cratonic margins are considered to be key indicators of prospectivity at a regional scale. The northern limit of the Yilgarn Craton within the Capricorn Orogen is not well resolved to date because of overlying Proterozoic sedimentary basins. We present here some of the results of an extensive magnetotelluric (MT) study being performed in the area. This study is a component of a large multi-disciplinary geoscience project on the 'Distal Footprints of Giant Ore Systems' in the Capricorn Orogen. The MT dataset consists of a total of 240 broadband magnetotelluric stations (BBMT) and 84 long-period stations (LMT). Analysis of the dataset reveals clear 3-D geoelectrical behaviour and extreme complexity for most of the sites, including a very large number of sites with out-of-quadrant phases at long periods. 3-D inverse modelling of the MT data shows high-resistivity Archean units and low-resistivity Paleoproterozoic basins, including very low resistivity structures at depth. These strong resistivity contrasts allow us to successfully map the northern margin of the Yilgarn Craton beneath basin cover, as well as to identify major lateral conductivity changes in the deep crust suggestive of different tectonic blocks. Upper crustal conductive zones can be correlated with faults on seismic reflection data. Our results suggest MT surveys are a useful tool for regional-scale exploration in the study area and in areas of thick cover in general.
Potential Applications of Remote Sensing Precipitation Data on Urban Stormwater Modeling
NASA Astrophysics Data System (ADS)
Maggioni, V.; Tarantola, R.; Ferreira, C.
2014-12-01
Although stormwater modeling is widely used to plan, manage and operate stormwater systems in the urban environment, accuracy in model development and calibration is still problematic. Precipitation is the major forcing of stormwater modeling and one of the most important variables for accurate representation of the water cycle in urban areas. However, rainfall data at adequate temporal and spatial scales are scarce. Here we investigate the potential to apply satellite precipitation products to small-scale urban watersheds, with a focus on real-time data for operational use and historical data for model calibration and planning. We present a case study in Northern Virginia, part of the Washington, D.C. metropolitan region. We compare several rainfall datasets from satellites, radar and rain gauges during 2002-2008, using two multi-satellite precipitation products. The first one is the NASA TRMM TMPA at daily/0.25° time/space resolution, which is available in two forms: 3B42-Real Time and 3B42-Version 7, where the latter is a post-processed product corrected with ground-based observations. The second one is the NOAA CMORPH at 3 hr/0.25° time/space resolution. The NOAA Climate Prediction Center (CPC) data and the NCEP Stage IV radar-based product are used as reference datasets for TMPA and CMORPH, respectively. Statistical analyses are conducted to compare these datasets: correlation coefficient, RMSE, bias, and probability of correct no-rain detection and of false alarm were computed, with a focus on Fairfax County, VA. Preliminary results show that the TMPA products outperform CMORPH when compared to rain gauges and radar data over the county. Moreover, no appreciable difference is detected between TMPA-V7 and TMPA-RT, which demonstrates that real-time data could be used over the urban watershed with results that are comparable to the adjusted product. 
Analyses are under way to investigate higher temporal resolutions and to include a comparison with the Fairfax County rain gauge data. Future work will also evaluate the impacts of different precipitation datasets on stormwater runoff for Fairfax County, using the EPA-SWMM5 storm water model.
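As a sketch of the verification statistics named above (correlation, RMSE, bias, probability of correct no-rain detection, and false-alarm ratio), the following pure-Python example computes them from a hypothetical pair of daily satellite and gauge series; the 0.1 mm rain/no-rain threshold and the data are illustrative assumptions, not values from the study.

```python
import math

def verify(sat, gauge, thresh=0.1):
    """Continuous and categorical skill scores for a satellite rain
    estimate against gauge observations (hypothetical daily series)."""
    n = len(sat)
    mean_s = sum(sat) / n
    mean_g = sum(gauge) / n
    cov = sum((s - mean_s) * (g - mean_g) for s, g in zip(sat, gauge))
    var_s = sum((s - mean_s) ** 2 for s in sat)
    var_g = sum((g - mean_g) ** 2 for g in gauge)
    corr = cov / math.sqrt(var_s * var_g)
    rmse = math.sqrt(sum((s - g) ** 2 for s, g in zip(sat, gauge)) / n)
    bias = mean_s - mean_g
    # Categorical scores from a 2x2 contingency table at a rain/no-rain threshold
    hits     = sum(1 for s, g in zip(sat, gauge) if s >= thresh and g >= thresh)
    false_al = sum(1 for s, g in zip(sat, gauge) if s >= thresh and g < thresh)
    corr_neg = sum(1 for s, g in zip(sat, gauge) if s < thresh and g < thresh)
    p_norain = corr_neg / (corr_neg + false_al)   # correct no-rain detection
    far = false_al / (hits + false_al)            # false-alarm ratio
    return {"r": corr, "rmse": rmse, "bias": bias,
            "p_norain": p_norain, "far": far}

# Hypothetical daily rainfall (mm): gauge reference vs satellite estimate
gauge = [0.0, 5.2, 0.0, 12.1, 0.0, 3.3, 0.0, 0.0]
sat   = [0.0, 4.0, 1.5, 10.0, 0.0, 2.8, 0.0, 0.3]
stats = verify(sat, gauge)
```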
Orientation-independent measures of ground motion
Boore, D.M.; Watson-Lamprey, Jennie; Abrahamson, N.A.
2006-01-01
The geometric mean of the response spectra for two orthogonal horizontal components of motion, commonly used as the response variable in predictions of strong ground motion, depends on the orientation of the sensors as installed in the field. This means that the measure of ground-motion intensity could differ for the same actual ground motion. This dependence on sensor orientation is most pronounced for strongly correlated motion (the extreme example being linearly polarized motion), such as often occurs at periods of 1 sec or longer. We propose two new measures of the geometric mean, GMRotDpp and GMRotIpp, that are independent of the sensor orientations. Both are based on a set of geometric means computed from the as-recorded orthogonal horizontal motions rotated through all possible non-redundant rotation angles. GMRotDpp is determined as the ppth percentile of the set of geometric means for a given oscillator period. For example, GMRotD00, GMRotD50, and GMRotD100 correspond to the minimum, median, and maximum values, respectively. The rotations that lead to GMRotDpp depend on period, whereas a single period-independent rotation is used for GMRotIpp, the angle being chosen to minimize the spread of the rotation-dependent geometric mean (normalized by GMRotDpp) over the usable range of oscillator periods. GMRotI50 is the ground-motion intensity measure being used in the development of new ground-motion prediction equations by the Pacific Earthquake Engineering Research Center Next Generation Attenuation project. Comparisons with as-recorded geometric means for a large dataset show that the new measures are systematically larger than the geometric-mean response spectra using the as-recorded values of ground acceleration, but only by a small amount (less than 3%). The theoretical advantage of the new measures is that they remove sensor orientation as a contributor to aleatory uncertainty. 
Whether the reduction is of practical significance awaits detailed studies of large datasets. A preliminary analysis contained in a companion article by Beyer and Bommer finds that the reduction is small-to-nonexistent for equations based on a wide range of magnitudes and distances. The results of Beyer and Bommer do suggest, however, that there is an increasing reduction as period increases. Whether the reduction increases with other subdivisions of the dataset for which strongly correlated motions might be expected (e.g., pulselike motions close to faults) awaits further analysis.
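The GMRotDpp construction described above can be sketched as follows; for brevity the intensity measure here is the peak amplitude of the rotated record rather than the oscillator response spectrum at each period, and the input records are synthetic.

```python
import math

def gmrotd(acc1, acc2, pp):
    """GMRotDpp-style measure: rotate two orthogonal horizontal records
    through all non-redundant angles (0..89 deg), take the geometric mean
    of an intensity measure (peak amplitude here, standing in for the
    oscillator response spectrum), and return the pp-th percentile."""
    gms = []
    for deg in range(90):
        th = math.radians(deg)
        r1 = [a * math.cos(th) + b * math.sin(th) for a, b in zip(acc1, acc2)]
        r2 = [-a * math.sin(th) + b * math.cos(th) for a, b in zip(acc1, acc2)]
        peak1 = max(abs(x) for x in r1)
        peak2 = max(abs(x) for x in r2)
        gms.append(math.sqrt(peak1 * peak2))
    gms.sort()
    idx = min(len(gms) - 1, int(round(pp / 100 * (len(gms) - 1))))
    return gms[idx]

# Two hypothetical horizontal components (partially correlated sinusoids)
t = [i * 0.01 for i in range(500)]
ew = [math.sin(2 * math.pi * 1.0 * x) for x in t]
ns = [0.5 * math.sin(2 * math.pi * 1.0 * x + 0.7) for x in t]
gm50 = gmrotd(ew, ns, 50)    # median over rotations (GMRotD50-like)
gm100 = gmrotd(ew, ns, 100)  # maximum over rotations (GMRotD100-like)
```

By construction the result no longer depends on which way the two sensors happened to be oriented, since all rotations are scanned.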
Assessment of correlations and models for the prediction of CHF in water subcooled flow boiling
NASA Astrophysics Data System (ADS)
Celata, G. P.; Cumo, M.; Mariani, A.
1994-01-01
The present paper provides an analysis of available correlations and models for the prediction of the Critical Heat Flux (CHF) in subcooled flow boiling in the range of interest for fusion-reactor thermal-hydraulic conditions, i.e. high inlet liquid subcooling and velocity and small channel diameter and length. The aim of the study was to establish the limits of validity of present predictive tools (most of them proposed with reference to light water reactor (LWR) thermal-hydraulic studies) in the above conditions. The reference dataset represents almost all available data (1865 data points), covering wide ranges of operating conditions in the frame of present interest (0.1 < p < 8.4 MPa; 0.3 < D < 25.4 mm; 0.1 < L < 0.61 m; 2 < G < 90.0 Mg/(m² s); 90 < ΔT(sub,in) < 230 K). Among the tens of predictive tools available in the literature, four correlations (Levy, Westinghouse, modified-Tong and Tong-75) and three models (Weisman and Ileslamlou, Lee and Mudawar, and Katto) were selected. The modified-Tong correlation and the Katto model seem to be reliable predictive tools for the calculation of the CHF in subcooled flow boiling.
Functional CAR models for large spatially correlated functional datasets.
Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S
2016-01-01
We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.
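A minimal sketch of the scalar CAR building block that the paper extends to the functional setting, assuming a proper CAR prior with precision Q = τ(D − ρW) on a hypothetical four-unit lattice; this is the standard spatial prior, not the authors' functional model.

```python
import numpy as np

def car_precision(W, rho, tau):
    """Precision matrix of a proper CAR prior: Q = tau * (D - rho * W),
    where W is a symmetric 0/1 adjacency matrix of the areal units and
    D = diag(neighbor counts)."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Four areal units on a line: each unit is a neighbor of the adjacent ones
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = car_precision(W, rho=0.9, tau=2.0)
cov = np.linalg.inv(Q)   # implied spatial covariance among the units
```

In the functional model the spatial parameters (ρ, τ) vary across the functional domain; here they are a single pair for illustration.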
2008-09-01
improved resolution for shallow geologic structures. Jointly inverting these datasets with seismic body wave (S) travel times provides additional…constraints on the shallow structure and an enhanced 3D shear wave model for our study area in western China. 2008 Monitoring Research Review…for much of Eurasia, although the Arabian Shield and Arctic are less well recovered. The upper velocity gradient was tested for 10-degree cells
Face recognition using 3D facial shape and color map information: comparison and combination
NASA Astrophysics Data System (ADS)
Godil, Afzal; Ressler, Sandy; Grother, Patrick
2004-08-01
In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
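A toy version of the PCA-based recognition with score-level fusion described above, using random features as stand-ins for the CAESAR 3D-shape and color-map data; the min-max score normalization and sum-rule fusion are common fusion choices, not necessarily the exact ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_match_scores(gallery, probe, n_comp=5):
    """Project gallery and probe into a PCA subspace learned from the
    gallery and return negative Euclidean distances as match scores."""
    mean = gallery.mean(axis=0)
    Xc = gallery - mean
    # Principal axes from the SVD of the centered gallery matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp]
    g = Xc @ P.T
    p = (probe - mean) @ P.T
    return -np.linalg.norm(g - p, axis=1)

# Hypothetical features: 10 subjects, two modalities (shape and color map)
shape = rng.normal(size=(10, 50))
color = rng.normal(size=(10, 80))
probe_id = 3   # probe is subject 3 with a little measurement noise
s_shape = pca_match_scores(shape, shape[probe_id] + 0.01 * rng.normal(size=50))
s_color = pca_match_scores(color, color[probe_id] + 0.01 * rng.normal(size=80))

def minmax(s):
    return (s - s.min()) / (s.max() - s.min())

# Score-level fusion: sum of min-max normalized scores per modality
fused = minmax(s_shape) + minmax(s_color)
best = int(np.argmax(fused))
```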
2D/3D fetal cardiac dataset segmentation using a deformable model.
Dindoyal, Irving; Lambrou, Tryphon; Deng, Jing; Todd-Pokropek, Andrew
2011-07-01
To segment the fetal heart in order to facilitate the 3D assessment of cardiac function and structure. Ultrasound acquisition typically results in drop-out artifacts of the chamber walls. We outline a level set deformable model to automatically delineate the small fetal cardiac chambers. The level set is penalized from growing into an adjacent cardiac compartment using a novel collision detection term. The region-based model allows simultaneous segmentation of all four cardiac chambers from a user-defined seed point placed in each chamber. The segmented boundaries are automatically penalized from intersecting at walls with signal dropout. Root mean square errors of the perpendicular distances between the algorithm's delineation and manual tracings are within 2 mm, which is less than 10% of the length of a typical fetal heart. The ejection fractions were determined from the 3D datasets. We validate the algorithm using a physical phantom and obtain volumes that are comparable to those from physically determined means. The algorithm segments volumes with an error of within 13% as determined using a physical phantom. Our original work in fetal cardiac segmentation compares automatic and manual tracings to a physical phantom and also measures inter-observer variation.
Lagorce, David; Pencheva, Tania; Villoutreix, Bruno O; Miteva, Maria A
2009-11-13
Discovery of new bioactive molecules that could enter drug discovery programs or that could serve as chemical probes is a very complex and costly endeavor. Structure-based and ligand-based in silico screening approaches are nowadays extensively used to complement experimental screening approaches, in order to increase the effectiveness of the process and to facilitate the screening of thousands or millions of small molecules against a biomolecular target. Both in silico screening methods require as input a suitable chemical compound collection, and most often the 3D structures of the small molecules have to be generated, since compounds are usually delivered in 1D SMILES, CANSMILES or 2D SDF formats. Here, we describe the new open source program DG-AMMOS, which allows the generation of 3D conformations of small molecules using Distance Geometry and their energy minimization via Automated Molecular Mechanics Optimization. The program is validated on the Astex dataset, the ChemBridge Diversity database and on a number of small molecules with known crystal structures extracted from the Cambridge Structural Database. A comparison with the free program Balloon and the well-known commercial program Omega, both of which generate 3D structures of small molecules, is carried out. The results show that the new free program DG-AMMOS is a very efficient 3D structure generator engine. DG-AMMOS provides fast, automated and reliable access to the generation of 3D conformations of small molecules and facilitates the preparation of a compound collection prior to high-throughput virtual screening computations. The validation of DG-AMMOS on several different datasets proves that the generated structures are generally of equal quality, or sometimes better, than structures obtained by the other tested methods.
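The distance-geometry idea at the core of such 3D generators can be illustrated with classical multidimensional scaling, which recovers coordinates exactly from a Euclidean distance matrix; this is a conceptual sketch of the embedding step only, not the DG-AMMOS implementation.

```python
import numpy as np

def embed_from_distances(D, dim=3):
    """Classical multidimensional scaling: recover coordinates (up to
    rotation/translation) from a matrix of pairwise distances, the core
    idea behind distance-geometry 3D structure generation."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centered coords
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]      # top `dim` eigenpairs
    L = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * L

# Pairwise distances of a unit regular tetrahedron (e.g. 4 "atoms")
D = np.ones((4, 4)) - np.eye(4)
X = embed_from_distances(D)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

A real generator would start from distance *bounds* derived from chemistry and follow the embedding with force-field minimization, as DG-AMMOS does.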
Fang, Chi-hua; Lu, Chao-min; Huang, Yan-peng; Li, Xiao-feng; Fan, Ying-fang; Yang, Jian; Xiang, Nan; Pan, Jia-hui
2009-04-01
To study the clinical application of digital medicine in operations for primary liver cancer. The patients (n=11) with primary hepatic carcinoma treated between February and July 2008, including 9 cases of hepatocellular carcinoma and 2 cases of cholangiocellular carcinoma, were scanned using 64-slice helical computed tomography (CT) and the datasets were collected. Segmentation and three-dimensional (3D) reconstruction of the CT images were carried out with a medical image processing system developed in-house, and the 3D models were imported into the FreeForm Modeling System for smoothing. Hepatectomy for hepatoma and catheter implantation were then simulated with force-feedback equipment (PHANToM). Finally, the 3D models and the results of the simulated surgery were used to choose the mode of operation and were compared with the findings during the operation. The reconstructed models were true to life, their spatial disposition and interrelations were shown clearly, and the blood supply of the primary liver cancer could be assessed easily. In the simulation surgery system, the process of virtual partial hepatectomy and catheter implantation performed on the 3D models with the PHANToM was consistent with the clinical course of surgery, and force feedback gave the simulated operation a lifelike feel. Digital medicine helped clarify the relationship between the primary liver cancer and the intrahepatic vasculature, favouring complete resection of the tumour with more liver volume preserved. It can improve the surgical outcome, decrease surgical risk and reduce complications by allowing a visualized rehearsal of the operation before surgery.
Itteboina, Ramesh; Ballu, Srilata; Sivan, Sree Kanth; Manga, Vijjulatha
2017-10-01
Janus kinase 1 (JAK 1) belongs to the JAK family of intracellular nonreceptor tyrosine kinases. The JAK-signal transducer and activator of transcription (JAK-STAT) pathway mediates signaling by cytokines, which control survival, proliferation and differentiation of a variety of cells. Three-dimensional quantitative structure-activity relationship (3D-QSAR), molecular docking and molecular dynamics (MD) studies were carried out on a dataset of JAK 1 inhibitors. Ligands were constructed and docked into the active site of the protein using GLIDE 5.6. The best docked poses were selected after analysis for further 3D-QSAR analysis using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methodology. Employing 60 molecules in the training set, 3D-QSAR models were generated that showed good statistical reliability, which is clearly observed in terms of r²ncv and q²loo values. The predictive ability of these models was determined using a test set of 25 molecules that gave acceptable predictive correlation (r²pred) values. The key amino acid residues were identified by means of molecular docking, and the stability and rationality of the derived molecular conformations were also validated by MD simulation. The good consonance between the docking results and the CoMFA/CoMSIA contour maps provides helpful clues for the rational modification of molecules in order to design more efficient JAK 1 inhibitors. The developed models are expected to provide some direction for the further synthesis of highly effective JAK 1 inhibitors.
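The q²loo statistic used to judge QSAR model reliability can be sketched with a leave-one-out cross-validation loop; a one-descriptor least-squares model stands in for the PLS models actually used with CoMFA/CoMSIA fields, and the descriptor/activity data are hypothetical.

```python
def q2_loo(x, y):
    """Leave-one-out cross-validated q2: refit the model with each sample
    held out, accumulate the squared prediction errors (PRESS), and
    compare against the total variance of the activities."""
    n = len(x)
    press = 0.0
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        b = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / \
            sum((u - mx) ** 2 for u in xs)
        a = my - b * mx
        press += (y[i] - (a + b * x[i])) ** 2
    ybar = sum(y) / n
    return 1 - press / sum((v - ybar) ** 2 for v in y)

# Hypothetical descriptor values vs activities (e.g. pIC50)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
q2 = q2_loo(x, y)
```

A q² well above 0.5 is the usual rule of thumb for an internally predictive QSAR model.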
NASA Astrophysics Data System (ADS)
Lashkari, A.; Salehnia, N.; Asadi, S.; Paymard, P.; Zare, H.; Bannayan, M.
2018-05-01
The accuracy of daily output of satellite and reanalysis data is quite crucial for crop yield prediction. This study evaluated the performance of the APHRODITE (Asian Precipitation-Highly-Resolved Observational Data Integration Towards Evaluation), PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks), TRMM (Tropical Rainfall Measuring Mission), and AgMERRA (The Modern-Era Retrospective Analysis for Research and Applications) precipitation products as input data for the CSM-CERES-Wheat crop growth simulation model to predict rainfed wheat yield. Daily precipitation output from the various sources for 7 years (2000-2007) was obtained and compared with corresponding ground-observed precipitation data for 16 ground stations across the northeast of Iran. Comparisons of ground-observed daily precipitation with the corresponding data recorded by the different datasets showed a root mean square error (RMSE) of less than 3.5 for all data. AgMERRA and APHRODITE showed the highest correlation (0.68 and 0.87) and index of agreement (d) values (0.79 and 0.89) with ground-observed data. When daily precipitation data were aggregated over periods of 10 days, the RMSE, r, and d values increased (to 30, 0.8, and 0.7, respectively) for the AgMERRA, APHRODITE, PERSIANN, and TRMM precipitation data sources. The simulations of rainfed wheat leaf area index (LAI) and dry matter using the various precipitation data, coupled with solar radiation and temperature data from observations, showed typical LAI and dry matter curves across all stations. The average values of LAImax were 0.78, 0.77, 0.74, 0.70, and 0.69 using PERSIANN, AgMERRA, ground-observed precipitation data, APHRODITE, and TRMM, respectively. Rainfed wheat grain yield simulated using AgMERRA and APHRODITE daily precipitation data was highly correlated (r² ≥ 0.70) with that simulated using observed precipitation data. 
Therefore, gridded data have high potential to be used to fill gaps and missing records in ground-observed precipitation data.
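The RMSE, correlation r, and Willmott index of agreement d used in the comparison above can be sketched as follows, with hypothetical daily values standing in for the station and gridded data.

```python
import math

def scores(pred, obs):
    """RMSE, Pearson correlation r, and Willmott's index of agreement d,
    the measures used to compare gridded precipitation with gauges."""
    n = len(pred)
    mo = sum(obs) / n
    mp = sum(pred) / n
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    r = cov / math.sqrt(sum((p - mp) ** 2 for p in pred) *
                        sum((o - mo) ** 2 for o in obs))
    # Willmott (1981): d = 1 - SSE / sum((|p - mo| + |o - mo|)^2)
    d = 1 - (sum((p - o) ** 2 for p, o in zip(pred, obs)) /
             sum((abs(p - mo) + abs(o - mo)) ** 2 for p, o in zip(pred, obs)))
    return rmse, r, d

# Hypothetical daily precipitation (mm): observed vs gridded estimate
obs  = [0.0, 2.1, 5.4, 0.0, 8.0, 1.2]
pred = [0.3, 1.8, 6.0, 0.0, 7.1, 1.5]
rmse, r, d = scores(pred, obs)
```

Both r and d approach 1 for perfect agreement; d additionally penalizes systematic offsets that r alone ignores.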
Comparisons of Upper Tropospheric Humidity Retrievals from TOVS and METEOSAT
NASA Technical Reports Server (NTRS)
Escoffier, C.; Bates, J.; Chedin, A.; Rossow, W. B.; Schmetz, J.
1999-01-01
Two different methods for retrieving Upper Tropospheric Humidities (UTH) from the TOVS (TIROS Operational Vertical Sounder) instruments aboard NOAA polar orbiting satellites are presented and compared. The first one, from the Environmental Technology Laboratory, computed by J. Bates and D. Jackson (hereafter BJ method), estimates UTH from a simplified radiative transfer analysis of the upper tropospheric infrared water vapor channel measured by HIRS (6.3 micrometers). The second one results from a neural network analysis of the TOVS (HIRS and MSU) data developed at the Laboratoire de Meteorologie Dynamique (hereafter the 3I (Improved Initialization Inversion) method). Although the two methods give very similar retrievals in temperate regions (30-60 N and S), an absolute bias up to 16% appears in the convective zone of the tropics. The two datasets have also been compared with UTH retrievals from infrared radiance measurements in the 6.3 micrometer channel from the geostationary satellite METEOSAT (hereafter MET method). The METEOSAT retrievals are systematically drier than the TOVS-based results by an absolute bias between 5 and 25%. Despite the biases, the spatial and temporal correlations are very good. The purpose of this study is to explain the deviations observed between the three datasets. The sensitivity of UTH to air temperature and humidity profiles is analysed, as are cloud effects. Overall, the comparison of the three retrievals gives an assessment of the current uncertainties in water vapor amounts in the upper troposphere as determined from NOAA and METEOSAT satellites.
Estimates of tropical analysis differences in daily values produced by two operational centers
NASA Technical Reports Server (NTRS)
Kasahara, Akira; Mizzi, Arthur P.
1992-01-01
To assess the uncertainty of daily synoptic analyses for the atmospheric state, the intercomparison of three First GARP Global Experiment level IIIb datasets is performed. Daily values of divergence, vorticity, temperature, static stability, vertical motion, mixing ratio, and diagnosed diabatic heating rate are compared for the period of 26 January-11 February 1979. The spatial variance and mean, temporal mean and variance, 2D wavenumber power spectrum, anomaly correlation, and normalized square difference are employed for comparison.
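Two of the comparison measures named above, the anomaly correlation and the normalized square difference, can be sketched as below; the climatology and the two analyses are hypothetical values, not FGGE data.

```python
import math

def anomaly_correlation(a, b, clim):
    """Pearson correlation of the two analyses' departures (anomalies)
    from a common climatology."""
    xa = [x - c for x, c in zip(a, clim)]
    xb = [x - c for x, c in zip(b, clim)]
    ma = sum(xa) / len(xa)
    mb = sum(xb) / len(xb)
    cov = sum((u - ma) * (v - mb) for u, v in zip(xa, xb))
    return cov / math.sqrt(sum((u - ma) ** 2 for u in xa) *
                           sum((v - mb) ** 2 for v in xb))

def normalized_square_difference(a, b):
    """Mean squared difference normalized by the mean squared amplitude
    of the two fields."""
    num = sum((x - y) ** 2 for x, y in zip(a, b))
    den = 0.5 * (sum(x * x for x in a) + sum(y * y for y in b))
    return num / den

# Hypothetical temperatures (K) at four grid points
clim    = [250.0, 255.0, 260.0, 265.0]   # climatological reference
center1 = [252.0, 254.0, 262.0, 264.0]   # analysis from one center
center2 = [251.5, 254.5, 261.0, 264.5]   # analysis from the other center
ac = anomaly_correlation(center1, center2, clim)
nsd = normalized_square_difference(center1, center2)
```

Correlating anomalies rather than raw fields prevents the shared climatological signal from inflating the agreement between the two analyses.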
Low-Rank Tensor Subspace Learning for RGB-D Action Recognition.
Jia, Chengcheng; Fu, Yun
2016-07-09
Since RGB-D action data inherently come with extra depth information compared with RGB data, many recent works employ RGB-D data in a third-order tensor representation containing spatio-temporal structure to find a subspace for action recognition. However, there are two main challenges for these methods. First, the dimension of the subspace is usually fixed manually. Second, preserving local information by finding intra-class and inter-class neighbors on a manifold is highly time-consuming. In this paper, we learn a tensor subspace, whose dimension is determined automatically by low-rank learning, for RGB-D action recognition. In particular, the tensor samples are factorized by Tucker Decomposition to obtain three Projection Matrices (PMs); nuclear-norm minimization over the PMs yields, in closed form, the tensor ranks, which are used as the tensor subspace dimensions. Additionally, we extract discriminant and local information from a manifold using a graph constraint. This graph preserves the local knowledge inherently, which is faster than the previous approach of calculating both the intra-class and inter-class neighbors of each sample. We evaluate the proposed method on four widely used RGB-D action datasets, including the MSRDailyActivity3D, MSRActionPairs, MSRActionPairs skeleton and UTKinect-Action3D datasets, and the experimental results show the higher accuracy and efficiency of the proposed method.
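The Tucker factorization that yields the three Projection Matrices can be sketched with a truncated HOSVD, a simpler stand-in for the low-rank (nuclear-norm) learning described in the abstract; the tensor here is random and the ranks are fixed by hand rather than learned.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3rd-order tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: one projection matrix (PM) per mode, taken from
    the leading left singular vectors of each unfolding, plus the core
    tensor obtained by projecting T onto the PMs."""
    PMs = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        PMs.append(U[:, :r])
    core = T
    for mode, U in enumerate(PMs):
        # Multiply mode-`mode` of the core by U.T
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, PMs

rng = np.random.default_rng(1)
T = rng.normal(size=(6, 5, 4))        # e.g. height x width x frames
core, PMs = hosvd(T, ranks=(3, 3, 2))
```

In the paper the per-mode ranks are instead discovered by nuclear-norm minimization on the PMs, which is what makes the subspace dimension automatic.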
3D displacement field measurement with correlation based on the micro-geometrical surface texture
NASA Astrophysics Data System (ADS)
Bubaker-Isheil, Halima; Serri, Jérôme; Fontaine, Jean-François
2011-07-01
Image correlation methods are widely used in experimental mechanics to obtain displacement field measurements. Currently, these methods are applied using digital images of the initial and deformed surfaces sprayed with black or white paint. Speckle patterns are then captured and the correlation is performed with a high degree of accuracy, of the order of 0.01 pixels. In 3D, however, stereo-correlation leads to a lower degree of accuracy. Correlation techniques are based on the search for a sub-image (or pattern) displacement field. The work presented in this paper introduces a new correlation-based approach for 3D displacement field measurement that uses an additional 3D laser scanner and a CMM (Coordinate Measurement Machine). Unlike most existing methods that require the presence of markers on the observed object (such as black speckle, grids or random patterns), this approach relies solely on micro-geometrical surface textures such as waviness, roughness and aperiodic random defects. The latter are assumed to remain sufficiently small, thus providing an adequate estimate of the particle displacement. The proposed approach can be used in a wide range of applications such as sheet metal forming with large strains. The method proceeds by first obtaining point clouds using the 3D laser scanner mounted on a CMM. These points are used to create 2D maps that are then correlated. In this respect, various criteria have been investigated for creating maps consisting of patterns, which facilitate the correlation procedure. Once the maps are created, the correlation between both configurations (initial and moved) is carried out using traditional methods developed for field measurements. Measurement validation was conducted using experiments in 2D and 3D, with good results for rigid displacements in 2D and 3D and for 2D rotations.
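The subset-matching correlation at the heart of such measurements can be sketched with zero-normalized cross-correlation (NCC) over integer shifts; the "roughness map" here is random synthetic data, and the brute-force search is for clarity, not efficiency.

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two equally sized maps."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(ref, deformed, size=5):
    """Find the integer displacement of a size x size subset taken from
    the reference map by maximizing NCC over the deformed map."""
    tpl = ref[:size, :size]
    best, shift = -2.0, (0, 0)
    H, W = deformed.shape
    for i in range(H - size + 1):
        for j in range(W - size + 1):
            c = ncc(tpl, deformed[i:i + size, j:j + size])
            if c > best:
                best, shift = c, (i, j)
    return shift, best

rng = np.random.default_rng(3)
ref = rng.normal(size=(12, 12))             # stands in for a roughness map
deformed = np.roll(ref, shift=(2, 3), axis=(0, 1))   # rigid displacement
shift, score = match_subset(ref, deformed)
```

Real implementations refine the integer match to sub-pixel accuracy by interpolating the correlation surface, which is how the 0.01-pixel figure is reached.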
Ritchie, David W; Kozakov, Dima; Vajda, Sandor
2008-09-01
Predicting how proteins interact at the molecular level is a computationally intensive task. Many protein docking algorithms begin by using fast Fourier transform (FFT) correlation techniques to find putative rigid body docking orientations. Most such approaches use 3D Cartesian grids and are therefore limited to computing three dimensional (3D) translational correlations. However, translational FFTs can speed up the calculation in only three of the six rigid body degrees of freedom, and they cannot easily incorporate prior knowledge about a complex to focus and hence further accelerate the calculation. Furthermore, several groups have developed multi-term interaction potentials and others use multi-copy approaches to simulate protein flexibility, which both add to the computational cost of FFT-based docking algorithms. Hence there is a need to develop more powerful and more versatile FFT docking techniques. This article presents a closed-form 6D spherical polar Fourier correlation expression from which arbitrary multi-dimensional multi-property multi-resolution FFT correlations may be generated. The approach is demonstrated by calculating 1D, 3D and 5D rotational correlations of 3D shape and electrostatic expansions up to polynomial order L=30 on a 2 GB personal computer. As expected, 3D correlations are found to be considerably faster than 1D correlations but, surprisingly, 5D correlations are often slower than 3D correlations. Nonetheless, we show that 5D correlations will be advantageous when calculating multi-term knowledge-based interaction potentials. When docking the 84 complexes of the Protein Docking Benchmark, blind 3D shape plus electrostatic correlations take around 30 minutes on a contemporary personal computer and find acceptable solutions within the top 20 in 16 cases. Applying a simple angular constraint to focus the calculation around the receptor binding site produces acceptable solutions within the top 20 in 28 cases. 
Further constraining the search to the ligand binding site gives up to 48 solutions within the top 20, with calculation times of just a few minutes per complex. Hence the approach described provides a practical and fast tool for rigid body protein-protein docking, especially when prior knowledge about one or both binding sites is available.
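The translational FFT correlation that such docking algorithms start from can be sketched on toy occupancy grids; real scoring uses shape and electrostatic expansions rather than the simple overlap count below.

```python
import numpy as np

def fft_correlate3d(receptor, ligand):
    """Circular 3D cross-correlation via FFT: scores every translational
    offset of `ligand` against `receptor` in a single pass, the standard
    Cartesian-grid baseline that spherical polar correlations generalize."""
    F = np.fft.fftn(receptor)
    G = np.fft.fftn(ligand)
    return np.real(np.fft.ifftn(F * np.conj(G)))

# Hypothetical occupancy grids: a small blob, and the same blob shifted
grid = np.zeros((16, 16, 16))
grid[4:7, 5:8, 6:9] = 1.0
shifted = np.roll(grid, shift=(3, 2, 1), axis=(0, 1, 2))
corr = fft_correlate3d(shifted, grid)
best = np.unravel_index(np.argmax(corr), corr.shape)  # recovers the shift
```

This accelerates only the three translational degrees of freedom; the three rotations must still be sampled explicitly, which is the limitation the 6D spherical polar formulation addresses.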
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drukker, Karen, E-mail: kdrukker@uchicago.edu; Sennett, Charlene A.; Giger, Maryellen L.
2014-01-15
Purpose: Develop a computer-aided detection method and investigate its feasibility for detection of breast cancer in automated 3D ultrasound images of women with dense breasts. Methods: The HIPAA compliant study involved a dataset of volumetric ultrasound image data, “views,” acquired with an automated U-Systems Somo•V® ABUS system for 185 asymptomatic women with dense breasts (BI-RADS Composition/Density 3 or 4). For each patient, three whole-breast views (3D image volumes) per breast were acquired. A total of 52 patients had breast cancer (61 cancers), diagnosed through any follow-up at most 365 days after the original screening mammogram. Thirty-one of these patients (32 cancers) had a screening mammogram with a clinically assigned BI-RADS Assessment Category 1 or 2, i.e., were mammographically negative. All software used for analysis was developed in-house and involved 3 steps: (1) detection of initial tumor candidates, (2) characterization of candidates, and (3) elimination of false-positive candidates. Performance was assessed by calculating the cancer detection sensitivity as a function of the number of “marks” (detections) per view. Results: At a single mark per view, i.e., six marks per patient, the median detection sensitivity by cancer was 50.0% (16/32) ± 6% for patients with a screening mammogram-assigned BI-RADS category 1 or 2—similar to radiologists’ performance sensitivity (49.9%) for this dataset from a prior reader study—and 45.9% (28/61) ± 4% for all patients. Conclusions: Promising detection sensitivity was obtained for the computer on a 3D ultrasound dataset of women with dense breasts at a rate of false-positive detections that may be acceptable for clinical implementation.
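The sensitivity-versus-marks-per-view evaluation can be sketched as follows, with hypothetical candidate detections; the scores and cancer labels are illustrative, not study data.

```python
def sensitivity_by_cancer(views, marks_per_view):
    """Fraction of cancers detected when only the top-scoring
    `marks_per_view` candidates are kept in each view. Each view is a
    list of (score, cancer_id_or_None) candidates; a cancer counts as
    detected if any retained mark hits it."""
    detected, all_cancers = set(), set()
    for candidates in views:
        for _, cid in candidates:
            if cid is not None:
                all_cancers.add(cid)
        kept = sorted(candidates, key=lambda c: c[0], reverse=True)[:marks_per_view]
        detected.update(cid for _, cid in kept if cid is not None)
    return len(detected) / len(all_cancers)

# Hypothetical views: (candidate score, cancer id or None for a false positive)
views = [
    [(0.9, "c1"), (0.4, None), (0.2, None)],
    [(0.8, None), (0.7, "c2"), (0.1, None)],
    [(0.6, None), (0.5, "c3")],
]
sens1 = sensitivity_by_cancer(views, 1)   # one mark per view
sens2 = sensitivity_by_cancer(views, 2)   # two marks per view
```

Raising the number of retained marks trades a higher sensitivity against more false-positive detections per view.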
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.
2014-08-26
Understanding the interactions of structured communities known as “biofilms” and other complex matrices is possible through X-ray micro-tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels. Thus, new software is required for the effective interpretation and analysis of the data. This work describes the development and application of software to analyze and visualize high resolution X-ray micro-tomography datasets.
3D linear inversion of magnetic susceptibility data acquired by frequency domain EMI
NASA Astrophysics Data System (ADS)
Thiesson, J.; Tabbagh, A.; Simon, F.-X.; Dabas, M.
2017-01-01
Low induction number EMI instruments are able to simultaneously measure a soil's apparent magnetic susceptibility and electrical conductivity. This family of dual measurement instruments is highly useful for the analysis of soils and archeological sites. However, the electromagnetic properties of soils are found to vary over considerably different ranges: whereas their electrical conductivity varies from ≤ 0.1 to ≥ 100 mS/m, their relative magnetic permeability remains within a very small range, between 1.0001 and 1.01 SI. Consequently, although apparent conductivity measurements need to be inverted using non-linear processes, the variations of the apparent magnetic susceptibility can be approximated through the use of linear processes, as in the case of the magnetic prospection technique. Our proposed 3D inversion algorithm starts from apparent susceptibility datasets, acquired using different instruments over a given area. A reference vertical profile is defined by considering the mode of the vertical distributions of both the electrical resistivity and the magnetic susceptibility. At each point of the mapped area, the reference vertical profile response is subtracted to obtain the apparent susceptibility variation dataset. A 2D horizontal Fourier transform is applied to these variation datasets and to the dipole (impulse) response of each instrument, a (vertical) 1D inversion is performed at each point in the spectral domain, and finally the resulting dataset is inverse-transformed to restore the apparent 3D susceptibility variations. When applied to synthetic data, this method is able to correct the apparent deformations of a buried object resulting from the geometry of the instrument, and to restore reliable quantitative susceptibility contrasts. It also allows the thin layer solution, similar to that used in magnetic prospection, to be implemented. 
When applied to field data, it initially delivers a level of contrast comparable to that obtained with a non-linear 3D inversion. Over four different sites, this method is able to produce, within an acceptably short computation time, realistic values for the lateral and vertical variations in susceptibility, which are significantly different from those given by a point-by-point 1D inversion.
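A minimal one-layer sketch of the spectral-domain step described above, assuming the instrument's dipole response is already available on the same spectral grid; the Tikhonov-style regularization constant `eps` is an assumption, not taken from the paper:

```python
import numpy as np

def spectral_inversion(kappa_app, instrument_tf):
    """Invert apparent-susceptibility variations in the spectral domain.

    kappa_app     : 2D map of apparent susceptibility variations
                    (reference-profile response already subtracted).
    instrument_tf : 2D array, the instrument's dipole (impulse) response
                    expressed on the same spectral grid (assumed form).
    """
    # 2D horizontal Fourier transform of the variation dataset
    spec = np.fft.fft2(kappa_app)
    # pointwise "1D inversion" in the spectral domain: a regularized
    # division by the instrument transfer function
    eps = 1e-3 * np.abs(instrument_tf).max()
    inverted = spec * np.conj(instrument_tf) / (np.abs(instrument_tf) ** 2 + eps ** 2)
    # inverse transform restores the susceptibility variations
    return np.real(np.fft.ifft2(inverted))
```

With a well-conditioned transfer function the forward model is undone almost exactly; the regularization only matters where the instrument response is close to zero.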
3D local feature BKD to extract road information from mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang
2017-08-01
Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously degraded by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components that encode the shape and intensity information of the 3D point clouds, which are then fed to a random forest classifier to extract curbs and markings on the road. These are in turn used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
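The classification stage could be sketched with scikit-learn as follows; the feature matrix here consists of random placeholder descriptors, since the paper's Gaussian-KDE and binarization feature extraction is not reproduced:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder BKD feature matrix: each row stands in for a binarized
# kernel descriptor computed around one 3D point (values are invented).
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(300, 64)).astype(float)  # 64-bit binary descriptors
y = rng.integers(0, 3, size=300)                      # 0=other, 1=curb, 2=marking

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
labels = clf.predict(X)  # per-point labels, later grouped into curbs/markings
```

Downstream road attributes (lane count, lane width, intersections) would then be derived from the spatial layout of the points labelled as curbs and markings.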
NASA Astrophysics Data System (ADS)
Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra
2017-03-01
Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft-tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprising four manually annotated cerebral CT images. Quantitative evaluation on a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 +/- 0.01 and a mean absolute volume difference of 4.77 +/- 2.70%. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.
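The two reported evaluation metrics are straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary 3D masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def abs_volume_difference(pred, truth):
    """Absolute volume difference as a percentage of the reference volume."""
    return 100.0 * abs(int(pred.sum()) - int(truth.sum())) / truth.sum()
```

Applied voxel-wise to the CNN output and the manual annotation, these yield the Dice and volume-difference figures quoted above.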
Ramachandra, Ranjan; de Jonge, Niels
2012-01-01
Three-dimensional (3D) datasets of gold nanoparticles placed on both sides of silicon nitride membranes were recorded using focal-series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. Several iterations of deconvolution were effective in reducing the imaging noise. With an increasing number of iterations, the axial resolution increased and most of the structural information was preserved. Additional iterations improved the axial resolution by a factor of at most 4 to 6, depending on the particular dataset, and by up to 8 nm, but at the cost of a reduction in the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for the highest axial resolution is best suited for applications concerned only with the 3D locations of nanoparticles. PMID:22152090
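The paper's blind deconvolution estimates the PSFs iteratively; as a simplified stand-in, a minimal (non-blind, 1D) Richardson-Lucy loop illustrates how repeated iterations sharpen the estimate, in the same way that more deconvolution passes improved the axial resolution above:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=10):
    """Minimal 1D Richardson-Lucy deconvolution (PSF assumed known here,
    unlike the blind scheme in the paper)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)   # guard against divide-by-zero
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate
```

For noiseless data each iteration concentrates intensity back toward the true source, so a blurred point source grows sharper with iteration count; with real, noisy data too many iterations amplify noise, which is the trade-off the study quantifies.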
Performance testing of LiDAR exploitation software
NASA Astrophysics Data System (ADS)
Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.
2013-04-01
Mobile LiDAR systems have been widely used in recent years for many applications in the geosciences. One of the most important limitations of this technology is the large computational requirement of data processing. Several software solutions for data processing are available on the market, but users often lack methodologies to verify their performance accurately. In this work a methodology for LiDAR software performance testing is presented and six different suites are studied: QT Modeler, AutoCAD Civil 3D, Mars 7, Fledermaus, Carlson and TopoDOT (all in x64). Results show that QT Modeler, TopoDOT and AutoCAD Civil 3D allow the loading of large datasets, while Fledermaus, Mars 7 and Carlson do not achieve this level of performance. AutoCAD Civil 3D requires long loading times in comparison with the most capable packages, QT Modeler and TopoDOT. The Carlson suite shows the poorest results of all the software under study: point clouds larger than 5 million points cannot be loaded, and loading times are very long in comparison with the other suites even for the smaller datasets. AutoCAD Civil 3D, Carlson and TopoDOT use more threads than QT Modeler, Mars 7 and Fledermaus.
3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor
Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo
2017-01-01
In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve incorrect matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated image datasets and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675
Mori, Kensaku; Ota, Shunsuke; Deguchi, Daisuke; Kitasaka, Takayuki; Suenaga, Yasuhito; Iwano, Shingo; Hasegawa, Yosihnori; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi
2009-01-01
This paper presents a method for the automated anatomical labeling of bronchial branches extracted from 3D CT images, based on machine learning and combination optimization, and shows its application in a bronchoscopy guidance system. The procedure consists of four steps: (a) extraction of tree structures from the bronchus regions extracted from CT images, (b) construction of AdaBoost classifiers, (c) computation of candidate names for all branches by using the classifiers, and (d) selection of the best combination of anatomical names. We applied the proposed method to 90 cases of 3D CT datasets. The experimental results showed that the proposed method can assign correct anatomical names to 86.9% of the bronchial branches up to the sub-segmental lobe branches. We also overlaid the anatomical names of bronchial branches on real bronchoscopic views to guide actual bronchoscopy.
Adaptive template generation for amyloid PET using a deep learning approach.
Kang, Seung Kwan; Seo, Seongho; Shin, Seong A; Byun, Min Soo; Lee, Dong Young; Kim, Yu Kyeong; Lee, Dong Soo; Lee, Jae Sung
2018-05-11
Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans of AD, MCI, and cognitively normal subjects, we trained and tested two deep neural networks [convolutional auto-encoder (CAE) and generative adversarial network (GAN)] that produce individually adaptive PET templates. More specifically, the networks were trained using 685,100 pieces of augmented data generated by rotating 527 randomly selected datasets and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space and the label was the spatially normalized 3D PET image using the transformation parameters obtained from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between the slices (in 0.02 s). As the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in routine analysis of amyloid PET images in clinical practice and research. © 2018 Wiley Periodicals, Inc.
Effects of 3D Earth structure on W-phase CMT parameters
NASA Astrophysics Data System (ADS)
Morales, Catalina; Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo
2017-04-01
The source inversion of the W-phase has demonstrated a great potential to provide fast and reliable estimates of the centroid moment tensor (CMT) for moderate to large earthquakes. It has since been implemented in different operational environments (NEIC-USGS, PTWC, etc.) with the aim of providing rapid CMT solutions. These solutions are in particular useful for tsunami warning purposes. Computationally, W-phase waveforms are usually synthesized by summation of normal modes at long period (100 - 1000 s) for a spherical Earth model (e.g., PREM). Although the energy of these modes mainly stays in the mantle where lateral structural variations are relatively small, the impact of 3D heterogeneities on W-phase solutions has not yet been quantified. In this study, we investigate possible bias in W-phase source parameters due to unmodeled lateral structural heterogeneities. We generate a simulated dataset consisting of synthetic seismograms of large past earthquakes that accounts for the Earth's 3D structure. The W-phase algorithm is then used to invert the synthetic dataset for earthquake CMT parameters with and without added noise. Results show that the impact of 3D heterogeneities is generally larger for surface waves than for W-phase waveforms. However, some discrepancies are noted between inverted W-phase parameters and target values. Particular attention is paid to the possible bias induced by the unmodeled 3D structure in the location of the W-phase centroid. Preliminary results indicate that the parameter most susceptible to 3D Earth structure is the centroid depth.
3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading
2011-01-01
Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. Current solutions are often not very promising for the patient; it would therefore be valuable to measure the dynamic 3D deformation of the whole pelvic bone in order to obtain a more realistic dataset for better implant design. We therefore hypothesized that a material testing machine could be combined with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D movement of the markers was recorded by the cameras, and the 3D deformation of the pelvis specimen was then computed. The accuracy of the marker tracking was verified against a step-function 3D displacement curve generated with a manually driven 3D micro-motion stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level was ± 0.036 mm for a marker seen by two cameras and ± 0.022 mm for a marker tracked by six cameras. The detectable 3D movement performed by the 3D micro-motion stage was smaller than the noise level of the 3D video motion capturing system. The limiting factor of the setup was therefore the noise level, which resulted in a measurement accuracy of ± 0.036 mm for the dynamic test setup. Conclusion This 3D test setup opens new possibilities in the dynamic testing of a wide range of materials, such as anatomical specimens, biomaterials, and their combinations. 
The resulting 3D-deformation dataset can be used for a better estimation of material characteristics of the underlying structures. This is an important factor in a reliable biomechanical modelling and simulation as well as in a successful design of complex implants. PMID:21762533
NASA Astrophysics Data System (ADS)
Xiao, X.; Wen, L.
2017-12-01
As a typical active intracontinental mountain range in Central Asia, the Tian Shan serves as a prototype for studying the geodynamic processes and mechanisms of intracontinental mountain building. We study the 3D crust and uppermost mantle structure beneath the Tian Shan region using ambient noise and earthquake surface waves. Our dataset includes vertical-component records of 62 permanent broadband seismic stations operated by the Earthquake Administration of China. Firstly, we calculate two-year stacked Cross-Correlation Functions (CCFs) of ambient noise records between the stations. The CCFs are treated as the Empirical Green's Functions (EGFs) of each station pair, from which we measure phase velocities of the fundamental-mode Rayleigh wave in the period range of 3-40 s using a frequency-time analysis method. Secondly, we collect surface wave data from tele-seismic events with Mw > 5.5 and depth shallower than 200 km and measure phase velocities of the fundamental-mode Rayleigh wave in the period range of 30-150 s using a two-station method. Finally, we combine the phase velocity measurements from ambient noise and earthquake surface waves, obtain lateral isotropic phase velocity maps at different periods based on tomography, and invert for a 3D Vsv model of the crust and uppermost mantle down to about 150 km using a Monte Carlo inversion method. We will discuss our inversion results in detail, as well as their implications for the tectonics of the region.
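The CCF stacking step could be sketched as follows; the simple per-trace normalization here is an assumed stand-in for the usual pre-processing (one-bit normalization, spectral whitening) that the abstract does not detail:

```python
import numpy as np

def stacked_ccf(records_a, records_b):
    """Stack daily cross-correlation functions between two stations.

    records_a, records_b : lists of equal-length 1D arrays (daily traces).
    Returns the stacked CCF, taken as an estimate of the Empirical
    Green's Function (EGF) between the station pair.
    """
    n = len(records_a[0])
    stack = np.zeros(2 * n - 1)
    for a, b in zip(records_a, records_b):
        a = (a - a.mean()) / (a.std() + 1e-12)   # simple per-trace normalization
        b = (b - b.mean()) / (b.std() + 1e-12)
        stack += np.correlate(a, b, mode="full")
    return stack / len(records_a)
```

A coherent travel-time delay between the two stations shows up as a peak offset from zero lag in the stacked CCF, from which phase velocities can be measured.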
NASA Astrophysics Data System (ADS)
Wang, Guanxi; Tie, Yun; Qi, Lin
2017-07-01
In this paper, we propose a novel action recognition approach based on depth maps, computing Multi-Scale Histograms of Oriented Gradient (MSHOG) from sequences of depth maps. Each depth frame in a depth video sequence is projected onto three orthogonal Cartesian planes. Under each projection view, the absolute difference between consecutive projected maps is accumulated through the depth video sequence to form a Depth Motion Trail Image (DMTI). The MSHOG is then computed from these images to represent an action. In addition, we apply L2-Regularized Collaborative Representation (L2-CRC) to classify actions. We evaluate the proposed approach on the MSR Action3D and MSRGesture3D datasets. Promising experimental results demonstrate the effectiveness of the proposed method.
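The DMTI accumulation for one projection view reduces to a few lines; a minimal sketch (the three orthogonal projections and the MSHOG computation are not reproduced):

```python
import numpy as np

def depth_motion_trail(depth_maps):
    """Accumulate absolute differences between consecutive projected
    depth maps to form one motion-trail image per projection view.

    depth_maps : sequence of 2D arrays (one projected map per frame).
    """
    depth_maps = np.asarray(depth_maps, dtype=float)
    return np.abs(np.diff(depth_maps, axis=0)).sum(axis=0)
```

Applied to the front, side and top projections of a depth video, this yields the three images from which the MSHOG descriptors are extracted.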
Human action recognition based on kinematic similarity in real time
Chen, Longting; Luo, Ailing; Zhang, Sicong
2017-01-01
Human action recognition using 3D pose data has gained growing interest in the field of computer robotic interfaces and pattern recognition since hardware to capture human pose became available. In this paper, we propose a fast, simple, and powerful method of human action recognition based on human kinematic similarity. The key to this method is that the action descriptor consists of joint positions, angular velocity and angular acceleration, which accommodates different individual body sizes and eliminates complex normalization. The angular parameters of joints within a short sliding time window (approximately 5 frames) around the current frame are used to express each pose frame of a human action sequence. Moreover, three modified KNN (k-nearest-neighbors algorithm) classifiers are employed in our method: one for obtaining the confidence of every frame in the training step, one for estimating the frame label of each descriptor, and one for classifying actions. Additionally estimating each frame’s time label makes it possible to address single input frames. This approach can be used on difficult, unsegmented sequences. The proposed method is efficient and can run in real time. The research shows that many public datasets are irregularly segmented, and a simple method is provided to regularize the datasets. The approach is tested on challenging datasets such as MSR-Action3D, MSRDailyActivity3D, and UTD-MHAD. The results indicate that our method achieves higher accuracy. PMID:29073131
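A minimal sketch of the per-frame descriptor and a plain KNN vote; the paper's three modified KNN classifiers and confidence weighting are not reproduced, and the descriptor layout is an assumption:

```python
import numpy as np

def frame_descriptor(angles, t, window=5):
    """Descriptor for frame t: joint angles plus angular velocity and
    acceleration estimated over a short sliding window (assumed form).

    angles : (n_frames, n_joints) array of joint angles per frame.
    """
    half = window // 2
    lo, hi = max(0, t - half), min(len(angles), t + half + 1)
    seg = angles[lo:hi]
    vel = np.gradient(seg, axis=0)   # angular velocity within the window
    acc = np.gradient(vel, axis=0)   # angular acceleration within the window
    return np.concatenate([angles[t], vel[t - lo], acc[t - lo]])

def knn_label(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbours vote (stand-in for the modified KNNs)."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()
```

Because the descriptor uses angular rather than absolute quantities, two performers of different body sizes executing the same motion produce similar descriptors without explicit skeleton normalization.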
Kim, Jung-in; Choi, Chang Heon; Wu, Hong-Gyun; Kim, Jin Ho; Kim, Kyubo; Park, Jong Min
2017-01-01
The aim of this work was to investigate correlations between 2D and quasi-3D gamma passing rates. A total of 20 patients (10 prostate cases and 10 head and neck cases, H&N) were retrospectively selected. For each patient, both intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were generated. For each plan, 2D gamma evaluation with radiochromic films and quasi-3D gamma evaluation with fluence measurements were performed with both 2%/2 mm and 3%/3 mm criteria. Gamma passing rates were grouped together according to delivery techniques and treatment sites. Statistical analyses were performed to examine the correlation between 2D and quasi-3D gamma evaluations. A statistically significant difference was observed between delivery techniques only in the quasi-3D gamma passing rates with 2%/2 mm. Statistically significant differences were observed between treatment sites in the 2D gamma passing rates (differences of less than 8%). No statistically significant correlations were observed between 2D and quasi-3D gamma passing rates except for the VMAT group and the group including both IMRT and VMAT with the 3%/3 mm criterion (r = 0.564 with p = 0.012 for the VMAT group and r = 0.372 with p = 0.020 for the group including both IMRT and VMAT); however, these correlations were not strong. No strong correlations were observed between 2D and quasi-3D gamma evaluations. PMID:27690300
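For orientation, a minimal 1D sketch of the gamma evaluation behind these passing rates; the clinical analyses are 2D and quasi-3D, and global (rather than local) dose normalization is an assumption here:

```python
import numpy as np

def gamma_passing_rate(ref, meas, coords, dose_tol=0.03, dist_tol=3.0):
    """Global gamma analysis of 1D dose profiles (e.g. a 3%/3 mm criterion).

    ref, meas : reference and measured dose sampled on the grid `coords` (mm).
    Returns the fraction of points with gamma <= 1.
    """
    dd = dose_tol * ref.max()                      # global dose criterion
    gammas = []
    for xm, dm in zip(coords, meas):
        # combined dose-difference / distance-to-agreement metric,
        # minimized over all reference points
        g2 = ((ref - dm) / dd) ** 2 + ((coords - xm) / dist_tol) ** 2
        gammas.append(np.sqrt(g2.min()))
    return float(np.mean(np.array(gammas) <= 1.0))
```

Tightening the criterion from 3%/3 mm to 2%/2 mm shrinks both tolerances, which is why the 2%/2 mm passing rates above are the more discriminating ones.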
On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface.
Lopes, Daniel Simões; Parreira, Pedro Duarte de Figueiredo; Paulo, Soraia Figueiredo; Nunes, Vitor; Rego, Paulo Amaral; Neves, Manuel Cassiano; Rodrigues, Pedro Silva; Jorge, Joaquim Armando
2017-08-01
Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis as users often struggle to obtain the desired orientation, which is only achieved after several attempts. In this paper, we examine which 3D analysis tasks are better performed with 3D hand cursors on a touchless interface than with 2D input devices on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists and professional biomedical engineers. Results demonstrate its usability: the proposed touchless interface improves spatial awareness and offers more fluent interaction with the 3D volume than traditional 2D input devices, as it requires fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations, which is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed. Copyright © 2017 Elsevier Inc. All rights reserved.
Prediction of protein subcellular locations by GO-FunD-PseAA predictor.
Chou, Kuo-Chen; Cai, Yu-Dong
2004-08-06
The localization of a protein in a cell is closely correlated with its biological function. With the explosion of protein sequences entering DataBanks, it is highly desirable to develop an automated method that can rapidly identify their subcellular locations. This would expedite the annotation process, providing timely useful information for both basic research and industrial application. In view of this, a powerful predictor has been developed by hybridizing the gene ontology approach [Nat. Genet. 25 (2000) 25], the functional domain composition approach [J. Biol. Chem. 277 (2002) 45765], and the pseudo-amino acid composition approach [Proteins Struct. Funct. Genet. 43 (2001) 246; Erratum: ibid. 44 (2001) 60]. As a showcase, the recently constructed dataset [Bioinformatics 19 (2003) 1656] was used for demonstration. The dataset contains 7589 proteins classified into 12 subcellular locations: chloroplast, cytoplasmic, cytoskeleton, endoplasmic reticulum, extracellular, Golgi apparatus, lysosomal, mitochondrial, nuclear, peroxisomal, plasma membrane, and vacuolar. The overall success rate of prediction obtained by the jackknife cross-validation was 92%. This is so far the highest success rate achieved on this dataset by following an objective and rigorous cross-validation procedure.
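The jackknife (leave-one-out) procedure itself is generic; a minimal sketch with a stand-in nearest-neighbour predictor in place of the GO-FunD-PseAA hybrid:

```python
import numpy as np

def jackknife_success_rate(X, y, classify):
    """Jackknife test: each sample is left out in turn, the predictor is
    built on the remaining samples, and the held-out sample is scored."""
    correct = 0
    n = len(y)
    for i in range(n):
        mask = np.arange(n) != i
        correct += classify(X[mask], y[mask], X[i]) == y[i]
    return correct / n

def nearest_neighbour(train_X, train_y, x):
    """Simple stand-in predictor; the actual GO-FunD-PseAA predictor
    hybridizes GO, functional-domain and pseudo-amino-acid features."""
    return train_y[np.linalg.norm(train_X - x, axis=1).argmin()]
```

Because every sample is tested exactly once while excluded from training, the jackknife gives the rigorous, non-optimistic success rate the abstract emphasizes.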
NASA Astrophysics Data System (ADS)
Aires, Filipe; Miolane, Léo; Prigent, Catherine; Pham Duc, Binh; Papa, Fabrice; Fluet-Chouinard, Etienne; Lehner, Bernhard
2017-04-01
The Global Inundation Extent from Multi-Satellites (GIEMS) provides multi-year monthly variations of the global surface water extent at 25 km × 25 km resolution. It is derived from multiple satellite observations. Its spatial resolution is usually compatible with climate model outputs and with global land surface model grids, but is clearly not adequate for local applications that require the characterization of small individual water bodies. There is today a strong demand for high-resolution inundation extent datasets, for a large variety of applications such as water management, regional hydrological modeling, or the analysis of mosquito-related diseases. A new procedure is introduced to downscale the GIEMS low spatial resolution inundations to a 3 arc-second (90 m) dataset. The methodology is based on topography and hydrography information from the HydroSHEDS database. A new floodability index is adopted and an innovative smoothing procedure is developed to ensure a smooth transition, in the high-resolution maps, between the low-resolution boxes from GIEMS. Topography information is relevant for natural hydrology environments controlled by elevation, but is more limited in human-modified basins. However, the proposed downscaling approach is compatible with forthcoming fusion with other, more pertinent satellite information in these difficult regions. The resulting GIEMS-D3 database is the only high spatial resolution inundation database available globally at the monthly time scale over the 1993-2007 period. GIEMS-D3 is assessed by analyzing its spatial and temporal variability, and evaluated by comparisons to other independent satellite observations from visible (Google Earth and Landsat), infrared (MODIS) and active microwave (SAR) sensors.
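A minimal sketch of the core allocation idea behind such a downscaling: within each low-resolution cell, high-resolution pixels are flagged as water in decreasing order of floodability until the low-resolution inundated fraction is matched (the paper's smoothing across cell borders is omitted):

```python
import numpy as np

def downscale_cell(flood_index, inundated_fraction):
    """Distribute a low-resolution inundated fraction over high-res pixels.

    flood_index        : 2D floodability index for one low-res cell
                         (e.g. derived from HydroSHEDS topography).
    inundated_fraction : fraction of the cell flagged as water (0..1).
    Returns a binary high-resolution water mask for the cell.
    """
    n_wet = int(round(inundated_fraction * flood_index.size))
    order = np.argsort(flood_index, axis=None)[::-1]   # most floodable first
    mask = np.zeros(flood_index.size, dtype=bool)
    mask[order[:n_wet]] = True
    return mask.reshape(flood_index.shape)
```

By construction the high-resolution mask conserves the low-resolution water fraction while concentrating water in topographically plausible locations.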
Initial spatio-temporal domain expansion of the Modelfest database
NASA Astrophysics Data System (ADS)
Carney, Thom; Mozaffari, Sahar; Sun, Sean; Johnson, Ryan; Shirvastava, Sharona; Shen, Priscilla; Ly, Emma
2013-03-01
The first Modelfest group publication appeared in the SPIE Human Vision and Electronic Imaging conference proceedings in 1999. "One of the group's goals is to develop a public database of test images with threshold data from multiple laboratories for designing and testing HVS (Human Vision Models)." After extended discussions the group selected a set of 45 static images thought to best meet that goal and collected psychophysical detection data, which is available on the Web and presented in the 2000 SPIE conference proceedings. Several groups have used these datasets to test spatial modeling ideas. Further discussions led to the preliminary stimulus specification for extending the database into the temporal domain, which was published in the 2002 conference proceedings. After a hiatus of 12 years, some of us have collected spatio-temporal thresholds on an expanded stimulus set of 41 video clips; the original specification included 35 clips. The principal change involved adding one additional spatial pattern beyond the three originally specified. The stimuli consisted of 4 spatial patterns: a Gaussian blob, a 4 c/d Gabor patch, an 11.3 c/d Gabor patch and a 2D white noise patch. Across conditions the patterns were temporally modulated over a range of approximately 0-25 Hz, as well as in temporal edge and pulse modulation conditions. The display and data collection specifications were as specified by the Modelfest group in the 2002 conference proceedings. To date seven subjects have participated in this phase of the data collection effort, one of whom also participated in the first phase of Modelfest. Three of the spatio-temporal stimuli were identical to conditions in the original static dataset. Small differences in the thresholds were evident and may point to a stimulus limitation. The temporal CSF peaked between 4 and 8 Hz for the 0 c/d (Gaussian blob) and 4 c/d patterns. The 4 c/d and 11.3 c/d Gabor temporal CSFs were low pass, while the 0 c/d pattern was band pass. 
This preliminary expansion of the Modelfest dataset needs the participation of additional laboratories to evaluate the impact of different methods on threshold estimates and increase the subject base. We eagerly await the addition of new data from interested researchers. It remains to be seen how accurately general HVS models will predict thresholds across both Modelfest datasets.
Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.
2011-01-01
The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377
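Thin-plate-spline fitting of a segmented surface can be sketched with SciPy's RBFInterpolator; the coordinates and axial values below are invented, and the paper's two-stage dual-spline scheme is not reproduced:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical example: (x, y) locations of a segmented retinal surface
# with axial positions z. A thin-plate spline fitted to the surface gives
# a smooth estimate of the axial artifact, which is then subtracted.
xy = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]], dtype=float)
z = np.array([0.0, 1.0, 1.0, 2.0, 1.0])       # tilted-plane-like artifact
tps = RBFInterpolator(xy, z, kernel="thin_plate_spline")
corrected = z - tps(xy)                        # residual after removing the fit
```

In practice the spline would be fitted to a sparse, reliable subset of surface points and evaluated across the full scan; the two-stage approach in the paper estimates distinct artifacts along the x and y axes separately.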
Analyzing how we do Analysis and Consume Data, Results from the SciDAC-Data Project
NASA Astrophysics Data System (ADS)
Ding, P.; Aliaga, L.; Mubarak, M.; Tsaris, A.; Norman, A.; Lyon, A.; Ross, R.
2017-10-01
One of the main goals of the Dept. of Energy funded SciDAC-Data project is to analyze the more than 410,000 high energy physics datasets that have been collected, generated and defined over the past two decades by experiments using the Fermilab storage facilities. These datasets have been used as the input to over 5.6 million recorded analysis projects, for which detailed analytics have been gathered. The analytics and meta information for these datasets and analysis projects are being combined with knowledge of their part of the HEP analysis chains for major experiments to understand how modern computing and data delivery is being used. We present the first results of this project, which examine in detail how the CDF, D0, NOvA, MINERvA and MicroBooNE experiments have organized, classified and consumed petascale datasets to produce their physics results. The results include analysis of the correlations in dataset/file overlap, data usage patterns, data popularity, dataset dependency and temporary dataset consumption. The results provide critical insight into how workflows and data delivery schemes can be combined with different caching strategies to more efficiently perform the work required to mine these large HEP data volumes and to understand the physics analysis requirements for the next generation of HEP computing facilities. In particular we present a detailed analysis of the NOvA data organization and consumption model corresponding to their first and second oscillation results (2014-2016) and the first look at the analysis of the Tevatron Run II experiments. We present statistical distributions for the characterization of these data and data driven models describing their consumption.
Muldoon, P P; Jackson, K J; Perez, E; Harenza, J L; Molas, S; Rais, B; Anwar, H; Zaveri, N T; Maldonado, R; Maskos, U; McIntosh, J M; Dierssen, M; Miles, M F; Chen, X; De Biasi, M; Damaj, M I
2014-08-01
Recent data have indicated that α3β4* neuronal nicotinic (n) ACh receptors may play a role in morphine dependence. Here we investigated if nACh receptors modulate morphine physical withdrawal. To assess the role of α3β4* nACh receptors in morphine withdrawal, we used a genetic correlation approach using publically available datasets within the GeneNetwork web resource, genetic knockout and pharmacological tools. Male and female European-American (n = 2772) and African-American (n = 1309) subjects from the Study of Addiction: Genetics and Environment dataset were assessed for possible associations of polymorphisms in the 15q25 gene cluster and opioid dependence. BXD recombinant mouse lines demonstrated an increased expression of α3, β4 and α5 nACh receptor mRNA in the forebrain and midbrain, which significantly correlated with increased defecation in mice undergoing morphine withdrawal. Mice overexpressing the gene cluster CHRNA5/A3/B4 exhibited increased somatic signs of withdrawal. Furthermore, α5 and β4 nACh receptor knockout mice expressed decreased somatic withdrawal signs compared with their wild-type counterparts. Moreover, selective α3β4* nACh receptor antagonists, α-conotoxin AuIB and AT-1001, attenuated somatic signs of morphine withdrawal in a dose-related manner. In addition, two human datasets revealed a protective role for variants in the CHRNA3 gene, which codes for the α3 nACh receptor subunit, in opioid dependence and withdrawal. In contrast, we found that the α4β2* nACh receptor subtype is not involved in morphine somatic withdrawal signs. Overall, our findings suggest an important role for the α3β4* nACh receptor subtype in morphine physical dependence. © 2014 The British Pharmacological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon
Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer.
The correct corresponding image is then found among those candidates via real-time 2D registration based on a gradient-based similarity measure. Finally, if needed, they obtain the position information of the liver lesion using the 3D preoperative image to which the registered 2D preoperative slice belongs. Results: The proposed method was applied to 23 clinical datasets and quantitative evaluations were conducted. With the exception of one clinical dataset that included US images of extremely low quality, 22 datasets of various liver status were successfully applied in the evaluation. Experimental results showed that the registration error between the anatomical features of US and preoperative MR images is less than 3 mm on average. The lesion tracking error was also found to be less than 5 mm at maximum. Conclusions: A new system has been proposed for real-time registration between 2D US and successive multiple 3D preoperative MR/CT images of the liver and was applied for indirect lesion tracking for image-guided intervention. The system is fully automatic and robust even with images that had low quality due to patient status. Through visual examinations and quantitative evaluations, it was verified that the proposed system can provide high lesion tracking accuracy as well as high registration accuracy, at performance levels which were acceptable for various clinical applications.
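The real-time slice matching above relies on a gradient-based similarity measure; the abstract does not specify which one, so the following sketch uses one plausible form, the correlation of finite-difference gradient magnitudes. The images are invented 2D arrays:

```python
# Illustrative gradient-based similarity (not the authors' implementation).
import math

def grad_mag(img):
    """Forward-difference gradient magnitude; last row/column left at zero."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            g[y][x] = math.hypot(gx, gy)
    return g

def similarity(a, b):
    """Pearson correlation of the two images' gradient-magnitude fields."""
    ga = [v for row in grad_mag(a) for v in row]
    gb = [v for row in grad_mag(b) for v in row]
    ma, mb = sum(ga) / len(ga), sum(gb) / len(gb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ga, gb))
    den = math.sqrt(sum((x - ma) ** 2 for x in ga) *
                    sum((y - mb) ** 2 for y in gb))
    return num / den if den else 0.0

img = [[0, 0, 1, 1]] * 4          # a vertical edge
print(round(similarity(img, img), 6))   # identical images score ~1.0
```

A measure like this is attractive for US-to-MR matching because it compares edge structure rather than raw intensities, which differ between modalities.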
Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Baochun; Huang, Cheng; Zhou, Shoujun
Purpose: A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. Methods: The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods—3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration—are used to establish shape correspondence. Results: The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively.
Conclusions: The proposed automatic approach achieves robust, accurate, and fast liver segmentation for 3D CTce datasets. The AdaBoost voxel classifier can detect liver area quickly without errors and provides sufficient liver shape information for model initialization. The AdaBoost profile classifier achieves sufficient accuracy and greatly decreases segmentation time. These results show that the proposed segmentation method achieves a level of accuracy comparable to that of state-of-the-art automatic methods based on ASM.
Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model.
He, Baochun; Huang, Cheng; Sharp, Gregory; Zhou, Shoujun; Hu, Qingmao; Fang, Chihua; Fan, Yingfang; Jia, Fucang
2016-05-01
A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods (3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration) are used to establish shape correspondence. The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively.
The proposed automatic approach achieves robust, accurate, and fast liver segmentation for 3D CTce datasets. The AdaBoost voxel classifier can detect liver area quickly without errors and provides sufficient liver shape information for model initialization. The AdaBoost profile classifier achieves sufficient accuracy and greatly decreases segmentation time. These results show that the proposed segmentation method achieves a level of accuracy comparable to that of state-of-the-art automatic methods based on ASM.
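The Dice similarity coefficient (DSC) used to score the segmentations above is simple to compute; a minimal sketch follows, with invented 0/1 masks flattened to 1D for brevity:

```python
# DSC between two binary masks; real use would flatten 3D voxel grids the same way.

def dice(mask_a, mask_b):
    """2*|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

seg = [1, 1, 1, 0, 0, 1]   # invented automatic segmentation
ref = [1, 1, 0, 0, 1, 1]   # invented reference delineation
print(dice(seg, ref))      # 2*3 / (4+4) = 0.75
```

A DSC of 0.94-0.96, as reported above, corresponds to near-complete voxel overlap with the reference liver delineation.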
Statistical analysis of co-occurrence patterns in microbial presence-absence datasets
Bewick, Sharon; Thielen, Peter; Mehoke, Thomas; Breitwieser, Florian P.; Paudel, Shishir; Adhikari, Arjun; Wolfe, Joshua; Slud, Eric V.; Karig, David; Fagan, William F.
2017-01-01
Drawing on a long history in macroecology, correlation analysis of microbiome datasets is becoming a common practice for identifying relationships or shared ecological niches among bacterial taxa. However, many of the statistical issues that plague such analyses in macroscale communities remain unresolved for microbial communities. Here, we discuss problems in the analysis of microbial species correlations based on presence-absence data. We focus on presence-absence data because this information is more readily obtainable from sequencing studies, especially for whole-genome sequencing, where abundance estimation is still in its infancy. First, we show how Pearson’s correlation coefficient (r) and Jaccard’s index (J), two of the most common metrics for correlation analysis of presence-absence data, can contradict each other when applied to a typical microbiome dataset. In our dataset, for example, 14% of species-pairs predicted to be significantly correlated by r were not predicted to be significantly correlated using J, while 37.4% of species-pairs predicted to be significantly correlated by J were not predicted to be significantly correlated using r. Mismatch was particularly common among species-pairs with at least one rare species (<10% prevalence), explaining why r and J might differ more strongly in microbiome datasets, where there are large numbers of rare taxa. Indeed, 74% of all species-pairs in our study had at least one rare species. Next, we show how Pearson’s correlation coefficient can result in artificial inflation of positive taxon relationships and how this is a particular problem for microbiome studies. We then illustrate how Jaccard’s index of similarity (J) can yield improvements over Pearson’s correlation coefficient. However, the standard null model for Jaccard’s index is flawed, and thus introduces its own set of spurious conclusions.
We thus identify a better null model based on a hypergeometric distribution, which appropriately corrects for species prevalence. This model is available from recent statistics literature, and can be used for evaluating the significance of any value of an empirically observed Jaccard’s index. The resulting simple, yet effective method for handling correlation analysis of microbial presence-absence datasets provides a robust means of testing and finding relationships and/or shared environmental responses among microbial taxa. PMID:29145425
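The hypergeometric null model recommended above can be written directly with stdlib combinatorics: under random placement, the probability of two taxa co-occurring in at least k samples depends only on their prevalences. The prevalence counts below are invented:

```python
# Hypergeometric tail probability for taxon co-occurrence; counts are invented.
from math import comb

def cooccurrence_pvalue(n_samples, n_a, n_b, k_obs):
    """P(overlap >= k_obs) when taxon A occupies n_a and taxon B occupies n_b
    of n_samples sites, independently of each other (hypergeometric null)."""
    denom = comb(n_samples, n_b)
    kmax = min(n_a, n_b)
    return sum(comb(n_a, k) * comb(n_samples - n_a, n_b - k)
               for k in range(k_obs, kmax + 1)) / denom

# two taxa seen in 10 and 12 of 50 samples, co-occurring in 7 of them
print(cooccurrence_pvalue(50, 10, 12, 7))
```

With `k_obs = 0` the sum telescopes to exactly 1 by the Vandermonde identity, a handy sanity check; note how the model corrects for prevalence, since the same overlap of 7 would be unremarkable for two near-ubiquitous taxa.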
NASA Astrophysics Data System (ADS)
Prat, O. P.; Nelson, B. R.
2014-10-01
We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes satellite multi-sensor datasets (bias-adjusted TMPA 3B42, near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. Remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at the annual, seasonal, and daily scales over the River Forecast Centers (RFCs) for CONUS. Annual average rain rates show satisfactory agreement with GHCN-D for all products over CONUS (± 6%). However, differences at the RFC scale are larger, in particular for near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed substantial improvement over its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the Western US for higher average accumulations (≥ 5 mm day-1) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (> 4 in day-1) over the Northwest. Furthermore, the conditional and contingency analyses conducted illustrate the challenge of retrieving extreme precipitation from remote sensing estimates.
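Contingency analyses of the kind mentioned in the closing sentence are typically built from hit/miss/false-alarm counts against a threshold. The daily totals and the threshold below are invented for illustration, not taken from the study:

```python
# Standard contingency-table scores for threshold exceedance; data invented.

def contingency(obs, est, threshold):
    """Probability of detection, false alarm ratio, critical success index."""
    hits = sum(o >= threshold and e >= threshold for o, e in zip(obs, est))
    misses = sum(o >= threshold and e < threshold for o, e in zip(obs, est))
    false_alarms = sum(o < threshold and e >= threshold for o, e in zip(obs, est))
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return pod, far, csi

gauge = [120, 30, 110, 5, 0, 150]   # invented daily totals, mm
sat   = [100, 40, 80, 10, 0, 160]   # invented remote-sensing estimates, mm
print(contingency(gauge, sat, threshold=100))
```

A low POD at high thresholds is precisely the behaviour reported above for 3B42RT and extreme daily precipitation.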
A photogrammetric technique for generation of an accurate multispectral optical flow dataset
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2017-06-01
The presence of an accurate dataset is a key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets have been developed in recent years and have given rise to many powerful algorithms. However, most of these datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with accurate ground truth. The generation of accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth optical flow. In this paper a new photogrammetric method for the generation of accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Webbrowser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D-Geodata, such as 3D city models with an extremely high polygon count and a vast number of textures, at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D-Geodata on nearly every device.
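Serving 2D map data to many web clients usually rests on a tile pyramid addressed by zoom level and tile indices. The following is the standard slippy-map/Web-Mercator tiling computation widely used by web mapping stacks, shown for illustration; it is not code taken from OpenWebGlobe 2 itself:

```python
# Standard Web-Mercator (slippy-map) tile addressing; example coordinates invented.
import math

def lonlat_to_tile(lon, lat, zoom):
    """Return the (x, y) tile containing a WGS84 lon/lat at the given zoom."""
    n = 2 ** zoom                          # tiles per axis at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(7.6, 47.5, 12))       # a point near Basel at zoom 12
```

Because each zoom level quadruples the tile count, precomputing and caching such tiles in the cloud is what lets a server farm feed many mobile clients at interactive rates.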
Characterization and reconstruction of 3D stochastic microstructures via supervised learning.
Bostanabad, R; Chen, W; Apley, D W
2016-12-01
The need for computational characterization and reconstruction of volumetric maps of stochastic microstructures for understanding the role of material structure in the processing-structure-property chain has been highlighted in the literature. Recently, a promising characterization and reconstruction approach has been developed where the essential idea is to convert the digitized microstructure image into an appropriate training dataset to learn the stochastic nature of the morphology by fitting a supervised learning model to the dataset. This compact model can subsequently be used to efficiently reconstruct as many statistically equivalent microstructure samples as desired. The goal of this paper is to build upon the developed approach in three major directions by: (1) extending the approach to characterize 3D stochastic microstructures and efficiently reconstruct 3D samples, (2) improving the performance of the approach by incorporating user-defined predictors into the supervised learning model, and (3) addressing potential computational issues by introducing a reduced model which can perform as effectively as the full model. We test the extended approach on three examples and show that the spatial dependencies, as evaluated via various measures, are well preserved in the reconstructed samples. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
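The data-preparation step described above, converting a digitized microstructure into a supervised training set where previously visited neighbours predict the current voxel, can be sketched as follows. The binary image is invented and, for brevity, 2D with a three-pixel causal neighbourhood:

```python
# Invented 2D binary microstructure; 1 = phase of interest, 0 = matrix.
img = [[0, 0, 1, 1],
       [0, 1, 1, 0],
       [1, 1, 0, 0]]

rows, cols = len(img), len(img[0])
X, y = [], []                        # predictors and response for the learner
for i in range(rows):
    for j in range(cols):
        if i and j:                  # need a full causal neighbourhood
            # left, upper-left, and upper neighbours predict the current pixel
            X.append((img[i][j - 1], img[i - 1][j - 1], img[i - 1][j]))
            y.append(img[i][j])
print(len(X), X[0], y[0])
```

Fitting any classifier to `(X, y)` and then sampling it raster-order with its own past predictions as input is what reconstructs statistically equivalent microstructures; the 3D extension simply enlarges the causal neighbourhood to previously visited voxels in three dimensions.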
Visual attention in egocentric field-of-view using RGB-D data
NASA Astrophysics Data System (ADS)
Olesova, Veronika; Benesova, Wanda; Polatsek, Patrik
2017-03-01
Most of the existing solutions predicting visual attention focus solely on referenced 2D images and disregard any depth information. This aspect has always represented a weak point since the depth is an inseparable part of the biological vision. This paper presents a novel method of saliency map generation based on results of our experiments with egocentric visual attention and investigation of its correlation with perceived depth. We propose a model to predict the attention using superpixel representation with an assumption that contrast objects are usually salient and have a sparser spatial distribution of superpixels than their background. To incorporate depth information into this model, we propose three different depth techniques. The evaluation is done on our new RGB-D dataset created by SMI eye-tracker glasses and KinectV2 device.
NASA Astrophysics Data System (ADS)
Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain
2016-04-01
The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insight into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A results-checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data, limited only by the number of grains in the point cloud dataset; 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud.
This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and along the Laonong river in Taiwan, for which point clouds were obtained using both terrestrial lidar scanning and structure-from-motion photogrammetry.
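Once an ellipsoid is fitted to each grain, a grain-size distribution can be summarized by percentiles of the intermediate-axis diameter, the quantity usually reported in field granulometry. The semi-axis lengths below are invented for illustration:

```python
# Percentile summary of an invented set of fitted ellipsoid semi-axes.
import statistics

# fitted semi-axes (a >= b >= c) per grain, in metres (invented)
grains = [(0.40, 0.25, 0.10), (0.20, 0.15, 0.08),
          (0.90, 0.60, 0.30), (0.35, 0.30, 0.20), (0.12, 0.10, 0.05)]

# intermediate-axis (b-axis) diameters, the usual granulometry measure
b_diameters = sorted(2 * b for _, b, _ in grains)

q = statistics.quantiles(b_diameters, n=100)   # 99 percentile cut points
d16, d50, d84 = q[15], q[49], q[83]
print(f"D16={d16:.3f} m  D50={d50:.3f} m  D84={d84:.3f} m")
```

Because the method yields every grain above the point-cloud resolution rather than a pebble-count sample, such percentiles come without the usual operator selection bias.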
NASA Astrophysics Data System (ADS)
Qiu, T.; Song, C.
2017-12-01
Many studies have examined the urbanization-induced vegetation phenology changes in urban environments at regional scales. However, relatively few studies have investigated the effects of urban expansion on vegetation phenology at global scale. In this study, we used times series of NASA Vegetation Index and Phenology (VIP) and ESA Climate Change Initiative Land Cover datasets to quantify how urban expansion affects growing seasons of vegetation in 14 different biomes along both latitude and urbanization gradients from 1993 to 2014. First, we calculated the percentages of impervious surface area (ISA) at 0.05˚ grid to match the spatial resolution of VIP dataset. We then applied logistic models to the ISA series to characterize the time periods of stable ISA, pre-urbanization and post-urbanization for each grid. The amplitudes of urbanization were also derived from the fitted ISA series. We then calculated the mean values of the Start of Season (SOS), End of Season (EOS) and Length of Season (LOS) from VIP datasets within each period. Linear regressions were used to quantify the correlations between ISA and SOS/EOS/LOS in 14 biomes along the latitude gradient for each period. We also calculated the differences of SOS/EOS/LOS between pre-urbanization and post-urbanization periods and applied quantile regressions to characterize the relationships between amplitudes of urbanization and those differences. We found significant correlations (p-value < 0.05) between ISA and the growing seasons of a) boreal forests at 55-60 ˚N; b) temperate broadleaf and mixed forests at 30-55 ˚N; c) temperate coniferous forests at 30-45 ˚N; d) temperate grasslands, savannas, and shrublands at 35-60 ˚N and 30-35 ˚S. We also found a significant positive correlation (p-value <0.05) between amplitudes of urbanization and LOS as well as a significant negative correlation (p-value<0.05) between amplitudes of urbanization and SOS in temperate broadleaf and mixed forest.
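The logistic characterization of the ISA series described above can be sketched as a least-squares fit of ISA(t) = L / (1 + exp(-k (t - t0))). The study's actual fitting procedure is not specified here, so this toy uses a coarse grid search, and all numbers are invented:

```python
# Toy logistic fit to an invented impervious-surface-area (ISA) time series.
import math

years = list(range(1993, 2015))
isa = [0.05, 0.05, 0.06, 0.07, 0.09, 0.12, 0.17, 0.24, 0.33, 0.42, 0.50,
       0.56, 0.60, 0.62, 0.63, 0.64, 0.64, 0.65, 0.65, 0.65, 0.65, 0.65]

def sse(L, k, t0):
    """Sum of squared errors of the logistic model against the ISA series."""
    return sum((L / (1 + math.exp(-k * (t - t0))) - y) ** 2
               for t, y in zip(years, isa))

# coarse grid over amplitude L, rate k and midpoint year t0
best = min(((L / 100, k / 10, t0)
            for L in range(40, 91, 5)
            for k in range(2, 21)
            for t0 in range(1995, 2010)),
           key=lambda p: sse(*p))
print("amplitude L=%.2f, rate k=%.1f, midpoint t0=%d" % best)
```

The fitted amplitude L plays the role of the "amplitude of urbanization" correlated with SOS/EOS/LOS differences above, and t0 separates the pre- and post-urbanization periods.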
Fleming, A; Schenkel, F S; Koeck, A; Malchiodi, F; Ali, R A; Corredig, M; Mallard, B; Sargolzaei, M; Miglior, F
2017-05-01
The objective of this study was to estimate the heritability of milk fat globule (MFG) size and mid-infrared (MIR) predicted MFG size in Holstein cattle. The genetic correlations between measured and predicted MFG size with milk fat and protein percentage were also investigated. Average MFG size was measured in 1,583 milk samples taken from 254 Holstein cows from 29 herds across Canada. Size was expressed as volume moment mean (D[4,3]) and surface moment mean (D[3,2]). Analyzed milk samples also had average MFG size predicted from their MIR spectral records. Fat and protein percentages were obtained for all test-day milk samples in the cow's lactation. Univariate and bivariate repeatability animal models were used to estimate heritability and genetic correlations. Moderate heritabilities of 0.364 and 0.466 were found for D[4,3] and D[3,2], respectively, and a strong genetic correlation was found between the 2 traits (0.98). The heritabilities for the MIR-predicted MFG size were lower than those estimated for the measured MFG size at 0.300 for predicted D[4,3] and 0.239 for predicted D[3,2]. The genetic correlation between measured and predicted D[4,3] was 0.685; the correlation was slightly higher between measured and predicted D[3,2] at 0.764, likely due to the better prediction accuracy of D[3,2]. Milk fat percentage had moderate genetic correlations with both D[4,3] and D[3,2] (0.538 and 0.681, respectively). The genetic correlation between predicted MFG size and fat percentage was much stronger (greater than 0.97 for both predicted D[4,3] and D[3,2]). The stronger correlation suggests a limitation for the use of the predicted values of MFG size as indicator traits for true average MFG size in milk in selection programs. Larger sample sizes are required to provide better evidence of the estimated genetic parameters. A genetic component appears to exist for the average MFG size in bovine milk, and the variation could be exploited in selection programs. 
Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Accurate reconstruction of 3D cardiac geometry from coarsely-sliced MRI.
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Berenfeld, Omer; Snyder, Brett; Boyers, Pamela; Gold, Jeffrey
2014-02-01
We present a comprehensive validation analysis to assess the geometric impact of using coarsely-sliced short-axis images to reconstruct patient-specific cardiac geometry. The methods utilize high-resolution diffusion tensor MRI (DTMRI) datasets as reference geometries from which synthesized coarsely-sliced datasets simulating in vivo MRI were produced. 3D models are reconstructed from the coarse data using variational implicit surfaces through a commonly used modeling tool, CardioViz3D. The resulting geometries were then compared to the reference DTMRI models from which they were derived to analyze how well the synthesized geometries approximate the reference anatomy. Averaged over seven hearts, 95% spatial overlap, less than 3% volume variability, and normal-to-surface distance of 0.32 mm was observed between the synthesized myocardial geometries reconstructed from 8 mm sliced images and the reference data. The results provide strong supportive evidence to validate the hypothesis that coarsely-sliced MRI may be used to accurately reconstruct geometric ventricular models. Furthermore, the use of DTMRI for validation of in vivo MRI presents a novel benchmark procedure for studies which aim to substantiate their modeling and simulation methods using coarsely-sliced cardiac data. In addition, the paper outlines a suggested original procedure for deriving image-based ventricular models using the CardioViz3D software. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Parsa, Azin; Ibrahim, Norliza; Hassan, Bassam; Motroni, Alessandro; van der Stelt, Paul; Wismeijer, Daniel
2012-01-01
To assess the reliability of cone beam computed tomography (CBCT) voxel gray value measurements using Hounsfield units (HU) derived from multislice computed tomography (MSCT) as a clinical reference (gold standard). Ten partially edentulous human mandibular cadavers were scanned by two types of computed tomography (CT) modalities: multislice CT and cone beam CT. On MSCT scans, eight regions of interest (ROI) designating the site for preoperative implant placement were selected in each mandible. The datasets from both CT systems were matched using a three-dimensional (3D) registration algorithm. The mean voxel gray values of the region around the implant sites were compared between MSCT and CBCT. Significant differences between the mean gray values obtained by CBCT and HU by MSCT were found. In all the selected ROIs, CBCT showed higher mean values than MSCT. A strong correlation (R=0.968) between mean voxel gray values of CBCT and mean HU of MSCT was determined. Voxel gray values from CBCT deviate from actual HU units. However, a strong linear correlation exists, which may permit deriving actual HU units from CBCT using linear regression models.
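The linear correction the authors suggest amounts to an ordinary least-squares fit of MSCT Hounsfield units against CBCT gray values. The paired measurements below are invented for illustration, not the study's data:

```python
# OLS fit mapping CBCT gray values to approximate HU; paired values invented.

def linear_fit(x, y):
    """Return (slope, intercept) of the least-squares line y ~ slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
             sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

cbct_gray = [820, 950, 1100, 1320, 1500, 1710]   # invented CBCT voxel gray values
msct_hu   = [610, 740, 880, 1090, 1260, 1450]    # invented matched MSCT HU

slope, intercept = linear_fit(cbct_gray, msct_hu)
hu_estimate = slope * 1200 + intercept           # convert a new CBCT value
print(round(slope, 3), round(intercept, 1), round(hu_estimate, 1))
```

With a correlation as strong as the reported R=0.968, such a per-scanner regression is a plausible way to derive approximate HU from CBCT, though the fitted coefficients would have to come from a calibration like the study's own paired measurements.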
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Ranking Causal Anomalies via Temporal and Dynamical Analysis on Vanishing Correlations.
Cheng, Wei; Zhang, Kai; Chen, Haifeng; Jiang, Guofei; Chen, Zhengzhang; Wang, Wei
2016-08-01
The modern world has witnessed a dramatic increase in our ability to collect, transmit and distribute real-time monitoring and surveillance data from large-scale information systems and cyber-physical systems. Detecting system anomalies thus attracts a significant amount of interest in many fields such as security, fault management, and industrial optimization. Recently, the invariant network has been shown to be a powerful way of characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariant network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detecting causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible causal components, which has several limitations: 1) fault propagation in the network is ignored; 2) the root causal anomalies may not always be the nodes with a high percentage of vanishing correlations; 3) temporal patterns of vanishing correlations are not exploited for robust detection. To address these limitations, in this paper we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network, and can perform joint inference on both the structural, and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations, and can compensate for unstructured measurement noise in the system. Extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets demonstrate the effectiveness of our approach.
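The baseline the paper improves on, ranking components by their percentage of vanished correlations, can be sketched on a toy invariant network. The nodes, edges and broken-edge set below are invented:

```python
# Toy invariant network: edges are stable correlations, 'broken' are vanished ones.
from collections import defaultdict

edges = [("db", "web"), ("db", "cache"), ("web", "lb"),
         ("cache", "web"), ("db", "batch")]
broken = {("db", "web"), ("db", "cache"), ("db", "batch")}

degree = defaultdict(int)      # invariant edges per component
vanished = defaultdict(int)    # vanished edges per component
for e in edges:
    for node in e:
        degree[node] += 1
        vanished[node] += e in broken

# rank components by the fraction of their correlations that vanished
ranking = sorted(degree, key=lambda n: vanished[n] / degree[n], reverse=True)
print([(n, round(vanished[n] / degree[n], 2)) for n in ranking])
```

Note that `batch`, a leaf node with a single broken edge, ties with the true root cause `db` at a 100% vanishing percentage, which is exactly limitation 2) that motivates the paper's diffusion-based alternative.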
Degradation of metallic materials studied by correlative tomography
NASA Astrophysics Data System (ADS)
Burnett, T. L.; Holroyd, N. J. H.; Lewandowski, J. J.; Ogurreck, M.; Rau, C.; Kelley, R.; Pickering, E. J.; Daly, M.; Sherry, A. H.; Pawar, S.; Slater, T. J. A.; Withers, P. J.
2017-07-01
There is a huge array of characterization techniques available today, and increasingly powerful computing resources allow for the effective analysis and modelling of large datasets. However, each experimental and modelling tool spans only limited time and length scales. Correlative tomography can be thought of as the extension of correlative microscopy into three dimensions, connecting different techniques that each provide different types of information or cover different time or length scales. Here the focus is on the linking of time-lapse X-ray computed tomography (CT) and serial-section electron tomography using the focussed ion beam (FIB)-scanning electron microscope to study the degradation of metals. Correlative tomography can provide new levels of detail by delivering a multiscale 3D picture of key regions of interest. Specifically, the Xe+ Plasma FIB is used as an enabling tool for large-volume high-resolution serial sectioning of materials, and also as a tool for the preparation of microscale test samples and samples for nanoscale X-ray CT imaging. The exemplars presented illustrate general aspects relating to correlative workflows, as well as the time-lapse characterisation of metal microstructures during various failure mechanisms, including ductile fracture of steel and the corrosion of aluminium and magnesium alloys. Correlative tomography is already providing significant insights into materials behaviour, linking together information from different instruments across different scales. Multiscale and multifaceted workflows will become increasingly routine, providing a feed into multiscale materials models as well as illuminating other areas, particularly where hierarchical structures are of interest.
Investigation of the line arrangement of 2D resistivity surveys for 3D inversion
NASA Astrophysics Data System (ADS)
Inoue, Keisuke; Nakazato, Hiroomi; Takeuchi, Mutsuo; Sugimoto, Yoshihiro; Kim, Hee Joon; Yoshisako, Hiroshi; Konno, Michiaki; Shoda, Daisuke
2018-03-01
We have conducted numerical and field experiments to investigate the applicability of electrode configurations and line layouts commonly used for two-dimensional (2D) resistivity surveys to 3D inversion. We examined three kinds of electrode configurations and two types of line arrangements, for 16 resistivity models of a conductive body in a homogeneous half-space. The results of the numerical experiment revealed that the parallel-line arrangement was effective in identifying the approximate location of the conductive body. The orthogonal-line arrangement was optimal for identifying a target body near the line intersection. As a result, we propose that parallel lines are useful to highlight areas of particular interest where further detailed work with an intersecting line could be carried out. In the field experiment, 2D resistivity data were measured on a loam layer with a backfilled pit. The reconstructed resistivity image derived from parallel-line data showed a low-resistivity portion near the backfilled pit. When an orthogonal line was added to the parallel lines, the newly estimated location of the backfilled pit coincided well with the actual location. In a further field application, we collected several 2D resistivity datasets in the Nojima Fault area in Awaji Island. The 3D inversion of these datasets provided a resistivity distribution corresponding to the geological structure. In particular, the Nojima Fault was imaged as the western boundary of a low-resistivity belt, from only two orthogonal lines.
Construction of 4D high-definition cortical surface atlases of infants: Methods and applications.
Li, Gang; Wang, Li; Shi, Feng; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-10-01
In neuroimaging, cortical surface atlases play a fundamental role for spatial normalization, analysis, visualization, and comparison of results across individuals and different studies. However, existing cortical surface atlases created for adults are not suitable for infant brains during the first two postnatal years, which is the most dynamic period of postnatal structural and functional development of the highly-folded cerebral cortex. Therefore, spatiotemporal cortical surface atlases for infant brains are highly desired yet still lacking for accurate mapping of early dynamic brain development. To bridge this significant gap, leveraging our infant-dedicated computational pipeline for cortical surface-based analysis and the unique longitudinal infant MRI dataset acquired in our research center, in this paper, we construct the first spatiotemporal (4D) high-definition cortical surface atlases for the dynamic developing infant cortical structures at seven time points, including 1, 3, 6, 9, 12, 18, and 24 months of age, based on 202 serial MRI scans from 35 healthy infants. For this purpose, we develop a novel method to ensure the longitudinal consistency and unbiasedness to any specific subject and age in our 4D infant cortical surface atlases. Specifically, we first compute the within-subject mean cortical folding by unbiased groupwise registration of longitudinal cortical surfaces of each infant. Then we establish longitudinally-consistent and unbiased inter-subject cortical correspondences by groupwise registration of the geometric features of within-subject mean cortical folding across all infants. Our 4D surface atlases capture both longitudinally-consistent dynamic mean shape changes and the individual variability of cortical folding during early brain development. 
Experimental results on two independent infant MRI datasets show that using our 4D infant cortical surface atlases as templates leads to significantly improved accuracy for spatial normalization of cortical surfaces across infant individuals, in comparison to the infant surface atlases constructed without longitudinal consistency and also the FreeSurfer adult surface atlas. Moreover, based on our 4D infant surface atlases, for the first time, we reveal the spatially-detailed, region-specific correlation patterns of the dynamic cortical developmental trajectories between different cortical regions during early brain development. Copyright © 2015 Elsevier B.V. All rights reserved.
F3D Image Processing and Analysis for Many- and Multi-core Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ, delivering several key image-processing algorithms necessary to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that can efficiently utilize resources, work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data-parallel filters, streaming of out-of-core datasets, and efficient resource, memory, and data management over complex execution sequences of filters greatly expedites any scientific workflow with image-processing requirements. F3D performs several different types of 3D image-processing operations, such as non-linear filtering using bilateral filtering, median filtering, and/or morphological operators (MM). F3D gray-level MM operators are one-pass, constant-time methods that can perform morphological transformations with a line structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images, and consist of two parts: (a) a reference shape, or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines, such as those for performing automated segmentation of image stacks. F3D is also a descendant of Quant-CT, another software package we developed in the past; the two modules are to be integrated in a future version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.
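The grey-level dilation with a line structuring element mentioned above can be illustrated with a naive sliding-window maximum. F3D's one-pass constant-time operators (e.g. van Herk-style algorithms) are more elaborate; this is only a reference sketch with a made-up input image.

```python
import numpy as np

def grey_dilate_line(img, length=3):
    """Grey-level dilation with a horizontal line structuring element:
    each pixel becomes the maximum over a `length`-pixel window on its row.
    Edges are handled by replicating the border pixels."""
    pad = length // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode='edge')
    out = np.empty_like(img)
    for k in range(img.shape[1]):
        out[:, k] = padded[:, k:k + length].max(axis=1)
    return out

img = np.array([[0, 5, 0, 0],
                [1, 1, 9, 1]])
dilated = grey_dilate_line(img, length=3)
```

Rotating the structuring element to other discrete directions, as the F3D description notes, amounts to taking the same window maximum along a different line of pixels.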
3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models
Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.
2015-01-01
3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparing them to ground-truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shifts and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722
NASA Astrophysics Data System (ADS)
Shibahara, A.; Ohwada, M.; Itoh, J.; Kazahaya, K.; Tsukamoto, H.; Takahashi, M.; Morikawa, N.; Takahashi, H.; Yasuhara, M.; Inamura, A.; Oyama, Y.
2009-12-01
We established a 3D geological and hydrological model around Iwate volcano to visualize the 3D relationships between subsurface structure and groundwater profiles. Iwate volcano is a typical polygenetic volcano located in NE Japan, and its body is composed of two stratovolcanoes which have experienced sector collapses several times. Because of this complex structure, groundwater flow around Iwate volcano is strongly restricted by the subsurface structure. For example, Kazahaya and Yasuhara (1999) clarified that shallow groundwater on the north and east flanks of Iwate volcano is recharged at the mountaintop, and that these flow systems are restricted to the north and east area because of the structure of the younger volcanic body collapse. In addition, Ohwada et al. (2006) found that the shallow groundwater on the north and east flanks has relatively high concentrations of major chemical components and high 3He/4He ratios. In this study, we succeeded in visualizing the spatial relationship between subsurface structure and the chemical profiles of the shallow and deep groundwater systems using a 3D model on a GIS. In the study region, a number of geological and hydrological datasets, such as boring log data and groundwater chemical profiles, have been reported. All these paper data were digitized, converted to meshed data on the GIS, and plotted in three-dimensional space to visualize their spatial distribution. We also input a digital elevation model (DEM) around Iwate volcano issued by the Geographical Survey Institute of Japan, and digital geological maps issued by the Geological Survey of Japan, AIST. All 3D models are converted into VRML format and can be used as a versatile dataset on a personal computer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrath, Deirdre M., E-mail: d.mcgrath@sheffield.ac.uk; Lee, Jenny; Foltz, Warren D.
Purpose: Validation of MRI-guided tumor boundary delineation for targeted prostate cancer therapy is achieved via correlation with gold-standard histopathology of radical prostatectomy specimens. Challenges to accurate correlation include matching the pathology sectioning plane with the in vivo imaging slice plane and correcting for the deformation that occurs between in vivo imaging and histology. A methodology is presented for matching the histological sectioning angle and position to the in vivo imaging slices. Methods: Patients (n = 4) with biochemical failure following external beam radiotherapy underwent diagnostic MRI to confirm localized recurrence of prostate cancer, followed by salvage radical prostatectomy. High-resolution 3-D MRI of the ex vivo specimens was acquired to determine the pathology sectioning angle that best matched the in vivo imaging slice plane, using matching anatomical features and implanted fiducials. A novel sectioning device was developed to guide sectioning at the correct angle and to assist the insertion of reference dye marks to aid in histopathology reconstruction. Results: The percentage difference in the positioning of the urethra in the ex vivo pathology sections compared to the positioning in in vivo images was reduced from 34% to 7% through slicing at the best-match angle. Reference dye marks were generated which were visible in ex vivo imaging, in the tissue sections before and after processing, and in histology sections. Conclusions: The method achieved an almost fivefold reduction in the slice-matching error and is readily implementable in combination with standard MRI technology. The technique will be employed to generate datasets for correlation of whole-specimen prostate histopathology with in vivo diagnostic MRI using 3-D deformable registration, allowing assessment of the sensitivity and specificity of MRI parameters for prostate cancer.
Although developed specifically for the prostate, the method is readily adaptable to other types of whole tissue specimen, such as mastectomy or liver resection specimens.
NASA Astrophysics Data System (ADS)
Peña Angulo, Dhais; Trigo, Ricardo; Cortesi, Nicola; Gonzalez-Hidalgo, Jose Carlos
2016-04-01
We have analyzed, at the monthly scale, the spatial distribution of the Pearson correlation between monthly means of maximum (Tmax) and minimum (Tmin) temperatures and weather types (WTs) in the Iberian Peninsula (IP), and represented it on a high-spatial-resolution grid (10 km x 10 km) from the MOTEDAS dataset (González-Hidalgo et al., 2015a). The WT classification was that developed by Jenkinson and Collison, adapted to the Iberian Peninsula by Trigo and DaCamara, using sea level pressure data from the NCAR/NCEP reanalysis dataset (period 1951-2010). The spatial distribution of Pearson correlations shows a clear zonal gradient in Tmax under the zonal advection produced by westerly (W) and easterly (E) flows, with negative correlations along the coast where the air mass comes from but positive correlations in inland areas. The same is true under north-west (NW), north-east (NE), south-west (SW) and south-east (SE) WTs. These spatial gradients are coherent with the spatial distribution of the main mountain chains and offer an example of regional adiabatic phenomena that affect the entire IP (Peña-Angulo et al., 2015b). These spatial gradients have not been observed in Tmin. We suggest that Tmin values are less sensitive to changes in sea level pressure and more related to local factors. These directional WTs have a monthly frequency of over 10 days and could be a valuable tool for downscaling processes. González-Hidalgo, J.C., Peña-Angulo, D., Brunetti, M., Cortesi, N. (2015a): MOTEDAS: a new monthly temperature database for mainland Spain and the trend in temperature (1951-2010). International Journal of Climatology 31, 715-731. DOI: 10.1002/joc.4298. Peña-Angulo, D., Trigo, R., Cortesi, N., González-Hidalgo, J.C. (2015b): The influence of weather types on the monthly average maximum and minimum temperatures in the Iberian Peninsula. Submitted to Hydrology and Earth System Sciences.
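The core quantity mapped in this study, a Pearson correlation between monthly WT frequency and monthly mean temperature at a grid cell, can be computed as follows. The series below are synthetic placeholders, not MOTEDAS data.

```python
import numpy as np

# Synthetic stand-ins: 60 months of westerly-flow frequency and Tmax at one cell.
rng = np.random.default_rng(2)
wt_freq = rng.poisson(10, 60).astype(float)        # days per month with W flow
tmax = 0.3 * wt_freq + rng.normal(0.0, 1.0, 60)    # Tmax responding to W frequency

r = np.corrcoef(wt_freq, tmax)[0, 1]               # Pearson correlation coefficient
```

Repeating this per grid cell and per WT yields the correlation maps whose zonal gradients the abstract describes.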
TH-A-9A-01: Active Optical Flow Model: Predicting Voxel-Level Dose Prediction in Spine SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Wu, Q.J.; Yin, F
2014-06-15
Purpose: To predict voxel-level dose distributions and enable effective evaluation of cord dose sparing in spine SBRT. Methods: We present an active optical flow model (AOFM) to statistically describe cord dose variations and train a predictive model to represent correlations between the AOFM and PTV contours. Thirty clinically accepted spine SBRT plans were evenly divided into training and testing datasets. The development of the predictive model consists of 1) collecting a sequence of dose maps including the PTV and OAR (spinal cord), as well as a set of associated PTV contours adjacent to the OAR, from the training dataset; 2) classifying the data into five groups based on the PTV's location relative to the OAR: two "Top"s, "Left", "Right", and "Bottom"; 3) randomly selecting a dose map as the reference in each group and applying rigid registration and optical flow deformation to match all other maps to the reference; 4) building the AOFM by importing optical flow vectors and dose values into principal component analysis (PCA); 5) applying another PCA to features of the PTV and OAR contours to generate an active shape model (ASM); and 6) computing a linear regression model of the correlations between the AOFM and the ASM. When predicting the dose distribution of a new case in the testing dataset, the PTV is first assigned to a group based on its contour characteristics. Contour features are then transformed into the ASM's principal coordinates of the selected group. Finally, the voxel-level dose distribution is determined by mapping from the ASM space to the AOFM space using the predictive model. Results: The DVHs predicted by the AOFM-based model and those in clinical plans are comparable in the training and testing datasets. At 2% volume, the dose difference between predicted and clinical plans is 4.2±4.4% and 3.3±3.5% in the training and testing datasets, respectively. Conclusion: The AOFM is effective in predicting voxel-level dose distributions for spine SBRT.
Partially supported by NIH/NCI under grant #R21CA161389 and a master research grant from Varian Medical Systems.
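The predictive chain described in the Methods (PCA on dose data, PCA on contour features, and a linear regression between the two coordinate spaces) can be sketched in miniature. All dimensions and data below are synthetic placeholders, not the authors' plans or their optical-flow features.

```python
import numpy as np

rng = np.random.default_rng(1)
shape_feats = rng.normal(size=(15, 6))     # training-plan contour features (toy)
dose_feats = shape_feats @ rng.normal(size=(6, 10)) \
    + 0.01 * rng.normal(size=(15, 10))     # dose features driven by the contours

def pca(X, k):
    """Mean and top-k principal directions of the rows of X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

mu_s, P_s = pca(shape_feats, 3)            # "ASM": shape-space PCA
mu_d, P_d = pca(dose_feats, 3)             # "AOFM" stand-in: dose-space PCA
Z_s = (shape_feats - mu_s) @ P_s.T         # shape PCA coordinates
Z_d = (dose_feats - mu_d) @ P_d.T          # dose PCA coordinates
W, *_ = np.linalg.lstsq(Z_s, Z_d, rcond=None)   # linear predictive model

# Predict the dose features for a new, unseen contour:
new_shape = rng.normal(size=6)
pred = ((new_shape - mu_s) @ P_s.T) @ W @ P_d + mu_d
```

The actual AOFM additionally folds optical-flow displacement vectors into the dose-side PCA; this sketch keeps only the structure of the mapping.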
One-Dimensional (1D)-to-2D Crossover of Spin Correlations in the 3D Magnet ZnMn2O4
Disseler, S. M.; Chen, Y.; Yeo, S.; ...
2015-12-08
In this paper we report on the intriguing evolution of the dynamical spin correlations of the frustrated spinel ZnMn2O4. Inelastic neutron scattering and magnetization studies reveal that the dynamical correlations at high temperatures are 1D. At lower temperature, these dynamical correlations become 2D. Surprisingly, the dynamical correlations condense into a quasi-2D Ising-like ordered state, making this a rare observation of two-dimensional order on the spinel lattice. Remarkably, 3D ordering is not observed down to temperatures as low as 300 mK. This unprecedented dimensional crossover stems from frustrated exchange couplings due to the huge Jahn-Teller distortions around Mn3+ ions on the spinel lattice.
Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images
NASA Astrophysics Data System (ADS)
Dallal, Ahmed H.; Agarwal, Chirag; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory
2017-03-01
Cardiothoracic ratio (CTR) is a widely used radiographic index for assessing heart size on chest X-rays (CXRs). Recent studies have suggested that the two-dimensional CTR might also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time consuming. This study proposes a fast algorithm to automatically estimate CTR indices from CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground-truth heart markings. We used two datasets: a publicly available dataset with 247 images, as well as a clinical dataset with 167 studies from Geisinger Health System. The models of the lung fields are learned from both datasets. The lung regions in a given test image are estimated by registering the learned models to the patient's CXR. The heart region is then estimated by applying the Harris operator to the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices: CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs, and average error rates of 7.9%, 25.5%, and 26.4% (for CTR1D, CTR2D, and CTAR, respectively) were achieved. The proposed method outperforms previous CTR estimation methods without using any heart templates. This method can have important clinical implications, as it can provide fast and accurate estimates of cardiothoracic indices.
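Given binary heart and thorax masks, the 1D ratio and the area ratio can be computed as below. The mask construction here is a simplified assumption for illustration; the paper estimates the boundaries automatically rather than taking masks as given, and its CTR2D variant additionally uses vertical extents.

```python
import numpy as np

def max_width(mask):
    """Largest horizontal extent (in pixels) of a binary mask over all rows."""
    best = 0
    for row in mask:
        cols = np.flatnonzero(row)
        if cols.size:
            best = max(best, cols[-1] - cols[0] + 1)
    return best

def ctr_indices(heart, thorax):
    ctr1d = max_width(heart) / max_width(thorax)   # classic 1D cardiothoracic ratio
    ctar = heart.sum() / thorax.sum()              # cardiothoracic area ratio
    return ctr1d, ctar

# Toy masks: thorax 10 px wide, heart 5 px wide inside it.
thorax = np.zeros((8, 12), dtype=int); thorax[1:7, 1:11] = 1
heart = np.zeros((8, 12), dtype=int);  heart[3:6, 4:9] = 1
ctr1d, ctar = ctr_indices(heart, thorax)
```

On these toy masks the heart spans half the thoracic width, giving CTR1D = 0.5.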
Gridded global surface ozone metrics for atmospheric chemistry model evaluation
NASA Astrophysics Data System (ADS)
Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.; Wmo Gaw, Epa Aqs, Epa Castnet, Capmon, Naps, Airbase, Emep, Eanet Ozone Datasets, All Other Contributors To
2015-07-01
The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent dataset for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and AerChemMIP. From a total dataset of approximately 6600 sites and 500 million hourly observations spanning 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regional background locations that are appropriate for use in global model evaluation. Data volume is generally good from the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe, with sparse coverage over the rest of the globe. This dataset is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, the maximum daily eight-hour average (MDA8), SOMO35, AOT40, and metrics related to air quality regulatory thresholds. Gridded datasets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi:10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting to simplify ongoing and future efforts to work with ozone data from disparate networks in a consistent manner.
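One of the listed metrics, the maximum daily 8-hour average (MDA8), can be computed from a day of hourly values roughly as follows. Regulatory definitions differ in how windows crossing midnight and missing hours are handled; this sketch ignores those details, and the input values are invented.

```python
import numpy as np

def mda8(hourly, window=8):
    """Maximum of all running 8-hour means that start within the day."""
    hourly = np.asarray(hourly, dtype=float)
    means = [hourly[i:i + window].mean()
             for i in range(len(hourly) - window + 1)]
    return max(means)

# 24 hourly ozone values (ppb): flat 30 ppb with an 8-hour afternoon peak of 80 ppb.
day = np.full(24, 30.0)
day[10:18] = 80.0
peak = mda8(day)
```

SOMO35 and AOT40 are built from the same hourly series by accumulating daily MDA8 excess over 35 ppb and daytime hourly excess over 40 ppb, respectively.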
Assimilation of surface NO2 and O3 observations into the SILAM chemistry transport model
NASA Astrophysics Data System (ADS)
Vira, J.; Sofiev, M.
2014-08-01
This paper describes assimilation of trace gas observations into the chemistry transport model SILAM using the 3D-Var method. Assimilation results for year 2012 are presented for the prominent photochemical pollutants ozone (O3) and nitrogen dioxide (NO2). Both species are covered by the Airbase observation database, which provides the observational dataset used in this study. Attention is paid to the background and observation error covariance matrices, which are obtained primarily by iterative application of a posteriori diagnostics. The diagnostics are computed separately for two months representing summer and winter conditions, and further disaggregated by time of day. This allows deriving background and observation error covariance definitions which include both seasonal and diurnal variation. The consistency of the obtained covariance matrices is verified using χ2 diagnostics. The analysis scores are computed for a control set of observation stations withheld from assimilation. Compared to a free-running model simulation, the correlation coefficient for daily maximum values is improved from 0.8 to 0.9 for O3 and from 0.53 to 0.63 for NO2.
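The 3D-Var analysis step and the χ² consistency diagnostic used above can be illustrated with a tiny linear example. All matrices and values below are invented for the sketch and bear no relation to SILAM's configuration or the Airbase data.

```python
import numpy as np

# Toy 3D-Var analysis: x_a = x_b + K (y - H x_b), K = B H^T (H B H^T + R)^(-1).
n_state, n_obs = 4, 2
B = 0.5 * np.eye(n_state)                 # background error covariance
R = 0.1 * np.eye(n_obs)                   # observation error covariance
H = np.zeros((n_obs, n_state))            # observation operator:
H[0, 1] = 1.0                             #   obs 0 sees state 1
H[1, 3] = 1.0                             #   obs 1 sees state 3

x_b = np.array([10.0, 12.0, 9.0, 11.0])   # background (first guess)
y = np.array([13.0, 10.0])                # observations

S = H @ B @ H.T + R                       # innovation covariance
K = B @ H.T @ np.linalg.inv(S)            # gain matrix
d = y - H @ x_b                           # innovation
x_a = x_b + K @ d                         # analysis

# Chi-squared diagnostic: E[d^T S^-1 d / n_obs] = 1 if B and R are consistent,
# which is the consistency check the a posteriori diagnostics verify.
chi2 = d @ np.linalg.inv(S) @ d / n_obs
```

With a diagonal B, only the observed components move toward the observations; a full B would spread the innovations to neighbouring grid points, which is what the derived covariance structures accomplish in the real system.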
Thoracic Idiopathic Scoliosis Severity Is Highly Correlated with 3D Measures of Thoracic Kyphosis.
Sullivan, T Barrett; Reighard, Fredrick G; Osborn, Emily J; Parvaresh, Kevin C; Newton, Peter O
2017-06-07
Loss of thoracic kyphosis has been associated with thoracic idiopathic scoliosis. Modern 3-dimensional (3D) imaging systems allow more accurate characterization of the scoliotic deformity than traditional radiographs. In this study, we utilized 3D calculations to characterize the association between increasing scoliosis severity and changes in the sagittal and axial planes. Patients evaluated in a scoliosis clinic and determined to have either a normal spine or idiopathic scoliosis were included in the analysis. All underwent upright, biplanar radiography with 3D reconstructions. Two-dimensional (2D) measurements of the magnitude of the thoracic major curve and the thoracic kyphosis were recorded. Image processing and MATLAB analysis were utilized to produce 3D calculations of thoracic kyphosis and apical vertebral axial rotation. Regression analysis was performed to determine the correlation of 2D kyphosis, 3D kyphosis, and apical axial rotation with the magnitude of the thoracic major curve. The 442 patients for whom 2D and 3D data were collected had a main thoracic curve magnitude ranging from 1° to 118°. Linear regression analysis of the 2D and 3D T5-T12 kyphosis versus main thoracic curve magnitude yielded significant models (p < 0.05). The 2D model had a minimally negative slope (-0.07), a small R² value (0.02), and a poor correlation coefficient (-0.14). In contrast, the 3D model had a strongly negative slope (-0.54), a high R² value (0.56), and a strong correlation coefficient (-0.75). Curve magnitude also had a strong correlation with loss of 3D T1-T12 kyphosis and increasing apical axial rotation. Segmentally calculated 3D thoracic kyphosis had a strongly negative correlation with the magnitude of the main thoracic curve. With near uniformity, 3D thoracic kyphosis progressively decreased as scoliosis magnitude increased, at a rate of more than half the increase in the main thoracic curve magnitude.
The analysis confirmed a surprisingly strong correlation between scoliosis severity and loss of 3D kyphosis that was absent in the 2D analysis. A similarly strong correlation between curve magnitude and apical axial rotation was evident. These findings lend further credence to the concept that scoliosis progresses in the coronal, sagittal, and axial planes simultaneously. The findings of this study suggest that 3D assessment is critical for adequate characterization of the multiplanar deformity of idiopathic scoliosis and that deformity in the sagittal plane is linked to deformity in the coronal plane. Increasing severity of coronal plane curvature is associated with a progressive loss of thoracic kyphosis that should be anticipated so that the appropriate intraoperative techniques for correction of idiopathic scoliosis can be applied in all 3 planes.
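The reported regression can be reproduced in spirit with a synthetic cohort whose slope is generated to mimic the published 3D model's value of about -0.54. These are not the study's data; the cohort size, intercept, and noise level are arbitrary choices for the sketch.

```python
import numpy as np

# Synthetic cohort mimicking the reported 3D kyphosis-vs-curve relationship.
rng = np.random.default_rng(0)
curve = rng.uniform(1, 118, 442)                      # main thoracic curve (deg)
kyph3d = 30.0 - 0.54 * curve + rng.normal(0, 8, 442)  # 3D T5-T12 kyphosis (deg)

slope, intercept = np.polyfit(curve, kyph3d, 1)       # linear regression
r = np.corrcoef(curve, kyph3d)[0, 1]                  # correlation coefficient
r_squared = r ** 2
```

The strongly negative slope and correlation recovered from such data illustrate why a 2D projection that flattens axial rotation (slope -0.07, r = -0.14 in the paper) can mask a relationship that is pronounced in 3D.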
RefEx, a reference gene expression dataset as a web tool for the functional analysis of genes.
Ono, Hiromasa; Ogasawara, Osamu; Okubo, Kosaku; Bono, Hidemasa
2017-08-29
Gene expression data are accumulating exponentially; thus, functional annotation of such sequence data from metadata is urgently required. However, life scientists have difficulty utilizing the available data due to its sheer magnitude and complicated access. We have developed a web tool for browsing reference gene expression patterns of mammalian tissues and cell lines measured using different methods, which should facilitate the reuse of the precious data archived in several public databases. The web tool is called Reference Expression dataset (RefEx), and it allows users to search by gene name, various types of IDs, chromosomal regions in genetic maps, gene families based on InterPro, gene expression patterns, or biological categories based on Gene Ontology. RefEx also provides information about genes with tissue-specific expression, and relative gene expression values are shown as choropleth maps on 3D human body images from BodyParts3D. Combined with the newly incorporated Functional Annotation of Mammals (FANTOM) dataset, RefEx provides insight into the functional interpretation of unfamiliar genes. RefEx is publicly available at http://refex.dbcls.jp/.
Ultrastructurally-smooth thick partitioning and volume stitching for larger-scale connectomics
Hayworth, Kenneth J.; Xu, C. Shan; Lu, Zhiyuan; Knott, Graham W.; Fetter, Richard D.; Tapia, Juan Carlos; Lichtman, Jeff W.; Hess, Harald F.
2015-01-01
FIB-SEM has become an essential tool for studying neural tissue at resolutions below 10×10×10 nm, producing datasets superior for automatic connectome tracing. We present a technical advance, ultrathick sectioning, which reliably subdivides embedded tissue samples into chunks (20 µm thick) optimally sized and mounted for efficient, parallel FIB-SEM imaging. These chunks are imaged separately and then ‘volume stitched’ back together, producing a final 3D dataset suitable for connectome tracing. PMID:25686390
Analysis of tomographic mineralogical data using YaDiV—Overview and practical case study
NASA Astrophysics Data System (ADS)
Friese, Karl-Ingo; Cichy, Sarah B.; Wolter, Franz-Erich; Botcharnikov, Roman E.
2013-07-01
We introduce the 3D-segmentation and -visualization software YaDiV to the mineralogical application of rock texture analysis. YaDiV was originally designed to process medical DICOM datasets, but thanks to software advancements and additional plugins, this open-source software can now readily be used for fast quantitative morphological characterization of geological objects from tomographic datasets. In this paper, we summarize YaDiV's features and demonstrate the advantages of 3D-stereographic visualization and the accuracy of 3D-segmentation for the analysis of geological samples. For this purpose, we present a virtual and a real use case (here: experimentally crystallized and vesiculated magmatic rocks, corresponding to the composition of the 1991-1995 Unzen eruption, Japan). In particular, the spatial representation of structures in YaDiV allows an immediate, intuitive understanding of the 3D-structures, which may not become clear from 2D-images alone. We compare our object number density calculations with the established classical stereological 3D-correction methods for 2D-images and show that considerably higher quality and accuracy can be achieved. The methods described in this paper do not depend on the nature of the object. Because YaDiV is open-source and users with programming skills can create new plugins themselves, this platform may become applicable to a variety of geological scenarios, from the analysis of textures in tiny rock samples to the interpretation of global geophysical data, as long as the data are provided in tomographic form.
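The object number density calculation the abstract compares against stereological 2D corrections can, given a segmented 3D volume, be done by direct counting of connected components. A minimal pure-numpy sketch (the function names and 6-connectivity choice are assumptions, not YaDiV's implementation):

```python
from collections import deque

import numpy as np

def count_objects(vol):
    """Count face-connected (6-neighbourhood) objects in a boolean
    3D array using breadth-first flood fill."""
    visited = np.zeros(vol.shape, dtype=bool)
    n = 0
    for seed in zip(*np.nonzero(vol)):
        if visited[seed]:
            continue
        n += 1
        visited[seed] = True
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1]
                        and 0 <= nx < vol.shape[2]
                        and vol[nz, ny, nx] and not visited[nz, ny, nx]):
                    visited[nz, ny, nx] = True
                    queue.append((nz, ny, nx))
    return n

def number_density(vol, voxel_mm):
    """Objects per mm^3 for a segmented volume with cubic voxels."""
    return count_objects(vol) / (vol.size * voxel_mm ** 3)
```

Unlike 2D stereological corrections, this direct 3D count needs no assumption about object shape, which is the advantage the paper reports.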
NHDPlusHR: A national geospatial framework for surface-water information
Viger, Roland; Rea, Alan H.; Simley, Jeffrey D.; Hanson, Karen M.
2016-01-01
The U.S. Geological Survey is developing a new geospatial hydrographic framework for the United States, called the National Hydrography Dataset Plus High Resolution (NHDPlusHR), that integrates a diversity of the best-available information, robustly supports ongoing dataset improvements, enables hydrographic generalization to derive alternate representations of the network while maintaining feature identity, and supports modern scientific computing and Internet accessibility needs. This framework is based on the High Resolution National Hydrography Dataset, the Watershed Boundaries Dataset, and elevation from the 3-D Elevation Program, and will provide an authoritative, high precision, and attribute-rich geospatial framework for surface-water information for the United States. Using this common geospatial framework will provide a consistent basis for indexing water information in the United States, eliminate redundancy, and harmonize access to, and exchange of water information.
3D reconstruction software comparison for short sequences
NASA Astrophysics Data System (ADS)
Strupczewski, Adam; Czupryński, Błażej
2014-11-01
Large-scale multiview reconstruction has recently become a very popular area of research. Many open-source tools can be downloaded and run on a personal computer. However, there are few, if any, comparisons of all the available software in terms of accuracy on small datasets that a single user can create; the typical test datasets are archaeological sites or cities comprising thousands of images. This paper presents a comparison of currently available open-source multiview reconstruction software for small datasets. It also compares the open-source solutions with a simple structure-from-motion pipeline developed by the authors from scratch using the OpenCV and Eigen libraries.
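A core step in any structure-from-motion pipeline like the one the authors describe is triangulating a 3D point from its projections in two calibrated views. As a hedged sketch (numpy instead of the paper's OpenCV/Eigen, and `triangulate_dlt` is an illustrative name), the standard linear DLT formulation is:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two views with the linear DLT method.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image coordinates.
    Each observation contributes two rows of the homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest
    # singular value, then de-homogenize.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noise-free projections this recovers the point exactly; real pipelines follow it with bundle adjustment, which is where accuracy differences between tools typically arise.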
Fast Detection of Material Deformation through Structural Dissimilarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Perciano, Talita; Parkinson, Dilworth
2015-10-29
Designing materials that are resistant to extreme temperatures and brittleness relies on assessing structural dynamics of samples. Algorithms are critically important to characterize material deformation under stress conditions. Here, we report on our design of coarse-grain parallel algorithms for image quality assessment based on structural information and on crack detection of gigabyte-scale experimental datasets. We show how key steps can be decomposed into distinct processing flows, one based on structural similarity (SSIM) quality measure, and another on spectral content. These algorithms act upon image blocks that fit into memory, and can execute independently. We discuss the scientific relevance of the problem, key developments, and decomposition of complementary tasks into separate executions. We show how to apply SSIM to detect material degradation, and illustrate how this metric can be allied to spectral analysis for structure probing, while using tiled multi-resolution pyramids stored in HDF5 chunked multi-dimensional arrays. Results show that the proposed experimental data representation supports an average compression rate of 10X, and data compression scales linearly with the data size. We also illustrate how to correlate SSIM to crack formation, and how to use our numerical schemes to enable fast detection of deformation from 3D datasets evolving in time.
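The SSIM measure used above scores two image blocks on luminance, contrast, and structure. As a simplified sketch (a single global window with the standard constants, rather than the sliding-window SSIM and parallel decomposition the paper uses), a per-block score can be computed as:

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    """Single-window SSIM over two whole image blocks (no sliding window),
    usable as a coarse dissimilarity score between successive frames.
    Standard stabilizing constants c1, c2 from the SSIM definition."""
    a = a.astype(float)
    b = b.astype(float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical blocks score 1.0; a drop in SSIM between time steps of the same block flags candidate deformation or crack formation for closer analysis.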