NASA Astrophysics Data System (ADS)
Zeraatpisheh, Mojtaba; Ayoubi, Shamsollah; Jafari, Azam; Finke, Peter
2017-05-01
The efficiency of different digital and conventional soil mapping approaches to produce categorical maps of soil types is determined by cost, sample size, accuracy and the selected taxonomic level. The efficiency of digital and conventional soil mapping approaches was examined in the semi-arid region of Borujen, central Iran. This research aimed to (i) compare two digital soil mapping approaches, multinomial logistic regression and random forest, with the conventional soil mapping approach at four soil taxonomic levels (order, suborder, great group and subgroup), (ii) validate the predicted soil maps against the same validation data set to determine the best method for producing the soil maps, and (iii) select the best soil taxonomic level for each approach at three sample sizes (100, 80, and 60 point observations), in two scenarios with and without a geomorphology map as a spatial covariate. In most predicted maps, using both digital soil mapping approaches, the best results were obtained using the combination of terrain attributes and the geomorphology map, although differences between the scenarios with and without the geomorphology map were not significant. Employing the geomorphology map increased map purity and the Kappa index, and decreased the 'noisiness' of the soil maps. Multinomial logistic regression performed better at higher taxonomic levels (order and suborder), whereas random forest performed better at lower taxonomic levels (great group and subgroup). Multinomial logistic regression was less sensitive than random forest to a decrease in the number of training observations. The conventional soil mapping method produced a map with a larger minimum polygon size because of the traditional cartographic criteria used to make the 1:100,000 geological map (on which the conventional soil map was largely based).
Likewise, the conventional soil map also had a larger average polygon size, which resulted in a lower level of detail. Multinomial logistic regression at the order level (map purity of 0.80), random forest at the suborder (map purity of 0.72) and great group levels (map purity of 0.60), and conventional soil mapping at the subgroup level (map purity of 0.48) produced the most accurate maps in the study area. The multinomial logistic regression method was identified as the most effective approach based on a combined index of map purity, map information content, and map production cost. The combined index also showed that a smaller sample size led to a preference for the order level, while a larger sample size led to a preference for the great group level.
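The evaluation metrics used above, map purity (overall accuracy at validation points) and the Kappa index, can both be derived from paired observed/predicted class labels. A minimal sketch, with invented soil-order labels rather than the study's validation data:

```python
from collections import Counter

def purity_and_kappa(observed, predicted):
    """Map purity (overall accuracy) and Cohen's Kappa from paired labels."""
    n = len(observed)
    # Purity: fraction of validation points whose predicted class matches
    purity = sum(o == p for o, p in zip(observed, predicted)) / n
    # Chance agreement, computed from the marginal class frequencies
    obs, pred = Counter(observed), Counter(predicted)
    p_chance = sum(obs[c] * pred.get(c, 0) for c in obs) / n ** 2
    kappa = (purity - p_chance) / (1 - p_chance)
    return purity, kappa

# Hypothetical soil-order labels at five validation points
obs_labels  = ["Aridisols", "Entisols", "Aridisols", "Inceptisols", "Aridisols"]
pred_labels = ["Aridisols", "Aridisols", "Aridisols", "Inceptisols", "Entisols"]
purity, kappa = purity_and_kappa(obs_labels, pred_labels)
print(round(purity, 2), round(kappa, 2))
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside raw purity when class frequencies are skewed.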
Botsaris, George; Slana, Iva; Liapi, Maria; Dodd, Christine; Economides, Constantinos; Rees, Catherine; Pavlik, Ivo
2010-07-31
Mycobacterium avium subsp. paratuberculosis (MAP) may have a role in the development of Crohn's disease in humans via the consumption of contaminated milk and milk products. Detection of MAP in milk and dairy products has been reported from countries on the European continent, Argentina, the UK and Australia. In this study, three different methods (quantitative real-time PCR, combined phage IS900 PCR and conventional cultivation) were used to detect the presence of MAP in bulk tank milk (BTM) and cheese originating from sheep, goat and mixed milks from farms and products in Cyprus. During the first survey, the presence of MAP was detected in 63 (28.6%) of the cows' BTM samples by quantitative real-time PCR. A second survey of BTM used a new combined phage IS900 PCR assay, and in this case MAP was detected in 50 (22.2%) samples, showing a good level of agreement between the two methods. None of the herds tested were known to be affected by Johne's disease, and the presence of viable MAP was confirmed by conventional culture in only two cases of cows' BTM. This suggests that either rapid method is more sensitive than conventional culture when testing raw milk samples for MAP. The two isolates recovered from BTM were identified by IS1311 PCR-REA as cattle and sheep strains, respectively. In contrast, when cheese samples were tested, MAP DNA was detected by quantitative real-time PCR in seven (25.0%) samples (n=28). However, no viable MAP was detected when either the combined phage IS900 PCR or conventional culture methods were used. Copyright 2010 Elsevier B.V. All rights reserved.
Forest and range mapping in the Houston area with ERTS-1
NASA Technical Reports Server (NTRS)
Heath, G. R.; Parker, H. D.
1973-01-01
ERTS-1 data acquired over the Houston area have been analyzed for applications to forest and range mapping. In the field of forestry, the Sam Houston National Forest (Texas) was chosen as a test site (Scene ID 1037-16244). Conventional image interpretation as well as computer processing methods were used to make classification maps of timber species, condition and land use. The results were compared with timber stand maps which were obtained from aircraft imagery and checked in the field. The preliminary investigations show that conventional interpretation techniques yielded a classification accuracy of 63 percent, while computer-aided interpretation using a clustering technique gave 70 percent accuracy. Computer-aided and conventional multispectral analysis techniques were applied to range vegetation type mapping in the gulf coast marsh, where two species of salt marsh grasses were mapped.
A close-range photogrammetric technique for mapping neotectonic features in trenches
Fairer, G.M.; Whitney, J.W.; Coe, J.A.
1989-01-01
Close-range photogrammetric techniques and newly available computerized plotting equipment were used to map exploratory trench walls that expose Quaternary faults in the vicinity of Yucca Mountain, Nevada. Small-scale structural, lithologic, and stratigraphic features can be rapidly mapped by the photogrammetric method. This method is more accurate and significantly more rapid than conventional trench-mapping methods, and the analytical plotter is capable of producing cartographic definition of high resolution when detailed trench maps are necessary.
Win, Khin Thanda; Vegas, Juan; Zhang, Chunying; Song, Kihwan; Lee, Sanghyeob
2017-01-01
QTL mapping using NGS-assisted BSA was successfully applied to an F2 population for downy mildew resistance in cucumber. QTLs detected by NGS-assisted BSA were confirmed by conventional QTL analysis. Downy mildew (DM), caused by Pseudoperonospora cubensis, is one of the most destructive foliar diseases in cucumber. QTL mapping is a fundamental approach for understanding the genetic inheritance of DM resistance in cucumber. Recently, many studies have reported that a combination of bulked segregant analysis (BSA) and next-generation sequencing (NGS) can be a rapid and cost-effective way of mapping QTLs. In this study, we applied NGS-assisted BSA to QTL mapping of DM resistance in cucumber and confirmed the results by conventional QTL analysis. By sequencing two DNA pools, each consisting of ten individuals showing high resistance or susceptibility to DM from an F2 population, we identified single nucleotide polymorphisms (SNPs) between the two pools. We employed a statistical method for QTL mapping based on these SNPs. Five QTLs, dm2.2, dm4.1, dm5.1, dm5.2, and dm6.1, were detected, and dm2.2 showed the largest effect on DM resistance. Conventional QTL analysis using the F2 confirmed dm2.2 (R2 = 10.8-24%) and dm5.2 (R2 = 14-27.2%) as major QTLs and dm4.1 (R2 = 8%) as a minor QTL, but could not detect dm5.1 and dm6.1. A new QTL on chromosome 2, dm2.1 (R2 = 28.2%), was detected by the conventional QTL method using an F3 population. This study demonstrated the effectiveness of NGS-assisted BSA for mapping QTLs conferring DM resistance in cucumber and revealed the unique genetic inheritance of DM resistance in this population through two distinct major QTLs on chromosome 2 that mainly harbor DM resistance.
Computer-based self-organized tectonic zoning: a tentative pattern recognition for Iran
NASA Astrophysics Data System (ADS)
Zamani, Ahmad; Hashemi, Naser
2004-08-01
Conventional methods of tectonic zoning frequently suffer from two deficiencies: the large uncertainty involved in zoning based on non-quantitative, subjective analysis, and the inability to interpret a large amount of data accurately "by eye". To alleviate these deficiencies, the multivariate statistical method of cluster analysis has been utilized to seek and separate zones with similar tectonic patterns and to construct automated, self-organized multivariate tectonic zoning maps. This analytical method of tectonic regionalization is particularly useful for showing trends in the tectonic evolution of a region that could not be discovered by other means. To illustrate, the method has been applied to produce a general-purpose numerical tectonic zoning map of Iran. While there are some similarities between the self-organized multivariate numerical maps and the conventional maps, the cluster solution maps reveal some remarkable features that cannot be observed on the current tectonic maps. The following specific examples should be noted: (1) the much-disputed extent and rigidity of the Lut Rigid Block, described as the microplate of east Iran, is clearly revealed on the self-organized numerical maps; (2) the cluster solution maps reveal a striking similarity between this microplate and northern Central Iran, including the Great Kavir region; (3) contrary to the conventional map, the cluster solution maps make a clear distinction between the East Iranian Ranges and the Makran Mountains; and (4) interesting similarities between the Azarbaijan region in the northwest and the Makran Mountains in the southeast, and between the Kopet Dagh Ranges in the northeast and the Zagros Folded Belt in the southwest of Iran, are revealed in the clustering process. This new approach to tectonic zoning is a starting point and is expected to be improved and refined by the collection of new data. The method is also a useful tool in studying neotectonics, seismotectonics, seismic zoning, and hazard estimation of seismogenic regions.
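Cluster analysis of this kind can be sketched with a plain k-means loop over grid cells, each described by a few tectonic variables. The feature set, values, and the choice of k-means here are illustrative assumptions; the paper does not commit to this exact algorithm variant:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: group multivariate feature vectors (one per grid cell)
    into k zones of similar character."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each cell to the nearest center (squared Euclidean distance)
        labels = [min(range(k),
                      key=lambda j: sum((p - c) ** 2
                                        for p, c in zip(pt, centers[j])))
                  for pt in points]
        # Move each center to the mean of its assigned cells
        for j in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = tuple(sum(d) / len(members) for d in zip(*members))
    return labels

# Toy grid-cell features: (crustal thickness km, seismicity rate, fault density)
cells = [(40, 0.10, 0.20), (41, 0.12, 0.25), (20, 0.90, 0.80), (22, 0.85, 0.70)]
zones = kmeans(cells, k=2)
print(zones)  # two cells per zone; exact label numbering depends on seeding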
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
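For background, a global tone-mapping operator of the general kind discussed compresses HDR luminance into display range. The sketch below uses the classic Reinhard global operator L/(1+L) as a textbook baseline; it is not the authors' proposed color-rendering model:

```python
import math

def reinhard_tonemap(luminances, key=0.18):
    """Global Reinhard operator: normalize by the log-average luminance,
    then compress with L/(1+L) so every value lands in [0, 1)."""
    eps = 1e-6
    log_avg = math.exp(sum(math.log(eps + L) for L in luminances) / len(luminances))
    scaled = [key * L / log_avg for L in luminances]
    return [L / (1.0 + L) for L in scaled]

# Scene luminances spanning roughly six orders of magnitude
hdr = [0.01, 0.5, 10.0, 5000.0]
ldr = reinhard_tonemap(hdr)
print(ldr)  # all values in [0, 1), ordering preserved
```

Operating on luminance alone, as here, is exactly what invites the color distortions the abstract says the proposed chromatic/achromatic treatment avoids.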
A flood map based DOI decoding method for block detector: a GATE simulation study.
Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu
2014-01-01
Positron emission tomography (PET) systems using detectors with depth-of-interaction (DOI) capabilities can achieve higher spatial resolution and better image quality than those without DOI. To date, most DOI methods developed are not cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for a low-cost conventional block detector with four-PMT readout. Using this method, the DOI information can be directly extracted from the DOI-related crystal spot deformation in the flood map. GATE simulations are then carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. We therefore conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing the complexity and cost of an entire PET system.
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
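The subspace-constrained least-squares step can be illustrated in a tiny setting: model a voxel's time course as a combination of a few temporal basis functions and solve the normal equations for the coefficients. The basis functions and data below are invented for illustration, not derived from an MRF dictionary:

```python
import math

# Two assumed temporal basis functions (columns of Phi) over six time points
phi1 = [1.0] * 6
phi2 = [math.exp(-t / 2.0) for t in range(6)]

# Noiseless "measured" time course lying in the subspace: 0.5*phi1 + 2.0*phi2
data = [0.5 * a + 2.0 * b for a, b in zip(phi1, phi2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Normal equations (Phi^T Phi) c = Phi^T d, solved by hand for 2 coefficients
g11, g12, g22 = dot(phi1, phi1), dot(phi1, phi2), dot(phi2, phi2)
r1, r2 = dot(phi1, data), dot(phi2, data)
det = g11 * g22 - g12 * g12
c1 = (g22 * r1 - g12 * r2) / det
c2 = (g11 * r2 - g12 * r1) / det
print(round(c1, 3), round(c2, 3))  # recovers the true coefficients 0.5 and 2.0
```

Constraining the fit to a low-dimensional temporal subspace is what turns the reconstruction into this kind of small, well-posed least-squares problem.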
Complementary and conventional medicine: a concept map
Baldwin, Carol M; Kroesen, Kendall; Trochim, William M; Bell, Iris R
2004-01-01
Background: Despite the substantive literature from survey research that has accumulated on complementary and alternative medicine (CAM) in the United States and elsewhere, very little research has been done to assess the conceptual domains that CAM and conventional providers would emphasize in CAM survey studies. The objective of this study is to describe and interpret the results of concept mapping with conventional and CAM practitioners from a variety of backgrounds on the topic of CAM. Methods: Concept mapping, including free sorts, ratings, and multidimensional scaling, was used to organize conceptual domains relevant to CAM into a visual "cluster map." The panel consisted of CAM providers, conventional providers, and university faculty, and was convened to help formulate conceptual domains to guide the development of a CAM survey for use with United States military veterans. Results: Eight conceptual clusters were identified: 1) Self-assessment, Self-care, and Quality of Life; 2) Health Status, Health Behaviors; 3) Self-assessment of Health; 4) Practical/Economic/Environmental Concerns; 5) Needs Assessment; 6) CAM vs. Conventional Medicine; 7) Knowledge of CAM; and 8) Experience with CAM. The clusters suggest panelists saw interactions between CAM and conventional medicine as a critical component of the current medical landscape. Conclusions: Concept mapping provided insight into how CAM and conventional providers view the domain of health care, and was shown to be a useful tool in the formulation of CAM-related conceptual domains. PMID:15018623
Goodwin, Cody R; Sherrod, Stacy D; Marasco, Christina C; Bachmann, Brian O; Schramm-Sapyta, Nicole; Wikswo, John P; McLean, John A
2014-07-01
A metabolic system is composed of inherently interconnected metabolic precursors, intermediates, and products. The analysis of untargeted metabolomics data has conventionally been performed through the use of comparative statistics or multivariate statistical analysis-based approaches; however, each falls short in representing the related nature of metabolic perturbations. Herein, we describe a complementary method for the analysis of large metabolite inventories using a data-driven approach based upon a self-organizing map algorithm. This workflow allows for the unsupervised clustering, and subsequent prioritization, of correlated features through Gestalt comparisons of metabolic heat maps. We describe this methodology in detail, including a comparison to conventional metabolomics approaches, and demonstrate the application of this method to the analysis of the metabolic repercussions of prolonged cocaine exposure in rat sera profiles.
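A self-organizing map of the kind referenced can be sketched in a few lines: a small chain of nodes learns to tile the feature space so that similar profiles map to nearby nodes. The toy "metabolite feature" vectors and the 1-D grid below are assumptions for illustration, not the authors' workflow:

```python
import math, random

def train_som(data, grid=4, iters=200, seed=1):
    """Minimal 1-D self-organizing map: `grid` nodes learn to tile the
    feature space so that similar profiles map to nearby nodes."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)                       # learning rate decays
        radius = max(1.0, (grid / 2) * (1 - t / iters))  # neighborhood shrinks
        x = rng.choice(data)
        # Best-matching unit (BMU): the node closest to the sample
        bmu = min(range(grid),
                  key=lambda i: sum((a - w) ** 2 for a, w in zip(x, nodes[i])))
        for i in range(grid):
            h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))  # neighborhood weight
            nodes[i] = [w + lr * h * (a - w) for w, a in zip(nodes[i], x)]
    return nodes

def bmu_of(x, nodes):
    return min(range(len(nodes)),
               key=lambda i: sum((a - w) ** 2 for a, w in zip(x, nodes[i])))

# Toy "metabolite feature" vectors forming two obvious groups
features = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.95), (0.85, 1.0)]
nodes = train_som(features)
print([bmu_of(f, nodes) for f in features])
```

After training, the two groups land on different map nodes, which is the property the heat-map comparisons exploit.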
Russo, Mario S; Drago, Fabrizio; Silvetti, Massimo S; Righi, Daniela; Di Mambro, Corrado; Placidi, Silvia; Prosperi, Monica; Ciani, Michele; Naso Onofrio, Maria T; Cannatà, Vittorio
2016-06-01
Aim: Transcatheter cryoablation is a well-established technique for the treatment of atrioventricular nodal re-entry tachycardia and atrioventricular re-entry tachycardia in children. Fluoroscopy or three-dimensional mapping systems can be used to perform the ablation procedure. The aim of this study was to compare the success rate of cryoablation procedures for the treatment of right septal accessory pathways and atrioventricular nodal re-entry circuits in children using conventional or three-dimensional mapping, and to evaluate whether three-dimensional mapping was associated with a reduced patient radiation dose compared with traditional mapping. In 2013, 81 children underwent transcatheter cryoablation at our institution, using conventional mapping in 41 children (32 atrioventricular nodal re-entry tachycardia and nine atrioventricular re-entry tachycardia) and three-dimensional mapping in 40 children (24 atrioventricular nodal re-entry tachycardia and 16 atrioventricular re-entry tachycardia). Using conventional mapping, the overall success rate was 78.1 and 66.7% in patients with atrioventricular nodal re-entry tachycardia or atrioventricular re-entry tachycardia, respectively. Using three-dimensional mapping, the overall success rate was 91.6 and 75%, respectively (p = ns). The use of three-dimensional mapping was associated with a reduction in cumulative air kerma and cumulative air kerma-area product of 76.4 and 67.3%, respectively (p < 0.05). Compared with the conventional fluoroscopy-guided method, the use of three-dimensional mapping for cryoablation of right septal accessory pathways and atrioventricular nodal re-entry circuits in children was associated with a significant reduction in patient radiation dose, without a significant difference in success rate.
New methods, algorithms, and software for rapid mapping of tree positions in coordinate forest plots
A. Dan Wilson
2000-01-01
The theories and methodologies for two new tree mapping methods, the Sequential-target method and the Plot-origin radial method, are described. The methods accommodate the use of any conventional distance measuring device and compass to collect horizontal distance and azimuth data between source or reference positions (origins) and target trees. Conversion equations...
Nauleau, Pierre; Apostolakis, Iason; McGarry, Matthew; Konofagou, Elisa
2018-05-29
The stiffness of the arteries is known to be an indicator of the progression of various cardiovascular diseases. Clinically, the pulse wave velocity (PWV) is used as a surrogate for arterial stiffness. Pulse wave imaging (PWI) is a non-invasive, ultrasound-based imaging technique capable of mapping the motion of the vessel walls, allowing the local assessment of arterial properties. Conventionally, a distinctive feature of the displacement wave (e.g. the 50% upstroke) is tracked across the map to estimate the PWV. However, the presence of reflections, such as those generated at the carotid bifurcation, can bias the PWV estimation. In this paper, we propose a two-step cross-correlation based method to characterize arteries using the information available in the PWI spatio-temporal map. First, the area under the cross-correlation curve is proposed as an index for locating the regions of different properties. Second, a local peak of the cross-correlation function is tracked to obtain a less biased estimate of the PWV. Three series of experiments were conducted in phantoms to evaluate the capabilities of the proposed method compared with the conventional method. In the ideal case of a homogeneous phantom, the two methods performed similarly and correctly estimated the PWV. In the presence of reflections, the proposed method provided a more accurate estimate than conventional processing: e.g. for the soft phantom, biases of -0.27 and -0.71 m/s were observed. In a third series of experiments, the correlation-based method was able to locate two regions of different properties with an error smaller than 1 mm. It also provided more accurate PWV estimates than conventional processing (biases: -0.12 versus -0.26 m/s). Finally, the in vivo feasibility of the proposed method was demonstrated in eleven healthy subjects. The results indicate that the correlation-based method might be less precise in vivo but more accurate than the conventional method.
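The essence of tracking a feature across a spatio-temporal map, estimating the arrival delay between two beam positions and dividing distance by delay, can be sketched as follows. The waveform, frame rate, and beam spacing are synthetic assumptions; this is not the authors' full two-step correlation algorithm:

```python
def xcorr_delay(sig_a, sig_b):
    """Return the lag (in samples) by which sig_a trails sig_b,
    found as the argmax of their raw cross-correlation."""
    n = len(sig_a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n):
        val = sum(sig_a[i + lag] * sig_b[i] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Synthetic displacement pulse; the downstream beam sees it 3 frames later
pulse = [0, 0, 1, 3, 6, 3, 1, 0, 0, 0, 0, 0]
delayed = [0, 0, 0] + pulse[:-3]

fs = 1000.0   # frame rate in Hz (assumed)
dx = 0.012    # distance between the two beam positions in m (assumed)
lag = xcorr_delay(delayed, pulse)
pwv = dx / (lag / fs)
print(lag, pwv)  # → 3 4.0
```

Correlating whole waveforms rather than tracking a single feature point is what makes this style of estimator less sensitive to reflection-distorted upstrokes.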
Effectiveness of Mind Mapping in English Teaching among VIII Standard Students
ERIC Educational Resources Information Center
Hallen, D.; Sangeetha, N.
2015-01-01
The aim of the study is to find out the effectiveness of mind mapping technique over conventional method in teaching English at high school level (VIII), in terms of Control and Experimental group. The sample of the study comprised, 60 VIII Standard students in Tiruchendur Taluk. Mind Maps and Achievement Test (Pretest & Posttest) were…
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions, but at the expense of reduced per-plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes; the mapped images are then smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images produced by the new method have an improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters, with cutoff frequency = 0.5, 0.7, and 1.0 × the Nyquist frequency, and a ramp filter were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR), and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and yielded larger noise reduction or better image feature preservation (i.e., smaller EGGR) than the conventional method.
Structured light system calibration method with optimal fringe angle.
Li, Beiwen; Zhang, Song
2014-11-20
For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H) mm × 250(W) mm × 500(D) mm.
GESFIDE-PROPELLER Approach for Simultaneous R2 and R2* Measurements in the Abdomen
Jin, Ning; Guo, Yang; Zhang, Zhuoli; Zhang, Longjiang; Lu, Guangming; Larson, Andrew C.
2013-01-01
Purpose: To investigate the feasibility of combining GESFIDE with PROPELLER sampling approaches for simultaneous abdominal R2 and R2* mapping. Materials and Methods: R2 and R2* measurements were performed in 9 healthy volunteers and phantoms using the GESFIDE-PROPELLER and the conventional Cartesian-sampling GESFIDE approaches. Results: Images acquired with the GESFIDE-PROPELLER sequence effectively mitigated the respiratory motion artifacts that were clearly evident in the images acquired using the conventional GESFIDE approach. There was no significant difference between GESFIDE-PROPELLER and reference MGRE R2* measurements (p = 0.162), whereas the Cartesian-sampling-based GESFIDE methods significantly overestimated R2* values compared to MGRE measurements (p < 0.001). Conclusion: The GESFIDE-PROPELLER sequence provided high-quality images and accurate abdominal R2 and R2* maps while avoiding the motion artifacts common to the conventional Cartesian-sampling GESFIDE approaches. PMID:24041478
NASA Technical Reports Server (NTRS)
Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.
1996-01-01
We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.
A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps
NASA Astrophysics Data System (ADS)
Brown, Scott
Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. The methods included four fuzzy categorical models (the fuzzy kappa model, fuzzy inference, cell aggregation, and the epsilon band); the fifth method used conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. The rough-set epsilon-band method reported the greatest increase in similarity for test maps relative to control data. Conversely, the fuzzy inference model reported a decrease in test map similarity.
NASA Technical Reports Server (NTRS)
Bodechtel, J.; Nithack, J.; Dibernardo, G.; Hiller, K.; Jaskolla, F.; Smolka, A.
1975-01-01
Utilizing LANDSAT and Skylab multispectral imagery from 1972 and 1973, a land use map of the mountainous regions of Italy was evaluated at a scale of 1:250,000. Seven level I categories were identified by conventional methods of photointerpretation. Images of multispectral scanner (MSS) bands 5 and 7, or equivalents, were mainly used. Areas of less than 200 by 200 m were classified, and standard procedures were established for the interpretation of multispectral satellite imagery. Land use maps were produced for central and southern Europe, indicating that the existing land use maps could be updated and optimized. The complexity of European land use patterns, the intensive morphology of young mountain ranges, and time-cost calculations are the reasons that the conventional techniques applied here are superior to automatic evaluation.
Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.
Choi, Jae-Seok; Kim, Munchurl
2017-03-01
Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, used only one simple yet coarse linear mapping per patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to the local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture.
Experimental results show that the proposed GLM-SI method outperforms most state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural networks (SRCNN15). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance with an average PSNR gain of 0.79 dB, and can be used for scale factors of 3 or higher.
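The candidate-combination step of GLM-SI can be sketched in NumPy; the patch dimensions, the number of mappings, and the uniform global weights below are illustrative stand-ins for the paper's trained, cluster-wise values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 3x3 LR patch (9 px) mapped to a 6x6 HR patch (36 px),
# with K local linear mappings (the paper derives 25, one per subpatch).
lr_dim, hr_dim, K = 9, 36, 5

# Stand-ins for the cluster-wise local linear mappings learned off-line.
local_maps = [rng.standard_normal((hr_dim, lr_dim)) for _ in range(K)]

# Stand-in for the trained global regressor: one weight per HR candidate.
global_weights = np.full(K, 1.0 / K)

def glm_super_resolve(lr_patch):
    """Apply every local mapping, then regress the K candidates into one HR patch."""
    candidates = np.stack([A @ lr_patch for A in local_maps])  # shape (K, hr_dim)
    return global_weights @ candidates                         # shape (hr_dim,)

hr_patch = glm_super_resolve(rng.standard_normal(lr_dim))
print(hr_patch.shape)  # (36,)
```

In the actual method, the weights would come from a regressor trained to minimize reconstruction error over the 25 candidates, rather than a plain average.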
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Dennon, S. R.
1986-01-01
A review of the Melick method of inlet flow dynamic distortion prediction by statistical means is provided. These developments include the general Melick approach with full dynamic measurements, a limited dynamic measurement approach, and a turbulence modelling approach which requires no dynamic rms pressure fluctuation measurements. These modifications are evaluated by comparing predicted and measured peak instantaneous distortion levels from provisional inlet data sets. A nonlinear mean-line following vortex model is proposed and evaluated as a potential criterion for improving the peak instantaneous distortion map generated from the conventional linear vortex of the Melick method. The model is simplified to a series of linear vortex segments that lie along the mean line. Maps generated with this new approach are compared with conventionally generated maps, as well as measured peak instantaneous maps. Inlet data sets include subsonic, transonic, and supersonic inlets under various flight conditions.
Kim, Hee Kyung; Laor, Tal; Horn, Paul S; Wong, Brenda
2010-01-01
To determine the feasibility of using T2 mapping as a quantitative method to longitudinally follow the disease activity in children with Duchenne muscular dystrophy (DMD) who are treated with steroids. Eleven boys with DMD (age range: 5-14 years) underwent evaluation with the clinical functional score (CFS), conventional pelvic MRI and T2 mapping before and during steroid therapy. Gluteus muscle inflammation and fatty infiltration were evaluated on conventional MRI. Histograms and mean T2 relaxation times were obtained from the T2 maps. The CFS, the conventional MRI findings and the T2 values were compared before and during steroid therapy. None of the patients showed interval change of their CFSs. On conventional MRI, none of the images showed muscle inflammation. During steroid treatment, two boys showed increased fatty infiltration on conventional MRI, and both had an increase of the mean T2 relaxation time (p < 0.05). The remaining nine boys had no increase in fatty infiltration. Of these, three showed an increased mean T2 relaxation time (p < 0.05), two showed no change and four showed a decreased mean T2 relaxation time (p < 0.05). T2 mapping is a feasible technique to evaluate longitudinal muscle changes in children who receive steroid therapy for DMD. Differences in the mean T2 relaxation time may reflect alterations in disease activity, even when the conventional MRI findings and CFS remain stable.
IMPROVEMENT OF EFFICIENCY OF CUT AND OVERLAY ASPHALT WORKS BY USING MOBILE MAPPING SYSTEM
NASA Astrophysics Data System (ADS)
Yabuki, Nobuyoshi; Nakaniwa, Kazuhide; Kidera, Hiroki; Nishi, Daisuke
When cut-and-overlay asphalt work is done to improve road pavement, the conventional road surface elevation survey with levels often requires traffic regulation and takes much time and effort. Recently, new surveying methods using non-prismatic total stations or fixed 3D laser scanners have been proposed in industry, but they have not been widely adopted due to their high cost. In this research, we propose a new method using Mobile Mapping Systems (MMS) in order to increase efficiency and reduce cost. In this method, small white marks are painted at intervals of 10 m along the road to identify cross sections, and the elevations of the white marks are corrected with accurate survey data. To verify the proposed method, we conducted an experiment comparing it with the conventional level survey method and the fixed 3D laser scanning method on a road at Osaka University. The results showed that the proposed method achieved accuracy similar to that of the other methods while being more efficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S; Chao, C; Columbia University, NY, NY
2014-06-01
Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with a 1 cm positioning error during calibration, respectively. Open fields of 37 cm × 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagation error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on the beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and even a small 1 mm positioning error might lead to up to 8% calibration error.
Since the sensitivities are only slightly dependent on the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.
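The near-linear error propagation reported above can be reproduced with a toy model; the beam-profile slopes, the calibration point, and the number of propagation steps below are invented for illustration, not taken from the study:

```python
import numpy as np

# Invented 1-D beam profiles vs. off-axis distance (cm): a flattened beam is
# nearly uniform, while an FFF beam falls off steeply away from the axis.
def flattened(x): return 1.0 - 0.002 * abs(x)
def fff(x):       return 1.0 - 0.028 * abs(x)

def local_error(profile, x=5.0, shift=1.0):
    """Relative sensitivity error when the device is shifted during calibration."""
    return profile(x + shift) / profile(x) - 1.0

def propagated_error(local, steps=12):
    """The iterative calibration chains the local factor detector-to-detector,
    so the error compounds multiplicatively toward the array edge."""
    return (1.0 + abs(local)) ** steps - 1.0

e_conv, e_fff = local_error(flattened), local_error(fff)
print(abs(e_fff) / abs(e_conv) > 10)                       # True: FFF far more sensitive
print(propagated_error(e_fff) > propagated_error(e_conv))  # True
```

For small local errors, (1 + e)^steps - 1 ≈ steps·e, which is the near-linear growth with positioning error that the study observes.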
Landslide inventory maps: New tools for an old problem
NASA Astrophysics Data System (ADS)
Guzzetti, Fausto; Mondini, Alessandro Cesare; Cardinali, Mauro; Fiorucci, Federica; Santangelo, Michele; Chang, Kang-Tsung
2012-04-01
Landslides are present in all continents, and play an important role in the evolution of landscapes. They also represent a serious hazard in many areas of the world. Despite their importance, we estimate that landslide maps cover less than 1% of the slopes in the landmasses, and systematic information on the type, abundance, and distribution of landslides is lacking. Preparing landslide maps is important to document the extent of landslide phenomena in a region, to investigate the distribution, types, pattern, recurrence and statistics of slope failures, to determine landslide susceptibility, hazard, vulnerability and risk, and to study the evolution of landscapes dominated by mass-wasting processes. Conventional methods for the production of landslide maps rely chiefly on the visual interpretation of stereoscopic aerial photography, aided by field surveys. These methods are time consuming and resource intensive. New and emerging techniques based on satellite, airborne, and terrestrial remote sensing technologies promise to facilitate the production of landslide maps, reducing the time and resources required for their compilation and systematic update. In this work, we first outline the principles for landslide mapping, and we review the conventional methods for the preparation of landslide maps, including geomorphological, event, seasonal, and multi-temporal inventories. Next, we examine recent and new technologies for landslide mapping, considering (i) the exploitation of very-high-resolution digital elevation models to analyze surface morphology, (ii) the visual interpretation and semi-automatic analysis of different types of satellite images, including panchromatic, multispectral, and synthetic aperture radar images, and (iii) tools that facilitate landslide field mapping. Next, we discuss the advantages and the limitations of the new remote sensing data and technology for the production of geomorphological, event, seasonal, and multi-temporal inventory maps.
We conclude by arguing that the new tools will help to improve the quality of landslide maps, with positive effects on all derivative products and analyses, including erosion studies and landscape modeling, susceptibility and hazard assessments, and risk evaluations.
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce the aliasing and interpolation errors produced by conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates measurement noise and other measurement errors such as artificial discontinuities. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values as the interpolation method is changed among options such as "nearest," "linear," "cubic," and "spline" fitting in Matlab.
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
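A minimal sketch of the Zernike-based re-sampling idea follows; the basis here is a simplified subset of low-order Zernike terms, and the PSD-based treatment of mid and high spatial frequencies is omitted:

```python
import numpy as np

def zernike_basis(x, y):
    """A simplified low-order basis (piston, tip, tilt, defocus, astigmatism)."""
    r2 = x**2 + y**2
    return np.stack([np.ones_like(x), x, y, 2*r2 - 1, x**2 - y**2], axis=-1)

def resample_via_zernike(surface, n_new):
    """Fit coefficients on the measured grid, then evaluate the analytical
    polynomials on the new grid: no interpolation, hence no aliasing."""
    n_old = surface.shape[0]
    def grid(n):
        c = np.linspace(-1.0, 1.0, n)
        return np.meshgrid(c, c)
    xo, yo = grid(n_old)
    coeffs, *_ = np.linalg.lstsq(
        zernike_basis(xo.ravel(), yo.ravel()), surface.ravel(), rcond=None)
    xn, yn = grid(n_new)
    return (zernike_basis(xn.ravel(), yn.ravel()) @ coeffs).reshape(n_new, n_new)

# Synthetic "measured" map: defocus plus a tilt term (lies in the basis span).
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
measured = 0.5 * (2 * (x**2 + y**2) - 1) + 0.1 * x
upsampled = resample_via_zernike(measured, 64)
print(upsampled.shape)  # (64, 64)
```

Because the new grid values come from analytical expressions of the fitted coefficients, changing the output sampling cannot change the underlying surface, unlike pixel-wise interpolation.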
Kondo, Yukihito; Okunishi, Eiji
2014-10-01
Moiré method in scanning transmission electron microscopy allows observing a magnified two-dimensional atomic column elemental map of a higher pixel resolution with a lower electron dose unlike conventional atomic column mapping. The magnification of the map is determined by the ratio between the pixel size and the lattice spacing. With proper ratios for the x and y directions, we could observe magnified elemental maps, homothetic to the atomic arrangement in the sample of SrTiO3 [0 0 1]. The map showed peaks at all expected oxygen sites in SrTiO3 [0 0 1]. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.
Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun
2015-08-31
Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
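The Mahalanobis-weighted refinement can be illustrated on a simplified problem; for a pure 2-D translation the minimization has a closed form, whereas the paper optimizes a full camera pose from an initial PnP estimate (all data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic probabilistic feature map: 2-D feature positions, each with its
# own covariance from modeling sensor and camera-pose uncertainty.
n = 50
map_pts = rng.uniform(0.0, 10.0, (n, 2))
covs = np.array([np.diag(rng.uniform(0.01, 0.5, 2)) for _ in range(n)])

# Observations of the same features from a camera offset by an unknown t.
t_true = np.array([1.5, -0.7])
obs = map_pts + t_true + np.array(
    [rng.multivariate_normal([0.0, 0.0], C) for C in covs])

# Refinement: minimize the sum of squared Mahalanobis distances; for a pure
# translation this weighted least-squares problem has a closed-form solution.
inv_covs = np.linalg.inv(covs)                  # batched 2x2 inverses
H = inv_covs.sum(axis=0)
b = np.einsum('nij,nj->i', inv_covs, obs - map_pts)
t_est = np.linalg.solve(H, b)
print(t_est)  # close to [1.5, -0.7]
```

Down-weighting uncertain features by their inverse covariance is what distinguishes this refinement from a plain least-squares fit, which would treat every correspondence equally.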
NASA Astrophysics Data System (ADS)
Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin
2016-05-01
With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method was proposed that accounts for the capture settings. The method for calculating the colorimetric values of a measured image consists of five main steps. These include converting RGB values to their equivalents under the training settings, through factors based on an imaging-system model, so as to build a bridge between different settings; and applying scaling factors in the preparation steps of the transformation mapping, to avoid errors resulting from the nonlinearity of polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as that of the conventional method under a particular lighting condition.
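The settings-conversion step can be sketched with the standard linear camera exposure model and a linear (rather than polynomial) colorimetric mapping; all settings, matrices, and data below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

def to_training_exposure(rgb, t, iso, f, t0=1/60, iso0=100, f0=4.0):
    """Convert raw RGB captured at (t, iso, f) to equivalent values under the
    training settings, assuming the usual linear camera exposure model."""
    return rgb * (t0 / t) * (iso0 / iso) * (f / f0) ** 2

# Invented ground truth: linear RGB -> XYZ through a fixed matrix.
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb_train = rng.uniform(0.0, 1.0, (50, 3))
xyz_train = rgb_train @ M_true.T

# Colorimetric mapping learned at the training settings (linear here; the
# paper uses polynomial mappings with additional scaling factors).
M_fit, *_ = np.linalg.lstsq(rgb_train, xyz_train, rcond=None)

# The same patch captured with a doubled exposure time is first converted
# back to the training settings, then mapped to XYZ.
rgb_capture = 2.0 * rgb_train[0]
xyz_pred = to_training_exposure(rgb_capture, t=1/30, iso=100, f=4.0) @ M_fit
print(np.allclose(xyz_pred, xyz_train[0]))  # True
```

The conversion step is what lets one trained mapping serve arbitrary capture settings, which is the restriction the proposed method removes.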
Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani
2014-02-15
Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, owing to its strengths as a data-driven decomposition method. For fMRI data acquired during free-listening experiences, only a few exploratory studies have applied ICA. For processing the fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was used to further pre-process the fMRI data to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. The spatial maps extracted by the new ICA approach and common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and were associated with the musical features. Compared with the conventional ICA approach, more participants were found to share the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for individual ICA. Pre-processing the fMRI data with a reasonable band-pass digital filter can greatly benefit the subsequent model order selection and ICA with fMRI data from naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps.
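The band-pass pre-processing step can be sketched as an ideal FFT filter; the sampling rate, band edges, and synthetic signal components below are illustrative, not the study's actual values:

```python
import numpy as np

def fft_bandpass(signal, fs, lo, hi):
    """Ideal band-pass: zero FFT coefficients outside [lo, hi] Hz, then invert."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec = np.fft.rfft(signal)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# Synthetic voxel time course over a 512-s paradigm, sampled at 2 Hz; the
# component frequencies sit exactly on FFT bins, so the filter is exact here.
fs = 2.0
t = np.arange(0, 512, 1.0 / fs)
wanted = np.sin(2 * np.pi * (26 / 512) * t)        # in-band source of interest
drift = 0.8 * np.sin(2 * np.pi * (1 / 512) * t)    # slow scanner drift
noise = 0.5 * np.sin(2 * np.pi * (460 / 512) * t)  # high-frequency nuisance
filtered = fft_bandpass(wanted + drift + noise, fs, lo=0.01, hi=0.25)
print(np.max(np.abs(filtered - wanted)) < 1e-9)  # True
```

Removing out-of-band variance before decomposition is what lets the model order selection see fewer, cleaner sources, as the abstract reports.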
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harry, T; Yaddanapudi, S; Mutic, S
Purpose: New techniques and materials have recently been developed to expedite the conventional Linac Acceptance Testing Procedure (ATP). The new ATP method uses the Electronic Portal Imaging Device (EPID) for data collection and is presented separately. This new procedure is meant to be more efficient than conventional methods. While not yet clinically implemented, a prospective risk assessment is warranted for any new technique. The purpose of this work is to investigate the risks and establish the pros and cons of the conventional approach and the new ATP method. Methods: ATP tests that were modified and performed with the EPID were analyzed. Five domain experts (medical physicists) comprised the core analysis team. Ranking scales were adopted from previous publications related to TG 100. The number of failure pathways for each ATP test was compared, as was the number of risk priority numbers (RPNs) greater than 100. Results: There were fewer failure pathways with the new ATP than with the conventional one, 262 and 556, respectively. There were fewer RPNs > 100 in the new ATP than in the conventional one, 41 and 115, respectively. Failure pathways and RPNs > 100 for individual ATP tests were on average 2 and 3.5 times higher in the conventional ATP than in the new one, respectively. The pixel sensitivity map of the EPID was identified as a key hazard of the new ATP procedure, with an RPN of 288 for verifying beam parameters. Conclusion: The significant decrease in failure pathways and RPNs > 100 for the new ATP mitigates the possibility of a catastrophic error occurring. The pixel sensitivity map determining the response and inherent characteristics of the EPID is crucial, as all data, and hence all results, depend on that process. Grant from Varian Medical Systems Inc.
Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun
2015-01-01
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
Optical Mapping of Membrane Potential and Epicardial Deformation in Beating Hearts.
Zhang, Hanyu; Iijima, Kenichi; Huang, Jian; Walcott, Gregory P; Rogers, Jack M
2016-07-26
Cardiac optical mapping uses potentiometric fluorescent dyes to image membrane potential (Vm). An important limitation of conventional optical mapping is that contraction is usually arrested pharmacologically to prevent motion artifacts from obscuring Vm signals. However, these agents may alter electrophysiology, and by abolishing contraction, also prevent optical mapping from being used to study coupling between electrical and mechanical function. Here, we present a method to simultaneously map Vm and epicardial contraction in the beating heart. Isolated perfused swine hearts were stained with di-4-ANEPPS and fiducial markers were glued to the epicardium for motion tracking. The heart was imaged at 750 Hz with a video camera. Fluorescence was excited with cyan or blue LEDs on alternating camera frames, thus providing a 375-Hz effective sampling rate. Marker tracking enabled the pixel(s) imaging any epicardial site within the marked region to be identified in each camera frame. Cyan- and blue-elicited fluorescence have different sensitivities to Vm, but other signal features, primarily motion artifacts, are common. Thus, taking the ratio of fluorescence emitted by a motion-tracked epicardial site in adjacent frames removes artifacts, leaving Vm (excitation ratiometry). Reconstructed Vm signals were validated by comparison to monophasic action potentials and to conventional optical mapping signals. Binocular imaging with additional video cameras enabled marker motion to be tracked in three dimensions. From these data, epicardial deformation during the cardiac cycle was quantified by computing finite strain fields. We show that the method can simultaneously map Vm and strain in a left-sided working heart preparation and can image changes in both electrical and mechanical function 5 min after the induction of regional ischemia. 
By allowing high-resolution optical mapping in the absence of electromechanical uncoupling agents, the method relieves a long-standing limitation of optical mapping and has the potential to enable new studies in coupled cardiac electromechanics.
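The excitation-ratiometry idea, taking the ratio of alternately excited frames so that the shared motion factor cancels, can be sketched with synthetic signals; the sensitivities and waveforms below are invented for illustration:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 375)                        # 1 s at the 375-Hz rate
vm = (np.sin(2 * np.pi * 2 * t) > 0.8).astype(float)  # toy action potentials
motion = 1.0 + 0.3 * np.sin(2 * np.pi * 3 * t)        # contraction artifact

# Invented sensitivities: cyan- and blue-excited fluorescence respond to Vm
# with different gains but share the same multiplicative motion factor.
f_cyan = (1.0 + 0.10 * vm) * motion
f_blue = (1.0 - 0.05 * vm) * motion

ratio = f_cyan / f_blue   # the motion factor cancels, leaving a Vm signal
print(np.std(ratio[vm == 0]) < 1e-9)  # True: flat baseline despite motion
```

In the real system the two frames come from adjacent times and from motion-tracked pixels, so the cancellation is approximate rather than exact as in this toy model.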
Mapped Chebyshev Pseudo-Spectral Method for Dynamic Aero-Elastic Problem of Limit Cycle Oscillation
NASA Astrophysics Data System (ADS)
Im, Dong Kyun; Kim, Hyun Soon; Choi, Seongim
2018-05-01
A mapped Chebyshev pseudo-spectral method is developed as one of the Fourier-spectral approaches and solves nonlinear PDE systems for unsteady flows and the dynamic aero-elastic problem in a given time interval, where the flows or elastic motions can be periodic, nonperiodic, or periodic with an unknown frequency. The method uses Chebyshev polynomials of the first kind as the basis functions and redistributes the standard Chebyshev-Gauss-Lobatto collocation points more evenly through a conformal mapping function for improved numerical stability. The method makes several contributions. It can be an order of magnitude more efficient than the conventional finite-difference-based, time-accurate computation, depending on the complexity of the solutions and the number of collocation points. It reformulates the dynamic aero-elastic problem in spectral form for coupled analysis of aerodynamics and structures, which can be effective for design optimization of unsteady and dynamic problems. A limit cycle oscillation (LCO) is chosen for validation, and a new method to determine the LCO frequency is introduced based on the minimization of a second derivative of the aero-elastic formulation. Two examples of limit cycle oscillation are tested: a nonlinear, one degree-of-freedom mass-spring-damper system, and a two degrees-of-freedom oscillating airfoil under pitch and plunge motions. Results show good agreement with those of conventional time-accurate simulations and wind tunnel experiments.
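The point redistribution can be illustrated with the Kosloff-Tal-Ezer conformal map, a common choice for this purpose; the paper's specific mapping function may differ:

```python
import numpy as np

def cgl_points(n):
    """Standard Chebyshev-Gauss-Lobatto collocation points on [-1, 1]."""
    return -np.cos(np.pi * np.arange(n + 1) / n)

def kte_map(x, a=0.99):
    """Kosloff-Tal-Ezer conformal map: stretches the endpoint-clustered CGL
    points toward a more uniform spacing as a -> 1."""
    return np.arcsin(a * x) / np.arcsin(a)

x = cgl_points(16)
xm = kte_map(x)
# The mapped points are far less clustered near the endpoints:
print(np.min(np.diff(xm)) > np.min(np.diff(x)))  # True
```

Relieving the endpoint clustering eases the severe time-step and conditioning penalties of standard Chebyshev collocation, which is the stability improvement the abstract refers to.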
Applying high resolution remote sensing image and DEM to falling boulder hazard assessment
NASA Astrophysics Data System (ADS)
Huang, Changqing; Shi, Wenzhong; Ng, K. C.
2005-10-01
Assessing boulder fall hazard generally requires obtaining information about the boulders. The extensive mapping and surveying fieldwork of the conventional method is time-consuming, laborious and dangerous. This paper therefore proposes applying image processing technology to extract boulders and assess boulder fall hazard from high resolution remote sensing imagery. The method can replace the conventional approach and extract boulder information with high accuracy, including boulder size, shape and height, and the slope and aspect of each boulder's position. With this information, the requirements for assessing, preventing and mitigating boulder fall hazards can be met.
NASA Technical Reports Server (NTRS)
Vegas, P. L.
1974-01-01
A procedure for obtaining land use data from satellite imagery by the use of conventional interpretation methods is presented. The satellite is described briefly, and the advantages of various scales and multispectral scanner bands are discussed. Methods for obtaining satellite imagery and the sources of this imagery are given. Equipment used in the study is described, and samples of land use maps derived from satellite imagery are included together with the land use classification system used. Accuracy percentages are cited and are compared to those of a previous experiment using small scale aerial photography.
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018.
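The speed advantage of LLS comes from integrating the compartment ODE so that the model becomes linear in its parameters; a sketch with a synthetic single-voxel curve follows (delay compensation omitted, input functions and rate constants invented):

```python
import numpy as np

t = np.linspace(0, 60, 240)      # 60 s at four frames per second
dt = t[1] - t[0]

# Invented arterial and (delayed) portal-venous input functions.
ca = t * np.exp(-t / 6.0)
cp = np.roll(t * np.exp(-t / 9.0), 20)
cp[:20] = 0.0

def simulate(k1a, k1p, k2):
    """Forward-Euler solution of dC/dt = k1a*ca + k1p*cp - k2*C."""
    c = np.zeros_like(t)
    for i in range(1, len(t)):
        c[i] = c[i - 1] + dt * (k1a * ca[i - 1] + k1p * cp[i - 1] - k2 * c[i - 1])
    return c

def lcum(v):
    """Left-rectangle cumulative integral, matching the Euler discretization."""
    return dt * np.concatenate(([0.0], np.cumsum(v)[:-1]))

# Integrating the ODE gives C(t) = k1a*int(ca) + k1p*int(cp) - k2*int(C):
# linear in the parameters, so one least-squares solve fits every voxel at once.
c = simulate(0.3, 0.5, 0.1)
A = np.column_stack([lcum(ca), lcum(cp), -lcum(c)])
est, *_ = np.linalg.lstsq(A, c, rcond=None)
print(np.round(est, 3))  # [0.3 0.5 0.1]
```

Because the design matrix columns for the input functions are shared by all voxels, the whole-field version solves one batched linear system instead of one nonlinear fit per voxel, which is where the reported ~160-fold speedup comes from.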
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... must be made to the NRCS State Technical Guides concerning State wetland mapping conventions. The two States are proposing to issue joint State wetland mapping conventions. The joint State wetland mapping conventions will be used as part of the technical documents to conduct wetland determinations on agriculture...
NASA Technical Reports Server (NTRS)
Butera, M. K.
1979-01-01
The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.
Bulluck, Heerajnarain; Hammond-Haley, Matthew; Fontana, Marianna; Knight, Daniel S; Sirker, Alex; Herrey, Anna S; Manisty, Charlotte; Kellman, Peter; Moon, James C; Hausenloy, Derek J
2017-08-01
A comprehensive cardiovascular magnetic resonance (CMR) study in reperfused ST-segment elevation myocardial infarction (STEMI) patients can be challenging and time-consuming to perform. We aimed to investigate whether native T1-mapping can accurately delineate the edema-based area-at-risk (AAR), and whether post-contrast T1-mapping and synthetic late gadolinium enhancement (LGE) images can quantify MI size, at 1.5 T. Conventional LGE imaging and T2-mapping could then be omitted, thereby shortening the scan duration. Twenty-eight STEMI patients underwent a CMR scan at 1.5 T, 3 ± 1 days following primary percutaneous coronary intervention. The AAR was quantified using both native T1 and T2-mapping. MI size was quantified using conventional LGE, post-contrast T1-mapping, and synthetic magnitude-reconstructed inversion recovery (MagIR) LGE and synthetic phase-sensitive inversion recovery (PSIR) LGE images derived from the post-contrast T1 maps. Native T1-mapping performed as well as T2-mapping in delineating the AAR (41.6 ± 11.9% of the left ventricle [% LV] versus 41.7 ± 12.2% LV, P = 0.72; R2 = 0.97; ICC 0.986 (0.969-0.993); bias -0.1 ± 4.2% LV). There was excellent correlation and inter-method agreement, with no bias, between MI size by conventional LGE, synthetic MagIR LGE (bias 0.2 ± 2.2% LV, P = 0.35), synthetic PSIR LGE (bias 0.4 ± 2.2% LV, P = 0.060) and post-contrast T1-mapping (bias 0.3 ± 1.8% LV, P = 0.10). The mean scan duration was 58 ± 4 min. Not performing T2-mapping (6 ± 1 min) and conventional LGE (10 ± 1 min) would shorten the CMR study by 15-20 min. T1-mapping can accurately quantify both the edema-based AAR (using native T1 maps) and acute MI size (using post-contrast T1 maps) in STEMI patients without major cardiovascular risk factors. This approach would shorten the duration of a comprehensive CMR study without significantly compromising data acquisition, and would obviate the need for T2 maps and LGE imaging.
Pickup, William; Bremer, Phil; Peng, Mei
2018-03-01
The extensive time and cost associated with conventional sensory profiling methods have spurred sensory researchers to develop rapid alternatives, such as Napping® with Ultra-Flash Profiling (UFP). Napping®-UFP generates sensory maps by requiring untrained panellists to separate samples based on perceived sensory similarities. Evaluations of this method have been restricted to manufactured/formulated food models, and predominantly structured as comparisons against the conventional descriptive method. The present study aims to extend the validation of Napping®-UFP (N = 72) to natural biological products, and to evaluate this method against Descriptive Analysis (DA; N = 8) with physicochemical measurements as an additional evaluative criterion. The results revealed that sample configurations generated by DA and Napping®-UFP were not significantly correlated (RV = 0.425, P = 0.077); however, both were correlated with the product map generated from the instrumental measures (P < 0.05). The findings also showed that sample characterisations from DA and Napping®-UFP were driven by different sensory attributes, indicating potential structural differences between these two methods in configuring samples. Overall, these findings lend support to the extended use of Napping®-UFP for evaluations of natural biological products. Although DA was shown to be a better method for establishing sensory-instrumental relationships, Napping®-UFP exhibited strengths in generating informative sample configurations based on holistic perception of products. © 2017 Society of Chemical Industry.
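The RV coefficient used above to compare the DA and Napping®-UFP maps is a standard measure of similarity between two multivariate sample configurations. A minimal sketch of its textbook definition (the configurations below are random, not the study's):

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two sample configurations (rows = samples):
    tr(Sx Sy) / sqrt(tr(Sx^2) tr(Sy^2)) on column-centered data."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sx, Sy = X @ X.T, Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 2))          # e.g. 8 samples on a 2-D sensory map
theta = 0.7                          # rotating a map should not change RV
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
```

Because RV compares the inner-product structure of the configurations, it is invariant to rotation and uniform scaling of either map, which is why it suits comparing sample maps from different methods.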
GESFIDE-PROPELLER approach for simultaneous R2 and R2* measurements in the abdomen.
Jin, Ning; Guo, Yang; Zhang, Zhuoli; Zhang, Longjiang; Lu, Guangming; Larson, Andrew C
2013-12-01
To investigate the feasibility of combining GESFIDE with PROPELLER sampling approaches for simultaneous abdominal R2 and R2* mapping. R2 and R2* measurements were performed in 9 healthy volunteers and phantoms using the GESFIDE-PROPELLER and the conventional Cartesian-sampling GESFIDE approaches. Images acquired with the GESFIDE-PROPELLER sequence effectively mitigated the respiratory motion artifacts, which were clearly evident in the images acquired using the conventional GESFIDE approach. There was no significant difference between GESFIDE-PROPELLER and reference MGRE R2* measurements (p=0.162) whereas the Cartesian-sampling based GESFIDE methods significantly overestimated R2* values compared to MGRE measurements (p<0.001). The GESFIDE-PROPELLER sequence provided high quality images and accurate abdominal R2 and R2* maps while avoiding the motion artifacts common to the conventional Cartesian-sampling GESFIDE approaches. © 2013 Elsevier Inc. All rights reserved.
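R2* values like those compared above are typically obtained by fitting a monoexponential decay S(TE) = S0·exp(−R2*·TE) to the multi-echo signal. A log-linear least-squares sketch (synthetic single-voxel decay, not the study's data):

```python
import numpy as np

def fit_r2star(te, signal):
    """Estimate R2* (1/s) from a monoexponential decay via a
    log-linear least-squares fit: log S = log S0 - R2* * TE."""
    slope, _ = np.polyfit(te, np.log(signal), 1)
    return -slope

te = np.array([0.002, 0.005, 0.010, 0.020, 0.040])   # echo times (s)
signal = 100.0 * np.exp(-40.0 * te)                  # synthetic voxel, R2* = 40/s
r2star = fit_r2star(te, signal)
```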
Importance of Calibration Method in Central Blood Pressure for Cardiac Structural Abnormalities.
Negishi, Kazuaki; Yang, Hong; Wang, Ying; Nolan, Mark T; Negishi, Tomoko; Pathan, Faraz; Marwick, Thomas H; Sharman, James E
2016-09-01
Central blood pressure (CBP) independently predicts cardiovascular risk, but calibration methods may affect accuracy of central systolic blood pressure (CSBP). Standard central systolic blood pressure (Stan-CSBP) from peripheral waveforms is usually derived with calibration using brachial SBP and diastolic BP (DBP). However, calibration using oscillometric mean arterial pressure (MAP) and DBP (MAP-CSBP) is purported to provide more accurate representation of true invasive CSBP. This study sought to determine which derived CSBP could more accurately discriminate cardiac structural abnormalities. A total of 349 community-based patients with risk factors (71 ± 5 years, 161 males) had CSBP measured by brachial oscillometry (Mobil-O-Graph, IEM GmbH, Stolberg, Germany) using 2 calibration methods: MAP-CSBP and Stan-CSBP. Left ventricular hypertrophy (LVH) and left atrial dilatation (LAD) were measured based on standard guidelines. MAP-CSBP was higher than Stan-CSBP (149 ± 20 vs. 128 ± 15 mm Hg, P < 0.0001). Although they were modestly correlated (rho = 0.74, P < 0.001), the Bland-Altman plot demonstrated a large bias (21 mm Hg) and limits of agreement (24 mm Hg). In receiver operating characteristic (ROC) curve analyses, MAP-CSBP significantly better discriminated LVH compared with Stan-CSBP (area under the curve (AUC) 0.66 vs. 0.59, P = 0.0063) and brachial SBP (0.62, P = 0.027). Continuous net reclassification improvement (NRI) (P < 0.001) and integrated discrimination improvement (IDI) (P < 0.001) corroborated superior discrimination of LVH by MAP-CSBP. Similarly, MAP-CSBP better distinguished LAD than Stan-CSBP (AUC 0.63 vs. 0.56, P = 0.005) and conventional brachial SBP (0.58, P = 0.006), whereas Stan-CSBP provided no better discrimination than conventional brachial BP (P = 0.09). CSBP is calibration dependent and when oscillometric MAP and DBP are used, the derived CSBP is a better discriminator for cardiac structural abnormalities.
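The discrimination comparison above rests on the area under the ROC curve, which equals the Mann-Whitney probability that a randomly chosen patient with the abnormality scores higher than one without. A minimal sketch (the pressures and LVH labels are invented):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (abnormal, normal) pairs ranked correctly, ties counted as half."""
    s = np.asarray(scores, float)
    y = np.asarray(labels, bool)
    pos, neg = s[y], s[~y]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical CSBP values (mm Hg) and LVH status (1 = LVH present)
csbp = [118, 152, 131, 160, 125, 149]
lvh  = [0,   1,   0,   1,   0,   1]
```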
Hattingen, Elke; Jurcoane, Alina; Daneshvar, Keivan; Pilatus, Ulrich; Mittelbronn, Michel; Steinbach, Joachim P.; Bähr, Oliver
2013-01-01
Background: Anti-angiogenic treatment in recurrent glioblastoma patients suppresses contrast enhancement and reduces vasogenic edema while non-enhancing tumor progression is common. Thus, the importance of T2-weighted imaging is increasing. We therefore quantified T2 relaxation times, which are the basis for the image contrast on T2-weighted images. Methods: Conventional and quantitative MRI procedures were performed on 18 patients with recurrent glioblastoma before treatment with bevacizumab and every 8 weeks thereafter until further tumor progression. We segmented the tumor on conventional MRI into 3 subvolumes: enhancing tumor, non-enhancing tumor, and edema. Using coregistered quantitative maps, we followed changes in T2 relaxation time in each subvolume. Moreover, we generated differential T2 maps by a voxelwise subtraction using the first T2 map under bevacizumab as reference. Results: Visually segmented areas of tumor and edema did not differ in T2 relaxation times. Non-enhancing tumor volume did not decrease after commencement of bevacizumab treatment but strikingly increased at progression. Differential T2 maps clearly showed non-enhancing tumor progression in previously normal brain. T2 relaxation times decreased under bevacizumab without re-increasing at tumor progression. A decrease of <26 ms in the enhancing tumor following exposure to bevacizumab was associated with longer overall survival. Conclusions: Combining quantitative MRI and tumor segmentation improves monitoring of glioblastoma patients under bevacizumab. The degree of change in T2 relaxation time under bevacizumab may be an early response parameter predictive of overall survival. The sustained decrease in T2 relaxation times toward values of healthy tissue masks progressive tumor on conventional T2-weighted images. Therefore, quantitative T2 relaxation times may detect non-enhancing progression better than conventional T2-weighted imaging. PMID:23925453
Bouhrara, Mustapha; Reiter, David A; Sexton, Kyle W; Bergeron, Christopher M; Zukley, Linda M; Spencer, Richard G
2017-11-01
We applied our recently introduced Bayesian analytic method to achieve clinically-feasible in-vivo mapping of the proteoglycan water fraction (PgWF) of human knee cartilage with improved spatial resolution and stability as compared to existing methods. Multicomponent driven equilibrium single-pulse observation of T1 and T2 (mcDESPOT) datasets were acquired from the knees of two healthy young subjects and one older subject with previous knee injury. Each dataset was processed using Bayesian Monte Carlo (BMC) analysis incorporating a two-component tissue model. We assessed the performance and reproducibility of BMC and of the conventional analysis of stochastic region contraction (SRC) in the estimation of PgWF. Stability of the BMC analysis of PgWF was tested by comparing independent high-resolution (HR) datasets from each of the two young subjects. Unlike SRC, the BMC-derived maps from the two HR datasets were essentially identical. Furthermore, SRC maps showed substantial random variation in estimated PgWF, and mean values that differed from those obtained using BMC. In addition, PgWF maps derived from conventional low-resolution (LR) datasets exhibited partial volume and magnetic susceptibility effects. These artifacts were absent in HR PgWF images. Finally, our analysis showed regional variation in PgWF estimates, and substantially higher values in the younger subjects as compared to the older subject. BMC-mcDESPOT permits HR in-vivo mapping of PgWF in human knee cartilage in a clinically-feasible acquisition time. HR mapping reduces the impact of partial volume and magnetic susceptibility artifacts compared to LR mapping. Finally, BMC-mcDESPOT demonstrated excellent reproducibility in the determination of PgWF. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji
2015-06-01
We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits that reproduce the typical (inherent) behaviors of these systems by properly selecting algebraic numbers in accordance with the dimension of the target system, and it involves only integer arithmetic. By applying our method to three dynamical systems, namely the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system, we demonstrate that it can reproduce typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit inherently possessed by the system.
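As a self-contained illustration of the integer-arithmetic idea (not the authors' algorithm), the x-component of the baker's transformation, the doubling map x → 2x mod 1, can be iterated exactly on a quadratic irrational x = (p + q√2)/r by tracking only the integers p, q, r; every comparison reduces to an integer sign test, so no rounding error ever enters the orbit:

```python
from math import sqrt

# Represent x = (p + q*sqrt(2)) / r exactly with integers p, q, r (r > 0).
def sign_quad(A, B):
    """Sign of A + B*sqrt(2), using integer arithmetic only."""
    if A >= 0 and B >= 0:
        return 1 if (A or B) else 0
    if A <= 0 and B <= 0:
        return -1
    s = 1 if A > 0 else -1
    d = A * A - 2 * B * B          # sign of |A|^2 - |B*sqrt(2)|^2
    return s * (1 if d > 0 else -1 if d < 0 else 0)

def doubling_step(p, q, r):
    """One exact step of x -> 2x mod 1 on x = (p + q*sqrt(2))/r."""
    p, q = 2 * p, 2 * q            # x -> 2x
    if sign_quad(p - r, q) >= 0:   # is 2x >= 1 ?
        p -= r                     # subtract 1
    return p, q, r

p, q, r = -1, 1, 1                 # x0 = sqrt(2) - 1, a quadratic irrational
orbit = []
for _ in range(20):
    p, q, r = doubling_step(p, q, r)
    orbit.append((p + q * sqrt(2)) / r)
```

A rational starting point would eventually cycle under this map, which is exactly the "atypical orbit" problem the paper addresses; the quadratic irrational keeps the true orbit typical while the integers grow (Python handles arbitrary precision natively).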
NASA Astrophysics Data System (ADS)
Fischer, J.; Doolan, C.
2017-12-01
A method to improve the quality of acoustic beamforming in reverberant environments is proposed in this paper. The processing is based on a filtering of the cross-correlation matrix of the microphone signals obtained using a microphone array. The main advantage of the proposed method is that it does not require information about the geometry of the reverberant environment and thus it can be applied to any configuration. The method is applied to the particular example of aeroacoustic testing in a hard-walled low-speed wind tunnel; however, the technique can be used in any reverberant environment. Two test cases demonstrate the technique. The first uses a speaker placed in the hard-walled working section with no wind tunnel flow. In the second test case, an airfoil is placed in a flow and acoustic beamforming maps are obtained. The acoustic maps have been improved, as the reflections observed in the conventional maps have been removed after application of the proposed method.
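For context, a conventional frequency-domain beamforming map is formed by steering the microphone-array cross-spectral matrix (CSM) over a grid of candidate source points. The sketch below uses simple diagonal removal as the noise suppressor, a much cruder step than the cross-correlation filtering the paper proposes; the array geometry, grid, and frequency are all invented:

```python
import numpy as np

def beamform_map(csm, mics, grid, freq, c=343.0):
    """Conventional frequency-domain beamforming power map from a CSM.
    Diagonal removal suppresses uncorrelated per-microphone self-noise."""
    csm = csm.copy()
    np.fill_diagonal(csm, 0.0)
    k = 2.0 * np.pi * freq / c
    power = np.empty(len(grid))
    for n, g in enumerate(grid):
        d = np.linalg.norm(mics - g, axis=1)     # mic-to-point distances
        v = np.exp(-1j * k * d) / d              # monopole steering vector
        w = v / np.linalg.norm(v)
        power[n] = np.real(w.conj() @ csm @ w)
    return power

# 8-mic line array and a tiny 3-point scan grid (illustrative geometry)
mics = np.column_stack([np.linspace(-0.25, 0.25, 8), np.zeros(8), np.zeros(8)])
grid = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0], [1.0, 0.0, 1.0]])
src = grid[0]                                    # synthetic source at grid[0]
d = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * 2 * np.pi * 2000.0 / 343.0 * d) / d
csm = np.outer(p, p.conj())                      # noise-free rank-1 CSM
power = beamform_map(csm, mics, grid, 2000.0)
```

In a reverberant environment the CSM also contains reflection terms, which is what the paper's cross-correlation filtering removes before a map like this is formed.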
Mapping urban environmental noise: a land use regression method.
Xie, Dan; Liu, Yi; Chen, Jining
2011-09-01
Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and corresponding noise change. These conventional methods therefore can hardly forecast urban noise for a given development layout. This paper, for the first time, introduces a land use regression method, applied for a decade to simulating urban air quality, to construct an urban noise model (LUNOS) in Dalian Municipality, Northeast China. The LUNOS model describes noise as a dependent variable of various surrounding land-use areas via a regression function. The results suggest that a linear model performs better in fitting the monitoring data, and that there is no significant difference in the LUNOS outputs when applied at different spatial scales. As the LUNOS facilitates a better understanding of the association between land use and urban environmental noise than conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and an aid to smart decision-making.
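A land use regression model of this kind treats the measured noise level at each site as a linear function of surrounding land-use areas. A minimal sketch with a synthetic dataset (the land-use classes and coefficients are invented, not the LUNOS model's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites = 50
# areas of three land-use classes (e.g. road, commercial, green space) in
# a buffer around each monitoring site, normalized to [0, 1]
areas = rng.uniform(0.0, 1.0, size=(n_sites, 3))
true_beta = np.array([12.0, 6.0, -4.0])          # dB contribution per class
noise_db = 50.0 + areas @ true_beta + rng.normal(0.0, 0.5, n_sites)

X = np.column_stack([np.ones(n_sites), areas])   # intercept + land-use areas
beta, *_ = np.linalg.lstsq(X, noise_db, rcond=None)
predicted = X @ beta
```

Once fitted, the same regression can be evaluated on the land-use areas of a proposed development layout, which is the forecasting capability the paper argues conventional noise mapping lacks.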
Medical Image Fusion Based on Feature Extraction and Sparse Representation
Wei, Gao; Zongxi, Song
2017-01-01
As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation takes neither intrinsic structure nor time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with both problems simultaneously. Three decision maps are designed: a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), which make the results preserve more energy and edge information. The SM captures the local structure feature with the Laplacian of a Gaussian (LoG), and the EM captures the energy and energy-distribution feature with the mean square deviation. The decision map is added to the normal sparse-representation-based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. Experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246
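The two feature detectors behind these decision maps are standard: a Laplacian of Gaussian for local structure and a local mean-square deviation (variance) for energy. A sketch of the per-pixel decision under those assumptions (window sizes are illustrative, and the combination rule here is a plain per-pixel comparison, not the paper's full SEM):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def decision_maps(img_a, img_b, sigma=1.0, win=7):
    """Boolean maps choosing, per pixel, the source image with the
    stronger structure (|LoG|) and energy (local variance) response."""
    def structure(im):
        return np.abs(gaussian_laplace(im.astype(float), sigma))
    def energy(im):
        im = im.astype(float)
        return uniform_filter(im * im, win) - uniform_filter(im, win) ** 2
    sm = structure(img_a) >= structure(img_b)
    em = energy(img_a) >= energy(img_b)
    return sm, em

rng = np.random.default_rng(0)
textured = rng.uniform(0.0, 1.0, size=(32, 32))   # high-variance source
flat = np.zeros((32, 32))                         # featureless source
sm, em = decision_maps(textured, flat)
```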
Analysis and quality control of carbohydrates in therapeutic proteins with fluorescence HPLC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Kun; Huang, Jian (Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu 610054)
Conbercept is an Fc fusion protein with a very complicated carbohydrate profile that must be carefully monitored throughout the manufacturing process. Here, we introduce an optimized fluorescence-derivatization high-performance liquid chromatographic method for glycan mapping in conbercept. Compared with the conventional glycan analysis method, this method has much better resolution and higher reproducibility, making it excellent for product quality control.
Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Templeton, D C; Harris, D B
The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, whereas our MFP technique identified 1094 events. We therefore believe that the empirical MFP method, combined with conventional methods, significantly improves network detection ability in an efficient manner.
Cartography of irregularly shaped satellites
NASA Technical Reports Server (NTRS)
Batson, R. M.; Edwards, Kathleen
1987-01-01
Irregularly shaped satellites, such as Phobos and Amalthea, do not lend themselves to mapping by conventional methods because mathematical projections of their surfaces fail to convey an accurate visual impression of the landforms, and because large and irregular scale changes make their features difficult to measure on maps. A digital mapping technique has therefore been developed by which maps are compiled from digital topographic and spacecraft image files. The digital file is geometrically transformed as desired for human viewing, either on video screens or on hard copy. Digital files of this kind consist of digital images superimposed on another digital file representing the three-dimensional form of a body.
Ye, Huihui; Ma, Dan; Jiang, Yun; Cauley, Stephen F.; Du, Yiping; Wald, Lawrence L.; Griswold, Mark A.; Setsompop, Kawin
2015-01-01
Purpose: We incorporate Simultaneous Multi-Slice (SMS) acquisition into MR Fingerprinting (MRF) to accelerate the MRF acquisition. Methods: The t-Blipped SMS-MRF method is achieved by adding a Gz blip before each data acquisition window and balancing it with a Gz blip of opposing polarity at the end of each TR. Thus the signals from different simultaneously excited slices are encoded with different phases without disturbing the signal evolution. Further, by varying the Gz blip area and/or polarity as a function of TR, the slices' differential phase can also be made to vary as a function of time. For reconstruction of t-Blipped SMS-MRF data, we demonstrate a combined slice-direction SENSE and modified dictionary matching method. Results: In Monte Carlo simulation, the parameter mapping from multi-band factor (MB) = 2 t-Blipped SMS-MRF shows good accuracy and precision when compared to results from reference conventional MRF data, with concordance correlation coefficients (CCC) of 0.96 for T1 estimates and 0.90 for T2 estimates. For in vivo experiments, T1 and T2 maps from MB = 2 t-Blipped SMS-MRF are in high agreement with those from conventional MRF. Conclusions: The MB = 2 t-Blipped SMS-MRF acquisition/reconstruction method has been demonstrated and validated to provide more rapid parameter mapping in the MRF framework. PMID:26059430
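The dictionary-matching step underlying MRF (and the modified matching used here) reduces, in its basic form, to picking the dictionary evolution with the largest normalized inner product against the measured signal. A toy sketch (the evolutions below are simple exponential curves, not Bloch-simulated MRF signals, and the parameter values are invented):

```python
import numpy as np

def dictionary_match(signal, dictionary, params):
    """Return the (T1, T2) entry whose normalized signal evolution has
    the largest inner product with the measured evolution."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    return params[int(np.argmax(np.abs(d @ s)))]

t = np.linspace(0.0, 1.0, 100)                    # time (s)
params = [(0.8, 0.06), (1.0, 0.08), (1.2, 0.10)]  # (T1, T2) in s, illustrative
dictionary = np.array([(1 - np.exp(-t / t1)) * np.exp(-t / t2)
                       for (t1, t2) in params])
measured = 3.7 * dictionary[1]                    # scaled copy of entry 1
```

Because the match is taken on normalized evolutions, an overall scaling of the measured signal (e.g. proton density) does not change the selected (T1, T2) pair.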
Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.
Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi
2017-07-01
We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
Automatic face recognition in HDR imaging
NASA Astrophysics Data System (ADS)
Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.
2014-05-01
The gaining popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional, inexpensive LDR displays. These methods can produce completely different visualizations, raising several privacy-intrusion issues. In fact, some visualization methods allow perceptual recognition of the individuals, while others do not reveal any identity. Given that perceptual recognition may be possible, a natural question arises: how will computer-based recognition perform on tone-mapped images? In this paper, we present a study in which automatic face recognition using sparse representation is tested on images produced by common tone mapping operators applied to HDR images. Its ability to recognize face identity is described. Furthermore, typical LDR images are used for training the face recognition.
Performance analysis of a finite radon transform in OFDM system under different channel models
NASA Astrophysics Data System (ADS)
Dawood, Sameer A.; Malek, F.; Anuar, M. S.; Fayadh, Rashid A.; Abdullah, Farrah Salwani
2015-05-01
In this paper, a class of discrete Radon transforms, namely the Finite Radon Transform (FRAT), is proposed as a modulation technique in the realization of Orthogonal Frequency Division Multiplexing (OFDM). The proposed FRAT operates as a data mapper in the OFDM transceiver instead of the conventional phase-shift mapping and quadrature amplitude mapping usually used with standard OFDM based on the Fast Fourier Transform (FFT), in a way that increases the orthogonality of the system. The Fourier-domain approach was found to be the most suitable way of obtaining the forward and inverse FRAT, and this structure resulted in a more suitable realization of conventional FFT-OFDM. It was shown that this application increases the orthogonality significantly, due to the use of the Inverse Fast Fourier Transform (IFFT) twice, namely in the data mapping and in the sub-carrier modulation, and due to the use of an efficient algorithm, called the optimal ordering method, for determining the FRAT coefficients. The proposed approach was tested and compared with conventional OFDM for an additive white Gaussian noise (AWGN) channel, a flat fading channel, and a multi-path frequency-selective fading channel. The results showed that the proposed system improves the bit error rate (BER) performance by reducing inter-symbol interference (ISI) and inter-carrier interference (ICI) compared with the conventional OFDM system.
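The FRAT itself is simple to state: for a prime-sized p × p block, projection k sums the data over the finite lines j = (l + k·i) mod p, plus one extra projection over rows. A sketch under that standard definition (normalization omitted; this shows only the transform, not the OFDM mapper built on it):

```python
import numpy as np

def frat(f):
    """Finite Radon transform of a p x p array (p prime): projection k
    sums f over the lines j = (l + k*i) mod p; projection k = p sums
    each row f[l, :] (the lines with constant first coordinate)."""
    p = f.shape[0]
    i = np.arange(p)
    r = np.zeros((p + 1, p))
    for k in range(p):
        for l in range(p):
            r[k, l] = f[i, (l + k * i) % p].sum()
    r[p] = f.sum(axis=1)
    return r

f = np.arange(25.0).reshape(5, 5)        # p = 5 example block
r = frat(f)
```

For each slope k the p lines partition the grid, so every projection redistributes, without losing, the total energy of the block, which is the redundancy the OFDM mapper exploits.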
Applications of rapid prototyping technology in maxillofacial prosthetics.
Sykes, Leanne M; Parrott, Andrew M; Owen, C Peter; Snaddon, Donald R
2004-01-01
The purpose of this study was to compare the accuracy, required time, and potential advantages of rapid prototyping technology with traditional methods in the manufacture of wax patterns for two facial prostheses. Two clinical situations were investigated: the production of an auricular prosthesis and the duplication of an existing maxillary prosthesis, using a conventional and a rapid prototyping method for each. Conventional wax patterns were created from impressions taken of a patient's remaining ear and an oral prosthesis. For the rapid prototyping method, a cast of the ear and the original maxillary prosthesis were scanned, and rapid prototyping was used to construct the wax patterns. For the auricular prosthesis, both patterns were refined clinically and then flasked and processed in silicone using routine procedures. Twenty-six independent observers evaluated these patterns by comparing them to the cast of the patient's remaining ear. For the duplication procedure, both wax patterns were scanned and compared to scans of the original prosthesis by generating color error maps to highlight volumetric changes. There was a significant difference in opinions for the two auricular prostheses with regard to shape and esthetic appeal, where the hand-carved prosthesis was found to be of poorer quality. The color error maps showed higher errors with the conventional duplication process compared with the rapid prototyping method. The main advantage of rapid prototyping is the ability to produce physical models using digital methods instead of traditional impression techniques. The disadvantage of equipment costs could be overcome by establishing a centralized service.
NASA Astrophysics Data System (ADS)
Leherte, L.; Allen, F. H.; Vercauteren, D. P.
1995-04-01
A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to the local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions, while at the atomic level the ellipsoid method gives results in close agreement with those from the conventional spherical van der Waals approach.
NASA Astrophysics Data System (ADS)
Leherte, Laurence; Allen, Frank H.
1994-06-01
A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.
NASA Astrophysics Data System (ADS)
Eriksen, Vibeke R.; Hahn, Gitte H.; Greisen, Gorm
2015-03-01
The aim was to compare two conventional methods used to describe cerebral autoregulation (CA): frequency-domain analysis and time-domain analysis. We measured cerebral oxygenation (as a surrogate for cerebral blood flow) and mean arterial blood pressure (MAP) in 60 preterm infants. In the frequency domain, outcome variables were coherence and gain, whereas the cerebral oximetry index (COx) and the regression coefficient were the outcome variables in the time domain. Correlation between coherence and COx was poor. The disagreement between the two methods was due to the MAP and cerebral oxygenation signals being in counterphase in three cases. High gain and high coherence may arise spuriously when cerebral oxygenation decreases as MAP increases; hence, time-domain analysis appears to be a more robust and simpler method to describe CA.
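The time-domain cerebral oximetry index (COx) is, in its usual form, a moving-window Pearson correlation between MAP and cerebral oxygenation. A minimal sketch (synthetic signals; the window and step sizes are illustrative):

```python
import numpy as np

def cox_index(map_sig, ox_sig, win=30, step=10):
    """Mean moving-window Pearson correlation between mean arterial
    pressure and cerebral oxygenation (COx): values near 0 suggest
    intact autoregulation, values near 1 pressure-passive flow."""
    rs = []
    for start in range(0, len(map_sig) - win + 1, step):
        a = map_sig[start:start + win]
        b = ox_sig[start:start + win]
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

t = np.linspace(0.0, 20.0, 200)
map_sig = 40.0 + 5.0 * np.sin(t)                 # synthetic MAP (mm Hg)
passive = 60.0 + 2.0 * np.sin(t)                 # oxygenation tracking MAP
counter = 60.0 - 2.0 * np.sin(t)                 # oxygenation in counterphase
```

The counterphase case illustrates the disagreement noted above: COx goes to −1 while spectral coherence, which ignores the sign of the phase relation, can stay high.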
Comparison of crop stress and soil maps to enhance variable rate irrigation prescriptions
USDA-ARS?s Scientific Manuscript database
Soil textural variability within many irrigated fields diminishes the effectiveness of conventional irrigation management, and scheduling methods that assume uniform soil conditions may produce less than satisfactory results. Furthermore, benefits of variable-rate application of agrochemicals, seeds...
Cell force mapping using a double-sided micropillar array based on the moiré fringe method
NASA Astrophysics Data System (ADS)
Zhang, F.; Anderson, S.; Zheng, X.; Roberts, E.; Qiu, Y.; Liao, R.; Zhang, X.
2014-07-01
The mapping of traction forces is crucial to understanding the means by which cells regulate their behavior and physiological function to adapt to and communicate with their local microenvironment. To this end, polymeric micropillar arrays have been used for measuring cell traction force. However, the small scale of the micropillar deflections induced by cell traction forces results in highly inefficient force analyses using conventional optical approaches; in many cases, cell forces may be below the limits of detection achieved using conventional microscopy. To address these limitations, the moiré phenomenon has been leveraged as a visualization tool for cell force mapping due to its inherent magnification effect and capacity for whole-field force measurements. This Letter reports an optomechanical cell force sensor, namely a double-sided micropillar array (DMPA) made of poly(dimethylsiloxane), in which one side supports cultured living cells while the opposing side serves as a reference pattern for generating moiré patterns. The distance between the two sides, a crucial parameter influencing moiré pattern contrast, is predetermined during fabrication using theoretical calculations based on the Talbot effect that aim to optimize contrast. Herein, double-sided micropillar arrays were validated by mapping mouse embryo fibroblast contraction forces and comparing the resulting force maps to conventional microscopy image analyses as the reference standard. The DMPA-based approach precludes the requirement for aligning two independent periodic substrates, improves moiré contrast, and enables efficient moiré pattern generation. Furthermore, the double-sided structure readily allows for the integration of moiré-based cell force mapping into microfabricated cell culture environments or lab-on-a-chip devices.
ERIC Educational Resources Information Center
Ramachandran, Sridhar; Pandia Vadivu, P.
2014-01-01
This study examines the effectiveness of Neurocognitive Based Concept Mapping (NBCM) on students' learning in a science course. A total of 32 grade IX students from a Central Board of Secondary Education (CBSE) high school were involved in this study, with pre-test and post-test measurements. They were divided into two groups: NBCM group as an…
Gao, Mingzhong; Yu, Bin; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang
2017-01-01
Rectangular caverns are increasingly used in underground engineering projects, and the failure mechanism of rectangular cavern wall rock differs significantly as a result of the cross-sectional shape and variations in wall stress distributions. However, the conventional computational method results in a lengthy computational process and multiple displacement solutions for the internal rectangular wall rock. This paper uses a Laurent series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficients and to determine the rectangular cavern wall rock deformation. Using the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and a corresponding unique plane coordinate point inside the cavern wall rock is discussed. The disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is addressed. The theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical and numerical solutions exhibit identical trends, demonstrating the method's validity. This method greatly improves computing accuracy and reduces the difficulty of solving for cavern boundary and internal wall rock displacements. The proposed method provides a theoretical guide for controlling cavern wall rock deformation failure.
Interactive computer methods for generating mineral-resource maps
Calkins, James Alfred; Crosby, A.S.; Huffman, T.E.; Clark, A.L.; Mason, G.T.; Bascle, R.J.
1980-01-01
Inasmuch as maps are a basic tool of geologists, the U.S. Geological Survey's CRIB (Computerized Resources Information Bank) was constructed so that the data it contains can be used to generate mineral-resource maps. However, by the standard methods used (batch processing and off-line plotting), the production of a finished map commonly takes 2-3 weeks. To produce computer-generated maps more rapidly, cheaply, and easily, and also to provide an effective demonstration tool, we have devised two related methods for plotting maps as alternatives to conventional batch methods. These methods are: 1. Quick-Plot, an interactive program whose output appears on a CRT (cathode-ray-tube) device, and 2. The Interactive CAM (Cartographic Automatic Mapping system), which combines batch and interactive runs. The output of the Interactive CAM system is final compilation (not camera-ready) paper copy. Both methods are designed to use data from the CRIB file in conjunction with a map-plotting program. Quick-Plot retrieves a user-selected subset of data from the CRIB file, immediately produces an image of the desired area on a CRT device, and plots data points according to a limited set of user-selected symbols. This method is useful for immediate evaluation of the map and for demonstrating how trial maps can be made quickly. The Interactive CAM system links the output of an interactive CRIB retrieval to a modified version of the CAM program, which runs in the batch mode and stores plotting instructions on a disk, rather than on a tape. The disk can be accessed by a CRT, and, thus, the user can view and evaluate the map output on a CRT immediately after a batch run, without waiting 1-3 days for an off-line plot. The user can, therefore, do most of the layout and design work in a relatively short time by use of the CRT, before generating a plot tape and having the map plotted on an off-line plotter.
Haenssgen, Marco J
2015-01-01
The increasing availability of online maps, satellite imagery, and digital technology can ease common constraints of survey sampling in low- and middle-income countries. However, existing approaches require specialised software and user skills, professional GPS equipment, and/or commercial data sources; they tend to neglect spatial sampling considerations when using satellite maps; and they continue to face implementation challenges analogous to conventional survey implementation methods. This paper presents an alternative way of utilising satellite maps and digital aids that aims to address these challenges. The case studies of two rural household surveys in Rajasthan (India) and Gansu (China) compare conventional survey sampling and implementation techniques with the use of online map services such as Google, Bing, and HERE maps. Modern yet basic digital technology can be integrated into the processes of preparing, implementing, and monitoring a rural household survey. Satellite-aided systematic random sampling enhanced the spatial representativeness of the village samples and entailed savings of approximately £4000 compared to conventional household listing, while reducing the duration of the main survey by at least 25%. This low-cost/low-tech satellite-aided survey sampling approach can be useful for student researchers and resource-constrained research projects operating in low- and middle-income contexts with high survey implementation costs. While achieving transparent and efficient survey implementation at low costs, researchers aiming to adopt a similar process should be aware of the locational, technical, and logistical requirements as well as the methodological challenges of this strategy.
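The systematic random sampling mentioned above can be sketched in a few lines. This is a hypothetical simplification, assuming the dwellings visible on the satellite map have already been enumerated into an ordered list; the `systematic_sample` helper and its interface are illustrative, not from the paper.

```python
import random

def systematic_sample(units, n, rng=random):
    """Systematic random sampling: fixed interval k = N/n, one random
    start in [0, k), then every k-th unit along the ordered list."""
    k = len(units) / n            # sampling interval
    start = rng.uniform(0, k)     # single random start offset
    return [units[int(start + i * k)] for i in range(n)]

# Illustrative: sample 25 dwellings from an ordered list of 1000.
households = [f"dwelling_{i}" for i in range(1000)]
sample = systematic_sample(households, 25)
```

Because the start is random but the interval is fixed, the sample is spread evenly along the list, which is what gives the spatial-representativeness benefit when the list is ordered geographically.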
MRI Volume Fusion Based on 3D Shearlet Decompositions.
Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong
2014-01-01
Nowadays many MRI scans can produce 3D volume data with different contrasts, and observers may want to view the various contrasts within the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. The method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than conventional 2D and 3D wavelet and DT CWT based fusion methods.
Citrus breeding, genetics and genomics in Japan
Omura, Mitsuo; Shimada, Takehiko
2016-01-01
Citrus is one of the most cultivated fruits in the world, and satsuma mandarin (Citrus unshiu Marc.) is a major cultivated citrus in Japan. Many excellent cultivars derived from satsuma mandarin have been released through the improvement of mandarins by conventional breeding methods. The citrus breeding program is a lengthy process owing to the long juvenility of citrus, and it is expected that marker-assisted selection (MAS) will overcome this obstacle and improve the efficiency of conventional breeding methods. To promote citrus molecular breeding in Japan, genetic mapping was initiated in 1987, and the experimental tools and resources necessary for citrus functional genomics have been developed in relation to the physiological analysis of satsuma mandarin. In this paper, we review the progress of citrus breeding and genome research in Japan and report studies on genetic mapping, expressed sequence tag cataloguing, and the molecular characterization of breeding characteristics, mainly in terms of the metabolism of bio-functional substances as well as factors relating to, for example, fruit quality, disease resistance, polyembryony, and flowering. PMID:27069387
Novel method for measuring a dense 3D strain map of robotic flapping wings
NASA Astrophysics Data System (ADS)
Li, Beiwen; Zhang, Song
2018-04-01
Measuring dense 3D strain maps of the inextensible membranous flapping wings of robots is of vital importance to the field of bio-inspired engineering. Conventional high-speed 3D videography methods typically reconstruct the wing geometries through measuring sparse points with fiducial markers, and thus cannot obtain the full-field mechanics of the wings in detail. In this research, we propose a novel system to measure a dense strain map of inextensible membranous flapping wings by developing a superfast 3D imaging system and a computational framework for strain analysis. Specifically, first we developed a 5000 Hz 3D imaging system based on the digital fringe projection technique using the defocused binary patterns to precisely measure the dynamic 3D geometries of rapidly flapping wings. Then, we developed a geometry-based algorithm to perform point tracking on the precisely measured 3D surface data. Finally, we developed a dense strain computational method using the Kirchhoff-Love shell theory. Experiments demonstrate that our method can effectively perform point tracking and measure a highly dense strain map of the wings without many fiducial markers.
Regional gene mapping using mixed radiation hybrids and reverse chromosome painting.
Lin, J Y; Bedford, J S
1997-11-01
We describe a new approach for low-resolution physical mapping using pooled DNA probe from mixed (non-clonal) populations of human-CHO cell hybrids and reverse chromosome painting. This mapping method is based on a process in which the human chromosome fragments bearing a complementing gene were selectively retained in a large non-clonal population of CHO-human hybrid cells during a series of 12- to 15-Gy gamma irradiations each followed by continuous growth selection. The location of the gene could then be identified by reverse chromosome painting on normal human metaphase spreads using biotinylated DNA from this population of "enriched" hybrid cells. We tested the validity of this method by correctly mapping the complementing human HPRT gene, whose location is well established. We then demonstrated the method's usefulness by mapping the chromosome location of a human gene which complemented the defect responsible for the hypersensitivity to ionizing radiation in CHO irs-20 cells. This method represents an efficient alternative to conventional concordance analysis in somatic cell hybrids where detailed chromosome analysis of numerous hybrid clones is necessary. Using this approach, it is possible to localize a gene for which there is no prior sequence or linkage information to a subchromosomal region, thus facilitating association with known mapping landmarks (e.g. RFLP, YAC or STS contigs) for higher-resolution mapping.
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
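The base/detail split at the heart of the fusion method above can be sketched as follows. This is a heavily simplified stand-in: it assumes a plain separable Gaussian filter for the multi-scale decomposition (the paper uses a rolling guidance filter plus Gaussian filter), a plain average for the base layers (the paper uses a VSM-weighted rule), and a max-absolute rule for the detail layers (the paper uses WLS optimization). All function names are illustrative.

```python
import numpy as np

def _blur1d(a, kernel):
    # Convolve each column of a 2D array with a 1D kernel.
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, a)

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur built from a truncated 1D kernel."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return _blur1d(_blur1d(img, k).T, k).T

def decompose(img, sigma=2.0):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = gaussian_blur(img, sigma)
    return base, img - base

def fuse(ir, vis, sigma=2.0):
    """Two-scale fusion sketch: average the bases, keep the larger-
    magnitude detail at each pixel, then recombine."""
    b1, d1 = decompose(ir, sigma)
    b2, d2 = decompose(vis, sigma)
    base = 0.5 * (b1 + b2)                               # paper: VSM-weighted
    detail = np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # paper: WLS-optimized
    return base + detail
```

Note that `decompose` is exactly invertible (base + detail reconstructs the input), which is the property that lets the two layers be fused with different rules and then simply summed.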
NASA Astrophysics Data System (ADS)
Farsadnia, Farhad; Ghahreman, Bijan
2016-04-01
Hydrologic homogeneous group identification is considered both fundamental and applied research in hydrology. Clustering methods are among the conventional approaches for delineating hydrologically homogeneous regions. Recently, the Self-Organizing feature Map (SOM) method has been applied in some studies; however, the main difficulty with this method is interpreting its output map. Therefore, SOM is often used as input to other clustering algorithms. The aim of this study is to apply a two-level Self-Organizing feature map and the Ward hierarchical clustering method to determine hydrologically homogeneous regions in the North and Razavi Khorasan provinces. First, principal component analysis was used to reduce the dimension of the SOM input matrix; the SOM was then used to form a two-dimensional feature map. To determine homogeneous regions for flood frequency analysis, the SOM output nodes were used as input to the Ward method. Generally, the regions identified by clustering algorithms are not statistically homogeneous, and consequently they have to be adjusted to improve their homogeneity. After adjustment of the regions by L-moment tests, five hydrologically homogeneous regions were identified. Finally, adjusted regions were created by a two-level SOM, and the best regional distribution function and associated parameters were selected by the L-moment approach. The results showed that the combination of self-organizing maps and Ward hierarchical clustering with principal components as input is more effective than the hierarchical method with principal components or standardized inputs for identifying hydrologically homogeneous regions.
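A minimal numpy-only sketch of the PCA-then-Ward part of the pipeline is given below; the SOM stage between them is omitted, and the naive O(n³) agglomeration here stands in for a proper hierarchical Ward implementation. The function names are illustrative.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def ward_cluster(X, n_clusters):
    """Naive Ward agglomeration: repeatedly merge the pair of clusters
    whose union least increases the within-cluster sum of squares."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ca, cb = X[clusters[a]].mean(0), X[clusters[b]].mean(0)
                na, nb = len(clusters[a]), len(clusters[b])
                # Ward's merge cost: increase in SSE if a and b are joined.
                cost = na * nb / (na + nb) * np.sum((ca - cb) ** 2)
                if cost < best:
                    best, pair = cost, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)
    return clusters
```

In the study's setting each row of `X` would be a catchment's attribute vector; the resulting clusters are candidate homogeneous regions that still need the L-moment homogeneity adjustment described above.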
Ye, Huihui; Ma, Dan; Jiang, Yun; Cauley, Stephen F; Du, Yiping; Wald, Lawrence L; Griswold, Mark A; Setsompop, Kawin
2016-05-01
We incorporate simultaneous multislice (SMS) acquisition into MR fingerprinting (MRF) to accelerate the MRF acquisition. The t-Blipped SMS-MRF method is achieved by adding a Gz blip before each data acquisition window and balancing it with a Gz blip of opposing polarity at the end of each TR. Thus the signals from different simultaneously excited slices are encoded with different phases without disturbing the signal evolution. Furthermore, by varying the Gz blip area and/or polarity as a function of repetition time, the slices' differential phase can also be made to vary as a function of time. For reconstruction of t-Blipped SMS-MRF data, we demonstrate a combined slice-direction SENSE and modified dictionary matching method. In Monte Carlo simulation, the parameter mapping from multiband factor (MB) = 2 t-Blipped SMS-MRF shows good accuracy and precision when compared with results from reference conventional MRF data, with concordance correlation coefficients (CCC) of 0.96 for T1 estimates and 0.90 for T2 estimates. For in vivo experiments, T1 and T2 maps from MB = 2 t-Blipped SMS-MRF agree closely with those from conventional MRF. The MB = 2 t-Blipped SMS-MRF acquisition/reconstruction method has been demonstrated and validated to provide more rapid parameter mapping in the MRF framework. © 2015 Wiley Periodicals, Inc.
Evaluation of EREP techniques for geological mapping. [southern Pyrenees and Ebro basin in Spain
NASA Technical Reports Server (NTRS)
Vandermeermohr, H. E. C.; Srivastava, G. S. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Skylab photographs may be successfully utilized for preparing a reconnaissance geological map in areas where no maps or only semi-detailed maps exist. The large area coverage and regional perspective of Skylab photographs can support better coordination in regional mapping. It is possible to delineate major structural trends and other features such as mega-lineaments, geofractures, and faults, which have evaded detection by conventional methods. The photointerpretability is better in areas dominated by sedimentary rocks. Rock units of smaller extent and with poor geomorphic expression are difficult to map. Demarcation of quaternary river alluvium can be made with better precision and ease with the Skylab photographs. Stereoscopic viewing greatly helps in the interpretation of structures. Skylab photographs are not suitable for preparing geological maps at scales larger than 1:270,000.
Geological mapping goes 3-D in response to societal needs
Thorleifson, H.; Berg, R.C.; Russell, H.A.J.
2010-01-01
The transition to 3-D mapping has been made possible by technological advances in digital cartography, GIS, data storage, analysis, and visualization. Despite various challenges, technological advancements facilitated a gradual transition from 2-D maps to 2.5-D draped maps to 3-D geological mapping, supported by digital spatial and relational databases that can be interrogated horizontally or vertically and viewed interactively. Challenges associated with data collection, human resources, and information management are daunting due to their resource and training requirements. The exchange of strategies at the workshops has highlighted the use of basin analysis to develop a process-based predictive knowledge framework that facilitates data integration. Three-dimensional geological information meets a public demand that fills in the blanks left by conventional 2-D mapping. Two-dimensional mapping will, however, remain the standard method for extensive areas of complex geology, particularly where deformed igneous and metamorphic rocks defy attempts at 3-D depiction.
Geographic Information System Software to Remodel Population Data Using Dasymetric Mapping Methods
Sleeter, Rachel; Gould, Michael
2007-01-01
The U.S. Census Bureau provides decadal demographic data collected at the household level and aggregated to larger enumeration units for anonymity purposes. Although this system is appropriate for the dissemination of large amounts of national demographic data, the boundaries of the enumeration units often do not reflect the distribution of the underlying statistical phenomena. Conventional mapping methods, such as choropleth mapping, are primarily employed due to their ease of use. However, the analytical drawbacks of choropleth methods are well known, ranging from (1) the artificial transition of population at the boundaries of mapping units to (2) the assumption that the phenomenon is evenly distributed across the enumeration unit (when in actuality there can be significant variation). Many methods to map population distribution have been developed in the geographic information systems (GIS) and remote sensing fields. Many cartographers prefer dasymetric mapping for population because of its ability to more accurately distribute data over geographic space. Like a choropleth map, a dasymetric map utilizes standardized data (for example, census data). However, rather than using arbitrary enumeration zones to symbolize population distribution, a dasymetric approach introduces ancillary information to redistribute the standardized data into zones relative to land use and land cover (LULC), taking into consideration actual changing densities within the boundaries of the enumeration unit. Thus, new zones are created that correlate to the function of the map, capturing spatial variations in population density. The transfer of data from census enumeration units to ancillary-driven homogeneous zones is performed by a process called areal interpolation.
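The areal-interpolation step described above can be sketched as a simple weighted redistribution. The function below is a hypothetical illustration of weighted (here effectively binary) dasymetric redistribution for a single enumeration unit, assuming the per-zone areas and assumed relative densities derived from LULC are already known.

```python
def dasymetric_redistribute(population, zone_areas, zone_weights):
    """Redistribute one enumeration unit's population to ancillary LULC
    zones in proportion to area * assumed relative density weight."""
    mass = {z: zone_areas[z] * zone_weights.get(z, 0.0) for z in zone_areas}
    total = sum(mass.values())
    if total == 0:
        raise ValueError("no inhabitable zones in this enumeration unit")
    return {z: population * m / total for z, m in mass.items()}

# A tract of 1000 people split by LULC: residential land absorbs all of it,
# forest (weight 0) receives none.
out = dasymetric_redistribute(
    1000,
    zone_areas={"residential": 2.0, "forest": 8.0},   # km^2
    zone_weights={"residential": 1.0},                # forest defaults to 0
)
# out == {"residential": 1000.0, "forest": 0.0}
```

The key property is mass preservation: the redistributed counts always sum back to the enumeration unit's published total, so the dasymetric map remains consistent with the census data.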
A Bike Built for Magnetic Mapping
NASA Astrophysics Data System (ADS)
Schattner, U.; Segev, A.; Lyakhovsky, V.
2017-12-01
Understanding the magnetic signature of the subsurface geology is crucial for structural, groundwater, earthquake propagation, and mineral studies. The cheapest measuring method is walking with sensors; this approach yields high-resolution maps, yet its coverage is limited. We invented a new design that records magnetic data while riding a bicycle. The new concept offers an efficient, low-cost method of collecting high-resolution ground magnetic field data over rough terrain where conventional vehicles dare not venture, and it improves the efficiency of the traditional method by more than five times. The bike-magnetic approach scales up ground magnetism from a localized site survey to regional coverage. To date we have covered 3300 square km (about the size of Rhode Island) across northern Israel, at a profile spacing of 1-2 km. Initial Total Magnetic Intensity maps reveal a myriad of new features that were not detected by the low-resolution regional aeromagnetic survey that collected data from 1000 m height.
Transmission imaging for integrated PET-MR systems.
Bowen, Spencer L; Fuin, Niccolò; Levine, Michael A; Catana, Ciprian
2016-08-07
Attenuation correction for PET-MR systems continues to be a challenging problem, particularly for body regions outside the head. The simultaneous acquisition of transmission scan based μ-maps and MR images on integrated PET-MR systems may significantly increase the performance of and offer validation for new MR-based μ-map algorithms. For the Biograph mMR (Siemens Healthcare), however, use of conventional transmission schemes is not practical as the patient table and relatively small diameter scanner bore significantly restrict radioactive source motion and limit source placement. We propose a method for emission-free coincidence transmission imaging on the Biograph mMR. The intended application is not for routine subject imaging, but rather to improve and validate MR-based μ-map algorithms; particularly for patient implant and scanner hardware attenuation correction. In this study we optimized source geometry and assessed the method's performance with Monte Carlo simulations and phantom scans. We utilized a Bayesian reconstruction algorithm, which directly generates μ-map estimates from multiple bed positions, combined with a robust scatter correction method. For simulations with a pelvis phantom a single torus produced peak noise equivalent count rates (34.8 kcps) dramatically larger than a full axial length ring (11.32 kcps) and conventional rotating source configurations. Bias in reconstructed μ-maps for head and pelvis simulations was ⩽4% for soft tissue and ⩽11% for bone ROIs. An implementation of the single torus source was filled with (18)F-fluorodeoxyglucose and the proposed method quantified for several test cases alone or in comparison with CT-derived μ-maps. A volume average of 0.095 cm(-1) was recorded for an experimental uniform cylinder phantom scan, while a bias of <2% was measured for the cortical bone equivalent insert of the multi-compartment phantom. 
Single torus μ-maps of a hip implant phantom showed significantly fewer artifacts and improved dynamic range compared to CT results, and differed greatly from CT for highly attenuating materials in the case of the patient table. Use of a fixed torus geometry, in combination with translation of the patient table to perform complete tomographic sampling, generated highly quantitative measured μ-maps and is expected to produce images with significantly higher SNR than competing fixed geometries at matched total acquisition time.
Fu, Yongqing; Li, Xingyuan; Li, Yanan; Yang, Wei; Song, Hailiang
2013-03-01
Chaotic communication has attracted broad interest in recent years, but its performance is limited by the requirement for chaos synchronization. In this paper a new chaotic M-ary digital modulation and demodulation method is proposed. By exploiting the region-controllable characteristics of the spatiotemporal chaotic Hamilton map in the phase plane, together with the defining characteristic of chaos, its sensitivity to initial values, a zone mapping method is proposed. It establishes a map relationship between M-ary digital information and regions of the Hamilton map phase plane, and thus realizes M-ary chaotic modulation. In addition, a zone partition demodulation method is proposed based on the structural characteristics of the Hamilton-modulated information, which separates the M-ary information from the phase trajectory of the chaotic Hamilton map; a theoretical analysis of the zone partition demodulator's decision boundaries is also given. Finally, a communication system based on the two methods is constructed on a personal computer. Simulations show that, in high-speed transmission and without chaos synchronization, the proposed chaotic M-ary modulation and demodulation method outperforms some conventional M-ary modulation methods, such as quadrature phase shift keying and M-ary pulse amplitude modulation, in bit error rate. It also shows improvements in bandwidth efficiency, transmission efficiency, and anti-noise performance, while the system complexity is low and the chaotic signal is easy to generate.
NASA Astrophysics Data System (ADS)
Xue, Wei; Wang, Qi; Wang, Tianyu
2018-04-01
This paper presents an improved parallel combinatory spread spectrum (PC/SS) communication system based on a method of double information matching (DIM). Compared with the conventional PC/SS system, the new model inherits the advantages of high transmission speed, large information capacity, and high security. A weakness of the traditional system, however, is its high bit error rate (BER), which stems from its data-sequence mapping algorithm. By optimizing the mapping algorithm, the presented model achieves a lower BER and higher efficiency.
Interpretation of ERTS-MSS images of a Savanna area in eastern Colombia
NASA Technical Reports Server (NTRS)
Elberson, G. W. W.
1973-01-01
The application of ERTS-1 imagery for extrapolating existing soil maps into unmapped areas of the Llanos Orientales of Colombia, South America is discussed. Interpretations of ERTS-1 data were made according to conventional photointerpretation techniques. Most units delineated in the existing reconnaissance soil map at a scale of 1:250,000 could be recognized and delineated in the ERTS image. The methods of interpretation are described and the results obtained for specific areas are analyzed.
Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis
Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2017-01-01
Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066
Bang, Yoonsik; Kim, Jiyoung; Yu, Kiyun
2016-01-01
Wearable and smartphone technology innovations have propelled the growth of Pedestrian Navigation Services (PNS). PNS need a map-matching process to project a user’s locations onto maps. Many map-matching techniques have been developed for vehicle navigation services. These techniques are inappropriate for PNS because pedestrians move, stop, and turn in different ways compared to vehicles. In addition, the base map data for pedestrians are more complicated than for vehicles. This article proposes a new map-matching method for locating Global Positioning System (GPS) trajectories of pedestrians onto road network datasets. The theory underlying this approach is based on the Fréchet distance, one of the measures of geometric similarity between two curves. The Fréchet distance approach can provide reasonable matching results because two linear trajectories are parameterized with the time variable. Then we improved the method to be adaptive to the positional error of the GPS signal. We used an adaptation coefficient to adjust the search range for every input signal, based on the assumption of auto-correlation between consecutive GPS points. To reduce errors in matching, the reliability index was evaluated in real time for each match. To test the proposed map-matching method, we applied it to GPS trajectories of pedestrians and the road network data. We then assessed the performance by comparing the results with reference datasets. Our proposed method performed better with test data when compared to a conventional map-matching technique for vehicles. PMID:27782091
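The core similarity measure underlying the map-matching above can be sketched with the standard discrete Fréchet distance dynamic program (Eiter and Mannila). The paper's method additionally parameterizes both curves by time, adapts the search range to GPS error, and computes a reliability index, none of which are shown here.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between point sequences P and Q:
    the minimum over all monotone couplings of the maximum pointwise
    distance (Eiter-Mannila dynamic program, memoized recursion)."""
    def d(i, j):
        return math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

# Two parallel polylines one unit apart: the Fréchet distance is 1.0,
# the width of the narrowest "leash" that lets a walker traverse each
# curve monotonically.
gps_trace = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
road_edge = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
```

In a pedestrian map-matcher, the GPS segment would be compared against each candidate road edge and the edge minimizing this distance chosen; the monotone-coupling requirement is what makes the measure respect the order in which the pedestrian traverses the trajectory.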
DOT National Transportation Integrated Search
2004-09-01
Conventionally, the road centerline surveys have : been performed by the traditional survey methods, : providing rather high, even sub-centimeter level of : accuracy. The major problem, however, that the : Departments of Transportation face, is the s...
NASA Technical Reports Server (NTRS)
Tanton, George; Kesmodel, Roy; Burden, Judy; Su, Ching-Hua; Cobb, Sharon D.; Lehoczky, S. L.
2000-01-01
HgZnSe and HgZnTe are electronic materials of interest for potential IR detector and focal plane array applications due to their improved strength and compositional stability over HgCdTe, but they are difficult to grow on Earth and to fully characterize. Conventional contact methods of characterization, such as Hall and van der Pauw, although adequate for many situations, are typically labor intensive and not entirely suitable where only very small samples are available. To adequately characterize and compare properties of electronic materials grown in low Earth orbit with those grown on Earth, innovative techniques are needed that complement existing methods. This paper describes the implementation and test results of a unique non-contact method of characterizing uniformity, mobility, and carrier concentration, together with results from conventional methods applied to HgZnSe and HgZnTe. The innovative method has advantages over conventional contact methods, since it circumvents problems of possible contamination from alloying electrical contacts to a sample and also has the capability to map a sample. Non-destructive mapping, the determination of the carrier concentration and mobility at each place on a sample, provides a means to quantitatively compare, at high spatial resolution, effects of microgravity on the electronic properties and uniformity of electronic materials grown in low Earth orbit with Earth-grown materials. The mapping technique described here uses a 1 mm diameter polarized beam of radiation to probe the sample. Activation of a magnetic field, in which the sample is placed, causes the plane of polarization of the probe beam to rotate. This Faraday rotation is a function of the free carrier concentration and the band parameters of the material. Maps of carrier concentration, mobility, and transmission generated from measurements of the Faraday rotation angles over the temperature range from 300K to 77K will be presented.
New information on band parameters, obtained by combining results from conventional Hall measurements of the free carrier concentration with Faraday rotation measurements, will also be presented. One example of how this type of information was derived is illustrated in the following figure, which shows Faraday rotation vs. wavelength modeled for Hg(1-x)Zn(x)Se at a temperature of 300K and x = 0.07. The plasma contribution, total Faraday rotation, and interband contribution to the Faraday rotation are designated in the figure as δ(p), FR tot, and δ(i), respectively. Experimentally measured values of FR tot, each indicated by +, agree acceptably well with the model at the probe wavelength of 10.6 microns. The model shows that at the probe wavelength practically all the rotation is due to the plasma component, which can be expressed as δ_p = 2π e³ N B L / (c² n m*² ω²). In this equation, δ_p is the rotation angle due to the free-carrier plasma, N is the free carrier concentration, B the magnetic field strength, L the thickness of the sample, n the index of refraction, ω the probe radiation frequency, c the speed of light, e the electron charge, and m* the effective mass. A measurement of N by conventional techniques, combined with a measurement of the Faraday rotation angle, allows m* to be accurately determined, since it is an inverse square function.
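To make the stated relation concrete, the sketch below evaluates the plasma rotation angle and then inverts it for the effective mass m*, as described in the text. Units are Gaussian-CGS, consistent with the form of the formula; the numerical constants for e and c and the function names are illustrative, not from the paper.

```python
import math

def plasma_rotation(N, B, L, n, omega, m_eff, e=4.803e-10, c=2.998e10):
    """Free-carrier (plasma) Faraday rotation angle, Gaussian-CGS units:
    delta_p = 2*pi*e^3 * N * B * L / (c^2 * n * m*^2 * omega^2)."""
    return 2 * math.pi * e**3 * N * B * L / (c**2 * n * m_eff**2 * omega**2)

def effective_mass(delta_p, N, B, L, n, omega, e=4.803e-10, c=2.998e10):
    """Invert the relation above for m*, using a Hall-measured N and a
    measured rotation angle (m* enters as an inverse square)."""
    return math.sqrt(2 * math.pi * e**3 * N * B * L / (c**2 * n * omega**2 * delta_p))
```

For example, with an assumed N from a conventional Hall measurement and the rotation angle measured at the 10.6 micron probe wavelength, `effective_mass` recovers the m* that `plasma_rotation` would predict, which is the combination of techniques the paragraph describes.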
Dynamical density delay maps: simple, new method for visualising the behaviour of complex systems
2014-01-01
Background Physiologic signals, such as cardiac interbeat intervals, exhibit complex fluctuations. However, capturing important dynamical properties, including nonstationarities, may not be feasible from conventional time series graphical representations. Methods We introduce a simple-to-implement visualisation method, termed dynamical density delay mapping (“D3-Map” technique), that provides an animated representation of a system’s dynamics. The method is based on a generalisation of conventional two-dimensional (2D) Poincaré plots, which are scatter plots where each data point, x(n), in a time series is plotted against the adjacent one, x(n + 1). First, we divide the original time series, x(n) (n = 1,…, N), into a sequence of segments (windows). Next, for each segment, a three-dimensional (3D) Poincaré surface plot of x(n), x(n + 1), h[x(n),x(n + 1)] is generated, in which the third dimension, h, represents the relative frequency of occurrence of each (x(n),x(n + 1)) point. This 3D Poincaré surface is then chromatised by mapping the relative frequency h values onto a colour scheme. We also generate a colourised 2D contour plot from each time series segment using the same colourmap scheme as for the 3D Poincaré surface. Finally, the original time series graph, the colourised 3D Poincaré surface plot, and its projection as a colourised 2D contour map for each segment are animated to create the full “D3-Map.” Results We first exemplify the D3-Map method using the cardiac interbeat interval time series from a healthy subject during sleeping hours. The animations uncover complex dynamical changes, such as transitions between states, and the relative amount of time the system spends in each state. We also illustrate the utility of the method in detecting hidden temporal patterns in the heart rate dynamics of a patient with atrial fibrillation. The videos, as well as the source code, are made publicly available.
Conclusions Animations based on density delay maps provide a new way of visualising dynamical properties of complex systems not apparent in time series graphs or standard Poincaré plot representations. Trainees in a variety of fields may find the animations useful as illustrations of fundamental but challenging concepts, such as nonstationarity and multistability. For investigators, the method may facilitate data exploration. PMID:24438439
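The histogram step at the heart of the method — the relative-frequency surface h over (x(n), x(n+1)) pairs, computed per window — can be sketched as follows. The bin count and non-overlapping windowing are illustrative choices; the colourisation and animation stages are omitted:

```python
from collections import Counter

def density_delay_map(x, n_bins=20):
    """Relative frequency h[x(n), x(n+1)] over a 2-D grid of bins:
    the third dimension of the D3-Map's Poincare surface."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant series
    def bin_of(v):
        return min(int((v - lo) / width), n_bins - 1)
    pairs = Counter((bin_of(a), bin_of(b)) for a, b in zip(x, x[1:]))
    total = len(x) - 1
    return {k: c / total for k, c in pairs.items()}

def windowed_maps(x, window):
    """One density map per non-overlapping segment; animating these
    frames as surfaces/contours yields the D3-Map."""
    return [density_delay_map(x[i:i + window])
            for i in range(0, len(x) - window + 1, window)]
```

Each returned dict maps a (row, col) bin to its relative frequency, i.e. one frame's h surface.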
Photogrammetric application of viking orbital photography
Wu, S.S.C.; Elassal, A.A.; Jordan, R.; Schafer, F.J.
1982-01-01
Special techniques are described for the photogrammetric compilation of topographic maps and profiles from stereoscopic photographs taken by the two Viking Orbiter spacecraft. These techniques were developed because the extremely narrow field of view of the Viking cameras precludes compilation by conventional photogrammetric methods. The techniques adjust the Supplementary Experiment Data Record (SEDR, the record of spacecraft orientation when the photographs were taken) for internal consistency and compute the geometric orientation parameters of the stereo models. A series of contour maps of Mars is being compiled by these new methods using a wide variety of Viking Orbiter photographs, to provide the planetary research community with topographic information. © 1982.
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called the target silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images into a target-enhanced map. A basic belief assignment is then used to transform this map into a belief map. The detection results of the sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map on the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by the SE-WORKBENCH simulator and compared with conventional algorithms. The proposed fusion scheme achieves a higher detection rate and a lower false-alarm rate than the conventional algorithms.
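One simple reading of the Boolean-map fusion idea can be sketched as follows. This is our simplification, not the paper's algorithm: each image is decomposed into Boolean maps by thresholding, and a pixel is "enhanced" in proportion to how many Boolean maps of both sensors mark it:

```python
def boolean_maps(img, thresholds):
    """Binarise an image at several thresholds (Boolean-map decomposition)."""
    return [[[1 if v > t else 0 for v in row] for row in img] for t in thresholds]

def target_enhanced_map(sar, ir, thresholds):
    """Hypothetical fusion: count, per pixel, the thresholds at which BOTH
    sensors fire -- a decision-level intersection of Boolean maps."""
    h, w = len(sar), len(sar[0])
    acc = [[0] * w for _ in range(h)]
    for bs, bi in zip(boolean_maps(sar, thresholds), boolean_maps(ir, thresholds)):
        for r in range(h):
            for c in range(w):
                acc[r][c] += bs[r][c] * bi[r][c]
    return acc
```

Requiring agreement between the two modalities is what suppresses single-sensor false alarms in this toy version.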
Assessing Volunteered Geographic Information (VGI) Quality Based on Contributors' Mapping Behaviours
NASA Astrophysics Data System (ADS)
Bégin, D.; Devillers, R.; Roche, S.
2013-05-01
VGI changed the mapping landscape by allowing people who are not professional cartographers to contribute to large mapping projects, raising at the same time concerns about the quality of the data produced. While a number of early VGI studies used conventional methods to assess data quality, such approaches are not always well adapted to VGI. Since VGI is user-generated content, we posit that the features and places mapped by contributors largely reflect contributors' personal interests. This paper proposes studying contributors' mapping processes to understand the characteristics and quality of the data produced. We argue that contributors' behaviour when mapping reflects their motivation and individual preferences in selecting mapped features and delineating mapped areas. Such knowledge of contributors' behaviour could allow for the derivation of information about the quality of VGI datasets. This approach was tested using a sample area from OpenStreetMap, leading to a better understanding of data completeness for contributors' preferred features.
NASA Astrophysics Data System (ADS)
Abedi, Maysam; Norouzi, Gholam-Hossain
2016-04-01
This work presents the promising application of three variants of the TOPSIS method (namely the conventional, adjusted and modified versions) as a straightforward knowledge-driven technique in multi-criteria decision-making processes for data fusion of a broad exploratory geo-dataset in mineral potential/prospectivity mapping. The method is applied to airborne geophysical data (e.g. potassium radiometry, aeromagnetic and frequency-domain electromagnetic data), surface geological layers (fault and host rock zones), alteration layers extracted from remote sensing satellite imagery data, and five evidential attributes from stream sediment geochemical data. The central Iranian volcanic-sedimentary belt in Kerman province in the SE of Iran, which is embedded in the Urumieh-Dokhtar Magmatic Assemblage arc (UDMA), is chosen to integrate broad evidential layers in the prospect region. The studied area has a high potential for ore mineral occurrences, especially porphyry copper/molybdenum, and the generated mineral potential maps aim to outline new prospect zones for further investigation in the future. Two evidential layers, the downward-continued aeromagnetic data and its analytic signal filter, are prepared to be incorporated in the fusion process as geophysically plausible footprints of porphyry-type mineralization. The low values of the apparent resistivity layer calculated from the airborne frequency-domain electromagnetic data are also used as an electrical criterion in this investigation. Four remote sensing evidential layers of argillic, phyllic, propylitic and hydroxyl alterations were extracted from ASTER images in order to map the altered areas associated with porphyry-type deposits, whilst the ETM+ satellite imagery data were used as well to map the iron oxide layer. Since potassium alteration is generally the mainstay of porphyry ore mineralization, the airborne potassium radiometry data were also used.
The geochemical layers of the Cu/B/Pb/Zn elements and the first component of a PCA analysis were considered as powerful traces to prepare the final maps. The conventional, adjusted and modified variants of the TOPSIS method produced three mineral potential maps, whose outputs indicate adequate matching of high-potential zones with previously worked and active mines in the region.
Applied photo interpretation for airbrush cartography
NASA Technical Reports Server (NTRS)
Inge, J. L.; Bridges, P. M.
1976-01-01
New techniques of cartographic portrayal have been developed for the compilation of maps of lunar and planetary surfaces. Conventional photo interpretation methods utilizing size, shape, shadow, tone, pattern, and texture are applied to computer processed satellite television images. The variety of the image data allows the illustrator to interpret image details by inter-comparison and intra-comparison of photographs. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The validity of the interpretation process is tested by making a representational drawing by an airbrush portrayal technique. Production controls insure the consistency of a map series. Photo interpretive cartographic portrayal skills are used to prepare two kinds of map series and are adaptable to map products of different kinds and purposes.
NASA Astrophysics Data System (ADS)
Pussak, Marcin; Bauer, Klaus; Stiller, Manfred; Bujakowski, Wieslaw
2014-04-01
Within a seismic reflection processing work flow, the common-reflection-surface (CRS) stack can be applied as an alternative to the conventional normal moveout (NMO) or dip moveout (DMO) stack. The advantages of the CRS stack include (1) data-driven automatic determination of stacking operator parameters, (2) imaging of arbitrarily curved geological boundaries, and (3) a significant increase in signal-to-noise (S/N) ratio by stacking far more traces than used in a conventional stack. In this paper we applied both NMO and CRS stacking to process a sparse 3D seismic data set acquired within a geothermal exploration study in the Polish Basin. The stacked images show clear enhancements in quality achieved by the CRS stack in comparison with the conventional stack. While this was expected from previous studies, we also found remarkable improvements in the quality of seismic attributes when the CRS stack was applied instead of the conventional stack. For the major geothermal target reservoir (Lower Jurassic horizon Ja1), we present a comparison between both stacking methods for a number of common attributes, including root-mean-square (RMS) amplitudes, instantaneous frequencies, coherency, and spectral decomposition attributes derived from the continuous wavelet transform. The attribute maps appear noisy and highly fluctuating after the conventional stack, and are clearly structured after the CRS stack. A seismic facies analysis was finally carried out for the Ja1 horizon using the attributes derived from the CRS stack and self-organizing map clustering techniques. A corridor parallel to a fault system was identified, which is characterized by decreased RMS amplitudes and decreased instantaneous frequencies. In our interpretation, this region represents a fractured, fluid-bearing compartment within the sandstone reservoir, which indicates favorable conditions for geothermal exploitation.
Doppler Processing with Ultra-Wideband (UWB) Radar Revisited
2018-01-01
…grating lobes as compared to the conventional Doppler processing counterpart. Subject terms: Doppler radar, UWB radar, matched filter, ambiguity… …maps by the matched filter method, illustrating the radar data support in (a) the frequency-slow time domain and (b) the ρ-u domain. The samples… …example, obtained by the matched filter method, for a 1.2-s CPI centered at t = 1.5 s.
A NEW LOG EVALUATION METHOD TO APPRAISE MESAVERDE RE-COMPLETION OPPORTUNITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert Greer
2003-09-11
Artificial intelligence tools, fuzzy logic and neural networks, were used to evaluate the potential of the behind-pipe Mesaverde formation in BMG's Mancos formation wells. A fractal geostatistical mapping algorithm was also used to predict Mesaverde production. Additionally, a conventional geological study was conducted. To date one Mesaverde completion has been performed. The Janet No. 3 Mesaverde completion was non-economic. Both the AI method and the geostatistical methods predicted the failure of the Janet No. 3. The Gavilan No. 1 in the Mesaverde was completed during the course of the study and was an extremely good well. This well was not included in the statistical dataset. The AI method predicted very good production, while the fractal map predicted a poor producer.
Li, Jingjing; Li, Jinrong; Chen, Zidong; Liu, Jing; Yuan, Junpeng; Cai, Xiaoxiao; Deng, Daming; Yu, Minbin
2017-01-01
We investigate the efficacy of a novel dichoptic mapping paradigm in evaluating visual function of anisometropic amblyopes. Using standard clinical measures of visual function (visual acuity, stereo acuity, Bagolini lenses, and neutral density filters) and a novel quantitative mapping technique, 26 patients with anisometropic amblyopia (mean age = 19.15 ± 4.42 years) were assessed. Two additional psychophysical interocular suppression measurements were tested with dichoptic global motion coherence and binocular phase combination tasks. Luminance reduction was achieved by placing neutral density filters in front of the normal eye. Our study revealed that suppression changes across the central 10° visual field by mean luminance modulation in amblyopes as well as normal controls. Using simulation and an elimination of interocular suppression, we identified a novel method to effectively reflect the distribution of suppression in anisometropic amblyopia. Additionally, the new quantitative mapping technique was in good agreement with conventional clinical measures, such as interocular acuity difference (P < 0.001) and stereo acuity (P = 0.005). There was a good consistency between the results of interocular suppression with dichoptic mapping paradigm and the results of the other two psychophysical methods (suppression mapping versus binocular phase combination, P < 0.001; suppression mapping versus global motion coherence, P = 0.005). The dichoptic suppression mapping technique is an effective method to represent impaired visual function in patients with anisometropic amblyopia. It offers a potential in "micro-"antisuppression mapping tests and therapies for amblyopia.
Zhao, Chenguang; Bolan, Patrick J.; Royce, Melanie; Lakkadi, Navneeth; Eberhardt, Steven; Sillerud, Laurel; Lee, Sang-Joon; Posse, Stefan
2012-01-01
Purpose To quantitatively measure tCho levels in healthy breasts using Proton-Echo-Planar-Spectroscopic-Imaging (PEPSI). Material and Methods The 2-dimensional mapping of tCho at 3 Tesla across an entire breast slice using PEPSI and a hybrid spectral quantification method based on LCModel fitting and integration of tCho using the fitted spectrum were developed. This method was validated in 19 healthy females and compared with single voxel spectroscopy (SVS) and with PRESS-prelocalized conventional Magnetic Resonance Spectroscopic Imaging (MRSI) using identical voxel size (8 cc) and similar scan times (~7 min). Results A tCho peak with a signal-to-noise ratio larger than 2 was detected in 10 subjects using both PEPSI and SVS. The average tCho concentration in these subjects was 0.45 ± 0.2 mmol/kg using PEPSI and 0.48 ± 0.3 mmol/kg using SVS. Comparable results were obtained in 2 subjects using conventional MRSI. High lipid content in the spectra of 9 tCho-negative subjects was associated with spectral line broadening of more than 26 Hz, which made tCho detection impossible. Conventional MRSI with PRESS prelocalization in glandular tissue in two of these subjects yielded tCho concentrations comparable to PEPSI. Conclusion The detection sensitivity of PEPSI is comparable to SVS and conventional PRESS-MRSI. PEPSI can potentially be used in the evaluation of tCho in breast cancer. A tCho threshold concentration value of ~0.7 mmol/kg might be used to differentiate between cancerous and healthy (or benign) breast tissues based on this work and previous studies. PMID:22782667
Structure-aware depth super-resolution using Gaussian mixture model
NASA Astrophysics Data System (ADS)
Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon
2015-03-01
This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that pixels with similar color likely belong to similar depths. This assumption may induce texture transfer from the color image into the depth map and edge-blurring artifacts at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
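One way to picture the fixed-point iteration is as a self-guided filter in which the affinities are recomputed from the current depth estimate on each pass, so depth edges are preserved rather than blurred by color texture. The sketch below is a 1-D toy under that assumption, not the paper's actual model or parameters:

```python
import math

def refine_depth(depth, iters=10, sigma_d=0.1, sigma_s=2.0, radius=2):
    """Fixed-point iteration for a self-guided depth filter: affinities are
    recomputed from the CURRENT depth estimate each pass (the non-linearity
    the paper addresses).  1-D toy version with Gaussian range/space weights."""
    d = list(depth)
    for _ in range(iters):
        new = []
        for i in range(len(d)):
            num = den = 0.0
            for j in range(max(0, i - radius), min(len(d), i + radius + 1)):
                # affinity from current depth difference and spatial distance
                w = math.exp(-((d[i] - d[j]) ** 2) / (2 * sigma_d ** 2)
                             - ((i - j) ** 2) / (2 * sigma_s ** 2))
                num += w * d[j]
                den += w
            new.append(num / den)  # den >= 1 since j == i contributes w = 1
        d = new
    return d
```

On a noisy step signal, the plateaus smooth out while the step itself survives, which is the behavior the depth prior is after.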
Lee, Jung-Ju; Lee, Sang Kun; Choi, Jang Wuk; Kim, Dong-Wook; Park, Kyung Il; Kim, Bom Sahn; Kang, Hyejin; Lee, Dong Soo; Lee, Seo-Young; Kim, Sung Hun; Chung, Chun Kee; Nam, Hyeon Woo; Kim, Kwang Ki
2009-12-01
Ictal single-photon emission computed tomography (SPECT) is a valuable method for localizing the ictal onset zone in the presurgical evaluation of patients with intractable epilepsy. Conventional methods used to localize the ictal onset zone suffer from the time lag from seizure onset to injection. Our objective was to evaluate the clinical usefulness of a method we developed, involving an attachable automated injector (AAI), in reducing this time lag and improving the ability to localize the zone of seizure onset. Patients admitted to the epilepsy monitoring unit (EMU) between January 1, 2003, and June 30, 2008, were included. The definition of the ictal onset zone was made by comprehensive review of medical records, magnetic resonance imaging (MRI), data from video electroencephalography (EEG) monitoring, and invasive EEG monitoring if available. We comprehensively evaluated the time lag to injection and the image patterns of ictal SPECT using traditional visual analysis, statistical parametric mapping-assisted analysis, and subtraction ictal SPECT coregistered to MRI-assisted analysis. Image patterns were classified as localizing, lateralizing, and nonlateralizing. In total there were 99 patients: 48 in the conventional group and 51 in the AAI group. The mean (SD) delay time to injection from seizure onset was 12.4 ± 12.0 s in the group injected by our AAI method and 40.4 ± 26.3 s in the group injected by the conventional method (P < 0.001). The mean delay time to injection from seizure detection was 3.2 ± 2.5 s in the group injected by the AAI method and 21.4 ± 9.7 s in the group injected by the conventional method (P < 0.001). The AAI method was superior to the conventional method in localizing the area of seizure onset (36 out of 51 with the AAI method vs. 21 out of 48 with the conventional method, P = 0.009), especially in non-temporal lobe epilepsy (non-TLE) patients (17 out of 27 with the AAI method vs. 3 out of 13 with the conventional method, P = 0.041), and in lateralizing the seizure onset hemisphere (47 out of 51 with the AAI method vs. 33 out of 48 with the conventional method, P = 0.004). The AAI method was superior to the conventional method in reducing the time lag of tracer injection and in localizing and lateralizing the ictal onset zone, especially in patients with non-TLE.
The application of multiple reaction monitoring and multi-analyte profiling to HDL proteins
2014-01-01
Background HDL carries a rich protein cargo, and examining HDL protein composition promises to improve our understanding of its functions. Conventional mass spectrometry methods can be lengthy and difficult to extend to large populations. In addition, without prior enrichment of the sample, the ability of these methods to detect low-abundance proteins is limited. Our objective was to develop a high-throughput approach to examine HDL protein composition applicable to diabetes and cardiovascular disease (CVD). Methods We optimized two multiplexed assays to examine HDL proteins using a quantitative immunoassay (Multi-Analyte Profiling, MAP) and mass spectrometry-based quantitative proteomics (Multiple Reaction Monitoring, MRM). We screened HDL proteins using human xMAP (90-protein panel) and MRM (56-protein panel). We extended the application of these two methods to HDL isolated from a group of participants with diabetes and prior cardiovascular events and a group of non-diabetic controls. Results We were able to quantitate 69 HDL proteins using MAP and 32 proteins using MRM. For several common proteins, the use of MRM and MAP was highly correlated (p < 0.01). Using MAP, several low-abundance proteins implicated in atherosclerosis and inflammation were found on HDL. On the other hand, MRM allowed the examination of several HDL proteins not available by MAP. Conclusions MAP and MRM offer a sensitive and high-throughput approach to examine changes in HDL proteins in diabetes and CVD. This approach can be used to measure the presented HDL proteins in large clinical studies. PMID:24397693
Mapping experiment with space station
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1986-01-01
Mapping of the Earth from space stations can be approached in two areas. One is to collect gravity data for defining the topographic datum, using Earth's gravity field in terms of spherical harmonics. The other is to search for and explore techniques of mapping topography using either optical or radar images, with or without reference to ground control points. Without ground control points, an integrated camera system can be designed. With ground control points, the position of the space station (camera station) can be precisely determined at any instant. Therefore, terrestrial topography can be precisely mapped either by conventional photogrammetric methods or by current digital technology of image correlation. For the mapping experiment, it is proposed to establish four ground points either in North America or Africa (including the Sahara desert). If this experiment is successfully accomplished, it may also be applied to defense charting systems.
A review of MRI evaluation of demyelination in cuprizone murine model
NASA Astrophysics Data System (ADS)
Krutenkova, E.; Pan, E.; Khodanovich, M.
2015-11-01
The cuprizone (CPZ) mouse model of non-autoimmune demyelination reproduces some phenomena of multiple sclerosis and is appropriate for the validation and specification of new methods of non-invasive diagnostics. In the reviewed studies, data collected using a new MRI method are compared with one or more conventional MRI tools. The paper also reviews the validation of MRI approaches against histological or immunohistochemical methods; Luxol fast blue histological staining and myelin basic protein immunostaining are widespread. To improve the accuracy of non-invasive conventional MRI, multimodal scanning can be applied. The new quantitative MRI method of fast mapping of the macromolecular proton fraction (MPF) is a reliable biomarker of myelin in the brain and can be used for research on demyelination in animals. To date, the MPF method has not been validated on the CPZ mouse model of demyelination, although it is probably the best way to evaluate demyelination using MRI.
Chen, Zikuan; Calhoun, Vince D
2016-03-01
Conventionally, independent component analysis (ICA) is performed on an fMRI magnitude dataset to analyze brain functional mapping (AICA). By solving the inverse problem of fMRI, we can reconstruct the brain magnetic susceptibility (χ) functional states. Upon the reconstructed χ dataspace, we propose an ICA-based brain functional χ mapping method (χICA) to extract the task-evoked brain functional map. A complex division algorithm is applied to a time series of fMRI phase images to extract temporal phase changes (relative to an OFF-state snapshot). A computed inverse MRI (CIMRI) model is used to reconstruct a 4D brain χ response dataset. χICA is implemented by applying a spatial InfoMax ICA algorithm to the reconstructed 4D χ dataspace. With finger-tapping experiments on a 7T system, the χICA-extracted χ-depicted functional map is similar to the SPM-inferred functional χ map, with a spatial correlation of 0.67 ± 0.05. In comparison, the AICA-extracted magnitude-depicted map is correlated with the SPM magnitude map by 0.81 ± 0.05. Understanding the inferiority of χICA to AICA for task-evoked functional mapping is an ongoing research topic. For task-evoked brain functional mapping, we compare the data-driven ICA method with the task-correlated SPM method. In particular, we compare χICA with AICA for extracting task-correlated time courses and functional maps. χICA can extract a χ-depicted task-evoked brain functional map from a reconstructed χ dataspace without knowledge of the brain hemodynamic responses. The χICA-extracted brain functional χ map reveals a bidirectional BOLD response pattern that is unavailable from (or different in) AICA. Copyright © 2016 Elsevier B.V. All rights reserved.
Stone, B N; Griesinger, G L; Modelevsky, J L
1984-01-01
We describe an interactive computational tool, PLASMAP, which allows the user to electronically store, retrieve, and display circular restriction maps. PLASMAP permits users to construct libraries of plasmid restriction maps as a set of files which may be edited in the laboratory at any time. The display feature of PLASMAP quickly generates device-independent, artist-quality, full-color or monochrome, hard copies or CRT screens of complex, conventional circular restriction maps. PMID:6320096
Experimental realization of a highly secure chaos communication under strong channel noise
NASA Astrophysics Data System (ADS)
Ye, Weiping; Dai, Qionglin; Wang, Shihong; Lu, Huaping; Kuang, Jinyu; Zhao, Zhenfeng; Zhu, Xiangqing; Tang, Guoning; Huang, Ronghuai; Hu, Gang
2004-09-01
A one-way coupled spatiotemporally chaotic map lattice is used to construct a cryptosystem. With the combined application of both chaotic computations and conventional algebraic operations, our system has cryptographic properties much better than those of separate applications of known chaotic and conventional methods. We have carried out experiments on duplex secure voice communications over a realistic wired Public Switched Telephone Network, applying our chaotic system and the Advanced Encryption Standard (AES), respectively, for cryptography. Our system can work stably against strong channel noise where AES fails to work.
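A one-way coupled map lattice keystream generator can be sketched as follows. The logistic map, lattice size, coupling strength, and key schedule here are illustrative stand-ins, not the published system; real designs use carefully chosen maps and keyspaces:

```python
def cml_keystream(nbytes, key, size=8, eps=0.9, a=3.99, burn=100):
    """One-way coupled logistic-map lattice:
    x'[j] = (1-eps)*f(x[j]) + eps*f(x[j-1]),  f(x) = a*x*(1-x).
    Site 0 runs freely; the last site's orbit is quantised into bytes.
    The key -> initial-state mapping is a toy placeholder."""
    f = lambda x: a * x * (1 - x)
    x = [(key * (j + 1) * 0.123456789) % 0.9 + 0.05 for j in range(size)]
    out = []
    for step in range(burn + nbytes):
        # synchronous update: the comprehension reads the OLD lattice state
        x = [f(x[0])] + [(1 - eps) * f(x[j]) + eps * f(x[j - 1])
                         for j in range(1, size)]
        if step >= burn:
            out.append(int(x[-1] * 256) % 256)
    return out

def xor_cipher(data, key):
    """Stream cipher: XOR the payload with the chaotic keystream.
    Applying it twice with the same key decrypts."""
    ks = cml_keystream(len(data), key)
    return bytes(b ^ k for b, k in zip(data, ks))
```

Because encryption and decryption are the same XOR, both ends only need to regenerate the identical chaotic orbit from the shared key.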
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1977-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1975-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
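The core of the U-D time update is a weighted Gram-Schmidt factorization: given a rectangular W and non-negative weights d, it produces a unit upper triangular U and diagonal D with U diag(D) U^T = W diag(d) W^T. A minimal sketch of that factorization step follows; in the full time update W would be assembled as [Phi*U | G] with d stacking D and the process-noise weights, an assembly omitted here:

```python
def wgs_factor(W, d):
    """Weighted Gram-Schmidt: given W (n x m, as row lists) and weights d
    (length m), return unit upper triangular U and diagonal D such that
    U diag(D) U^T = W diag(d) W^T.  Columns are processed last-to-first,
    orthogonalizing earlier rows against each finished one."""
    n = len(W)
    w = [row[:] for row in W]                      # working copy of the rows
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    dot = lambda a, b: sum(dk * ak * bk for dk, ak, bk in zip(d, a, b))
    for j in range(n - 1, -1, -1):
        D[j] = dot(w[j], w[j])
        if D[j] == 0.0:
            continue                               # degenerate direction
        for i in range(j):
            U[i][j] = dot(w[i], w[j]) / D[j]
            w[i] = [wi - U[i][j] * wj for wi, wj in zip(w[i], w[j])]
    return U, D
```

Because only inner products and row updates are used, no square roots appear, which is the point of the square-root-free U-D form.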
Passaro, Antony D; Vettel, Jean M; McDaniel, Jonathan; Lawhern, Vernon; Franaszczuk, Piotr J; Gordon, Stephen M
2017-03-01
During an experimental session, behavioral performance fluctuates, yet most neuroimaging analyses of functional connectivity derive a single connectivity pattern. These conventional connectivity approaches assume that since the underlying behavior of the task remains constant, the connectivity pattern is also constant. We introduce a novel method, behavior-regressed connectivity (BRC), to directly examine behavioral fluctuations within an experimental session and capture their relationship to changes in functional connectivity. This method employs the weighted phase lag index (WPLI) applied to a window of trials with a weighting function. Using two datasets, the BRC results are compared to conventional connectivity results during two time windows: the one second before stimulus onset to identify predictive relationships, and the one second after onset to capture task-dependent relationships. In both tasks, we replicate the expected results for the conventional connectivity analysis, and extend our understanding of the brain-behavior relationship using the BRC analysis, demonstrating subject-specific BRC maps that correspond to both positive and negative relationships with behavior. Comparison with Existing Method(s): Conventional connectivity analyses assume a consistent relationship between behaviors and functional connectivity, but the BRC method examines performance variability within an experimental session to understand dynamic connectivity and transient behavior. The BRC approach examines connectivity as it covaries with behavior to complement the knowledge of underlying neural activity derived from conventional connectivity analyses. Within this framework, BRC may be implemented for the purpose of understanding performance variability both within and between participants. Published by Elsevier B.V.
Song, Pengfei; Manduca, Armando; Zhao, Heng; Urban, Matthew W.; Greenleaf, James F.; Chen, Shigao
2014-01-01
A fast shear compounding method was developed in this study using only one shear wave push-detect cycle, such that the shear wave imaging frame rate is preserved and motion artifacts are minimized. The proposed method is composed of the following steps: 1. applying a comb-push to produce multiple differently angled shear waves at different spatial locations simultaneously; 2. decomposing the complex shear wave field into individual shear wave fields with differently oriented shear waves using a multi-directional filter; 3. using a robust two-dimensional (2D) shear wave speed calculation to reconstruct 2D shear elasticity maps from each filter direction; 4. compounding these 2D maps from different directions into a final map. An inclusion phantom study showed that the fast shear compounding method could achieve comparable performance to conventional shear compounding without sacrificing the imaging frame rate. A multi-inclusion phantom experiment showed that the fast shear compounding method could provide a full field-of-view (FOV), 2D, and compounded shear elasticity map with three types of inclusions clearly resolved and stiffness measurements showing excellent agreement to the nominal values. PMID:24613636
NASA Astrophysics Data System (ADS)
Kinkingnehun, Serge R. J.; du Boisgueheneuc, Foucaud; Golmard, Jean-Louis; Zhang, Sandy X.; Levy, Richard; Dubois, Bruno
2004-04-01
We have developed a new technique to analyze correlations between brain anatomy and its neurological functions. The technique is based on the anatomic MRI of patients with brain lesions who are administered neuropsychological tests. Brain lesions of the MRI scans are first manually segmented. The MRI volumes are then normalized to a reference map, using the segmented area as a mask. After normalization, the brain lesions of the MRI are segmented again in order to redefine the border of the lesions in the context of the normalized brain. Once the MRI is segmented, the patient's score on the neuropsychological test is assigned to each voxel in the lesioned area, while the rest of the voxels of the image are set to 0. Subsequently, the individual patient's MRI images are superimposed, and each voxel is reassigned the average score of the patients who have a lesion at that voxel. A threshold is applied to remove regions having less than three overlaps. This process leads to an anatomo-functional map that links brain areas to functional loss. Other maps can be created to aid in analyzing the functional maps, such as one that indicates the 95% confidence interval of the averaged scores for each area. This anatomo-clinical overlapping map (AnaCOM) method was used to obtain functional maps from patients with lesions in the superior frontal gyrus. By finding particular subregions more responsible for a particular deficit, this method can generate new hypotheses to be tested by conventional group methods.
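The voxel-wise score averaging and overlap thresholding at the heart of AnaCOM can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; `anacom_map` and its arguments are hypothetical names:

```python
import numpy as np

def anacom_map(lesion_masks, scores, min_overlap=3):
    """Average neuropsychological scores at each voxel across patients.

    lesion_masks: bool array, shape (n_patients, *volume_shape)
    scores: one behavioral score per patient
    Voxels covered by fewer than `min_overlap` lesions are set to NaN,
    mirroring the paper's removal of regions with < 3 overlaps.
    """
    masks = np.asarray(lesion_masks, dtype=float)
    scores = np.asarray(scores, dtype=float)
    overlap = masks.sum(axis=0)                       # lesions per voxel
    total = np.tensordot(scores, masks, axes=(0, 0))  # summed scores per voxel
    with np.errstate(invalid="ignore", divide="ignore"):
        mean_map = np.where(overlap >= min_overlap, total / overlap, np.nan)
    return mean_map

# Three patients, three voxels: only voxel 0 is lesioned in all three.
masks = np.array([[1, 1, 0], [1, 1, 0], [1, 0, 1]], dtype=bool)
scores = [10.0, 20.0, 30.0]
print(anacom_map(masks, scores))  # voxel 0: mean 20.0; others NaN (overlap < 3)
```

A companion map of per-voxel confidence intervals could be built the same way by accumulating squared scores alongside the sums.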
Development of optimized segmentation map in dual energy computed tomography
NASA Astrophysics Data System (ADS)
Yamakawa, Keisuke; Ueki, Hironori
2012-03-01
Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT, the difference between two attenuation coefficients acquired at two X-ray energies enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. First, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). The tube voltage can be taken into account when creating an optimized map, but unfortunately the tube current cannot. Second, the X-ray condition itself is not optimized. The condition can be set empirically, but then the truly optimal condition is not necessarily used. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors, with the distribution of the attenuation coefficient modeled by considering the tube current. In Method-2, the optimized condition is chosen to minimize segmentation errors over tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 in a phantom experiment under a fixed condition, and of Method-2 in a phantom experiment under different combinations calculated from the constant total exposure. When Method-1 was combined with Method-2, the segmentation error was reduced from 37.8% to 13.5%. These results demonstrate that our developed methods can achieve highly accurate segmentation while keeping the total exposure constant.
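As a rough illustration of an error-minimizing segmentation map, one can model each tissue class in the (low-kV, high-kV) attenuation plane as a Gaussian and assign each measurement to the most likely class; making the class covariances depend on the tube current (i.e., on the noise level) is what would tie such a map to the X-ray condition. This is a generic sketch of the idea, not the authors' Method-1:

```python
import numpy as np

def segment(mu_low, mu_high, means, covs):
    """Assign each (low-kV, high-kV) attenuation pair to the tissue
    class with the highest Gaussian log-likelihood. The class means
    and covariances are assumed inputs; in an optimized map the
    covariances would reflect dose-dependent noise."""
    x = np.stack([np.asarray(mu_low, float), np.asarray(mu_high, float)], axis=-1)
    scores = []
    for m, c in zip(means, covs):
        d = x - m
        inv = np.linalg.inv(c)
        maha = np.einsum('...i,ij,...j->...', d, inv, d)   # Mahalanobis distance
        scores.append(-0.5 * maha - 0.5 * np.log(np.linalg.det(c)))
    return np.argmax(scores, axis=0)

# Two hypothetical tissue classes in the attenuation plane.
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
labels = segment(np.array([0.1, 4.8]), np.array([-0.2, 5.2]), means, covs)
print(labels)  # [0 1]
```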
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
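The flat-floor geometry underlying inverse perspective mapping can be sketched with a pinhole model: a pixel ray is rotated by the camera pitch and intersected with the ground plane. The function below is a textbook illustration under assumed sign conventions (camera x right, y down, z forward), not the paper's implementation:

```python
import numpy as np

def pixel_to_ground(u, v, f, h, pitch, cx, cy):
    """Inverse perspective mapping under a flat-floor assumption:
    back-project pixel (u, v) to floor coordinates for a camera at
    height h (meters) with downward pitch (radians), focal length f
    and principal point (cx, cy) in pixels."""
    ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])  # ray in camera frame
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # pitch about x-axis
    d = R @ ray
    if d[1] <= 0:
        return None            # ray never reaches the floor
    t = h / d[1]               # floor plane lies h below the camera
    X, Z = t * d[0], t * d[2]
    return Z, X                # forward distance, lateral offset

# A pixel below the image center maps to a point ahead on the floor.
g = pixel_to_ground(100, 150, 100.0, 1.0, 0.0, 100.0, 100.0)
Z, X = g
print(float(Z), float(X))      # 2 m ahead, on the optical axis
```

Because every floor pixel maps to a metric ground location, obstacle cells flagged by the segmentation can be converted directly into a shortest robot-to-obstacle distance.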
NASA Technical Reports Server (NTRS)
Maruyasu, T. (Principal Investigator); Nakajima, I.
1977-01-01
The author has identified the following significant results. Using LANDSAT recognition results as the base map for field surveys or for retouching vegetation and land-use maps reduced cost, labor, and time to less than 10% of those of a conventional method. Correct and detailed vegetation maps were prepared by combined interpretation of repeated data from different seasons over warm and temperate forested areas.
Communication: Time- and space-sliced velocity map electron imaging
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Fan, Lin; Winney, Alexander H.; Li, Wen
2014-12-01
We develop a new method to achieve slice electron imaging using a conventional velocity map imaging apparatus with two additional components: a fast frame complementary metal-oxide semiconductor camera and a high-speed digitizer. The setup was previously shown to be capable of 3D detection and coincidence measurements of ions. Here, we show that when this method is applied to electron imaging, a time slice of 32 ps and a spatial slice of less than 1 mm thick can be achieved. Each slice directly extracts 3D velocity distributions of electrons and provides electron velocity distributions that are impossible or difficult to obtain with a standard 2D imaging electron detector.
High-definition X-ray fluorescence elemental mapping of paintings.
Howard, Daryl L; de Jonge, Martin D; Lau, Deborah; Hay, David; Varcoe-Cocks, Michael; Ryan, Chris G; Kirkham, Robin; Moorhead, Gareth; Paterson, David; Thurrowgood, David
2012-04-03
A historical self-portrait painted by Sir Arthur Streeton (1867-1943) has been studied with fast-scanning X-ray fluorescence microscopy using synchrotron radiation. One of the technique's unique strengths is the ability to reveal metal distributions in the pigments of underlying brushstrokes, thus providing information critical to the interpretation of a painting. We have applied the nondestructive technique with the event-mode Maia X-ray detector, which has the capability to record elemental maps at megapixels per hour with the full X-ray fluorescence spectrum collected per pixel. The painting poses a difficult challenge to conventional X-ray analysis, because it was completely obscured with heavy brushstrokes of highly X-ray absorptive lead white paint (2PbCO₃·Pb(OH)₂) by the artist, making it an excellent candidate for the application of the synchrotron-based technique. The 25 megapixel elemental maps were successfully observed through the lead white paint across the 200 × 300 mm² scan area. The sweeping brushstrokes of the lead white overpaint contributed significant detrimental structure to the elemental maps. A corrective procedure was devised to enhance the visualization of the elemental maps by using the elastic X-ray scatter as a proxy for the lead white overpaint. We foresee the technique applied to the most demanding of culturally significant artworks where conventional analytical methods are inadequate.
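The corrective idea, using the elastic-scatter map as a proxy for overpaint thickness, can be illustrated with a simple multiplicative normalization. This is an assumption for illustration (the published procedure may differ in detail): where the scatter signal indicates thicker lead white, the attenuated fluorescence of underlying elements is boosted proportionally.

```python
import numpy as np

def correct_with_scatter(elemental_map, scatter_map, eps=1e-9):
    """Boost an elemental map where the elastic X-ray scatter map
    (a proxy for the absorptive lead white overpaint thickness)
    indicates stronger attenuation. A simple multiplicative model
    is assumed here; the real correction may be more elaborate."""
    proxy = scatter_map / (scatter_map.mean() + eps)  # relative absorber thickness
    return elemental_map * np.maximum(proxy, eps)

# With a uniform scatter map the correction leaves the map unchanged.
elem = np.array([[2.0, 4.0], [6.0, 8.0]])
flat = np.ones((2, 2))
print(np.allclose(correct_with_scatter(elem, flat), elem))  # True
```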
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
Transmission imaging for integrated PET-MR systems
NASA Astrophysics Data System (ADS)
Bowen, Spencer L.; Fuin, Niccolò; Levine, Michael A.; Catana, Ciprian
2016-08-01
Attenuation correction for PET-MR systems continues to be a challenging problem, particularly for body regions outside the head. The simultaneous acquisition of transmission scan based μ-maps and MR images on integrated PET-MR systems may significantly increase the performance of and offer validation for new MR-based μ-map algorithms. For the Biograph mMR (Siemens Healthcare), however, use of conventional transmission schemes is not practical as the patient table and relatively small diameter scanner bore significantly restrict radioactive source motion and limit source placement. We propose a method for emission-free coincidence transmission imaging on the Biograph mMR. The intended application is not for routine subject imaging, but rather to improve and validate MR-based μ-map algorithms; particularly for patient implant and scanner hardware attenuation correction. In this study we optimized source geometry and assessed the method’s performance with Monte Carlo simulations and phantom scans. We utilized a Bayesian reconstruction algorithm, which directly generates μ-map estimates from multiple bed positions, combined with a robust scatter correction method. For simulations with a pelvis phantom a single torus produced peak noise equivalent count rates (34.8 kcps) dramatically larger than a full axial length ring (11.32 kcps) and conventional rotating source configurations. Bias in reconstructed μ-maps for head and pelvis simulations was ⩽4% for soft tissue and ⩽11% for bone ROIs. An implementation of the single torus source was filled with 18F-fluorodeoxyglucose and the proposed method quantified for several test cases alone or in comparison with CT-derived μ-maps. A volume average of 0.095 cm-1 was recorded for an experimental uniform cylinder phantom scan, while a bias of <2% was measured for the cortical bone equivalent insert of the multi-compartment phantom. 
Single torus μ-maps of a hip implant phantom showed significantly fewer artifacts and improved dynamic range compared with CT results, and differed greatly from them for highly attenuating materials in the case of the patient table. Use of a fixed torus geometry, combined with translation of the patient table to perform complete tomographic sampling, generated highly quantitative measured μ-maps and is expected to produce images with significantly higher SNR than competing fixed geometries at matched total acquisition time.
Mapping Urban Risk: Flood Hazards, Race, & Environmental Justice In New York
Maantay, Juliana; Maroko, Andrew
2009-01-01
This paper demonstrates the importance of disaggregating population data aggregated by census tracts or other units, for more realistic population distribution/location. A newly-developed mapping method, the Cadastral-based Expert Dasymetric System (CEDS), calculates population in hyper-heterogeneous urban areas better than traditional mapping techniques. A case study estimating population potentially impacted by flood hazard in New York City compares the impacted population determined by CEDS with that derived by centroid-containment method and filtered areal weighting interpolation. Compared to CEDS, 37 percent and 72 percent fewer people are estimated to be at risk from floods city-wide, using conventional areal weighting of census data, and centroid-containment selection, respectively. Undercounting of impacted population could have serious implications for emergency management and disaster planning. Ethnic/racial populations are also spatially disaggregated to determine any environmental justice impacts with flood risk. Minorities are disproportionately undercounted using traditional methods. Underestimating more vulnerable sub-populations impairs preparedness and relief efforts. PMID:20047020
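The contrast between conventional areal weighting and a dasymetric reallocation can be sketched numerically. The snippet below is a toy illustration of the idea behind CEDS, not the actual system; parcel weights stand in for cadastral ancillary data such as residential floor area:

```python
import numpy as np

def areal_weighting(tract_pop, overlap_fraction):
    """Conventional areal weighting: population assigned to the flood
    zone in proportion to the tract area it overlaps."""
    return tract_pop * overlap_fraction

def dasymetric(tract_pop, parcel_weights, parcel_in_zone):
    """Simplified dasymetric reallocation: redistribute the tract
    population to parcels by ancillary weights, then sum the parcels
    that fall inside the flood zone."""
    w = np.asarray(parcel_weights, float)
    parcel_pop = tract_pop * w / w.sum()
    return parcel_pop[np.asarray(parcel_in_zone, bool)].sum()

# Tract of 1000 people; the flood zone covers 50% of the area but
# contains the parcels with 80% of the residential floor area.
print(areal_weighting(1000, 0.5))                         # 500.0
print(dasymetric(1000, [8, 1, 1], [True, False, False]))  # 800.0
```

In this toy case areal weighting undercounts the at-risk population by 300 people, the same direction of error the paper reports city-wide.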
AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation
Yuan, Xin; Martínez-Ortega, José-Fernán; Fernández, José Antonio Sánchez; Eckert, Martina
2017-01-01
In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. Moreover, a detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is the enhancement of the accuracy and robustness of SLAM-based navigation for underwater robotics at low computational cost. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Here, the prediction and update stages (which also exist in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, the AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. In the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, avoiding the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method reliably achieves map revisiting and consistent map updating on loop closure. PMID:28531135
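The augmentation stage, adding a newly observed landmark to the state vector and covariance, can be sketched in the standard textbook form for an EKF-SLAM filter (this is the generic mechanism, not the authors' AEKF specifics; `g` is an assumed inverse observation model):

```python
import numpy as np

def augment_state(x, P, z, R, g, G_x, G_z):
    """Augment an EKF-SLAM state with a newly observed landmark.

    x, P : current state mean and covariance
    z, R : landmark observation and its noise covariance
    g    : inverse observation model, new landmark = g(x, z)
    G_x, G_z : Jacobians of g w.r.t. x and z
    """
    m = g(x, z)
    x_aug = np.concatenate([x, m])
    P_xm = P @ G_x.T                              # cross-covariance
    P_mm = G_x @ P @ G_x.T + G_z @ R @ G_z.T      # new landmark covariance
    P_aug = np.block([[P, P_xm], [P_xm.T, P_mm]])
    return x_aug, P_aug

# Toy example: planar robot at the origin observes a landmark at a
# relative offset of (2, 1); g simply adds the offset to the position.
x = np.zeros(3)                      # [px, py, heading]
P = 0.1 * np.eye(3)
z = np.array([2.0, 1.0])
R = 0.05 * np.eye(2)
G_x = np.array([[1., 0., 0.], [0., 1., 0.]])
G_z = np.eye(2)
x_aug, P_aug = augment_state(x, P, z, R, g=lambda x, z: x[:2] + z,
                             G_x=G_x, G_z=G_z)
print(x_aug)            # robot pose followed by the new landmark at (2, 1)
print(P_aug.shape)      # (5, 5)
```

Keeping the cross-covariance block `P_xm` is what lets later landmark observations correct the robot pose, and vice versa, on loop closure.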
Kim, Yusung; Tomé, Wolfgang A
2008-01-01
Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD (equivalent uniform dose) of 84 Gy to the entire PTV and selective boosting delivering an EUD of 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over isodose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when voxel-based iso-TCP maps were employed, selective boosting exhibited a more uniform tumor control probability map than could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans.
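A voxel-based iso-TCP map can be illustrated with a phenomenological logistic dose-response model; the `d50` and `gamma50` values below are placeholder assumptions, not the paper's fitted parameters:

```python
import numpy as np

def voxel_tcp(dose, d50=50.0, gamma50=2.0):
    """Voxel-wise tumor control probability from a dose map via a
    phenomenological logistic model:
        TCP = 1 / (1 + (D50 / D)^(4 * gamma50))
    where D50 is the dose giving 50% control and gamma50 the
    normalized slope at D50 (illustrative values)."""
    dose = np.asarray(dose, dtype=float)
    return 1.0 / (1.0 + (d50 / np.maximum(dose, 1e-9)) ** (4.0 * gamma50))

# A uniform isodose map need not mean a uniform TCP map: a
# radioresistant subvolume (higher d50) controls worse at the same dose.
dose_map = np.array([78.0, 78.0])
tcp_map = np.array([float(voxel_tcp(dose_map[0], d50=50.0)),
                    float(voxel_tcp(dose_map[1], d50=75.0))])
print(tcp_map[0] > tcp_map[1])  # True: the resistant voxel is a TCP cold spot
```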
Harden, Bradley J; Nichols, Scott R; Frueh, Dominique P
2014-09-24
Nuclear magnetic resonance (NMR) studies of larger proteins are hampered by difficulties in assigning NMR resonances. Human intervention is typically required to identify NMR signals in 3D spectra, and subsequent procedures depend on the accuracy of this so-called peak picking. We present a method that provides sequential connectivities through correlation maps constructed with covariance NMR, bypassing the need for preliminary peak picking. We introduce two novel techniques to minimize false correlations and merge the information from all original 3D spectra. First, we take spectral derivatives prior to performing covariance to emphasize coincident peak maxima. Second, we multiply covariance maps calculated with different 3D spectra to destroy erroneous sequential correlations. The maps are easy to use and can readily be generated from conventional triple-resonance experiments. Advantages of the method are demonstrated on a 37 kDa nonribosomal peptide synthetase domain subject to spectral overlap.
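The two techniques, taking derivatives before covariance and multiplying maps from different spectra, can be sketched as follows. This is a schematic NumPy illustration on synthetic planes, not the authors' processing code:

```python
import numpy as np

def derivative_covariance(plane_a, plane_b):
    """Correlation map between two planes that share their first axis
    (e.g., a common amide dimension of two 3D spectra): take the
    derivative along the shared axis first, so only coincident peak
    maxima produce strong correlations, then form the inner product."""
    da = np.gradient(plane_a, axis=0)
    db = np.gradient(plane_b, axis=0)
    return da.T @ db          # sum over the shared axis

def merged_map(maps):
    """Element-wise product of covariance maps from different
    experiments: a sequential correlation survives only if it is
    present in every contributing spectrum."""
    out = np.ones_like(maps[0])
    for m in maps:
        out = out * m
    return out

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 32))   # shared axis of length 64
B = rng.standard_normal((64, 48))
cov = derivative_covariance(A, B)
print(cov.shape)  # (32, 48)
```

The product step is what suppresses erroneous correlations from accidental overlap in any single spectrum.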
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. We developed a visual sensor system for robotic applications that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which require a large number of images with varying projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and one in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system was implemented on a robotic system and the proposed algorithms were applied. A series of experimental tests was performed for a variety of robot and environment configurations. The performance of the sensor system is discussed in detail.
Lucente, Giuseppe; Lam, Steven; Schneider, Heike; Picht, Thomas
2018-02-01
Non-invasive pre-surgical mapping of eloquent brain areas with navigated transcranial magnetic stimulation (nTMS) is a useful technique linked to improved surgical planning and patient outcomes. The stimulator output intensity and subsequent resting motor threshold (rMT) determination are based on the motor-evoked potential (MEP) elicited in the target muscle with an amplitude above a predetermined threshold of 50 μV. However, a subset of patients is unable to achieve complete relaxation in the target muscles, resulting in false positives that jeopardize mapping validity with conventional MEP determination protocols. Our aim is to explore the feasibility and reproducibility of a novel mapping approach that investigates how increasing the MEP amplitude threshold to 300 and 500 μV affects subsequent motor maps. Seven healthy subjects underwent motor mapping with nTMS. The rMT was calculated with the conventional methodology in conjunction with experimental 300- and 500-μV MEP amplitude thresholds. Motor mapping was performed at 105% of rMT stimulator intensity using the first dorsal interosseous (FDI) as the target muscle. Motor mapping was possible in all subjects with both the conventional and experimental setups. Motor area maps with a conventional 50-μV threshold showed poor correlation with 300-μV maps (α = 0.446, p < 0.001), but showed excellent consistency with 500-μV motor area maps (α = 0.974, p < 0.001). MEP latencies were significantly less variable (23 ms for 50 μV vs. 23.7 ms for 300 μV vs. 23.7 ms for 500 μV, p < 0.001). A slight but significant increase of the electric field (EF) value was found (EF: 60.8 V/m vs. 64.8 V/m vs. 66 V/m, p < 0.001). Our study demonstrates the feasibility of increasing the MEP detection threshold to 500 μV in rMT determination and motor area mapping with nTMS without losing precision.
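The rMT criterion referenced here can be sketched as a search for the lowest stimulator intensity at which a sufficient fraction of trials exceeds the MEP amplitude threshold. This is an illustrative simplification (protocols such as the common 5-of-10 counting rule vary in detail), with hypothetical function and argument names:

```python
def resting_motor_threshold(trials_by_intensity, amp_threshold=50.0,
                            min_fraction=0.5):
    """Lowest stimulator intensity at which at least `min_fraction`
    of trials evoke an MEP amplitude (in uV) above `amp_threshold`.

    trials_by_intensity: dict mapping intensity -> list of MEP
    amplitudes recorded at that intensity. Returns None when no
    intensity satisfies the criterion."""
    for intensity in sorted(trials_by_intensity):
        amps = trials_by_intensity[intensity]
        frac = sum(a >= amp_threshold for a in amps) / len(amps)
        if frac >= min_fraction:
            return intensity
    return None

# At 40% output only 1 of 3 trials exceeds 50 uV; at 50% output 3 of 4 do.
trials = {40: [10.0, 20.0, 60.0], 50: [60.0, 70.0, 20.0, 80.0]}
print(resting_motor_threshold(trials))  # -> 50
```

Raising `amp_threshold` to 300 or 500, as the paper proposes, simply makes spontaneous muscle activity less likely to count as a response.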
Improving mapping and SNP-calling performance in multiplexed targeted next-generation sequencing
2012-01-01
Background Compared to classical genotyping, targeted next-generation sequencing (tNGS) can be custom-designed to interrogate entire genomic regions of interest, in order to detect novel as well as known variants. To bring down the per-sample cost, one approach is to pool barcoded NGS libraries before sample enrichment. Still, we lack a complete understanding of how this multiplexed tNGS approach and the varying performance of the ever-evolving analytical tools can affect the quality of variant discovery. Therefore, we evaluated the impact of different software tools and analytical approaches on the discovery of single nucleotide polymorphisms (SNPs) in multiplexed tNGS data. To generate our own test model, we combined a sequence capture method with NGS in three experimental stages of increasing complexity (E. coli genes, multiplexed E. coli, and multiplexed HapMap BRCA1/2 regions). Results We successfully enriched barcoded NGS libraries instead of genomic DNA, achieving reproducible coverage profiles (Pearson correlation coefficients of up to 0.99) across multiplexed samples, with <10% strand bias. However, the SNP calling quality was substantially affected by the choice of tools and mapping strategy. With the aim of reducing computational requirements, we compared conventional whole-genome mapping and SNP-calling with a new, faster approach: target-region mapping with subsequent ‘read-backmapping’ to the whole genome to reduce the false detection rate. Consequently, we developed a combined mapping pipeline, which includes standard tools (BWA, SAMtools, etc.), and tested it on public HiSeq2000 exome data from the 1000 Genomes Project. Our pipeline saved 12 hours of run time per HiSeq2000 exome sample and detected ~5% more SNPs than the conventional whole-genome approach. This suggests that more potential novel SNPs may be discovered using both approaches than with the conventional approach alone. 
Conclusions We recommend applying our general ‘two-step’ mapping approach for more efficient SNP discovery in tNGS. Our study has also shown the benefit of computing inter-sample SNP-concordances and inspecting read alignments in order to attain more confident results. PMID:22913592
Numerical simulation of multi-dimensional NMR response in tight sandstone
NASA Astrophysics Data System (ADS)
Guo, Jiangfeng; Xie, Ranhong; Zou, Youlong; Ding, Yejiao
2016-06-01
Conventional logging methods have limitations in the evaluation of tight sandstone reservoirs. The multi-dimensional nuclear magnetic resonance (NMR) logging method has the advantage that it can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1) and diffusion coefficient (D). In this paper, we simulate NMR measurements of tight sandstone with different wettability and saturations by the random walk method and obtain the magnetization decays of Carr-Purcell-Meiboom-Gill pulse sequences with different wait times (TW) and echo spacings (TE) under a magnetic field gradient, resulting in D-T2-T1 maps by the multiple echo trains joint inversion method. We also study the effects of wettability, saturation, signal-to-noise ratio (SNR) of data and restricted diffusion on the D-T2-T1 maps in tight sandstone. The results show that with decreasing wetting fluid saturation, the surface relaxation rate of the wetting fluid gradually increases and the restricted diffusion phenomenon becomes more and more obvious, which leads to the wetting fluid signal moving along the direction of short relaxation and the direction of decreasing diffusion coefficient in the D-T2-T1 maps. Meanwhile, the non-wetting fluid position in the D-T2-T1 maps does not change with saturation variation. With decreasing SNR, the ability to identify water and oil signals based on NMR maps gradually decreases. The wetting fluid D-T1 and D-T2 correlations in NMR diffusion-relaxation maps of tight sandstone are obtained by expanding the wetting fluid restricted diffusion models, and are further applied to recognize the wetting fluid in simulated D-T2 maps and D-T1 maps.
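A minimal random-walk picture of surface relaxation, walkers diffusing in a pore and relaxing on wall collisions, can be sketched in one dimension. This is a toy illustration of the simulation principle only, not the multi-dimensional CPMG simulation used in the paper; all parameter values are arbitrary:

```python
import numpy as np

def random_walk_decay(n_walkers=2000, n_steps=200, pore=1.0,
                      step=0.02, kill_prob=0.5, seed=1):
    """1D random-walk sketch of surface relaxation in a pore: walkers
    diffuse between walls at 0 and `pore`; on a wall collision each is
    'relaxed' (removed) with probability `kill_prob`. The surviving
    fraction over time plays the role of the magnetization decay."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, pore, n_walkers)
    alive = np.ones(n_walkers, dtype=bool)
    decay = np.empty(n_steps)
    for t in range(n_steps):
        pos = pos + rng.choice([-step, step], n_walkers)
        hit = (pos < 0) | (pos > pore)
        relaxed = hit & (rng.uniform(size=n_walkers) < kill_prob)
        alive &= ~relaxed
        pos = np.clip(pos, 0, pore)           # reflect onto the wall
        decay[t] = alive.mean()
    return decay

d = random_walk_decay()
print(d[0] >= d[-1])   # True: the surviving fraction only decreases
```

Shrinking the pore (or the wetting-fluid film) raises the wall-collision rate, which is the mechanism behind the faster surface relaxation at low wetting-fluid saturation described above.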
Sci—Thur AM: YIS - 08: Constructing an Attenuation map for a PET/MR Breast coil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick, John C.; Imaging, Lawson Health Research Institute, Knoxville, TN; London Regional Cancer Program, Knoxville, TN
2014-08-15
In 2013, around 23,000 Canadian women and 200 Canadian men were diagnosed with breast cancer. An estimated 5100 women and 55 men died from the disease. Using the sensitivity of MRI with the selectivity of PET, PET/MRI combines anatomical and functional information within the same scan and could help with early detection in high-risk patients. MRI requires radiofrequency coils for transmitting energy and receiving signal, but the breast coil attenuates the PET signal. To correct for this PET attenuation, a 3-dimensional map of linear attenuation coefficients (μ-map) of the breast coil must be created and incorporated into the PET reconstruction process. Several approaches have been proposed for building hardware μ-maps, some of which include the use of conventional kVCT and dual energy CT. These methods can produce high resolution images based on the electron densities of materials that can be converted into μ-maps. However, imaging hardware containing metal components with photons in the kV range is susceptible to metal artifacts. These artifacts can compromise the accuracy of the resulting μ-map and PET reconstruction; therefore high-Z components should be removed. We propose a method for calculating μ-maps without removing coil components, based on megavoltage (MV) imaging with a linear accelerator that has been detuned for imaging at 1.0 MeV. Containers of known geometry filled with F18 were placed in the breast coil for imaging. A comparison between reconstructions based on the different μ-map construction methods was made. PET reconstructions with our method show a maximum of 6% difference from the existing kVCT-based reconstructions.
Advantage of spatial map ion imaging in the study of large molecule photodissociation
NASA Astrophysics Data System (ADS)
Lee, Chin; Lin, Yen-Cheng; Lee, Shih-Huang; Lee, Yin-Yu; Tseng, Chien-Ming; Lee, Yuan-Tseh; Ni, Chi-Kung
2017-07-01
The original ion imaging technique has low velocity resolution, and currently, photodissociation is mostly investigated using velocity map ion imaging. However, separating signals from the background (resulting from undissociated excited parent molecules) is difficult when velocity map ion imaging is used for the photodissociation of large molecules (number of atoms ≥ 10). In this study, we used the photodissociation of phenol at the S1 band origin as an example to demonstrate how our multimass ion imaging technique, based on modified spatial map ion imaging, can overcome this difficulty. The photofragment translational energy distribution obtained when multimass ion imaging was used differed considerably from that obtained when velocity map ion imaging and Rydberg atom tagging were used. We used conventional translational spectroscopy as a second method to further confirm the experimental results, and we conclude that data should be interpreted carefully when velocity map ion imaging or Rydberg atom tagging is used in the photodissociation of large molecules. Finally, we propose a modified velocity map ion imaging technique without the disadvantages of the current velocity map ion imaging technique.
Cubic map algebra functions for spatio-temporal analysis
Mennis, J.; Viger, R.; Tomlin, C.D.
2005-01-01
We propose an extension of map algebra to three dimensions for spatio-temporal data handling. This approach yields a new class of map algebra functions that we call "cube functions." Whereas conventional map algebra functions operate on data layers representing two-dimensional space, cube functions operate on data cubes representing two-dimensional space over a third-dimensional period of time. We describe the prototype implementation of a spatio-temporal data structure and selected cube function versions of conventional local, focal, and zonal map algebra functions. The utility of cube functions is demonstrated through a case study analyzing the spatio-temporal variability of remotely sensed, southeastern U.S. vegetation character over various land covers and during different El Niño/Southern Oscillation (ENSO) phases. Like conventional map algebra, the application of cube functions may demand significant data preprocessing when integrating diverse data sets, and is subject to limitations related to data storage and algorithm performance. Solutions to these issues include extending data compression and computing strategies for calculations on very large data volumes to spatio-temporal data handling.
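The focal class of cube functions described above can be sketched in a few lines; the following is purely illustrative (a mean over a cubic space-time neighborhood), not the prototype's data structure or function set:

```python
import numpy as np

def focal_cube_mean(cube, radius=1):
    """Focal cube function sketch: mean over a (2r+1)^3 space-time
    neighborhood. cube is indexed (time, row, col); cells near the
    edges use the shrunken neighborhood that fits inside the cube."""
    t, r, c = cube.shape
    out = np.empty_like(cube, dtype=float)
    for i in range(t):
        for j in range(r):
            for k in range(c):
                block = cube[max(i - radius, 0):i + radius + 1,
                             max(j - radius, 0):j + radius + 1,
                             max(k - radius, 0):k + radius + 1]
                out[i, j, k] = block.mean()
    return out
```

Local and zonal cube functions would follow the same pattern, operating per cell or per labeled zone of the cube rather than per neighborhood.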
Depth map generation using a single image sensor with phase masks.
Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki
2016-06-13
Conventional stereo matching systems generate a depth map using two or more digital imaging sensors. Such systems are difficult to use in small cameras because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
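Step (iii) builds on plain block matching. A minimal single-scale sum-of-absolute-differences (SAD) matcher conveys the idea (illustrative only; the paper's hierarchical scheme and sensor-specific details are not reproduced):

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=8):
    """Per-pixel disparity for a rectified pair by minimizing the
    SAD between blocks along the same scanline. Border pixels
    keep disparity 0 for simplicity."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            # Only consider shifts that keep the block inside the image.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

A hierarchical variant would run this on a coarse-to-fine image pyramid, using each coarse result to restrict the search range at the next level.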
Game Theory Based Trust Model for Cloud Environment
Gokulnath, K.; Uthariaraj, Rhymend
2015-01-01
The aim of this work is to propose a method to establish trust at bootload level in a cloud computing environment. This work proposes a game-theoretic approach for achieving trust at bootload level for both resources and users' perception. Nash equilibrium (NE) enhances the trust evaluation of first-time users and providers. It also restricts the service providers and the users from violating the service level agreement (SLA). Significantly, the cold start and whitewashing problems are addressed by the proposed method. In addition, appropriate mapping of cloud users' applications to cloud service providers for segregating trust levels is achieved as a part of the mapping. Thus, time complexity and space complexity are handled efficiently. Experiments were carried out to compare and contrast the performance of the conventional methods and the proposed method. Several metrics like execution time, accuracy, error identification, and undecidability of the resources were considered. PMID:26380365
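As background for the NE-based trust evaluation (the paper's payoff model is not specified here), a pure-strategy Nash equilibrium of a two-player game can be found by brute force; this is a generic sketch, not the paper's formulation:

```python
def pure_nash(payoff_a, payoff_b):
    """Return all (row, col) cells where neither player gains by a
    unilateral deviation: the row maximizes A's payoff within its
    column, and the column maximizes B's payoff within its row."""
    n, m = len(payoff_a), len(payoff_a[0])
    eqs = []
    for i in range(n):
        for j in range(m):
            best_row = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(n))
            best_col = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(m))
            if best_row and best_col:
                eqs.append((i, j))
    return eqs
```

In a trust setting, the payoffs would encode the gain or penalty each side gets for honoring or violating the SLA, and the equilibrium identifies the mutually stable behavior.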
Kim, Yusung; Tomé, Wolfgang A.
2010-01-01
Summary Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when employing voxel-based iso-TCP maps, selective boosting exhibited a more uniform tumor control probability map compared to what could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans. PMID:21151734
Song, Pengfei; Manduca, Armando; Zhao, Heng; Urban, Matthew W; Greenleaf, James F; Chen, Shigao
2014-06-01
A fast shear compounding method was developed in this study using only one shear wave push-detect cycle, such that the shear wave imaging frame rate is preserved and motion artifacts are minimized. The proposed method is composed of the following steps: 1. Applying a comb-push to produce multiple differently angled shear waves at different spatial locations simultaneously; 2. Decomposing the complex shear wave field into individual shear wave fields with differently oriented shear waves using a multi-directional filter; 3. Using a robust 2-D shear wave speed calculation to reconstruct 2-D shear elasticity maps from each filter direction; and 4. Compounding these 2-D maps from different directions into a final map. An inclusion phantom study showed that the fast shear compounding method could achieve comparable performance to conventional shear compounding without sacrificing the imaging frame rate. A multi-inclusion phantom experiment showed that the fast shear compounding method could provide a full field-of-view, 2-D and compounded shear elasticity map with three types of inclusions clearly resolved and stiffness measurements showing excellent agreement to the nominal values. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Communicating and visualizing data quality through Web Map Services
NASA Astrophysics Data System (ADS)
Roberts, Charles; Blower, Jon; Maso, Joan; Diaz, Daniel; Griffiths, Guy; Lewis, Jane
2014-05-01
The sharing and visualization of environmental data through OGC Web Map Services is becoming increasingly common. However, information about the quality of data is rarely presented. (In this presentation we consider mostly data uncertainty as a measure of quality, although we acknowledge that many other quality measures are relevant to the geoscience community.) In the context of the GeoViQua project (http://www.geoviqua.org) we have developed conventions and tools for using WMS to deliver data quality information. The "WMS-Q" convention describes how the WMS specification can be used to publish quality information at the level of datasets, variables and individual pixels (samples). WMS-Q requires no extensions to the WMS 1.3.0 specification, being entirely backward-compatible. (An earlier version of WMS-Q was published as OGC Engineering Report 12-160.) To complement the WMS-Q convention, we have also developed extensions to the OGC Symbology Encoding (SE) specification, enabling uncertain geoscience data to be portrayed using a variety of visualization techniques. These include contours, stippling, blackening, whitening, opacity, bivariate colour maps, confidence interval triangles and glyphs. There may also be more extensive applications of these methods beyond the visual representation of uncertainty. In this presentation we will briefly describe the scope of the WMS-Q and "extended SE" specifications and then demonstrate the innovations using open-source software based upon ncWMS (http://ncwms.sf.net). We apply the tools to a variety of datasets including Earth Observation data from the European Space Agency's Climate Change Initiative. The software allows uncertain raster data to be shared through Web Map Services, giving the user fine control over data visualization.
Integrating Databases with Maps: The Delivery of Cultural Data through TimeMap.
ERIC Educational Resources Information Center
Johnson, Ian
TimeMap is a unique integration of database management, metadata and interactive maps, designed to contextualise and deliver cultural data through maps. TimeMap extends conventional maps with the time dimension, creating and animating maps "on-the-fly"; delivers them as a kiosk application or embedded in Web pages; links flexibly to…
MRT letter: Guided filtering of image focus volume for 3D shape recovery of microscopic objects.
Mahmood, Muhammad Tariq
2014-12-01
In this letter, a shape from focus (SFF) method is proposed that utilizes guided image filtering to enhance the image focus volume efficiently. First, the image focus volume is computed using a conventional focus measure. Then each layer of the image focus volume is filtered using guided filtering. In this work, the all-in-focus image, which can be obtained from the initial focus volume, is used as the guidance image. Finally, an improved depth map is obtained from the filtered image focus volume by maximizing the focus measure along the optical axis. The proposed SFF method is efficient and provides better depth maps. The improved performance is highlighted by conducting several experiments using image sequences of simulated and real microscopic objects. The comparative analysis demonstrates the effectiveness of the proposed SFF method. © 2014 Wiley Periodicals, Inc.
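The pipeline above (focus measure per layer, per-layer smoothing, argmax along the optical axis) can be sketched as follows. A box filter stands in for the guided filter and the Laplacian-magnitude focus measure is an assumption, so this is illustrative only:

```python
import numpy as np

def box_filter(img, r=1):
    """Simple box smoothing; a stand-in for the guided filter
    used in the actual method."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = img[max(y - r, 0):y + r + 1,
                            max(x - r, 0):x + r + 1].mean()
    return out

def depth_from_focus(stack):
    """stack: (n_layers, h, w) images at varying focus settings.
    Returns the per-pixel index of maximum (smoothed) focus."""
    volume = []
    for img in stack:
        # Focus measure: magnitude of a discrete Laplacian
        # (periodic boundaries via np.roll, for brevity).
        lap = np.abs(4 * img
                     - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                     - np.roll(img, 1, 1) - np.roll(img, -1, 1))
        volume.append(box_filter(lap))
    return np.argmax(np.stack(volume), axis=0)
```

The actual method would replace `box_filter` with guided filtering steered by the all-in-focus image, which preserves depth edges that a box filter blurs.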
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”
2016-01-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
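Under the flat-floor assumption that IPM exploits, the ground distance to the point imaged at a given row follows directly from the camera height and pitch. The sketch below uses a simple pinhole model; the parameter names and geometry are assumptions for illustration, not the paper's notation:

```python
import math

def ipm_ground_distance(v, cy, f, cam_height, tilt):
    """Distance along the floor to the point imaged at row v, for a
    pinhole camera at height cam_height (m), pitched down by tilt
    (rad), with principal-point row cy and focal length f (pixels).
    Rows below the principal point (v > cy) see the nearby floor.
    Assumes a flat floor."""
    angle = tilt + math.atan2(v - cy, f)  # ray angle below horizontal
    if angle <= 0:
        raise ValueError("ray does not intersect the floor")
    return cam_height / math.tan(angle)
```

Applying this mapping to every pixel of the bottom region of interest yields a bird's-eye view in which floor texture keeps its metric scale while obstacles are distorted, which is the cue the segmentation exploits.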
NASA Technical Reports Server (NTRS)
Viljoen, R. P.
1974-01-01
A number of base metal finds have recently focussed attention on the North Western Cape Province of South Africa as an area of great potential mineral wealth. From the point of view of competitive mineral exploration it was essential that an insight into the regional geological controls of the base metal mineralization of the area be obtained as rapidly as possible. Conventional methods of producing a suitable regional geological map were considered to be too time-consuming and ERTS-1 imagery was consequently examined. This imagery has made a significant contribution in the compilation of a suitable map on which to base further mineral exploration programmes. The time involved in the compilation of maps of this nature was found to be only a fraction of the time necessary for the production of similar maps using other methods. ERTS imagery is therefore considered to be valuable in producing accurate regional maps in areas where little or no geological data are available, or in areas of poor access. Furthermore, these images have great potential for rapidly defining the regional extent of metallogenic provinces.
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
NASA Astrophysics Data System (ADS)
le Roux, J. P.
1992-12-01
Although sandstone grain-size maps can be a powerful means of reconstructing ancient depositional environments, they have rarely been used in the past. In this paper, two case studies are presented to illustrate the potential of this technique where other, more conventional methods may not be applicable. In the first case, a braided to anastomosing river system in the Triassic Molteno Formation of the South African Karoo Basin is examined. The weighted mean grain-size map clearly portrays the distribution of channels and islands and compares very well with other methods of reconstruction. The second case study examines an offshore shoal in the Permian Nowra Sandstone of the Sydney Basin in Australia. Here the grain-size map shows a north-northeasterly trend parallel to the orientation of the shoal, with a zone of coarsest grains displaced to the east of the shoal crest. This probably reflects the location of the breaker zone. As grain size is an important factor controlling the porosity and permeability of sediments, these maps can provide very useful information when exploring for epigenetic, stratabound ore deposits such as uranium, or planning production wells for oil and gas.
A review of MRI evaluation of demyelination in cuprizone murine model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krutenkova, E., E-mail: len--k@yandex.ru; Pan, E.; Khodanovich, M., E-mail: khodanovich@mail.tsu.ru
The cuprizone mouse model of non-autoimmune demyelination reproduces some phenomena of multiple sclerosis and is appropriate for validation and specification of new methods of non-invasive diagnostics. In this review, new data collected using new MRI methods are compared with one or more conventional MRI tools. The paper also reviews the validation of MRI approaches using histological or immunohistochemical methods; Luxol fast blue histological staining and myelin basic protein immunostaining are widespread. To improve the accuracy of non-invasive conventional MRI, multimodal scanning could be applied. The new quantitative MRI method of fast mapping of the macromolecular proton fraction is a reliable biomarker of myelin in the brain and can be used for research on demyelination in animals. To date, a validation of the MPF method on the CPZ mouse model of demyelination has not been performed, although this method is probably the best way to evaluate demyelination using MRI.
Development of a Coordinate Transformation method for direct georeferencing in map projection frames
NASA Astrophysics Data System (ADS)
Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao
2013-03-01
This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). In particular, the orientation angles in the map projection frame were derived from a sequence of coordinate transformations. The effectiveness of the orientation angle transformation was verified through comparison with DG results obtained from conventional methods (the Legat method and the POSPac method) using empirical data. Moreover, the CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and auxiliary point coordinates by coordinate transformation. These three methods were demonstrated and compared using empirical data. Empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames. The DG accuracy of the CT-method and the Legat method are at the same level. DG results of all three methods have systematic errors in height due to inconsistent length projection distortion in the vertical and horizontal components, and these errors can be significantly reduced using the EO height correction technique in Legat's approach. Similar to the results obtained with empirical data, the effectiveness of the CT-method was also proved with simulated data. (The POSPac method is presented in an Applanix POSPac software technical note (Hutton and Savina, 1997) and is implemented in the POSEO module of the POSPac software.)
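The core of such an angle transformation is composing rotation matrices and re-factoring the product in the photogrammetric omega/phi/kappa convention. The sketch below is illustrative: the z-y-x attitude sequence and the frame rotation `R_frame` are assumptions, not the paper's derivation:

```python
import math
import numpy as np

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def opk_from_matrix(R):
    """Factor R as Rx(omega) @ Ry(phi) @ Rz(kappa), a common
    photogrammetric convention for exterior orientation angles."""
    phi = math.asin(R[0, 2])
    omega = math.atan2(-R[1, 2], R[2, 2])
    kappa = math.atan2(-R[0, 1], R[0, 0])
    return omega, phi, kappa

def ct_method_angles(roll, pitch, heading, R_frame):
    """Sketch of the transformation idea: compose an attitude matrix
    built from GPS/INS angles (here a z-y-x sequence; the true
    sequence depends on the INS convention) with a local-level-to-
    map-frame rotation R_frame, then re-factor as omega/phi/kappa."""
    attitude = rot_z(heading) @ rot_y(pitch) @ rot_x(roll)
    return opk_from_matrix(R_frame @ attitude)
```

The essential point is that the re-factoring step runs on the already-composed matrix, so the angles fall out of the same coordinate-transformation chain used for the position elements.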
Zhao, Chenguang; Bolan, Patrick J; Royce, Melanie; Lakkadi, Navneeth; Eberhardt, Steven; Sillerud, Laurel; Lee, Sang-Joon; Posse, Stefan
2012-11-01
To quantitatively measure tCho levels in healthy breasts using Proton-Echo-Planar-Spectroscopic-Imaging (PEPSI). The two-dimensional mapping of tCho at 3 Tesla across an entire breast slice using PEPSI and a hybrid spectral quantification method based on LCModel fitting and integration of tCho using the fitted spectrum were developed. This method was validated in 19 healthy females and compared with single voxel spectroscopy (SVS) and with PRESS prelocalized conventional Magnetic Resonance Spectroscopic Imaging (MRSI) using identical voxel size (8 cc) and similar scan times (∼7 min). A tCho peak with a signal to noise ratio larger than 2 was detected in 10 subjects using both PEPSI and SVS. The average tCho concentration in these subjects was 0.45 ± 0.2 mmol/kg using PEPSI and 0.48 ± 0.3 mmol/kg using SVS. Comparable results were obtained in two subjects using conventional MRSI. High lipid content in the spectra of nine tCho negative subjects was associated with spectral line broadening of more than 26 Hz, which made tCho detection impossible. Conventional MRSI with PRESS prelocalization in glandular tissue in two of these subjects yielded tCho concentrations comparable to PEPSI. The detection sensitivity of PEPSI is comparable to SVS and conventional PRESS-MRSI. PEPSI can be potentially used in the evaluation of tCho in breast cancer. A tCho threshold concentration value of ∼0.7 mmol/kg might be used to differentiate between cancerous and healthy (or benign) breast tissues based on this work and previous studies. Copyright © 2012 Wiley Periodicals, Inc.
A new vegetation map of the western Seward Peninsula, Alaska, based on ERTS-1 imagery
NASA Technical Reports Server (NTRS)
Anderson, J. H.; Belon, A. E. (Principal Investigator)
1973-01-01
The author has identified the following significant results. A reconstituted, simulated color-infrared ERTS-1 image covering the western Seward Peninsula was prepared, and it is used for identifying and mapping vegetation types by direct visual examination. The image, NASA ERTS E-1009-22095, was obtained at approximately 1110 hours, 165 degrees WMT, on August 1, 1972. Seven major colors are identified. Four of these are matched with units on existing vegetation maps: bright red - shrub thicket; light gray-red - upland tundra; medium gray-red - coastal wet tundra; gray - alpine barrens. The three colors having no map equivalents are tentatively interpreted as follows: pink - grassland tundra; dark gray-red - burn scars; light orange-red - senescent vegetation. A vegetation map, drawn by tracing on an acetate overlay of the image, is presented. Significantly more information is depicted than on existing maps with regard to vegetation types and their areal distribution. Furthermore, the preparation of the new map from ERTS-1 imagery required little time relative to conventional methods, particularly given the extent of areal coverage.
RAY-RAMSES: a code for ray tracing on the fly in N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barreira, Alexandre; Llinares, Claudio; Bose, Sownak
2016-05-01
We present a ray tracing code to compute integrated cosmological observables on the fly in AMR N-body simulations. Unlike conventional ray tracing techniques, our code takes full advantage of the time and spatial resolution attained by the N-body simulation by computing the integrals along the line of sight on a cell-by-cell basis through the AMR simulation grid. Moreover, since it runs on the fly in the N-body run, our code can produce maps of the desired observables without storing large (or any) amounts of data for post-processing. We implemented our routines in the RAMSES N-body code and tested the implementation using an example weak lensing simulation. We analyse basic statistics of lensing convergence maps and find good agreement with semi-analytical methods. The ray tracing methodology presented here can be used in several cosmological analyses, such as Sunyaev-Zel'dovich and integrated Sachs-Wolfe effect studies, as well as modified gravity. Our code can also be used in cross-checks of the more conventional methods, which can be important in tests of theory systematics in preparation for upcoming large scale structure surveys.
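The line-of-sight integration can be illustrated with a fixed-step midpoint rule on a uniform 2-D grid. Note this is a deliberate simplification: the actual code accumulates exact path lengths per AMR cell at the simulation's native resolution, which this sketch only approximates:

```python
import numpy as np

def los_integral(grid, start, direction, length,
                 cell_size=1.0, n_steps=1000):
    """Midpoint-rule line-of-sight integral of a 2-D field:
    sum of the field value at each sample times the step length.
    Samples that fall outside the grid contribute nothing."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    ds = length / n_steps
    total = 0.0
    for i in range(n_steps):
        p = np.asarray(start, float) + (i + 0.5) * ds * direction
        ix = int(p[0] // cell_size)
        iy = int(p[1] // cell_size)
        if 0 <= ix < grid.shape[0] and 0 <= iy < grid.shape[1]:
            total += grid[ix, iy] * ds
    return total
```

Replacing the fixed step with exact ray-cell intersection lengths, and evaluating during the simulation rather than on stored snapshots, recovers the on-the-fly cell-by-cell scheme the abstract describes.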
Cartography of asteroids and comet nuclei from low resolution data
NASA Technical Reports Server (NTRS)
Stooke, Philip J.
1992-01-01
High resolution images of non-spherical objects, such as Viking images of Phobos and the anticipated Galileo images of Gaspra, lend themselves to conventional planetary cartographic procedures: control network analysis, stereophotogrammetry, image mosaicking in 2D or 3D, and airbrush mapping. There remains the problem of a suitable map projection for bodies which are extremely elongated or irregular in shape. Many bodies will soon be seen at lower resolution (5-30 pixels across the disk) in images from speckle interferometry, the Hubble Space Telescope, ground-based radar, distant spacecraft encounters, and closer images degraded by smear. Different data with similar effective resolutions are available from stellar occultations, radar or lightcurve convex hulls, lightcurve modeling of albedo variations, and cometary jet modeling. With such low resolution, conventional methods of shape determination will be less useful or will fail altogether, leaving limb and terminator topography as the principal sources of topographic information. A method for shape determination based on limb and terminator topography was developed. It has been applied to the nucleus of Comet Halley and the jovian satellite Amalthea. The Amalthea results are described to give an example of the cartographic possibilities and problems of anticipated data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y; Sharp, G
2014-06-15
Purpose: Gain calibration for X-ray imaging systems with movable flat panel detectors (FPD) and intrinsic crosshairs is a challenge due to the geometry dependence of the heel effect and crosshair artifact. This study aims to develop a gain correction method for such systems by implementing the multi-acquisition gain image correction (MAGIC) technique. Methods: Raw flat-field images containing crosshair shadows and heel effect were acquired in 4 different FPD positions with fixed exposure parameters. The crosshair region was automatically detected and substituted with interpolated values from nearby exposed regions, generating a conventional single-image gain-map for each FPD position. Large kernel-based correction was applied to these images to correct the heel effect. A mask filter was used to invalidate the original crosshair regions previously filled with the interpolated values. A final, seamless gain-map was created from the processed images by either the sequential filling (SF) or selective averaging (SA) techniques developed in this study. Quantitative evaluation was performed based on the detective quantum efficiency improvement factor (DQEIF) for gain-corrected images using the conventional and proposed techniques. Results: Qualitatively, the MAGIC technique was found to be more effective in eliminating crosshair artifacts compared to the conventional single-image method. The mean DQEIF values over the range of frequencies from 0.5 to 3.5 mm⁻¹ were 1.09±0.06, 2.46±0.32, and 3.34±0.36 in the crosshair-artifact region and 2.35±0.31, 2.33±0.31, and 3.09±0.34 in the normal region, for the conventional, MAGIC-SF, and MAGIC-SA techniques, respectively. Conclusion: The introduced MAGIC technique is appropriate for gain calibration of an imaging system associated with a moving FPD and an intrinsic crosshair.
The technique showed advantages over a conventional single-image-based technique by successfully reducing residual crosshair artifacts and yielding higher image quality with respect to DQE.
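The selective averaging (SA) step can be sketched as a per-pixel average over only those acquisitions in which the pixel lies outside the crosshair shadow. This is an illustrative reading of the technique, not the authors' implementation:

```python
import numpy as np

def selective_average(maps, masks):
    """Combine gain maps from several detector positions: at each
    pixel, average only the acquisitions whose mask marks the
    pixel as valid (unshadowed). maps and masks are lists of
    equally shaped 2-D arrays; masks are 1 = valid, 0 = shadowed."""
    maps = np.stack(maps).astype(float)
    masks = np.stack(masks).astype(float)
    counts = masks.sum(axis=0)
    if np.any(counts == 0):
        raise ValueError("some pixel is shadowed in every acquisition")
    return (maps * masks).sum(axis=0) / counts
```

Sequential filling (SF) would instead take each pixel from the first acquisition in which it is valid; averaging all valid acquisitions additionally suppresses noise, which is consistent with SA's higher reported DQEIF.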
NASA Astrophysics Data System (ADS)
Federico, Alejandro; Kaufmann, Guillermo H.
2004-08-01
We evaluate the application of the Wigner-Ville distribution (WVD) to measure phase gradient maps in digital speckle pattern interferometry (DSPI), when the generated correlation fringes present phase discontinuities. The performance of the WVD method is evaluated using computer-simulated fringes. The influence of the filtering process to smooth DSPI fringes and additional drawbacks that emerge when this method is applied are discussed. A comparison with the conventional method based on the continuous wavelet transform in the stationary phase approximation is also presented.
Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hojin; Becker, Stephen; Lee, Rena
2013-07-15
Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in the fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution for the sparse signal recovery problem, yet practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied to each element is inversely related to the magnitude of the corresponding element, which is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and our proposed reweighted TV min. techniques, implemented by a large-scale L1-solver (template for first-order conic solver), for five patient clinical data sets. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between the plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV based techniques.
To attain a given CN and dose sparing to the critical organs for 5 clinical cases, the proposed method reduces the number of segments by 10-15 and 30-35, relative to TV min. and quadratic min. based plans, while MIs decrease by about 20%-30% and 40%-60% over the plans by the two existing techniques, respectively. Under such conditions, the total treatment time of the plans obtained from our proposed method can be reduced by 12-30 s and 30-80 s, mainly due to greatly shorter multileaf collimator (MLC) traveling time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution for simplifying the fluence-map variations in IMRT inverse planning. It improves the delivery efficiency by reducing the number of segments and the treatment time, while maintaining the plan quality in terms of target conformity and critical structure sparing.
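The reweighting idea is easiest to see on a plain denoising problem, where each weighted-L1 subproblem has a closed-form soft-threshold solution. This is a toy sketch: the paper applies the reweighting to the total variation of the fluence-map inside a large-scale solver, not to individual entries as here:

```python
import numpy as np

def reweighted_l1_denoise(y, lam=1.0, eps=0.1, n_iter=5):
    """Iteratively reweighted L1 shrinkage for
        min_x 0.5 * ||x - y||^2 + lam * sum_i w_i * |x_i|,
    with weights w_i = 1 / (|x_i| + eps) recomputed each pass so
    that small entries are penalized harder (approximating the
    L0 penalty). Each subproblem is solved exactly by weighted
    soft-thresholding."""
    x = y.copy().astype(float)
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)
        x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)
    return x
```

The effect mirrors the abstract's description: large coefficients receive small weights and are barely penalized, while small coefficients receive large weights and are driven exactly to zero, sharpening sparsity beyond plain L1.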
Shermeyer, Jacob S.; Haack, Barry N.
2015-01-01
Two forestry-change detection methods are described, compared, and contrasted for estimating deforestation and growth in threatened forests in southern Peru from 2000 to 2010. The methods used in this study rely on freely available data, including atmospherically corrected Landsat 5 Thematic Mapper and Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation continuous fields (VCF). The two methods include a conventional supervised signature extraction method and a unique self-calibrating method called MODIS VCF guided forest/nonforest (FNF) masking. The process chain for each of these methods includes a threshold classification of MODIS VCF, training data or signature extraction, signature evaluation, k-nearest neighbor classification, analyst-guided reclassification, and postclassification image differencing to generate forest change maps. Comparisons of all methods were based on an accuracy assessment using 500 validation pixels. Results of this accuracy assessment indicate that FNF masking had a 5% higher overall accuracy and was superior to conventional supervised classification when estimating forest change. Both methods succeeded in classifying persistently forested and nonforested areas, and both had limitations when classifying forest change.
Multi-Compartment T2 Relaxometry Using a Spatially Constrained Multi-Gaussian Model
Raj, Ashish; Pandya, Sneha; Shen, Xiaobo; LoCastro, Eve; Nguyen, Thanh D.; Gauthier, Susan A.
2014-01-01
The brain's myelin content can be mapped by T2 relaxometry, which resolves multiple differentially relaxing T2 pools from multi-echo MRI. Unfortunately, the conventional fitting procedure is a hard, numerically ill-posed problem. Consequently, the T2 distributions and myelin maps are very sensitive to noise and are frequently difficult to interpret diagnostically. Although regularization can improve stability, it is generally not adequate, particularly at relatively low signal-to-noise ratios (SNR) of around 100-200. The purpose of this study was to obtain a fitting algorithm able to overcome these difficulties and generate usable myelin maps from noisy acquisitions in a realistic scan time. To this end, we restrict the T2 distribution to only 3 distinct resolvable tissue compartments, modeled as Gaussians: myelin water, intra/extra-cellular water and a slowly relaxing cerebrospinal fluid compartment. We also impose the spatial smoothness expectation that volume fractions and T2 relaxation times of tissue compartments change smoothly within coherent brain regions. The method greatly improves robustness to noise, reduces spatial variations, improves the definition of white matter fibers, and enhances the detection of demyelinating lesions. Owing to its efficient design, the additional spatial term does not increase processing time. The proposed method was applied to fast spiral acquisitions on which conventional fitting gives uninterpretable results. While these fast acquisitions suffer from noise and inhomogeneity artifacts, our preliminary results indicate the potential of spatially constrained 3-pool T2 relaxometry. PMID:24896833
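The three-compartment forward model underlying the abstract above is a small multi-exponential decay. The sketch below shows that model and the derived myelin water fraction; the T2 times, volume fractions, and echo spacing are hypothetical round numbers, and the Gaussian width of each pool is collapsed to a delta for simplicity.

```python
import numpy as np

# Hypothetical compartment T2 times (ms) and volume fractions for the three
# pools in the constrained model: myelin water, intra/extra-cellular water, CSF.
t2 = np.array([20.0, 80.0, 2000.0])
frac = np.array([0.15, 0.80, 0.05])

def multi_echo_signal(te, t2, frac, s0=1.0):
    """Forward model: multi-exponential T2 decay sampled at echo times `te`."""
    te = np.asarray(te, dtype=float)[:, None]      # (n_echoes, 1)
    return s0 * (frac * np.exp(-te / t2)).sum(axis=1)

te = np.arange(10.0, 330.0, 10.0)                  # 32 echoes, 10 ms spacing
sig = multi_echo_signal(te, t2, frac)

mwf = frac[0] / frac.sum()                         # myelin water fraction
```

Fitting inverts this model per voxel; the paper's spatial constraint then asks neighbouring voxels to agree on `t2` and `frac` within coherent regions.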
NASA Astrophysics Data System (ADS)
Jafarzadegan, K.; Merwade, V.; Saksena, S.
2017-12-01
Using conventional hydrodynamic methods for floodplain mapping in large-scale and data-scarce regions is problematic because of the high cost of these methods, the lack of reliable data, and uncertainty propagation. In this study a new framework is proposed to generate 100-year floodplains for any gauged or ungauged watershed across the United States (U.S.). This framework uses Flood Insurance Rate Maps (FIRMs) together with topographic, climatic and land use data, all freely available for the entire U.S., for floodplain mapping. The framework consists of three components: a Random Forest classifier for watershed classification, a Probabilistic Threshold Binary Classifier (PTBC) for generating the floodplains, and a lookup table linking the Random Forest classifier to the PTBC. The effectiveness and reliability of the proposed framework are tested on 145 watersheds from various geographical locations in the U.S. The validation results show that around 80 percent of the watersheds are predicted well, 14 percent have an acceptable fit, and fewer than 5 percent are predicted poorly compared with FIRMs. Another advantage of this framework is its ability to generate floodplains for all small rivers and tributaries. Owing to its high accuracy and efficiency, the framework can be used as a preliminary decision-making tool to generate 100-year floodplain maps for data-scarce regions and for all tributaries where hydrodynamic methods are difficult to apply.
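The classifier-plus-threshold design above can be reduced to a small sketch: a watershed class (here taken as given, standing in for the Random Forest output) indexes a lookup table of thresholds, and a terrain index is thresholded into a binary floodplain mask. The use of height above nearest drainage (HAND) as the thresholded index, the class names, and the threshold values are all illustrative assumptions.

```python
import numpy as np

def floodplain_mask(hand, threshold):
    """Illustrative reduction of a threshold binary classifier: a cell is in
    the 100-year floodplain when its height above nearest drainage (HAND,
    metres) is at or below the class-specific threshold."""
    return (np.asarray(hand, dtype=float) <= threshold).astype(np.uint8)

# Hypothetical lookup table linking a watershed class (which would come from
# a Random Forest over climatic/topographic/land-use attributes) to a threshold.
lookup = {"flat_humid": 12.0, "steep_arid": 4.0}

hand = np.array([[1.5, 6.0], [10.0, 25.0]])         # toy HAND raster (m)
mask = floodplain_mask(hand, lookup["steep_arid"])
# mask: [[1, 0], [0, 0]] -- only the lowest-lying cell floods in a steep class
```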
A Bone-Thickness Map as a Guide for Bone-Anchored Port Implantation Surgery in the Temporal Bone
Guignard, Jérémie; Arnold, Andreas; Weisstanner, Christian; Caversaccio, Marco; Stieger, Christof
2013-01-01
The bone-anchored port (BAP) is an investigational implant intended to be fixed on the temporal bone and provide vascular access. A number of implants take advantage of the stability and available room in the temporal bone; these devices range from implantable hearing aids to percutaneous ports. During temporal bone surgery, injuring critical anatomical structures must be avoided. Several methods for computer-assisted temporal bone surgery have been reported, but they typically add an additional procedure for the patient. We propose a surgical guide in the form of a bone-thickness map displaying anatomical landmarks that can be used for planning the surgery and for the intra-operative decision on the implant's location. The retro-auricular region of the temporal and parietal bone was marked on cone-beam computed tomography scans, and three-dimensional surfaces displaying the bone thickness were created from this space. We compared this method using a thickness map (n = 10) with conventional surgery without assistance (n = 5) in isolated human anatomical whole-head specimens. The use of the thickness map reduced the rate of dura mater exposure from 100% to 20% and eliminated sigmoid sinus exposures. The study shows that a bone-thickness map can be used as a low-complexity method to improve patient safety during BAP surgery in the temporal bone. PMID:28788390
NASA Astrophysics Data System (ADS)
Zagouras, Athanassios; Argiriou, Athanassios A.; Flocas, Helena A.; Economou, George; Fotopoulos, Spiros
2012-11-01
Classification of weather maps at various isobaric levels has for many years been a methodological tool in meteorology, climatology, atmospheric pollution studies and other fields. Initially the classification was performed manually; the criteria used by the analyst are features of the isobars or isopleths of geopotential height, depending on the type of map to be classified. Although manual classifications integrate the perceptual experience and other unquantifiable expertise of the meteorologists involved, they are typically subjective and time consuming. In recent years, several automated, so-called objective, approaches to atmospheric circulation classification have therefore been proposed. In this paper a new method for the atmospheric circulation classification of isobaric maps is presented, based on graph theory. It starts with an intelligent prototype selection using an over-partitioning mode of the fuzzy c-means (FCM) algorithm, proceeds to a graph formulation for the entire dataset, and produces the clusters with the dominant-sets clustering method. Graph-based representations allow a more efficient treatment of spatially correlated data than the classical Euclidean-space representations used in conventional classification methods. The method has been applied to the classification of 850 hPa atmospheric circulation over the Eastern Mediterranean. The automated method is evaluated with statistical indexes; results indicate that the classification is adequately comparable with other state-of-the-art automated map classification methods for a variable number of clusters.
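The dominant-sets step named above has a compact standard formulation: replicator dynamics on a pairwise similarity matrix converge to a cluster's characteristic vector, whose nonzero support identifies the cluster members. Below is a minimal sketch on a toy similarity matrix (not circulation data); the matrix values and convergence settings are assumptions for illustration.

```python
import numpy as np

def dominant_set(A, n_iter=200):
    """One round of dominant-set extraction (Pavan & Pelillo): iterate the
    replicator dynamics x_i <- x_i * (A x)_i / (x^T A x) on similarity
    matrix A; the support of the fixed point is the dominant cluster."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)           # start from the barycenter
    for _ in range(n_iter):
        x = x * (A @ x)
        x /= x.sum()                  # renormalise onto the simplex
    return x

# Toy symmetric similarity matrix: items 0-2 mutually similar, item 3 an outlier.
A = np.array([[0.0, 0.9, 0.8, 0.1],
              [0.9, 0.0, 0.85, 0.1],
              [0.8, 0.85, 0.0, 0.1],
              [0.1, 0.1, 0.1, 0.0]])
x = dominant_set(A)
members = np.where(x > 1e-3)[0]       # members = [0, 1, 2]; outlier 3 drops out
```

In the paper's setting the nodes would be FCM-selected prototype maps and the edge weights their similarities; clusters are peeled off by repeating the extraction on the remaining nodes.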
NASA Technical Reports Server (NTRS)
Hoffer, R. M.
1975-01-01
Skylab data were obtained over a mountainous test site containing a complex association of cover types and rugged topography. The application of computer-aided analysis techniques to the multispectral scanner data produced a number of significant results. Techniques were developed to digitally overlay topographic data (elevation, slope, and aspect) onto the S-192 MSS data to provide a method for increasing the effectiveness and accuracy of computer-aided analysis techniques for cover type mapping. The S-192 MSS data were analyzed using computer techniques developed at Laboratory for Applications of Remote Sensing (LARS), Purdue University. Land use maps, forest cover type maps, snow cover maps, and area tabulations were obtained and evaluated. These results compared very well with information obtained by conventional techniques. Analysis of the spectral characteristics of Skylab data has conclusively proven the value of the middle infrared portion of the spectrum (about 1.3-3.0 micrometers), a wavelength region not previously available in multispectral satellite data.
Multispectral imaging approach for simplified non-invasive in-vivo evaluation of gingival erythema
NASA Astrophysics Data System (ADS)
Eckhard, Timo; Valero, Eva M.; Nieves, Juan L.; Gallegos-Rueda, José M.; Mesa, Francisco
2012-03-01
Erythema is a common visual sign of gingivitis. In this work, a new, simple and low-cost image capture and analysis method for erythema assessment is proposed. The method is based on digital still images of the gingivae and is applied on a pixel-by-pixel basis. Multispectral images are acquired with a conventional digital camera and multiplexed LED illumination panels with peak wavelengths of 460 nm and 630 nm. An automatic workflow segments teeth from gingiva regions in the images and creates a map of local blood oxygenation levels, which relates to the presence of erythema. The map is computed from the ratio of the two spectral images. An advantage of the proposed approach is that the whole process is easy to manage by dental health care professionals in a clinical environment.
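The core computation described above is a pixel-wise ratio of the two narrow-band images. A minimal sketch follows; treating a higher 630 nm/460 nm ratio directly as an erythema proxy is an illustrative simplification, not the authors' calibrated oxygenation mapping, and the epsilon guard is an added assumption to avoid division by zero.

```python
import numpy as np

def erythema_map(img_630, img_460, eps=1e-6):
    """Pixel-by-pixel ratio of the 630 nm (red) to the 460 nm (blue) image;
    relatively higher red reflectance is taken here as a blood-related
    erythema proxy (illustrative only)."""
    return np.asarray(img_630, float) / (np.asarray(img_460, float) + eps)

# Toy 1x2 gingiva patches: left pixel strongly red-dominant, right balanced.
img_630 = np.array([[0.8, 0.4]])
img_460 = np.array([[0.2, 0.4]])
m = erythema_map(img_630, img_460)
# m is approximately [[4.0, 1.0]]
```

In the described workflow this map would only be computed inside the gingiva region left after the automatic tooth segmentation.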
Stiller, Wolfram; Skornitzke, Stephan; Fritz, Franziska; Klauss, Miriam; Hansen, Jens; Pahn, Gregor; Grenacher, Lars; Kauczor, Hans-Ulrich
2015-10-01
Study objectives were the quantitative evaluation of whether conventional abdominal computed tomography (CT) perfusion measurements mathematically correlate with quantitative single-acquisition dual-energy CT (DECT) iodine concentration maps, the determination of the optimum time of acquisition for achieving maximum correlation, and the estimation of the potential for radiation exposure reduction when replacing conventional CT perfusion by single-acquisition DECT iodine concentration maps. Dual-energy CT perfusion sequences were dynamically acquired over 51 seconds (34 acquisitions every 1.5 seconds) in 24 patients with histologically verified pancreatic carcinoma using dual-source DECT at tube potentials of 80 kVp and 140 kVp. Using software developed in-house, perfusion maps were calculated from 80-kVp image series using the maximum slope model after deformable motion correction. In addition, quantitative iodine maps were calculated for each of the 34 DECT acquisitions per patient. Within a manual segmentation of the pancreas, voxel-by-voxel correlation between the perfusion map and each of the iodine maps was calculated for each patient to determine the optimum time of acquisition topt defined as the acquisition time of the iodine map with the highest correlation coefficient. Subsequently, regions of interest were placed inside the tumor and inside healthy pancreatic tissue, and correlation between mean perfusion values and mean iodine concentrations within these regions of interest at topt was calculated for the patient sample. The mean (SD) topt was 31.7 (5.4) seconds after the start of contrast agent injection. The mean (SD) perfusion values for healthy pancreatic and tumor tissues were 67.8 (26.7) mL per 100 mL/min and 43.7 (32.2) mL per 100 mL/min, respectively. At topt, the mean (SD) iodine concentrations were 2.07 (0.71) mg/mL in healthy pancreatic and 1.69 (0.98) mg/mL in tumor tissue, respectively. 
Overall, the correlation between perfusion values and iodine concentrations was high (0.77), with correlation of 0.89 in tumor and of 0.56 in healthy pancreatic tissue at topt. Comparing radiation exposure associated with a single DECT acquisition at topt (0.18 mSv) to that of an 80 kVp CT perfusion sequence (2.96 mSv) indicates that an average reduction of Deff by 94% could be achieved by replacing conventional CT perfusion with a single-acquisition DECT iodine concentration map. Quantitative iodine concentration maps obtained with DECT correlate well with conventional abdominal CT perfusion measurements, suggesting that quantitative iodine maps calculated from a single DECT acquisition at an organ-specific and patient-specific optimum time of acquisition might be able to replace conventional abdominal CT perfusion measurements if the time of acquisition is carefully calibrated. This could lead to large reductions of radiation exposure to the patients while offering quantitative perfusion data for diagnosis.
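The search for the optimum acquisition time topt described above is, computationally, an argmax over per-acquisition voxel-wise correlation coefficients. The sketch below uses synthetic vectors in place of segmented pancreas voxels; the data, the 50-voxel size, and the noise model are assumptions for illustration.

```python
import numpy as np

def optimal_acquisition(perfusion, iodine_series):
    """Return the index of the iodine map that best correlates, voxel by
    voxel, with the perfusion map. `iodine_series` is (n_acq, n_voxels)."""
    p = perfusion - perfusion.mean()
    corrs = []
    for iod in iodine_series:                      # one iodine map per acquisition
        q = iod - iod.mean()
        corrs.append((p @ q) / np.sqrt((p @ p) * (q @ q)))
    corrs = np.array(corrs)
    return int(corrs.argmax()), corrs

rng = np.random.default_rng(0)
perf = rng.random(50)                              # stand-in perfusion map
series = np.stack([rng.random(50),                 # early: uncorrelated
                   2.0 * perf + 0.05 * rng.random(50),  # near topt: correlated
                   rng.random(50)])                # late: uncorrelated
t_opt, corrs = optimal_acquisition(perf, series)
# t_opt = 1, the acquisition built to track the perfusion map
```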
Entz, Michael; King, D Ryan; Poelzing, Steven
2017-12-01
With the sudden increase in affordable manufacturing technologies, the relationship between experimentalists and the design process for laboratory equipment is rapidly changing. While experimentalists still depend on engineers and manufacturers for precision electrical, mechanical, and optical equipment, in-house manufacturing of other laboratory equipment with less precise design requirements has become a realistic option, thanks to the decreasing cost and increasing functionality of desktop three-dimensional (3-D) printers and 3-D design software. With traditional manufacturing methods, iterative design processes are expensive and time consuming, and making more than one copy of a custom piece of equipment is prohibitively costly. Here, we provide an overview of the design of a tissue bath and stabilizer for a customizable, suspended, whole-heart optical mapping apparatus that can be produced significantly faster and at lower cost than with conventional manufacturing techniques. This was accomplished through a series of design steps to prevent fluid leakage in the areas where the optical imaging glass was attached to the 3-D printed bath. A combination of an acetone dip and adhesive was found to create a watertight bath. Optical mapping was used to quantify cardiac conduction velocity and action potential duration to compare 3-D printed baths with a bath designed and manufactured in a machine shop. Importantly, the manufacturing method did not significantly affect conduction, action potential duration, or contraction, suggesting that 3-D printed baths are equally effective for optical mapping experiments. NEW & NOTEWORTHY This article details three-dimensional printable equipment for use in suspended whole heart optical mapping experiments. This equipment is less expensive than conventionally manufactured equipment as well as easily customizable by the experimentalist.
The baths can be waterproofed using only a three-dimensional printer, acetone, a glass microscope slide, c-clamps, and adhesive. Copyright © 2017 the American Physiological Society.
Evaluation of hot forming effects mapping for CAE analyses
NASA Astrophysics Data System (ADS)
Knoerr, L.; Faath, T.; Dykeman, J.; Malcolm, S.
2016-08-01
Hot forming has grown significantly in the manufacturing of structural components within the vehicle Body-In-White construction. The superior strength of press-hardened steels not only guarantees high resistance to deformation, it also brings significant weight savings compared with conventional cold-formed products. However, the benefit of achieving ultrahigh strength with hot stamping comes with a reduction in the ductility of the press-hardened part, which requires advanced material modeling to capture the predicted performance accurately. A technique to optically measure and map the thinning distribution after hot stamping has been shown to improve numerical analysis for fracture prediction. The proposed method of determining the forming effects and mapping them to CAE models can be integrated into the Vehicle Development Process to shorten the time to production.
Holmes, Robert R.; Dunn, Chad J.
1996-01-01
A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and a reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in a larger value of 500-year flood total-streambed scour than the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, geographically distributed throughout Illinois, and 15 county highway bridges.
Mucke, M; Zhaunerchyk, V; Frasinski, L J; ...
2015-07-01
Few-photon ionization and relaxation processes in acetylene (C2H2) and ethane (C2H6) were investigated at the Linac Coherent Light Source x-ray free-electron laser (FEL) at SLAC, Stanford, using a highly efficient multi-particle correlation spectroscopy technique based on a magnetic bottle. The analysis method of covariance mapping has been applied and enhanced, allowing us to identify electron pairs associated with double core hole (DCH) production and competing multiple ionization processes, including Auger decay sequences. The experimental technique and the analysis procedure are discussed in the light of earlier DCH investigations carried out at the same FEL and at third-generation synchrotron radiation sources. In particular, we demonstrate the capability of the covariance mapping technique to disentangle the formation of molecular DCH states, which is barely feasible with conventional electron spectroscopy methods.
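Covariance mapping, as used above, estimates C(E1, E2) = <X(E1)X(E2)> - <X(E1)><X(E2)> over many FEL shots, so that electrons emitted together in the same event show up as off-diagonal islands. The sketch below plants a synthetic correlated electron pair in two energy bins of Poisson-distributed spectra; the shot counts, bin layout, and rates are assumptions for illustration.

```python
import numpy as np

def covariance_map(shots):
    """Covariance map over FEL shots: `shots` is (n_shots, n_bins) electron
    spectra; correlated features (e.g. the two electrons of a double core
    hole) appear as off-diagonal peaks."""
    shots = np.asarray(shots, float)
    dev = shots - shots.mean(axis=0)          # per-bin deviation from the mean
    return dev.T @ dev / shots.shape[0]

rng = np.random.default_rng(1)
n_shots, n_bins = 2000, 8
shots = rng.poisson(1.0, (n_shots, n_bins)).astype(float)
pair = rng.poisson(0.5, n_shots).astype(float)   # correlated electron pair
shots[:, 2] += pair                              # pair appears in bins 2 and 5
shots[:, 5] += pair
C = covariance_map(shots)
# C[2, 5] sits near var(pair) ~ 0.5, while uncorrelated off-diagonals hover near 0
```

This is why covariance mapping can recover pair correlations that single-shot electron spectroscopy cannot: the uncorrelated background averages toward zero while true coincidences accumulate.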
NASA Astrophysics Data System (ADS)
Daffara, Claudia; Parisotto, Simone; Ambrosini, Dario
2018-05-01
We present a multi-purpose, dual-mode imaging method in the Mid-Wavelength Infrared (MWIR) range (from 3 μm to 5 μm) for more efficient nondestructive analysis of artworks. Using a setup based on a MWIR thermal camera and multiple radiation sources, two radiometric image datasets are acquired in different modalities: an image in quasi-reflectance mode (TQR) and a thermal sequence in emission mode. The advantages are the complementarity of the information, the use of the quasi-reflectance map for calculating the emissivity map, and the use of the TQR map to reference the thermographic images to the visible. The concept of the method is presented, its practical feasibility is demonstrated with a custom imaging setup, and its potential for nondestructive analysis is shown in a notable cultural-heritage application. The method has been used as an experimental tool in support of the restoration of the mural painting "Monocromo" by Leonardo da Vinci. Feedback from the operators and a comparison with some conventional diagnostic techniques are also given to underline the novelty and potential of the method.
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-10-01
In this article we propose two grid generation methods for global ocean general circulation models. Unlike conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries onto areas with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and an overall smooth grid-cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids can potentially achieve alignment of grid lines with large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal and curvilinear, they can be readily used by the majority of ocean general circulation models, which are based on finite differences and require grid orthogonality. The proposed grid generation algorithms can also be applied to grid generation for regional ocean modeling where a complex land-sea distribution is present.
Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi
2017-01-01
Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated by dementia with cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; the 3DT1-conventional template and the 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and the ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was smaller than the other two templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.
Hagen, C K; Diemoz, P C; Endrizzi, M; Rigon, L; Dreossi, D; Arfelli, F; Lopez, F C M; Longo, R; Olivo, A
2014-04-07
X-ray phase contrast imaging (XPCi) methods are sensitive to phase in addition to attenuation effects and, therefore, can achieve improved image contrast for weakly attenuating materials, such as often encountered in biomedical applications. Several XPCi methods exist, most of which have already been implemented in computed tomographic (CT) modality, thus allowing volumetric imaging. The Edge Illumination (EI) XPCi method had, until now, not been implemented as a CT modality. This article provides indications that quantitative 3D maps of an object's phase and attenuation can be reconstructed from EI XPCi measurements. Moreover, a theory for the reconstruction of combined phase and attenuation maps is presented. Both reconstruction strategies find applications in tissue characterisation and the identification of faint, weakly attenuating details. Experimental results for wires of known materials and for a biological object validate the theory and confirm the superiority of the phase over conventional, attenuation-based image contrast.
Effect of increased surface tension and assisted ventilation on /sup 99m/Tc-DTPA clearance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jefferies, A.L.; Kawano, T.; Mori, S.
1988-02-01
Experiments were performed to determine the effects of conventional mechanical ventilation (CMV) and high-frequency oscillation (HFO) on the clearance of technetium-99m-labeled diethylenetriamine pentaacetate (99mTc-DTPA) from lungs with altered surface tension properties. A submicronic aerosol of 99mTc-DTPA was insufflated into the lungs of anesthetized, tracheotomized rabbits before and 1 h after the administration of the aerosolized detergent dioctyl sodium sulfosuccinate (OT). Rabbits were ventilated by one of four methods: 1) spontaneous breathing; 2) CMV at 12 cmH2O mean airway pressure (MAP); 3) HFO at 12 cmH2O MAP; 4) HFO at 16 cmH2O MAP. Administration of OT resulted in decreased arterial PO2 (PaO2), increased lung wet-to-dry weight ratios, and abnormal lung pressure-volume relationships, compatible with increased surface tension. 99mTc-DTPA clearance was accelerated after OT in all groups. The post-OT rate of clearance (k) was significantly faster (P less than 0.05) in the CMV at 12 cmH2O MAP (k = 7.57 +/- 0.71%/min (SE)) and HFO at 16 cmH2O MAP (k = 6.92 +/- 0.61%/min) groups than in the spontaneously breathing (k = 4.32 +/- 0.55%/min) and HFO at 12 cmH2O MAP (k = 4.68 +/- 0.63%/min) groups. The clearance curves were biexponential in the former two groups. We conclude that pulmonary clearance of 99mTc-DTPA is accelerated in high-surface-tension pulmonary edema, and this effect is enhanced by both conventional ventilation and HFO at high mean airway pressure.
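A clearance rate k of the kind reported above is conventionally obtained from a log-linear fit of lung activity versus time, A(t) = A0 * exp(-k t / 100) with k in %/min. The sketch below recovers k from noise-free synthetic data; the sampling times and the monoexponential assumption (the paper notes some curves were biexponential) are illustrative simplifications.

```python
import numpy as np

def clearance_rate(t_min, activity):
    """Estimate the monoexponential clearance rate k (%/min) from lung
    activity A(t) = A0 * exp(-k*t/100) via a log-linear least-squares fit."""
    slope, _intercept = np.polyfit(np.asarray(t_min, float),
                                   np.log(np.asarray(activity, float)), 1)
    return -100.0 * slope

t = np.arange(0.0, 30.0, 1.0)          # minutes after insufflation
a = 100.0 * np.exp(-0.0757 * t)        # synthetic counts at k = 7.57 %/min
k = clearance_rate(t, a)
# k recovers 7.57 %/min, matching the constructed decay
```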
Global mapping of infectious disease
Hay, Simon I.; Battle, Katherine E.; Pigott, David M.; Smith, David L.; Moyes, Catherine L.; Bhatt, Samir; Brownstein, John S.; Collier, Nigel; Myers, Monica F.; George, Dylan B.; Gething, Peter W.
2013-01-01
The primary aim of this review was to evaluate the state of knowledge of the geographical distribution of all infectious diseases of clinical significance to humans. A systematic review was conducted to enumerate cartographic progress, with respect to the data available for mapping and the methods currently applied. The results helped define the minimum information requirements for mapping infectious disease occurrence, and a quantitative framework for assessing the mapping opportunities for all infectious diseases. This revealed that of 355 infectious diseases identified, 174 (49%) have a strong rationale for mapping and of these only 7 (4%) had been comprehensively mapped. A variety of ambitions, such as the quantification of the global burden of infectious disease, international biosurveillance, assessing the likelihood of infectious disease outbreaks and exploring the propensity for infectious disease evolution and emergence, are limited by these omissions. An overview of the factors hindering progress in disease cartography is provided. It is argued that rapid improvement in the landscape of infectious diseases mapping can be made by embracing non-conventional data sources, automation of geo-positioning and mapping procedures enabled by machine learning and information technology, respectively, in addition to harnessing labour of the volunteer ‘cognitive surplus’ through crowdsourcing. PMID:23382431
Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo
2014-07-01
A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions for dense and fatty tissues on digital mammograms, which was an independent subset used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution and followed the energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. 
This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.
Magnetic field mapping of the UCNTau magneto-gravitational trap: design study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Libersky, Matthew Murray
2014-09-04
The beta decay lifetime of the free neutron is an important input to the Standard Model of particle physics, but values measured using different methods have exhibited substantial disagreement. The UCNτ experiment in development at Los Alamos National Laboratory (LANL) plans to explore better methods of measuring the neutron lifetime using ultracold neutrons (UCNs). In this experiment, UCNs are confined in a magneto-gravitational trap formed by a curved, asymmetric Halbach array placed inside a vacuum vessel and surrounded by holding field coils. If any defects present in the Halbach array are sufficient to reduce the local field near the surface below that needed to repel UCNs of the desired energy levels, loss by material interaction can occur at a rate similar to the loss by beta decay. A map of the magnetic field near the surface of the array is necessary to identify any such defects, but the array's curved geometry and placement in a vacuum vessel make conventional field mapping methods difficult. A system consisting of computer vision-based tracking and a rover holding a Hall probe has been designed to map the field near the surface of the array, and construction of an initial prototype has begun at LANL. The design of the system and initial results are described here.
Color-coded visualization of magnetic resonance imaging multiparametric maps
NASA Astrophysics Data System (ADS)
Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit
2017-01-01
Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only one-dimensional information to be encoded. Yet human perception places visual information in a three-dimensional color space, and in theory each dimension of this space can be used to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: in imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns, and in a clinical data set of N = 13 prostate cancer mpMRI scans, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared with conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method, which can be broadly applied to visualize different types of multivariate MRI data.
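The tri-variate encoding described above amounts to normalising three co-registered parameter maps and assigning one to each of the R, G and B channels, so each voxel's colour carries three measurements at once. A minimal sketch follows; the choice of parameters (ADC, perfusion, T2) and the min-max normalisation are illustrative assumptions, not the paper's exact colour pipeline.

```python
import numpy as np

def trivariate_rgb(maps):
    """Stack three co-registered parameter maps into an RGB image after
    per-map min-max normalisation to [0, 1]; a constant map collapses
    to zeros rather than dividing by zero."""
    channels = []
    for m in maps:
        m = np.asarray(m, float)
        lo, hi = m.min(), m.max()
        channels.append((m - lo) / (hi - lo) if hi > lo else np.zeros_like(m))
    return np.stack(channels, axis=-1)        # shape (..., 3)

# Toy 1x2 maps: hypothetical ADC, perfusion and T2 values
adc  = np.array([[0.0, 1.0]])
perf = np.array([[2.0, 4.0]])
t2   = np.array([[5.0, 5.0]])
img = trivariate_rgb([adc, perf, t2])
# img[0, 0] = [0, 0, 0] (black); img[0, 1] = [1, 1, 0] (yellow: high ADC + perfusion)
```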
NASA Astrophysics Data System (ADS)
Watkins, Hannah; Bond, Clare; Butler, Rob
2016-04-01
Geological mapping techniques have advanced significantly in recent years from paper fieldslips to Toughbook, smartphone and tablet mapping; but how do the methods used to create a geological map affect the thought processes that result in the final map interpretation? Geological maps have many key roles in the field of geosciences, including understanding geological processes and geometries in 3D, interpreting geological histories and understanding stratigraphic relationships in 2D and 3D. Here we consider the impact of the methods used to create a map on the thought processes that result in the final geological map interpretation. As mapping technology has advanced in recent years, the way in which we produce geological maps has also changed. Traditional geological mapping is undertaken using paper fieldslips, pencils and compass clinometers. The map interpretation evolves through time as data are collected. This interpretive process that results in the final geological map is often supported by recording observations, ideas and alternative geological models in a field notebook, explored with the use of sketches and evolutionary diagrams. In combination, the field map and notebook can be used to challenge the map interpretation and consider its uncertainties. These uncertainties, and the balance of data to interpretation, are often lost in the creation of published 'fair' copy geological maps. The advent of Toughbooks, smartphones and tablets in the production of geological maps has changed the process of map creation. Digital data collection, particularly through the use of inbuilt gyrometers, has turned phones and tablets into geological mapping tools that can be used to collect large amounts of geological data quickly. With GPS functionality these data are also geospatially located, assuming good GPS connectivity, and can be linked to georeferenced in-field photography.
In contrast, line drawing, for example for lithological boundary interpretation and sketching, is yet to find the digital flow that is achieved with pencil on a notebook page or map. Free-form integrated sketching and notebook functionality in geological mapping software packages is in its infancy. The result is a tendency for digital geological mapping to focus on the ease of data collection rather than on the thoughts and careful observations that come from notebook sketching and interpreting boundaries on a map in the field. The final digital geological map can be assessed for when and where data were recorded, but the thought processes of the mapper are less easily assessed, and the use of observations and sketching to generate ideas and interpretations may be inhibited by reliance on digital mapping methods. All mapping methods have their own distinct advantages and disadvantages, and with more recent technologies both hardware and software issues have arisen. We present field examples of using conventional fieldslip mapping, and compare these with more advanced technologies to highlight some of the main advantages and disadvantages of each method and discuss where geological mapping may be going in the future.
Optical constants of solid ammonia in the infrared
NASA Technical Reports Server (NTRS)
Robertson, C. W.; Downing, H. D.; Curnutte, B.; Williams, D.
1975-01-01
No direct measurements of the refractive index for solid ammonia could be obtained because of failures in attempts to map the reflection spectrum. Kramers-Kronig techniques were, therefore, used in the investigation. The subtractive Kramers-Kronig techniques employed are similar to those discussed by Ahrenkiel (1971). The subtractive method provides more rapid convergence than the conventional techniques when data are available over a limited spectral range.
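The singly subtractive Kramers-Kronig relation referred to above computes the refractive index n(ν) from the absorption index k(ν) plus one known anchor value n(ν0). A minimal numerical sketch follows; the Lorentzian band, grid, and anchor value are synthetic illustrations, not the ammonia data of the paper, and the principal value is handled crudely by dropping the singular sample.

```python
# Numerical sketch of the singly subtractive Kramers-Kronig relation:
# n(v) = n(v0) + (2/pi)(v^2 - v0^2) P-integral of v'k(v') / ((v'^2-v^2)(v'^2-v0^2)).
# Synthetic data only; a real implementation needs careful pole treatment.
import numpy as np

def sskk(nu, k, nu0, n0):
    """n(nu) from k(nu) plus one anchor value n(nu0), on a uniform grid."""
    dx = nu[1] - nu[0]
    n = np.empty_like(nu)
    for i, v in enumerate(nu):
        with np.errstate(divide="ignore", invalid="ignore"):
            integrand = nu * k / ((nu**2 - v**2) * (nu**2 - nu0**2))
        # crude principal-value handling: drop the singular sample at nu == v
        integrand[~np.isfinite(integrand)] = 0.0
        n[i] = n0 + (2.0 / np.pi) * (v**2 - nu0**2) * integrand.sum() * dx
    return n

nu = np.linspace(400.0, 4000.0, 2000)           # wavenumber grid, cm^-1
k = 0.3 / (1.0 + ((nu - 1100.0) / 40.0) ** 2)   # synthetic Lorentzian band
n = sskk(nu, k, nu0=2500.0, n0=1.4)
print(bool(np.all(np.isfinite(n))))
```

The subtraction of the anchor point is what yields the faster convergence noted in the abstract: the extra factor (ν'² − ν0²) in the denominator suppresses contributions from outside the measured range.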
Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike
2016-03-01
Radium ((226)Ra) contamination derived from military, industrial, and pharmaceutical products can be found at a number of historical sites across the world, posing a risk to human health. The analysis of spectral data derived using gamma-ray spectrometry can offer a powerful tool to rapidly estimate and map the activity, depth, and lateral distribution of (226)Ra contamination covering an extensive area. Consequently, reliable risk assessments can be developed for individual sites in a fraction of the timeframe required by traditional labour-intensive sampling techniques such as soil coring. However, local heterogeneity of the natural background, statistical counting uncertainty, and non-linear source response are confounding problems associated with gamma-ray spectral analysis. This is particularly challenging when attempting to deal with enhanced concentrations of a naturally occurring radionuclide such as (226)Ra. As a result, conventional surveys tend to attribute the highest activities to the largest total signal received by a detector (gross counts): an assumption that tends to neglect higher activities at depth. To overcome these limitations, a methodology was developed making use of Monte Carlo simulations, Principal Component Analysis, and Machine Learning-based algorithms to derive depth and activity estimates for (226)Ra contamination. The approach was applied to spectra taken using two gamma-ray detectors (Lanthanum Bromide and Sodium Iodide), with the aim of identifying an optimised combination of detector and spectral processing routine. It was confirmed that, through a combination of Neural Networks and Lanthanum Bromide, the most accurate depth and activity estimates could be found. The advantage of the method was demonstrated by mapping depth and activity estimates at a case study site in Scotland.
There the method identified significantly higher activity (>3 Bq g(-1)) occurring at depth (>0.4 m) that conventional gross counting algorithms failed to identify. It was concluded that the method could easily be employed to identify areas of high activity potentially occurring at depth, prior to intrusive investigation using conventional sampling techniques. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
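The spectral processing chain described above (dimensionality reduction of spectra, then a learned regression to depth) can be sketched in a toy form. The spectral model below is invented for illustration: deeper sources show a larger scattered low-energy continuum relative to the full-energy peak. A 1-nearest-neighbour regressor in principal-component space stands in for the paper's neural network.

```python
# Toy sketch: simulate spectra whose scatter fraction grows with source depth,
# reduce dimensionality with PCA (via SVD), estimate depth from the scores.
# Spectrum shape, depth range, and the 1-NN regressor are all assumptions.
import numpy as np

rng = np.random.default_rng(1)
E = np.linspace(0.0, 1.0, 128)                       # normalized energy axis

def spectrum(depth):
    """Full-energy peak at E=0.7 plus a scatter continuum growing with depth."""
    peak = np.exp(-depth) * np.exp(-((E - 0.7) / 0.02) ** 2)
    scatter = depth * np.exp(-3.0 * E)
    counts = peak + scatter + 0.01 * rng.random(E.size)
    return counts / counts.sum()                     # shape normalization

train_depths = np.linspace(0.0, 0.8, 50)
X = np.array([spectrum(d) for d in train_depths])

# PCA via SVD on mean-centred spectra; keep the first 3 principal components
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
scores = (X - mu) @ Vt[:3].T

def predict_depth(s):
    """Nearest-neighbour lookup in PCA-score space."""
    z = (s - mu) @ Vt[:3].T
    return train_depths[np.argmin(np.linalg.norm(scores - z, axis=1))]

est = predict_depth(spectrum(0.42))
print(round(float(est), 2))
```

The same scores could feed any regressor; the paper's point is that a trained model on reduced spectra recovers depth information that a gross-counts survey discards.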
NASA Astrophysics Data System (ADS)
Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander
2017-06-01
Both basic science and marine spatial planning are in need of high-resolution, spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms was tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.
Lim, Dae-Woon; Kim, Sungjune; Harale, Aadesh; Yoon, Minyoung; Suh, Myunghyun Paik; Kim, Jihan
2017-01-01
Structural deformation and collapse in metal-organic frameworks (MOFs) can lead to loss of long-range order, making it a challenge to model these amorphous materials using conventional computational methods. In this work, we show that a structure–property map consisting of simulated data for crystalline MOFs can be used to indirectly obtain adsorption properties of structurally deformed MOFs. The structure–property map (with dimensions such as Henry coefficient, heat of adsorption, and pore volume) was constructed using a large data set of over 12000 crystalline MOFs from molecular simulations. By mapping the experimental data points of deformed SNU-200, MOF-5, and Ni-MOF-74 onto this structure–property map, we show that the experimentally deformed MOFs share similar adsorption properties with their nearest neighbor crystalline structures. Once the nearest neighbor crystalline MOFs for a deformed MOF are selected from a structure–property map at a specific condition, then the adsorption properties of these MOFs can be successfully transformed onto the degraded MOFs, leading to a new way to obtain properties of materials whose structural information is lost. PMID:28696307
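The nearest-neighbour lookup at the heart of the approach can be sketched very simply: place a deformed material at its measured coordinates in property space and borrow the adsorption behaviour of its nearest crystalline entries. The database rows, units, and the choice of k below are invented placeholders, not the 12000-MOF data set of the paper.

```python
# Minimal sketch of reading properties off a structure-property map.
# Coordinates: (log10 Henry coeff, heat of adsorption, pore volume); the
# stored "uptake" is the property transferred from crystalline neighbours.
import math

# (log10 Henry coeff, Qst [kJ/mol], pore volume [cm3/g], uptake [mmol/g])
crystal_db = [
    (-6.1, 18.0, 0.55, 4.2),
    (-5.4, 22.5, 0.90, 7.8),
    (-4.9, 25.0, 1.30, 9.1),
    (-5.8, 20.0, 0.70, 5.6),
]

def predict_uptake(henry, qst, vpore, k=2):
    """Average the uptake of the k nearest crystalline entries in property space."""
    ranked = sorted(crystal_db,
                    key=lambda r: math.dist((henry, qst, vpore), r[:3]))
    return sum(r[3] for r in ranked[:k]) / k

print(round(predict_uptake(-5.6, 21.0, 0.75), 2))
```

A real implementation would standardize each axis before computing distances, since Henry coefficient, heat of adsorption, and pore volume live on very different scales.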
Grouiller, Frédéric; Thornton, Rachel C.; Groening, Kristina; Spinelli, Laurent; Duncan, John S.; Schaller, Karl; Siniatchkin, Michael; Lemieux, Louis; Seeck, Margitta; Michel, Christoph M.
2011-01-01
In patients with medically refractory focal epilepsy who are candidates for epilepsy surgery, concordant non-invasive neuroimaging data are useful to guide invasive electroencephalographic recordings or surgical resection. Simultaneous electroencephalography and functional magnetic resonance imaging recordings can reveal regions of haemodynamic fluctuations related to epileptic activity and help localize its generators. However, many of these studies (40–70%) remain inconclusive, principally due to the absence of interictal epileptiform discharges during simultaneous recordings, or lack of haemodynamic changes correlated to interictal epileptiform discharges. We investigated whether the presence of epilepsy-specific voltage maps on scalp electroencephalography correlated with haemodynamic changes and could help localize the epileptic focus. In 23 patients with focal epilepsy, we built epilepsy-specific electroencephalographic voltage maps using averaged interictal epileptiform discharges recorded during long-term clinical monitoring outside the scanner and computed the correlation of this map with the electroencephalographic recordings in the scanner for each time frame. The time course of this correlation coefficient was used as a regressor for functional magnetic resonance imaging analysis to map haemodynamic changes related to these epilepsy-specific maps (topography-related haemodynamic changes). The method was first validated in five patients with significant haemodynamic changes correlated to interictal epileptiform discharges on conventional analysis. We then applied the method to 18 patients who had inconclusive simultaneous electroencephalography and functional magnetic resonance imaging studies due to the absence of interictal epileptiform discharges or absence of significant correlated haemodynamic changes. 
The concordance of the results with subsequent intracranial electroencephalography and/or resection area in patients who were seizure free after surgery was assessed. In the validation group, haemodynamic changes correlated to voltage maps were similar to those obtained with conventional analysis in 5/5 patients. In 14/18 patients (78%) with previously inconclusive studies, scalp maps related to epileptic activity had haemodynamic correlates even when no interictal epileptiform discharges were detected during simultaneous recordings. Haemodynamic changes correlated to voltage maps were spatially concordant with intracranial electroencephalography or with the resection area. We found better concordance in patients with lateral temporal and extratemporal neocortical epilepsy compared to medial/polar temporal lobe epilepsy, probably due to the fact that electroencephalographic voltage maps specific to lateral temporal and extratemporal epileptic activity are more dissimilar to maps of physiological activity. Our approach significantly increases the yield of simultaneous electroencephalography and functional magnetic resonance imaging to localize the epileptic focus non-invasively, allowing better targeting for surgical resection or implantation of intracranial electrode arrays. PMID:21752790
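The regressor construction described above reduces, per EEG time frame, to one spatial (across-electrode) Pearson correlation between the in-scanner data and the patient's averaged spike voltage map. A self-contained sketch with random stand-in data follows; the electrode count, sampling, and injected "spike" frames are assumptions for illustration.

```python
# Sketch of the map-correlation regressor: correlate every EEG time frame
# (a vector of electrode voltages) with a fixed template voltage map; the
# resulting time course is the fMRI regressor.  Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n_elec, n_t = 32, 500
eeg = rng.standard_normal((n_elec, n_t))            # in-scanner EEG
spike_map = rng.standard_normal(n_elec)             # averaged IED voltage map

# embed a scaled copy of the map at a few "spike" time frames
for t in (100, 250, 400):
    eeg[:, t] += 5.0 * spike_map

def map_correlation(eeg, template):
    """Spatial Pearson correlation of each time frame with the template map."""
    e = eeg - eeg.mean(axis=0)                      # de-mean across electrodes
    m = template - template.mean()
    num = m @ e
    den = np.linalg.norm(m) * np.linalg.norm(e, axis=0)
    return num / den                                # one r per time frame

r = map_correlation(eeg, spike_map)
print(int(np.argmax(r)))
```

Before use as a regressor, this time course would be convolved with a haemodynamic response function, a step omitted here.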
Gheza, Davide; Paul, Katharina; Pourtois, Gilles
2017-11-24
Evaluative feedback provided during performance monitoring (PM) elicits either a positive or negative deflection ~250-300 ms after its onset in the event-related potential (ERP), depending on whether the outcome is reward-related or not, as well as expected or not. However, it remains unclear whether these two deflections reflect a unitary process, or rather dissociable effects arising from non-overlapping brain networks. To address this question, we recorded 64-channel EEG in healthy adult participants performing a standard gambling task in which valence and expectancy were manipulated in a factorial design. We analyzed the feedback-locked ERP data using a conventional ERP analysis, as well as an advanced topographic ERP mapping analysis supplemented with distributed source localization. Results reveal two main topographies showing opposing valence effects and being differently modulated by expectancy. The first one was short-lived and sensitive to no-reward irrespective of expectancy. Source estimation associated with this topographic map comprised mainly regions of the dorsal anterior cingulate cortex. The second one was primarily driven by reward, had a prolonged time course, and was monotonically influenced by expectancy. Moreover, this reward-related topographical map was best accounted for by intracranial generators estimated in the posterior cingulate cortex. These new findings suggest the existence of dissociable brain systems depending on feedback valence and expectancy. More generally, they highlight the added value of using topographic ERP mapping methods, besides conventional ERP measurements, to characterize qualitative changes occurring in the spatio-temporal dynamics of reward processing during PM. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mao, Cuili; Lu, Rongsheng; Liu, Zhijian
2018-07-01
In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of periodical phase errors is analyzed. The periodical phase errors can be adaptively compensated in the wrapped maps by using a set of fringe patterns. The compensated phase is then unwrapped with a multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodical phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.
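The inverse-phase idea can be demonstrated in a few lines: for a 3-step phase-shifting sequence, the dominant phase ripple produced by a nonlinear (gamma) projector response flips sign when the same patterns are projected with an extra π offset, so averaging the two wrapped phase maps cancels it. The gamma value and fringe model below are illustrative, not the authors' calibration, and this sketch shows only the single-frequency compensation step, not the multi-frequency unwrapping.

```python
# Sketch of inverse-phase error cancellation for 3-step phase shifting under
# a gamma-distorted projector response.  Synthetic 1D phase ramp; averaging
# the normal and pi-shifted estimates on the unit circle cancels the ripple.
import numpy as np

N, gamma = 3, 2.2
phi = np.linspace(-np.pi, np.pi, 1000, endpoint=False)   # true phase
delta = 2.0 * np.pi * np.arange(N) / N

def measured_phase(offset):
    """3-step phase estimate from gamma-distorted fringes with a pattern offset."""
    num = den = 0.0
    for d in delta:
        I = (0.5 + 0.4 * np.cos(phi + d + offset)) ** gamma  # nonlinear response
        num += I * np.sin(d)
        den += I * np.cos(d)
    return np.arctan2(-num, den)

p1 = measured_phase(0.0)                  # normal sequence (estimates phi)
p2 = measured_phase(np.pi)                # inverse sequence (estimates phi + pi)
# average on the unit circle to avoid wrap-around problems
p_avg = np.angle(np.exp(1j * p1) + np.exp(1j * (p2 - np.pi)))

def rms_err(p):
    return np.sqrt(np.mean(np.angle(np.exp(1j * (p - phi))) ** 2))

print(rms_err(p_avg) < rms_err(p1))
```

The cancellation works because the leading 3-step error term behaves like sin(3φ), and sin(3(φ+π)) = −sin(3φ); only smaller even-order ripple terms survive the average.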
NASA Astrophysics Data System (ADS)
Novellino, A.; Cigna, F.; Sowter, A.; Ramondini, M.; Calcaterra, D.
2017-03-01
A large-scale study of landslide processes was undertaken by coupling conventional geomorphological field surveys and aerial photographs with an advanced Interferometric Synthetic Aperture Radar (InSAR) analysis of ground instability in north-western Sicily. COSMO-SkyMed satellite images for the period between 2008 and 2011 were processed using the Intermittent Small BAseline Subset (ISBAS) technique, recently developed at the Department of Civil Engineering of the University of Nottingham. The use of ISBAS allowed the derivation of ground surface displacements across non-urbanized areas, thus overcoming one of the main limitations of conventional interferometric techniques. ISBAS provides ground motion information not only for urban but also for rural, woodland, grassland and agricultural terrains, which cover >60% of north-western Sicily, thereby improving, in some cases by a factor of 40, the slope instability investigation capabilities of InSAR methods. ISBAS ground motion data enabled the updating of the landslide inventory for the areas of Piana degli Albanesi and Marineo (over 130 km²), which encompass a number of active, dormant and inactive landslides according to the pre-existing landslide inventory maps produced through aerial photo-interpretation and local field checks. An average of ∼7000 ISBAS pixels per km² allowed the detection of small displacements in regions difficult to access. In particular, 226 landslides (mainly slides, flows and creep) and four badlands were identified, comprising a total area of 25.3 km². When compared to the previous landslide inventory maps, 84 phenomena were confirmed, 67 new events were detected and 79 previously mapped events were re-assessed, modifying their typology, boundary and/or state of activity.
Although the InSAR method used here is designed to measure slow rates of movement and therefore may not detect fast-moving events such as falls and topples, the results for Piana degli Albanesi and Marineo demonstrate the validity of this method to support land management, underlining the time and cost benefits of a combined approach using traditional monitoring procedures and satellite InSAR methods, especially where slow-moving slope movements prevail.
Shareef, Hussain; Mohamed, Azah
2017-01-01
The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396
Real-time volume rendering of 4D image using 3D texture mapping
NASA Astrophysics Data System (ADS)
Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il
2001-05-01
A four-dimensional image is 3D volume data that varies with time. It is used to represent deforming or moving objects in applications such as virtual surgery or 4D ultrasound. It is difficult to render a 4D image with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick to be loaded is tested for similarity against the one already in memory. If a brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes were rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
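The brick-coherence idea can be sketched without any graphics API: split each incoming volume into bricks, fingerprint each brick, and (re)upload only bricks whose content changed since the previous frame. Here a dict stands in for GPU texture memory, and exact hashing stands in for the paper's similarity test, which tolerates near-matches; brick size and volume layout are illustrative assumptions.

```python
# Sketch: per-brick change detection between consecutive volumes of a 4D
# image, so only dirty bricks are re-uploaded as 3D textures each frame.
import hashlib

BRICK = 16  # brick edge length in voxels (illustrative)

def bricks(volume, size):
    """Yield (origin, bytes) for each BRICK^3 brick of a flat voxel list."""
    for x0 in range(0, size, BRICK):
        for y0 in range(0, size, BRICK):
            for z0 in range(0, size, BRICK):
                data = bytes(
                    volume[(z0 + z) * size * size + (y0 + y) * size + (x0 + x)]
                    for z in range(BRICK) for y in range(BRICK) for x in range(BRICK)
                )
                yield (x0, y0, z0), data

def upload_frame(volume, size, texture_cache):
    """Return how many bricks actually had to be (re)loaded for this frame."""
    loads = 0
    for origin, data in bricks(volume, size):
        digest = hashlib.sha1(data).digest()
        if texture_cache.get(origin) != digest:
            texture_cache[origin] = digest   # stand-in for a glTexImage3D upload
            loads += 1
    return loads

size = 32
frame0 = [0] * size**3
frame1 = list(frame0)
frame1[0] = 255                      # change one voxel -> one dirty brick
cache = {}
print(upload_frame(frame0, size, cache), upload_frame(frame1, size, cache))
```

With 2x2x2 = 8 bricks here, the first frame uploads all 8 and the second frame uploads only the single brick containing the changed voxel, which is exactly the saving the abstract describes.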
Evaluation of MRI sequences for quantitative T1 brain mapping
NASA Astrophysics Data System (ADS)
Tsialios, P.; Thrippleton, M.; Glatz, A.; Pernet, C.
2017-11-01
T1 mapping constitutes a quantitative MRI technique with significant application in brain imaging. It allows evaluation of contrast uptake, blood perfusion and volume, providing a more specific biomarker of disease progression than conventional T1-weighted images. While there are many techniques for T1 mapping, there is a wide range of reported T1 values in tissues, raising the issue of protocol reproducibility and standardization. The gold standard for obtaining T1 maps is based on acquiring an IR-SE sequence. Widely used alternative sequences are IR-SE-EPI, VFA (DESPOT), DESPOT-HIFI and MP2RAGE, which speed up scanning and fitting procedures. A custom MRI phantom was used to assess the reproducibility and accuracy of the different methods. All scans were performed using a 3T Siemens Prisma scanner, and the acquired data were processed using two different codes. The main difference was observed for VFA (DESPOT), which grossly overestimated the T1 relaxation time, by 214 ms [126, 270], compared to the IR-SE sequence. MP2RAGE and DESPOT-HIFI gave slightly shorter times than IR-SE (~20 to 30 ms) and can be considered as alternative and time-efficient methods for acquiring accurate T1 maps of the human brain, while IR-SE-EPI gave identical results at the cost of lower image quality.
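The VFA (DESPOT1) estimate discussed above comes from a linearization of the spoiled gradient-echo signal: S = M0 sin(α)(1 − E1)/(1 − E1 cos(α)) with E1 = exp(−TR/T1) rearranges to S/sin(α) = E1 · S/tan(α) + M0(1 − E1), so a straight-line fit over flip angles yields E1 and hence T1 = −TR/ln(E1). TR, T1, and the flip angles below are illustrative numbers, not the Prisma protocol of the paper.

```python
# Sketch of the DESPOT1 / VFA linear fit on noiseless synthetic SPGR signals.
import math

TR, T1_true, M0 = 15.0, 1000.0, 1.0          # ms, ms, arbitrary units
E1 = math.exp(-TR / T1_true)

def spgr(alpha_deg):
    """Spoiled gradient-echo signal at flip angle alpha."""
    a = math.radians(alpha_deg)
    return M0 * math.sin(a) * (1 - E1) / (1 - E1 * math.cos(a))

angles = [2.0, 5.0, 10.0, 15.0, 20.0]
y = [spgr(a) / math.sin(math.radians(a)) for a in angles]
x = [spgr(a) / math.tan(math.radians(a)) for a in angles]

# least-squares slope of y against x; the slope is the estimate of E1
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
T1_fit = -TR / math.log(slope)
print(round(T1_fit))
```

With noiseless signals the fit recovers T1 exactly; the overestimation reported in the abstract arises in practice from B1 (flip-angle) errors and imperfect spoiling, which this sketch does not model.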
Optimal Mass Transport for Shape Matching and Comparison
Su, Zhengyu; Wang, Yalin; Shi, Rui; Zeng, Wei; Sun, Jian; Luo, Feng; Gu, Xianfeng
2015-01-01
Surface-based 3D shape analysis plays a fundamental role in computer vision and medical imaging. This work proposes to use the optimal mass transport map for shape matching and comparison, focusing on two important applications: surface registration and shape space. The computation of the optimal mass transport map is based on Monge-Brenier theory; in comparison to the conventional method based on Monge-Kantorovich theory, this approach significantly improves efficiency by reducing the computational complexity from O(n²) to O(n). For the surface registration problem, one commonly used approach is to use a conformal map to convert the shapes into some canonical space. Although conformal mappings have small angle distortions, they may introduce large area distortions, which are likely to cause numerical instability and thus failures of shape analysis. This work proposes to compose the conformal map with the optimal mass transport map to obtain the unique area-preserving map, which is intrinsic to the Riemannian metric, unique, and diffeomorphic. For the shape space study, this work introduces a novel Riemannian framework, Conformal Wasserstein Shape Space, by combining conformal geometry and optimal mass transport theory. In our work, all metric surfaces with the disk topology are mapped to the unit planar disk by a conformal mapping, which pushes the area element on the surface to a probability measure on the disk. The optimal mass transport provides a map from the shape space of all topological disks with metrics to the Wasserstein space of the disk, and the pullback Wasserstein metric equips the shape space with a Riemannian metric. We validate our work by numerous experiments and comparisons with prior approaches, and the experimental results demonstrate the efficiency and efficacy of our proposed approach. PMID:26440265
Kimura, Keisaku; Sato, Seiichi
2014-05-01
A conventional laser microscope can be used to derive the refractive index of very small transparent crystals over a wide size range, from the ratio of the geometrical height of the platelet to its apparent height under normally incident light. We demonstrate that this simple method is effective for samples from 100 μm down to 16 μm in size, using alkali halide crystals as a model system. The method is also applied to surface-fractured micro-crystals and to an inclined crystal in the microscopic size regime. Furthermore, we present two-dimensional refractive index mapping as well as two-dimensional height profiles for a mixture of three alkali halides, KCl, KI, and NaCl, all of micrometer size.
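The underlying optics reduces to one relation: refraction foreshortens the optical path, so the apparent thickness t_app of a platelet is smaller than its true thickness t, and n = t / t_app. The thickness values below are made-up illustrations chosen to land near the NaCl index of about 1.54.

```python
# The microscope trick in one function: focus on the top surface and on the
# substrate through the platelet, and take the ratio of the two heights.
def refractive_index(true_thickness_um, apparent_thickness_um):
    """Index of a transparent platelet from geometric vs apparent height."""
    return true_thickness_um / apparent_thickness_um

print(round(refractive_index(50.0, 32.5), 2))
```

A per-pixel version of this ratio, taken over a height map of the sample, gives exactly the two-dimensional refractive index mapping the abstract describes.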
Application of Satellite SAR Imagery in Mapping the Active Layer of Arctic Permafrost
NASA Technical Reports Server (NTRS)
Li, Shu-Sun; Romanovsky, V.; Lovick, Joe; Wang, Z.; Peterson, Rorik
2003-01-01
A method of mapping the active layer of Arctic permafrost using a combination of conventional synthetic aperture radar (SAR) backscatter and more sophisticated interferometric SAR (INSAR) techniques is proposed. The proposed research is based on the sensitivity of radar backscatter to the freeze and thaw status of the surface soil, and the sensitivity of INSAR techniques to centimeter- to sub-centimeter-level surface differential deformation. The former capability of SAR is investigated for deriving the timing and duration of the thaw period for surface soil of the active layer over permafrost. The latter is investigated for the feasibility of quantitative measurement of frost heaving and thaw settlement of the active layer during the freezing and thawing processes. The resulting knowledge contributes to remote sensing mapping of the active layer dynamics and Arctic land surface hydrology.
Dynamic Behavior and Optimization of Advanced Armor Ceramics: January-December 2011 Annual Report
2015-03-01
however, under conventional methods of processing. To develop plasticity in ceramics like SiC, new fracture mechanisms and interesting behaviors need... and new fracture mechanisms. These improvements, in turn, could offer the potential for improved ballistic performance. Co-precipitation has been... experiments, the following deformed fragments were recovered for extensive SEM and TEM study. A fracture mechanism map has been constructed in
Geological evaluation and applications of ERTS-1 imagery over Georgia
NASA Technical Reports Server (NTRS)
Pickering, S. M.; Jones, R. C.
1974-01-01
ERTS-1 70 mm and 9 x 9 film negatives are being used by conventional and color enhancement methods as a tool for geologic investigation. Geologic mapping and mineral exploration by conventional methods is very difficult in Georgia. Thick soil cover and heavy vegetation cause outcrops of bed rock to be small, rare and obscure. ERTS imagery, and remote sensing in general, have helped delineate: (1) major tectonic boundaries; (2) lithologic contacts; (3) foliation trends; (4) topographic lineaments; and (5) faults. The ERTS-1 MSS imagery yields the greatest amount of geologic information on the Piedmont, Blue Ridge, and Valley and Ridge Provinces of Georgia, where topography is strongly controlled by the bedrock geology. ERTS imagery, and general remote sensing techniques, have provided us with a powerful tool to assist geologic research; have significantly increased the mapping efficiency of our field geologists; have shown new lineaments associated with known shear and fault zones; have delineated new structural features; have provided a tool to re-evaluate our tectonic history; have helped to locate potential ground water sources and areas of aquifer recharge; have defined areas of geologic hazards; have shown areas of heavy siltation in major reservoirs; and, by its close interval repetition, have aided in monitoring surface mine reclamation activities and the environmental protection of our intricate marshland system.
Murri, L; Gori, S; Massetani, R; Bonanni, E; Marcella, F; Milani, S
1998-06-01
The sensitivity of quantitative electroencephalogram (EEG) was compared with that of conventional EEG in patients with acute ischaemic stroke. In addition, a correlation between quantitative EEG data and computerized tomography (CT) scan findings was carried out for all areas of lesion in order to reassess the actual role of EEG in the evaluation of stroke. Sixty-five patients were tested with conventional and quantitative EEG within 24 h of the onset of neurological symptoms, whereas CT scanning was performed within 4 days of the onset of stroke. EEG was recorded from 19 electrodes placed upon the scalp according to the International 10-20 System. Spectral analysis was carried out on 30 artefact-free 4-sec epochs. For each channel, absolute and relative power were calculated for the delta, theta, alpha and beta frequency bands, and these data were subsequently represented in colour-coded maps. Ten patients with extensive lesions documented by CT scan were excluded. The results indicated that conventional EEG revealed abnormalities in 40 of 55 cases, while EEG mapping showed abnormalities in 46 of 55 cases: it showed focal abnormalities in five, and nonfocal abnormalities in one, of the six cases that had appeared normal on visual inspection of the EEG. In a further 11 cases, where the conventional EEG revealed abnormalities in one hemisphere, the quantitative EEG and maps allowed the abnormal activity to be localized more precisely. The sensitivity of both methods was higher for frontocentral, temporal and parieto-occipital cortical-subcortical infarctions than for basal ganglia and internal capsule lesions; however, quantitative EEG was more efficient for all areas of lesion in detecting cases that had appeared normal by visual inspection and was clearly superior in revealing focal abnormalities.
When we considered the electrode at which the maximum power in the delta frequency band was recorded, a fairly close correlation was found between the localization of the maximum delta power and the position of lesions documented by CT scan for all areas of lesion except those located in the striatocapsular area.
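The quantitative step described above (band power per channel from short epochs, then the electrode of maximal delta power as the localizing feature) can be sketched on synthetic signals. The sampling rate, epoch length, and the channel carrying an injected 2.5 Hz rhythm are assumptions for illustration.

```python
# Sketch: per-channel delta-band (1-4 Hz) spectral power from a 4-s epoch,
# then the electrode with maximal delta power.  Synthetic 19-channel data;
# channel 5 gets an injected focal delta rhythm.
import numpy as np

fs, dur, n_ch = 128, 4.0, 19                       # Hz, s, 10-20 electrodes
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(3)
epoch = rng.standard_normal((n_ch, t.size))        # background activity
epoch[5] += 4.0 * np.sin(2 * np.pi * 2.5 * t)      # focal 2.5 Hz delta activity

def band_power(sig, lo, hi):
    """Sum of squared FFT magnitudes in [lo, hi) Hz, per channel."""
    spec = np.abs(np.fft.rfft(sig, axis=-1)) ** 2
    f = np.fft.rfftfreq(sig.shape[-1], d=1.0 / fs)
    return spec[..., (f >= lo) & (f < hi)].sum(axis=-1)

delta = band_power(epoch, 1.0, 4.0)                # one value per electrode
print(int(np.argmax(delta)))
```

Rendering the 19 per-electrode values as an interpolated scalp image is what produces the colour-coded maps referred to in the abstract.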
NASA Astrophysics Data System (ADS)
Fan, Tiantian; Yu, Hongbin
2018-03-01
A novel shape-from-focus method incorporating a 3D steerable filter is proposed in this paper for improved performance on textureless regions. Unlike conventional spatial methods, which estimate the depth map by searching for the maximum edge response, the proposed method takes both the edge response and the degree of axial imaging blur into consideration. As a result, more robust and accurate identification of the focused location can be achieved, especially when treating textureless objects. Improved performance in depth measurement is demonstrated by both simulation and experimental results.
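The conventional baseline the paper improves on can be sketched directly: compute a focus measure for every slice of a focal stack and take, per pixel, the slice index of maximum response as the depth map. The blur model, stack size, and windowed modified-Laplacian focus measure below are illustrative assumptions (with wrap-around edge handling for brevity), not the authors' steerable-filter method.

```python
# Minimal shape-from-focus sketch: focus measure per slice, per-pixel argmax.
import numpy as np

rng = np.random.default_rng(5)

def smooth(img, passes):
    """Cheap blur: repeated 5-point neighbourhood averaging (wrap-around edges)."""
    for _ in range(passes):
        img = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) + img) / 5.0
    return img

def focus_measure(img):
    """Modified Laplacian, aggregated over a small neighbourhood (SML)."""
    mlx = np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    mly = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
    return smooth(mlx + mly, 2)

texture = rng.standard_normal((64, 64))       # sharply textured scene
true_depth = 3                                # stack index where it is in focus
stack = [smooth(texture, 2 * abs(i - true_depth)) for i in range(7)]

fm = np.stack([focus_measure(s) for s in stack])   # (slices, H, W)
depth_map = fm.argmax(axis=0)                      # per-pixel depth estimate
print(float((depth_map == true_depth).mean()))
```

On a textureless region the edge response collapses toward noise for every slice, and this per-pixel argmax becomes unreliable, which is precisely the failure mode the paper's axial-blur cue is meant to address.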
Using the Landsat 7 enhanced thematic mapper tasseled cap transformation to extract shoreline
Scott, J.W.
2003-01-01
A semiautomated method for objectively interpreting and extracting the land-water interface has been devised and used successfully to generate multiple shoreline data for the test States of Louisiana and Delaware. The method is based on the application of tasseled cap transformation coefficients derived by the EROS Data Center for Landsat 7 Enhanced Thematic Mapper Data, and is used in conjunction with ERDAS Imagine software. Shoreline data obtained using this method are cost effective compared with conventional mapping methods for State, regional, and national coastline applications. Attempts to attribute vector shoreline data with orthometric elevation values derived from tide observation stations, however, proved unsuccessful.
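The land-water separation behind this method reduces to projecting each pixel's six reflective ETM+ bands onto the tasseled cap wetness component and thresholding, with the shoreline traced along the mask boundary. The wetness coefficients below are the commonly cited ETM+ reflectance values (Huang et al.) quoted from memory and should be verified against the EROS-derived set before real use; the zero threshold and sample pixels are assumptions.

```python
# Sketch: tasseled cap wetness as a water discriminator for shoreline mapping.
# Coefficients order: ETM+ bands 1, 2, 3, 4, 5, 7 (reflectance).
WETNESS = [0.2626, 0.2141, 0.0926, 0.0656, -0.7629, -0.5388]

def wetness(pixel):
    """Project a 6-band reflectance pixel onto the wetness axis."""
    return sum(c * b for c, b in zip(WETNESS, pixel))

def is_water(pixel, threshold=0.0):
    return wetness(pixel) > threshold

water_pixel = [0.06, 0.05, 0.04, 0.03, 0.01, 0.01]   # low SWIR: open water
land_pixel = [0.08, 0.09, 0.10, 0.30, 0.25, 0.15]    # vegetated land
print(is_water(water_pixel), is_water(land_pixel))
```

Water strongly absorbs the SWIR bands (5 and 7), which carry the large negative coefficients, so open water scores high on wetness while land scores low; vectorizing the mask and extracting its boundary yields the shoreline.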
A Fast and Flexible Method for Meta-Map Building for ICP-Based SLAM
NASA Astrophysics Data System (ADS)
Kurian, A.; Morin, K. W.
2016-06-01
Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data, which in turn improves the coverage and detail of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. With good GNSS coverage, building a map is a well-addressed problem. But in an indoor environment, GNSS reception is limited and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy that does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. To speed up the correspondence-finding step, a dual kd-tree and circular buffer architecture is proposed. We show that the proposed method can run in real time and has excellent navigation accuracy characteristics.
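The correspondence search that dominates ICP runtime is exactly what a kd-tree accelerates: build the tree once over the (static) map cloud, then query it for the nearest map point of every scan point at each iteration. A minimal point-to-point ICP sketch under that assumption follows; the paper's dual-tree, circular-buffer, and feature-preserving sub-sampling machinery is deliberately omitted:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iterations=20):
    """Basic point-to-point ICP; correspondences via a kd-tree on the map."""
    tree = cKDTree(dst)                        # built once: the map is static
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)           # nearest map point per scan point
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t
    return current
```

Sub-sampling the scan before the loop shrinks both the query cost and the stored map, which is the computational saving the paper targets.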
Inter-Slice Blood Flow and Magnetization Transfer Effects as A New Simultaneous Imaging Strategy.
Han, Paul Kyu; Barker, Jeffrey W; Kim, Ki Hwan; Choi, Seung Hong; Bae, Kyongtae Ty; Park, Sung-Hong
2015-01-01
The recent blood flow and magnetization transfer (MT) technique termed alternate ascending/descending directional navigation (ALADDIN) achieves contrast using inter-slice blood flow and MT effects with no separate preparation RF pulse, thereby potentially overcoming limitations of conventional methods. In this study, we examined the signal characteristics of ALADDIN as a simultaneous blood flow and MT imaging strategy by comparing it with pseudo-continuous ASL (pCASL) and conventional MT asymmetry (MTA) methods, all of which had the same bSSFP readout. Bloch-equation simulations and experiments showed that ALADDIN perfusion signals increased with flip angle, whereas MTA signals peaked at a flip angle around 45°-60°. ALADDIN provided signals comparable to those of pCASL and conventional MTA methods emulating the first, second, and third prior slices of ALADDIN under the same scan conditions, suggesting ALADDIN signals to be a superposition of signals from multiple labeling planes. The quantitative cerebral blood flow values from a modified continuous ASL model overestimated the perfusion signals compared to those measured with a pulsed ASL method. Simultaneous mapping of blood flow, MTA, and MT ratio in the whole brain is feasible with ALADDIN within a clinically reasonable time, which can potentially help in the diagnosis of various diseases.
Deng, Jie; Virmani, Sumeet; Young, Joseph; Harris, Kathleen; Yang, Guang-Yu; Rademaker, Alfred; Woloschak, Gayle; Omary, Reed A.; Larson, Andrew C.
2010-01-01
Purpose To test the hypothesis that diffusion-weighted (DW)-PROPELLER (periodically rotated overlapping parallel lines with enhanced reconstruction) MRI provides more accurate liver tumor necrotic fraction (NF) and viable tumor volume (VTV) measurements than conventional DW-SE-EPI (spin echo echo-planar imaging) methods. Materials and Methods Our institutional Animal Care and Use Committee approved all experiments. In six rabbits implanted with 10 VX2 liver tumors, DW-PROPELLER and DW-SE-EPI scans were performed at contiguous axial slice positions covering each tumor volume. Apparent diffusion coefficient maps of each tumor were used to generate spatially resolved tumor viability maps for NF and VTV measurements. We compared NF, whole tumor volume (WTV), and VTV measurements to corresponding reference standard histological measurements based on correlation and concordance coefficients and the Bland–Altman analysis. Results DW-PROPELLER generally improved image quality with less distortion compared to DW-SE-EPI. DW-PROPELLER NF, WTV, and VTV measurements were strongly correlated and satisfactorily concordant with histological measurements. DW-SE-EPI NF measurements were weakly correlated and poorly concordant with histological measurements. Bland–Altman analysis demonstrated that DW-PROPELLER WTV and VTV measurements were less biased from histological measurements than the corresponding DW-SE-EPI measurements. Conclusion DW-PROPELLER MRI can provide spatially resolved liver tumor viability maps for accurate NF and VTV measurements, superior to DW-SE-EPI approaches. DW-PROPELLER measurements may serve as a noninvasive surrogate for pathology, offering the potential for more accurate assessments of therapy response than conventional anatomic size measurements. PMID:18407540
NASA Astrophysics Data System (ADS)
Sawayama, Shuhei; Nurdin, Nurjannah; Akbar AS, Muhammad; Sakamoto, Shingo X.; Komatsu, Teruhisa
2015-06-01
Coral reef ecosystems worldwide are now being harmed by various stresses accompanying the degradation of fish habitats, and thus knowledge of fish-habitat relationships is urgently required. Because conventional research methods were not practical for this purpose due to the lack of a geospatial perspective, we attempted to develop a research method integrating visual fish observation with a seabed habitat map and to expand knowledge to a two-dimensional scale. WorldView-2 satellite imagery of the Spermonde Archipelago, Indonesia obtained in September 2012 was analyzed and classified into four typical substrates: live coral, dead coral, seagrass and sand. The overall classification accuracy of this map was 81.3%, considered precise enough for subsequent analyses. Three sub-areas (CC: continuous coral reef, BC: boundary of coral reef and FC: few live coral zone) around reef slopes were extracted from the map. Visual transect surveys for several fish species were conducted within each sub-area in June 2013. As a result, the mean density (ind./300 m²) of Chaetodon octofasciatus, known as an obligate feeder on corals, was significantly higher at BC than at the other sub-areas (p < 0.05), implying that this species' density is strongly influenced by the spatial configuration of its habitat, like the "edge effect." This indicates that future conservation procedures for coral reef fishes should consider not only coral cover but also its spatial configuration. The present study also indicates that the introduction of a geospatial perspective derived from remote sensing has great potential to advance conventional ecological studies on coral reef fishes.
Imaging of conductivity distributions using audio-frequency electromagnetic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Morrison, H.F.
1990-10-01
The objective of this study has been to develop mathematical methods for mapping conductivity distributions between boreholes using low frequency electromagnetic (em) data. In relation to this objective this paper presents two recent developments in high-resolution crosshole em imaging techniques. These are (1) audio-frequency diffusion tomography, and (2) a transform method in which low frequency data is first transformed into a wave-like field. The idea in the second approach is that we can then treat the transformed field using conventional techniques designed for wave field analysis.
Methodology of remote sensing data interpretation and geological applications. [Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Veneziani, P.; Dosanjos, C. E.
1982-01-01
Elements of photointerpretation discussed include the analysis of photographic texture and structure as well as film tonality. The method used is based on conventional techniques developed for interpreting aerial black and white photographs. By defining the properties which characterize the form and individuality of dual images, homologous zones can be identified. Guy's logic method (1966) was adapted and used on functions of resolution, scale, and spectral characteristics of remotely sensed products. Applications of LANDSAT imagery are discussed for regional geological mapping, mineral exploration, hydrogeology, and geotechnical engineering in Brazil.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, A; Rangaraj, D; Perez-Andujar, A
2016-06-15
Purpose: This work's objective is to determine the overlap, in terms of sub-processes and time, between acceptance testing and commissioning of a conventional medical linear accelerator and to evaluate the time saved by consolidating the two processes. Method: A process map for acceptance testing of medical linear accelerators was created from vendor documentation (Varian and Elekta). Using AAPM TG-106 and in-house commissioning procedures, a process map was created for commissioning of said accelerators. The time to complete each sub-process in each process map was evaluated. Redundancies in the processes were found and the time spent on each was calculated. Results: Mechanical testing significantly overlaps between the two processes; redundant work here amounts to 9.5 hours. Many non-scanning beam dosimetry tests overlap, resulting in another 6 hours of overlap. Beam scanning overlaps somewhat: acceptance tests include evaluating PDDs and multiple profiles for only one field size, while commissioning beam scanning includes multiple field sizes and depths of profiles; this overlap results in another 6 hours of rework. Absolute dosimetry, field outputs, and end-to-end tests are not done at all in acceptance testing. Finally, all imaging tests done in acceptance are repeated in commissioning, resulting in about 8 hours of rework. The total time overlap between the two processes is about 30 hours. Conclusion: The process mapping done in this study shows that there are no tests done in acceptance testing that are not also recommended for commissioning. This results in about 30 hours of redundant work when preparing a conventional linear accelerator for clinical use. Considering these findings in the context of the 5000 linacs in the United States, consolidating acceptance testing and commissioning would have allowed for the treatment of an additional 25000 patients using no additional resources.
Detecting spatial regimes in ecosystems
Research on early warning indicators has generally focused on assessing temporal transitions, with limited application of these methods to detecting spatial regimes. Traditional spatial boundary detection procedures that result in ecoregion maps are typically based on ecological potential (i.e. potential vegetation), and often fail to account for ongoing changes due to stressors such as land use change and climate change and their effects on plant and animal communities. We use Fisher information, an information theory-based method, on both terrestrial and aquatic animal data (US Breeding Bird Survey and marine zooplankton) to identify ecological boundaries, and compare our results to traditional early warning indicators, conventional ecoregion maps, and multivariate analyses such as nMDS (non-metric multidimensional scaling) and cluster analysis. We successfully detect spatial regimes and transitions in both terrestrial and aquatic systems using Fisher information. Furthermore, Fisher information provided explicit spatial information about community change that is absent from other multivariate approaches. Our results suggest that defining spatial regimes based on animal communities may better reflect ecological reality than do traditional ecoregion maps, especially in our current era of rapid and unpredictable ecological change.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhao, H.; Hao, H.; Wang, C.
2018-05-01
Accurate remote sensing water extraction is one of the primary tasks of watershed ecological environment study. The Yanhe water system is characterized by a small water volume and narrow river channels, which makes conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds in the Landsat/TM images of the Yanhe watershed were evaluated. Multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood are applied before NDWI water extraction to separate built-up land from small linear rivers. With the proposed method, a water map is extracted from 2010 Landsat/TM images of the watershed. An accuracy assessment is conducted to compare the proposed method with conventional water indexes such as NDWI, the Modified NDWI (MNDWI), the Enhanced Water Index (EWI), and the Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method achieves better water extraction accuracy in the Yanhe watershed and can effectively suppress confusing background objects compared to the conventional water indexes. By integrating NDWI with multi-spectral threshold segmentation, MST-NDWI retains richer information and yields markedly more accurate water extraction in the Yanhe watershed.
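NDWI itself is the normalized ratio of the green and near-infrared bands (TM2 and TM4 for Landsat TM); the MST-NDWI idea is to gate the NDWI test with per-band thresholds first, so that built-up land and other confusers are removed before the index is applied. The sketch below uses hypothetical threshold values and gating directions; the paper derives its TM1/TM4/TM5 thresholds by maximum likelihood:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / np.maximum(green + nir, 1e-12)

def mst_ndwi_water(tm1, tm2, tm4, tm5, thresholds, ndwi_threshold=0.0):
    """Sketch of the MST-NDWI idea: pre-segment with per-band thresholds,
    then apply the NDWI test. Threshold values and the direction of each
    comparison are illustrative assumptions, not the paper's values."""
    t1, t4, t5 = thresholds
    candidate = (tm1 > t1) & (tm4 < t4) & (tm5 < t5)  # suppress built-up land
    return candidate & (ndwi(tm2, tm4) > ndwi_threshold)
```

On a real scene the three thresholds would be fitted per image (e.g. by the maximum-likelihood segmentation the paper describes) rather than fixed.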
Secure positioning technique based on encrypted visible light map for smart indoor service
NASA Astrophysics Data System (ADS)
Lee, Yong Up; Jung, Gillyoung
2018-03-01
Indoor visible light (VL) positioning systems for smart indoor services are negatively affected by both cochannel interference from adjacent light sources and VL reception position irregularity in the three-dimensional (3-D) VL channel. A secure positioning methodology based on a two-dimensional (2-D) encrypted VL map is proposed, implemented in prototypes of the specific positioning system, and analyzed based on performance tests. The proposed positioning technique enhances the positioning performance by more than 21.7% compared to the conventional method in real VL positioning tests. Further, the pseudonoise code is found to be the optimal encryption key for secure VL positioning for this smart indoor service.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan, utilizing the redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarity of each pixel with every other pixel is calculated by an exponential function of pixel-value differences. We assume that the material similarities of pixels remain unchanged in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimate, which is the average of the other pixel values weighted by their similarities. The proposed algorithm, referred to as structure-preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebral structures and decomposes bone and soft tissue. Conclusion: We have developed an effective method to reduce the number of views, and therefore the data acquisition, in DECT. 
We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and electron density maps as accurate as conventional two-full-scan DECT.
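The structure-preserving prior described above penalizes each pixel's deviation from the similarity-weighted average of the other pixels, with similarities given by an exponential function of pixel-value differences. A toy sketch of that penalty follows; it uses dense O(N²) weights, so it is only practical for tiny images, and the paper's exact functional form and normalization may differ:

```python
import numpy as np

def similarity_weights(image, sigma):
    """Exponential similarity between every pair of pixel values
    (sketch of the SPIR prior; sigma is an assumed tuning parameter)."""
    flat = image.reshape(-1, 1)
    w = np.exp(-((flat - flat.T) / sigma) ** 2)
    np.fill_diagonal(w, 0.0)                 # a pixel does not weight itself
    return w / w.sum(axis=1, keepdims=True)  # row-normalize to an average

def structure_prior_penalty(image, weights):
    """L2 norm of (pixel - similarity-weighted average of other pixels)."""
    flat = image.ravel()
    return float(np.sum((flat - weights @ flat) ** 2))
```

In the full algorithm this penalty would be minimized jointly with a data-fidelity term on the sparse-view projections, with the weights computed once from the full-scan FBP image.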
Mapping Neurodegenerative Disease Onset and Progression.
Seeley, William W
2017-08-01
Brain networks have been of long-standing interest to neurodegeneration researchers, including but not limited to investigators focusing on conventional prion diseases, which are known to propagate along neural pathways. Tools for human network mapping, however, remained inadequate, limiting our understanding of human brain network architecture and preventing clinical research applications. Until recently, neuropathological studies were the only viable approach to mapping disease onset and progression in humans but required large autopsy cohorts and laborious methods for whole-brain sectioning and staining. Despite important advantages, postmortem studies cannot address in vivo, physiological, or longitudinal questions and have limited potential to explore early-stage disease except for the most common disorders. Emerging in vivo network-based neuroimaging strategies have begun to address these issues, providing data that complement the neuropathological tradition. Overall, findings to date highlight several fundamental principles of neurodegenerative disease anatomy and pathogenesis, as well as some enduring mysteries. These principles and mysteries provide a road map for future research. Copyright © 2017 Cold Spring Harbor Laboratory Press; all rights reserved.
Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers
NASA Astrophysics Data System (ADS)
Jiang, Chufan; Li, Beiwen; Zhang, Song
2017-04-01
This paper presents a method that can recover absolute phase pixel by pixel without embedding markers in the three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware components. The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough a priori knowledge of the surface geometry; (2) artificially create phase maps at different z planes using the geometric constraints of the structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge the unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create an absolute phase map pixel by pixel even for objects with a large depth range. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
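For reference, the wrapped phase that steps (1)-(4) start from is recovered with the standard three-step phase-shifting relation for patterns shifted by 2π/3 (this is the textbook formula, not code from the paper):

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from patterns I_k = A + B*cos(phi + (k-2)*2*pi/3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Simulate the three shifted patterns for a known phase ramp and recover it
phi = np.linspace(-np.pi + 0.01, np.pi - 0.01, 100)
a, b = 0.5, 0.4                      # background and modulation amplitudes
i1 = a + b * np.cos(phi - 2 * np.pi / 3)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2 * np.pi / 3)
recovered = three_step_phase(i1, i2, i3)
```

The result is wrapped to (−π, π]; the paper's contribution is the per-region, geometric-constraint-based unwrapping that turns this wrapped map into an absolute one without markers.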
Spranger, T; Hettelingh, J-P; Slootweg, J; Posch, M
2008-08-01
Long-range transboundary air pollution has caused severe environmental effects in Europe. European air pollution abatement policy, in the framework of the UNECE Convention on Long-range Transboundary Air Pollution (LRTAP Convention) and the European Union Clean Air for Europe (CAFE) programme, has used critical loads and their exceedances by atmospheric deposition to design emission abatement targets and strategies. The LRTAP Convention International Cooperative Programme on Modelling and Mapping Critical Loads and Levels and Air Pollution Effects, Risks and Trends (ICP M&M) generates European critical loads datasets to enable this work. Developing dynamic nitrogen flux models and using them for a prognosis and assessment of nitrogen effects remains a challenge. Further research is needed on links between nitrogen deposition effects, climate change, and biodiversity.
An imaging colorimeter for noncontact tissue color mapping.
Balas, C
1997-06-01
There has been a considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure randomly affects the color of the tissue. In this paper an imaging colorimeter is presented in which the nonimaging optical photodetector of conventional colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling independent capture of the color information at any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurements and mapping free of the errors and limitations present in conventional colorimeters. This system was used for monitoring blood supply changes in psoriatic plaques undergoing psoralen and ultraviolet-A radiation (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and the objective evaluation of treatment effectiveness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayah, N; Weiss, E; Watkins, W
Purpose: To evaluate the dose-mapping error (DME) inherent to conventional dose-mapping algorithms as a function of dose-matrix resolution. Methods: As DME has been reported to be greatest where dose gradients overlap tissue-density gradients, non-clinical 66 Gy IMRT plans were generated for 11 lung patients with the target edge defined as the maximum 3D density gradient on the 0% (end of inhale) breathing phase. Post-optimization, beams were copied to 9 breathing phases. Monte Carlo dose computed (at 2×2×2 mm³ resolution) on all 10 breathing phases was deformably mapped to phase 0% using the Monte Carlo energy-transfer method with congruent mass-mapping (EMCM); an externally implemented tri-linear interpolation method with voxel subdivision; Pinnacle's internal (tri-linear) method; and a post-processing energy-mass voxel-warping method (dTransform). All methods used the same base displacement vector field (or its pseudo-inverse, as appropriate) for the dose mapping. Mapping was also performed at 4×4×4 mm³ by merging adjacent dose voxels. Results: Using EMCM as the reference standard, no clinically significant (>1 Gy) DMEs were found for the mean lung dose (MLD), lung V20Gy, or esophagus dose-volume indices, although MLD and V20Gy were statistically different (2×2×2 mm³). Pinnacle-to-EMCM target D98% DMEs of 4.4 and 1.2 Gy were observed (2×2×2 mm³). However, dTransform, which like EMCM conserves integral dose, had DME >1 Gy for one case. The root mean square (RMS) of the DME for the tri-linear-to-EMCM methods was lower at the smaller voxel volume for the tumor 4D-D98%, lung V20Gy, and cord D1%. Conclusion: When tissue gradients overlap with dose gradients, organ-at-risk DME was statistically significant but not clinically significant. Target D98% DME was deemed clinically significant for 2/11 patients (2×2×2 mm³). 
Since the RMS-DME between EMCM and the tri-linear method was reduced at 2×2×2 mm³, use of this resolution is recommended for dose mapping. Interpolative dose methods are sufficiently accurate for the majority of cases. J.V. Siebers receives funding support from Varian Medical Systems.
Multi-template tensor-based morphometry: Application to analysis of Alzheimer's disease
Koikkalainen, Juha; Lötjönen, Jyrki; Thurfjell, Lennart; Rueckert, Daniel; Waldemar, Gunhild; Soininen, Hilkka
2012-01-01
In this paper, methods for using multiple templates in tensor-based morphometry (TBM) are presented and compared to the conventional single-template approach. TBM analysis requires non-rigid registrations, which are often subject to registration errors. When using multiple templates and, therefore, multiple registrations, it can be assumed that the registration errors are averaged and eventually compensated. Four different methods are proposed for multi-template TBM. The methods were evaluated using magnetic resonance (MR) images of healthy controls, patients with stable or progressive mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD) from the ADNI database (N=772). The performance of TBM features in classifying images was evaluated both quantitatively and qualitatively. Classification results show that the multi-template methods are statistically significantly better than the single-template method. The overall classification accuracy was 86.0% for the classification of control and AD subjects, and 72.1% for the classification of stable and progressive MCI subjects. The statistical group-level difference maps produced using multi-template TBM were smoother, formed larger continuous regions, and had larger t-values than the maps obtained with single-template TBM. PMID:21419228
Weingärtner, Sebastian; Meßner, Nadja M; Zöllner, Frank G; Akçakaya, Mehmet; Schad, Lothar R
2017-08-01
To study the feasibility of black-blood contrast in native T1 mapping for reduction of partial voluming at the blood-myocardium interface. A saturation pulse prepared heart-rate-independent inversion recovery (SAPPHIRE) T1 mapping sequence was combined with motion-sensitized driven-equilibrium (MSDE) blood suppression for black-blood T1 mapping at 3 Tesla. Phantom scans were performed to assess T1 time accuracy. In vivo black-blood and conventional SAPPHIRE T1 mapping was performed in eight healthy subjects and analyzed for T1 times, precision, and inter- and intraobserver variability. Furthermore, manually drawn regions of interest (ROIs) in all T1 maps were dilated and eroded to analyze the dependence of septal T1 times on the ROI thickness. Phantom results and in vivo myocardial T1 times showed comparable accuracy for black-blood and conventional SAPPHIRE (in vivo: black-blood: 1562 ± 56 ms vs. conventional: 1583 ± 58 ms, P = 0.20). Using black-blood SAPPHIRE, precision was significantly lower (standard deviation: 133.9 ± 24.6 ms vs. 63.1 ± 6.4 ms, P < 0.0001), and blood T1 time measurement was not possible. A significantly increased interobserver intraclass correlation coefficient (ICC) (0.996 vs. 0.967, P = 0.011) and a similar intraobserver ICC (0.979 vs. 0.939, P = 0.11) were obtained with the black-blood sequence. Conventional SAPPHIRE showed a strong dependence on the ROI thickness (R² = 0.99); no such trend was observed using the black-blood approach (R² = 0.29). Black-blood SAPPHIRE successfully eliminates partial voluming at the blood pool in native myocardial T1 mapping while providing accurate T1 times, albeit at a reduced precision. Magn Reson Med 78:484-493, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla
2014-01-01
Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, projections cannot be acquired over full tilt angle range with biological samples in electron microscopy. ET image reconstruction can be considered an ill-posed problem because of this missing information. This results in artifacts, seen as the loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated with a number of gold particles. An ellipsoid fitting based method was developed to realize the quantitative measures elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates the sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with widely-used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information producing isotropic resolution. Furthermore, this method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.
Detecting spatial regimes in ecosystems
Sundstrom, Shana M.; Eason, Tarsha; Nelson, R. John; Angeler, David G.; Barichievy, Chris; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Gunderson, Lance; Knutson, Melinda; Nash, Kirsty L.; Spanbauer, Trisha; Stow, Craig A.; Allen, Craig R.
2017-01-01
Research on early warning indicators has generally focused on assessing temporal transitions with limited application of these methods to detecting spatial regimes. Traditional spatial boundary detection procedures that result in ecoregion maps are typically based on ecological potential (i.e. potential vegetation), and often fail to account for ongoing changes due to stressors such as land use change and climate change and their effects on plant and animal communities. We use Fisher information, an information theory-based method, on both terrestrial and aquatic animal data (U.S. Breeding Bird Survey and marine zooplankton) to identify ecological boundaries, and compare our results to traditional early warning indicators, conventional ecoregion maps and multivariate analyses such as nMDS and cluster analysis. We successfully detected spatial regimes and transitions in both terrestrial and aquatic systems using Fisher information. Furthermore, Fisher information provided explicit spatial information about community change that is absent from other multivariate approaches. Our results suggest that defining spatial regimes based on animal communities may better reflect ecological reality than do traditional ecoregion maps, especially in our current era of rapid and unpredictable ecological change.
Flat ion milling: a powerful tool for preparation of cross-sections of lead-silver alloys.
Brodusch, Nicolas; Boisvert, Sophie; Gauvin, Raynald
2013-06-01
While conventional mechanical and chemical polishing results in stress, deformation and polishing particles embedded on the surface, flat milling with Ar+ ions erodes the material with no mechanical artefacts. This flat milling process is presented as an alternative method to prepare a Pb-Ag alloy cross-section for scanning electron microscopy. The resulting surface is free of scratches with very little to no stress induced, so that electron diffraction and channelling contrast are possible. The results have shown that energy dispersive spectrometer (EDS) mapping, electron channelling contrast imaging and electron backscatter diffraction can be conducted with only one sample preparation step. Electron diffraction patterns acquired at 5 keV possessed very good pattern quality, highlighting an excellent surface condition. An orientation map was acquired at 20 keV with an indexing rate of 90.1%. An EDS map was performed at 5 keV, and Pb-Ag precipitates of sizes lower than 100 nm were observed. However, the drawback of the method is the generation of a noticeable surface topography resulting from the interaction of the ion beam with a polycrystalline and biphasic sample.
Zheng, Changlin; Zhu, Ye; Lazar, Sorin; Etheridge, Joanne
2014-04-25
We introduce off-axis chromatic scanning confocal electron microscopy, a technique for fast mapping of inelastically scattered electrons in a scanning transmission electron microscope without a spectrometer. The off-axis confocal mode enables the inelastically scattered electrons to be chromatically dispersed both parallel and perpendicular to the optic axis, so that electrons with different energy losses can be separated and detected in the image plane, allowing efficient energy filtering in a confocal mode with an integrating detector. We describe the experimental configuration and demonstrate the method with nanoscale core-loss chemical mapping of silver (M4,5) in an aluminium-silver alloy and atomic scale imaging of the low intensity core-loss La (M4,5@840 eV) signal in LaB6. Scan rates up to 2 orders of magnitude faster than conventional methods were used, enabling a corresponding reduction in radiation dose and increase in the field of view. If coupled with the enhanced depth and lateral resolution of the incoherent confocal configuration, this offers an approach for nanoscale three-dimensional chemical mapping.
NASA Astrophysics Data System (ADS)
Chang, Alice Chinghsuan; Liu, Bernard Haochih
2018-02-01
The categorization of microbial strains is conventionally based on molecular methods, and the morphological characteristics of bacterial strains are seldom studied. In this research, we revealed the macromolecular structures of the bacterial surface via AFM mechanical mapping, whose resolution was determined not only by the nanoscale tip size but also by the mechanical properties of the specimen. This technique enabled the nanoscale study of membranous structures of microbial strains with simple specimen preparation and flexible working environments, overcoming the multiple restrictions of electron microscopy and label-based biochemical analytical methods. The characteristic macromolecules located on the cell surface were considered to be surface layer proteins and were found to be specific to the Escherichia coli genotypes; their averaged molecular sizes were characterized with diameters ranging from 38 to 66 nm, and the molecular shapes were kidney-like or round. In conclusion, the surface macromolecular structures have unique characteristics that link to the E. coli genotype, which suggests that the genomic effects on cellular morphologies can be rapidly identified using AFM mechanical mapping.
A fast method for optical simulation of flood maps of light-sharing detector modules.
Shi, Han; Du, Dong; Xu, JianFeng; Moses, William W; Peng, Qiyu
2015-12-01
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons need to be tracked. We present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up photon tracking in detector modules constructed with a polished finish and specular reflector materials. We simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was 200 to 600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and specular reflector materials.
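The core of such a simplified specular reflectance model is the mirror-reflection update applied wherever a photon meets a polished crystal face or specular reflector. A minimal sketch of that step (the function name and example vectors are illustrative, not taken from the paper):

```python
import numpy as np

def specular_reflect(d, n):
    """Reflect a photon direction d off a surface with unit normal n.

    Under an idealized specular model (polished finish, specular
    reflector), the outgoing direction is the mirror reflection
    r = d - 2 (d . n) n.
    """
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)          # ensure the normal is unit length
    return d - 2.0 * np.dot(d, n) * n

# A photon travelling straight down hits a horizontal surface
# and is reflected straight back up.
r = specular_reflect([0.0, 0.0, -1.0], [0.0, 0.0, 1.0])
```

A full flood-map simulation would repeat this update along structured ray paths until photons reach the photodetector plane.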
Application of photogrammetry for analysis of occlusal contacts.
Shigeta, Yuko; Hirabayashi, Rio; Ikawa, Tomoko; Kihara, Takuya; Ando, Eriko; Hirai, Shinya; Fukushima, Shunji; Ogawa, Takumi
2013-04-01
The conventional 2D-analysis methods for occlusal contacts provided limited information on tooth morphology. The present study aims to detect 3D positional information of occlusal contacts from 2D-photos via photogrammetry. We propose an image processing solution for analysis of occlusal contacts and facets via the black silicone method and a photogrammetric technique. The occlusal facets were reconstructed from a 2D-photograph data-set of inter-occlusal records into a 3D image via photogrammetry. The configuration of the occlusal surface was reproduced with polygons. In addition, the textures of the occlusal contacts were mapped to each polygon. DIFFERENCE FROM CONVENTIONAL METHODS: Constructing occlusal facets with 3D polygons from 2D-photos with photogrammetry was a defining characteristic of this image processing technique. It allowed us to better observe the findings of the black silicone method. Compared with conventional 3D analysis using a 3D scanner, our 3D models did not reproduce the detail of the anatomical configuration. However, by merging the findings of the inter-occlusal record, the deformation of the mandible and the displacement of periodontal ligaments under occlusal force were reflected in our model. EFFECT OR PERFORMANCE: Through the use of polygons in the conversion of 2D images to 3D images, we were able to define the relation between the location and direction of the occlusal contacts and facets, which was difficult to detect via conventional methods. Through our method of making a 3D polygon model, the findings of inter-occlusal records, which reflect the jaw/teeth behavior under occlusal force, could be observed 3-dimensionally. Copyright © 2012 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ma, Pei; Gu, Shi; Wang, Yves T.; Jenkins, Michael W.; Rollins, Andrew M.
2016-03-01
Optical mapping (OM) using fluorescent voltage-sensitive dyes (VSD) to measure membrane potential is currently the most effective method for electrophysiology studies in early embryonic hearts due to its noninvasiveness and large field-of-view. Conventional OM acquires bright-field images, collecting signals that are integrated in depth and projected onto a 2D plane, not capturing the 3D structure of the sample. Early embryonic hearts, especially at looping stages, have a complicated, tubular geometry. Therefore, conventional OM cannot provide a full picture of the electrical conduction circumferentially around the heart, and may result in incomplete and inaccurate measurements. Here, we demonstrate OM of Hamburger and Hamilton stage 14 embryonic quail hearts using a new commercially-available VSD, FluoVolt, and depth sectioning using a custom-built light-sheet microscopy system. Axial and lateral resolution of the system is 14 µm and 8 µm, respectively. For OM imaging, the field-of-view was set to 900 µm × 900 µm to cover the entire heart. 2D-over-time OM image sets at multiple cross-sections through the looping-stage heart were recorded. The shapes of both atrial and ventricular action potentials acquired were consistent with previous reports using a conventional VSD (di-4-ANEPPS). With FluoVolt, the signal-to-noise ratio (SNR) is improved significantly, by a factor of 2-10 (compared with di-4-ANEPPS), enabling light-sheet OM, which intrinsically has lower SNR due to smaller sampling volumes. Electrophysiologic parameters are rate dependent. Optical pacing was successfully integrated into the system to ensure heart rate consistency. This will also enable accurately gated reconstruction of full four-dimensional conduction maps and 3D conduction velocity measurements.
Land-based lidar mapping: a new surveying technique to shed light on rapid topographic change
Collins, Brian D.; Kayen, Robert
2006-01-01
The rate of natural change in such dynamic environments as rivers and coastlines can sometimes overwhelm the monitoring capacity of conventional surveying methods. In response to this limitation, U.S. Geological Survey (USGS) scientists are pioneering new applications of light detection and ranging (lidar), a laser-based scanning technology that promises to greatly increase our ability to track rapid topographic changes and manage their impact on affected communities.
Vina, Andres; Peters, Albert J.; Ji, Lei
2003-01-01
There is a global concern about the increase in atmospheric concentrations of greenhouse gases. One method being discussed to encourage greenhouse gas mitigation efforts is based on a trading system whereby carbon emitters can buy effective mitigation efforts from farmers implementing conservation tillage practices. These practices sequester carbon from the atmosphere, and such a trading system would require a low-cost and accurate method of verification. Remote sensing technology can offer such a verification technique. This paper is focused on the use of standard image processing procedures applied to a multispectral Ikonos image, to determine whether it is possible to validate that farmers have complied with agreements to implement conservation tillage practices. A principal component analysis (PCA) was performed in order to isolate image variance in cropped fields. Analyses of variance (ANOVA) statistical procedures were used to evaluate the capability of each Ikonos band and each principal component to discriminate between conventional and conservation tillage practices. A logistic regression model was implemented on the principal component most effective in discriminating between conventional and conservation tillage, in order to produce a map of the probability of conventional tillage. The Ikonos imagery, in combination with ground-reference information, proved to be a useful tool for verification of conservation tillage practices.
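The verification pipeline the abstract outlines (PCA to isolate variance in cropped fields, then logistic regression on the most discriminating component to map the probability of conventional tillage) can be sketched in a few lines. Everything below is a toy stand-in: the data are random placeholders, not Ikonos measurements, and the labels are synthetic.

```python
import numpy as np

# Hypothetical pixel data: 200 pixels x 4 spectral bands, with band 0
# carrying most of the variance (and, here, the tillage signal).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 0] *= 3.0
y = (X[:, 0] > 0).astype(int)       # 1 = "conventional tillage" (toy label)

# PCA via eigendecomposition of the band covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
pc1 = Xc @ vecs[:, -1]              # first principal component scores

# Logistic model on one component, fitted by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    z = np.clip(w * pc1 + b, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * np.mean((p - y) * pc1)
    b -= 0.1 * np.mean(p - y)
prob_conventional = 1.0 / (1.0 + np.exp(-np.clip(w * pc1 + b, -30.0, 30.0)))
```

In practice one would use a library classifier and validated ground-reference labels; the point is only the PCA-then-logistic structure of the mapping step.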
Validation of a novel mapping system and utility for mapping complex atrial tachycardias.
Honarbakhsh, S; Hunter, R J; Dhillon, G; Ullah, W; Keating, E; Providencia, R; Chow, A; Earley, M J; Schilling, R J
2018-03-01
This study sought to validate a novel wavefront mapping system utilizing whole-chamber basket catheters (CARTOFINDER, Biosense Webster). The system was validated in terms of (1) mapping atrial-paced beats and (2) mapping complex wavefront patterns in atrial tachycardia (AT). Patients undergoing catheter ablation for AT and persistent AF were included. A 64-pole basket catheter was used to acquire unipolar signals that were processed by the CARTOFINDER mapping system to generate dynamic wavefront propagation maps. The left atrium was paced from four sites to demonstrate focal activation. ATs were mapped with the mechanism confirmed by conventional mapping, entrainment, and response to ablation. Twenty-two patients were included in the study (16 with AT and 6 with AF initially who terminated to AT during ablation). In total, 172 maps were created with the mapping system. It correctly identified atrial-pacing sites in all paced maps. It accurately mapped 9 focal/microreentrant and 18 macroreentrant ATs both in the left and right atrium. A third and fourth observer independently identified the sites of atrial pacing and the AT mechanism from the CARTOFINDER maps, while being blinded to the conventional activation maps. This novel mapping system was effectively validated by mapping focal activation patterns from atrial-paced beats. The system was also effective in mapping complex wavefront patterns in a range of ATs in patients with scarred atria. The system may therefore be of practical use in the mapping and ablation of AT and could have potential for mapping wavefront activations in AF. © 2018 Wiley Periodicals, Inc.
A Method for the Positioning and Orientation of Rail-Bound Vehicles in GNSS-Free Environments
NASA Astrophysics Data System (ADS)
Hung, R.; King, B. A.; Chen, W.
2016-06-01
Mobile Mapping Systems (MMS) are increasingly applied for spatial data collection to support different fields because of their efficiency and the level of detail they can provide. The Position and Orientation System (POS), which is conventionally employed for locating and orienting an MMS, allows direct georeferencing of spatial data in real time. Since the performance of a POS depends on both the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS), poor GNSS conditions, such as in long tunnels and underground, introduce the necessity for post-processing. In above-ground railways, mobile mapping technology is employed with high-performance sensors for finite usage, which has considerable potential for enhancing railway safety and management in real time. In contrast, underground railways present a challenge for a conventional POS, so alternative configurations are necessary to maintain data accuracy and alleviate the need for post-processing. This paper introduces a method of rail-bound navigation to replace the role of GNSS for railway applications. The proposed method integrates INS and track alignment data for environment-independent navigation and reduces the demand for post-processing. The principle of rail-bound navigation is presented and its performance is verified by an experiment using a consumer-grade Inertial Measurement Unit (IMU) and a small-scale railway model. The method produced a substantial improvement in position and orientation for a poorly initialised system, achieving centimetre-level positional accuracy. The potential improvements indicated by, and the limitations of, rail-bound navigation are also considered for further development in existing railway systems.
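The central idea of rail-bound navigation, constraining the fix to the known track alignment so that only along-track distance needs to be integrated, can be sketched as follows. The circular-arc track and all parameter values are illustrative stand-ins for surveyed alignment data, not the paper's implementation:

```python
import math

def track_position(s, radius=100.0):
    """Map along-track distance s (m) to planar coordinates on a
    circular-arc track of the given radius. A real system would look up
    surveyed alignment data instead of this analytic stand-in."""
    theta = s / radius
    return radius * math.sin(theta), radius * (1.0 - math.cos(theta))

def dead_reckon(s0, speeds, dt=1.0):
    """Integrate along-track speed samples (m/s) at interval dt into a
    distance, then constrain the resulting fix to the track geometry."""
    s = s0 + sum(v * dt for v in speeds)
    return s, track_position(s)

# Three one-second intervals at 2 m/s advance the vehicle 6 m along track.
s, (x, y) = dead_reckon(0.0, [2.0, 2.0, 2.0])
```

The benefit of the constraint is that lateral drift of the inertial solution is eliminated by construction; only the along-track error remains to be estimated.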
Gradient-based multiresolution image fusion.
Petrović, Vladimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
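A single-resolution version of combining gradient maps can be sketched as below. The pick-the-larger-magnitude selection rule is an illustrative assumption; the paper performs fusion across a multiresolution pyramid using gradient filters derived from quadrature mirror filters, which this toy omits:

```python
import numpy as np

def fuse_gradient_maps(gx_list, gy_list):
    """Fuse per-image gradient maps by keeping, at each pixel, the
    gradient vector with the largest magnitude (an assumed selection
    rule for illustration)."""
    gx = np.stack(gx_list)
    gy = np.stack(gy_list)
    mag = np.hypot(gx, gy)                 # gradient magnitude per image
    pick = np.argmax(mag, axis=0)          # winning image index per pixel
    rows, cols = np.indices(pick.shape)
    return gx[pick, rows, cols], gy[pick, rows, cols]

# Two tiny 1x2 "images": image A dominates at pixel 0, image B at pixel 1.
a_x = np.array([[1.0, 0.0]]); a_y = np.array([[0.0, 0.0]])
b_x = np.array([[0.0, 3.0]]); b_y = np.array([[0.0, 4.0]])
fx, fy = fuse_gradient_maps([a_x, b_x], [a_y, b_y])
```

The fused gradient maps would then be passed to a reconstruction stage (analogous to an inverse wavelet transform) to produce the output image.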
Neural networks for data compression and invariant image recognition
NASA Technical Reports Server (NTRS)
Gardner, Sheldon
1989-01-01
An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.
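The "exponential mapping of retinal images" mentioned above can be illustrated with a log-polar sampling grid whose ring radii grow geometrically, so that rotation and scale changes of the input become shifts in the mapped domain. All parameter values below are illustrative, not from the model:

```python
import math

def log_polar_sample_points(n_rings, n_wedges, r_min=1.0, growth=1.3):
    """Generate retina-like sample coordinates: radii grow exponentially
    with ring index (the 'exponential mapping'), angles are uniform."""
    points = []
    for i in range(n_rings):
        r = r_min * growth ** i            # exponential radial spacing
        for j in range(n_wedges):
            a = 2.0 * math.pi * j / n_wedges
            points.append((r * math.cos(a), r * math.sin(a)))
    return points

pts = log_polar_sample_points(4, 8)
```

An image resampled at such points could then be filtered (e.g., with Gabor filters) and fed to an associative memory, as the abstract describes.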
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Ryosuke; Okajima, Takaharu, E-mail: okajima@ist.hokudai.ac.jp
We present multi-frequency force modulation atomic force microscopy (AFM) for mapping the complex shear modulus G* of living cells as a function of frequency over the range of 50–500 Hz in the same measurement time as a single-frequency force modulation measurement. The AFM technique enables us to reconstruct image maps of rheological parameters, which exhibit a frequency-dependent power-law behavior of G*. These quantitative rheological measurements reveal a large spatial variation in G* in this frequency range for single cells. Moreover, we find that the reconstructed images of the power-law rheological parameters are much different from those obtained in force-curve or single-frequency force modulation measurements. This indicates that the former provide information about intracellular mechanical structures of the cells that are usually not resolved with conventional force measurement methods.
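The frequency-dependent power-law behavior referred to above is commonly summarized as |G*(f)| ≈ G0 (f/f0)^α. A minimal log-log least-squares fit of that form, with f0 and the (noise-free, synthetic) data purely illustrative:

```python
import math

def fit_power_law(freqs, moduli, f0=50.0):
    """Fit |G*(f)| = G0 * (f / f0)**alpha by linear least squares in
    log-log space. The reference frequency f0 and the fitting recipe
    are illustrative choices, not the paper's procedure."""
    xs = [math.log(f / f0) for f in freqs]
    ys = [math.log(g) for g in moduli]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    g0 = math.exp(my - alpha * mx)
    return g0, alpha

# Noise-free synthetic check: G0 = 1000 Pa, alpha = 0.2.
freqs = [50.0, 100.0, 200.0, 500.0]
moduli = [1000.0 * (f / 50.0) ** 0.2 for f in freqs]
g0, alpha = fit_power_law(freqs, moduli)
```

Per-pixel fits of this kind are what would turn a stack of frequency-resolved G* maps into maps of the power-law parameters G0 and α.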
Mapping a surgeon’s becoming with Deleuze
Cristancho, Sayra; Fenwick, Tara
2017-01-01
The process of ‘becoming’ shapes professionals’ capability, confidence and identity. In contrast to notions of rugged individuals who achieve definitive status as experts, ‘becoming’ is a continuous emergent condition. It is often a process of struggle, and is always interminably linked to its environs and relationships. ‘Becoming’ is a way of understanding the tensions of everyday practice and knowledge of professionals. In this paper, we explore the notion of ‘becoming’ from the perspective of surgeons. We suggest that ‘becoming’, as theorised by Deleuze, offers a more nuanced understanding than is often represented using conventional vocabularies of competence, error, quality and improvement. We develop this conception by drawing from our Deleuze-inspired study of mapping experience in surgery. We argue for Deleuzian mapping as a method to research health professionals’ practice and experience, and suggest the utility of this approach as a pedagogical tool for medical education. PMID:26194103
NIRS-SPM: statistical parametric mapping for near infrared spectroscopy
NASA Astrophysics Data System (ADS)
Tak, Sungho; Jang, Kwang Eun; Jung, Jinwook; Jang, Jaeduck; Jeong, Yong; Ye, Jong Chul
2008-02-01
Even though there exists a powerful statistical parametric mapping (SPM) tool for fMRI, similar public domain tools are not available for near infrared spectroscopy (NIRS). In this paper, we describe a new public domain statistical toolbox called NIRS-SPM for quantitative analysis of NIRS signals. Specifically, NIRS-SPM statistically analyzes the NIRS data using the GLM and makes inference on the excursion probability, which comes from a random field interpolated from the sparse measurements. In order to obtain correct inference, NIRS-SPM offers pre-coloring and pre-whitening methods for temporal correlation estimation. For simultaneous recording of NIRS signals with fMRI, the spatial mapping between the fMRI image and real coordinates from a 3-D digitizer is estimated using Horn's algorithm. These powerful tools allow super-resolution localization of brain activation, which is not possible using conventional NIRS analysis tools.
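The GLM step can be sketched as an ordinary least-squares fit of a design matrix to the measured signal. The boxcar task regressor below is a toy stand-in, and the pre-coloring/pre-whitening for temporal correlation that NIRS-SPM applies is omitted:

```python
import numpy as np

# Minimal GLM sketch: signal = design @ beta, beta recovered by ordinary
# least squares. The design has a baseline column and a boxcar
# (10 samples rest / 10 samples task) regressor; values are illustrative.
n = 120
design = np.column_stack([
    np.ones(n),                                   # baseline regressor
    np.tile([0.0] * 10 + [1.0] * 10, n // 20),    # boxcar task regressor
])
true_beta = np.array([0.5, 2.0])
signal = design @ true_beta                       # noise-free toy signal
beta_hat = np.linalg.lstsq(design, signal, rcond=None)[0]
```

Inference on activation then amounts to testing whether the task-regressor coefficient (here `beta_hat[1]`) differs from zero, with the excursion-probability machinery handling the spatial interpolation.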
Applied photo interpretation for airbrush cartography
NASA Technical Reports Server (NTRS)
Inge, J. L.; Bridges, P. M.
1976-01-01
Lunar and planetary exploration has required the development of new techniques of cartographic portrayal. Conventional photo-interpretive methods employing size, shape, shadow, tone, pattern, and texture are applied to computer-processed satellite television images. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The portrayal of tonal densities in a relief illustration is performed using a unique airbrush technique derived from hill-shading of contour maps. The control of tone and line quality is essential because the mid-gray to dark tone densities must be finalized prior to the addition of highlights to the drawing. Highlights are added with an electric eraser until the drawing is completed. The drawing density is controlled with a reflectance-reading densitometer to meet certain density guidelines. The versatility of planetary photo-interpretive methods for airbrushed map portrayals is demonstrated by the application of these techniques to the synthesis of nonrelief data.
Dye, Dennis G.; Middleton, Barry R.; Vogel, John M.; Wu, Zhuoting; Velasco, Miguel G.
2016-01-01
We developed and evaluated a methodology for subpixel discrimination and large-area mapping of the perennial warm-season (C4) grass component of vegetation cover in mixed-composition landscapes of the southwestern United States and northern Mexico. We describe the methodology within a general, conceptual framework that we identify as the differential vegetation phenology (DVP) paradigm. We introduce a DVP index, the Normalized Difference Phenometric Index (NDPI) that provides vegetation type-specific information at the subpixel scale by exploiting differential patterns of vegetation phenology detectable in time-series spectral vegetation index (VI) data from multispectral land imagers. We used modified soil-adjusted vegetation index (MSAVI2) data from Landsat to develop the NDPI, and MSAVI2 data from MODIS to compare its performance relative to one alternate DVP metric (difference of spring average MSAVI2 and summer maximum MSAVI2), and two simple, conventional VI metrics (summer average MSAVI2, summer maximum MSAVI2). The NDPI in a scaled form (NDPIs) performed best in predicting variation in perennial C4 grass cover as estimated from landscape photographs at 92 sites (R2 = 0.76, p < 0.001), indicating improvement over the alternate DVP metric (R2 = 0.73, p < 0.001) and substantial improvement over the two conventional VI metrics (R2 = 0.62 and 0.56, p < 0.001). The results suggest DVP-based methods, and the NDPI in particular, can be effective for subpixel discrimination and mapping of exposed perennial C4 grass cover within mixed-composition landscapes of the Southwest, and potentially for monitoring of its response to drought, climate change, grazing and other factors, including land management. With appropriate adjustments, the method could potentially be used for subpixel discrimination and mapping of grass or other vegetation types in other regions where the vegetation components of the landscape exhibit contrasting seasonal patterns of phenology.
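MSAVI2 has a standard closed form, which the sketch below implements. The NDPI formula itself is not stated in the abstract, so the normalized-difference combination of the spring-average and summer-maximum phenology metrics shown here is an assumption patterned on the index name, not the authors' definition:

```python
import math

def msavi2(nir, red):
    """Modified soil-adjusted vegetation index (MSAVI2), standard
    closed form: (2*NIR + 1 - sqrt((2*NIR + 1)^2 - 8*(NIR - Red))) / 2."""
    t = 2.0 * nir + 1.0
    return (t - math.sqrt(t * t - 8.0 * (nir - red))) / 2.0

def ndpi(spring_msavi2_avg, summer_msavi2_max):
    """Assumed normalized-difference form of a phenometric index:
    contrasts the warm-season (summer) peak against the spring average.
    Illustrative only; the abstract does not give the NDPI formula."""
    return ((summer_msavi2_max - spring_msavi2_avg)
            / (summer_msavi2_max + spring_msavi2_avg))

v = msavi2(nir=0.4, red=0.1)
p = ndpi(spring_msavi2_avg=0.15, summer_msavi2_max=0.45)
```

The DVP idea is visible even in this toy: a C4 grass pixel (late, strong summer greenup) yields a larger index than a cool-season pixel whose spring and summer values are similar.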
Juras, Vladimir; Bohndorf, Klaus; Heule, Rahel; Kronnerwetter, Claudia; Szomolanyi, Pavol; Hager, Benedikt; Bieri, Oliver; Zbyn, Stefan; Trattnig, Siegfried
2016-06-01
To assess the clinical relevance of T2 relaxation times, measured by 3D triple-echo steady-state (3D-TESS), in knee articular cartilage compared to conventional multi-echo spin-echo T2 mapping. Thirteen volunteers and ten patients with focal cartilage lesions were included in this prospective study. All subjects underwent 3-Tesla MRI consisting of a multi-echo multi-slice spin-echo sequence (CPMG) as a reference method for T2 mapping, and 3D-TESS with the same geometry settings but variable acquisition times: standard (TESSs, 4:35 min) and quick (TESSq, 2:05 min). T2 values were compared in six different regions in the femoral and tibial cartilage using a Wilcoxon signed ranks test and the Pearson correlation coefficient (r). The local ethics committee approved this study, and all participants gave written informed consent. The mean quantitative T2 values measured by CPMG (mean: 46±9 ms) in volunteers were significantly higher compared to those measured with TESS (mean: 31±5 ms) in all regions. Both methods performed similarly in patients, but CPMG provided a slightly higher difference between lesions and native cartilage (CPMG: 90 ms→61 ms [31%], p=0.0125; TESS: 32 ms→24 ms [24%], p=0.0839). 3D-TESS provides results similar to those of a conventional multi-echo spin-echo sequence with many benefits, such as shortening of the total acquisition time and insensitivity to B1 and B0 changes. • 3D-TESS T2 mapping provides clinically comparable results to CPMG in a shorter scan time. • Clinical and investigational studies may benefit from the high temporal resolution of 3D-TESS. • 3D-TESS T2 values are able to differentiate between healthy and damaged cartilage.
NASA Astrophysics Data System (ADS)
Liu, Jingbin; Liang, Xinlian; Hyyppä, Juha; Yu, Xiaowei; Lehtomäki, Matti; Pyörälä, Jiri; Zhu, Lingli; Wang, Yunsheng; Chen, Ruizhi
2017-04-01
Terrestrial laser scanning has been widely used to analyze the 3D structure of a forest in detail and to generate data at the level of a reference plot for forest inventories without destructive measurements. Multi-scan terrestrial laser scanning is more commonly applied to collect plot-level data so that all of the stems can be detected and analyzed. However, it is necessary to match the point clouds of multiple scans to yield a point cloud with automated processing. Mismatches between datasets will lead to errors during the processing of multi-scan data. Classic registration methods based on flat surfaces cannot be directly applied in forest environments; therefore, artificial reference objects have conventionally been used to assist with scan matching. The use of artificial references requires additional labor and expertise, as well as greatly increasing the cost. In this study, we present an automated processing method for plot-level stem mapping that matches multiple scans without artificial references. In contrast to previous studies, the registration method developed in this study exploits the natural geometric characteristics among a set of tree stems in a plot and combines the point clouds of multiple scans into a unified coordinate system. Integrating multiple scans improves the overall performance of stem mapping in terms of the correctness of tree detection, as well as the bias and the root-mean-square errors of forest attributes such as diameter at breast height and tree height. In addition, the automated processing method makes stem mapping more reliable and consistent among plots, reduces the costs associated with plot-based stem mapping, and enhances the efficiency.
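Once stems have been matched between scans, registration reduces to estimating a rigid transform from corresponding stem positions. A 2D least-squares (Kabsch/Procrustes-style) sketch, with the matching step skipped and correspondences assumed known; names and data are illustrative:

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate the 2D rotation R and translation t that best align
    stem positions src to dst in the least-squares sense."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Stems rotated 90 degrees and shifted: recover the transform exactly.
src = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])
dst = src @ R_true.T + np.array([5.0, -3.0])
R, t = rigid_register(src, dst)
```

The paper's contribution is finding the stem correspondences automatically from the natural geometry of the plot; with those in hand, a closed-form solver like this merges the scans into one coordinate system.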
Power-efficient method for IM-DD optical transmission of multiple OFDM signals.
Effenberger, Frank; Liu, Xiang
2015-05-18
We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important.
Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J
2016-06-01
The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision: approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
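For context, the conventional three-parameter MOLLI signal model and the standard Look-Locker correction can be written as below. The IG extension to 2 + (number of groupings) parameters is described in the abstract but not reproduced here; the numbers in the example are illustrative:

```python
import math

def molli_signal(ti, a, b, t1_star):
    """Conventional three-parameter MOLLI model:
    S(TI) = A - B * exp(-TI / T1*)."""
    return a - b * math.exp(-ti / t1_star)

def corrected_t1(a, b, t1_star):
    """Standard Look-Locker correction applied after the fit:
    T1 = T1* * (B/A - 1)."""
    return t1_star * (b / a - 1.0)

# With the ideal inversion-recovery ratio B/A = 2, T1 equals T1*.
t1 = corrected_t1(a=1.0, b=2.0, t1_star=600.0)
```

IG fitting keeps this per-grouping signal shape but lets each inversion grouping start from its own (possibly incomplete) recovery state, which is what removes the need for rest periods.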
HapMap scanning of novel human minor histocompatibility antigens.
Kamei, Michi; Nannya, Yasuhito; Torikai, Hiroki; Kawase, Takakazu; Taura, Kenjiro; Inamoto, Yoshihiro; Takahashi, Taro; Yazaki, Makoto; Morishima, Satoko; Tsujimura, Kunio; Miyamura, Koichi; Ito, Tetsuya; Togari, Hajime; Riddell, Stanley R; Kodera, Yoshihisa; Morishima, Yasuo; Takahashi, Toshitada; Kuzushima, Kiyotaka; Ogawa, Seishi; Akatsuka, Yoshiki
2009-05-21
Minor histocompatibility antigens (mHags) are molecular targets of allo-immunity associated with hematopoietic stem cell transplantation (HSCT) and involved in graft-versus-host disease, but they also have beneficial antitumor activity. mHags are typically defined by host SNPs that are not shared by the donor and are immunologically recognized by cytotoxic T cells isolated from post-HSCT patients. However, the number of molecularly identified mHags is still too small to allow prospective studies of their clinical importance in transplantation medicine, mostly due to the lack of an efficient method for isolation. Here we show that when combined with conventional immunologic assays, the large data set from the International HapMap Project can be directly used for genetic mapping of novel mHags. Based on the immunologically determined mHag status in HapMap panels, a target mHag locus can be uniquely mapped through whole genome association scanning taking advantage of the unprecedented resolution and power obtained with more than 3,000,000 markers. The feasibility of our approach could be supported by extensive simulations and further confirmed by actually isolating 2 novel mHags as well as 1 previously identified example. The HapMap data set represents an invaluable resource for investigating human variation, with obvious applications in genetic mapping of clinically relevant human traits.
Concept mapping learning strategy to enhance students' mathematical connection ability
NASA Astrophysics Data System (ADS)
Hafiz, M.; Kadir, Fatra, Maifalinda
2017-05-01
The concept mapping learning strategy in teaching and learning mathematics has been investigated by numerous researchers. However, few researchers have scrutinized the role of concept maps in relation to mathematical connection ability. When concept maps are well understood, they may help students correlate one concept with another so that students can solve the mathematical problems they face. The objective of this research was to describe students' mathematical connection ability and to analyze the effect of using the concept mapping learning strategy on students' mathematical connection ability. This research was conducted at a senior high school in Jakarta. The method was quasi-experimental with a randomized control group design and a sample of 72 students. Data were obtained through a post-test administered after the treatment. The results of the research are: 1) students' mathematical connection ability reached the 'good enough' category; 2) the mathematical connection ability of students taught with the concept mapping learning strategy was higher than that of students taught with a conventional learning strategy. Based on the results above, it can be concluded that the concept mapping learning strategy could enhance students' mathematical connection ability, especially in trigonometry.
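A post-test comparison between a treatment and a control group is often summarized with a two-sample t statistic. The abstract does not state which test was used, and the score arrays below are hypothetical placeholders, not the study's data:

```python
import math

def welch_t(x, y):
    """Welch's two-sample t statistic (unequal variances assumed), a
    common way to compare post-test scores between two groups."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical post-test scores for illustration only.
concept_map = [78, 82, 75, 88, 91, 69]
conventional = [70, 74, 66, 80, 77, 65]
t_stat = welch_t(concept_map, conventional)
```

A positive statistic here simply reflects the higher mean of the first group; a real analysis would also report degrees of freedom and a p-value.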
ERIC Educational Resources Information Center
Richardson, R. Thomas; Sammons, Dotty; Del-Parte, Donna
2018-01-01
This study compared learning performance during and following AR and non-AR topographic map instruction and practice. Two-way ANOVA testing indicated no significant differences on a posttest assessment between map type and spatial ability. Prior learning activity results revealed a significant performance difference between AR and non-AR treatment…
Classification of hyperspectral imagery with neural networks: comparison to conventional tools
NASA Astrophysics Data System (ADS)
Merényi, Erzsébet; Farrand, William H.; Taranik, James V.; Minor, Timothy B.
2014-12-01
Efficient exploitation of hyperspectral imagery is of great importance in remote sensing. Artificial intelligence approaches have been receiving favorable reviews for classification of hyperspectral data because the complexity of such data challenges the limitations of many conventional methods. Artificial neural networks (ANNs) were shown to outperform traditional classifiers in many situations. However, studies that use the full spectral dimensionality of hyperspectral images to classify a large number of surface covers are scarce, if not non-existent. We advocate the need for methods that can handle the full dimensionality and a large number of classes to retain the discovery potential and the ability to discriminate classes with subtle spectral differences. We demonstrate that such a method exists in the family of ANNs. We compare the maximum likelihood, Mahalanobis distance, minimum distance, spectral angle mapper, and a hybrid ANN classifier for real hyperspectral AVIRIS data, using the full spectral resolution to map 23 cover types and using a small training set. Rigorous evaluation of the classification accuracies shows that the ANN outperforms the other methods and achieves ≈90% accuracy on test data.
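For concreteness, the spectral angle mapper (one of the conventional reference classifiers above) can be sketched in a few lines. This is a minimal illustration with synthetic spectra, not the study's code.

```python
import numpy as np

# Minimal sketch of the spectral angle mapper (SAM): each pixel spectrum is
# assigned to the reference spectrum with the smallest spectral angle.
# Reference and pixel spectra here are synthetic three-band examples.

def spectral_angle(a, b):
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def sam_classify(pixels, references):
    """pixels: (n, bands); references: (k, bands). Returns a class index per pixel."""
    angles = np.array([[spectral_angle(p, r) for r in references] for p in pixels])
    return angles.argmin(axis=1)

refs = np.array([[1.0, 0.1, 0.1], [0.1, 1.0, 0.1]])
pix = np.array([[0.9, 0.15, 0.1],     # close in shape to class 0
                [0.2, 2.0, 0.2]])     # a scaled copy of class 1
labels = sam_classify(pix, refs)
print(labels)  # SAM is insensitive to overall brightness, so [0 1]
```

The angle criterion is what makes SAM robust to illumination scaling, and also what limits it relative to the ANN when class differences are not purely spectral-shape differences.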
The Safety Course Design and Operations of Composite Overwrapped Pressure Vessels (COPV)
NASA Technical Reports Server (NTRS)
Saulsberry, Regor; Prosser, William
2015-01-01
Following a commercial launch vehicle on-pad COPV (Composite Overwrapped Pressure Vessel) failure, a request was received by the NESC (NASA Engineering and Safety Center) on June 14, 2014. An assessment was approved on July 10, 2014, to develop and assess the capability of scanning eddy current (EC) nondestructive evaluation (NDE) methods for mapping thickness and inspecting for flaws. Existing methods could not identify thickness reduction from necking, and critical flaw detection was not possible with conventional dye penetrant (PT) methods, so sensitive EC scanning techniques were needed. Developmental methods existed but had not been fully developed, nor had the requisite capability assessment (i.e., a POD (Probability of Detection) study) been performed.
Dimov, Alexey V; Liu, Zhe; Spincemaille, Pascal; Prince, Martin R; Du, Jiang; Wang, Yi
2018-01-01
To develop quantitative susceptibility mapping (QSM) of bone using an ultrashort echo time (UTE) gradient echo (GRE) sequence for signal acquisition and a bone-specific effective transverse relaxation rate (R2*) to model water-fat MR signals for field mapping. Three-dimensional radial UTE data (echo times ≥ 40 μs) were acquired on a 3 Tesla scanner and fitted with a bone-specific signal model to map the chemical species and susceptibility field. Experiments were performed ex vivo on a porcine hoof and in vivo on healthy human subjects (n = 7). For water-fat separation, a bone-specific model assigning R2* decay mostly to water was compared with standard models that assign the same decay to both fat and water. In the ex vivo experiment, bone QSM was correlated with CT. Compared with standard models, the bone-specific R2* method significantly reduced errors in the fat fraction within the cortical bone in all tested data sets, leading to reduced artifacts in QSM. Good correlation was found between bone CT and QSM values in the porcine hoof (R² = 0.77). Bone QSM was successfully generated in all subjects. QSM of bone is feasible using a UTE and conventional echo time GRE acquisition with a bone-specific R2* signal model. Magn Reson Med 79:121-128, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
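A common single-fat-peak form of the multi-echo water-fat signal model referred to above can be written as follows; the notation is generic and not necessarily the paper's:

```latex
s(t_n) = \left( W\, e^{-R_{2,w}^{*} t_n} \;+\; F\, e^{-R_{2,f}^{*} t_n}\, e^{\,i 2\pi \Delta f_{\mathrm{fat}} t_n} \right) e^{\,i 2\pi f_B t_n}
```

Here $W$ and $F$ are the water and fat amplitudes at echo time $t_n$, $\Delta f_{\mathrm{fat}}$ is the fat chemical shift, and $f_B$ is the susceptibility-induced field offset that is mapped for QSM. Standard models constrain $R_{2,w}^{*} = R_{2,f}^{*}$; the bone-specific model described in the abstract instead assigns the $R_2^{*}$ decay mostly to the water component.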
Moelich, Erika Ilette; Muller, Magdalena; Joubert, Elizabeth; Næs, Tormod; Kidd, Martin
2017-09-01
Honeybush herbal tea is produced from the endemic South African Cyclopia species. Plant material subjected to a high-temperature oxidation step ("fermentation") forms the bulk of production. Production lags behind demand forcing tea merchants to use blends of available material to supply local and international markets. The distinct differences in the sensory profiles of the herbal tea produced from the different Cyclopia species require that special care is given to blending to ensure a consistent, high quality product. Although conventional descriptive sensory analysis (DSA) is highly effective in providing a detailed sensory profile of herbal tea infusions, industry requires a method that is more time- and cost-effective. Recent advances in sensory science have led to the development of rapid profiling methodologies. The question is whether projective mapping can successfully be used for the sensory characterisation of herbal tea infusions. Trained assessors performed global and partial projective mapping to determine the validity of this technique for the sensory characterisation of infusions of five Cyclopia species. Similar product configurations were obtained when comparing results of DSA and global and partial projective mapping. Comparison of replicate sessions showed RV coefficients >0.8. A similarity index, based on multifactor analysis, was calculated to determine assessor repeatability. Global projective mapping, demonstrated to be a valid method for providing a broad sensory characterisation of Cyclopia species, is thus suitable as a rapid quality control method of honeybush infusions. Its application by the honeybush industry could improve the consistency of the sensory profile of blended products. Copyright © 2017 Elsevier Ltd. All rights reserved.
SIDS-to-ADF File Mapping Manual
NASA Technical Reports Server (NTRS)
McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)
2002-01-01
The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team comprises members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-to-ADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. 
Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of universal software to read and write the data.
A high-resolution computational localization method for transcranial magnetic stimulation mapping.
Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro
2018-05-15
Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain makes it difficult to determine the exact localization of the stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing computed non-uniform current distributions in the brain over multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of the patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct brain stimulation and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of the so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by the proposed method and by direct electric stimulation (DES) were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was validated by the well-known position of the "hand-knob" in the non-affected hemisphere, and by a hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere. Copyright © 2018 Elsevier Inc. All rights reserved.
Coniferous forest classification and inventory using Landsat and digital terrain data
NASA Technical Reports Server (NTRS)
Franklin, J.; Logan, T. L.; Woodcock, C. E.; Strahler, A. H.
1986-01-01
Machine-processing techniques were used in a Forest Classification and Inventory System (FOCIS) procedure to extract and process tonal, textural, and terrain information from registered Landsat multispectral and digital terrain data. Using FOCIS as a basis for stratified sampling, the softwood timber volumes of the Klamath National Forest and Eldorado National Forest were estimated within standard errors of 4.8 and 4.0 percent, respectively. The accuracy of these large-area inventories is comparable to the accuracy yielded by use of conventional timber inventory methods, but, because of automation, the FOCIS inventories are more rapid (9-12 months compared to 2-3 years for conventional manual photointerpretation, map compilation and drafting, field sampling, and data processing) and are less costly.
A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.
2011-01-01
Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and cross-spectral matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
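The core of time-domain ANC is an adaptive filter that learns the coupling between a noise-only reference channel and the contamination in the primary channel, so that the residual approximates the desired signal. The following is a minimal LMS-based sketch with synthetic data; the filter length, step size, and coupling are illustrative and not taken from the paper.

```python
import numpy as np

# Sketch of time-domain adaptive noise cancellation with an LMS filter.
# A reference sensor measures noise correlated with the contamination in the
# primary channel; the filter learns that coupling, and the residual e
# approximates the desired signal.

rng = np.random.default_rng(0)
n = 4000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))              # desired source
noise = rng.standard_normal(n)                                 # background noise
primary = signal + np.convolve(noise, [0.7, 0.3], mode="same") # contaminated channel
reference = noise                                              # noise-only reference

taps, mu = 8, 0.01
w = np.zeros(taps)
e = np.zeros(n)
for i in range(taps, n):
    x = reference[i - taps:i][::-1]   # most recent reference samples first
    y = w @ x                         # filter's estimate of the coupled noise
    e[i] = primary[i] - y             # residual -> recovered signal
    w += 2 * mu * e[i] * x            # LMS weight update

err_before = np.mean((primary - signal) ** 2)
err_after = np.mean((e[n // 2:] - signal[n // 2:]) ** 2)  # after convergence
print(err_after < err_before)
```

Because the sinusoidal source is uncorrelated with the reference, the filter converges toward the noise-coupling response only, and the residual error after convergence is far below the original contamination level.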
1994 ASPRS/ACSM annual convention exposition. Volume 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-01-01
This report is Volume II of the papers presented at the joint 1994 convention of the American Society for Photogrammetry and Remote Sensing and the American Congress on Surveying and Mapping. Topic areas covered include the following: Data Base/GPS Issues; Survey Management Issues; Surveying Computations; Surveying Education; Digital Mapping; Global Change, EOS and NALC Issues; GPS Issues; Battelle Research in Remote Sensing and in GIS; Advanced Image Processing; GIS Issues; Surveying and Geodesy Issues; Water Resource Issues; Advanced Applications of Remote Sensing; Landsat Pathfinder I.
Kim, Seung-Cheol; Kim, Eun-Soo
2009-02-20
In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and the novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into the N-point redundancy map according to the number of the adjacent object points having the same 3D value. Based on this redundancy map, N-point principle fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, object points to be involved in calculation of the CGH pattern can be dramatically reduced and, as a result, an increase of computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
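The run-length grouping step described above can be illustrated with a toy scan line; the data layout is hypothetical, and the sketch shows only the encoding, not the fringe-pattern synthesis.

```python
# Toy sketch of the RLE step: adjacent object points sharing the same 3-D
# value are grouped into runs, so an N-point principal fringe pattern can
# replace N identical 1-point computations. The data layout is invented.

def run_length_encode(points):
    """points: sequence of hashable 3-D values along a scan line.
    Returns a list of (value, run_length) pairs."""
    runs = []
    for p in points:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [(v, n) for v, n in runs]

line = [(0, 0, 5), (0, 0, 5), (0, 0, 5), (1, 0, 5), (1, 0, 5)]
print(run_length_encode(line))  # [((0, 0, 5), 3), ((1, 0, 5), 2)]
```

Each run then indexes a precomputed N-point pattern in the look-up table, which is where the speed-up over point-by-point CGH computation comes from.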
ERIC Educational Resources Information Center
Thibodeau, Paul; Durgin, Frank H.
2008-01-01
Three experiments explored whether conceptual mappings in conventional metaphors are productive, by testing whether the comprehension of novel metaphors was facilitated by first reading conceptually related conventional metaphors. The first experiment, a replication and extension of Keysar et al. [Keysar, B., Shen, Y., Glucksberg, S., Horton, W.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virnstein, R.; Tepera, M.; Beazley, L.
1997-06-01
A pilot study is very briefly summarized in the article. The study tested the potential of multi-spectral digital imagery for discrimination of seagrass densities and species, algae, and bottom types. Imagery was obtained with the Compact Airborne Spectral Imager (casi) and two flight lines flown with hyper-spectral mode. The photogrammetric method used allowed interpretation of the highest quality product, eliminating limitations caused by outdated or poor quality base maps and the errors associated with transfer of polygons. Initial image analysis indicates that the multi-spectral imagery has several advantages, including sophisticated spectral signature recognition and classification, ease of geo-referencing, and rapid mosaicking.
Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan
2015-01-01
The conventional quality-guided (QG) phase unwrapping algorithm is difficult to apply to digital holographic microscopy because of its long execution time. In this paper, we present a threshold automatic selection hybrid phase unwrapping algorithm that combines the existing QG algorithm with the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by an automatically selected threshold, and then the FF and QG unwrapping algorithms are applied to the respective sub-maps. The feasibility of the proposed method is demonstrated by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.
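The partition step (not the unwrapping itself) can be sketched as follows. The gradient-variance quality measure and the use of the mean quality as the automatic threshold are assumptions standing in for the paper's specific choices.

```python
import numpy as np

# Sketch of the partition step only: derive a quality map from the wrapped
# phase (here, negative squared wrapped phase gradients), pick a threshold
# automatically (here, the mean quality -- a stand-in for the paper's rule),
# and split the map into regions destined for the fast flood-fill unwrapper
# and the careful quality-guided unwrapper.

def wrap(p):
    return (p + np.pi) % (2 * np.pi) - np.pi

def quality_map(phase):
    gx = wrap(np.diff(phase, axis=1, prepend=phase[:, :1]))
    gy = wrap(np.diff(phase, axis=0, prepend=phase[:1, :]))
    return -(gx ** 2 + gy ** 2)        # smoother phase -> higher quality

true_phase = np.linspace(0, 6 * np.pi, 64)[None, :] * np.ones((64, 1))
noisy = true_phase.copy()
noisy[40:, 40:] += np.random.default_rng(1).normal(0, 2.0, (24, 24))
q = quality_map(wrap(noisy))

threshold = q.mean()                   # automatic threshold selection
high_quality = q >= threshold          # -> flood-fill unwrapping
low_quality = ~high_quality            # -> quality-guided unwrapping
print("low-quality fraction in noisy corner:", low_quality[41:, 41:].mean())
```

The smooth ramp lands almost entirely in the high-quality sub-map, while the noisy corner is flagged for the slower QG pass, which is the intended division of labor.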
Deep learning-based depth estimation from a synthetic endoscopy image training set
NASA Astrophysics Data System (ADS)
Mahmood, Faisal; Durr, Nicholas J.
2018-03-01
Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.
NASA Astrophysics Data System (ADS)
Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava
2012-09-01
A novel conformal mapping based fractional order (FO) methodology is developed in this paper for tuning existing classical (integer order) Proportional Integral Derivative (PID) controllers, especially for sluggish and oscillatory second order systems. The conventional pole placement tuning via the Linear Quadratic Regulator (LQR) method is extended to open loop oscillatory systems as well. The locations of the open loop zeros of a fractional order PID (FOPID or PIλDμ) controller are approximated vis-à-vis an LQR-tuned conventional integer order PID controller, to achieve an equivalent integer order PID control system. This approach eases analog/digital realization of a FOPID controller via its integer order counterpart while preserving the advantages of the fractional order controller. It is shown that a decrease in the integro-differential operators of the FOPID/PIλDμ controller pushes the open loop zeros of the equivalent PID controller towards regions of greater damping, tracing a trajectory of the controller zeros and dominant closed loop poles termed the "M-curve". This phenomenon is used to design a two-stage tuning algorithm which significantly reduces the PID controller's effort compared with a single-stage LQR-based pole placement method at a desired closed loop damping and frequency.
Unsupervised learning on scientific ocean drilling datasets from the South China Sea
NASA Astrophysics Data System (ADS)
Tse, Kevin C.; Chiu, Hon-Chim; Tsang, Man-Yin; Li, Yiliang; Lam, Edmund Y.
2018-06-01
Unsupervised learning methods were applied to explore data patterns in multivariate geophysical datasets collected from ocean floor sediment core samples coming from scientific ocean drilling in the South China Sea. Compared to studies on similar datasets, but using supervised learning methods which are designed to make predictions based on sample training data, unsupervised learning methods require no a priori information and focus only on the input data. In this study, popular unsupervised learning methods including K-means, self-organizing maps, hierarchical clustering and random forest were coupled with different distance metrics to form exploratory data clusters. The resulting data clusters were externally validated with lithologic units and geologic time scales assigned to the datasets by conventional methods. Compact and connected data clusters displayed varying degrees of correspondence with existing classification by lithologic units and geologic time scales. K-means and self-organizing maps were observed to perform better with lithologic units while random forest corresponded best with geologic time scales. This study sets a pioneering example of how unsupervised machine learning methods can be used as an automatic processing tool for the increasingly high volume of scientific ocean drilling data.
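The workflow of unsupervised clustering followed by external validation can be sketched as follows. The two-feature data and lithologic labels are synthetic, and the simple cluster-purity score stands in for the external validation used in the study.

```python
import numpy as np

# Minimal sketch of the workflow: cluster multivariate measurements with
# K-means (no labels used), then externally validate the clusters against an
# independent classification -- here, hypothetical lithologic units.

def kmeans(X, k, iters=50):
    # farthest-point initialization keeps this sketch deterministic
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def purity(clusters, units):
    """External validation: fraction of samples belonging to their
    cluster's dominant lithologic unit."""
    total = sum(max(np.sum((clusters == c) & (units == u)) for u in set(units))
                for c in set(clusters))
    return total / len(clusters)

rng = np.random.default_rng(42)
clay = rng.normal([1.8, 40.0], 0.1, (30, 2))   # e.g. density, NGR counts
sand = rng.normal([2.4, 15.0], 0.1, (30, 2))
X = np.vstack([clay, sand])
units = np.array(["clay"] * 30 + ["sand"] * 30)

labels = kmeans(X, k=2)
p = purity(labels, units)
print(p)  # well-separated clusters recover the units exactly -> 1.0
```

Real drilling data are far less cleanly separated, which is why the study reports only varying degrees of correspondence between clusters and lithologic units or geologic time scales.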
MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery
Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.
2016-01-01
Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions: A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. 
The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation. PMID:27330239
Testing Beyond Words: Using Tests to Enhance Visuospatial Map Learning
ERIC Educational Resources Information Center
Carpenter, Shana K.; Pashler, Harold
2007-01-01
Psychological research shows that learning can be powerfully enhanced through testing, but this finding has so far been confined to memory tasks requiring verbal responses. We explored whether testing can enhance learning of visuospatial information in maps. Fifty subjects each studied 2 maps, one through conventional study, and the other through…
ERIC Educational Resources Information Center
Hsu, Hsiao-Ping; Tsai, Bor-Wen; Chen, Che-Ming
2018-01-01
Teaching high-school geomorphological concepts and topographic map reading entails many challenges. This research reports the applicability and effectiveness of Google Earth in teaching topographic map skills and geomorphological concepts, by a single teacher, in a one-computer classroom. Compared to learning via a conventional instructional…
Self-organized neural maps of human protein sequences.
Ferrán, E. A.; Pflugfelder, B.; Ferrara, P.
1994-01-01
We have recently described a method based on artificial neural networks to cluster protein sequences into families. The network was trained with Kohonen's unsupervised learning algorithm using, as inputs, the matrix patterns derived from the dipeptide composition of the proteins. We present here a large-scale application of that method to classify the 1,758 human protein sequences stored in the SwissProt database (release 19.0), whose lengths are greater than 50 amino acids. In the final 2-dimensional topologically ordered map of 15 x 15 neurons, proteins belonging to known families were associated with the same neuron or with neighboring ones. Also, as an attempt to reduce the time-consuming learning procedure, we compared 2 learning protocols: one of 500 epochs (100 SUN CPU-hours [CPU-h]), and another one of 30 epochs (6.7 CPU-h). A further reduction of learning-computing time, by a factor of about 3.3, with similar protein clustering results, was achieved using a matrix of 11 x 11 components to represent the sequences. Although network training is time-consuming, the classification of a new protein in the final ordered map is very fast (14.6 CPU-seconds). We also show a comparison between the artificial neural network approach and conventional methods of biosequence analysis. PMID:8019421
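The input encoding is straightforward to reproduce. Below is a sketch of the dipeptide-composition pattern; normalizing by the number of pairs is an assumption about the exact normalization used.

```python
# Sketch of the input encoding described above: a protein sequence is mapped
# to its dipeptide-composition matrix (frequencies of each ordered amino-acid
# pair), giving the fixed-size input pattern fed to the Kohonen map.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def dipeptide_composition(seq):
    """Return a 20x20 matrix of normalized dipeptide frequencies."""
    index = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    counts = [[0.0] * 20 for _ in range(20)]
    pairs = list(zip(seq, seq[1:]))
    for a, b in pairs:
        counts[index[a]][index[b]] += 1.0
    total = len(pairs) or 1
    return [[c / total for c in row] for row in counts]

m = dipeptide_composition("ACACA")
print(m[0][1], m[1][0])  # 'AC' occurs twice, 'CA' twice, of 4 pairs -> 0.5 0.5
```

This fixed 400-component representation is what makes sequences of arbitrary length comparable as inputs to a self-organizing map, and the paper's 11 x 11 variant is simply a coarser binning of the same idea.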
Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng
2009-11-01
The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented with multi-echo gradient echo (GRE) sequence using a fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat which contain temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of sample water:fat signal ratio on the accuracy of the temperature estimate is evaluated in a water-fat mixed phantom experiment with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.
2011-01-01
Introduction: Microtubule associated proteins (MAPs) endogenously regulate microtubule stabilization and have been reported as prognostic and predictive markers for taxane response. The microtubule stabilizer MAP-tau has shown conflicting results. We quantitatively assessed MAP-tau expression in two independent breast cancer cohorts to determine the prognostic and predictive value of this biomarker. Methods: MAP-tau expression was evaluated in the retrospective Yale University breast cancer cohort (n = 651) using tissue microarrays and also in the TAX 307 cohort, a clinical trial randomized for TAC versus FAC chemotherapy (n = 140), using conventional whole tissue sections. Expression was measured using the AQUA method for quantitative immunofluorescence. Scores were correlated with clinicopathologic variables, survival, and response to therapy. Results: Cox univariate analysis of the Yale cohort indicated improved overall survival (OS) with high MAP-tau expression (HR = 0.691, 95% CI = 0.489-0.974; P = 0.004). Kaplan-Meier analysis showed 10-year survival for 65% of patients with high MAP-tau expression compared to 52% with low expression (P = .006). In TAX 307, high expression was associated with significantly longer median time to tumor progression (TTP) regardless of treatment arm (33.0 versus 23.4 months, P = 0.010), with mean TTP of 31.2 months. Response rates did not differ by MAP-tau expression (P = 0.518) or by treatment arm (P = 0.584). Conclusions: Quantitative measurement of MAP-tau expression has prognostic value in both cohorts, with high expression associated with longer TTP and OS. Differences by treatment arm or response rate in low versus high MAP-tau groups were not observed, indicating that MAP-tau is not associated with response to taxanes and is not a useful predictive marker for taxane-based chemotherapy. PMID:21888627
Spatial characterization of the meltwater field from icebergs in the Weddell Sea.
Helly, John J; Kaufmann, Ronald S; Vernet, Maria; Stephenson, Gordon R
2011-04-05
We describe the results from a spatial cyberinfrastructure developed to characterize the meltwater field around individual icebergs and integrate the results with regional- and global-scale data. During the course of the cyberinfrastructure development, it became clear that we were also building an integrated sampling planning capability across multidisciplinary teams that provided greater agility in allocating expedition resources, resulting in new scientific insights. The cyberinfrastructure-enabled method is a complement to the conventional methods of hydrographic sampling, in which the ship provides a static platform on a station-by-station basis. We adapted a sea-floor mapping method to more rapidly characterize the sea surface geophysically and biologically. By jointly analyzing the multisource, continuously sampled biological, chemical, and physical parameters, using Global Positioning System time as the data fusion key, this surface-mapping method enables us to examine the relationship between the meltwater field of the iceberg and the larger-scale marine ecosystem of the Southern Ocean. Through geospatial data fusion, we are able to combine very fine-scale maps of dynamic processes with more synoptic but lower-resolution data from satellite systems. Our results illustrate the importance of spatial cyberinfrastructure in the overall scientific enterprise and identify key interfaces and sources of error that require improved controls for the development of future Earth observing systems as we move into an era of peta- and exascale, data-intensive computing.
Spatial characterization of the meltwater field from icebergs in the Weddell Sea
Helly, John J.; Kaufmann, Ronald S.; Vernet, Maria; Stephenson, Gordon R.
2011-01-01
We describe the results from a spatial cyberinfrastructure developed to characterize the meltwater field around individual icebergs and integrate the results with regional- and global-scale data. During the course of the cyberinfrastructure development, it became clear that we were also building an integrated sampling planning capability across multidisciplinary teams that provided greater agility in allocating expedition resources, resulting in new scientific insights. The cyberinfrastructure-enabled method is a complement to the conventional methods of hydrographic sampling, in which the ship provides a static platform on a station-by-station basis. We adapted a sea-floor mapping method to more rapidly characterize the sea surface geophysically and biologically. By jointly analyzing the multisource, continuously sampled biological, chemical, and physical parameters, using Global Positioning System time as the data fusion key, this surface-mapping method enables us to examine the relationship between the meltwater field of the iceberg and the larger-scale marine ecosystem of the Southern Ocean. Through geospatial data fusion, we are able to combine very fine-scale maps of dynamic processes with more synoptic but lower-resolution data from satellite systems. Our results illustrate the importance of spatial cyberinfrastructure in the overall scientific enterprise and identify key interfaces and sources of error that require improved controls for the development of future Earth observing systems as we move into an era of peta- and exascale, data-intensive computing. PMID:21444769
Predefined Redundant Dictionary for Effective Depth Maps Representation
NASA Astrophysics Data System (ADS)
Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi
2016-01-01
The multi-view video plus depth (MVD) video format consists of two components: texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires a compression process for storage and especially for transmission. Conventional codecs are efficient for compressing texture images but are poorly suited to the intrinsic properties of depth maps. Depth images are characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the position of object boundaries. Preserving these characteristics is important to enable high quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments confirm the effectiveness of the mixed dictionaries at producing sparse representations and their competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and offers an advantage over the ongoing 3D High Efficiency Video Coding compression standard, particularly at medium and high bitrates.
Parallel mapping of optical near-field interactions by molecular motor-driven quantum dots.
Groß, Heiko; Heil, Hannah S; Ehrig, Jens; Schwarz, Friedrich W; Hecht, Bert; Diez, Stefan
2018-04-30
In the vicinity of metallic nanostructures, absorption and emission rates of optical emitters can be modulated by several orders of magnitude [1,2]. Control of such near-field light-matter interaction is essential for applications in biosensing [3], light harvesting [4] and quantum communication [5,6] and requires precise mapping of optical near-field interactions, for which single-emitter probes are promising candidates [7-11]. However, currently available techniques are limited in terms of throughput, resolution and/or non-invasiveness. Here, we present an approach for the parallel mapping of optical near-field interactions with a resolution of <5 nm using surface-bound motor proteins to transport microtubules carrying single emitters (quantum dots). The deterministic motion of the quantum dots allows for the interpolation of their tracked positions, resulting in an increased spatial resolution and a suppression of localization artefacts. We apply this method to map the near-field distribution of nanoslits engraved into gold layers and find an excellent agreement with finite-difference time-domain simulations. Our technique can be readily applied to a variety of surfaces for scalable, nanometre-resolved and artefact-free near-field mapping using conventional wide-field microscopes.
A novel algorithm for thermal image encryption.
Hussain, Iqtadar; Anees, Amir; Algarni, Abdulmohsen
2018-04-16
Thermal images play a vital role at nuclear plants, power stations, forensic and biological research laboratories, and in petroleum product extraction, so their security is very important. Image data have unique features such as intensity, contrast, homogeneity, entropy, and correlation among pixels, which make image encryption trickier than other forms of encryption; conventional image encryption schemes find these features hard to handle. Cryptographers have therefore turned to attractive properties of chaotic maps, such as randomness and sensitivity, to build novel cryptosystems, and recently proposed image encryption techniques increasingly depend on the application of chaotic maps. This paper proposes an image encryption algorithm based on the Chebyshev chaotic map and substitution boxes derived from the S8 symmetric group of permutations. First, the parameters of the chaotic Chebyshev map are chosen as a secret key to confuse the plain image. Then, the plaintext image is encrypted using the substitution boxes and the Chebyshev map. This process yields a ciphertext image that is thoroughly confused and diffused. The outcomes of standard experiments, key sensitivity tests, and statistical analysis confirm that the proposed algorithm offers a safe and efficient approach for real-time image encryption.
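A minimal sketch of the Chebyshev-map confusion step (keystream generation plus XOR), assuming a toy byte sequence in place of image pixels and omitting the S8 substitution-box stage:

```python
import math

def chebyshev_keystream(x0, k, n):
    """Iterate the Chebyshev map x_{n+1} = cos(k * arccos(x_n)) on [-1, 1]
    and quantize each state to one byte. x0 and k act as the secret key."""
    x, out = x0, []
    for _ in range(n):
        # Clamp guards against rounding drifting just outside acos's domain.
        x = max(-1.0, min(1.0, math.cos(k * math.acos(x))))
        out.append(int(((x + 1.0) / 2.0) * 255.999) & 0xFF)
    return out

def xor_cipher(data, x0, k):
    """Toy confusion step: XOR pixel bytes with the chaotic keystream.
    (The paper additionally applies S8 substitution boxes; this sketch
    keeps only the Chebyshev part.)"""
    ks = chebyshev_keystream(x0, k, len(data))
    return bytes(b ^ s for b, s in zip(data, ks))

plain = bytes(range(16))                    # stand-in for image pixels
cipher = xor_cipher(plain, x0=0.3, k=4.0)
restored = xor_cipher(cipher, x0=0.3, k=4.0)
print(cipher.hex(), restored == plain)
```

Because XOR with the same keystream is its own inverse, encryption and decryption share one routine; sensitivity to x0 and k is what makes the keystream key-dependent.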
NASA Astrophysics Data System (ADS)
Rebich, S.
2003-12-01
The concept mapping technique has been proposed as a method for examining the evolving nature of students' conceptualizations of scientific concepts, and promises insight into a dimension of learning different from the one accessible through more conventional classroom testing techniques. The theory behind concept mapping is based on an assumption that knowledge acquisition is accomplished through "linking" of new information to an existing knowledge framework, and that meaningful (as opposed to arbitrary or verbatim) links allow for deeper understanding and conceptual change. Reflecting this theory, concept maps are constructed as a network of related concepts connected by labeled links that illustrate the relationship between the concepts. Two concepts connected by one such link make up a "proposition", the basic element of the concept map structure. In this paper, we examine the results of a pre- and post-test assessment program for an upper-division undergraduate geography course entitled "Mock Environmental Summit," which was part of a research project on assessment. Concept mapping was identified as a potentially powerful assessment tool for this course, as more conventional tools such as multiple-choice tests did not seem to provide a reliable indication of the learning students were experiencing as a result of the student-directed research, presentations, and discussions that make up a substantial portion of the course. The assessment program began at the beginning of the course with a one-hour training session during which students were introduced to the theory behind concept mapping, provided with instructions and guidance for constructing a concept map using the CMap software developed and maintained by the Institute for Human and Machine Cognition at the University of West Florida, and asked to collaboratively construct a concept map on a topic not related to the one to be assessed. 
This training session was followed by a 45-minute "pre-test" on the topic of global climate change, for which students were provided with a list of questions to guide their thoughts during the concept map construction. Following the pre-test, students were not exposed to further concept mapping until the end of the course, when they were asked to complete a "post-test" consisting of exactly the same task. In addition to a summary of our results, this paper presents an overview of available digital concept-mapping tools, proposed scoring techniques, and design principles to keep in mind when designing a concept-mapping assessment program. We also discuss our experience with concept map assessment, the insights it provided into the evolution in student understanding of global climate change that resulted from the course, and our ideas about the potential role of concept mapping in an overall assessment program for interdisciplinary and/or student-directed curricula.
Waldron, Anna M.; Begg, Douglas J.; de Silva, Kumudika; Purdie, Auriol C.; Whittington, Richard J.
2015-01-01
Pathogenic mycobacteria are difficult to culture, requiring specialized media and a long incubation time, and have complex and exceedingly robust cell walls. Mycobacterium avium subsp. paratuberculosis (MAP), the causative agent of Johne's disease, a chronic wasting disease of ruminants, is a typical example. Culture of MAP from the feces and intestinal tissues is a commonly used test for confirmation of infection. Liquid medium offers greater sensitivity than solid medium for detection of MAP; however, support for the BD Bactec 460 system commonly used for this purpose has been discontinued. We previously developed a new liquid culture medium, M7H9C, to replace it, with confirmation of growth reliant on PCR. Here, we report an efficient DNA isolation and quantitative PCR methodology for the specific detection and confirmation of MAP growth in liquid culture media containing egg yolk. The analytical sensitivity was at least 10^4-fold higher than a commonly used method involving ethanol precipitation of DNA and conventional PCR; this may be partly due to the addition of a bead-beating step to manually disrupt the cell wall of the mycobacteria. The limit of detection, determined using pure cultures of two different MAP strains, was 100 to 1,000 MAP organisms/ml. The diagnostic accuracy was confirmed using a panel of cattle fecal (n = 54) and sheep fecal and tissue (n = 90) culture samples. This technique is directly relevant for diagnostic laboratories that perform MAP cultures but may also be applicable to the detection of other species, including M. avium and M. tuberculosis. PMID:25609725
Plain, Karren M; Waldron, Anna M; Begg, Douglas J; de Silva, Kumudika; Purdie, Auriol C; Whittington, Richard J
2015-04-01
Pathogenic mycobacteria are difficult to culture, requiring specialized media and a long incubation time, and have complex and exceedingly robust cell walls. Mycobacterium avium subsp. paratuberculosis (MAP), the causative agent of Johne's disease, a chronic wasting disease of ruminants, is a typical example. Culture of MAP from the feces and intestinal tissues is a commonly used test for confirmation of infection. Liquid medium offers greater sensitivity than solid medium for detection of MAP; however, support for the BD Bactec 460 system commonly used for this purpose has been discontinued. We previously developed a new liquid culture medium, M7H9C, to replace it, with confirmation of growth reliant on PCR. Here, we report an efficient DNA isolation and quantitative PCR methodology for the specific detection and confirmation of MAP growth in liquid culture media containing egg yolk. The analytical sensitivity was at least 10^4-fold higher than a commonly used method involving ethanol precipitation of DNA and conventional PCR; this may be partly due to the addition of a bead-beating step to manually disrupt the cell wall of the mycobacteria. The limit of detection, determined using pure cultures of two different MAP strains, was 100 to 1,000 MAP organisms/ml. The diagnostic accuracy was confirmed using a panel of cattle fecal (n=54) and sheep fecal and tissue (n=90) culture samples. This technique is directly relevant for diagnostic laboratories that perform MAP cultures but may also be applicable to the detection of other species, including M. avium and M. tuberculosis. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Soft bilateral filtering volumetric shadows using cube shadow maps
Ali, Hatam H.; Sunar, Mohd Shahrizal; Kolivand, Hoshang
2017-01-01
Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while conserving the crispness of boundaries. This research presents a new technique for generating high-quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive computation time, the proposed technique adopts downsampling when calculating the ray march. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is to reduce the number of sample points used in evaluating light scattering and to introduce bilateral interpolation to improve volumetric shadows, significantly removing the inherent deficiencies of shadow maps. The technique produces soft, high-quality volumetric shadows with good performance, showing its potential for interactive applications. PMID:28632740
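The edge-preserving weighting behind bilateral interpolation can be sketched in one dimension; the noisy step signal, window size, and sigmas below are illustrative assumptions:

```python
import numpy as np

def bilateral_1d(signal, sigma_s=3.0, sigma_r=0.2, radius=6):
    """Weighted mean over a window, where each neighbour's weight combines
    spatial closeness (sigma_s) with similarity in value (sigma_r), so the
    averaging effectively stops at sharp shadow boundaries."""
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-0.5 * (offsets / sigma_s) ** 2)
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)
        similarity = np.exp(-0.5 * ((signal[idx] - signal[i]) / sigma_r) ** 2)
        w = spatial * similarity
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# Noisy step: two shadow "plateaus" separated by a crisp boundary.
rng = np.random.default_rng(0)
step = np.where(np.arange(100) < 50, 0.0, 1.0)
noisy = step + 0.05 * rng.standard_normal(100)
smooth = bilateral_1d(noisy)
print(abs(smooth - step).max())   # noise damped, edge still sharp
```

The range term suppresses contributions from across the boundary, so the plateaus are smoothed while the shadow edge stays crisp, which is exactly the property wanted for shadow transitions.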
NASA Astrophysics Data System (ADS)
Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong
2015-03-01
Boring logs are widely used in geological field studies since they describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables its user to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of the corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions over large borehole databases for the other modules. A field survey was also carried out using a database of more than 100,000 borehole records.
Snow survey and vegetation growth in high mountains (Swiss Alps)
NASA Technical Reports Server (NTRS)
Haefner, H. (Principal Investigator)
1973-01-01
The author has identified the following significant results. A method for mapping snow over large areas was developed, combining the possibilities of a Quantimet (QTM 72) to evaluate the exact density level of the snow cover for each individual image (or a selected section of the photo) with the higher resolution of photographic techniques. The density level established on the monitor by visual control is used as a reference for the exposure time of a lithographic film, producing a clear tonal separation of all snow- and ice-covered areas from uncovered land in black and white. The data is projected onto special maps at 1:500,000 or 1:100,000 showing the contour lines and the hydrographic features only. The areal extent of the snow cover may be calculated directly with the QTM 720 or on the map. Bands 4 and 5 provide the most accurate results for mapping snow. Using all four bands, a separation of an old melting snow cover from a new one is possible. Regional meteorological studies combining ERTS-1 imagery and conventional sources describe the synoptic evolution of meteorological systems over the Alps.
Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald
2018-06-04
A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most of them are limited to global analysis due to their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at a low accelerating voltage identify phases at the micron scale. In comparisons between an annular and a conventional SDD, the aSDD performed better under optimized conditions, making the phase map a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.
A fast method for optical simulation of flood maps of light-sharing detector modules
Shi, Han; Du, Dong; Xu, JianFeng; Moses, William W.; Peng, Qiyu
2016-01-01
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons need to be tracked. We present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up the photon tracking in detector modules constructed with polished finish and specular reflector materials. We simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was more than 200–600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and using specular reflector materials. PMID:27660376
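The core of a simplified specular reflectance model is the deterministic mirror bounce, which avoids the stochastic surface sampling that makes general-purpose toolkits slow for these modules. A minimal sketch (the vectors are illustrative; this is not the paper's full light-tracking algorithm):

```python
import numpy as np

def specular_reflect(d, n):
    """Mirror a photon direction d about a surface with unit normal n:
    r = d - 2 (d . n) n. In a simplified specular model each boundary
    hit produces exactly one deterministic reflected ray."""
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# Photon travelling down-right hits a horizontal crystal face (normal +y):
r = specular_reflect([0.6, -0.8, 0.0], [0.0, 1.0, 0.0])
print(r)   # -> [0.6, 0.8, 0.0]: the vertical component flips
```

Because each hit yields a single ray rather than a sampled distribution, tracking a photon through a polished, specular module reduces to a short chain of such reflections.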
Plenoptic mapping for imaging and retrieval of the complex field amplitude of a laser beam.
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C
2016-12-26
The plenoptic sensor has been developed to sample complicated beam distortions produced by turbulence in the low atmosphere (deep turbulence or strong turbulence) with high density data samples. In contrast with the conventional Shack-Hartmann wavefront sensor, which utilizes all the pixels under each lenslet of a micro-lens array (MLA) to obtain one data sample indicating sub-aperture phase gradient and photon intensity, the plenoptic sensor uses each illuminated pixel (with significant pixel value) under each MLA lenslet as a data point for local phase gradient and intensity. To characterize the working principle of a plenoptic sensor, we propose the concept of plenoptic mapping and its inverse mapping to describe the imaging and reconstruction process respectively. As a result, we show that the plenoptic mapping is an efficient method to image and reconstruct the complex field amplitude of an incident beam with just one image. With a proof of concept experiment, we show that adaptive optics (AO) phase correction can be instantaneously achieved without going through a phase reconstruction process under the concept of plenoptic mapping. The plenoptic mapping technology has high potential for applications in imaging, free space optical (FSO) communication and directed energy (DE) where atmospheric turbulence distortion needs to be compensated.
The role of change data in a land use and land cover map updating program
Milazzo, Valerie A.
1981-01-01
An assessment of current land use and a process for identifying and measuring change are needed to evaluate trends and problems associated with the use of our Nation's land resources. The U. S. Geological Survey is designing a program to maintain the currency of its land use and land cover maps and digital data base and to provide data on changes in our Nation's land use and land cover. Ways to produce and use change data in a map updating program are being evaluated. A dual role for change data is suggested. For users whose applications require specific polygon data on land use change, showing the locations of all individual category changes and detailed statistical data on these changes can be provided as byproducts of the map-revision process. Such products can be produced quickly and inexpensively either by conventional mapmaking methods or as specialized output from a computerized geographic information system. Secondly, spatial data on land use change are used directly for updating existing maps and statistical data. By incorporating only selected change data, maps and digital data can be updated in an efficient and timely manner without the need for complete and costly detailed remapping and redigitization of polygon data.
A fast method for optical simulation of flood maps of light-sharing detector modules
Shi, Han; Du, Dong; Xu, JianFeng; ...
2015-09-03
Optical simulation at the detector module level is highly desired for Positron Emission Tomography (PET) system design. Commonly used simulation toolkits such as GATE are not efficient in the optical simulation of detector modules with complicated light-sharing configurations, where a vast number of photons need to be tracked. Here, we present a fast approach based on a simplified specular reflectance model and a structured light-tracking algorithm to speed up the photon tracking in detector modules constructed with polished finish and specular reflector materials. We also simulated conventional block detector designs with different slotted light guide patterns using the new approach and compared the outcomes with those from GATE simulations. While the two approaches generated comparable flood maps, the new approach was more than 200–600 times faster. The new approach has also been validated by constructing a prototype detector and comparing the simulated flood map with the experimental flood map. The experimental flood map has nearly uniformly distributed spots similar to those in the simulated flood map. In conclusion, the new approach provides a fast and reliable simulation tool for assisting in the development of light-sharing-based detector modules with a polished surface finish and using specular reflector materials.
Mapping the Antarctic grounding line with CryoSat-2 radar altimetry
NASA Astrophysics Data System (ADS)
Bamber, J. L.; Dawson, G. J.
2017-12-01
The grounding line, where grounded ice begins to float, is the boundary at which the ocean has the greatest influence on the ice sheet. Its position and dynamics are critical in assessing the stability of the ice sheet, for mass budget calculations, and as an input into numerical models. The most reliable approaches to map the grounding line remotely are to measure the limit of tidal flexure of the ice shelf using differential synthetic aperture radar interferometry (DInSAR) or ICESat repeat-track measurements. However, these methods are yet to provide satisfactory spatial and temporal coverage of the whole of the Antarctic grounding zone. It has not been possible to use conventional radar altimetry to map the limit of tidal flexure of the ice shelf because it performs poorly near breaks in slope, commonly associated with the grounding zone. The synthetic aperture radar interferometric (SARIn) mode of CryoSat-2 performs better over steeper margins of the ice sheet and allows us to achieve this. The SARIn mode combines "delay Doppler" processing with a cross-track interferometer, and enables us to use elevations based on the first return (point of closest approach, or POCA) and "swath processed" elevations derived from the time-delayed waveform beyond the first return to significantly improve coverage. Here, we present a new method to map the limit of tidal motion from a combination of POCA and swath data. We test this new method on the Siple Coast region of the Ross Ice Shelf, and the mapped grounding line is in good agreement with previous observations from DInSAR and ICESat measurements. There is, however, an approximately constant seaward offset between these methods and ours, which we believe is due to the poorer precision of CryoSat-2. This new method has improved the coverage of the grounding zone across the Siple Coast and can be applied to the rest of Antarctica.
TRANSVERSE MERCATOR MAP PROJECTION OF THE SPHEROID USING TRANSFORMATION OF THE ELLIPTIC INTEGRAL
NASA Technical Reports Server (NTRS)
Wallis, D. E.
1994-01-01
This program produces the Gauss-Kruger (constant meridional scale) Transverse Mercator Projection which is used to construct the U.S. Army's Universal Transverse Mercator (UTM) Grid System. The method is capable of mapping the entire northern hemisphere of the earth (and, by symmetry of the projection, the entire earth) accurately with respect to a single principal meridian, and is therefore mathematically insensitive to proximity either to the pole or the equator, or to the departure of the meridian from the central meridian. This program could be useful to any map-making agency. The program overcomes the limitations of the "series" method (Thomas, 1952) presently used to compute the UTM Grid, specifically its complicated derivation, non-convergence near the pole, lack of rigorous error analysis, and difficulty of obtaining increased accuracy. The method is based on the principle that the parametric colatitude of a point is the amplitude of the Elliptic Integral of the 2nd Kind, and this (irreducible) integral is the desired projection. Thus, a specification of the colatitude leads, most directly (and with strongest motivation) to a formulation in terms of amplitude. The most difficult problem to be solved was setting up the method so that the Elliptic Integral of the 2nd Kind could be used elsewhere than on the principal meridian. The point to be mapped is specified in conventional geographic coordinates (geodetic latitude and longitudinal departure from the principal meridian). Using the colatitude (complement of latitude) and the longitude (departure), the initial step is to map the point to the North Polar Stereographic Projection. The closed-form, analytic function that coincides with the North Polar Stereographic Projection of the spheroid along the principal meridian is put into a Newton-Raphson iteration that solves for the tangent of one half the parametric colatitude, generalized to the complex plane. 
Because the parametric colatitude is the amplitude of the (irreducible) Incomplete Elliptic Integral of the 2nd Kind, the value for the tangent of one half the amplitude of the Elliptic Integral of the 2nd Kind is now known. The elliptic integral may now be computed by any desired method, and the result will be the Gauss-Kruger Transverse Mercator Projection. This result is a consequence of the fact that these steps produce a computation of real distance along the image (in the plane) of the principal meridian, and an analytic continuation of the distance at points that don't lie on the principal meridian. The elliptic-integral method used by this program is one of the "transformations of the elliptic integral" (similar to Landen's Transformation), appearing in standard handbooks of mathematical functions. Only elementary transcendental functions are utilized. The program output is the conventional (as used by the mapping agencies) cartesian coordinates, in meters, of the Transverse Mercator projection. The origin is at the intersection of the principal meridian and the equator. This FORTRAN77 program was developed on an IBM PC series computer equipped with an Intel Math Coprocessor. Double precision complex arithmetic and transcendental functions are needed to support a projection accuracy of 1 mm. Because such functions are not usually part of the FORTRAN library, the needed functions have been explicitly programmed and included in the source code. The program was developed in 1989. TRANSVERSE MERCATOR MAP PROJECTION OF THE SPHEROID USING TRANSFORMATIONS OF THE ELLIPTIC INTEGRAL is a copyrighted work with all copyright vested in NASA.
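The key quantity, distance along the meridian expressed as an Incomplete Elliptic Integral of the 2nd Kind in the parametric latitude, can be checked numerically. A stdlib-only sketch using WGS84 constants and Simpson's rule (an illustrative evaluation, not the transformation-of-the-elliptic-integral method the program itself uses):

```python
import math

# WGS84 constants (an illustrative spheroid; the method is general).
a = 6378137.0                    # semi-major axis (m)
f = 1.0 / 298.257223563          # flattening
e2 = f * (2.0 - f)               # first eccentricity squared

def meridian_arc(beta, steps=20000):
    """Arc length from the equator along a meridian, with beta the
    parametric latitude: s(beta) = a * integral_0^beta sqrt(1 - e2 cos(u)^2) du,
    an Incomplete Elliptic Integral of the 2nd Kind (composite Simpson)."""
    g = lambda u: math.sqrt(1.0 - e2 * math.cos(u) ** 2)
    h = beta / steps
    total = g(0.0) + g(beta)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * g(i * h)
    return a * total * h / 3.0

quarter = meridian_arc(math.pi / 2)
print(quarter)   # ~10001965.73 m, the WGS84 quarter meridian
```

For beta = 90 degrees this reproduces the well-known WGS84 quarter meridian of about 10,001,965.73 m, confirming that the elliptic integral in the parametric (co)latitude is exactly the meridian distance the projection preserves.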
A compilation of K-Ar-ages for southern California
Miller, Fred K.; Morton, Douglas M.; Morton, Janet L.; Miller, David M.
2014-01-01
The purpose of this report is to make available a large body of conventional K-Ar ages for granitic, volcanic, and metamorphic rocks collected in southern California. Although one interpretive map is included, the report consists primarily of a systematic listing, without discussion or interpretation, of published and unpublished ages that may be of value in future regional and other geologic studies. From 1973 to 1979, 468 rock samples from southern California were collected for conventional K-Ar dating under a regional geologic mapping project of Southern California (predecessor of the Southern California Areal Mapping Project). Most samples were collected and dated between 1974 and 1977. For 61 samples (13 percent of those collected), either they were discarded for varying reasons, or the original collection data were lost. For the remaining samples, 518 conventional K-Ar ages are reported here; coexisting mineral pairs were dated from many samples. Of these K-Ar ages, 225 are previously unpublished, and identified as such in table 1. All K-Ar ages are by conventional K-Ar analysis; no 40Ar/39Ar dating was done. Subsequent to the rock samples collected in the 1970s and reported here, 33 samples were collected and 38 conventional K-Ar ages determined under projects directed at (1) characterization of the Mesozoic and Cenozoic igneous rocks in and on both sides of the Transverse Ranges and (2) clarifying the Mesozoic and Cenozoic tectonics of the eastern Mojave Desert. Although previously published (Beckerman et al., 1982), another eight samples and 11 conventional K-Ar ages are included here, because they augment those completed under the previous two projects.
An AST-ELM Method for Eliminating the Influence of Charging Phenomenon on ECT.
Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao
2017-12-09
Electrical capacitance tomography (ECT) is a promising technology for imaging permittivity distributions in multiphase flow. To reduce the effect of the charging phenomenon on ECT measurement, an improved extreme learning machine method combined with adaptive soft-thresholding (AST-ELM) is presented and studied for image reconstruction. The method provides a nonlinear mapping model between capacitance values and medium distributions through machine learning rather than an electromagnetic sensitivity mechanism. Both simulation and experimental tests are carried out to validate the performance of the presented method, and reconstructed images are evaluated by relative error and correlation coefficient. The results illustrate that the image reconstruction accuracy of the proposed AST-ELM method is greatly improved over that of conventional methods in the presence of a charging object.
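The learning-based mapping at the core of the method can be illustrated with a plain extreme learning machine: a fixed random hidden layer followed by a least-squares readout. This sketch omits the paper's adaptive soft-thresholding step, and the input/output sizes below are invented stand-ins for capacitance vectors and image pixels.

```python
# Minimal extreme learning machine (ELM) sketch: random hidden layer plus a
# least-squares readout. The paper's adaptive soft-thresholding (AST) step is
# not reproduced; array sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, n_hidden=64):
    """Fit output weights beta so that sigmoid(X W + b) @ beta approximates T."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy stand-in: 66 "capacitance" inputs -> 812 "pixel" outputs (sizes invented).
X = rng.standard_normal((200, 66))
T = rng.standard_normal((200, 812))
W, b, beta = elm_fit(X, T)
Y = elm_predict(X, W, b, beta)
```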
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Zhye, E-mail: yin@ge.com; De Man, Bruno; Yao, Yangyang
Purpose: Traditionally, 2D radiographic preparatory scan images (scout scans) are used to plan diagnostic CT scans. However, a 3D CT volume with a full 3D organ segmentation map could provide superior information for customized scan planning and other purposes. A practical challenge is to design the volumetric scout acquisition and processing steps to provide good image quality (at least good enough to enable 3D organ segmentation) while delivering a radiation dose similar to that of the conventional 2D scout. Methods: The authors explored various acquisition methods, scan parameters, postprocessing methods, and reconstruction methods through simulation and cadaver data studies to achieve an ultralow dose 3D scout while simultaneously reducing the noise and maintaining the edge strength around the target organ. Results: In a simulation study, the 3D scout with the proposed acquisition, preprocessing, and reconstruction strategy provided a similar level of organ segmentation capability as a traditional 240 mAs diagnostic scan, based on noise and normalized edge strength metrics. At the same time, the proposed approach delivers only 1.25% of the dose of a traditional scan. In a cadaver study, the authors' pictorial-structures based organ localization algorithm successfully located the major abdominal-thoracic organs from the ultralow dose 3D scout obtained with the proposed strategy. Conclusions: The authors demonstrated that images with a similar degree of segmentation capability (interpretability) as conventional dose CT scans can be achieved with an ultralow dose 3D scout acquisition and suitable postprocessing. Furthermore, the authors applied these techniques to real cadaver CT scans with a CTDI dose level of less than 0.1 mGy and successfully generated a 3D organ localization map.
Passive optical coherence elastography using a time-reversal approach (Conference Presentation)
NASA Astrophysics Data System (ADS)
Nguyen, Thu-Mai; Zorgani, Ali; Fink, Mathias; Catheline, Stefan; Boccara, A. Claude
2017-02-01
Background and motivation - Conventional Optical Coherence Elastography (OCE) methods consist of launching controlled shear waves in tissues and measuring their propagation speed using an ultrafast imaging system. However, the use of external shear sources limits transfer to clinical practice, especially for ophthalmic applications. Here, we propose a fully passive OCE method for ocular tissues based on time-reversal of natural vibrations. Methods - Experiments were first conducted on a tissue-mimicking phantom containing a stiff inclusion. Pulsatile motions were reproduced by stimulating the phantom surface with two piezoelectric actuators excited asynchronously at low frequencies (50-500 Hz). The resulting random displacements were tracked at 190 frames/s using spectral-domain optical coherence tomography (SD-OCT), with a 10x5 µm² resolution over a 3x2 mm² field-of-view (lateral x depth). The shear wavefield was numerically refocused (i.e. time-reversed) at each pixel using noise-correlation algorithms. The focal spot size yields the shear wavelength. Results were validated by comparison with shear wave speed measurements obtained from conventional active OCE. In vivo tests were then conducted on anesthetized rats. Results - The stiff inclusion of the phantom was delineated on the wavelength map, with a wavelength ratio between the inclusion and the background (1.6) consistent with the speed ratio (1.7). This validates the wavelength measurements. In vivo, natural shear waves were detected in the eye, and wavelength maps of the anterior segment showed a clear elastic contrast between the cornea, the sclera and the iris. Conclusion - We validated the time-reversal approach for passive elastography using SD-OCT imaging at low frame rate. This method could accelerate the clinical transfer of ocular elastography.
Jeong, Woo Chul; Chauhan, Munish; Sajib, Saurav Z K; Kim, Hyung Joong; Serša, Igor; Kwon, Oh In; Woo, Eung Je
2014-09-07
Magnetic Resonance Electrical Impedance Tomography (MREIT) is an MRI method that enables mapping of internal conductivity and/or current density via measurements of magnetic flux density signals. MREIT measures only the z-component of the magnetic flux density B = (Bx, By, Bz) induced by external current injection. Noise in the measured Bz complicates recovery of magnetic flux density maps, resulting in lower quality conductivity and current-density maps. We present a new method for more accurate measurement of the spatial gradient of the magnetic flux density (∇Bz). The method relies on the use of multiple radio-frequency receiver coils and an interleaved multi-echo pulse sequence that acquires multiple sampling points within each repetition time. The noise level of the measured magnetic flux density Bz depends on the decay rate of the signal magnitude, the injection current duration, and the coil sensitivity map. The proposed method uses three key steps. The first step determines a representative magnetic flux density gradient from the multiple receiver coils by using a weighted combination and by denoising the measured noisy data. The second step optimizes the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇Bz. The third step removes a random noise component from the recovered ∇Bz by solving an elliptic partial differential equation in a region of interest. Numerical simulation experiments using a cylindrical phantom model with included regions of low MRI signal-to-noise ('defects') verified the proposed method. Experiments using a real phantom that included three different kinds of anomalies demonstrated that the proposed method reduced the noise level of the measured magnetic flux density. The quality of the conductivity maps recovered using denoised ∇Bz data showed that the proposed method reduced the conductivity noise level by a factor of 3-4 in each anomaly region in comparison to the conventional method.
Irenge, Léonid M; Walravens, Karl; Govaerts, Marc; Godfroid, Jacques; Rosseels, Valérie; Huygen, Kris; Gala, Jean-Luc
2009-04-14
A triplex real-time PCR (TRT-PCR) assay was developed to ensure rapid and reliable detection of Mycobacterium avium subsp. paratuberculosis (Map) in faecal samples and to allow routine detection of Map in farmed livestock and wildlife species. The TRT-PCR assay was designed using the IS900, ISMAP02 and f57 molecular targets. Specificity of the TRT-PCR was first confirmed on a panel of control mycobacterial Map and non-Map strains and on faecal samples from Map-negative cows (n=35) and Map-positive cows (n=20). The TRT-PCR assay was compared with direct examination after Ziehl-Neelsen (ZN) staining and with culture on 197 faecal samples collected serially from five calves experimentally exposed to Map over a 3-year period during the sub-clinical phase of the disease. The data showed good agreement between culture and TRT-PCR (kappa score = 0.63), with a TRT-PCR limit of detection of 2.5 × 10² microorganisms/g of faeces spiked with Map. ZN agreement with TRT-PCR was poor (kappa = 0.02). Sequence analysis of IS900 amplicons from three samples positive for IS900 alone confirmed the true Map positivity of these samples. The highly specific IS900 amplification therefore suggests that each sample positive for IS900 alone from experimentally exposed animals was a true Map-positive specimen. In this controlled experimental setting, the TRT-PCR was rapid, specific and displayed very high sensitivity for Map detection in faecal samples compared with conventional methods.
MIND Demons for MR-to-CT Deformable Image Registration in Image-Guided Spine Surgery.
Reaungamornrat, S; De Silva, T; Uneri, A; Wolinsky, J-P; Khanna, A J; Kleinszig, G; Vogt, S; Prince, J L; Siewerdsen, J H
2016-02-27
Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields as used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and in clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with the capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for the MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT.
The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation.
Mapping the Conjugate Gradient Algorithm onto High Performance Heterogeneous Computers
2014-05-01
Matrix Storage Formats According to J. Dongarra (Dongarra 2000), the efficiency of most iterative methods, such as CG, can be attributed to the ... (val_h = a_ij) ⇒ (col_h = j). The ptr integer vector is of length n + 1 and contains the index in val where each matrix row starts. For example, the ... first nonzero element of matrix row m is found at index ptr_m of val. By convention, ptr_n+1 ≡ nz + 1. Notice that (a_ij) ⇒ (ptr_i ≤ j < ptr_i+1) for all i. An
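The compressed-sparse-row scheme described in this snippet (val, col, and a length-n+1 ptr vector with ptr_n+1 = nz + 1) can be sketched directly. This is an illustrative reconstruction using the 1-based Fortran-style convention from the text; the function names are ours.

```python
# Sketch of the CSR ("compressed sparse row") storage scheme described above,
# using the 1-based convention ptr[n] == nz + 1 from the text.
def to_csr(A):
    """Convert a dense matrix (list of lists) to CSR arrays val, col, ptr."""
    val, col, ptr = [], [], [1]           # 1-based indices, as in the text
    for row in A:
        for j, a in enumerate(row, start=1):
            if a != 0:
                val.append(a)
                col.append(j)
        ptr.append(len(val) + 1)          # next row starts here; ptr[n] == nz+1
    return val, col, ptr

def csr_matvec(val, col, ptr, x):
    """y = A @ x using only the CSR arrays (1-based)."""
    n = len(ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for h in range(ptr[i] - 1, ptr[i + 1] - 1):  # 0-based positions in val
            y[i] += val[h] * x[col[h] - 1]
    return y

A = [[4, 0, 1],
     [0, 3, 0],
     [2, 0, 5]]
val, col, ptr = to_csr(A)
# val = [4, 1, 3, 2, 5], col = [1, 3, 2, 1, 3], ptr = [1, 3, 4, 6]
```

A sparse matrix-vector product, the kernel inside CG, then touches only the nz stored entries.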
NASA Technical Reports Server (NTRS)
Brockmann, C. E. (Principal Investigator); Ayllon, R. B.
1973-01-01
The author has identified the following significant results. Using ERTS-1 imagery, it is possible to delimit major lithological units, folds, lineaments, faults, and, to a lesser degree, unconformities. In the morphological aspect, the images clearly show the relief necessary for geological interpretation. The ERTS-1 images are important for the preparation of the geological and tectonic map of Bolivia at 1:1,000,000 scale, provided conventional methods of work are used as a base.
NASA Technical Reports Server (NTRS)
Gustafson, T. D.; Adams, M. S.
1973-01-01
Research was initiated to use aerial photography as an investigative tool in studies that are part of an intensive aquatic ecosystem research effort at Lake Wingra, Madison, Wisconsin. It is anticipated that photographic techniques would supply information about the growth and distribution of littoral macrophytes with efficiency and accuracy greater than conventional methods.
Clinical Study on Acute Pancreatitis in Pregnancy in 26 Cases
Qihui, Cheng; Xiping, Zhang; Xianfeng, Ding
2012-01-01
Aim. This paper investigated the pathogenesis and treatment strategies of acute pancreatitis (AP) in pregnancy. Methods. We retrospectively analyzed the characteristics, auxiliary diagnosis, treatment strategies, and clinical outcomes of 26 patients with AP in pregnancy. Results. All patients were ultimately cured. (1) Nine of 22 patients with mild acute pancreatitis (MAP) elected termination of pregnancy because of unsatisfactory therapeutic efficacy or at their own request. (2) Four patients had severe acute pancreatitis (SAP); 2 underwent uterine incision delivery, one of whom also received cholecystectomy, debridement and drainage of pancreatic necrosis, and percutaneous jejunostomy. One SAP case involved fetal death; after induced abortion, the patient underwent extraction of bile duct stones and drainage of the abdominal cavity. The remaining case, with hyperlipidemic pancreatitis, was managed with induced abortion and hemofiltration. Conclusions. Conventional therapy is the first choice for MAP in pregnancy. Beyond conventional therapy, pregnancy should be terminated as early as possible in patients with SAP. Removal of biliary calculi and drainage should be considered for acute biliary pancreatitis. Lipid-lowering treatment should be applied in hyperlipidemic pancreatitis, with hemofiltration when necessary. PMID:23213326
A study of the utilization of ERTS-1 data from the Wabash River Basin
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator)
1974-01-01
The author has identified the following significant results. The identification and area estimation of crops experiment tested the usefulness of ERTS data for crop survey and produced results indicating that crop statistics could be obtained from ERTS imagery. Soil association mapping results showed that strong relationships exist between ERTS data derived maps and conventional soil maps. Urban land use analysis experiment results indicate potential for accurate gross land use mapping. Water resources mapping demonstrated the feasibility of mapping water bodies using ERTS imagery.
Mapping of forested wetland: use of Seasat radar images to complement conventional sources (USA).
Place, J.L.
1985-01-01
Distinguishing forested wetland from dry forest using aerial photographs is difficult because photographs often do not reveal the presence of water below tree canopies. Radar images obtained by the Seasat satellite reveal forested wetland as highly reflective patterns on the coastal plain between Maryland and Florida. Seasat radar images may therefore complement aerial photographs for compiling maps of wetland. A test with experienced photointerpreters revealed that interpretation accuracy was significantly higher when using Seasat radar images than when using only conventional sources.
Plouff, Donald
1998-01-01
Computer programs were written in the Fortran language to process and display gravity data with locations expressed in geographic coordinates. The programs and associated processes have been tested for gravity data in an area of about 125,000 square kilometers in northwest Nevada, southeast Oregon, and northeast California. This report discusses the geographic aspects of data processing. Utilization of the programs begins with application of a template (printed in PostScript format) to transfer locations obtained with Global Positioning Systems to and from field maps and includes a 5-digit geographic-based map naming convention for field maps. Computer programs, with source codes that can be copied, are used to display data values (printed in PostScript format) and data coverage, insert data into files, extract data from files, shift locations, test for redundancy, and organize data by map quadrangles. It is suggested that 30-meter Digital Elevation Models needed for gravity terrain corrections and other applications should be accessed in a file search by using the USGS 7.5-minute map name as a file name; for example, file '40117_B8.DEM' contains elevation data for the map with a southeast corner at lat 40° 07′ 30″ N. and lon 117° 52′ 30″ W.
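The 5-digit naming convention can be sketched as a small function. This is a hypothetical reconstruction inferred from the single example in the report ('40117_B8.DEM' for a southeast corner at lat 40° 07′ 30″ N, lon 117° 52′ 30″ W); the function name and the row/column rules are our assumptions, not the report's code.

```python
# Hypothetical sketch of the 7.5-minute quadrangle naming convention inferred
# from the example above; rows A-H are assumed to count 7.5-minute cells
# northward and columns 1-8 westward within each 1-degree block.
def quad_name(lat_se_deg, lon_se_deg):
    """Build the 5-digit map name from the southeast-corner coordinates."""
    lat_min = (lat_se_deg - int(lat_se_deg)) * 60   # minutes above whole degree
    lon_min = (lon_se_deg - int(lon_se_deg)) * 60
    row = "ABCDEFGH"[int(lat_min // 7.5)]
    col = int(lon_min // 7.5) + 1
    return f"{int(lat_se_deg)}{int(lon_se_deg)}_{row}{col}"

print(quad_name(40 + 7.5 / 60, 117 + 52.5 / 60))  # -> 40117_B8
```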
Interoperability in planetary research for geospatial data analysis
NASA Astrophysics Data System (ADS)
Hare, Trent M.; Rossi, Angelo P.; Frigeri, Alessandro; Marmo, Chiara
2018-01-01
For more than a decade there has been a push in the planetary science community to support interoperable methods for accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (e.g., craters, volcanoes) or any data that can be tied to the surface of a planetary body (including moons, comets or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized geospatial image formats, geologic mapping conventions, U.S. Federal Geographic Data Committee (FGDC) cartographic and metadata standards, and notably on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter includes defined standards such as the OGC Web Mapping Services (simple image maps), Web Map Tile Services (cached image tiles), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they can be just as valuable for the planetary domain. Another initiative, called VESPA (Virtual European Solar and Planetary Access), will marry several of the above geoscience standards and astronomy-based standards as defined by the International Virtual Observatory Alliance (IVOA). This work outlines the current state of interoperability initiatives in use or in the process of being researched within the planetary geospatial community.
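To make the Web Mapping Service concrete, here is a sketch of a GetMap request assembled per the OGC WMS 1.3.0 specification. The server URL and layer name are placeholders, not a real planetary endpoint; only the query-parameter names come from the standard.

```python
# Sketch of an OGC WMS 1.3.0 GetMap request; endpoint and layer are invented.
from urllib.parse import urlencode

BASE = "https://example.org/wms"          # placeholder endpoint
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "global_mosaic",            # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",            # WMS 1.3.0 axis order for EPSG:4326
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
}
url = BASE + "?" + urlencode(params)
print(url)
```

Note the WMS 1.3.0 quirk that EPSG:4326 uses latitude-first axis order in BBOX, a frequent interoperability pitfall.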
NASA Technical Reports Server (NTRS)
Head, James W.; Huffman, J. N.; Forsberg, A. S.; Hurwitz, D. M.; Basilevsky, A. T.; Ivanov, M. A.; Dickson, J. L.; Kumar, P. Senthil
2008-01-01
We are currently investigating new technological developments in computer visualization and analysis in order to assess their importance and utility in planetary geological analysis and mapping [1,2]. Last year we reported on the range of technologies available and on our application of these to various problems in planetary mapping [3]. In this contribution we focus on the application of these techniques and tools to Venus geological mapping at the 1:5M quadrangle scale. In our current Venus mapping projects we have utilized and tested the various platforms to understand their capabilities and assess their usefulness in defining units, establishing stratigraphic relationships, mapping structures, reaching consensus on interpretations and producing map products. We are specifically assessing how computer visualization display qualities (e.g., level of immersion, stereoscopic vs. monoscopic viewing, field of view, large vs. small display size, etc.) influence performance on scientific analysis and geological mapping. We have been exploring four different environments: 1) conventional desktops (DT), 2) semi-immersive Fishtank VR (FT) (i.e., a conventional desktop with head-tracked stereo and 6DOF input), 3) tiled wall displays (TW), and 4) fully immersive virtual reality (IVR) (e.g., "Cave Automatic Virtual Environment," or Cave system). Formal studies demonstrate that fully immersive Cave environments are superior to desktop systems for many tasks [e.g., 4].
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation-maximization algorithm seeking the maximum-likelihood solution, leads to noise that increases with iteration. The maximum a posteriori (MAP) estimate removes this divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary used to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to that of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those of the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
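The dictionary-learning ingredient of such a prior can be sketched with scikit-learn: patches are sparse-coded over a dictionary learned from training data. This stands in for the paper's training procedure only; the MAP reconstruction loop itself is not shown, and the patch data here are synthetic.

```python
# Sketch of the dictionary-learning step behind a DL sparse prior: learn atoms
# from training patches, then sparse-code new patches over them. Synthetic
# data; scikit-learn stands in for the paper's own training procedure.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
patches = rng.random((500, 36))                  # 500 toy 6x6 "training" patches
patches -= patches.mean(axis=1, keepdims=True)   # remove per-patch DC level

dico = MiniBatchDictionaryLearning(n_components=50, alpha=0.1,
                                   random_state=0).fit(patches)
D = dico.components_                             # 50 learned atoms x 36 pixels

code = sparse_encode(patches[:10], D, algorithm="omp", n_nonzero_coefs=5)
recon = code @ D                                 # sparse approximation of patches
```

In a MAP objective, the prior then penalizes the distance between image patches and their sparse approximations over D.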
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yi, E-mail: yiguo@usc.edu; Zhu, Yinghua; Lingala, Sajan Goud
Purpose: To clinically evaluate a highly accelerated T1-weighted dynamic contrast-enhanced (DCE) MRI technique that provides high spatial resolution and whole-brain coverage via undersampling and constrained reconstruction with multiple sparsity constraints. Methods: Conventional (rate-2 SENSE) and experimental DCE-MRI (rate-30) scans were performed 20 minutes apart in 15 brain tumor patients. The conventional clinical DCE-MRI had voxel dimensions 0.9 × 1.3 × 7.0 mm³ and FOV 22 × 22 × 4.2 cm³; the experimental DCE-MRI had voxel dimensions 0.9 × 0.9 × 1.9 mm³ and broader coverage 22 × 22 × 19 cm³. Temporal resolution was 5 s for both protocols. Time-resolved images and blood-brain barrier permeability maps were qualitatively evaluated by two radiologists. Results: The experimental DCE-MRI scans showed no loss of qualitative information in any of the cases, while achieving substantially higher spatial resolution and whole-brain spatial coverage. Average qualitative scores (from 0 to 3) were 2.1 for the experimental scans and 1.1 for the conventional clinical scans. Conclusions: The proposed DCE-MRI approach provides clinically superior image quality with higher spatial resolution and coverage than currently available approaches. These advantages may allow comprehensive permeability mapping in the brain, which is especially valuable in the setting of large lesions or multiple lesions spread throughout the brain.
Conventions and workflows for using Situs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wriggers, Willy, E-mail: wriggers@biomachina.org
2012-04-01
Recent developments of the Situs software suite for multi-scale modeling are reviewed. Typical workflows and conventions encountered during processing of biophysical data from electron microscopy, tomography or small-angle X-ray scattering are described. Situs is a modular program package for the multi-scale modeling of atomic resolution structures and low-resolution biophysical data from electron microscopy, tomography or small-angle X-ray scattering. This article provides an overview of recent developments in the Situs package, with an emphasis on workflows and conventions that are important for practical applications. The modular design of the programs facilitates scripting in the bash shell that allows specific programs to be combined in creative ways that go beyond the original intent of the developers. Several scripting-enabled functionalities, such as flexible transformations of data type, the use of symmetry constraints or the creation of two-dimensional projection images, are described. The processing of low-resolution biophysical maps in such workflows follows not only first principles but often relies on implicit conventions. Situs conventions related to map formats, resolution, correlation functions and feature detection are reviewed and summarized. The compatibility of the Situs workflow with CCP4 conventions and programs is discussed.
Enhanced Positive-Contrast Visualization of Paramagnetic Contrast Agents Using Phase Images
Mills, Parker H.; Ahrens, Eric T.
2009-01-01
Iron oxide–based MRI contrast agents are increasingly being used to noninvasively track cells, target molecular epitopes, and monitor gene expression in vivo. Detecting regions of contrast agent accumulation can be challenging if resulting contrast is subtle relative to endogenous tissue hypointensities. A postprocessing method is presented that yields enhanced positive-contrast images from the phase map associated with T2*-weighted MRI data. As examples, the method was applied to an agarose gel phantom doped with superparamagnetic iron-oxide nanoparticles and in vivo and ex vivo mouse brains inoculated with recombinant viruses delivering transgenes that induce overexpression of paramagnetic ferritin. Overall, this approach generates images that exhibit a 1- to 8-fold improvement in contrast-to-noise ratio in regions where paramagnetic agents are present compared to conventional magnitude images. This approach can be used in conjunction with conventional T2* pulse sequences, requires no prescans or increased scan time, and can be applied retrospectively to previously acquired data. PMID:19780169
Fourier-based linear systems description of free-breathing pulmonary magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Capaldi, D. P. I.; Svenningsen, S.; Cunningham, I. A.; Parraga, G.
2015-03-01
Fourier decomposition of free-breathing pulmonary magnetic resonance imaging (FDMRI) was recently piloted as a way to provide rapid quantitative pulmonary maps of ventilation and perfusion without the use of exogenous contrast agents. This method exploits fast pulmonary MRI acquisition of free-breathing proton (1H) pulmonary images and non-rigid registration to compensate for changes in position and shape of the thorax associated with breathing. In this way, ventilation imaging using conventional MRI systems can be undertaken, but there has been no systematic evaluation of fundamental image quality measurements based on linear systems theory. We investigated the performance of free-breathing pulmonary ventilation imaging using a Fourier-based linear systems description of each operation required to generate FDMRI ventilation maps. Twelve subjects with chronic obstructive pulmonary disease (COPD) or bronchiectasis underwent pulmonary function tests and MRI. Non-rigid registration was used to co-register the temporal series of pulmonary images. Pulmonary voxel intensities were aligned along a time axis and discrete Fourier transforms were performed on the periodic signal intensity pattern to generate frequency spectra. We determined the signal-to-noise ratio (SNR) of the FDMRI ventilation maps using a conventional approach (SNRC) and using the Fourier-based description (SNRF). Mean SNR was 4.7 ± 1.3 for subjects with bronchiectasis and 3.4 ± 1.8 for COPD subjects (p > .05). SNRF was significantly different from SNRC (p < .01). SNRF was approximately 50% of SNRC, suggesting that the conventional approach overestimates the SNR relative to the linear-systems description.
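The per-voxel Fourier-decomposition step described above can be sketched in a few lines: registered image intensities are stacked along time and a discrete Fourier transform yields a spectrum at each voxel, whose breathing-frequency peak carries the ventilation signal. The series below is synthetic; the frame rate, array sizes, and frequency are illustrative only.

```python
# Sketch of per-voxel Fourier decomposition of a registered image time series.
# Synthetic data: every voxel carries a "breathing" oscillation at 0.25 Hz.
import numpy as np

fps = 4.0                                  # frames per second (illustrative)
t = np.arange(64) / fps
series = np.zeros((64, 8, 8))              # time x rows x cols
series += np.sin(2 * np.pi * 0.25 * t)[:, None, None]

spectra = np.abs(np.fft.rfft(series, axis=0))        # spectrum at each voxel
freqs = np.fft.rfftfreq(series.shape[0], d=1 / fps)
peak = freqs[np.argmax(spectra[1:, 4, 4]) + 1]       # skip the DC bin
print(peak)                                          # -> 0.25
```

In FDMRI the amplitude at this breathing-frequency bin, mapped voxel by voxel, forms the ventilation map.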
NASA Astrophysics Data System (ADS)
Yazıcı, Birsen; Son, Il-Young; Cagri Yanik, H.
2018-05-01
This paper introduces a novel radar interferometry approach based on the Doppler synthetic aperture radar (Doppler-SAR) paradigm. Conventional SAR interferometry relies on wideband transmitted waveforms to obtain high range resolution. Topography of a surface is directly related to the range difference between two antennas configured at different positions. Doppler-SAR is an imaging modality that uses ultra-narrowband continuous waves (UNCW). It takes advantage of the high-resolution Doppler information provided by UNCWs to form high-resolution SAR images. We introduce the theory of Doppler-SAR interferometry, derive an interferometric phase model, and develop the equations of height mapping. Unlike conventional SAR interferometry, we show that the topography of a scene is related to the difference in Doppler frequency between two antennas configured at different velocities. While conventional SAR interferometry uses range, Doppler, and Doppler due to interferometric phase in height mapping, Doppler-SAR interferometry uses Doppler, Doppler-rate, and Doppler-rate due to interferometric phase. We demonstrate our theory in numerical simulations. Doppler-SAR interferometry offers the advantages of long-range, robust, environmentally friendly operation; low-power, low-cost, lightweight systems suitable for low-payload platforms, such as micro-satellites; and passive applications using sources of opportunity transmitting UNCW.
Prediction of iron oxide contents using diffuse reflectance spectroscopy
NASA Astrophysics Data System (ADS)
Marques, José, Jr.; Arantes Camargo, Livia
2015-04-01
Determining soil iron oxides using conventional analysis is relatively unfeasible when large areas are mapped, with the aim of characterizing spatial variability. Diffuse reflectance spectroscopy (DRS) is rapid, less expensive, non-destructive and sometimes more accurate than conventional analysis. Furthermore, this technique allows the simultaneous characterization of many soil attributes with agronomic and environmental relevance. This study aims to assess the DRS capability to predict iron oxides content -hematite and goethite - , characterizing their spatial variability in soils of Brazil. Soil samples collected from an 800-hectare area were scanned in the visible and near-infrared spectral range. Moreover, chemometric calibration was obtained through partial least-squares regression (PLSR). Then, spatial distribution maps of the attributes were constructed using predicted values from calibrated models through geostatistical methods. The studied area presented soils with varied contents of iron oxides as examples for the Oxisols and Entisols. In the spectra of each soil is observed that the reflectance decreases with the content of iron oxides present in the soil. In soils with a high content of iron oxides can be observed more pronounced concavities between 380 and 1100 nm which are characteristic of the presence of these oxides. In soils with higher reflectance it were observed concavity characteristics due to the presence of kaolinite, in agreement with the low iron contents of those soils. The best accuracy of prediction models [residual prediction deviation (RPD) = 1.7] was obtained for goethite within the visible region (380-800 nm), and for hematite (RPD = 2.0) within the visible near infrared (380-2300 nm). The maps of goethite and hematite predicted showed the spatial distribution pattern similar to the maps of clay and iron extracted by dithionite-citrate-bicarbonate, being consistent with the iron oxide contents of soils present in the study area. 
These results confirm the value of DRS in the mapping of iron oxides in large areas at detailed scale.
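The RPD figures quoted above (1.7 for goethite, 2.0 for hematite) are the standard ratio of the reference data's standard deviation to the prediction error. A minimal sketch of the metric in Python (the function name and sample data are illustrative, not from the study):

```python
import math

def rpd(reference, predicted):
    """Residual prediction deviation: SD of reference values / RMSEP.
    In soil spectroscopy, RPD around 1.5-2 is usually read as fair
    and above 2 as good predictive ability."""
    n = len(reference)
    mean_ref = sum(reference) / n
    sd = math.sqrt(sum((r - mean_ref) ** 2 for r in reference) / (n - 1))
    rmsep = math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)
    return sd / rmsep
```

A model whose errors are small relative to the spread of the reference data scores high; a constant bias of 0.5 on data spanning 1-5, for example, still yields an RPD above 3.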
Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps
NASA Astrophysics Data System (ADS)
Gundogdu, Ismail Bulent
2017-01-01
Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting, may not be adequate. Multivariate geostatistical methods can be more effective, especially when secondary variables are included, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. In addition to linear regression, inverse square distance and ordinary co-kriging (OCK) have been used and compared to each other. Elevation, slope, and aspect data for each station have also been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
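As a baseline against which OCK is compared, inverse-distance weighting can be sketched in a few lines; the abstract's "inverse square distance" corresponds to `power=2` below (station coordinates and precipitation values are hypothetical):

```python
def idw(stations, target, power=2):
    """Inverse distance weighting: each station's value is weighted by
    1 / d**power, so nearer stations dominate the estimate."""
    num = den = 0.0
    for x, y, value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value  # target coincides with a station
        w = d2 ** (-power / 2)
        num += w * value
        den += w
    return num / den

# Two stations, 10 cm and 20 cm mean precipitation; midpoint gets the average.
estimate = idw([(0, 0, 10.0), (2, 0, 20.0)], (1, 0))
```

Unlike co-kriging, this estimator ignores both spatial correlation structure and secondary variables such as elevation or aspect, which is precisely why the multivariate geostatistical approach can reduce errors.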
Buda, Alessandro; Papadia, Andrea; Zapardiel, Ignacio; Vizza, Enrico; Ghezzi, Fabio; De Ponti, Elena; Lissoni, Andrea Alberto; Imboden, Sara; Diestro, Maria Dolores; Verri, Debora; Gasparri, Maria Luisa; Bussi, Beatrice; Di Martino, Giampaolo; de la Noval, Begoña Diaz; Mueller, Michael; Crivellaro, Cinzia
2016-09-01
The credibility of sentinel lymph node (SLN) mapping is becoming increasingly established in cervical cancer. We aimed to assess the sensitivity of SLN biopsy in terms of detection rate and bilateral mapping in women with cervical cancer by comparing technetium-99 radiocolloid (Tc-99(m)) and blue dye (BD) versus fluorescence mapping with indocyanine green (ICG). Data of patients with cervical cancer stage 1A2 to 1B1 from 5 European institutions were retrospectively reviewed. All centers used a laparoscopic approach with the same intracervical dye injection. Detection rate and bilateral mapping of ICG were compared, respectively, with results obtained by standard Tc-99(m) with BD. Overall, 76 (53%) of 144 women underwent preoperative SLN mapping with radiotracer and intraoperative BD, whereas 68 (47%) of 144 patients underwent mapping using intraoperative ICG. The detection rate of SLN mapping was 96% and 100% for Tc-99(m) with BD and ICG, respectively. Bilateral mapping was achieved in 98.5% for ICG and 76.3% for Tc-99(m) with BD; this difference was statistically significant (p < 0.0001). The fluorescence SLN mapping with ICG achieved a significantly higher detection rate and bilateral mapping compared to the standard radiocolloid and BD technique in women with early stage cervical cancer. Nodal staging with an intracervical injection of ICG is accurate, safe, and reproducible in patients with cervical cancer. Before replacing lymphadenectomy completely, the additional value of fluorescence SLN mapping on both perioperative morbidity and survival should be explored and confirmed by ongoing controlled trials.
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji
2004-06-01
Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate how the image reconstruction algorithm and the interval of measurement points affect the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profile for source-detector pairs. In one-dimensional analysis, the reconstruction method has advantages over the mapping method when the interval of measurement points is less than 12 mm. The effect of overlapping spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional topographic image obtained with a larger interval of measurement points. Near-infrared topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.
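The contrast between the mapping and reconstruction methods can be illustrated with a toy one-dimensional forward model: each source-detector pair measures a sensitivity-weighted sum of local absorption changes, the mapping method simply assigns that measurement to the pair's midpoint, and the reconstruction method inverts the overlap. The sensitivity values below are invented for illustration, not taken from the paper's Monte Carlo simulation:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Rows = source-detector pairs, columns = positions along the head surface;
# overlapping spatial sensitivity profiles (hypothetical values).
A = [[0.6, 0.3, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
x_true = [0.0, 1.0, 0.0]       # absorption change localized at the centre
y = [sum(a * x for a, x in zip(row, x_true)) for row in A]  # measurements

mapped = y                     # "mapping": measurement assigned to midpoint -> blurred
recon = solve(A, y)            # "reconstruction": invert the sensitivity overlap
```

The mapped image spreads the localized change across neighbouring positions, while the reconstruction recovers it, mirroring the paper's finding that overlapping sensitivity profiles are what the reconstruction method exploits.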
Susceptibility-based functional brain mapping by 3D deconvolution of an MR-phase activation map.
Chen, Zikuan; Liu, Jingyu; Calhoun, Vince D
2013-05-30
The underlying source of T2*-weighted magnetic resonance imaging (T2*MRI) for brain imaging is magnetic susceptibility (denoted by χ). T2*MRI outputs a complex-valued MR image consisting of magnitude and phase information. Recent research has shown that both the magnitude and the phase images are morphologically different from the source χ, primarily due to 3D convolution, and that the source χ can be reconstructed from complex MR images by computed inverse MRI (CIMRI). Thus, we can obtain a 4D χ dataset from a complex 4D MR dataset acquired from a brain functional MRI study by repeating CIMRI to reconstruct 3D χ volumes at each timepoint. Because the reconstructed χ is a more direct representation of neuronal activity than the MR image, we propose a method for χ-based functional brain mapping, which is numerically characterised by a temporal correlation map of χ responses to a stimulus task. Under the linear imaging conditions used for T2*MRI, we show that the χ activation map can be calculated from the MR phase map by CIMRI. We validate our approach using numerical simulations and Gd-phantom experiments. We also analyse real data from a finger-tapping visuomotor experiment and show that the χ-based functional mapping provides additional activation details (in the form of positive and negative correlation patterns) beyond those generated by conventional MR-magnitude-based mapping. Copyright © 2013 Elsevier B.V. All rights reserved.
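The "temporal correlation map of χ responses" amounts to correlating each voxel's reconstructed χ time course with the task time course, with positive and negative correlations forming the two activation patterns the abstract mentions. A minimal sketch (the voxel series and on/off paradigm below are synthetic, not data from the study):

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

task = [0, 0, 1, 1, 0, 0, 1, 1]  # hypothetical on/off stimulus paradigm
chi_series = {
    "voxel_pos": [0.1, 0.0, 0.9, 1.1, 0.1, 0.0, 1.0, 0.9],  # follows the task
    "voxel_neg": [1.0, 0.9, 0.1, 0.0, 1.1, 1.0, 0.0, 0.1],  # anti-correlated
}
# The χ-based activation map: one correlation value per voxel.
corr_map = {v: pearson(s, task) for v, s in chi_series.items()}
```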
Parallel imaging of knee cartilage at 3 Tesla.
Zuo, Jin; Li, Xiaojuan; Banerjee, Suchandrima; Han, Eric; Majumdar, Sharmila
2007-10-01
To evaluate the feasibility and reproducibility of quantitative cartilage imaging with parallel imaging at 3T and to determine the impact of the acceleration factor (AF) on morphological and relaxation measurements. An eight-channel phased-array knee coil was employed for conventional and parallel imaging on a 3T scanner. The imaging protocol consisted of a T2-weighted fast spin echo (FSE), a 3D-spoiled gradient echo (SPGR), a custom 3D-SPGR T1rho, and a 3D-SPGR T2 sequence. Parallel imaging was performed with an array spatial sensitivity technique (ASSET). The left knees of six healthy volunteers were scanned with both conventional and parallel imaging (AF = 2). Morphological parameters and relaxation maps from the parallel imaging method (AF = 2) showed results comparable with the conventional method. The intraclass correlation coefficients (ICC) of the two methods for cartilage volume, mean cartilage thickness, T1rho, and T2 were 0.999, 0.977, 0.964, and 0.969, respectively, while demonstrating excellent reproducibility. No significant measurement differences were found when AF reached 3, despite the lower signal-to-noise ratio (SNR). The study demonstrated that parallel imaging can be applied to current knee cartilage quantification at AF = 2 without degrading measurement accuracy or reproducibility while effectively reducing scan time. Shorter imaging times can be achieved with higher AF at the cost of SNR. (c) 2007 Wiley-Liss, Inc.
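The ICC values reported (0.999, 0.977, 0.964, 0.969) quantify agreement between the conventional and accelerated measurements. The abstract does not state which ICC form was used, so as an illustrative assumption the sketch below implements the one-way random-effects ICC(1,1):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1).
    rows = subjects, columns = repeated measurements of the same quantity.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between the two methods gives ICC = 1; disagreement within subjects pulls the value down.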
A computer-oriented system for assembling and displaying land management information
Elliot L. Amidon
1964-01-01
Maps contain information basic to land management planning. By transforming conventional map symbols into numbers which are punched into cards, the land manager can have a computer assemble and display information required for a specific job. He can let a computer select information from several maps, combine it with such nonmap data as treatment cost or benefit per...
Environmental psychology: Mapping landscape meanings for ecosystem management
Daniel R. Williams; Michael E. Patterson
1999-01-01
An intellectual map is a good starting point for any effort to integrate research on the human dimensions of ecosystem management. We must remember going into such exercises, however, that every map maker imposes a certain point of view, sense of order, or set of conventions in the effort to represent the world. Just as there are competing ways to divide the landscape...
NASA Astrophysics Data System (ADS)
Bhattarai, Arjun; Wai, Nyunt; Schweiss, Rüdiger; Whitehead, Adam; Scherer, Günther G.; Ghimire, Purna C.; Nguyen, Tam D.; Hng, Huey Hoon
2017-08-01
Uniform flow distribution through the porous electrodes in a flow battery cell is very important for reducing Ohmic and mass transport polarization. A segmented cell approach can be used to obtain in-situ information on flow behaviour through local voltage or current mapping. Lateral flow of current within the thick felts in the flow battery can hamper the interpretation of the data. In this study, a new method of segmenting a conventional flow cell is introduced which, for the first time, splits up both the porous felt and the current collector. This dual segmentation results in higher resolution and a distinct separation of voltages between flow inlet and outlet. To study the flow behaviour of an undivided felt, monitoring the open-circuit voltage (OCV) is found to be more reliable than voltage or current mapping during charging and discharging. Our approach to segmentation is simple and applicable to any cell size.
Wang, Jimin; Askerka, Mikhail; Brudvig, Gary W.; ...
2017-01-12
Understanding structure–function relations in photosystem II (PSII) is important for the development of biomimetic photocatalytic systems. X-ray crystallography, computational modeling, and spectroscopy have played central roles in elucidating the structure and function of PSII. Recent breakthroughs in femtosecond X-ray crystallography offer the possibility of collecting diffraction data from the X-ray free electron laser (XFEL) before radiation damage of the sample, thereby overcoming the main challenge of conventional X-ray diffraction methods. However, the interpretation of XFEL data from PSII intermediates is challenging because of the issues regarding data-processing, uncertainty on the precise positions of light oxygen atoms next to heavy metal centers, and different kinetics of the S-state transition in microcrystals compared to solution. Here, we summarize recent advances and outstanding challenges in PSII structure–function determination with emphasis on the implementation of quantum mechanics/molecular mechanics techniques combined with isomorphous difference Fourier maps, direct methods, and high-resolution spectroscopy.
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images can be simultaneously pictured by various sensors. However, there are many scenarios where no one sensor can give the complete picture. Image fusion is an important approach to solve this problem and produces a single image which preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they reveal the relative importance of the data points' contribution to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed by using a series of multiscale support value filters, which are obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments were undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of quantitative fusion evaluation indexes, such as quality of visual information (Q(AB/F)), mutual information, etc.
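The "filling zeros" construction of the multiscale filters is the standard à-trous trick for undecimated transforms: the basic filter is dilated for each coarser level by inserting zeros between its taps instead of downsampling the image. A sketch of the dilation step (the three-tap filter is illustrative, not the actual support value filter deduced from the mapped LS-SVM):

```python
def atrous(filt, level):
    """Dilate a filter for level j of an undecimated (a-trous) analysis
    by inserting 2**level - 1 zeros between adjacent taps."""
    gap = 2 ** level - 1
    out = []
    for i, tap in enumerate(filt):
        out.append(tap)
        if i < len(filt) - 1:
            out.extend([0] * gap)
    return out
```

Because the image is never decimated, every level stays at full resolution, which is what distinguishes this approach from the Laplacian pyramid and critically sampled wavelet transforms it is compared against.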
Song, Pengfei; Macdonald, Michael C.; Behler, Russell H.; Lanning, Justin D.; Wang, Michael H.; Urban, Matthew W.; Manduca, Armando; Zhao, Heng; Callstrom, Matthew R.; Alizad, Azra; Greenleaf, James F.; Chen, Shigao
2014-01-01
Two-dimensional (2D) shear wave elastography presents 2D quantitative shear elasticity maps of tissue, which are clinically useful for both focal lesion detection and diffuse disease diagnosis. Realization of 2D shear wave elastography on conventional ultrasound scanners, however, is challenging due to the low tracking pulse-repetition-frequency (PRF) of these systems. While some clinical and research platforms support software beamforming and plane wave imaging with high PRF, the majority of current clinical ultrasound systems do not have the software beamforming capability, which presents a critical challenge for translating the 2D shear wave elastography technique from laboratory to clinical scanners. To address this challenge, this paper presents a Time Aligned Sequential Tracking (TAST) method for shear wave tracking on conventional ultrasound scanners. TAST takes advantage of the parallel beamforming capability of conventional systems and realizes high-PRF shear wave tracking by sequentially firing tracking vectors and aligning shear wave data in the temporal direction. The Comb-push Ultrasound Shear Elastography (CUSE) technique was used to simultaneously produce multiple shear wave sources within the field-of-view (FOV) to enhance shear wave signal-to-noise-ratio (SNR) and facilitate robust reconstruction of 2D elasticity maps. TAST and CUSE were realized on a conventional ultrasound scanner (the General Electric LOGIQ E9). A phantom study showed that the shear wave speed measurements from the LOGIQ E9 were in good agreement with the values measured from other 2D shear wave imaging technologies. An inclusion phantom study showed that the LOGIQ E9 had comparable performance to the Aixplorer (Supersonic Imagine) in terms of bias and precision in measuring different sized inclusions. 
Finally, in vivo case analysis of a breast with a malignant mass, and a liver from a healthy subject demonstrated the feasibility of using the LOGIQ E9 for in vivo 2D shear wave elastography. These promising results indicate that the proposed technique can enable the implementation of 2D shear wave elastography on conventional ultrasound scanners and potentially facilitate wider clinical applications with shear wave elastography. PMID:25643079
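Shear wave speed in such studies is commonly estimated from the wave's time of flight across lateral tracking positions: a line is fit to arrival time versus position, and the inverse slope is the speed. A minimal sketch (this regression estimator and the sample numbers are illustrative, not the scanners' internal algorithm):

```python
def shear_wave_speed(positions_mm, arrival_ms):
    """Least-squares fit of arrival time (ms) vs lateral position (mm);
    the inverse slope gives the shear wave speed. mm/ms equals m/s."""
    n = len(positions_mm)
    mx = sum(positions_mm) / n
    my = sum(arrival_ms) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(positions_mm, arrival_ms))
             / sum((x - mx) ** 2 for x in positions_mm))
    return 1.0 / slope

# A wave crossing 1 mm every 0.5 ms travels at 2 m/s (soft-tissue range).
speed = shear_wave_speed([0, 1, 2, 3], [0.0, 0.5, 1.0, 1.5])
```

The tracking PRF limits how finely `arrival_ms` can be sampled, which is why the high-PRF tracking that TAST recovers on conventional hardware matters for accurate speed maps.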
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-07
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min-1 · ml-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
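The one-tissue compartment model used above gives the tissue time-activity curve as K1 times the plasma input convolved with exp(-k2·t). A discrete sketch using a rectangular-rule convolution (the spillover terms the paper includes are omitted here, and the input and rate constants are illustrative):

```python
import math

def one_tissue_tac(cp, dt, K1, k2):
    """Tissue TAC for a one-tissue compartment model:
    C_T(t) = K1 * integral of Cp(s) * exp(-k2 * (t - s)) ds,
    approximated with a rectangular rule on a uniform time grid."""
    tac = []
    for i in range(len(cp)):
        s = sum(cp[j] * math.exp(-k2 * (i - j) * dt) for j in range(i + 1))
        tac.append(K1 * s * dt)
    return tac

# Impulse plasma input (hypothetical): the tissue curve is then K1 * exp(-k2 t).
tac = one_tissue_tac([1.0, 0.0, 0.0, 0.0], 1.0, 0.5, 0.1)
```

In the indirect method, K1 and k2 are fitted voxel-wise to reconstructed TACs like this one; in the direct method, the same model is embedded inside the sinogram-domain objective function.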
He, Bo; Zhang, Hongjin; Li, Chao; Zhang, Shujing; Liang, Yan; Yan, Tianhong
2011-01-01
This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix of an extended information filter (EIF) can be pruned to achieve an efficient approach, the sparse EIF algorithm (SEIF-SLAM). All the basic update formulae can be implemented in constant time irrespective of the size of the map; hence the computational complexity is significantly reduced. A mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to vehicle motion. In order to verify the feasibility of the proposed navigation methods for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed navigation approach based on SEIF-SLAM improves navigation accuracy compared with the conventional method; moreover, the algorithm has a low computational cost compared with EKF-SLAM. PMID:22346682
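The information-filter update at the heart of EIF/SEIF can be sketched for a single scalar measurement: the information matrix and vector are updated additively, which is why a sparse measurement Jacobian touches only a few entries and enables the constant-time updates SEIF exploits. The two-state example below is illustrative, not the C-Ranger's actual state vector:

```python
def info_update(omega, xi, H, r_inv, z):
    """EIF measurement update for a scalar measurement z = H x + noise:
    Omega += H^T R^-1 H,  xi += H^T R^-1 z.
    When H has few nonzero entries, only the corresponding rows and
    columns of Omega change, giving a constant-time update."""
    n = len(xi)
    for i in range(n):
        for j in range(n):
            omega[i][j] += H[i] * r_inv * H[j]
        xi[i] += H[i] * r_inv * z
    return omega, xi

# Two-state example: prior information Omega = I, xi = 0;
# observe the first state directly (H = [1, 0]) with unit precision.
omega = [[1.0, 0.0], [0.0, 1.0]]
xi = [0.0, 0.0]
omega, xi = info_update(omega, xi, [1.0, 0.0], 1.0, 2.0)
```

Recovering the mean still requires solving Omega·mu = xi, which is where sparsification of weak links in Omega pays off for large maps.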
Application of fuzzy system theory in addressing the presence of uncertainties
NASA Astrophysics Data System (ADS)
Yusmye, A. Y. N.; Goh, B. Y.; Adnan, N. F.; Ariffin, A. K.
2015-02-01
In this paper, combinations of fuzzy system theory with finite element methods are presented and discussed as a means of dealing with uncertainties. Uncertainties must be accounted for to prevent failure of materials in engineering. There are three types of uncertainty: stochastic, epistemic and error uncertainties. In this paper, epistemic uncertainties are considered; epistemic uncertainty arises from incomplete information and a lack of knowledge or data. Fuzzy system theory is a non-probabilistic method, and it is more appropriate than a statistical approach for interpreting uncertainty when data are scarce. Fuzzy system theory comprises a number of processes, starting with the conversion of crisp inputs to fuzzy inputs through fuzzification, followed by the main process, known as mapping, i.e. the logical relationship between two or more entities. In this study, the fuzzy inputs are numerically propagated based on the extension principle method. In the final stage, the defuzzification process is applied; defuzzification converts the fuzzy output back to crisp outputs. Several illustrative examples are given, and the simulations show that the proposed method produces more conservative results than the conventional finite element method.
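The fuzzification, extension-principle mapping, and defuzzification steps described above can be sketched for a triangular fuzzy input pushed through a monotone response function via alpha-cuts. The material parameter and response function below are hypothetical, not from the paper.

```python
import numpy as np

def tri_alpha_cut(a, b, c, alpha):
    """Alpha-cut interval [lo, hi] of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def propagate(f, a, b, c, n_cuts=51):
    """Extension principle for a monotone f, evaluated on alpha-cut intervals."""
    alphas = np.linspace(0.0, 1.0, n_cuts)
    lows, highs = [], []
    for al in alphas:
        lo, hi = tri_alpha_cut(a, b, c, al)
        vals = f(np.array([lo, hi]))
        lows.append(vals.min()); highs.append(vals.max())
    return alphas, np.array(lows), np.array(highs)

def centroid_defuzzify(alphas, lows, highs):
    """Centroid of the fuzzy output, approximated from its alpha-cuts."""
    xs = np.concatenate([lows, highs])
    mu = np.concatenate([alphas, alphas])
    return float(np.sum(xs * mu) / np.sum(mu))

# hypothetical fuzzy Young's modulus (GPa) mapped to stress = E * strain
alphas, lo, hi = propagate(lambda e: e * 0.002, 190.0, 200.0, 210.0)
crisp = centroid_defuzzify(alphas, lo, hi)   # = 0.400 GPa by symmetry
```

The interval envelope (`lo`, `hi`) is the fuzzy output; the conservative design value would typically be taken from its support rather than the crisp centroid alone.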
Integration of Genetic Algorithms and Fuzzy Logic for Urban Growth Modeling
NASA Astrophysics Data System (ADS)
Foroutan, E.; Delavar, M. R.; Araabi, B. N.
2012-07-01
Urban growth phenomenon as a spatio-temporal continuous process is subject to spatial uncertainty. This inherent uncertainty cannot be fully addressed by conventional methods based on Boolean algebra. Fuzzy logic can be employed to overcome this limitation: it preserves the spatial continuity of dynamic urban growth through the choice of fuzzy membership functions, fuzzy rules and the fuzzification-defuzzification process. Fuzzy membership functions and fuzzy rule sets, the heart of fuzzy logic, are rather subjective and dependent on the expert. Because there is no definite method for determining the membership function parameters, optimization is needed to tune the parameters and improve the performance of the model. This paper integrates genetic algorithms and fuzzy logic as a genetic fuzzy system (GFS) for modeling dynamic urban growth. The proposed approach is applied to modeling urban growth in the Tehran Metropolitan Area in Iran. Historical land use/cover data of the Tehran Metropolitan Area extracted from the 1988 and 1999 Landsat ETM+ images are employed to simulate the urban growth. The urban areas, streets and vegetated areas extracted from the 1988 imagery, together with slope and elevation, were used as the physical driving forces of urban growth. The Relative Operating Characteristic (ROC) curve was used as the fitness function to evaluate the performance of the GFS algorithm. The optimum membership function parameters were applied to generate a suitability map for urban growth. Comparing the suitability map with the real land use map of 1999 gives the threshold value for the best suitability map that can simulate the land use map of 1999. The simulation outcomes, a kappa of 89.13% and an overall map accuracy of 95.58%, demonstrate the efficiency and reliability of the proposed model.
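The GA-tuned membership function idea can be sketched on synthetic data. For brevity this sketch scores candidates by the accuracy of the thresholded suitability map rather than the paper's ROC-based fitness, and the driving-force data, membership form and GA settings are all invented.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

def membership(x, center, width):
    """Sigmoid suitability membership function (parameters to be tuned)."""
    return expit((x - center) / width)

# synthetic driving-force score and observed urban (1) / non-urban (0) labels
x = rng.normal(0.0, 1.0, 500)
y = (x + rng.normal(0.0, 0.5, 500) > 0.3).astype(int)

def fitness(center, width):
    """Agreement of the thresholded suitability map with observed growth."""
    pred = membership(x, center, width) > 0.5
    return float((pred == (y == 1)).mean())

# minimal GA: truncation selection + Gaussian mutation
pop = rng.uniform([-2.0, 0.05], [2.0, 2.0], size=(30, 2))
for _ in range(40):
    scores = np.array([fitness(c, w) for c, w in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the 10 fittest
    kids = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 2))
    kids[:, 1] = np.clip(kids[:, 1], 0.01, None)          # keep width positive
    pop = np.vstack([parents, kids])
best_center, best_width = max(pop, key=lambda p: fitness(*p))
```

The tuned `(center, width)` plays the role of the optimized membership function parameters; in the paper these feed into the rule base that produces the final suitability map.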
NASA Technical Reports Server (NTRS)
Hill, Michael D.; Herrera, Acey A.; Crane, J. Allen; Packard, Edward A.; Aviado, Carlos; Sampler, Henry P.; Obenschain, Arthur (Technical Monitor)
2000-01-01
The Microwave Anisotropy Probe (MAP) Observatory, scheduled for a late 2000 launch, is designed to measure temperature fluctuations (anisotropy) and produce a high sensitivity and high spatial resolution (< 0.3 deg at 90 GHz) map of the cosmic microwave background (CMB) radiation over the entire sky between 22 and 90 GHz. MAP utilizes back-to-back Gregorian telescopes to focus the microwave signals into 10 differential microwave receivers, via 20 feed horns. Proper alignment of the telescope reflectors and the feed horns at the operating temperature of 90 K is a critical element to ensure mission success. We describe the hardware and methods used to validate the displacement/deformation predictions of the reflectors and the microwave feed horns during thermal/vacuum testing of the reflectors and the microwave instrument. The smallest deformations to be resolved by the measurement system were on the order of +/- 0.030 inches (0.762 mm). Performance of these alignment measurements inside a thermal/vacuum chamber with conventional alignment equipment posed several limitations. A photogrammetry (PG) system was chosen to perform the measurements since it is a non-contact measurement system, the measurements can be made relatively quickly and accurately, and the photogrammetric camera can be operated remotely. The hardware and methods developed to perform the MAP alignment measurements using PG proved to be highly successful. The PG measurements met the desired requirements, enabling the desired deformations to be measured and even resolved to an order of magnitude smaller than the imposed requirements. Viable data were provided to the MAP Project for a full analysis of the on-orbit performance of the Instrument's microwave system.
NASA Technical Reports Server (NTRS)
Hill, Michael D.; Herrera, Acey A.; Crane, J. Allen; Packard, Edward A.; Aviado, Carlos; Sampler, Henry P.
2000-01-01
The Microwave Anisotropy Probe (MAP) Observatory, scheduled for a fall 2000 launch, is designed to measure temperature fluctuations (anisotropy) and produce a high sensitivity and high spatial resolution (approximately 0.2 degree) map of the cosmic microwave background (CMB) radiation over the entire sky between 22 and 90 GHz. MAP utilizes back-to-back Gregorian telescopes to focus the microwave signals into 10 differential microwave receivers, via 20 feed horns. Proper alignment of the telescope reflectors and the feed horns at the operating temperature of 90 K is a critical element to ensure mission success. We describe the hardware and methods used to validate the displacement/deformation predictions of the reflectors and the microwave feed horns during thermal/vacuum testing of the reflectors and the microwave instrument. The smallest deformation predictions to be measured were on the order of +/- 0.030 inches (+/- 0.762 mm). Performance of these alignment measurements inside a thermal/vacuum chamber with conventional alignment equipment posed several limitations. The most troublesome limitation was the inability to send personnel into the chamber to perform the measurements during the test, due to the vacuum and the temperature extremes. The photogrammetry (PG) system was chosen to perform the measurements since it is a non-contact measurement system, the measurements can be made relatively quickly and accurately, and the photogrammetric camera can be operated remotely. The hardware and methods developed to perform the MAP alignment measurements using PG proved to be highly successful. The measurements met the desired requirements for the metal structures, enabling the desired distortions to be measured and resolving deformations an order of magnitude smaller than the imposed requirements. Viable data were provided to the MAP Project for a full analysis of the on-orbit performance of the Instrument's microwave system.
PCA-based groupwise image registration for quantitative MRI.
Huizinga, W; Poot, D H J; Guyader, J-M; Klaassen, R; Coolen, B F; van Kranenburg, M; van Geuns, R J M; Uitterdijk, A; Polfliet, M; Vandemeulebroucke, J; Leemans, A; Niessen, W J; Klein, S
2016-04-01
Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different acquisition parameters (or at multiple time points after injection of a contrast agent) and by fitting a qMRI signal model to the image intensities. Image registration is often necessary to compensate for misalignments due to subject motion and/or geometric distortions caused by the acquisition. However, large differences in image appearance make accurate image registration challenging. In this work, we propose a groupwise image registration method for compensating misalignment in qMRI. The groupwise formulation of the method eliminates the requirement of choosing a reference image, thus avoiding a registration bias. The method minimizes a cost function that is based on principal component analysis (PCA), exploiting the fact that intensity changes in qMRI can be described by a low-dimensional signal model, but not requiring knowledge on the specific acquisition model. The method was evaluated on 4D CT data of the lungs, and both real and synthetic images of five different qMRI applications: T1 mapping in a porcine heart, combined T1 and T2 mapping in carotid arteries, ADC mapping in the abdomen, diffusion tensor mapping in the brain, and dynamic contrast-enhanced mapping in the abdomen. Each application is based on a different acquisition model. The method is compared to a mutual information-based pairwise registration method and four other state-of-the-art groupwise registration methods. Registration accuracy is evaluated in terms of the precision of the estimated qMRI parameters, overlap of segmented structures, distance between corresponding landmarks, and smoothness of the deformation. 
In all qMRI applications the proposed method performed better than or equally well as competing methods, while avoiding the need to choose a reference image. It is also shown that the results of the conventional pairwise approach do depend on the choice of this reference image. We therefore conclude that our groupwise registration method with a similarity measure based on PCA is the preferred technique for compensating misalignments in qMRI. Copyright © 2015 Elsevier B.V. All rights reserved.
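The PCA-based groupwise idea above, that a well-aligned qMRI series is explained by a low-dimensional intensity model, can be sketched with a residual-eigenvalue cost. This is an illustrative variant, not the paper's exact dissimilarity metric, and the 1-D "images" below are synthetic.

```python
import numpy as np

def pca_groupwise_cost(images, k=2):
    """Groupwise dissimilarity: correlation energy outside the first k
    principal components of the stacked intensity matrix.  When the group
    is well aligned, intensities follow a low-dimensional signal model and
    this residual energy is small."""
    M = np.stack([np.ravel(im) for im in images])     # n_images x n_voxels
    eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(M)))[::-1]
    return float(eigvals[k:].sum())

# aligned group (same profile, different contrast) vs misaligned group
base = np.sin(np.linspace(0, 6, 100))
aligned = [s * base for s in (1.0, 1.5, 2.0, 2.5)]
shifted = [np.roll(base, 9 * i) for i in range(4)]
cost_aligned = pca_groupwise_cost(aligned, k=1)       # ~0: rank-1 group
cost_shifted = pca_groupwise_cost(shifted, k=1)       # > 0: misaligned
```

A registration optimizer would deform the images so as to drive this cost down, without ever designating one image as the reference.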
Atrial Fibrillation Ablation Guided by a Novel Nonfluoroscopic Navigation System.
Ballesteros, Gabriel; Ramos, Pablo; Neglia, Renzo; Menéndez, Diego; García-Bolao, Ignacio
2017-09-01
Rhythmia is a new nonfluoroscopic navigation system that is able to create high-density electroanatomic maps. The aim of this study was to describe the acute outcomes of atrial fibrillation (AF) ablation guided by this system, to analyze the volume provided by its electroanatomic map, and to describe its ability to locate pulmonary vein (PV) reconnection gaps in redo procedures. This observational study included 62 patients who underwent AF ablation with Rhythmia compared with a retrospective cohort who underwent AF ablation with a conventional nonfluoroscopic navigation system (Ensite Velocity). The number of surface electrograms per map was significantly higher in Rhythmia procedures (12 125 ± 2826 vs 133 ± 21 with Velocity; P < .001), with no significant differences in the total procedure time. The Orion catheter was placed for mapping in 99.5% of PV (95.61% in the control group with a conventional circular mapping catheter; P = .04). There were no significant differences in the percentage of PV isolation between the 2 groups. In redo procedures, an ablation gap could be identified on the activation map in 67% of the reconnected PV (40% in the control group; P = .042). The measured left atrial volume was lower than that calculated by computed tomography (109.3 ± 15.2 and 129.9 ± 13.2 mL, respectively; P < .001). There were no significant differences in the number of complications. The Rhythmia system is effective for AF ablation procedures, with procedure times and safety profiles similar to conventional nonfluoroscopic navigation systems. In redo procedures, it appears to be more effective in identifying reconnected PV conduction gaps. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
User Preferences in Image Map Using
NASA Astrophysics Data System (ADS)
Vondráková, A.; Vozenilek, V.
2016-06-01
In the process of map making, attention is given to the resulting image map (it must be accurate, readable, and suited to its primary purpose) and to its user aspects. Current cartography understands user issues to include all matters relating to user perception, map use and user preferences. Most commercial cartographic production is strongly tied to economic circumstances, and companies investigate users' interests and market demands. However, is it sufficient to focus only on users' preferences? Recent research on user aspects at Palacký University Olomouc addresses a much wider scope. Users' stated preferences are often misleading: users may find a particular image map pleasant, beautiful and useful and want to buy it (or use it, depending on the form of map production), but when the same user is given a practical task with that map (such as finding the shortest way), they may conclude that the initially preferred map is useless and turn to a map they had rated lower. It is therefore necessary to evaluate not only the correctness of image maps and their aesthetics but also user perception and other user issues. Eye-tracking technology is a useful tool for such testing. The research analysed how users read image maps and whether they prefer image maps over traditional maps. An eye-tracking experiment comparing conventional and image map reading was conducted: map readers were asked to solve a few simple tasks with either a conventional or an image map, and their choice of map was one of the investigated aspects of user preferences. Results demonstrate that user preferences and user needs are often quite different issues. The outcomes show that it is crucial to implement map user testing in the cartographic production process.
Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.
Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng
2015-01-01
Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any two probability measures, there is a unique optimal mass transport map between them; the transportation cost defines the Wasserstein distance between them. The Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings; thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method.
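The Wasserstein distance between two area-distortion measures can be illustrated in one dimension with empirical samples. The real method solves optimal transport on the uniformized surface via a geodesic power Voronoi diagram; the lognormal distortion samples below are hypothetical stand-ins for two shapes' area-distortion factors.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)

# hypothetical area-distortion samples from two conformally uniformized
# surfaces; shapes that distort area differently yield different measures
distortion_a = rng.lognormal(0.0, 0.2, 2000)
distortion_b = rng.lognormal(0.3, 0.2, 2000)

d_ab = wasserstein_distance(distortion_a, distortion_b)  # between shapes
d_aa = wasserstein_distance(distortion_a, distortion_a)  # self-distance: 0
```

As a metric, the distance vanishes for identical measures and grows with how differently the two uniformization maps distort area, which is what makes it usable as a shape-classification feature.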
Frickenhaus, Stephan; Kannan, Srinivasaraghavan; Zacharias, Martin
2009-02-01
A direct conformational clustering and mapping approach for peptide conformations based on backbone dihedral angles has been developed and applied to compare conformational sampling of Met-enkephalin using two molecular dynamics (MD) methods. Efficient clustering in dihedrals has been achieved by evaluating all combinations resulting from independent clustering of each dihedral angle distribution, thus resolving all conformational substates. In contrast, Cartesian clustering was unable to accurately distinguish between all substates. Projection of clusters onto dihedral principal component analysis (PCA) subspaces did not result in efficient separation of highly populated clusters. However, representation in a nonlinear metric by Sammon mapping was able to separate well the 48 highest populated clusters in just two dimensions. In addition, this approach also allowed us to visualize the transition frequencies between clusters efficiently. Significantly higher transition frequencies between more distinct conformational substates were found for a recently developed biasing-potential replica exchange MD simulation method, allowing faster sampling of possible substates compared to conventional MD simulations. Although the number of theoretically possible clusters grows exponentially with peptide length, in practice the number of clusters is limited only by the sampling size (typically much smaller), and therefore the method is also well suited for large systems. The approach could be useful to rapidly and accurately evaluate conformational sampling during MD simulations, to compare different sampling strategies and eventually to detect kinetic bottlenecks in folding pathways.
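The per-dihedral clustering with combinatorial substate assignment can be sketched as follows; the rotamer well positions and the two-angle synthetic trajectory are invented stand-ins for a real MD dihedral time series.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def cluster_dihedral(angles_deg, centers_deg):
    """Assign each angle to the nearest cluster center on the circle."""
    diff = (angles_deg[:, None] - centers_deg[None, :] + 180.0) % 360.0 - 180.0
    return np.abs(diff).argmin(axis=1)

# synthetic trajectory: two dihedrals, each hopping between two rotamer wells
n = 1000
phi = np.where(rng.random(n) < 0.5, -60.0, 60.0) + rng.normal(0, 10, n)
psi = np.where(rng.random(n) < 0.7, -45.0, 150.0) + rng.normal(0, 10, n)

# per-dihedral cluster labels, combined into conformational substates
labels = np.stack([cluster_dihedral(phi, np.array([-60.0, 60.0])),
                   cluster_dihedral(psi, np.array([-45.0, 150.0]))], axis=1)
substates = Counter(map(tuple, labels))      # up to 2 x 2 combinations here
```

Each distinct label tuple is one conformational substate; counting label changes between consecutive frames would give the transition frequencies discussed above.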
New approach to estimating variability in visual field data using an image processing technique.
Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P
1995-01-01
AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlations between LSV and conventional estimates--namely, HFA pattern standard deviation and short term fluctuation--were found. CONCLUSION--LSV is not dependent on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
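One simple way to express a local spatial variability measure on a sensitivity grid is a sliding-window standard deviation; this is an illustrative analogue, not the paper's exact LSV definition, and the 8x8 sensitivity grids below are synthetic.

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_spatial_variability(field, size=3):
    """Mean local standard deviation over a sliding window -- a simple
    pointwise 'noisiness' summary of a visual-field sensitivity grid."""
    local_sd = generic_filter(field, np.std, size=size, mode='nearest')
    return float(local_sd.mean())

# smooth vs noisy synthetic sensitivity grids (dB)
rng = np.random.default_rng(3)
smooth = np.full((8, 8), 30.0)
noisy = smooth + rng.normal(0.0, 3.0, (8, 8))
```

Because the measure is computed from a single test's spatial neighbourhoods, it needs neither normative reference data nor repeated threshold determinations, which is the practical point made above.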
Application of Thermal Infrared Remote Sensing for Quantitative Evaluation of Crop Characteristics
NASA Technical Reports Server (NTRS)
Shaw, J.; Luvall, J.; Rickman, D.; Mask, P.; Wersinger, J.; Sullivan, D.; Arnold, James E. (Technical Monitor)
2002-01-01
Evidence suggests that thermal infrared emittance (TIR) at the field-scale is largely a function of the integrated crop/soil moisture continuum. Because soil moisture dynamics largely determine crop yields in non-irrigated farming (85 % of Alabama farms are non-irrigated), TIR may be an effective method of mapping within field crop yield variability, and possibly, absolute yields. The ability to map yield variability at juvenile growth stages can lead to improved soil fertility and pest management, as well as facilitating the development of economic forecasting. Researchers at GHCC/MSFC/NASA and Auburn University are currently investigating the role of TIR in site-specific agriculture. Site-specific agriculture (SSA), or precision farming, is a method of crop production in which zones and soils within a field are delineated and managed according to their unique properties. The goal of SSA is to improve farm profits and reduce environmental impacts through targeted agrochemical applications. The foundation of SSA depends upon the spatial and temporal characterization of soil and crop properties through the creation of management zones. Management zones can be delineated using: 1) remote sensing (RS) data, 2) conventional soil testing and soil mapping, and 3) yield mapping. Portions of this research have concentrated on using remote sensing data to map yield variability in corn (Zea mays L.) and soybean (Glycine max L.) crops. Remote sensing data have been collected for several fields in the Tennessee Valley region at various crop growth stages during the last four growing seasons. Preliminary results of this study will be presented.
Dieringer, Matthias A.; Deimling, Michael; Santoro, Davide; Wuerfel, Jens; Madai, Vince I.; Sobesky, Jan; von Knobelsdorff-Brenkenhoff, Florian; Schulz-Menger, Jeanette; Niendorf, Thoralf
2014-01-01
Introduction: Visual but subjective reading of longitudinal relaxation time (T1) weighted magnetic resonance images is commonly used for the detection of brain pathologies. For this non-quantitative measure, diagnostic quality depends on hardware configuration, imaging parameters, radio frequency transmission field (B1+) uniformity, as well as observer experience. Parametric quantification of the tissue T1 relaxation parameter offsets the propensity for these effects, but is typically time consuming. For this reason, this study examines the feasibility of rapid 2D T1 quantification using a variable flip angle (VFA) approach at magnetic field strengths of 1.5 Tesla, 3 Tesla, and 7 Tesla. These efforts include validation in phantom experiments and application for brain T1 mapping. Methods: T1 quantification included simulations of the Bloch equations to correct for slice profile imperfections, and a correction for B1+. Fast gradient echo acquisitions were conducted using three adjusted flip angles for the proposed T1 quantification approach, which was benchmarked against slice-profile-uncorrected 2D VFA and an inversion-recovery spin-echo based reference method. Brain T1 mapping was performed in six healthy subjects, one multiple sclerosis patient, and one stroke patient. Results: Phantom experiments showed a mean T1 estimation error of (-63±1.5)% for slice-profile-uncorrected 2D VFA and (0.2±1.4)% for the proposed approach compared to the reference method. Scan time for single-slice T1 mapping including B1+ mapping could be reduced to 5 seconds using an in-plane resolution of (2×2) mm2, which equals a scan time reduction of more than 99% compared to the reference method. Conclusion: Our results demonstrate that rapid 2D T1 quantification using a variable flip angle approach is feasible at 1.5T/3T/7T. It represents a valuable alternative for rapid T1 mapping due to the gain in speed versus conventional approaches.
This progress may serve to enhance the capabilities of parametric MR based lesion detection and brain tissue characterization. PMID:24621588
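The VFA principle behind this rapid T1 mapping can be sketched via the standard linearized SPGR (DESPOT1-style) fit, S/sin(a) = E1 * S/tan(a) + M0*(1-E1) with E1 = exp(-TR/T1). This toy version omits the slice-profile and B1+ corrections that the study adds; TR, flip angles and T1 are illustrative values.

```python
import numpy as np

def spgr_signal(t1, m0, tr, flip_rad):
    """Steady-state spoiled gradient echo (SPGR) signal equation."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(flip_rad) * (1 - e1) / (1 - e1 * np.cos(flip_rad))

def vfa_t1(signals, flips_rad, tr):
    """DESPOT1-style linear fit: S/sin(a) = E1 * S/tan(a) + M0*(1-E1)."""
    y = signals / np.sin(flips_rad)
    x = signals / np.tan(flips_rad)
    slope, _ = np.polyfit(x, y, 1)     # slope = E1
    return -tr / np.log(slope)

tr = 5.0                                  # ms
flips = np.deg2rad([3.0, 10.0, 20.0])     # three flip angles, as in the study
sig = spgr_signal(1000.0, 1.0, tr, flips) # simulate a voxel with T1 = 1000 ms
t1_est = vfa_t1(sig, flips, tr)           # recovers 1000 ms (noise-free)
```

In the 2D case the effective flip angle varies across the slice profile, which is why an uncorrected fit is badly biased and the study's Bloch-simulation correction is needed.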
NASA Astrophysics Data System (ADS)
Ketcha, M. D.; De Silva, T.; Uneri, A.; Jacobson, M. W.; Goerres, J.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.
2017-06-01
A multi-stage image-based 3D-2D registration method is presented that maps annotations in a 3D image (e.g. point labels annotating individual vertebrae in preoperative CT) to an intraoperative radiograph in which the patient has undergone non-rigid anatomical deformation due to changes in patient positioning or due to the intervention itself. The proposed method (termed msLevelCheck) extends a previous rigid registration solution (LevelCheck) to provide an accurate mapping of vertebral labels in the presence of spinal deformation. The method employs a multi-stage series of rigid 3D-2D registrations performed on sets of automatically determined and increasingly localized sub-images, with the final stage achieving a rigid mapping for each label to yield a locally rigid yet globally deformable solution. The method was evaluated first in a phantom study in which a CT image of the spine was acquired followed by a series of 7 mobile radiographs with increasing degree of deformation applied. Second, the method was validated using a clinical data set of patients exhibiting strong spinal deformation during thoracolumbar spine surgery. Registration accuracy was assessed using projection distance error (PDE) and failure rate (PDE > 20 mm—i.e. label registered outside vertebra). The msLevelCheck method was able to register all vertebrae accurately for all cases of deformation in the phantom study, improving the maximum PDE of the rigid method from 22.4 mm to 3.9 mm. The clinical study demonstrated the feasibility of the approach in real patient data by accurately registering all vertebral labels in each case, eliminating all instances of failure encountered in the conventional rigid method. The multi-stage approach demonstrated accurate mapping of vertebral labels in the presence of strong spinal deformation. The msLevelCheck method maintains other advantageous aspects of the original LevelCheck method (e.g. 
compatibility with standard clinical workflow, large capture range, and robustness against mismatch in image content) and extends capability to cases exhibiting strong changes in spinal curvature.
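The two reported evaluation metrics, projection distance error (PDE) and failure rate with failure defined as PDE > 20 mm, can be computed as follows; the 2-D label coordinates are invented for illustration.

```python
import numpy as np

def pde_and_failure_rate(projected_mm, truth_mm, fail_mm=20.0):
    """Projection distance error (mm) per vertebral label, plus the
    fraction of labels registered farther than the failure threshold."""
    pde = np.linalg.norm(projected_mm - truth_mm, axis=1)
    return pde, float((pde > fail_mm).mean())

# hypothetical projected vs true label positions in the radiograph plane
projected = np.array([[10.0, 12.0], [40.0, 40.0], [5.0, 5.0]])
truth = np.array([[11.0, 12.0], [15.0, 40.0], [5.0, 6.0]])
pde, fail_rate = pde_and_failure_rate(projected, truth)
```

Here the middle label misses by 25 mm and counts as a failure, matching the definition used to score both the rigid and multi-stage methods.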
Improved method for retinotopy constrained source estimation of visual evoked responses
Hagler, Donald J.; Dale, Anders M.
2011-01-01
Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418
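The IRLS step used above for robust estimation can be sketched with inverse-residual (L1-like) weights on a line fit containing one gross outlier; this weighting scheme is a common generic choice, not necessarily the paper's exact weight function, and the data are synthetic.

```python
import numpy as np

def irls(A, b, n_iter=30, eps=1e-6):
    """Iteratively reweighted least squares with 1/|r| weights,
    down-weighting observations with large residuals (approximates an
    L1 fit)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # ordinary LS start
    for _ in range(n_iter):
        r = np.abs(b - A @ x)
        w = np.sqrt(1.0 / np.maximum(r, eps))      # sqrt of IRLS weights
        x = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
    return x

# line fit with one gross outlier (e.g. one stimulus location with high
# residual error)
t = np.linspace(0.0, 1.0, 20)
b = 2.0 * t + 1.0
b[5] += 10.0                                # the outlier
A = np.column_stack([t, np.ones_like(t)])
x_irls = irls(A, b)                         # near [2, 1]: outlier suppressed
x_ols = np.linalg.lstsq(A, b, rcond=None)[0]
```

In the source-estimation setting, the rows of `A` correspond to stimulus locations, so locations with persistently high residual error contribute little to the final waveform estimates.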
Christen, T.; Pannetier, NA.; Ni, W.; Qiu, D.; Moseley, M.; Schuff, N.; Zaharchuk, G.
2014-01-01
In the present study, we describe a fingerprinting approach to analyze the time evolution of the MR signal and retrieve quantitative information about the microvascular network. We used a Gradient Echo Sampling of the Free Induction Decay and Spin Echo (GESFIDE) sequence and defined a fingerprint as the ratio of signals acquired pre- and post-injection of an iron-based contrast agent. We then simulated the same experiment with an advanced numerical tool that takes a virtual voxel containing blood vessels as input, then computes microscopic magnetic fields and water diffusion effects, and eventually derives the expected MR signal evolution. The input parameters of the simulations (cerebral blood volume [CBV], mean vessel radius [R], and blood oxygen saturation [SO2]) were varied to obtain a dictionary of all possible signal evolutions. The best fit between the observed fingerprint and the dictionary was then determined using least squares minimization. This approach was evaluated in 5 normal subjects and the results were compared to those obtained using more conventional MR methods: steady-state contrast imaging for CBV and R, and a global measure of oxygenation obtained from the superior sagittal sinus for SO2. The fingerprinting method enabled the creation of high-resolution parametric maps of the microvascular network showing expected contrast and fine details. Numerical values in gray matter (CBV=3.1±0.7%, R=12.6±2.4µm, SO2=59.5±4.7%) are consistent with literature reports and correlated with conventional MR approaches. SO2 values in white matter (53.0±4.0%) were slightly lower than expected. Numerous improvements can easily be made and the method should be useful for studying brain pathologies. PMID:24321559
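The dictionary-matching step can be sketched with a stand-in signal model: simulate fingerprints over a (CBV, R, SO2) grid and pick the least-squares nearest entry. Everything here (the analytic model, grid ranges, noise level) is an invented illustration; the actual dictionary comes from simulating microscopic fields and water diffusion in a virtual voxel.

```python
import numpy as np

rng = np.random.default_rng(4)

def fingerprint(cbv, radius, so2, t):
    """Stand-in analytic signal model used only to illustrate matching."""
    return np.exp(-cbv * t) * (1 + 0.1 * so2 * np.sin(radius * t / 5.0))

t = np.linspace(0.1, 2.0, 64)

# dictionary over a coarse (CBV, R, SO2) parameter grid
grid = [(cbv, r, so2)
        for cbv in np.linspace(1.0, 5.0, 9)
        for r in np.linspace(5.0, 20.0, 7)
        for so2 in np.linspace(0.4, 0.8, 9)]
D = np.array([fingerprint(*p, t) for p in grid])

# least-squares match of a noisy measured fingerprint to the dictionary
truth = (3.0, 12.5, 0.6)
measured = fingerprint(*truth, t) + rng.normal(0.0, 5e-4, t.size)
best = grid[int(np.argmin(((D - measured) ** 2).sum(axis=1)))]
```

Repeating the match voxel by voxel yields the parametric CBV, R and SO2 maps; dictionary resolution trades accuracy against memory and search time.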
NASA Astrophysics Data System (ADS)
Thordsen, J. J.; Stonestrom, D. A.; Conaway, C. H.; Luo, W.; Baker, R. J.; Andraski, B. J.
2015-12-01
Two types of handheld photoionization detectors were evaluated in April 2015 for reconnaissance mapping of volatile organic compounds (VOCs) in the unsaturated zone surrounding legacy disposal trenches for commercial low-level radioactive waste near Beatty, Nevada (USA). This method is rapid and cost effective compared to the more conventional procedure used at the site, where VOCs are collected on sorbent cartridges in the field followed by thermal desorption, gas chromatographic separation, and quantitation by mass spectrometry in the laboratory (TD-GC-MS analysis). Using the conventional method, more than sixty distinct compounds have been identified in the 110-m deep unsaturated-zone vapor phase, and the changing nature of the VOC mix over a 15-yr timeframe has been recorded. Analyses to date have identified chlorofluorocarbons (CFCs), chlorinated ethenes, chlorinated ethanes, gasoline-range hydrocarbons, chloroform, and carbon tetrachloride as the main classes of VOCs present. The VOC plumes emanating from the various subgroups of trenches are characterized by different relative abundances of the compound classes and total VOC concentrations that span several orders of magnitude. One of the photoionization detectors, designed for industrial compliance testing, lacked sufficient dynamic range and sensitivity to be useful. The other, a wide-range (1 ppb-20,000 ppm) research-grade instrument with a 10.6 eV photoionization detector (PID) lamp, produced promising results, detecting roughly half of the non-CFC VOCs present. The rapid and inexpensive photoionization method is envisioned as a screening tool to supplement, expedite, and direct the collection of additional samples for TD-GC-MS analyses at this and other VOC-contaminated sites.
Rasper, Michael; Nadjiri, Jonathan; Sträter, Alexandra S; Settles, Marcus; Laugwitz, Karl-Ludwig; Rummeny, Ernst J; Huber, Armin M
2017-06-01
To prospectively compare image quality and myocardial T1 relaxation times of modified Look-Locker inversion recovery (MOLLI) imaging at 3.0 T acquired with patient-adaptive dual-source (DS) and conventional single-source (SS) radiofrequency (RF) transmission. Pre- and post-contrast MOLLI T1 mapping using SS and DS was acquired in 27 patients. Patient-wise and segment-wise analysis of T1 times was performed. The correlation of DS MOLLI measurements with a reference spin echo sequence was analysed in phantom experiments. DS MOLLI imaging reduced T1 standard deviation in 14 out of 16 myocardial segments (87.5%). Significant reduction of T1 variance could be obtained in 7 segments (43.8%). DS significantly reduced myocardial T1 variance in 16 out of 25 patients (64.0%). With conventional RF transmission, dielectric shading artefacts occurred in six patients, causing diagnostic uncertainty. No corresponding artefacts were found on DS images. DS image findings were in accordance with conventional T1 mapping and late gadolinium enhancement (LGE) imaging. Phantom experiments demonstrated good correlation of myocardial T1 time between DS MOLLI and spin echo imaging. Dual-source RF transmission enhances myocardial T1 homogeneity in MOLLI imaging at 3.0 T. The reduction of signal inhomogeneities and artefacts due to dielectric shading is likely to enhance diagnostic confidence.
Regional mapping of soil parent material by machine learning based on point data
NASA Astrophysics Data System (ADS)
Lacoste, Marine; Lemercier, Blandine; Walter, Christian
2011-10-01
A machine learning system (MART) has been used to predict soil parent material (SPM) at the regional scale with a 50-m resolution. The use of point-specific soil observations as training data was tested as a replacement for the soil maps introduced in previous studies, with the aim of generating a more even distribution of training data over the study area and reducing information uncertainty. The 27,020-km² study area (Brittany, northwestern France) contains mainly metamorphic, igneous and sedimentary substrates. However, superficial deposits (aeolian loam, colluvial and alluvial deposits) very often represent the actual SPM and are typically under-represented in existing geological maps. In order to calibrate the predictive model, a total of 4920 point soil descriptions were used as training data along with 17 environmental predictors (terrain attributes derived from a 50-m DEM, as well as emissions of K, Th and U obtained by means of airborne gamma-ray spectrometry, geological variables at the 1:250,000 scale and land use maps obtained by remote sensing). Model predictions were then compared: i) during SPM model creation to point data not used in model calibration (internal validation), ii) to the entire point dataset (point validation), and iii) to existing detailed soil maps (external validation). The internal, point and external validation accuracy rates were 56%, 81% and 54%, respectively. Aeolian loam was one of the three most closely predicted substrates. Poor prediction results were associated with uncommon materials and areas with high geological complexity, i.e. areas where existing maps used for external validation were also imprecise. The resultant predictive map turned out to be more accurate than existing geological maps and moreover indicated surface deposits whose spatial coverage is consistent with actual knowledge of the area.
This method proves quite useful for predicting SPM in areas where conventional mapping techniques would be too costly or time-consuming, or where soil maps are insufficient for use as training data. In addition, the method produces repeatable and interpretable results whose accuracy can be assessed objectively.
Sekiguchi, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi
2016-01-01
Coherent X-ray diffraction imaging (CXDI) is one of the techniques used to visualize structures of non-crystalline particles of micrometer to submicrometer size in materials and biological science. In the structural analysis of CXDI, the electron density map of a sample particle can theoretically be reconstructed from a diffraction pattern by using phase-retrieval (PR) algorithms. In practice, however, the reconstruction is difficult because diffraction patterns are affected by Poisson noise and are missing data in small-angle regions due to the beam stop and the saturation of detector pixels. In contrast to X-ray protein crystallography, in which the phases of diffracted waves are experimentally estimated, phase retrieval in CXDI relies entirely on the computational procedure driven by the PR algorithms. Thus, objective criteria and methods to assess the accuracy of retrieved electron density maps are necessary in addition to conventional parameters monitoring the convergence of PR calculations. Here, a data analysis scheme, named ASURA, is proposed which selects the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a diffraction pattern. Each electron density map composed of J pixels is expressed as a point in a J-dimensional space. Principal component analysis is applied to describe characteristics in the distribution of the maps in the J-dimensional space. When the distribution is characterized by a small number of principal components, the distribution is classified using the k-means clustering method. The classified maps are evaluated by several parameters to assess their quality. Using the proposed scheme, structure analysis of a diffraction pattern from a non-crystalline particle is conducted in two stages: estimation of the overall shape and determination of the fine structure inside the support shape. In each stage, the most accurate and probable density maps are objectively selected.
The validity of the proposed scheme is examined by application to diffraction data that were obtained from an aggregate of metal particles and a biological specimen at the XFEL facility SACLA using a custom-made diffraction apparatus.
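The classification step above (grouping retrieved maps after PCA projection) is ordinary k-means clustering. The following toy sketch applies k-means to a handful of invented 2-D "PCA coordinates" to show the assign-then-update loop; it is not the ASURA implementation, and the first-k-points initialization is a simplification.

```python
def kmeans(points, k, iters=50):
    """Plain k-means on tuples of coordinates; returns centroids and clusters."""
    centroids = [list(p) for p in points[:k]]   # naive init: first k points
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)  # assign to nearest centroid
        for i, cl in enumerate(clusters):
            if cl:                               # update centroid as mean
                centroids[i] = [sum(x) / len(cl) for x in zip(*cl)]
    return centroids, clusters

# Two well-separated groups of invented PCA coordinates
pts = [(0.1, 0.0), (0.0, 0.2), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(pts, 2)
```

Real usage would cluster the 1000 retrieved maps in the reduced principal-component space and then score each cluster's maps with the quality parameters the abstract mentions.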
Combining task-evoked and spontaneous activity to improve pre-operative brain mapping with fMRI
Fox, Michael D.; Qian, Tianyi; Madsen, Joseph R.; Wang, Danhong; Li, Meiling; Ge, Manling; Zuo, Huan-cong; Groppe, David M.; Mehta, Ashesh D.; Hong, Bo; Liu, Hesheng
2016-01-01
Noninvasive localization of brain function is used to understand and treat neurological disease, exemplified by pre-operative fMRI mapping prior to neurosurgical intervention. The principal approach for generating these maps relies on brain responses evoked by a task and, despite known limitations, has dominated clinical practice for over 20 years. Recently, pre-operative fMRI mapping based on correlations in spontaneous brain activity has been demonstrated; however, this approach has its own limitations and has not seen widespread clinical use. Here we show that spontaneous and task-based mapping can be performed together using the same pre-operative fMRI data, provide complementary information relevant for functional localization, and can be combined to improve identification of eloquent motor cortex. Accuracy, sensitivity, and specificity of our approach are quantified through comparison with electrical cortical stimulation mapping in eight patients with intractable epilepsy. Broad applicability and reproducibility of our approach are demonstrated through prospective replication in an independent dataset of six patients from a different center. In both cohorts and every individual patient, we see a significant improvement in signal-to-noise ratio and mapping accuracy independent of threshold, quantified using receiver operating characteristic curves. Collectively, our results suggest that modifying the processing of fMRI data to incorporate both task-based and spontaneous activity significantly improves functional localization in pre-operative patients. Because this method requires no additional scan time or modification to conventional pre-operative data acquisition protocols, it could have widespread utility. PMID:26408860
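The threshold-independent evaluation mentioned above sweeps a threshold over the predicted map and compares each binarization to a gold standard, accumulating a receiver operating characteristic (ROC) curve and its area. A minimal sketch with invented voxel scores and labels (1 = site positive on cortical stimulation):

```python
def roc_auc(scores, labels):
    """Return ROC points and AUC for continuous scores vs 0/1 labels
    (assumes no tied scores, at least one positive and one negative)."""
    pairs = sorted(zip(scores, labels), reverse=True)  # descending threshold sweep
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    auc = 0.0
    for s, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
            auc += tp / pos / neg        # add a rectangle of width 1/neg
        points.append((fp / neg, tp / pos))
    return points, auc

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]   # e.g. voxelwise map values
labels = [1,   1,   0,   1,   0,    0]     # stimulation-positive sites
points, auc = roc_auc(scores, labels)       # auc = 8/9
```

Because the AUC integrates over all thresholds, it captures the "improvement independent of threshold" that the abstract reports.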
NASA Astrophysics Data System (ADS)
Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2018-01-01
Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), is applied to DBS imaging applications, forming narrower Doppler filter groups than the FFT with lower side lobe levels. Regrettably, the IAA is iterative and requires matrix multiplication and inversion when forming the covariance matrix and its inverse, and when traversing the WLS estimate for each sampling point, resulting in notably high, cubic-time computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structures of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication operation, based on the typical Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve its fast solution and the Toeplitz structure of R to realize its fast inversion. The proposed method enjoys lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and operational efficiency.
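The speed-up rests on the fact that a Toeplitz system can be solved by the Levinson-Durbin recursion in O(p²) operations instead of the O(p³) of a generic solver. The sketch below implements the real-valued Yule-Walker special case (solving Σⱼ a[j]·r[|i−j|] = r[i] from an autocorrelation sequence); the paper's Gohberg-Semencul-based inversion of a complex Hermitian Toeplitz matrix is more involved, so this is illustrative only.

```python
def levinson_durbin(r, p):
    """Levinson-Durbin recursion: given autocorrelations r[0..p],
    return coefficients a[1..p] solving the Yule-Walker equations."""
    a = [0.0] * (p + 1)
    err = r[0]                              # prediction error power
    for i in range(1, p + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err                       # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):               # order-update of coefficients
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)
    return a[1:]

r = [1.0, 0.5, 0.2]          # a toy autocorrelation sequence
a = levinson_durbin(r, 2)    # a ≈ [0.5333, -0.0667]
```

The recursion touches each previous coefficient once per order, which is where the quadratic (rather than cubic) cost comes from.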
Baquero, Maria T; Lostritto, Karen; Gustavson, Mark D; Bassi, Kimberly A; Appia, Franck; Camp, Robert L; Molinaro, Annette M; Harris, Lyndsay N; Rimm, David L
2011-11-02
Microtubule associated proteins (MAPs) endogenously regulate microtubule stabilization and have been reported as prognostic and predictive markers for taxane response. The microtubule stabilizer MAP-tau has shown conflicting results. We quantitatively assessed MAP-tau expression in two independent breast cancer cohorts to determine the prognostic and predictive value of this biomarker. MAP-tau expression was evaluated in the retrospective Yale University breast cancer cohort (n = 651) using tissue microarrays and also in the TAX 307 cohort, a clinical trial randomized for TAC versus FAC chemotherapy (n = 140), using conventional whole tissue sections. Expression was measured using the AQUA method for quantitative immunofluorescence. Scores were correlated with clinicopathologic variables, survival, and response to therapy. In the Yale cohort, Cox univariate analysis indicated improved overall survival (OS) for tumors with high MAP-tau expression (HR = 0.691, 95% CI = 0.489-0.974; P = 0.004). Kaplan-Meier analysis showed 10-year survival for 65% of patients with high MAP-tau expression compared to 52% with low expression (P = .006). In TAX 307, high expression was associated with significantly longer median time to tumor progression (TTP) regardless of treatment arm (33.0 versus 23.4 months, P = 0.010), with mean TTP of 31.2 months. Response rates did not differ by MAP-tau expression (P = 0.518) or by treatment arm (P = 0.584). Quantitative measurement of MAP-tau expression has prognostic value in both cohorts, with high expression associated with longer TTP and OS. Differences by treatment arm or response rate in low versus high MAP-tau groups were not observed, indicating that MAP-tau is not associated with response to taxanes and is not a useful predictive marker for taxane-based chemotherapy.
Development of predictive mapping techniques for soil survey and salinity mapping
NASA Astrophysics Data System (ADS)
Elnaggar, Abdelhamid A.
Conventional soil maps are a valuable source of information about soil characteristics; however, they are subjective, expensive, and time-consuming to prepare. They also include no explicit information about the conceptual mental model used in developing them, nor about their accuracy and associated error. Decision tree analysis (DTA) was successfully used in retrieving the expert knowledge embedded in old soil survey data. This knowledge was efficiently used in developing predictive soil maps for the study areas in Benton and Malheur Counties, Oregon, and assessing their consistency. A retrieved soil-landscape model from a reference area in Harney County was extrapolated to develop a preliminary soil map for the neighboring unmapped part of Malheur County. The developed map had a low prediction accuracy and only a few soil map units (SMUs) were predicted with significant accuracy, mostly those shallow SMUs that have either a lithic contact with the bedrock or developed on a duripan. On the other hand, the soil map developed from field data was predicted with very high accuracy (overall about 97%). Salt-affected areas of the Malheur County study area are indicated by their high spectral reflectance, and they are easily discriminated in the remote sensing data. However, remote sensing data fail to distinguish between the different classes of soil salinity. Using the DTA method, five classes of soil salinity were successfully predicted with an overall accuracy of about 99%. Moreover, the calculated area of salt-affected soil was overestimated when mapped using remote sensing data compared to that predicted by using DTA. Hence, DTA could be a very helpful approach for developing soil survey and soil salinity maps in a more objective, effective, less expensive, and quicker way based on field data.
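At the heart of decision tree analysis is the split criterion: pick the attribute whose partition of the training records yields the largest information gain about the target class. A toy sketch with invented landscape attributes and soil classes (not the dissertation's actual variables):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(records, attr, target="soil"):
    """Entropy reduction from splitting the records on one attribute."""
    base = entropy([r[target] for r in records])
    groups = {}
    for r in records:
        groups.setdefault(r[attr], []).append(r[target])
    remainder = sum(len(g) / len(records) * entropy(g) for g in groups.values())
    return base - remainder

records = [  # invented point observations
    {"slope": "steep", "parent": "basalt", "soil": "shallow"},
    {"slope": "steep", "parent": "tuff",   "soil": "shallow"},
    {"slope": "flat",  "parent": "basalt", "soil": "deep"},
    {"slope": "flat",  "parent": "tuff",   "soil": "deep"},
]
best = max(("slope", "parent"), key=lambda a: information_gain(records, a))
# here "slope" perfectly separates the classes, so it is chosen as the root split
```

A full tree builder recurses on each split subset until leaves are pure, which is how a soil-landscape model can be read back out of survey data as explicit rules.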
New Clinically Feasible 3T MRI Protocol to Discriminate Internal Brain Stem Anatomy.
Hoch, M J; Chung, S; Ben-Eliezer, N; Bruno, M T; Fatterpekar, G M; Shepherd, T M
2016-06-01
Two new 3T MR imaging contrast methods, track density imaging and echo modulation curve T2 mapping, were combined with simultaneous multisection acquisition to reveal exquisite anatomic detail at 7 canonical levels of the brain stem. Compared with conventional MR imaging contrasts, many individual brain stem tracts and nuclear groups were directly visualized for the first time at 3T. This new approach is clinically practical and feasible (total scan time = 20 minutes), allowing better brain stem anatomic localization and characterization. © 2016 by American Journal of Neuroradiology.
Baseline mathematics and geodetics for tracking operations
NASA Technical Reports Server (NTRS)
James, R.
1981-01-01
Various geodetic and mapping algorithms are analyzed as they apply to radar tracking systems and tested in extended BASIC for real-time computer applications. Closed-form approaches for converting Earth-centered coordinates to latitude, longitude, and altitude are compared with classical approximations. A simplified approach to atmospheric refractivity called gradient refraction is compared with conventional ray tracing processes. An extremely detailed set of documentation providing the theory, derivations, and application of the algorithms used in the programs is included. Validation methods are also presented for testing the accuracy of the algorithms.
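A standard closed-form approach of the kind compared above is Bowring's approximation for converting Earth-centered (ECEF) coordinates to geodetic latitude, longitude, and height. The sketch below uses the WGS-84 ellipsoid and checks itself with a round trip; the original report predates WGS-84, so the constants here are a modern stand-in.

```python
import math

A = 6378137.0                     # WGS-84 semi-major axis (m)
F = 1.0 / 298.257223563           # flattening
B = A * (1.0 - F)                 # semi-minor axis
E2 = F * (2.0 - F)                # first eccentricity squared
EP2 = (A * A - B * B) / (B * B)   # second eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Forward conversion (radians, meters) -> ECEF x, y, z."""
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_geodetic(x, y, z):
    """Bowring's single-step closed-form approximation."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    theta = math.atan2(z * A, p * B)          # parametric-latitude guess
    lat = math.atan2(z + EP2 * B * math.sin(theta) ** 3,
                     p - E2 * A * math.cos(theta) ** 3)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    h = p / math.cos(lat) - n
    return lat, lon, h

lat0, lon0, h0 = math.radians(34.2), math.radians(-118.5), 1200.0
lat1, lon1, h1 = ecef_to_geodetic(*geodetic_to_ecef(lat0, lon0, h0))
```

For near-Earth points the single Bowring step agrees with an iterative solution to well below a millimeter, which is why it suits real-time tracking loops.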
USING DIRECT-PUSH TOOLS TO MAP HYDROSTRATIGRAPHY AND PREDICT MTBE PLUME DIVING
Conventional wells for monitoring MTBE contamination at underground storage tank sites are screened a few feet above and a few feet below the water table. At some sites, a plume of contamination in ground water may dive below the screen of conventional monitoring wells and escap...
Rhee, Ye-Kyu
2015-01-01
PURPOSE The aim of this study was to evaluate the accuracy of conventional and digital impression techniques by superimposition analysis of 3D digital models. MATERIALS AND METHODS Twenty-four patients who had no periodontitis or temporomandibular joint disease were selected for analysis. As the reference model, digital impressions were made with a digital impression system. As test models, conventional dual-arch and full-arch impression techniques using addition-type polyvinylsiloxane were applied to fabricate casts. A 3D laser scanner was used to scan the casts. Three pairs of each of the 25 STL datasets were imported into the inspection software. The three-dimensional differences were illustrated in a color-coded map. For three-dimensional quantitative analysis, 4 specified contact locations (buccal and lingual cusps of the second premolar and molar) were established. For two-dimensional quantitative analysis, sections from the buccal cusp to the lingual cusp of the second premolar and molar were acquired along the tooth axis. RESULTS In the color-coded map, the biggest difference was seen between intraoral scanning and the dual-arch impression (P<.05). In three-dimensional analysis, the biggest difference was seen between intraoral scanning and the dual-arch impression, and the smallest difference was seen between the dual-arch and full-arch impressions. CONCLUSION The two- and three-dimensional deviations between the intraoral scanner and the dual-arch impression were bigger than those between the full-arch and dual-arch impressions (P<.05). The second premolar showed bigger three-dimensional deviations than the second molar (P>.05). PMID:26816576
Wilson, J.T.; Morlock, S.E.; Baker, N.T.
1997-01-01
Acoustic Doppler current profiler, global positioning system, and geographic information system technology were used to map the bathymetry of Morse and Geist Reservoirs, two artificial lakes used for public water supply in central Indiana. The project was a pilot study to evaluate the use of the technologies for bathymetric surveys. Bathymetric surveys were last conducted in 1978 on Morse Reservoir and in 1980 on Geist Reservoir; those surveys were done with conventional methods using networks of fathometer transects. The 1996 bathymetric surveys produced updated estimates of reservoir volumes that will serve as baseline data for future estimates of storage capacity and sedimentation rates. An acoustic Doppler current profiler and global positioning system receiver were used to collect water-depth and position data from April 1996 through October 1996. All water-depth and position data were imported to a geographic information system to create a database. The geographic information system then was used to generate water-depth contour maps and to compute the volumes for each reservoir. The computed volume of Morse Reservoir was 22,820 acre-feet (7.44 billion gallons), with a surface area of 1,484 acres. The computed volume of Geist Reservoir was 19,280 acre-feet (6.29 billion gallons), with a surface area of 1,848 acres. The computed 1996 reservoir volumes are less than the design volumes and indicate that sedimentation has occurred in both reservoirs.
Cross sections were constructed from the computer-generated surfaces for 1996 and compared to the fathometer profiles from the 1978 and 1980 surveys; analysis of these cross sections also indicates that some sedimentation has occurred in both reservoirs. The acoustic Doppler current profiler, global positioning system, and geographic information system technologies described in this report produced bathymetric maps and volume estimates more efficiently and with comparable or greater resolution than conventional bathymetry methods.
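The final GIS step, computing a volume from the interpolated depth surface, amounts to summing depth times cell area over the grid and converting units. A minimal sketch with an invented 4-cell grid (the actual surveys used far denser grids):

```python
CELL_AREA_M2 = 50.0 * 50.0            # one hypothetical 50 m x 50 m grid cell
M3_PER_ACRE_FOOT = 1233.48184         # unit conversions
GALLONS_PER_M3 = 264.172052

depths_m = [                          # interpolated water depth at each cell
    [2.0, 3.5],
    [4.0, 1.5],
]
volume_m3 = sum(d * CELL_AREA_M2 for row in depths_m for d in row)
acre_feet = volume_m3 / M3_PER_ACRE_FOOT
gallons = volume_m3 * GALLONS_PER_M3
```

Repeating the same summation on surfaces from different survey years gives the storage-capacity change from which sedimentation rates are estimated.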
Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis
Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.
2016-01-01
Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
Fast periodic stimulation (FPS): a highly effective approach in fMRI brain mapping.
Gao, Xiaoqing; Gentile, Francesco; Rossion, Bruno
2018-06-01
Defining the neural basis of perceptual categorization in a rapidly changing natural environment with low-temporal-resolution methods such as functional magnetic resonance imaging (fMRI) is challenging. Here, we present a novel fast periodic stimulation (FPS)-fMRI approach to define face-selective brain regions with natural images. Human observers are presented with a dynamic stream of widely variable natural object images alternating at a fast rate (6 images/s). Every 9 s, a short burst of variable face images contrasting with object images in pairs induces an objective face-selective neural response at 0.111 Hz. A model-free Fourier analysis achieves a twofold increase in signal-to-noise ratio compared to a conventional block-design approach with identical stimuli and scanning duration, allowing derivation of a comprehensive map of face-selective areas in the ventral occipito-temporal cortex, including the anterior temporal lobe (ATL), in all individual brains. Critically, the periodicity of the desired category contrast and the random variability among widely diverse images effectively eliminate the contribution of low-level visual cues and lead to the highest values (80-90%) of test-retest reliability in the spatial activation map yet reported in imaging higher-level visual functions. FPS-fMRI opens a new avenue for understanding brain function with low-temporal-resolution methods.
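The model-free Fourier analysis above reads the response out as the DFT amplitude at the stimulation frequency (1/9 Hz), with neighboring bins serving as a noise baseline. The sketch below builds a synthetic 90 s time series sampled at 1 Hz (a hypothetical 1 s TR, so 1/9 Hz falls exactly on bin 10) and recovers the embedded periodic response:

```python
import cmath
import math

N = 90                      # 90 samples at 1 Hz -> frequency resolution 1/90 Hz
signal = [
    0.5 * math.sin(2 * math.pi * (1.0 / 9.0) * n)   # target response at 1/9 Hz
    + 0.1 * math.sin(2 * math.pi * 0.3 * n)         # unrelated fluctuation
    for n in range(N)
]

def dft_amplitude(x, k):
    """Single-sided amplitude of DFT bin k of a real signal."""
    n = len(x)
    coef = sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
    return 2.0 * abs(coef) / n

amps = [dft_amplitude(signal, k) for k in range(1, N // 2)]
peak_bin = amps.index(max(amps)) + 1   # bin 10 = 10/90 Hz ≈ 0.111 Hz
```

Because the run length is an integer number of stimulation cycles, the response energy lands in a single bin, which is what makes the readout objective and threshold-free.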
Quantitative analysis of scale of aeromagnetic data raises questions about geologic-map scale
Nykanen, V.; Raines, G.L.
2006-01-01
A recently published study has shown that small-scale geologic map data can reproduce mineral assessments made with considerably larger scale data. This result contradicts conventional wisdom about the importance of scale in mineral exploration, at least for regional studies. In order to formally investigate aspects of scale, a weights-of-evidence analysis using known gold occurrences and deposits in the Central Lapland Greenstone Belt of Finland as training sites provided a test of the predictive power of the aeromagnetic data. These orogenic-mesothermal-type gold occurrences and deposits have strong lithologic and structural controls associated with long (up to several kilometers), narrow (up to hundreds of meters) hydrothermal alteration zones with associated magnetic lows. The aeromagnetic data were processed using conventional geophysical methods of successive upward continuation, simulating terrain clearance or 'flight height' from the original 30 m to an artificial 2000 m. The analyses show, as expected, that the predictive power of aeromagnetic data, as measured by the weights-of-evidence contrast, decreases with increasing flight height. Interestingly, the Moran autocorrelation of aeromagnetic data representing differing flight heights, that is, spatial scales, decreases with decreasing resolution of the source data. The Moran autocorrelation coefficient seems to be another measure of the quality of the aeromagnetic data for predicting exploration targets. © Springer Science+Business Media, LLC 2007.
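The weights-of-evidence contrast used above to score the evidence layer comes from the 2×2 table of evidence presence versus deposit presence: C = W⁺ − W⁻, where W⁺ = ln[P(B|D)/P(B|D̄)] and W⁻ = ln[P(B̄|D)/P(B̄|D̄)]. A compact sketch with invented cell counts:

```python
import math

def woe_contrast(n_bd, n_b, n_d, n_total):
    """Weights-of-evidence contrast C = W+ - W-.
    n_bd: cells with evidence AND a deposit; n_b: cells with evidence;
    n_d: cells with a deposit; n_total: all cells in the study area."""
    p_b_d = n_bd / n_d                          # P(B | deposit)
    p_b_nd = (n_b - n_bd) / (n_total - n_d)     # P(B | no deposit)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1.0 - p_b_d) / (1.0 - p_b_nd))
    return w_plus - w_minus

# Hypothetical grid: 10,000 cells, 100 deposit cells, evidence (e.g. a
# magnetic-low buffer) covers 2,000 cells and captures 80 of the deposits.
c = woe_contrast(n_bd=80, n_b=2000, n_d=100, n_total=10000)   # ≈ 2.81
```

Recomputing C for each upward-continued version of the magnetic grid is exactly how the decay of predictive power with flight height is quantified; C near zero means the layer carries no spatial association with the training sites.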
ATOMIC RESOLUTION CRYO ELECTRON MICROSCOPY OF MACROMOLECULAR COMPLEXES
ZHOU, Z. HONG
2013-01-01
Single-particle cryo electron microscopy (cryoEM) is a technique for determining three-dimensional (3D) structures from projection images of molecular complexes preserved in their “native,” noncrystalline state. Recently, atomic or near-atomic resolution structures of several viruses and protein assemblies have been determined by single-particle cryoEM, allowing ab initio atomic model building by following the amino acid side chains or nucleic acid bases identifiable in their cryoEM density maps. In particular, these cryoEM structures have revealed extended arms contributing to molecular interactions that are otherwise not resolved by the conventional structural method of X-ray crystallography at similar resolutions. High-resolution cryoEM requires careful consideration of a number of factors, including proper sample preparation to ensure structural homogeneity, optimal configuration of electron imaging conditions to record high-resolution cryoEM images, accurate determination of image parameters to correct image distortions, efficient refinement and computation to reconstruct a 3D density map, and finally appropriate choice of modeling tools to construct atomic models for functional interpretation. This progress illustrates the power of cryoEM and ushers it into the arsenal of structural biology, alongside conventional techniques of X-ray crystallography and NMR, as a major tool (and sometimes the preferred one) for the studies of molecular interactions in supramolecular assemblies or machines. PMID:21501817
NASA Astrophysics Data System (ADS)
Bártová, H.; Trojek, T.; Čechák, T.; Šefců, R.; Chlumská, Š.
2017-10-01
Heavy chemical elements present in old pigments can be identified in historical paintings using X-ray fluorescence analysis (XRF). This is a non-destructive analytical method frequently used in the examination of objects that require in situ analysis, where it is necessary to avoid damaging the object by taking samples. Different modalities are available, such as microanalysis, scanning of selected areas, or depth profiling techniques. Surface scanning is particularly valuable since 2D element distribution maps are much more understandable than the results of individual analyses. Information on the layered structure of the painting can also be obtained with handheld portable systems. The results presented in our paper combine 2D element distribution maps obtained by scanning analysis and depth profiling using conventional XRF. The latter is well suited to objects of art, as it can be evaluated from data measured with a portable XRF device. Depth profiling by conventional XRF is based on the differences in X-ray absorption in paint layers. The XRF technique was applied to the analysis of panel paintings of the Master of the St George Altarpiece, who was active in Prague in the 1470s and 1480s. The results were evaluated by taking micro-samples and performing a material analysis.
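The absorption idea behind XRF depth profiling can be reduced to Beer-Lambert attenuation: a fluorescence line emitted beneath a covering paint layer is attenuated as I = I₀·exp(−μρt), so a measured intensity ratio yields a thickness estimate. The sketch below inverts that relation with invented values (the attenuation coefficient, density, and counts are illustrative, not measured pigment data, and real profiling must also account for geometry and matrix effects):

```python
import math

def layer_thickness(i_measured, i_surface, mu_mass, rho):
    """Invert Beer-Lambert: t = -ln(I/I0) / (mu * rho), thickness in cm."""
    return -math.log(i_measured / i_surface) / (mu_mass * rho)

MU_MASS = 50.0     # mass attenuation coefficient (cm^2/g), assumed value
RHO = 6.0          # covering-layer density (g/cm^3), assumed value
i0 = 1000.0        # line intensity without the covering layer (counts)
i = 740.0          # line intensity attenuated by the covering layer (counts)

t_cm = layer_thickness(i, i0, MU_MASS, RHO)   # ≈ 0.0010 cm, i.e. ~10 µm
```

Comparing the attenuation of two lines of the same element (whose μ values differ strongly with energy) removes the need to know I₀, which is what makes the approach workable on a painting.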
Animation of Mapped Photo Collections for Storytelling
NASA Astrophysics Data System (ADS)
Fujita, Hideyuki; Arikawa, Masatoshi
Our research goal is to facilitate the sharing of stories with digital photographs. Some map websites now collect stories associated with peoples' relationships to places. Users map collections of places and include their intangible emotional associations with each location along with photographs, videos, etc. Though this framework of mapping stories is important, it is not sufficiently expressive to communicate stories in a narrative fashion. For example, when the number of the mapped collections of places is particularly large, it is neither easy for viewers to interpret the map nor is it easy for the creator to express a story as a series of events in the real world. This is because each narrative, in the form of a sequence of textual narratives, a sequence of photographs, a movie, or audio is mapped to just one point. As a result, it is up to the viewer to decide which points on the map must be read, and in what order. The conventional framework is fairly suitable for mapping and expressing fragments or snapshots of a whole story and not for conveying the whole story as a narrative using the entire map as the setting. We therefore propose a new framework, Spatial Slideshow, for mapping personal photo collections and representing them as stories such as route guides, sightseeing guides, historical topics, fieldwork records, personal diaries, and so on. It is a fusion of personal photo mapping and photo storytelling. Each story is conveyed through a sequence of mapped photographs, presented as a synchronized animation of a map and an enhanced photo slideshow. The main technical novelty of this paper is a method for creating three-dimensional animations of photographs that induce the visual effect of motion from photo to photo. We believe that the proposed framework may have considerable significance in facilitating the grassroots development of spatial content driven by visual communication concerning real-world locations or events.
mapKITE: a New Paradigm for Simultaneous Aerial and Terrestrial Geodata Acquisition and Mapping
NASA Astrophysics Data System (ADS)
Molina, P.; Blázquez, M.; Sastre, J.; Colomina, I.
2016-06-01
We introduce mapKITE, a new mobile geodata collection and post-processing method for simultaneous terrestrial and aerial acquisition. By combining two mapping technologies, terrestrial mobile mapping and unmanned aircraft aerial mapping, geodata are simultaneously acquired from air and ground. More specifically, a mapKITE geodata acquisition system consists of an unmanned aircraft and a terrestrial vehicle, which hosts the ground control station. By means of a real-time navigation system on the terrestrial vehicle, real-time waypoints are sent to the aircraft from the ground. By doing so, the aircraft is linked to the terrestrial vehicle through a "virtual tether," acting as a "mapping kite." In the article, we detail the concept of mapKITE as well as the various technologies and techniques involved: aircraft guidance and navigation based on IMU and GNSS, optical cameras for mapping and tracking, sensor orientation and calibration, etc. Moreover, we report on a new type of measurement introduced in mapKITE, namely point-and-scale photogrammetric measurements (of image coordinates and scale) for optical targets of known size installed on the ground vehicle roof. By means of accurate a posteriori trajectory determination of the terrestrial vehicle, mapKITE then benefits from kinematic ground control points which are photogrammetrically observed through point-and-scale measurements. Initial results for simulated configurations show that these measurements, added to the usual Integrated Sensor Orientation ones, reduce or even eliminate the need for conventional ground control points (therefore lowering mission costs) and enable self-calibration of the unmanned aircraft's interior orientation parameters in corridor configurations, in contrast to the situation in traditional corridor mapping. Finally, we report on current developments of the first mapKITE prototype, developed under the European Union Research and Innovation programme Horizon 2020.
The first mapKITE mission will be held at the BCN Drone Center (Collsuspina, Moià, Spain) in mid 2016.
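The point-and-scale idea can be sketched with the pinhole-camera relation: an optical target of known physical size, observed at a measurable size in the image, fixes the camera-to-target distance (the "scale" observation). A minimal sketch with illustrative numbers; the actual mapKITE camera parameters are not given in the abstract:

```python
# Similar-triangles (pinhole) relation behind a point-and-scale measurement.
# All parameter values below are assumptions for illustration only.
f_mm = 16.0          # focal length of the aerial camera
pixel_mm = 0.004     # pixel pitch (4 micron)
target_m = 0.50      # known diameter of the optical target on the vehicle roof
measured_px = 40.0   # measured target diameter in the image

# target_m / distance = (measured_px * pixel_mm) / f_mm
distance_m = target_m * f_mm / (measured_px * pixel_mm)
```

The recovered distance, combined with the image coordinates of the target center, is what turns the moving vehicle roof into a kinematic ground control point.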
Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo
2018-06-01
Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolved photon counting detector can be helpful to enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, thereby possibly leading to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging for reconstruction of CESBCT images acquired using an energy-resolving photon counting detector is proposed, and its performance was investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with the FBP-based projection-based weighting imaging method. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows significant improvement in the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
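The multiplicative EM update at the core of MAP-EM (shown here without the prior term, i.e. plain MLEM) can be sketched on a toy linear system; the matrix sizes and data are illustrative, not a CT geometry:

```python
import numpy as np

# MLEM sketch: y = A x with non-negative x, recovered by the classic
# multiplicative EM update. A, x_true and the iteration count are illustrative.
rng = np.random.default_rng(0)
A = rng.random((30, 10)) + 0.1      # system matrix (detector bins x pixels)
x_true = rng.random(10) + 0.5       # true image
y = A @ x_true                      # noiseless projection data

x = np.ones(10)                     # flat initial estimate
sens = A.sum(axis=0)                # sensitivity image, A^T 1
for _ in range(200):
    ratio = y / (A @ x + 1e-12)     # measured / estimated projections
    x *= (A.T @ ratio) / sens       # multiplicative EM update (keeps x >= 0)

err = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

MAP-EM modifies this update with a penalty derived from the image prior; the multiplicative structure, and the automatic non-negativity it gives, are the same.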
Near-edge X-ray refraction fine structure microscopy
Farmand, Maryam; Celestre, Richard; Denes, Peter; ...
2017-02-06
We demonstrate a method for obtaining increased spatial resolution and specificity in nanoscale chemical composition maps through the use of full refractive reference spectra in soft x-ray spectro-microscopy. Using soft x-ray ptychography, we measure both the absorption and refraction of x-rays through pristine reference materials as a function of photon energy and use these reference spectra as the basis for decomposing spatially resolved spectra from a heterogeneous sample, thereby quantifying the composition at high resolution. While conventional instruments are limited to absorption contrast, our novel refraction-based method takes advantage of the strongly energy dependent scattering cross-section and can see nearly five-fold improved spatial resolution on resonance.
Earth Resources Technology Satellite data collection project, ERTS - Bolivia. [thematic mapping
NASA Technical Reports Server (NTRS)
Brockmann, C. E.
1974-01-01
The Earth Resources Technology Satellite program of Bolivia has developed a multidisciplinary project to carry out investigations in cartography and to prepare various thematic maps. In cartography, investigations are being carried out with the ERTS-1 images and with existing maps to determine their application, on the one hand, to the preparation of new cartographic products and, on the other, to mapping those regions where the cartography is still deficient. The application of the MSS images to geological mapping has given more than satisfactory results. Working with conventional photointerpretation, it has been possible to prepare regional geological maps, tectonic maps, studies relative to mining, geomorphological maps, studies relative to petroleum exploration, volcanological maps and maps of hydrologic basins. In agriculture, the ERTS images are used to study land classification and forest and soils mapping.
Mani, Merry; Jacob, Mathews; Kelley, Douglas; Magnotta, Vincent
2017-01-01
Purpose: To introduce a novel method for the recovery of multi-shot diffusion weighted (MS-DW) images from echo-planar imaging (EPI) acquisitions. Methods: Current EPI-based MS-DW reconstruction methods rely on the explicit estimation of the motion-induced phase maps to recover artifact-free images. In the new formulation, the k-space data of the artifact-free DWI is recovered using a structured low-rank matrix completion scheme, which does not require explicit estimation of the phase maps. The structured matrix is obtained as the lifting of the multi-shot data. The smooth phase-modulations between shots manifest as null-space vectors of this matrix, which implies that the structured matrix is low-rank. The missing entries of the structured matrix are filled in using a nuclear-norm minimization algorithm subject to data consistency. The formulation enables the natural introduction of smoothness regularization, thus enabling implicit motion-compensated recovery of the MS-DW data. Results: Our experiments on in-vivo data show effective removal of artifacts arising from inter-shot motion using the proposed method. The method is shown to achieve better reconstruction than the conventional phase-based methods. Conclusion: We demonstrate the utility of the proposed method to effectively recover artifact-free images from Cartesian fully/under-sampled and partial Fourier acquired data without the use of explicit phase estimates. PMID:27550212
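Nuclear-norm minimization schemes of this kind are typically driven by singular-value soft-thresholding, the proximal operator of the nuclear norm. A minimal sketch of that step on an illustrative low-rank matrix (the lifting structure of the actual method is not modeled here):

```python
import numpy as np

# Singular-value soft-thresholding (SVT): the prox of tau * ||.||_* .
# Matrix size, rank and threshold are illustrative.
rng = np.random.default_rng(1)
L = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))  # rank <= 5

def svt(M, tau):
    """Shrink the singular values of M by tau, clipping at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X = svt(L, 0.5)
rank_after = np.linalg.matrix_rank(X, tol=1e-8)
```

Iterating this shrinkage, interleaved with a projection enforcing data consistency on the known entries, is the standard pattern for completing a low-rank structured matrix.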
Fast flow-based algorithm for creating density-equalizing map projections
Gastner, Michael T.; Seguy, Vivien; More, Pratyush
2018-01-01
Cartograms are maps that rescale geographic regions (e.g., countries, districts) such that their areas are proportional to quantitative demographic data (e.g., population size, gross domestic product). Unlike conventional bar or pie charts, cartograms can represent correctly which regions share common borders, resulting in insightful visualizations that can be the basis for further spatial statistical analysis. Computer programs can assist data scientists in preparing cartograms, but developing an algorithm that can quickly transform every coordinate on the map (including points that are not exactly on a border) while generating recognizable images has remained a challenge. Methods that translate the cartographic deformations into physics-inspired equations of motion have become popular, but solving these equations with sufficient accuracy can still take several minutes on current hardware. Here we introduce a flow-based algorithm whose equations of motion are numerically easier to solve compared with previous methods. The equations allow straightforward parallelization so that the calculation takes only a few seconds even for complex and detailed input. Despite the speedup, the proposed algorithm still keeps the advantages of previous techniques: With comparable quantitative measures of shape distortion, it accurately scales all areas, correctly fits the regions together, and generates a map projection for every point. We demonstrate the use of our algorithm with applications to the 2016 US election results, the gross domestic products of Indian states and Chinese provinces, and the spatial distribution of deaths in the London borough of Kensington and Chelsea between 2011 and 2014. PMID:29463721
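The core idea of density equalization can be illustrated in one dimension, where the flow field reduces to the cumulative density: each boundary moves so that every region's new width is proportional to its data value. A minimal sketch with illustrative values (the 2D flow-based algorithm of the paper is far more involved):

```python
import numpy as np

# 1D density equalization: map region boundaries through the cumulative
# data mass so that equal mass occupies equal length. Values are illustrative.
density = np.array([4.0, 1.0, 1.0, 2.0])   # e.g. population per region
edges = np.linspace(0.0, 4.0, 5)           # original region boundaries

mass = np.concatenate([[0.0], np.cumsum(density)])
new_edges = mass / mass[-1] * edges[-1]    # equalized boundaries
widths = np.diff(new_edges)                # new region widths
```

In two dimensions the same principle holds, but every map coordinate (not just boundaries) must be advected by a velocity field derived from the diffusing density, which is where the numerical cost discussed in the abstract arises.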
Nonlinear mapping of the luminance in dual-layer high dynamic range displays
NASA Astrophysics Data System (ADS)
Guarnieri, Gabriele; Ramponi, Giovanni; Bonfiglio, Silvio; Albani, Luigi
2009-02-01
It has long been known that the human visual system (HVS) has a nonlinear response to luminance. This nonlinearity can be quantified using the concept of just noticeable difference (JND), which represents the minimum amplitude of a specified test pattern an average observer can discern from a uniform background. The JND depends on the background luminance following a threshold versus intensity (TVI) function. It is possible to define a curve which maps physical luminances into a perceptually linearized domain. This mapping can be used to optimize a digital encoding, by minimizing the visibility of quantization noise. It is also commonly used in medical applications to display images adapting to the characteristics of the display device. High dynamic range (HDR) displays, which are beginning to appear on the market, can display luminance levels outside the range in which most standard mapping curves are defined. In particular, dual-layer LCD displays are able to extend the gamut of luminance offered by conventional liquid crystals towards the black region; in such areas suitable and HVS-compliant luminance transformations need to be determined. In this paper we propose a method, which is primarily targeted to the extension of the DICOM curve used in medical imaging, but also has a more general application. The method can be modified in order to compensate for the ambient light, which can be significantly greater than the black level of an HDR display and consequently reduce the visibility of the details in dark areas.
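The construction of such a perceptually linearized mapping can be sketched by stepping through luminance in JND-sized increments given a TVI function. A simple Weber-law TVI (threshold equal to 1% of the background) is assumed here purely for illustration; it is not the DICOM grayscale standard display function:

```python
import numpy as np

# Build a JND-indexed luminance curve by accumulating TVI-sized steps.
# The Weber-law TVI and the luminance range are simplifying assumptions.
def tvi(L):
    return 0.01 * L              # assumed 1% Weber fraction

L_min, L_max = 0.05, 600.0       # cd/m^2, plausible dual-layer display range
levels = [L_min]
while levels[-1] < L_max:
    levels.append(levels[-1] + tvi(levels[-1]))   # advance by one JND

jnd_curve = np.array(levels)     # luminance at JND index 0, 1, 2, ...
n_jnds = len(jnd_curve) - 1      # JNDs spanned by the display range
```

Inverting `jnd_curve` gives the display mapping: quantization steps of equal size in the JND index are, by construction, equally visible at every luminance level.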
A PDE approach for quantifying and visualizing tumor progression and regression
NASA Astrophysics Data System (ADS)
Sintay, Benjamin J.; Bourland, J. Daniel
2009-02-01
Quantification of changes in tumor shape and size allows physicians the ability to determine the effectiveness of various treatment options, adapt treatment, predict outcome, and map potential problem sites. Conventional methods are often based on metrics such as volume, diameter, or maximum cross sectional area. This work seeks to improve the visualization and analysis of tumor changes by simultaneously analyzing changes in the entire tumor volume. This method utilizes an elliptic partial differential equation (PDE) to provide a roadmap of boundary displacement that does not suffer from the discontinuities associated with other measures such as Euclidean distance. Streamline pathways defined by Laplace's equation (a commonly used PDE) are used to track tumor progression and regression at the tumor boundary. Laplace's equation is particularly useful because it provides a smooth, continuous solution that can be evaluated with sub-pixel precision on variable grid sizes. Several metrics are demonstrated including maximum, average, and total regression and progression. This method provides many advantages over conventional means of quantifying change in tumor shape because it is observer independent, stable for highly unusual geometries, and provides an analysis of the entire three-dimensional tumor volume.
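The roadmap construction can be sketched by relaxing Laplace's equation between the two boundaries: the solution interpolates smoothly from 0 on the inner contour to 1 on the outer, and streamlines of its gradient define a discontinuity-free correspondence between the surfaces. A minimal Jacobi-iteration sketch on an illustrative geometry:

```python
import numpy as np

# Solve Laplace's equation with u = 0 on an inner "old tumor" blob and
# u = 1 on the outer boundary. Geometry and iteration count are illustrative.
n = 41
u = np.zeros((n, n))
outer = np.ones((n, n), bool)
outer[1:-1, 1:-1] = False          # outer frame of the grid, u = 1
inner = np.zeros((n, n), bool)
inner[18:23, 18:23] = True         # inner blob, u = 0
u[outer] = 1.0

for _ in range(5000):              # Jacobi relaxation of the 5-point stencil
    u_new = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1))
    u_new[outer] = 1.0             # re-impose Dirichlet boundary conditions
    u_new[inner] = 0.0
    u = u_new

mid_profile = u[20, 22:]           # from the inner edge out to the boundary
```

Boundary displacement is then measured along streamlines of the gradient of `u`, which, unlike nearest-Euclidean-distance pairing, never cross or jump.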
Conversion of KEGG metabolic pathways to SBGN maps including automatic layout
2013-01-01
Background: Biologists make frequent use of databases containing large and complex biological networks. One popular database is the Kyoto Encyclopedia of Genes and Genomes (KEGG) which uses its own graphical representation and manual layout for pathways. While some general drawing conventions exist for biological networks, arbitrary graphical representations are very common. Recently, a new standard has been established for displaying biological processes, the Systems Biology Graphical Notation (SBGN), which aims to unify the look of such maps. Ideally, online repositories such as KEGG would automatically provide networks in a variety of notations including SBGN. Unfortunately, this is non-trivial, since converting between notations may add, remove or otherwise alter map elements so that the existing layout cannot be simply reused. Results: Here we describe a methodology for automatic translation of KEGG metabolic pathways into the SBGN format. We infer important properties of the KEGG layout and treat these as layout constraints that are maintained during the conversion to SBGN maps. Conclusions: This allows for the drawing and layout conventions of SBGN to be followed while creating maps that are still recognizably the original KEGG pathways. This article details the steps in this process and provides examples of the final result. PMID:23953132
Interpolation of diffusion weighted imaging datasets.
Dyrby, Tim B; Lundell, Henrik; Burke, Mark W; Reislev, Nina L; Paulson, Olaf B; Ptito, Maurice; Siebner, Hartwig R
2014-12-01
Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume-effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. For validation, we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical resolution and more anatomical details in complex regions such as tract boundaries and cortical layers, which are normally only visualized at higher image resolutions. Similar results were found with a typical clinical human DWI dataset. However, a possible bias in quantitative values imposed by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid in tractography and microstructural mapping of tissue compartments. Copyright © 2014. Published by Elsevier Inc.
Hagberg, Gisela E; Mamedov, Ilgar; Power, Anthony; Beyerlein, Michael; Merkle, Hellmut; Kiselev, Valerij G; Dhingra, Kirti; Kubìček, Vojtĕch; Angelovski, Goran; Logothetis, Nikos K
2014-01-01
Calcium-sensitive MRI contrast agents can only yield quantitative results if the agent concentration in the tissue is known. The agent concentration could be determined by diffusion modeling, if relevant parameters were available. We have established an MRI-based method capable of determining diffusion properties of conventional and calcium-sensitive agents. Simulations and experiments demonstrate that the method is applicable both for conventional contrast agents with a fixed relaxivity value and for calcium-sensitive contrast agents. The full pharmacokinetic time-course of gadolinium concentration estimates was observed by MRI before, during and after intracerebral administration of the agent, and the effective diffusion coefficient D* was determined by voxel-wise fitting of the solution to the diffusion equation. The method yielded whole-brain coverage with high spatial and temporal sampling. The use of two types of MRI sequences for sampling of the diffusion time courses was investigated: Look-Locker-based quantitative T(1) mapping, and T(1)-weighted MRI. The observation time of the proposed MRI method is long (up to 20 h) and consequently the diffusion distances covered are also long (2-4 mm). Despite this difference, the D* values in vivo were in agreement with previous findings using optical measurement techniques, based on observation times of a few minutes. The effective diffusion coefficient determined for the calcium-sensitive contrast agents may be used to determine local tissue concentrations and to design infusion protocols that maintain the agent concentration at a steady state, thereby enabling quantitative sensing of the local calcium concentration. Copyright © 2014 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Dan; Ruan, Dan; O’Connor, Daniel
Purpose: To deliver high quality intensity modulated radiotherapy (IMRT) using novel generalized sparse orthogonal collimators (SOCs), the authors introduce a novel direct aperture optimization (DAO) approach based on a discrete rectangular representation. Methods: A total of seven patients—two glioblastoma multiforme, three head & neck (including one with three prescription doses), and two lung—were included. 20 noncoplanar beams were selected using a column generation and pricing optimization method. The SOC is a generalization of conventional orthogonal collimators, with N leaves in each collimator bank, where N = 1, 2, or 4. The SOC degenerates to conventional jaws when N = 1. For SOC-based IMRT, rectangular aperture optimization (RAO) was performed to optimize the fluence maps using the rectangular representation, producing fluence maps that can be directly converted into a set of deliverable rectangular apertures. In order to optimize the dose distribution and minimize the number of apertures used, the overall objective was formulated to incorporate an L2 penalty reflecting the difference between the prescription and the projected doses, and an L1 sparsity regularization term to encourage a low number of nonzero rectangular basis coefficients. The optimization problem was solved using the Chambolle–Pock algorithm, a first-order primal–dual algorithm. Performance of RAO was compared to conventional two-step IMRT optimization including fluence map optimization and direct stratification for multileaf collimator (MLC) segmentation (DMS) using the same number of segments. For the RAO plans, segment travel time for SOC delivery was evaluated for the N = 1, N = 2, and N = 4 SOC designs to characterize the improvement in delivery efficiency as a function of N. Results: Comparable PTV dose homogeneity and coverage were observed between the RAO and the DMS plans. The RAO plans were slightly superior to the DMS plans in sparing critical structures.
On average, the maximum and mean critical organ doses were reduced by 1.94% and 1.44% of the prescription dose. The average number of delivery segments was 12.68 segments per beam for both the RAO and DMS plans. The N = 2 and N = 4 SOC designs were, on average, 1.56 and 1.80 times more efficient than the N = 1 SOC design to deliver. The mean aperture size produced by the RAO plans was 3.9 times larger than that of the DMS plans. Conclusions: The DAO and dose domain optimization approach enabled high quality IMRT plans using a low-complexity collimator setup. The dosimetric quality is comparable or slightly superior to conventional MLC-based IMRT plans using the same number of delivery segments. The SOC IMRT delivery efficiency can be significantly improved by increasing the leaf number, but the number is still significantly lower than the number of leaves in a typical MLC.
NASA Astrophysics Data System (ADS)
Zalev, Jason; Clingman, Bryan; Smith, Remie J.; Herzog, Don; Miller, Tom; Stavros, A. Thomas; Ermilov, Sergey; Conjusteau, André; Tsyboulski, Dmitri; Oraevsky, Alexander A.; Kist, Kenneth; Dornbluth, N. C.; Otto, Pamela
2013-03-01
We report on findings from the clinical feasibility study of the Imagio™ Breast Imaging System, which acquires two-dimensional opto-acoustic (OA) images co-registered with conventional ultrasound using a specialized duplex hand-held probe. Dual-wavelength opto-acoustic technology is used to generate parametric maps based upon total hemoglobin and its oxygen saturation in breast tissues. This may provide functional diagnostic information pertaining to tumor metabolism and microvasculature, which is complementary to morphological information obtained with conventional gray-scale ultrasound. We present co-registered opto-acoustic and ultrasonic images of malignant and benign tumors from a recent clinical feasibility study. The clinical results illustrate that the technology may have the capability to improve the efficacy of breast tumor diagnosis. In doing so, it may have the potential to reduce biopsies and to characterize cancers that were not seen well with conventional gray-scale ultrasound alone.
Pofelski, A; Woo, S Y; Le, B H; Liu, X; Zhao, S; Mi, Z; Löffler, S; Botton, G A
2018-04-01
A strain characterization technique based on Moiré interferometry in a scanning transmission electron microscope (STEM) and geometrical phase analysis (GPA) method is demonstrated. The deformation field is first captured in a single STEM Moiré hologram composed of multiple sets of periodic fringes (Moiré patterns) generated from the interference between the periodic scanning grating, fixing the positions of the electron probe on the sample, and the crystal structure. Applying basic principles from sampling theory, the Moiré patterns arrangement is then simulated using a STEM electron micrograph reference to convert the experimental STEM Moiré hologram into information related to the crystal lattice periodicities. The GPA method is finally applied to extract the 2D relative strain and rotation fields. The STEM Moiré interferometry enables the local information to be de-magnified to a large length scale, comparable to what can be achieved in dark-field electron holography. The STEM Moiré GPA method thus extends the conventional high-resolution STEM GPA capabilities by providing comparable quantitative 2D strain mapping with a larger field of view (up to a few microns). Copyright © 2017 Elsevier B.V. All rights reserved.
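The GPA step can be sketched in one dimension: isolate one Fourier component of a periodic (Moiré-like) pattern, and the local phase of that component encodes the lattice displacement, whose gradient gives the strain. The synthetic lattice and strain below are illustrative, chosen so the frequencies fall on exact FFT bins:

```python
import numpy as np

# 1D geometrical phase analysis sketch. Frequencies are illustrative and
# chosen as integer FFT bins so the phase recovery is exact.
N = 1024
x = np.arange(N)
g_ref = 64 / N                          # reference lattice frequency
g_def = 60 / N                          # deformed (stretched) lattice frequency
signal = np.cos(2 * np.pi * g_def * x)  # uniformly strained lattice

F = np.fft.fft(signal)
k = np.fft.fftfreq(N)
mask = np.abs(k - g_ref) < g_ref / 2    # keep only the +g Bragg peak
analytic = np.fft.ifft(F * mask)        # complex lattice component
phase = np.unwrap(np.angle(analytic))   # = 2*pi*g_def*x (+ const)

# Phase slope relative to the reference lattice yields the strain.
slope = np.polyfit(2 * np.pi * g_ref * x, phase, 1)[0]   # = g_def / g_ref
strain_est = 1.0 / slope - 1.0
```

In the STEM Moiré case the same phase analysis is applied to the Moiré fringes, whose much lower spatial frequency is what allows strain mapping over a micron-scale field of view.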
Cooperation-Controlled Learning for Explicit Class Structure in Self-Organizing Maps
Kamimura, Ryotaro
2014-01-01
We attempt to demonstrate the effectiveness of multiple points of view toward neural networks. By restricting ourselves to two points of view of a neuron, we propose a new type of information-theoretic method called “cooperation-controlled learning.” In this method, individual and collective neurons are distinguished from one another, and we suppose that the characteristics of individual and collective neurons are different. To implement individual and collective neurons, we prepare two networks, namely, cooperative and uncooperative networks. The roles of these networks and the roles of individual and collective neurons are controlled by the cooperation parameter. As the parameter is increased, the role of cooperative networks becomes more important in learning, and the characteristics of collective neurons become more dominant. On the other hand, when the parameter is small, individual neurons play a more important role. We applied the method to the automobile and housing data from the machine learning database and examined whether explicit class boundaries could be obtained. Experimental results showed that cooperation-controlled learning, in particular taking into account information on input units, could be used to produce clearer class structure than conventional self-organizing maps. PMID:25309950
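For reference, the conventional self-organizing map that the method is compared against can be sketched in a few lines; the cooperation-controlled mechanism itself is not modeled here, and the map size, learning schedules and data are illustrative:

```python
import numpy as np

# Minimal 1D self-organizing map: winner-take-all matching plus a
# Gaussian neighborhood update with decaying rate and radius.
rng = np.random.default_rng(5)
data = rng.random((200, 3))        # illustrative 3-feature dataset
W = rng.random((10, 3))            # weight vectors of 10 map units
W0 = W.copy()                      # keep the untrained map for comparison

T = 2000
for t in range(T):
    v = data[rng.integers(len(data))]
    bmu = np.argmin(((W - v) ** 2).sum(1))      # best matching unit
    lr = 0.5 * (1 - t / T)                      # decaying learning rate
    sigma = 3.0 * (1 - t / T) + 0.5             # shrinking neighborhood
    h = np.exp(-((np.arange(10) - bmu) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, None] * (v - W)              # neighborhood update

def q_error(W, data):
    """Mean squared distance from each sample to its best matching unit."""
    return np.mean([((W - v) ** 2).sum(1).min() for v in data])
```

The paper's contribution replaces the single neighborhood function with cooperative and uncooperative networks mixed by the cooperation parameter; the matching and update skeleton stays the same.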
NASA Astrophysics Data System (ADS)
Castilla, G.
2004-09-01
Landcover maps typically represent the territory as a mosaic of contiguous units (polygons) that are assumed to correspond to geographic entities (e.g. lakes, forests or villages). They may also be viewed as representing a particular level of a landscape hierarchy where each polygon is a holon - an object made of subobjects and part of a superobject. The focal level portrayed in the map is distinguished from other levels by the average size of objects compounding it. Moreover, the focal level is bounded by the minimum size that objects of this level are supposed to have. Based on this framework, we have developed a segmentation method that defines a partition on a multiband image such that i) the mean size of segments is close to the one specified; ii) each segment exceeds the required minimum size; and iii) the internal homogeneity of segments is maximal given the size constraints. This paper briefly describes the method, focusing on its region merging stage. The most distinctive feature of the latter is that while the merging sequence is ordered by increasing dissimilarity as in conventional methods, there is no need to define a threshold on the dissimilarity measure between adjacent segments.
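The region-merging stage can be sketched in one dimension: repeatedly merge the most similar adjacent pair, restricted to pairs where at least one side is still under the minimum size, and stop once every segment meets it; no dissimilarity threshold is ever needed. A toy sketch with illustrative values, using the difference of segment means as the dissimilarity:

```python
# Size-constrained greedy region merging on a 1D "image".
# Values and the minimum size are illustrative.
values = [1.0, 1.1, 1.05, 5.0, 5.2, 9.0, 9.1, 8.9]   # segment mean values
sizes = [1] * len(values)                             # segment sizes (pixels)
min_size = 2

while any(s < min_size for s in sizes):
    # candidate merges: adjacent pairs with at least one undersized side
    cands = [i for i in range(len(values) - 1)
             if sizes[i] < min_size or sizes[i + 1] < min_size]
    i = min(cands, key=lambda i: abs(values[i] - values[i + 1]))
    w = sizes[i] + sizes[i + 1]
    values[i] = (values[i] * sizes[i] + values[i + 1] * sizes[i + 1]) / w
    sizes[i] = w
    del values[i + 1], sizes[i + 1]
```

Because merging proceeds in order of increasing dissimilarity and terminates on the size criterion alone, the three natural groups survive as segments without any tuned dissimilarity cutoff.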
NASA Astrophysics Data System (ADS)
Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza
2017-06-01
Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are linear polarization intensities, and several statistical and physical decompositions such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, allows non-spherical and non-linear patterns in the data structure to be clustered easily. In addition, in order to enhance the results, a Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g. a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification (e.g. 5% in overall accuracy). Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
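The kernel trick the method relies on can be sketched with a hard kernelized C-means on data that plain C-means cannot separate: two concentric rings. For a compact, deterministic sketch the iteration starts from a partly corrupted labeling, and the PSO tuning step of the paper is not modeled:

```python
import numpy as np

# Hard kernel C-means (kernel k-means) with an RBF kernel on two rings.
# Data, bandwidth and the corrupted initialization are illustrative.
rng = np.random.default_rng(2)
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
r = np.repeat([1.0, 4.0], n // 2)                       # two ring radii
X = np.c_[r * np.cos(theta), r * np.sin(theta)] + rng.normal(0, 0.1, (n, 2))
true = np.repeat([0, 1], n // 2)

d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)                                   # RBF kernel matrix

labels = true.copy()
flip = rng.random(n) < 0.2                              # corrupt 20% of labels
labels[flip] = 1 - labels[flip]

for _ in range(10):
    dist = np.empty((n, 2))
    for c in range(2):
        m = labels == c
        # ||phi(x) - mean_c||^2 expressed purely through kernel values
        dist[:, c] = (np.diag(K) - 2 * K[:, m].mean(1)
                      + K[np.ix_(m, m)].mean())
    labels = dist.argmin(1)

acc = max((labels == true).mean(), (labels != true).mean())
```

The assignment step never needs explicit feature-space coordinates: distances to each cluster mean are written entirely in kernel evaluations, which is what lets the non-spherical ring structure separate cleanly.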
Investigation of gunshot residue patterns using milli-XRF-techniques: first experiences in casework
NASA Astrophysics Data System (ADS)
Schumacher, Rüdiger; Barth, Martin; Neimke, Dieter; Niewöhner, Ludwig
2010-06-01
The investigation of gunshot residue (GSR) patterns for shooting range estimation is usually based on visualizing the lead, copper, or nitrocellulose distributions on targets like fabric or adhesive tape by chemographic color tests. The method usually provides good results but has its drawbacks when it comes to the examination of ammunition containing lead-free primers or bloody clothing. A milli-X-ray fluorescence (m-XRF) spectrometer with a large motorized stage can help to circumvent these problems, allowing the acquisition of XRF mappings of relatively large areas (up to 20 x 20 cm) at millimeter resolution within a reasonable time (2-10 hours) for almost all elements. First experiences in GSR casework at the Forensic Science Institute of the Bundeskriminalamt (BKA) have shown that m-XRF is a useful supplement to conventional methods in shooting range estimation, which helps if there are problems in transferring a GSR pattern to secondary targets (e.g. bloody or stained garments) or if there is no suitable color test available for the element of interest. The resulting elemental distributions are a good estimate for the shooting range and can be evaluated by calculating radial distributions or integrated count rates of irregularly shaped regions, like pieces of human skin, which are too small to be investigated with a conventional WD-XRF spectrometer. Besides the mapping mode, the milli-XRF also offers point and line scan modes, which can also be utilized in gunshot crime investigations as a quick survey tool to identify bullet holes based on the elements present in the wipe ring.
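The radial-distribution evaluation mentioned above can be sketched by binning a 2D elemental count map into concentric rings around the bullet hole and averaging counts per ring. The synthetic Gaussian "deposit" below stands in for a real lead map; all sizes are illustrative:

```python
import numpy as np

# Radial distribution of a 2D count map around a known center.
# The Gaussian deposit and ring width are illustrative.
n = 101
y, x = np.mgrid[:n, :n]
cx = cy = n // 2                            # bullet-hole position
r = np.hypot(x - cx, y - cy)
counts = np.exp(-r**2 / (2 * 15**2))        # synthetic GSR deposit, sigma 15 px

nbins = 10
ring = np.clip((r / 5).astype(int), 0, nbins - 1)   # 5-px wide rings
radial = np.bincount(ring.ravel(), weights=counts.ravel(), minlength=nbins)
area = np.bincount(ring.ravel(), minlength=nbins)   # pixels per ring
per_area = radial / area                            # mean counts per ring
```

How quickly `per_area` falls off with ring index is the quantity compared against test shots at known distances to estimate the shooting range.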
Fixed Point Results of Locally Contractive Mappings in Ordered Quasi-Partial Metric Spaces
Arshad, Muhammad; Ahmad, Jamshaid
2013-01-01
Fixed point results for a self-map satisfying locally contractive conditions on a closed ball in an ordered 0-complete quasi-partial metric space have been established. Instead of monotone mappings, the notion of dominated mappings is applied. We have used a weaker metric, weaker contractive conditions, and weaker restrictions to obtain unique fixed points. An example is given which shows how this result can be used when the corresponding conventional results cannot be. Our results generalize, extend, and improve several well-known conventional results. PMID:24062629
Geodesy and cartography of the Martian satellites
NASA Technical Reports Server (NTRS)
Batson, R. M.; Edwards, Kathleen; Duxbury, T. C.
1992-01-01
The difficulties connected with conventional maps of Phobos and Deimos are largely overcome by producing maps in digital form, i.e., by projecting Viking Orbiter images onto a global topographic model made from collections of radii derived by photogrammetry. The resulting digital mosaics are then formatted as arrays of body-centered latitudes, longitudes, radii, and brightness values of Viking Orbiter images. The Phobos mapping described was done with Viking Orbiter data. Significant new coverage was obtained by the Soviet Phobos mission. The mapping of Deimos is in progress, using the techniques developed for Phobos.
Correlation mapping microscopy
NASA Astrophysics Data System (ADS)
McGrath, James; Alexandrov, Sergey; Owens, Peter; Subhash, Hrebesh M.; Leahy, Martin J.
2015-03-01
Changes in the microcirculation are associated with conditions such as Raynaud's disease. Current modalities used to assess the microcirculation, such as nailfold capillaroscopy, are limited by their depth ambiguity. A correlation mapping technique was recently developed to extend the capabilities of Optical Coherence Tomography to generate depth-resolved images of the microcirculation. Here we present the extension of this technique to microscopy modalities, including confocal microscopy. It is shown that this correlation mapping microscopy technique can extend the capabilities of conventional microscopy to enable mapping of vascular networks in vivo with high spatial resolution.
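The principle behind correlation mapping can be sketched directly: regions containing moving scatterers (e.g. flowing blood) decorrelate between consecutive frames, while static tissue stays highly correlated, so a windowed inter-frame correlation coefficient separates vasculature from background. The frame contents below are synthetic and illustrative:

```python
import numpy as np

# Windowed inter-frame correlation: static tissue vs. a decorrelated "vessel".
rng = np.random.default_rng(3)
static = rng.random((64, 64))                   # fixed tissue speckle
frame_a = static + rng.normal(0, 0.01, (64, 64))
frame_b = static + rng.normal(0, 0.01, (64, 64))
frame_b[20:40, 20:40] = rng.random((20, 20))    # "vessel": new speckle pattern

def window_corr(a, b, i, j, w=7):
    """Pearson correlation of the two frames over a w x w window at (i, j)."""
    pa = a[i:i + w, j:j + w].ravel()
    pb = b[i:i + w, j:j + w].ravel()
    return np.corrcoef(pa, pb)[0, 1]

corr_static = window_corr(frame_a, frame_b, 2, 2)     # tissue region
corr_vessel = window_corr(frame_a, frame_b, 25, 25)   # inside the vessel
```

Sliding the window over the whole frame pair and thresholding the resulting correlation map is what produces the depth-resolved vascular image.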
Ren, Jiliang; Yuan, Ying; Wu, Yingwei; Tao, Xiaofeng
2018-05-02
The overlap of morphological features and mean ADC values restricts the clinical application of MRI in the differential diagnosis of orbital lymphoma and idiopathic orbital inflammatory pseudotumor (IOIP). In this paper, we aimed to retrospectively evaluate the combined diagnostic value of conventional magnetic resonance imaging (MRI) and whole-tumor histogram analysis of apparent diffusion coefficient (ADC) maps in the differentiation of the two lesions. In total, 18 patients with orbital lymphoma and 22 patients with IOIP were included, all of whom underwent both conventional MRI and diffusion-weighted imaging before treatment. Conventional MRI features and histogram parameters derived from ADC maps, including mean ADC (ADCmean), median ADC (ADCmedian), skewness, kurtosis, and the 10th, 25th, 75th and 90th percentiles of ADC (ADC10, ADC25, ADC75, ADC90), were evaluated and compared between orbital lymphoma and IOIP. Multivariate logistic regression analysis was used to identify the most valuable variables for discrimination. A differential model was built upon the selected variables, and receiver operating characteristic (ROC) analysis was performed to determine the differential ability of the model. Multivariate logistic regression showed that ADC10 (P = 0.023) and involvement of the orbital preseptal space (P = 0.029) were the most promising indexes for discriminating orbital lymphoma from IOIP. The logistic model defined by ADC10 and involvement of the orbital preseptal space achieved an AUC of 0.939, with a sensitivity of 77.30% and a specificity of 94.40%. The conventional MRI feature of orbital preseptal space involvement and the ADC histogram parameter ADC10 are valuable in the differential diagnosis of orbital lymphoma and IOIP.
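As a rough illustration of the pipeline above, the sketch below computes ADC10 from whole-tumor voxel histograms and fits a two-variable logistic model. All values, the preseptal-involvement flag, and the separation between groups are synthetic and chosen for illustration; this does not reproduce the study's data or exact model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic whole-tumor ADC voxel values (units: 10^-3 mm^2/s); lymphoma
# typically shows lower diffusivity than inflammatory pseudotumor.
lymphoma = [rng.normal(0.7, 0.15, 500) for _ in range(18)]
ioip = [rng.normal(1.1, 0.2, 500) for _ in range(22)]

def adc10(voxels):
    """10th percentile of the whole-tumor ADC histogram (ADC10)."""
    return np.percentile(voxels, 10)

# Feature matrix: ADC10 plus a binary 'preseptal involvement' flag
# (fabricated here as perfectly separating, purely for illustration).
X = np.array([[adc10(v), 1] for v in lymphoma] + [[adc10(v), 0] for v in ioip])
y = np.array([1] * 18 + [0] * 22)  # 1 = lymphoma, 0 = IOIP

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.3f}")
```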
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled with water uniformly (the conventional initial estimate), with bone uniformly, with the average μ-value uniformly (the IAM magnitude initialization method), and with the perfect spatial μ-distribution but a wrong magnitude (initialization in terms of distribution). A 3D GATE simulation was also performed for the chest phantom under typical clinical scanning conditions, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., the average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases.
Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-maps/emission images when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low-count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections, as demonstrated in this work.
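The core of the IAM idea can be illustrated with a toy calculation (not the authors' implementation): filling the initial μ-map with the object's average μ-value makes its forward projection match the reference from the start, unlike a uniform-water initialization. The μ-values and the simple line-integral model below are illustrative.

```python
import numpy as np

# Toy 1D "scan line": water background with a bone insert.
mu_water, mu_bone = 0.096, 0.17            # 1/cm at 511 keV (approximate)
reference = np.array([mu_water] * 40 + [mu_bone] * 10 + [mu_water] * 40)

init_water = np.full_like(reference, mu_water)        # conventional init
init_iam = np.full_like(reference, reference.mean())  # IAM init

def offset(init):
    """Relative mismatch of the forward-projected attenuation
    (here a plain line integral, i.e. a sum along the line)."""
    return abs(init.sum() - reference.sum()) / reference.sum()

# The IAM initialization starts with essentially zero scale offset,
# which is the quantity MLAA would otherwise have to absorb.
print(offset(init_water), offset(init_iam))
```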
NASA Astrophysics Data System (ADS)
Denize, J.; Corgne, S.; Todoroff, P.; LE Mezo, L.
2015-12-01
In Reunion, a tropical island of 2,512 km², 700 km east of Madagascar in the Indian Ocean and constrained by rugged relief, agricultural sectors compete for highly fragmented agricultural land farmed by heterogeneous systems ranging from corporate to small-scale farming. Policymakers, planners, and institutions are in dire need of reliable and up-to-date land use references. In practice, conventional land use mapping methods are inefficient in the tropics because of frequent cloud cover and the loosely synchronized vegetative cycles of crops under a constant temperature. This study aims to provide an appropriate method for the identification and mapping of tropical crops by remote sensing. For this purpose, we assess the potential of polarimetric SAR imagery associated with machine learning algorithms. The method was developed and tested on a 25 km × 25 km study area using six full-polarization RADARSAT-2 images acquired in 2014. A set of radar indicators (backscatter coefficients, band ratios, indices, polarimetric decompositions (Freeman-Durden, Van Zyl, Yamaguchi, Cloude and Pottier, Krogager), texture, etc.) was calculated from the coherency matrix. A random forest procedure allowed the selection of the most important variables of each image, reducing the dimension of the dataset and the processing time. Support Vector Machines (SVM) then classified these indicators based on a training database created from field observations in 2013. The method shows an overall accuracy of 88% with a Kappa index of 0.82 for the identification of four major crops.
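A minimal sketch of the two-step scheme described above, using scikit-learn on synthetic data in place of the radar indicator stack; the feature count, sample size, and hyperparameters are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Stand-in for the stack of radar indicators (backscatter, band ratios,
# polarimetric decompositions, texture, ...): 40 features, 4 crop classes.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: random forest ranks the variables; keep the 10 most important.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[-10:]

# Step 2: an SVM classifies the reduced indicator set.
svm = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
kappa = cohen_kappa_score(y_te, svm.predict(X_te[:, top]))
print(f"Kappa = {kappa:.2f}")
```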
Application of fuzzy system theory in addressing the presence of uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yusmye, A. Y. N.; Goh, B. Y.; Adnan, N. F.
In this paper, combinations of fuzzy system theory with finite element methods are presented and discussed as a way to deal with uncertainties. Addressing uncertainty is necessary to prevent the failure of materials in engineering. There are three types of uncertainties: stochastic, epistemic, and error uncertainties. In this paper, epistemic uncertainties are considered; epistemic uncertainty exists as a result of incomplete information and a lack of knowledge or data. Fuzzy system theory is a non-probabilistic method, and it is more appropriate than a statistical approach for interpreting uncertainty when dealing with a lack of data. Fuzzy system theory comprises a number of processes, starting with the conversion of crisp inputs to fuzzy inputs through fuzzification, followed by the main process, known as mapping; the term mapping here means the logical relationship between two or more entities. In this study, the fuzzy inputs are numerically integrated based on the extension principle method. In the final stage, the defuzzification process is implemented; defuzzification converts the fuzzy outputs to crisp outputs. Several illustrative examples are given, and the simulations showed that the proposed method produces more conservative results than the conventional finite element method.
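The fuzzification-mapping-defuzzification chain can be sketched with alpha-cut interval arithmetic, one common numerical realisation of the extension principle. The triangular input, the toy monotone model response, and centroid defuzzification below are illustrative choices, not the paper's implementation.

```python
import numpy as np

def tri_alpha_cut(a, b, c, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def model(E):
    """Toy stand-in for a finite element response: deflection ~ 1/E."""
    return 1000.0 / E

# Fuzzification: fuzzy Young's modulus E (GPa); mapping: propagate each
# alpha-cut through the monotone model (endpoints swap for 1/E).
alphas = np.linspace(0.0, 1.0, 11)
lo, hi = [], []
for alpha in alphas:
    e_lo, e_hi = tri_alpha_cut(180.0, 200.0, 220.0, alpha)
    lo.append(model(e_hi))
    hi.append(model(e_lo))

# Defuzzification: centroid of the output fuzzy number, using the
# alpha levels as membership weights.
xs = np.array(lo + hi[::-1])
ms = np.concatenate([alphas, alphas[::-1]])
crisp = float((xs * ms).sum() / ms.sum())
print(f"crisp deflection = {crisp:.2f}")
```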
Optimal and fast E/B separation with a dual messenger field
NASA Astrophysics Data System (ADS)
Kodi Ramanah, Doogesh; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-05-01
We adapt our recently proposed dual messenger algorithm for spin field reconstruction and showcase its efficiency and effectiveness in Wiener filtering polarized cosmic microwave background (CMB) maps. Unlike conventional preconditioned conjugate gradient (PCG) solvers, our preconditioner-free technique can deal with high-resolution joint temperature and polarization maps with inhomogeneous noise distributions and arbitrary mask geometries with relative ease. Various convergence diagnostics illustrate the high quality of the dual messenger reconstruction. In contrast, the PCG implementation fails to converge to a reasonable solution for the specific problem considered. The implementation of the dual messenger method is straightforward and guarantees numerical stability and convergence. We show how the algorithm can be modified to generate fluctuation maps, which, combined with the Wiener filter solution, yield unbiased constrained signal realizations, consistent with observed data. This algorithm presents a pathway to exact global analyses of high-resolution and high-sensitivity CMB data for a statistically optimal separation of E and B modes. It is therefore relevant for current and next-generation CMB experiments, in the quest for the elusive primordial B-mode signal.
RECENT DEVELOPMENTS IN THE U. S. GEOLOGICAL SURVEY'S LANDSAT IMAGE MAPPING PROGRAM.
Brownworth, Frederick S.; Rohde, Wayne G.
1986-01-01
At the 1984 ASPRS-ACSM Convention in Washington, D. C. a paper on 'The Emerging U. S. Geological Survey Image Mapping Program' was presented that discussed recent satellite image mapping advancements and published products. Since then Landsat image mapping has become an integral part of the National Mapping Program. The Survey currently produces about 20 Landsat multispectral scanner (MSS) and Thematic Mapper (TM) image map products annually at 1:250,000 and 1:100,000 scales, respectively. These Landsat image maps provide users with a regional or synoptic view of an area. The resultant geographical presentation of the terrain and cultural features will help planners and managers make better decisions regarding the use of our national resources.
A statistical approach for validating eSOTER and digital soil maps in front of traditional soil maps
NASA Astrophysics Data System (ADS)
Bock, Michael; Baritz, Rainer; Köthe, Rüdiger; Melms, Stephan; Günther, Susann
2015-04-01
During the European research project eSOTER, three different Digital Soil Maps (DSM) were developed for the pilot area Chemnitz 1:250,000 (FP7 eSOTER project, grant agreement nr. 211578). The core task of the project was to revise the SOTER method for the interpretation of soil and terrain data. One of the working hypotheses was that eSOTER does not only provide terrain data with typical soil profiles, but that the new products actually perform like a conceptual soil map. The three eSOTER maps for the pilot area differed considerably in spatial representation and content of soil classes. In this study we compare the three eSOTER maps against existing reconnaissance soil maps, keeping in mind that traditional soil maps have many subjective issues and intentional bias regarding the overestimation and emphasis of certain features. Hence, a true validation of the proper representation of modeled soil maps is hardly possible; rather, a statistical comparison between modeled and empirical approaches is possible. If eSOTER data represent conceptual soil maps, then different eSOTER, DSM and conventional maps from various sources and different regions could be harmonized towards consistent new data sets for large areas, including the whole European continent. One of the eSOTER maps has been developed closely to the traditional SOTER method: terrain classification data (derived from the SRTM DEM) were combined with lithology data (a re-interpreted geological map); the corresponding terrain units were then extended with soil information: a very dense regional soil profile data set was used to define soil mapping units based on a statistical grouping of terrain units. The second map is a pure DSM map using continuous terrain parameters instead of terrain classification; radiospectrometric data were used to supplement parent material information from geology maps. The classification method Random Forest was used.
The third approach predicts soil diagnostic properties based on covariates, similar to DSM practices; in addition, multi-temporal MODIS data were used; the resulting soil map is the product of these diagnostic layers, producing a map of soil reference groups (classified according to WRB). Because the third approach was applied to a larger test area in central Europe and, compared to the first two approaches, worked with coarser input data, comparability is only partly fulfilled. To evaluate the usability of the three eSOTER maps, and to make a comparison among them, traditional soil maps 1:200,000 and 1:50,000 were used as reference data sets. Three statistical methods were applied: (i) in a moving window, the distribution of the soil classes of each DSM product was compared to that of the soil maps by calculating the corrected coefficient of contingency, (ii) the predictive power of each of the eSOTER maps was determined, and (iii) the degree of consistency was derived. The latter is based on weighting the match of occurring class combinations via expert knowledge and recalculating the proportions of map appearance with these weights. To re-check the validation results, a field study by local soil experts was conducted. The results show clearly that the first eSOTER approach, based on the terrain classification and reinterpreted parent material information, has the greatest similarity with traditional soil maps. The spatial differentiation offered by such an approach is well suited to serve as a conceptual soil map. Therefore, eSOTER can be a tool for soil mappers to generate conceptual soil maps in a faster and more consistent way. This conclusion is at least valid for overview scales such as 1:250,000.
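Method (i) relies on the corrected coefficient of contingency between two categorical maps. Below is a sketch of the standard Pearson contingency coefficient with the usual k-class correction, applied to two flattened class arrays; the eSOTER study's exact windowing and weighting scheme is not reproduced here.

```python
import numpy as np

def corrected_contingency(map_a, map_b):
    """Corrected (Pearson) coefficient of contingency between two
    categorical maps given as flat class arrays; returns a value in
    [0, 1], with 1 indicating perfect association."""
    cats_a, ia = np.unique(map_a, return_inverse=True)
    cats_b, ib = np.unique(map_b, return_inverse=True)
    table = np.zeros((len(cats_a), len(cats_b)))
    np.add.at(table, (ia, ib), 1)           # cross-tabulation
    n = table.sum()
    expected = np.outer(table.sum(1), table.sum(0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    c = np.sqrt(chi2 / (chi2 + n))          # Pearson's C
    k = min(table.shape)                    # smaller number of classes
    c_max = np.sqrt((k - 1) / k)            # upper bound of C
    return c / c_max

rng = np.random.default_rng(1)
a = rng.integers(0, 4, 2000)                # reference class map (flat)
# DSM-like map: 20% of the pixels replaced by random classes.
noisy = np.where(rng.random(2000) < 0.2, rng.integers(0, 4, 2000), a)
print(corrected_contingency(a, a), corrected_contingency(a, noisy))
```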
Improved failure prediction in forming simulations through pre-strain mapping
NASA Astrophysics Data System (ADS)
Upadhya, Siddharth; Staupendahl, Daniel; Heuse, Martin; Tekkaya, A. Erman
2018-05-01
The sensitivity of sheared edges of advanced high strength steel (AHSS) sheets to cracking during subsequent forming operations, and the difficulty of predicting this failure with any degree of accuracy using conventional FLC-based failure criteria, is a major problem plaguing the manufacturing industry. A possible method that allows for an accurate prediction of edge cracks is the simulation of the shearing operation and carryover of this model into a subsequent forming simulation. But even with an efficient combination of a solid-element shearing simulation and a shell-element forming simulation, the need for a fine mesh and the resulting high computation time make this approach unviable from an industry point of view. The crack sensitivity of sheared edges is due to work hardening in the shear-affected zone (SAZ). A method to predict the plastic strains induced by the shearing process is to measure the hardness after shearing and calculate the ultimate tensile strength as well as the flow stress. In combination with the flow curve, the relevant strain data can be obtained. To eliminate the time-intensive shearing simulation otherwise necessary to obtain the strain data in the SAZ, a new pre-strain mapping approach is proposed. The pre-strains to be mapped are hereby determined from hardness values obtained in the proximity of the sheared edge. To investigate the performance of this approach, the ISO/TS 16630 hole expansion test was simulated with shell elements for different materials, whereby the pre-strains were mapped onto the edge of the hole. The hole expansion ratios obtained from such pre-strain mapped simulations are in close agreement with the experimental results. Furthermore, the simulations can be carried out with no increase in computation time, making this an interesting and viable solution for predicting edge failure due to shearing.
The landslide susceptibility mapping and assessment with ZY satellite data
NASA Astrophysics Data System (ADS)
Zhang, R.; Zhang, Z.; Zhao, Y.
2012-12-01
Natural hazards can result in enormous property damage and casualties in mountainous regions. In China, the direct loss from such hazards was about 400 million yuan in 2011. Landslides in particular, the most common natural hazard, have attracted wide attention in many countries. Landslide susceptibility mapping is of great importance for landslide hazard mitigation efforts throughout the world. In southwest Hubei there is much mineral mining activity, which may trigger landslides. In addition, the Three Gorges reservoir is located in this area, and its impoundment changed the geological and hydrological environment, which may increase the frequency of ancient landslide reactivation and of new landslide occurrence. More than 200 landslides have occurred since 2003, so producing a regional-scale landslide susceptibility map is necessary. For this purpose, the landslide susceptibility map was produced using ZY-3 and ZY-1-02C satellite data, DEMs, and conventional topographic data. The input layers were: (1) the DEM derivatives slope gradient, slope aspect, and topographic wetness index (TWI); (2) the Normalized Difference Vegetation Index (NDVI), computed from ZY-1-02C and ZY-3 imagery to acquire spatially continuous vegetation information; (3) regional lithologic information (i.e., mineral distribution) and tectonic information obtained from remote sensing data in combination with the regional geological survey; (4) regional hydrogeological information produced from remote sensing data in combination with the DEMs; and (5) existing landslide information obtained from remote sensing. The landslide hazard assessment was modeled using a variety of statistical and evaluation methods; the cross-application model yields reasonable results which can be applied for preliminary landslide hazard mapping and hazard grade division.
Enhancing the performance of regional land cover mapping
NASA Astrophysics Data System (ADS)
Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping
2016-10-01
Different pixel-based, object-based and subpixel-based methods, such as time-series analysis, decision trees, and various supervised approaches, have been proposed to conduct land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high resolution data, due to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively highest performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate maps at local scale (up to 96.85% overall accuracy) but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfying accuracy (94.2-96.4%). Thus, the approach composed of the integration of seasonally contrasted multisource data and sampling at subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
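Of the classifiers compared, the Mahalanobis Distance classifier is the least commonly packaged; a minimal sketch (per-class means with a pooled covariance, nearest-class assignment) on synthetic data follows. This is a generic textbook formulation, not the study's IDL implementation.

```python
import numpy as np

class MahalanobisDistanceClassifier:
    """Minimal Mahalanobis Distance (MD) classifier: per-class means,
    pooled within-class covariance, nearest-class assignment."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(0) for c in self.classes_])
        # Pooled covariance: weighted sum of per-class sample covariances.
        pooled = sum(np.cov(X[y == c].T) * ((y == c).sum() - 1)
                     for c in self.classes_)
        self.inv_cov_ = np.linalg.inv(pooled / (len(y) - len(self.classes_)))
        return self

    def predict(self, X):
        # Squared Mahalanobis distance of each sample to each class mean.
        d = [np.einsum("ij,jk,ik->i", X - m, self.inv_cov_, X - m)
             for m in self.means_]
        return self.classes_[np.argmin(d, axis=0)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(3, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
acc = (MahalanobisDistanceClassifier().fit(X, y).predict(X) == y).mean()
print(f"accuracy = {acc:.2f}")
```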
NASA Astrophysics Data System (ADS)
Xu, Z.; Guan, K.; Peng, B.; Casler, N. P.; Wang, S. W.
2017-12-01
Landscape has complex three-dimensional features. These 3D features are difficult to extract using conventional methods. Small-footprint LiDAR provides an ideal way for capturing these features. Existing approaches, however, have been relegated to raster or metric-based (two-dimensional) feature extraction from the upper or bottom layer, and thus are not suitable for resolving morphological and intensity features that could be important to fine-scale land cover mapping. Therefore, this research combines airborne LiDAR and multi-temporal Landsat imagery to classify land cover types of Williamson County, Illinois that has diverse and mixed landscape features. Specifically, we applied a 3D convolutional neural network (CNN) method to extract features from LiDAR point clouds by (1) creating occupancy grid, intensity grid at 1-meter resolution, and then (2) normalizing and incorporating data into a 3D CNN feature extractor for many epochs of learning. The learned features (e.g., morphological features, intensity features, etc) were combined with multi-temporal spectral data to enhance the performance of land cover classification based on a Support Vector Machine classifier. We used photo interpretation for training and testing data generation. The classification results show that our approach outperforms traditional methods using LiDAR derived feature maps, and promises to serve as an effective methodology for creating high-quality land cover maps through fusion of complementary types of remote sensing data.
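Step (1) of the feature-extraction pipeline, voxelising the point cloud into occupancy and intensity grids at 1-meter resolution, can be sketched as follows. The point cloud is synthetic, the normalisation comment is an assumption, and the 3D CNN itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic LiDAR returns: x, y, z in metres plus a per-point intensity.
pts = rng.uniform(0, [32, 32, 16], size=(5000, 3))
intensity = rng.uniform(0, 255, 5000)

res = 1.0                       # 1 m voxel resolution
shape = (32, 32, 16)
idx = np.floor(pts / res).astype(int)
idx = np.clip(idx, 0, np.array(shape) - 1)

occupancy = np.zeros(shape)
inten_sum = np.zeros(shape)
counts = np.zeros(shape)
np.add.at(counts, tuple(idx.T), 1)          # points per voxel
np.add.at(inten_sum, tuple(idx.T), intensity)
occupancy[counts > 0] = 1.0                  # binary occupancy grid
mean_intensity = np.divide(inten_sum, counts, out=np.zeros(shape),
                           where=counts > 0)  # mean-intensity grid

# Both grids would then be normalised and stacked as channels for the
# 3D CNN feature extractor (the exact normalisation is a design choice).
print(occupancy.sum(), mean_intensity.max())
```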
Magnetic Resonance Imaging of Solids Using Oscillating Field Gradients
NASA Astrophysics Data System (ADS)
Daud, Yaacob Mat
1992-01-01
Available from UMI in association with The British Library. A fully automatic solid state NMR imaging spectrometer is described. Use has been made of oscillating field gradients to frequency- and phase-encode the spatial localisation of the nuclear spins. The RF pulse is applied during the zero crossing of the field gradient, so only low RF power is needed to cover the narrow spectral width of the spins. The oscillating field gradient coils were operated on resonance, hence large gradient strengths could be applied (up to 200 G/cm). Two image reconstruction methods were used: filtered back-projection and two-dimensional Fourier transformation. The use of phase encoding, both with oscillating and with pulsed field gradients, enabled us to acquire the data when the gradients were off, and this method proved to be insensitive to eddy currents. It also allowed the use of a narrow-bandwidth receiver, thus improving the signal-to-noise ratio. The maximum entropy method was used in an effort to remove data truncation effects, although the results were not too convincing. The application of these new imaging schemes was tested by mapping the T_1 and T_2 of polymers. The calculated relaxation maps produced precise spatial information about T_1 and T_2 which is not possible to achieve by conventional relaxation-weighted mapping. In a second application, the diffusion of water vapour into dried zeolite powder was studied. We found that the diffusion process is not Fickian.
MAP-Motivated Carrier Synchronization of GMSK Based on the Laurent AMP Representation
NASA Technical Reports Server (NTRS)
Simon, M. K.
1998-01-01
Using the MAP estimation approach to carrier synchronization of digital modulations containing ISI together with a two pulse stream AMP representation of GMSK, it is possible to obtain an optimum closed loop configuration in the same manner as has been previously proposed for other conventional modulations with ISI.
Technology Toolkit: Literary Road Trip
ERIC Educational Resources Information Center
Hayes, Sandy
2007-01-01
Hayes recognizes the value of connections kids make when authors and settings strike a familiar note. She invites readers to participate in a new event at this year's NCTE Annual Convention in New York City: The 21st-Century Literary Map Project gallery, where attendees are encouraged to examine affiliates' literary maps, see digital or…
Non-destructive testing for the structures and civil infrastructures characterization
NASA Astrophysics Data System (ADS)
Capozzoli, L.; Rizzo, E.
2012-04-01
This work evaluates the ability of non-conventional NDT techniques such as GPR and the geoelectrical method, and conventional ones such as infrared thermography (IRT) and sonic testing, to characterize building structures in the laboratory and in situ. Moreover, the integration of the different techniques was evaluated in order to reduce the associated uncertainty. The presence of electromagnetic, resistivity, or thermal anomalies in the behavior may be related to the presence of defects, cracks, decay, or moisture. The research was conducted in two phases: the first phase was performed in the laboratory and the second one mainly in the field. The laboratory experiments served to calibrate the geophysical techniques (GPR and the geoelectrical method) on building structures. A multi-layer structure was reconstructed in the laboratory, in order to simulate a back-bridge: asphalt, reinforced concrete, sand, and gravel layers. PVC, aluminum, and steel pipes were embedded in the deep sandy layer. This structure was also cracked in a predetermined area, and the hidden internal fractures were investigated. GPR allowed the panel to be characterized non-invasively; radar maps were developed using various post-processing algorithms to produce 2D maps and 3D models from acquisitions at 400 MHz, 900 MHz, 1500 MHz, and 2000 MHz. Geoelectrical testing was performed with a network of 25 electrodes at a mutual spacing of 5 cm, using two different configurations, dipole-dipole and pole-dipole. In the second phase, we analyzed pre-tensioned concrete in order to detect possible criticalities in the structure. For this purpose, a 1970s precast bridge characterized by a high state of decay was studied with a 2 GHz GPR antenna; a recently produced pillar and beam were also analyzed directly at the precasting plant.
Moreover, the results obtained using GPR were compared with those obtained through infrared thermography and sonic testing. Finally, we investigated a radiant floor by GPR (900 MHz and 2000 MHz antennas) and a long-wave infrared camera. Non-destructive diagnostic techniques allow investigation of a building structure in reinforced concrete or masonry without altering the characteristics of the element investigated. For this reason, geoelectrical and electromagnetic surveys of masonry are a suitable non-destructive tool for the diagnosis of a deteriorated concrete structure. Moreover, the integration of different NDT techniques (conventional and non-conventional) is very powerful for maximizing the capabilities and compensating for the limitations of each method.
Ryder, Robert T.; Kinney, Scott A.; Suitt, Stephen E.; Merrill, Matthew D.; Trippi, Michael H.; Ruppert, Leslie F.; Ryder, Robert T.
2014-01-01
In 2006 and 2007, the greenline Appalachian basin field maps were digitized under the supervision of Scott Kinney and converted to geographic information system (GIS) files for chapter I.1 (this volume). By converting these oil and gas field maps to a digital format and maintaining the field names where noted, they are now available for a variety of oil and gas and possibly carbon-dioxide sequestration projects. Having historical names assigned to known digitized conventional fields provides a convenient classification scheme into which cumulative production and ultimate field-size databases can be organized. Moreover, as exploratory and development drilling expands across the basin, many previously named fields that were originally treated as conventional fields have evolved into large, commonly unnamed continuous-type accumulations. These new digital maps will facilitate a comparison between EUR values from recently drilled, unnamed parts of continuous accumulations and EUR values from named fields discovered early during the exploration cycle of continuous accumulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, D.W.; Selvin, S.; Close, E.R.
In studying geographic disease distributions, one normally compares rates of arbitrarily defined geographic subareas (e.g. census tracts), thereby sacrificing the geographic detail of the original data. The sparser the data, the larger the subareas must be in order to calculate stable rates. This dilemma is avoided with the technique of Density Equalizing Map Projections (DEMP). Boundaries of geographic subregions are adjusted to equalize population density over the entire study area. Case locations plotted on the transformed map should have a uniform distribution if the underlying disease rates are constant. On the transformed map, the statistical analysis of the observed distribution is greatly simplified. Even for sparse distributions, the statistical significance of a supposed disease cluster can be reliably calculated. The present report describes the first successful application of the DEMP technique to a sizeable "real-world" data set of epidemiologic interest. An improved DEMP algorithm [GUSE93, CLOS94] was applied to a data set previously analyzed with conventional techniques [SATA90, REYN91]. The results from the DEMP analysis and a conventional analysis are compared.
Multislice CT perfusion imaging of the lung in detection of pulmonary embolism
NASA Astrophysics Data System (ADS)
Hong, Helen; Lee, Jeongjin
2006-03-01
We propose a new subtraction technique for accurately imaging lung perfusion and efficiently detecting pulmonary embolism in chest MDCT angiography. Our method is composed of five stages. First, an optimal segmentation technique is performed to extract the same volume of the lungs, major airway, and vascular structures from pre- and post-contrast images with different lung density. Second, an initial registration based on the apex, hilar point, and center of inertia (COI) of each unilateral lung is proposed to correct the gross translational mismatch. Third, the initial alignment is refined by iterative surface registration. For fast and robust convergence of the distance measure to the optimal value, a 3D distance map is generated by narrow-band distance propagation. Fourth, a 3D nonlinear filter is applied to the lung parenchyma to compensate for residual spiral artifacts and artifacts caused by heart motion. Fifth, enhanced vessels are visualized by subtracting the registered pre-contrast images from the post-contrast images. To facilitate visualization of parenchyma enhancement, color-coded mapping and image fusion are used. Our method has been successfully applied to pre- and post-contrast chest MDCT angiography images of ten patients. Experimental results show that the performance of our method is very promising compared with conventional methods in terms of visual inspection, accuracy, and processing time.
NASA Astrophysics Data System (ADS)
Choi, Jinhyeok; Kim, Hyeonjin
2016-12-01
To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI.
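A sketch of the adaptive-PDF construction described above: a (here synthetic) k-space energy map stands in for the prescan estimate, is blended with a modeled variable-density PDF, several candidate sampling masks are drawn, and the maximum-energy candidate is kept. The blend weights, PDF shapes, and candidate count are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64
ky, kx = np.meshgrid(*[np.linspace(-1, 1, N)] * 2, indexing="ij")
r = np.hypot(kx, ky)
e_map = np.exp(-8 * r)                  # stand-in prescan energy map (E-map)
modeled = np.maximum(1 - r, 0) ** 2     # modeled variable-density PDF

# Adaptive PDF: blend the E-map with the modeled PDF (weights assumed).
pdf = 0.5 * e_map / e_map.sum() + 0.5 * modeled / modeled.sum()
pdf /= pdf.sum()

n_samples = int(0.3 * N * N)            # undersampling ratio 0.3
best_mask, best_energy = None, -1.0
for _ in range(8):                      # candidate sampling functions
    chosen = rng.choice(N * N, size=n_samples, replace=False, p=pdf.ravel())
    mask = np.zeros(N * N, bool)
    mask[chosen] = True
    energy = e_map.ravel()[mask].sum()  # keep the max-energy candidate
    if energy > best_energy:
        best_mask, best_energy = mask.reshape(N, N), energy

print(best_mask.sum(), round(best_energy / e_map.sum(), 3))
```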
Saliency-aware food image segmentation for personal dietary assessment using a wearable computer
Chen, Hsin-Chen; Jia, Wenyan; Sun, Xin; Li, Zhaoxin; Li, Yuecheng; Fernstrom, John D.; Burke, Lora E.; Baranowski, Thomas; Sun, Mingui
2015-01-01
Image-based dietary assessment has recently received much attention in the community of obesity research. In this assessment, foods in digital pictures are specified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most utilized method, image processing holds much promise since it may eventually lead to automatic dietary assessment. In this paper we study the problem of segmenting food objects from images. This segmentation is difficult because of various food types, shapes and colors, different decorating patterns on food containers, and occlusions of food and non-food objects. We propose a novel method based on a saliency-aware active contour model (ACM) for automatic food segmentation from images acquired by a wearable camera. An integrated saliency estimation approach based on food location priors and visual attention features is designed to produce a salient map of possible food regions in the input image. Next, a geometric contour primitive is generated and fitted to the salient map by means of multi-resolution optimization with respect to a set of affine and elastic transformation parameters. The food regions are then extracted after contour fitting. Our experiments using 60 food images showed that the proposed method achieved significantly higher accuracy in food segmentation when compared to conventional segmentation methods. PMID:26257473
Statistical framework and noise sensitivity of the amplitude radial correlation contrast method.
Kipervaser, Zeev Gideon; Pelled, Galit; Goelman, Gadi
2007-09-01
A statistical framework for the amplitude radial correlation contrast (RCC) method, which integrates a conventional pixel threshold approach with cluster-size statistics, is presented. The RCC method uses functional MRI (fMRI) data to group neighboring voxels in terms of their degree of temporal cross correlation and compares coherences in different brain states (e.g., stimulation OFF vs. ON). By defining the RCC correlation map as the difference between two RCC images, the map distribution of two OFF states is shown to be normal, enabling the definition of the pixel cutoff. The empirical cluster-size null distribution obtained after the application of the pixel cutoff is used to define a cluster-size cutoff that allows 5% false positives. Assuming that the fMRI signal equals the task-induced response plus noise, an analytical expression of amplitude-RCC dependency on noise is obtained and used to define the pixel threshold. In vivo and ex vivo data obtained during rat forepaw electric stimulation are used to fine-tune this threshold. Calculating the spatial coherences within in vivo and ex vivo images shows enhanced coherence in the in vivo data, but no dependency on the anesthesia method, magnetic field strength, or depth of anesthesia, strengthening the generality of the proposed cutoffs. Copyright (c) 2007 Wiley-Liss, Inc.
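The pixel-cutoff idea, a two-sided threshold read off the approximately normal OFF-vs-OFF difference distribution, can be sketched as follows. This is an illustrative sketch, not the paper's exact procedure; the cluster-size cutoff from the empirical null distribution is omitted.

```python
import numpy as np
from scipy.stats import norm

def rcc_difference_map(rcc_state_a, rcc_state_b):
    """RCC correlation map: difference of two RCC images (e.g., ON minus OFF)."""
    return rcc_state_a - rcc_state_b

def pixel_cutoff(off_map_1, off_map_2, alpha=0.05):
    """Two-sided pixel threshold estimated from the (approximately normal)
    null distribution of an OFF-vs-OFF difference map."""
    d = rcc_difference_map(off_map_1, off_map_2).ravel()
    return d.mean() + norm.ppf(1.0 - alpha / 2.0) * d.std(ddof=1)
```

Pixels of the ON-vs-OFF map exceeding this cutoff would then be grouped into clusters and tested against the empirical cluster-size null distribution.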
Multiscale approach to the determination of the photoactive yellow protein signaling state ensemble.
Rohrdanz, Mary A.; Zheng, Wenwei; Lambeth, Bradley; Vreede, Jocelyne; Clementi, Cecilia
2014-10-01
The nature of the optical cycle of photoactive yellow protein (PYP) makes its elucidation challenging for both experiment and theory. The long transition times render conventional simulation methods ineffective, and yet the short signaling-state lifetime makes experimental data difficult to obtain and interpret. Here, through an innovative combination of computational methods, a prediction and analysis of the biological signaling state of PYP is presented. Coarse-grained modeling and a locally scaled diffusion map are first used to obtain a rough bird's-eye view of the free energy landscape of photo-activated PYP. Then all-atom reconstruction, followed by an enhanced sampling scheme, diffusion-map-directed molecular dynamics, is used to focus in on the signaling-state region of configuration space and obtain an ensemble of signaling-state structures. To the best of our knowledge, this is the first time an all-atom reconstruction from a coarse-grained model has been performed in a relatively unexplored region of molecular configuration space. We compare our signaling-state prediction with previous computational and more recent experimental results, and the comparison is favorable, which validates the method presented. This approach provides additional insight into the PYP photocycle and can be applied to other systems for which more direct methods are impractical.
Positioning Genomics in Biology Education: Content Mapping of Undergraduate Biology Textbooks
Wernick, Naomi L. B.; Ndung’u, Eric; Haughton, Dominique; Ledley, Fred D.
2014-01-01
Biological thought increasingly recognizes the centrality of the genome in constituting and regulating processes ranging from cellular systems to ecology and evolution. In this paper, we ask whether genomics is similarly positioned as a core concept in the instructional sequence for undergraduate biology. Using quantitative methods, we analyzed the order in which core biological concepts were introduced in textbooks for first-year general and human biology. Statistical analysis was performed using self-organizing map algorithms and conventional methods to identify clusters of terms and their relative position in the books. General biology textbooks for both majors and nonmajors introduced genome-related content after text related to cell biology and biological chemistry, but before content describing higher-order biological processes. However, human biology textbooks most often introduced genomic content near the end of the books. These results suggest that genomics is not yet positioned as a core concept in commonly used textbooks for first-year biology and raises questions about whether such textbooks, or courses based on the outline of these textbooks, provide an appropriate foundation for understanding contemporary biological science. PMID:25574293
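The self-organizing map used in this analysis groups similar items onto neighboring map units. The following is a minimal 1-D SOM sketch in numpy, not the authors' analysis pipeline; the learning-rate and neighborhood schedules are illustrative assumptions.

```python
import numpy as np

def train_som(data, n_units=5, epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D self-organizing map (illustrative sketch).
    data: (n_samples, n_features) array; returns (n_units, n_features) weights."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5     # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d = np.arange(n_units) - bmu
            h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))    # neighborhood function
            w += lr * h[:, None] * (x - w)
    return w
```

After training, each term (here, each data row) maps to its best-matching unit, and nearby units hold clusters of similar terms.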
Ultrasoft Electronics for Hyperelastic Strain, Pressure, and Direct Curvature Sensing
NASA Astrophysics Data System (ADS)
Majidi, Carmel; Kramer, Rebecca; Wood, Robert
2011-03-01
Progress in soft robotics, wearable computing, and programmable matter demands a new class of ultrasoft electronics for tactile control, contact detection, and deformation mapping. This next generation of sensors will remain electrically functional under extreme deformation without influencing the natural mechanics of the host system. Ultrasoft strain and pressure sensing has previously been demonstrated with elastomer sheets (e.g., PDMS, silicone rubber) embedded with microchannels of conductive liquid (mercury, eGaIn). Building on these efforts, we introduce a novel method for direct curvature sensing that registers the location and intensity of surface curvature. An elastomer sheet is embedded with micropatterned cavities and microchannels of conductive liquid. Bending the elastomer or placing it on a curved surface leads to a change in channel cross-section and a corresponding change in its electrical resistance. In contrast to conventional methods of curvature sensing, this approach does not depend on semi-rigid components or differential strain measurement. Direct curvature sensing completes the portfolio of sensing elements required to completely map hyperelastic deformation for future soft robotics and computing. NSF MRSEC DMR-0820484.
Report of the Workshop on Geologic Applications of Remote Sensing to the Study of Sedimentary Basins
NASA Technical Reports Server (NTRS)
Lang, H. R. (Editor)
1985-01-01
The Workshop on Geologic Applications of Remote Sensing to the Study of Sedimentary Basins, held January 10 to 11, 1985 in Lakewood, Colorado, involved 43 geologists from industry, government, and academia. Disciplines represented ranged from vertebrate paleontology to geophysical modeling of continents. Deliberations focused on geologic problems related to the formation, stratigraphy, structure, and evolution of foreland basins in general, and to the Wind River/Bighorn Basin area of Wyoming in particular. Geological problems in the Wind River/Bighorn basin area that should be studied using state-of-the-art remote sensing methods were identified. These include: (1) establishing the stratigraphic sequence and mapping, correlating, and analyzing lithofacies of basin-filling strata in order to refine the chronology of basin sedimentation, and (2) mapping volcanic units, fracture patterns in basement rocks, and Tertiary-Holocene landforms in searches for surface manifestations of concealed structures in order to refine models of basin tectonics. Conventional geologic, topographic, geophysical, and borehole data should be utilized in these studies. Remote sensing methods developed in the Wind River/Bighorn Basin area should be applied in other basins.
Han, Yang; Wang, Shutao; Payen, Thomas; Konofagou, Elisa
2017-01-01
The successful clinical application of High-Intensity Focused Ultrasound (HIFU) ablation depends on reliable monitoring of lesion formation. Harmonic Motion Imaging guided Focused Ultrasound (HMIgFUS) is an ultrasound-based elasticity imaging technique that monitors HIFU ablation based on the stiffness change of the tissue instead of the echo intensity change used in conventional B-mode monitoring, rendering it potentially more sensitive to lesion development. Our group has shown that predicting the lesion location from the radiation-force-excited region is feasible during HMIgFUS. In this study, the feasibility of a fast lesion-mapping method is explored to directly monitor the lesion map during HIFU. The HMI lesion map was generated by subtracting the reference HMI image from the current HMI peak-to-peak displacement map and streamed to the computer display. The dimensions of the HMIgFUS lesions were compared against gross pathology. Excellent agreement was found between HMIgFUS and gross pathology for lesion depth (r2 = 0.81, slope = 0.90), width (r2 = 0.85, slope = 1.12) and area (r2 = 0.58, slope = 0.75). In vivo feasibility was assessed in a mouse with a pancreatic tumor. These findings demonstrate that HMIgFUS can successfully map thermal lesions and monitor lesion development in real time in vitro and in vivo. The HMIgFUS technique may therefore constitute a novel clinical tool for HIFU treatment monitoring. PMID:28323638
Casella, Michela; Dello Russo, Antonio; Pelargonio, Gemma; Bongiorni, Maria Grazia; Del Greco, Maurizio; Piacenti, Marcello; Andreassi, Maria Grazia; Santangeli, Pasquale; Bartoletti, Stefano; Moltrasio, Massimo; Fassini, Gaetano; Marini, Massimiliano; Di Cori, Andrea; Di Biase, Luigi; Fiorentini, Cesare; Zecchi, Paolo; Natale, Andrea; Picano, Eugenio; Tondo, Claudio
2012-10-01
Radiofrequency catheter ablation is the mainstay of therapy for supraventricular tachyarrhythmias. Conventional radiofrequency catheter ablation requires the use of fluoroscopy, thus exposing patients to ionising radiation. The feasibility and safety of non-fluoroscopic radiofrequency catheter ablation has been recently reported in a wide range of supraventricular tachyarrhythmias using the EnSite NavX™ mapping system. The NO-PARTY is a multi-centre, randomised controlled trial designed to test the hypothesis that catheter ablation of supraventricular tachyarrhythmias guided by the EnSite NavX™ mapping system results in a clinically significant reduction in exposure to ionising radiation compared with conventional catheter ablation. The study will randomise 210 patients undergoing catheter ablation of supraventricular tachyarrhythmias to either a conventional ablation technique or one guided by the EnSite NavX™ mapping system. The primary end-point is the reduction of the radiation dose to the patient. Secondary end-points include procedural success, reduction of the radiation dose to the operator, and a cost-effectiveness analysis. In a subgroup of patients, we will also evaluate the radiobiological effectiveness of dose reduction by assessing acute chromosomal DNA damage in peripheral blood lymphocytes. NO-PARTY will determine whether radiofrequency catheter ablation of supraventricular tachyarrhythmias guided by the EnSite NavX™ mapping system is a suitable and cost-effective approach to achieve a clinically significant reduction in ionising radiation exposure for both patient and operator.
Wu, Rongli; Watanabe, Yoshiyuki; Arisawa, Atsuko; Takahashi, Hiroto; Tanaka, Hisashi; Fujimoto, Yasunori; Watabe, Tadashi; Isohashi, Kayako; Hatazawa, Jun; Tomiyama, Noriyuki
2017-10-01
This study aimed to compare tumor volume definition using conventional magnetic resonance (MR) and 11C-methionine positron emission tomography (MET/PET) images for differentiating pre-operative glioma grade by whole-tumor histogram analysis of normalized cerebral blood volume (nCBV) maps. Thirty-four patients with histopathologically proven primary brain low-grade gliomas (n = 15) and high-grade gliomas (n = 19) underwent pre-operative or pre-biopsy MET/PET, fluid-attenuated inversion recovery, dynamic susceptibility contrast perfusion-weighted MR imaging, and contrast-enhanced T1-weighted imaging at 3.0 T. The histogram distribution derived from the nCBV maps was obtained by co-registering the whole tumor volume delineated on conventional MR or MET/PET images, and eight histogram parameters were assessed. The mean nCBV value had the highest AUC value (0.906) when based on MET/PET images. Diagnostic accuracy improved significantly when the tumor volume was measured from MET/PET images rather than conventional MR images for the mean, 50th-percentile, and 75th-percentile nCBV values (p = 0.0246, 0.0223, and 0.0150, respectively). Whole-tumor histogram analysis of the CBV map provides more valuable histogram parameters and increases diagnostic accuracy in the differentiation of pre-operative cerebral gliomas when the tumor volume is derived from MET/PET images.
Reproducibility Between Brain Uptake Ratio Using Anatomic Standardization and Patlak-Plot Methods.
Shibutani, Takayuki; Onoguchi, Masahisa; Noguchi, Atsushi; Yamada, Tomoki; Tsuchihashi, Hiroko; Nakajima, Tadashi; Kinuya, Seigo
2015-12-01
The Patlak-plot and conventional methods of determining brain uptake ratio (BUR) have some problems with reproducibility. We formulated a method of determining BUR using anatomic standardization (BUR-AS) in a statistical parametric mapping algorithm to improve reproducibility. The objective of this study was to demonstrate the inter- and intraoperator reproducibility of mean cerebral blood flow as determined using BUR-AS in comparison to the conventional-BUR (BUR-C) and Patlak-plot methods. The images of 30 patients who underwent brain perfusion SPECT were retrospectively used in this study. The images were reconstructed using ordered-subset expectation maximization and processed using an automatic quantitative analysis for cerebral blood flow of ECD tool. The mean SPECT count was calculated from axial basal ganglia slices of the normal side (slices 31-40) drawn using a 3-dimensional stereotactic region-of-interest template after anatomic standardization. The mean cerebral blood flow was calculated from the mean SPECT count. Reproducibility was evaluated using coefficient of variation and Bland-Altman plotting. For both inter- and intraoperator reproducibility, the BUR-AS method had the lowest coefficient of variation and smallest error range about the Bland-Altman plot. Mean CBF obtained using the BUR-AS method had the highest reproducibility. Compared with the Patlak-plot and BUR-C methods, the BUR-AS method provides greater inter- and intraoperator reproducibility of cerebral blood flow measurement. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
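The two reproducibility measures used here, the coefficient of variation and Bland-Altman limits of agreement, are standard and can be sketched in a few lines (illustrative helper functions, not the authors' analysis code):

```python
import numpy as np

def coefficient_of_variation(x):
    """Sample coefficient of variation: SD (ddof=1) over the mean."""
    x = np.asarray(x, dtype=float)
    return np.std(x, ddof=1) / np.mean(x)

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two raters."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A narrower limits-of-agreement band and a smaller coefficient of variation both indicate higher inter- or intraoperator reproducibility.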
Diffeomorphic Sulcal Shape Analysis on the Cortex
Joshi, Shantanu H.; Cabeen, Ryan P.; Joshi, Anand A.; Sun, Bo; Dinov, Ivo; Narr, Katherine L.; Toga, Arthur W.; Woods, Roger P.
2014-01-01
We present a diffeomorphic approach for constructing intrinsic shape atlases of sulci on the human cortex. Sulci are represented as square-root velocity functions of continuous open curves in ℝ³, and their shapes are studied as functional representations of an infinite-dimensional sphere. This spherical manifold has some advantageous properties – it is equipped with a Riemannian metric on the tangent space and facilitates computational analyses and correspondences between sulcal shapes. Sulcal shape mapping is achieved by computing geodesics in the quotient space of shapes modulo scales, translations, rigid rotations and reparameterizations. The resulting sulcal shape atlas preserves important local geometry inherently present in the sample population. The sulcal shape atlas is integrated in a cortical registration framework and exhibits better geometric matching compared to the conventional Euclidean method. We demonstrate experimental results for sulcal shape mapping, cortical surface registration, and sulcal classification for two different surface extraction protocols for separate subject populations. PMID:22328177
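The square-root velocity representation of a sampled curve can be sketched as follows; the discretization (finite differences, small-speed guard) is our assumption. A useful property of the discrete form below is that the squared SRVF norms sum to the curve length.

```python
import numpy as np

def srvf(curve):
    """Square-root velocity function of a sampled open curve in R^3.
    curve: (n, 3) array of points; returns (n-1, 3) SRVF samples q = v / sqrt(|v|)."""
    v = np.diff(curve, axis=0)                              # discrete velocity
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(np.maximum(speed, 1e-12))            # guard zero speed
```

Under this representation, comparing curve shapes reduces to computing distances between SRVFs on the sphere of unit-norm functions, after quotienting out scale, translation, rotation and reparameterization.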
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system, built from a specially combined fish-eye lens module, that is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering-target tracking, automatic mapping of environments and attitude estimation are some of the applications that will benefit from PSSV.
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against the fold error (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance degradation, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
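The baker's map itself is simple to state. Below is a minimal sketch of the map on the unit square and of iterating it to form a state trajectory; it illustrates the chaotic dynamics only, not the paper's full encoder or its mirrored/single-input variants.

```python
def bakers_map(x, y):
    """One iteration of the baker's map on the unit square [0, 1)^2:
    stretch horizontally by 2, cut, and stack."""
    if x < 0.5:
        return 2.0 * x, y / 2.0
    return 2.0 * x - 1.0, (y + 1.0) / 2.0

def trajectory(x0, y0, n):
    """Iterate the map n times; in chaotic analog coding the successive
    states of such a trajectory form the (analog) codeword."""
    pts = [(x0, y0)]
    for _ in range(n):
        pts.append(bakers_map(*pts[-1]))
    return pts
```

Because the map stretches by a factor of 2 in x at every step, nearby initial states separate exponentially, which is the sensitivity that such analog codes exploit for error protection.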
NASA Astrophysics Data System (ADS)
Subhash, Hrebesh M.; O'Gorman, Sean; Neuhaus, Kai; Leahy, Martin
2014-03-01
In this paper we demonstrate a novel application of correlation mapping optical coherence tomography (cm-OCT) for volumetric nailfold capillaroscopy (NFC). NFC is a widely used non-invasive diagnostic method for analyzing capillary morphology and microvascular abnormalities of the nailfold area in a range of disease conditions. However, conventional NFC is incapable of volumetric imaging, even though volumetric quantitative microangiopathic parameters such as plexus morphology, capillary density, and morphologic anomalies of the end-row loops are most critical. cm-OCT is a recently developed coherence-domain, magnitude-based angiographic modality that takes advantage of the time-varying speckle effect, which is normally dominant in the vicinity of vascular regions compared with static tissue. It uses the correlation coefficient between two adjacent B-frames as a direct measure of decorrelation to enhance the visibility of depth-resolved microcirculation.
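The decorrelation measure, a windowed correlation coefficient between adjacent B-frames, can be sketched as follows. The window size and the plain nested-loop implementation are illustrative choices, not the authors' code.

```python
import numpy as np

def correlation_map(frame_a, frame_b, w=3):
    """Pixelwise Pearson correlation between two B-frames over a w-by-w
    sliding window; low correlation marks moving (vascular) regions."""
    h, wd = frame_a.shape
    out = np.zeros((h - w + 1, wd - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            a = frame_a[i:i + w, j:j + w].ravel()
            b = frame_b[i:i + w, j:j + w].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
            out[i, j] = (a * b).sum() / denom if denom > 0 else 0.0
    return out
```

Static tissue yields correlation near 1 between consecutive frames, while flowing blood decorrelates the speckle and pushes the value toward 0.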
Microwave platform as a valuable tool for characterization of nanophotonic devices
Shishkin, Ivan; Baranov, Dmitry; Slobozhanyuk, Alexey; Filonov, Dmitry; Lukashenko, Stanislav; Samusev, Anton; Belov, Pavel
2016-01-01
The rich potential of microwave experiments for the characterization and optimization of optical devices is discussed. While control of light fields and their spatial mapping at the nanoscale remain laborious and not always straightforward, a microwave setup allows both the amplitude and phase of well-defined magnetic and electric field components to be measured without significant perturbation of the near field. As an example, the electromagnetic properties of an add-drop filter, which has become a well-known workhorse of photonics, are experimentally studied with the aid of transmission spectroscopy measurements in the optical and microwave ranges and through direct mapping of the near fields at microwave frequencies. We demonstrate that microwave experiments provide a unique platform for comprehensive studies of the electromagnetic properties of micro- and nanophotonic devices, and allow data to be obtained that are difficult to acquire by conventional optical methods. PMID:27759058
Mediated-reality magnification for macular degeneration rehabilitation
NASA Astrophysics Data System (ADS)
Martin-Gonzalez, Anabel; Kotliar, Konstantin; Rios-Martinez, Jorge; Lanzl, Ines; Navab, Nassir
2014-10-01
Age-related macular degeneration (AMD) is a gradually progressive eye condition and one of the leading causes of blindness and low vision in the Western world. Prevailing optical visual aids compensate for part of the lost visual function but omit helpful complementary information. This paper proposes an efficient magnification technique, which can be implemented on a head-mounted display, for improving the vision of patients with AMD while preserving global information of the scene. Performance of the magnification approach is evaluated by simulating central vision loss in normally sighted subjects. Visual perception was measured as a function of text reading speed and map route following speed. Statistical analysis of the experimental results suggests that our magnification method improves reading speed 1.2 times and spatial orientation when finding routes on a map 1.5 times compared to a conventional magnification approach, and is thus capable of enhancing the peripheral vision of AMD subjects along with their quality of life.
Aliotta, Eric; Moulin, Kévin; Zhang, Zhaohuan; Ennis, Daniel B.
2018-01-01
Purpose To evaluate a technique for simultaneous quantitative T2 and apparent diffusion coefficient (ADC) mapping in the heart (T2+ADC) using spin echo (SE) diffusion-weighted imaging (DWI). Theory and Methods T2 maps from T2+ADC were compared with single-echo SE in phantoms and with T2-prepared (T2-prep) balanced steady-state free precession (bSSFP) in healthy volunteers. ADC maps from T2+ADC were compared with conventional DWI in phantoms and in vivo. T2+ADC was also demonstrated in a patient with acute myocardial infarction (MI). Results Phantom T2 values from T2+ADC were closer to a single-echo SE reference than T2-prep bSSFP (−2.3 ± 6.0% vs 22.2 ± 16.3%; P < 0.01), and ADC values were in excellent agreement with DWI (0.28 ± 0.4%). In volunteers, myocardial T2 values from T2+ADC were significantly shorter than T2-prep bSSFP (35.8 ± 3.1 vs 46.8 ± 3.8 ms; P < 0.01); myocardial ADC was not significantly (N.S.) different between T2+ADC and conventional motion-compensated DWI (1.39 ± 0.18 vs 1.38 ± 0.18 mm2/ms; P = N.S.). In the patient, T2 and ADC were both significantly elevated in the infarct compared with remote myocardium (T2: 40.4 ± 7.6 vs 56.8 ± 22.0; P < 0.01; ADC: 1.47 ± 0.59 vs 1.65 ± 0.65 mm2/ms; P < 0.01). Conclusion T2+ADC generated coregistered, free-breathing T2 and ADC maps in healthy volunteers and a patient with acute MI with no cost in accuracy, precision, or scan time compared with DWI. PMID:28516485
Petrides, K V; McManus, I C
2004-10-01
The medical specialities chosen by doctors for their careers play an important part in the workforce planning of health-care services. However, there is little theoretical understanding of how different medical specialities are perceived or how choices are made, despite there being much work in general on this topic in occupational psychology, which is influenced by Holland's RIASEC (Realistic-Investigative-Artistic-Social-Enterprising-Conventional) typology of careers, and Gottfredson's model of circumscription and compromise. In this study, we use three large-scale cohorts of medical students to produce maps of medical careers. Information on between 24 and 28 specialities was collected in three UK cohorts of medical students (1981, 1986 and 1991 entry), in applicants (1981 and 1986 cohorts, N = 1135 and 2032) or entrants (1991 cohort, N = 2973) and in final-year students (N = 330, 376, and 1437). Mapping used Individual Differences Scaling (INDSCAL) on sub-groups broken down by age and sex. The method was validated in a population sample using a full range of careers, demonstrating that the RIASEC structure could be extracted. Medical specialities in each cohort, at application and in the final year, were well represented by a two-dimensional space. The representations showed a close similarity to Holland's RIASEC typology, with the main orthogonal dimensions appearing similar to Prediger's derived orthogonal dimensions of 'Things-People' and 'Data-Ideas'. There are close parallels between Holland's general typology of careers, and the structure we have found in medical careers. Medical specialities typical of Holland's six RIASEC categories are Surgery (Realistic), Hospital Medicine (Investigative), Psychiatry (Artistic), Public Health (Social), Administrative Medicine (Enterprising), and Laboratory Medicine (Conventional).
The homology between medical careers and RIASEC may mean that the map can be used as the basis for understanding career choice, and for providing career counselling.
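INDSCAL is a weighted multidimensional-scaling model fitted across sub-groups. As a simplified sketch of the underlying idea, classical (Torgerson) MDS recovers a low-dimensional map from a single dissimilarity matrix; the implementation below is our illustration, not the study's INDSCAL analysis.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed an n-by-n dissimilarity matrix D
    into k dimensions via double centering and an eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]         # top-k eigenvalues
    L = np.sqrt(np.maximum(vals[order], 0.0))
    return vecs[:, order] * L
```

Applied to speciality-by-speciality dissimilarities, the two recovered axes would play the role of the 'Things-People' and 'Data-Ideas' dimensions described above.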
Abdoun, Oussama; Joucla, Sébastien; Mazzocco, Claire; Yvert, Blaise
2010-01-01
A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrodes array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several square millimeters) of whole neural structures combined with microscopic resolution (about 50 μm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular, arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current sources localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under GNU General Public License and available at http://sites.google.com/site/neuromapsoftware. PMID:21344013
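Thin plate spline interpolation of scattered electrode data, the core of the mapping approach described above, is available in SciPy. The sketch below uses a synthetic electrode layout and synthetic values; it illustrates the interpolation step only, not NeuroMap's Matlab implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# synthetic electrode positions (any 2D arrangement, regular or irregular)
rng = np.random.default_rng(0)
electrodes = rng.uniform(0, 1, size=(30, 2))
values = np.sin(np.pi * electrodes[:, 0]) * np.cos(np.pi * electrodes[:, 1])

# thin plate spline interpolation of activity onto a dense grid
interp = RBFInterpolator(electrodes, values, kernel='thin_plate_spline')
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
activity = interp(grid).reshape(50, 50)
```

Because the spline is an analytical function of position, local extrema can fall between recording sites, and spatial derivatives (e.g., the Laplacian for current source localization) can be evaluated anywhere on the grid.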
The Application of MRI for Depiction of Subtle Blood Brain Barrier Disruption in Stroke
Israeli, David; Tanne, David; Daniels, Dianne; Last, David; Shneor, Ran; Guez, David; Landau, Efrat; Roth, Yiftach; Ocherashvilli, Aharon; Bakon, Mati; Hoffman, Chen; Weinberg, Amit; Volk, Talila; Mardor, Yael
2011-01-01
The development of imaging methodologies for detecting blood-brain-barrier (BBB) disruption may help predict stroke patients' propensity to develop hemorrhagic complications following reperfusion. We have developed a delayed contrast extravasation MRI-based methodology enabling real-time depiction of subtle BBB abnormalities in humans with high sensitivity to BBB disruption and high spatial resolution. The increased sensitivity to subtle BBB disruption is obtained by acquiring T1-weighted MRI at relatively long delays (~15 minutes) after contrast injection and subtracting from them the images acquired immediately after contrast administration. In addition, the relatively long delays allow for acquisition of high-resolution images, resulting in high-resolution BBB disruption maps. The sensitivity is further increased by image preprocessing with corrections for intensity variations and with whole-body (rigid+elastic) registration. Since only two separate time points are required, the time between the two acquisitions can be used for acquiring routine clinical data, keeping the total imaging time to a minimum. A proof-of-concept study was performed in 34 patients with ischemic stroke and 2 patients with brain metastases undergoing high-resolution T1-weighted MRI acquired at 3 time points after contrast injection. The MR images were pre-processed and subtracted to produce BBB disruption maps. BBB maps of patients with brain metastases and ischemic stroke presented different patterns of BBB opening. The significant advantage of the long extravasation time was demonstrated by a dynamic-contrast-enhancement study performed continuously for 18 min. The high sensitivity of our methodology enabled depiction of clear BBB disruption in 27% of the stroke patients who did not have abnormalities on conventional contrast-enhanced MRI. 
In 36% of the patients, who had abnormalities detectable by conventional MRI, the BBB disruption volumes were significantly larger in the maps than in conventional MRI. These results demonstrate the advantages of delayed contrast extravasation in increasing the sensitivity to subtle BBB disruption in ischemic stroke patients. The calculated disruption maps provide clear depiction of significant volumes of BBB disruption unattainable by conventional contrast-enhanced MRI. PMID:21209786
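The core delayed-subtraction step can be illustrated with a toy example. This is a minimal sketch of the principle only, not the authors' pipeline: the function name, the 5% relative-change threshold, and the toy arrays are illustrative assumptions, and the inputs are assumed to be already registered and intensity-corrected.

```python
import numpy as np

def bbb_disruption_map(early, delayed, threshold=0.05):
    """Sketch of the delayed-extravasation idea: voxels whose signal at the
    late time point exceeds the early post-contrast signal suggest contrast
    accumulation beyond an intact BBB.  Images are assumed registered and
    intensity-corrected; the threshold is an arbitrary illustration."""
    early = early.astype(float)
    delayed = delayed.astype(float)
    diff = (delayed - early) / (np.abs(early) + 1e-9)  # relative change
    return np.where(diff > threshold, diff, 0.0)

# Toy 2D example: one region accumulates contrast at the delayed scan.
early = np.ones((4, 4))
delayed = np.ones((4, 4))
delayed[1:3, 1:3] = 1.2  # 20 % delayed enhancement -> flagged as disruption
dmap = bbb_disruption_map(early, delayed)
```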
Kim, Yoon-Chul; Nielsen, Jon-Fredrik; Nayak, Krishna S
2008-01-01
To develop a method that automatically corrects ghosting artifacts due to echo-misalignment in interleaved gradient-echo echo-planar imaging (EPI) in arbitrary oblique or double-oblique scan planes. An automatic ghosting correction technique was developed based on an alternating EPI acquisition and the phased-array ghost elimination (PAGE) reconstruction method. The direction of k-space traversal is alternated at every temporal frame, enabling lower temporal-resolution ghost-free coil sensitivity maps to be dynamically estimated. The proposed method was compared with conventional one-dimensional (1D) phase correction in axial, oblique, and double-oblique scan planes in phantom and cardiac in vivo studies. The proposed method was also used in conjunction with two-fold acceleration. The proposed method with nonaccelerated acquisition provided excellent suppression of ghosting artifacts in all scan planes, and was substantially more effective than conventional 1D phase correction in oblique and double-oblique scan planes. The feasibility of real-time reconstruction using the proposed technique was demonstrated in a scan protocol with 3.1-mm spatial and 60-msec temporal resolution. The proposed technique with nonaccelerated acquisition provides excellent ghost suppression in arbitrary scan orientations without a calibration scan, and can be useful for real-time interactive imaging, in which scan planes are frequently changed with arbitrary oblique orientations.
Rayarao, Geetha; Biederman, Robert W W; Williams, Ronald B; Yamrozik, June A; Lombardi, Richard; Doyle, Mark
2018-01-01
To establish the clinical validity and accuracy of automatic thresholding and manual trimming (ATMT) by comparing the method with the conventional contouring method for in vivo cardiac volume measurements. CMR was performed on 40 subjects (30 patients and 10 controls) using steady-state free precession cine sequences with slices oriented in the short-axis and acquired contiguously from base to apex. Left ventricular (LV) volumes, end-diastolic volume, end-systolic volume, and stroke volume (SV) were obtained with ATMT and with the conventional contouring method. Additionally, SV was measured independently using CMR phase velocity mapping (PVM) of the aorta for validation. Three methods of calculating SV were compared by applying Bland-Altman analysis. The Bland-Altman standard deviation of variation (SD) and offset bias for LV SV for the three sets of data were: ATMT-PVM (7.65, [Formula: see text]), ATMT-contours (7.85, [Formula: see text]), and contour-PVM (11.01, 4.97), respectively. Equating the observed range to the error contribution of each approach, the error magnitude of ATMT:PVM:contours was in the ratio 1:2.4:2.5. Use of ATMT for measuring ventricular volumes accommodates trabeculae and papillary structures more intuitively than contemporary contouring methods. This results in lower variation when analyzing cardiac structure and function and consequently improved accuracy in assessing chamber volumes.
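The Bland-Altman statistics used above (offset bias and standard deviation of the differences between paired methods) are easy to compute. The sketch below uses made-up stroke-volume readings purely for illustration; the function name and values are not from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Return the Bland-Altman offset bias (mean difference) and the
    sample standard deviation of the paired differences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    return diff.mean(), diff.std(ddof=1)

# Hypothetical paired stroke-volume readings (mL) from two methods.
sv_method1 = [70, 82, 65, 90, 75]
sv_method2 = [68, 85, 63, 93, 74]
bias, sd = bland_altman(sv_method1, sv_method2)
```

A smaller SD of the differences, as reported for ATMT-PVM versus contour-PVM, indicates closer agreement between the two methods.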
One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI.
Arabi, Hossein; Zaidi, Habib
2016-10-01
The outcome of a detailed assessment of various strategies for atlas-based whole-body bone segmentation from magnetic resonance imaging (MRI) was exploited to select the optimal parameters and setting, with the aim of proposing a novel one-registration multi-atlas (ORMA) pseudo-CT generation approach. The proposed approach consists of only one online registration between the target and reference images, regardless of the number of atlas images (N), while for the remaining atlas images, the pre-computed transformation matrices to the reference image are used to align them to the target image. The performance characteristics of the proposed method were evaluated and compared with conventional atlas-based attenuation map generation strategies (direct registration of the entire atlas images followed by voxel-wise weighting (VWW) and arithmetic averaging atlas fusion). To this end, four different positron emission tomography (PET) attenuation maps were generated via arithmetic averaging and VWW scheme using both direct registration and ORMA approaches as well as the 3-class attenuation map obtained from the Philips Ingenuity TF PET/MRI scanner commonly used in the clinical setting. The evaluation was performed based on the accuracy of extracted whole-body bones by the different attenuation maps and by quantitative analysis of resulting PET images compared to CT-based attenuation-corrected PET images serving as reference. The comparison of validation metrics regarding the accuracy of extracted bone using the different techniques demonstrated the superiority of the VWW atlas fusion algorithm achieving a Dice similarity measure of 0.82 ± 0.04 compared to arithmetic averaging atlas fusion (0.60 ± 0.02), which uses conventional direct registration. Application of the ORMA approach modestly compromised the accuracy, yielding a Dice similarity measure of 0.76 ± 0.05 for ORMA-VWW and 0.55 ± 0.03 for ORMA-averaging. 
The results of quantitative PET analysis followed the same trend with less significant differences in terms of SUV bias, whereas massive improvements were observed compared to PET images corrected for attenuation using the 3-class attenuation map. The maximum absolute bias achieved by VWW and VWW-ORMA methods was 6.4 ± 5.5 in the lung and 7.9 ± 4.8 in the bone, respectively. The proposed algorithm is capable of generating decent attenuation maps. The quantitative analysis revealed a good correlation between PET images corrected for attenuation using the proposed pseudo-CT generation approach and the corresponding CT images. The computational time is reduced by a factor of 1/N at the expense of a modest decrease in quantitative accuracy, thus allowing us to achieve a reasonable compromise between computing time and quantitative performance.
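The key computational saving of ORMA is that each atlas-to-target mapping is obtained by composing a precomputed atlas-to-reference transform with the single online reference-to-target registration. A minimal sketch with 2D affine transforms in homogeneous coordinates (the matrices here are arbitrary translations chosen for illustration, not real registration outputs):

```python
import numpy as np

def compose(t_target_from_ref, t_ref_from_atlas):
    """Chain two 3x3 homogeneous 2D affine transforms:
    T(target <- atlas) = T(target <- ref) @ T(ref <- atlas)."""
    return t_target_from_ref @ t_ref_from_atlas

# Precomputed (offline) atlas->reference transform: translation by (2, 1).
t_ref_from_atlas = np.array([[1.0, 0.0, 2.0],
                             [0.0, 1.0, 1.0],
                             [0.0, 0.0, 1.0]])
# Single online reference->target transform: translation by (-1, 3).
t_target_from_ref = np.array([[1.0, 0.0, -1.0],
                              [0.0, 1.0, 3.0],
                              [0.0, 0.0, 1.0]])

t_target_from_atlas = compose(t_target_from_ref, t_ref_from_atlas)
point_atlas = np.array([0.0, 0.0, 1.0])
point_target = t_target_from_atlas @ point_atlas  # -> (1, 4)
```

With N atlases, only one registration runs online while N - 1 compositions are cheap matrix products, which is where the 1/N reduction in computational time comes from (in practice non-rigid registrations compose deformation fields rather than matrices).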
Tomographic diffractive microscopy with a wavefront sensor.
Ruan, Y; Bon, P; Mudry, E; Maire, G; Chaumet, P C; Giovannini, H; Belkebir, K; Talneau, A; Wattellier, B; Monneret, S; Sentenac, A
2012-05-15
Tomographic diffractive microscopy is a recent imaging technique that reconstructs quantitatively the three-dimensional permittivity map of a sample with a resolution better than that of conventional wide-field microscopy. Its main drawbacks lie in the complexity of the setup and in the slowness of the image recording as both the amplitude and the phase of the field scattered by the sample need to be measured for hundreds of successive illumination angles. In this Letter, we show that, using a wavefront sensor, tomographic diffractive microscopy can be implemented easily on a conventional microscope. Moreover, the number of illuminations can be dramatically decreased if a constrained reconstruction algorithm is used to recover the sample map of permittivity.
Performance Evaluation of Dsm Extraction from ZY-3 Three-Line Arrays Imagery
NASA Astrophysics Data System (ADS)
Xue, Y.; Xie, W.; Du, Q.; Sang, H.
2015-08-01
ZiYuan-3 (ZY-3), launched on 9 January 2012, is China's first civilian high-resolution stereo mapping satellite. ZY-3 is equipped with three-line scanners (nadir, backward and forward) for stereo mapping; the resolutions of the panchromatic (PAN) stereo mapping images are 2.1 m at nadir and 3.6 m at tilt angles of ±22° forward and backward, respectively. The stereo base-to-height ratio is 0.85-0.95. Compared with stereo mapping from two-view images, the three-line-array images of ZY-3 can be used for DSM generation taking advantage of one more view than conventional photogrammetric methods, which enriches the information available for image matching and enhances the accuracy of the generated DSM. Preliminary results on the positioning accuracy of ZY-3 images have been reported; however, before ZY-3 images are used in massive mapping applications, evaluating the performance of DSM extraction from its three-line-array imagery is of considerable importance for routine mapping. The goal of this research is to clarify the mapping performance of the ZY-3 three-line-array scanners through an accuracy evaluation of DSM generation. DSM products generated in different topographic areas from the three-view images are compared with those generated from the different two-view combinations of ZY-3 images. Beyond the comparison across topographic study areas, the accuracy deviation of the DSM products at different grid sizes (25 m, 10 m and 5 m) is delineated in order to clarify the impact of grid size on accuracy evaluation.
Dieringer, Matthias A; Deimling, Michael; Santoro, Davide; Wuerfel, Jens; Madai, Vince I; Sobesky, Jan; von Knobelsdorff-Brenkenhoff, Florian; Schulz-Menger, Jeanette; Niendorf, Thoralf
2014-01-01
Visual but subjective reading of longitudinal relaxation time (T1) weighted magnetic resonance images is commonly used for the detection of brain pathologies. For this non-quantitative measure, diagnostic quality depends on hardware configuration, imaging parameters, radio frequency transmission field (B1+) uniformity, as well as observer experience. Parametric quantification of the tissue T1 relaxation parameter offsets the propensity for these effects, but is typically time consuming. For this reason, this study examines the feasibility of rapid 2D T1 quantification using a variable flip angles (VFA) approach at magnetic field strengths of 1.5 Tesla, 3 Tesla, and 7 Tesla. These efforts include validation in phantom experiments and application for brain T1 mapping. T1 quantification included simulations of the Bloch equations to correct for slice profile imperfections, and a correction for B1+. Fast gradient echo acquisitions were conducted using three adjusted flip angles for the proposed T1 quantification approach that was benchmarked against slice profile uncorrected 2D VFA and an inversion-recovery spin-echo based reference method. Brain T1 mapping was performed in six healthy subjects, one multiple sclerosis patient, and one stroke patient. Phantom experiments showed a mean T1 estimation error of (-63±1.5)% for slice profile uncorrected 2D VFA and (0.2±1.4)% for the proposed approach compared to the reference method. Scan time for single slice T1 mapping including B1+ mapping could be reduced to 5 seconds using an in-plane resolution of (2×2) mm², which equals a scan time reduction of more than 99% compared to the reference method. Our results demonstrate that rapid 2D T1 quantification using a variable flip angle approach is feasible at 1.5T/3T/7T. It represents a valuable alternative for rapid T1 mapping due to the gain in speed versus conventional approaches. 
This progress may serve to enhance the capabilities of parametric MR based lesion detection and brain tissue characterization.
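The standard VFA fit behind such rapid T1 mapping can be shown on synthetic data. This sketch uses the textbook spoiled-gradient-echo signal equation and its linearization; the TR, flip angles, and T1 value are illustrative, not the study's protocol, and the slice-profile and B1+ corrections described above are omitted.

```python
import numpy as np

# Spoiled gradient echo: S(a) = M0*sin(a)*(1 - E1)/(1 - E1*cos(a)),
# with E1 = exp(-TR/T1).  Plotting y = S/sin(a) against x = S/tan(a)
# gives a line of slope E1, so T1 = -TR / ln(slope).
TR = 10.0          # ms (illustrative)
T1_true = 1000.0   # ms (illustrative)
M0 = 1.0
alphas = np.deg2rad([3.0, 10.0, 17.0])  # example flip angles

E1 = np.exp(-TR / T1_true)
S = M0 * np.sin(alphas) * (1 - E1) / (1 - E1 * np.cos(alphas))

# Linearised fit: slope of y vs x recovers E1.
x, y = S / np.tan(alphas), S / np.sin(alphas)
slope, _ = np.polyfit(x, y, 1)
T1_fit = -TR / np.log(slope)
```

With noise-free synthetic signals the fit recovers T1 exactly; in practice, accuracy hinges on the B1+ and slice-profile corrections the abstract describes.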
Arthritis inflammation monitored by subcutaneous millimeter wave thermography.
Edrich, J; Smyth, C J
1978-01-01
A new technique for remote, noninvasive mapping of temperature elevations of the human joints is described; it uses the mm wave radiation emitted by the human body. A solid-state switched scanner for 68 GHz is described which overcomes the depth limitations of conventional infrared thermographs and can measure to subcutaneous depths of several mm with a temperature resolution of 0.25 degrees C. Measurements on rheumatoid arthritic knee joints are presented which show little correlation with simultaneously measured skin temperatures. Significant long-term thermographic changes induced by steroid injection indicate a potential for objective patient monitoring and the development of new treatment methods.
DTMs Assessment to the Definition of Shallow Landslides Prone Areas
NASA Astrophysics Data System (ADS)
Martins, Tiago D.; Oka-Fiori, Chisato; Carvalho Vieira, Bianca; Montgomery, David R.
2017-04-01
Predictive methods have been developed, especially since the 1990s, to identify landslide-prone areas. One example is the physically based model SHALSTAB (Shallow Landsliding Stability Model), which calculates the potential instability for shallow landslides based on topography and physical soil properties. In such applications in Brazil, the Digital Terrain Model (DTM) is normally obtained from conventional contour lines; recently, however, the LiDAR (Light Detection and Ranging) system has come into wide use in Brazil. Thus, this study aimed to evaluate different DTMs, generated from conventional data and from LiDAR, and their influence on susceptibility maps for shallow landslides produced with the SHALSTAB model. To that end, we analysed the physical properties of the soil, the response of the model to DTMs generated from conventional topographical data and from LiDAR data, and the shallow landslide susceptibility maps based on the different topographical data. The selected area lies in the urban perimeter of the municipality of Antonina (PR), affected by widespread landslides in March 2011. Among the results, different LiDAR data interpolations were evaluated using GIS tools, of which Triangulation/Natural Neighbor performed best. It was also found that on one of the evaluation indexes (Scars Concentration) the LiDAR-derived DTM performed best compared with the one originating from contour lines, although the Landslide Potential index showed only a small increase. Consequently, it was possible to assess the DTMs, and the LiDAR-derived one improved the certainty percentage only slightly. A gap is also noted in research carried out in Brazil on the use of products generated from LiDAR data in geomorphological analysis.
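The SHALSTAB stability criterion combines topography (slope and contributing area per contour width) with soil strength. Below is a sketch of the widely cited cohesionless form of the model, log(q/T) = log[(b/a)·sinθ·(ρs/ρw)·(1 - tanθ/tanφ)]; the soil densities, friction angle, and topographic values are illustrative assumptions, not the study's calibrated parameters.

```python
import math

def shalstab_log_qt(slope_rad, contrib_area, width, phi_rad,
                    rho_s=1700.0, rho_w=1000.0):
    """Cohesionless SHALSTAB sketch: log10 of the critical steady-state
    recharge over transmissivity (q/T) at which a shallow soil column on an
    infinite slope becomes unstable.  a = contributing drainage area,
    b = contour width, phi = soil friction angle.  Lower values mean less
    rain is needed to trigger failure (more unstable cell)."""
    ratio = (width / contrib_area) * math.sin(slope_rad) \
        * (rho_s / rho_w) * (1.0 - math.tan(slope_rad) / math.tan(phi_rad))
    return math.log10(ratio) if ratio > 0 else float("-inf")

# Steeper slopes need less recharge to fail (lower log q/T):
gentle = shalstab_log_qt(math.radians(20), 500.0, 10.0, math.radians(35))
steep = shalstab_log_qt(math.radians(33), 500.0, 10.0, math.radians(35))
```

Applied cell by cell over a DTM, this is why the quality of the terrain model (contour-derived versus LiDAR-derived) propagates directly into the susceptibility map.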
NASA Astrophysics Data System (ADS)
Haslam, Richard; Aldiss, Donald
2013-04-01
Most of the London Basin, south-eastern UK, is underlain by the Palaeogene London Clay Formation, comprising a succession of rather uniform marine clay deposits up to 150 m thick, with widespread cover of Quaternary deposits and urban development. Therefore, in this area faults are difficult to delineate (or to detect) by conventional geological surveying methods in the field, and few are shown on the geological maps of the area. However, boreholes and excavations, especially those for civil engineering works, indicate that faults are probably widespread and numerous in the London area. A representative map of fault distribution and patterns of displacement is a pre-requisite for understanding the tectonic development of a region. Moreover, faulting is an important influence on the design and execution of civil engineering works, and on the hydrogeological characteristics of the ground. This paper reviews methods currently being used to map faults in the London Basin area. These are: the interpretation of persistent scatterer interferometry (PSI) data from time-series satellite-borne radar measurements; the interpretation of regional geophysical fields (Bouguer gravity anomaly and aeromagnetic), especially in combination with a digital elevation model; and the construction and interpretation of 3D geological models. Although these methods are generally not as accurate as large-scale geological field surveys, due to the availability of appropriate data in the London Basin they provide the means to recognise and delineate more faults, and with more confidence, than was possible using traditional geological mapping techniques. Together they reveal regional structures arising during Palaeogene crustal extension and subsidence in the North Sea, followed by inversion of a Mesozoic sedimentary basin in the south of the region, probably modified by strike-slip fault motion associated with the relative northward movement of the African Plate and the Alpine orogeny. 
This work is distributed under the Creative Commons Attribution 3.0 Unported License together with an NERC copyright. This license does not conflict with the regulations of the Crown Copyright.
HomozygosityMapper2012--bridging the gap between homozygosity mapping and deep sequencing.
Seelow, Dominik; Schuelke, Markus
2012-07-01
Homozygosity mapping is a common method to map recessive traits in consanguineous families. To facilitate these analyses, we have developed HomozygosityMapper, a web-based approach to homozygosity mapping. HomozygosityMapper allows researchers to directly upload the genotype files produced by the major genotyping platforms as well as deep sequencing data. It detects stretches of homozygosity shared by the affected individuals and displays them graphically. Users can interactively inspect the underlying genotypes, manually refine these regions and eventually submit them to our candidate gene search engine GeneDistiller to identify the most promising candidate genes. Here, we present the new version of HomozygosityMapper. The most striking new feature is the support of Next Generation Sequencing *.vcf files as input. Upon users' requests, we have implemented the analysis of common experimental rodents as well as of important farm animals. Furthermore, we have extended the options for single families and loss of heterozygosity studies. Another new feature is the export of *.bed files for targeted enrichment of the potential disease regions for deep sequencing strategies. HomozygosityMapper also generates files for conventional linkage analyses which are already restricted to the possible disease regions, hence superseding CPU-intensive genome-wide analyses. HomozygosityMapper is freely available at http://www.homozygositymapper.org/.
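The core operation of homozygosity mapping, detecting stretches of homozygous genotypes shared by affected individuals, can be sketched in a few lines. This toy only scans one individual's genotype calls and uses an arbitrary minimum run length; real tools such as HomozygosityMapper additionally tolerate genotyping errors, weight marker informativeness, and intersect regions across affected family members.

```python
def homozygous_runs(genotypes, min_len=4):
    """Return (start, end) index pairs of runs of >= min_len consecutive
    homozygous genotype calls (e.g. 'AA' or 'BB') in a marker sequence."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        hom = len(set(g)) == 1          # 'AA' -> {'A'} -> homozygous
        if hom and start is None:
            start = i
        elif not hom and start is not None:
            if i - start >= min_len:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes) - 1))
    return runs

# Toy marker sequence with two candidate regions.
calls = ["AB", "AA", "AA", "AA", "AA", "AB",
         "BB", "BB", "BB", "BB", "BB"]
regions = homozygous_runs(calls, min_len=4)
```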
Hoshikawa, Ryo; Kawaguchi, Hiroshi; Takuwa, Hiroyuki; Ikoma, Yoko; Tomita, Yutaka; Unekawa, Miyuki; Suzuki, Norihiro; Kanno, Iwao; Masamoto, Kazuto
2016-08-01
This study aimed to develop a new method for mapping blood flow velocity based on the spatial evolution of fluorescent dye transit times captured with confocal laser scanning fluorescence microscopy (CLSFM) in the cerebral microcirculation of anesthetized rodents. The animals were anesthetized with isoflurane, and a small amount of fluorescent dye was intravenously injected to label the blood plasma. CLSFM was conducted through a closed cranial window to capture the propagation of the dye in the cortical vessels. The transit time of the dye over a certain distance in a single vessel was determined with automated image analyses, and the average flow velocity was mapped in each vessel. The average flow velocity measured in the rat pial artery and vein was 4.4 ± 1.2 and 2.4 ± 0.5 mm/sec, respectively. A similar range of flow velocities was observed in the mice, 4.9 ± 1.4 and 2.0 ± 0.9 mm/sec, respectively, although the vessel diameter in the mice was about half of that in the rats. Flow velocity in the cerebral microcirculation can thus be mapped based on fluorescent dye transit time measurements with conventional CLSFM in experimental animals. © 2016 John Wiley & Sons Ltd.
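The transit-time principle (velocity = inter-point distance / dye delay) can be demonstrated on synthetic intensity traces. This is a sketch of the general idea, not the authors' automated image analysis: the lag estimator, frame rate, and distance are illustrative choices.

```python
import numpy as np

def flow_velocity(trace_up, trace_down, distance_um, frame_interval_s):
    """Sketch of transit-time velocimetry: find the frame lag that best
    aligns the downstream dye-intensity trace with the upstream one, then
    convert lag and inter-point distance into velocity (mm/s)."""
    lags = np.arange(1, len(trace_up))
    scores = [np.dot(trace_up[:-k], trace_down[k:]) for k in lags]
    lag = lags[int(np.argmax(scores))]
    return (distance_um / (lag * frame_interval_s)) / 1000.0

# Synthetic bolus: downstream trace is the upstream one delayed by 3 frames.
t = np.arange(100)
upstream = np.exp(-((t - 30) ** 2) / 50.0)
downstream = np.exp(-((t - 33) ** 2) / 50.0)
v = flow_velocity(upstream, downstream,
                  distance_um=132.0, frame_interval_s=0.01)
```

With a 132 µm separation and a 3-frame (30 ms) delay this yields 4.4 mm/s, the same order as the pial-artery velocities reported above.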
Kumar, Mukesh; Rath, Nitish Kumar; Rath, Santanu Kumar
2016-04-01
Microarray-based gene expression profiling has emerged as an efficient technique for the classification, prognosis, diagnosis, and treatment of cancer. Frequent changes in the behavior of this disease generate an enormous volume of data. Microarray data satisfy both the veracity and velocity properties of big data, as they keep changing with time. Therefore, analyzing microarray datasets in a small amount of time is essential. Such datasets often contain a large number of expression values, but only a fraction of them come from genes that are significantly expressed. The precise identification of the genes of interest that are responsible for causing cancer is imperative in microarray data analysis. Most existing schemes employ a two-phase process such as feature selection/extraction followed by classification. In this paper, various statistical methods (tests) based on MapReduce are proposed for selecting relevant features. After feature selection, a MapReduce-based K-nearest neighbor (mrKNN) classifier is also employed to classify the microarray data. These algorithms are successfully implemented in a Hadoop framework. A comparative analysis is done on these MapReduce-based models using microarray datasets of various dimensions. From the results obtained, it is observed that these models consume much less execution time than conventional models in processing big data. Copyright © 2016 Elsevier Inc. All rights reserved.
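The map/reduce pattern behind such feature selection can be sketched functionally in a single process. This is not the paper's Hadoop implementation: the mapper/reducer below compute a simple absolute mean-difference score per gene from made-up records, standing in for the statistical tests the paper distributes across a cluster.

```python
from collections import defaultdict

def mapper(record):
    """(gene, class_label, expression) -> keyed partial sum for that gene
    and class, exactly the shape a distributed mapper would emit."""
    gene, label, value = record
    return (gene, label), (value, 1)

def reducer(pairs):
    """Aggregate partial sums per (gene, class) key into class means."""
    sums = defaultdict(lambda: [0.0, 0])
    for key, (value, count) in pairs:
        sums[key][0] += value
        sums[key][1] += count
    return {key: total / n for key, (total, n) in sums.items()}

# Toy expression records for two genes across two classes.
records = [("g1", "tumor", 5.0), ("g1", "tumor", 7.0),
           ("g1", "normal", 1.0), ("g1", "normal", 3.0),
           ("g2", "tumor", 4.0), ("g2", "normal", 4.2)]
means = reducer(map(mapper, records))
score = {g: abs(means[(g, "tumor")] - means[(g, "normal")])
         for g in ("g1", "g2")}
```

Here g1 separates the classes far better than g2, so a feature-selection pass would keep g1; on Hadoop the same map and reduce steps run in parallel over dataset shards.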
Hada, Shinnosuke; Ishijima, Muneaki; Kaneko, Haruka; Kinoshita, Mayuko; Liu, Lizu; Sadatsuki, Ryo; Futami, Ippei; Yusup, Anwajan; Takamura, Tomohiro; Arita, Hitoshi; Shiozawa, Jun; Aoki, Takako; Takazawa, Yuji; Ikeda, Hiroshi; Aoki, Shigeki; Kurosawa, Hisashi; Okada, Yasunori; Kaneko, Kazuo
2017-09-12
Medial meniscal extrusion (MME) is associated with progression of medial knee osteoarthritis (OA), but no or little information is available for relationships between MME and osteophytes, which are found in cartilage and bone parts. Because of the limitation in detectability of the cartilage part of osteophytes by radiography or conventional magnetic resonance imaging (MRI), the rate of development and size of osteophytes appear to have been underestimated. Because T2 mapping MRI may enable us to evaluate the cartilage part of osteophytes, we aimed to examine the association between MME and OA-related changes, including osteophytes, by using conventional and T2 mapping MRI. Patients with early-stage knee OA (n = 50) were examined. MRI-detected OA-related changes, in addition to MME, were evaluated according to the Whole-Organ Magnetic Resonance Imaging Score. T2 values of the medial meniscus and osteophytes were measured on T2 mapping images. Osteophytes surgically removed from patients with end-stage knee OA were histologically analyzed and compared with findings derived by radiography and MRI. Medial side osteophytes were detected by T2 mapping MRI in 98% of patients with early-stage knee OA, although the detection rate was 48% by conventional MRI and 40% by radiography. Among the OA-related changes, medial tibial osteophyte distance was most closely associated with MME, as determined by multiple logistic regression analysis, in the patients with early-stage knee OA (β = 0.711, p < 0.001). T2 values of the medial meniscus were directly correlated with MME in patients with early-stage knee OA, who showed ≥ 3 mm of MME (r = 0.58, p = 0.003). The accuracy of osteophyte evaluation by T2 mapping MRI was confirmed by histological analysis of the osteophytes removed from patients with end-stage knee OA. 
Our study demonstrates that medial tibial osteophyte evaluated by T2 mapping MRI is frequently observed in the patients with early-stage knee OA, showing close association with MME, and that MME is positively correlated with the meniscal degeneration.
Laboratory studies of lean combustion
NASA Technical Reports Server (NTRS)
Sawyer, R. F.; Schefer, R. W.; Ganji, A. R.; Daily, J. W.; Pitz, R. W.; Oppenheim, A. K.; Angeli, J. W.
1977-01-01
The fundamental processes controlling lean combustion were observed for better understanding, with particular emphasis on the formation and measurement of gas-phase pollutants, the stability of the combustion process (blowout limits), methods of improving stability, and the application of probe and optical diagnostics for flow field characterization, temperature mapping, and composition measurements. The following areas of investigation are described in detail: (1) axisymmetric, opposed-reacting-jet-stabilized combustor studies; (2) stabilization through heat recirculation; (3) two dimensional combustor studies; and (4) spectroscopic methods. A departure from conventional combustor design to a premixed/prevaporized, lean combustion configuration is attractive for the control of oxides of nitrogen and smoke emissions, the promotion of uniform turbine inlet temperatures, and, possibly, the reduction of carbon monoxide and hydrocarbons at idle.
Mediterranean Land Use and Land Cover Classification Assessment Using High Spatial Resolution Data
NASA Astrophysics Data System (ADS)
Elhag, Mohamed; Boteva, Silvena
2016-10-01
Landscape fragmentation is widespread in Mediterranean regions and imposes substantial complications on several satellite image classification methods. To some extent, high spatial resolution data are able to overcome such complications. To obtain better classification performance in Land Use Land Cover (LULC) mapping, the current research compares different classification methods for LULC mapping using Sentinel-2 satellite data as a source of high spatial resolution. Both pixel-based and object-based classification algorithms were assessed: the pixel-based approach employs the Maximum Likelihood (ML), Artificial Neural Network (ANN), and Support Vector Machine (SVM) algorithms, while the object-based classification uses the Nearest Neighbour (NN) classifier. A Stratified Masking Process (SMP), which integrates a ranking process within the classes based on the spectral fluctuation of the sum of the training and testing sites, was implemented. An analysis of the overall and individual accuracy of the classification results of all four methods reveals that the SVM classifier was the most efficient overall, distinguishing most of the classes with the highest accuracy. NN succeeded in dealing with the artificial surface classes in general, while the agricultural area classes and the forest and semi-natural area classes were segregated successfully with SVM. Furthermore, a comparative analysis indicates that the conventional classification method yielded better accuracy results overall than the SMP method with both classifiers used, ML and SVM.
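Of the pixel-based classifiers compared above, Maximum Likelihood is the easiest to sketch. This toy assumes per-band Gaussian class distributions with diagonal covariance for brevity (full ML classifiers use the full covariance matrix), and the two-band "water"/"vegetation" training pixels are invented for illustration.

```python
import numpy as np

def fit(train_pixels, labels):
    """Per-class mean and variance of each spectral band."""
    stats = {}
    for c in set(labels):
        x = train_pixels[np.asarray(labels) == c]
        stats[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return stats

def classify(pixels, stats):
    """Assign each pixel to the class with the highest Gaussian
    log-likelihood (diagonal-covariance simplification)."""
    def loglik(x, mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var)
                             + (x - mu) ** 2 / var, axis=-1)
    classes = sorted(stats)
    scores = np.stack([loglik(pixels, *stats[c]) for c in classes], axis=-1)
    return [classes[i] for i in np.argmax(scores, axis=-1)]

# Toy two-band training pixels: water is dark, vegetation bright.
train = np.array([[0.10, 0.20], [0.15, 0.25], [0.60, 0.90], [0.65, 0.85]])
labels = ["water", "water", "veg", "veg"]
model = fit(train, labels)
pred = classify(np.array([[0.12, 0.22], [0.62, 0.88]]), model)
```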
Mapping Language to the World: The Role of Iconicity in the Sign Language Input
ERIC Educational Resources Information Center
Perniss, Pamela; Lu, Jenny C.; Morgan, Gary; Vigliocco, Gabriella
2018-01-01
Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on "iconicity" in language, that is, resemblance relationships between form and…
Use of Business-Naming Practices to Delineate Vernacular Regions: A Michigan Example
ERIC Educational Resources Information Center
Liesch, Matthew; Dunklee, Linda M.; Legg, Robert J.; Feig, Anthony D.; Krause, Austin Jena
2015-01-01
This article provides a history of efforts to map vernacular regions as context for offering readers a way of using business directories in order to construct a GIS-based map of vernacular regions. With Michigan as a case study, the article discusses regional-naming conventions, boundaries, and inclusions and omissions of areas from regional…
Longo, Dario Livio; Dastrù, Walter; Consolino, Lorena; Espak, Miklos; Arigoni, Maddalena; Cavallo, Federica; Aime, Silvio
2015-07-01
The objective of this study was to compare a clustering approach to conventional analysis methods for assessing changes in pharmacokinetic parameters obtained from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) during antiangiogenic treatment in a breast cancer model. BALB/c mice bearing established transplantable her2+ tumors were treated with a DNA-based antiangiogenic vaccine or with an empty plasmid (untreated group). DCE-MRI was carried out at 1T by administering a dose of 0.05 mmol/kg of Gadocoletic acid trisodium salt, a Gd-based blood-pool contrast agent (CA). Changes in pharmacokinetic estimates (K(trans) and vp) over a nine-day interval were compared between treated and untreated groups on a voxel-by-voxel basis. The tumor response to therapy was assessed by a clustering approach and compared with conventional summary statistics, with sub-region analysis and with histogram analysis. Both the K(trans) and vp estimates, following blood-pool CA injection, showed marked and spatially heterogeneous changes with antiangiogenic treatment. Averaged values for the whole tumor region, as well as from the rim/core sub-region analysis, were unable to detect the antiangiogenic response. Histogram analysis resulted in significant changes only in the vp estimates (p<0.05). The proposed clustering approach depicted marked changes in both the K(trans) and vp estimates, with significant spatial heterogeneity in vp maps in response to treatment (p<0.05), provided that the DCE-MRI data were properly clustered into three or four sub-regions. This study demonstrated the value of cluster analysis applied to pharmacokinetic DCE-MRI parametric maps for assessing tumor response to antiangiogenic therapy. Copyright © 2015 Elsevier Inc. All rights reserved.
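The clustering idea can be illustrated with a toy example. The abstract does not name the clustering algorithm, so k-means is assumed here, and the (Ktrans, vp) values for three synthetic perfusion sub-regions are invented for illustration.

```python
# Sketch (assumed k-means): voxel-wise (Ktrans, vp) estimates are grouped into
# sub-regions, and treatment response is then assessed per cluster rather than
# from whole-tumour averages that wash out spatial heterogeneity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic tumour: three sub-regions with distinct (Ktrans [1/min], vp) values
core = np.column_stack([rng.normal(0.02, 0.005, 200), rng.normal(0.01, 0.003, 200)])
rim  = np.column_stack([rng.normal(0.15, 0.02, 200),  rng.normal(0.05, 0.01, 200)])
hot  = np.column_stack([rng.normal(0.40, 0.05, 200),  rng.normal(0.12, 0.02, 200)])
voxels = np.vstack([core, rim, hot])          # shape (600, 2)

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(voxels)
for k in range(3):
    sub = voxels[km.labels_ == k]
    print(f"cluster {k}: mean Ktrans={sub[:, 0].mean():.3f}, vp={sub[:, 1].mean():.3f}")
```

Comparing the per-cluster means before and after treatment (rather than one whole-tumour mean) is what lets spatially localized changes reach significance.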
Genomic Tools in Cowpea Breeding Programs: Status and Perspectives
Boukar, Ousmane; Fatokun, Christian A.; Huynh, Bao-Lam; Roberts, Philip A.; Close, Timothy J.
2016-01-01
Cowpea is one of the most important grain legumes in sub-Saharan Africa (SSA). It provides strong support to the livelihood of small-scale farmers through its contributions to their nutritional security, income generation and soil fertility enhancement. Worldwide about 6.5 million metric tons of cowpea are produced annually on about 14.5 million hectares. The low productivity of cowpea is attributable to numerous abiotic and biotic constraints. The abiotic stress factors comprise drought, low soil fertility, and heat while biotic constraints include insects, diseases, parasitic weeds, and nematodes. Cowpea farmers also have limited access to quality seeds of improved varieties for planting. Some progress has been made through conventional breeding at international and national research institutions in the last three decades. Cowpea improvement could also benefit from modern breeding methods based on molecular genetic tools. A number of advances in cowpea genetic linkage maps, and quantitative trait loci associated with some desirable traits such as resistance to Striga, Macrophomina, Fusarium wilt, bacterial blight, root-knot nematodes, aphids, and foliar thrips have been reported. An improved consensus genetic linkage map has been developed and used to identify QTLs of additional traits. In order to take advantage of these developments single nucleotide polymorphism (SNP) genotyping is being streamlined to establish an efficient workflow supported by genotyping support service (GSS)-client interactions. About 1100 SNPs mapped on the cowpea genome were converted by LGC Genomics to KASP assays. Several cowpea breeding programs have been exploiting these resources to implement molecular breeding, especially for MARS and MABC, to accelerate cowpea variety improvement. 
The combination of conventional breeding and molecular breeding strategies, with workflow managed through the CGIAR breeding management system (BMS), promises an increase in the number of improved varieties available to farmers, thereby boosting cowpea production and productivity in SSA. PMID:27375632
Optimized energy of spectral CT for infarct imaging: Experimental validation with human validation.
Sandfort, Veit; Palanisamy, Srikanth; Symons, Rolf; Pourmorteza, Amir; Ahlman, Mark A; Rice, Kelly; Thomas, Tom; Davies-Venn, Cynthia; Krauss, Bernhard; Kwan, Alan; Pandey, Ankur; Zimmerman, Stefan L; Bluemke, David A
Late contrast enhancement visualizes myocardial infarction, but the contrast-to-noise ratio (CNR) is low using conventional CT. The aim of this study was to determine whether spectral CT can improve imaging of myocardial infarction. A canine model of myocardial infarction was produced in 8 animals (90-min occlusion, reperfusion). Later, imaging was performed after contrast injection using CT at 90 kVp/150 kVpSn. The following reconstructions were evaluated: single-energy 90 kVp, mixed, iodine map, and multiple conventional and noise-optimized virtual monoenergetic reconstructions. Regions of interest were measured in infarct and remote regions to calculate the contrast-to-noise ratio (CNR) and the Bhattacharyya distance (a metric of the differentiation between regions). Blinded assessment of image quality was performed. The same reconstruction methods were applied to CT scans of four patients with known infarcts. For the animal studies, the highest CNR for infarct vs. myocardium was achieved in the lowest-keV (40 keV) VMo images (CNR 4.42, IQR 3.64-5.53), which was superior to the 90 kVp, mixed and iodine-map reconstructions (p = 0.008, p = 0.002, p < 0.001, respectively). Compared to 90 kVp and the iodine map, the 40 keV VMo reconstructions showed significantly higher histogram separation (p = 0.042 and p < 0.0001, respectively). The VMo reconstructions received the highest rate of excellent quality scores. A similar pattern was seen in the human studies, with CNRs for infarct maximized at the lowest-keV optimized reconstruction (CNR 4.44, IQR 2.86-5.94). Dual energy in conjunction with noise-optimized monoenergetic post-processing improves the CNR of myocardial infarct delineation by approximately 20-25%. Published by Elsevier Inc.
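The CNR figure of merit used to rank the reconstructions can be computed as below. This is a generic sketch, not the paper's exact formula: the attenuation values are invented, and the remote ROI's standard deviation is used as the noise term (one common convention).

```python
# CNR between an infarct ROI and a remote-myocardium ROI:
# |mean(infarct) - mean(remote)| / noise, with noise taken as the remote ROI's
# standard deviation (an assumed convention; papers differ on the noise term).
import numpy as np

def cnr(infarct_roi, remote_roi):
    return abs(infarct_roi.mean() - remote_roi.mean()) / remote_roi.std()

rng = np.random.default_rng(2)
infarct = rng.normal(180.0, 20.0, 500)   # hypothetical HU values, 40 keV VMI
remote  = rng.normal(100.0, 20.0, 500)
print(f"CNR: {cnr(infarct, remote):.2f}")
```

Lower-keV virtual monoenergetic images raise iodine contrast faster than they raise noise (after noise optimization), which is why the 40 keV reconstructions maximize this ratio.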
Bertleff, Marco; Domsch, Sebastian; Weingärtner, Sebastian; Zapp, Jascha; O'Brien, Kieran; Barth, Markus; Schad, Lothar R
2017-12-01
Artificial neural networks (ANNs) were used for voxel-wise parameter estimation with the combined intravoxel incoherent motion (IVIM) and kurtosis model facilitating robust diffusion parameter mapping in the human brain. The proposed ANN approach was compared with conventional least-squares regression (LSR) and state-of-the-art multi-step fitting (LSR-MS) in Monte-Carlo simulations and in vivo in terms of estimation accuracy and precision, number of outliers and sensitivity in the distinction between grey (GM) and white (WM) matter. Both the proposed ANN approach and LSR-MS yielded visually increased parameter map quality. Estimations of all parameters (perfusion fraction f, diffusion coefficient D, pseudo-diffusion coefficient D*, kurtosis K) were in good agreement with the literature using ANN, whereas LSR-MS resulted in D* overestimation and LSR yielded increased values for f and D*, as well as decreased values for K. Using ANN, outliers were reduced for the parameters f (ANN, 1%; LSR-MS, 19%; LSR, 8%), D* (ANN, 21%; LSR-MS, 25%; LSR, 23%) and K (ANN, 0%; LSR-MS, 0%; LSR, 15%). Moreover, ANN enabled significant distinction between GM and WM based on all parameters, whereas LSR facilitated this distinction only based on D and LSR-MS on f, D and K. Overall, the proposed ANN approach was found to be superior to conventional LSR, posing a powerful alternative to the state-of-the-art method LSR-MS with several advantages in the estimation of IVIM-kurtosis parameters, which might facilitate increased applicability of enhanced diffusion models at clinical scan times. Copyright © 2017 John Wiley & Sons, Ltd.
Mineralogical Mapping in the Cuprite Mining District, Nevada
NASA Technical Reports Server (NTRS)
Goetz, A. F. H.; Srivastava, V.
1985-01-01
The airborne imaging spectrometer (AIS) has provided, for the first time, the possibility of mapping mineralogical constituents of the Earth's surface, and has thus enormously increased the value of remote-sensing data as a tool in the solution of geologic problems. The question addressed with AIS at Cuprite was how well the mineral components at the surface of a hydrothermal alteration zone could be detected, identified and mapped. The question was answered positively and is discussed. A relatively rare mineral, buddingtonite, that could not have been detected by conventional means, was discovered and mapped by the use of AIS.
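One common way to match a pixel spectrum against mineral reference spectra is the spectral angle. The abstract does not state which matching method the AIS analyses used, so this is a generic sketch with made-up band reflectances.

```python
# Spectral-angle matching: the angle between a pixel's spectrum and a reference
# spectrum is insensitive to overall brightness, so an illumination-scaled
# pixel still matches the right mineral. Reflectance values here are invented.
import numpy as np

def spectral_angle(pixel, reference):
    cosang = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))   # radians; smaller = closer

kaolinite = np.array([0.55, 0.50, 0.30, 0.45, 0.40])  # hypothetical references
alunite   = np.array([0.50, 0.48, 0.42, 0.25, 0.38])
pixel     = kaolinite * 1.8 + 0.01                    # brighter kaolinite-like pixel

best = min([("kaolinite", kaolinite), ("alunite", alunite)],
           key=lambda m: spectral_angle(pixel, m[1]))
print("best match:", best[0])
```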
Genetic linkage analysis using pooled DNA and infrared detection of tailed STRP primer patterns
NASA Astrophysics Data System (ADS)
Oetting, William S.; Wildenberg, Scott C.; King, Richard A.
1996-04-01
The mapping of a disease locus to a specific chromosomal region is an important step in the eventual isolation and analysis of a disease causing gene. Conventional mapping methods analyze large multiplex families and/or smaller nuclear families to find linkage between the disease and a chromosome marker that maps to a known chromosomal region. This analysis is time consuming and tedious, typically requiring the determination of 30,000 genotypes or more. For appropriate populations, we have instead utilized pooled DNA samples for gene mapping, which greatly reduces the amount of time necessary for an initial chromosomal screen. This technique assumes a common founder for the disease locus of interest and searches for a region of a chromosome shared between affected individuals. Our analysis involves the PCR amplification of short tandem repeat polymorphisms (STRP) to detect these shared regions. In order to reduce the cost of genotyping, we have designed unlabeled tailed PCR primers which, when combined with a labeled universal primer, provide an alternative to synthesizing custom labeled primers. The STRP pattern is visualized with an infrared fluorescence based automated DNA sequencer and the patterns are quantitated by densitometric analysis of the allele pattern. Differences in the distribution of alleles between pools of affected and unaffected individuals, including a reduction in the number of alleles in the affected pool, indicate the sharing of a region of a chromosome. We have found this method effective for markers 10 - 15 cM away from the disease locus.
A successful development of subtle traps: Chihuido de la Sierra Negra, Neuquen Basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comeron, R.; Valenzuela, M.W.
1996-08-01
Using new trap-search concepts in the Chihuido de la Sierra Negra oil field, it was possible to increase production substantially, from 4400 bbl/day in 1984 to 125,000 bbl/day in 1995. Oil reserves are located within an 8000 ha area situated on the Chihuido de la Sierra Negra anticlinal flank. Success was achieved by using different techniques for subtle-trap detection, namely seismic amplitude mapping and the individualization of different production facies and their predictive mapping. This redirected the search and development toward low structural areas which had not been considered before. The production layers are formed by two lower Cretaceous eolian sandstones called the Avile Member and the Troncoso Lower Member, with thicknesses ranging from 5 to 30 meters. 2D seismic made it possible to individualize the thickest sand areas, some of which turned out to be productive. Using 3D seismic, by means of azimuth and dip maps, fractured areas were detected where fault throws range from 5 to 10 meters. In many of these fractured zones, thin igneous intrusives are emplaced, forming seals. Such determinations make it possible for different oil-water contacts and static pressures to be delimited. Due to the small fault throws, the different blocks could not be detected by conventional mapping methods. The delineation of the field compartmentalization becomes important in the waterflooding stage as well as for the detection of new traps in surrounding areas. The combination of seismic and stratigraphic methods made it possible to discover and develop Argentina's main oil field.
Fast magnetic resonance fingerprinting for dynamic contrast-enhanced studies in mice.
Gu, Yuning; Wang, Charlie Y; Anderson, Christian E; Liu, Yuchi; Hu, He; Johansen, Mette L; Ma, Dan; Jiang, Yun; Ramos-Estebanez, Ciro; Brady-Kalnay, Susann; Griswold, Mark A; Flask, Chris A; Yu, Xin
2018-05-09
The goal of this study was to develop a fast MR fingerprinting (MRF) method for simultaneous T1 and T2 mapping in DCE-MRI studies in mice. MRF sequences based on balanced SSFP and fast imaging with steady-state precession were implemented and evaluated on a 7T preclinical scanner. The readout used a zeroth-moment-compensated variable-density spiral trajectory that fully sampled the entire k-space and the inner 10 × 10 k-space with 48 and 4 interleaves, respectively. In vitro and in vivo studies of mouse brain were performed to evaluate the accuracy of MRF measurements with both fully sampled and undersampled data. The application of MRF to dynamic T1 and T2 mapping in DCE-MRI studies was demonstrated in a mouse model of heterotopic glioblastoma using gadolinium-based and dysprosium-based contrast agents. The T1 and T2 measurements in phantom showed strong agreement between the MRF and the conventional methods. The MRF with spiral encoding allowed up to 8-fold undersampling without loss of measurement accuracy. This enabled simultaneous T1 and T2 mapping with 2-minute temporal resolution in DCE-MRI studies. Magnetic resonance fingerprinting provides the opportunity for dynamic quantification of contrast agent distribution in preclinical tumor models on high-field MRI scanners. © 2018 International Society for Magnetic Resonance in Medicine.
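The core of any MRF reconstruction is dictionary matching: the measured fingerprint is compared against simulated fingerprints and the (T1, T2) of the best-matching entry is reported per voxel. The "simulator" below is a toy two-exponential stand-in, not a Bloch simulation of the actual bSSFP/FISP sequences.

```python
# MRF dictionary matching by maximum inner product of normalized fingerprints.
# toy_fingerprint() is an invented stand-in for a Bloch-equation simulation.
import numpy as np

def toy_fingerprint(t1, t2, n=50):
    t = np.linspace(0.01, 2.0, n)                  # seconds
    return np.exp(-t / t2) * (1 - np.exp(-t / t1))

t1_grid = np.arange(0.2, 2.01, 0.1)                # dictionary T1 values (s)
t2_grid = np.arange(0.02, 0.201, 0.01)             # dictionary T2 values (s)
entries = [(t1, t2) for t1 in t1_grid for t2 in t2_grid]
D = np.array([toy_fingerprint(t1, t2) for t1, t2 in entries])
D /= np.linalg.norm(D, axis=1, keepdims=True)      # normalize each entry

measured = 3.7 * toy_fingerprint(1.0, 0.10)        # arbitrary scaling, as in MRI
measured /= np.linalg.norm(measured)
best = entries[int(np.argmax(D @ measured))]       # best match -> (T1, T2)
print(f"matched T1={best[0]:.1f} s, T2={best[1]:.2f} s")
```

Because matching uses normalized inner products, the overall signal scale drops out, which is what makes quantitative T1/T2 retrieval robust to the heavy spiral undersampling described above.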
Cooper, James; Ding, Yi; Song, Jiuzhou; Zhao, Keji
2017-11-01
Increased chromatin accessibility is a feature of cell-type-specific cis-regulatory elements; therefore, mapping of DNase I hypersensitive sites (DHSs) enables the detection of active regulatory elements of transcription, including promoters, enhancers, insulators and locus-control regions. Single-cell DNase sequencing (scDNase-seq) is a method of detecting genome-wide DHSs when starting with either single cells or <1,000 cells from primary cell sources. This technique enables genome-wide mapping of hypersensitive sites in a wide range of cell populations that cannot be analyzed using conventional DNase I sequencing because of the requirement for millions of starting cells. Fresh cells, formaldehyde-cross-linked cells or cells recovered from formalin-fixed paraffin-embedded (FFPE) tissue slides are suitable for scDNase-seq assays. To generate scDNase-seq libraries, cells are lysed and then digested with DNase I. Circular carrier plasmid DNA is included during subsequent DNA purification and library preparation steps to prevent loss of the small quantity of DHS DNA. Libraries are generated for high-throughput sequencing on the Illumina platform using standard methods. Preparation of scDNase-seq libraries requires only 2 d. The materials and molecular biology techniques described in this protocol should be accessible to any general molecular biology laboratory. Processing of high-throughput sequencing data requires basic bioinformatics skills and uses publicly available bioinformatics software.
User's guide for mapIMG 3--Map image re-projection software package
Finn, Michael P.; Mattli, David M.
2012-01-01
Version 0.0 (1995), Dan Steinwand, U.S. Geological Survey (USGS)/Earth Resources Observation Systems (EROS) Data Center (EDC)--Version 0.0 was a command line version for UNIX that required four arguments: the input metadata, the output metadata, the input data file, and the output destination path. Version 1.0 (2003), Stephen Posch and Michael P. Finn, USGS/Mid-Continent Mapping Center (MCMC)--Version 1.0 added a GUI interface that was built using the Qt library for cross platform development. Version 1.01 (2004), Jason Trent and Michael P. Finn, USGS/MCMC--Version 1.01 suggested bounds for the parameters of each projection. Support was added for larger input files, storage of the last used input and output folders, and for TIFF/GeoTIFF input images. Version 2.0 (2005), Robert Buehler, Jason Trent, and Michael P. Finn, USGS/National Geospatial Technical Operations Center (NGTOC)--Version 2.0 added Resampling Methods (Mean, Mode, Min, Max, and Sum), updated the GUI design, and added the viewer/pre-viewer. The metadata style was changed to XML and was switched to a new naming convention. Version 3.0 (2009), David Mattli and Michael P. Finn, USGS/Center of Excellence for Geospatial Information Science (CEGIS)--Version 3.0 brings optimized resampling methods, an updated GUI, support for less than global datasets, UTM support, and the whole codebase was ported to Qt4.
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map a specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which were located neighboring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size and that the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in mapping homogeneous specific land cover using the optimal parameters, the tradeoff coefficient (C) and kernel width (s).
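The idea of tuning a one-class description with near-target outliers can be sketched as follows. Scikit-learn has no SVDD class, so the closely related OneClassSVM (RBF kernel) stands in for it here, and the 2-D "spectra", class centres and parameter grid are all invented for illustration.

```python
# Sketch of window-based parameter selection for a one-class description:
# outliers drawn from pixels *neighbouring* the target class in feature space
# force a tight boundary. OneClassSVM is an assumed stand-in for SVDD.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
# Target-class ("wheat") training spectra: a cluster in a 2-D feature space
target_train = rng.normal([0.40, 0.60], 0.03, size=(200, 2))
# Validation outliers located just outside the target class
near_outliers = rng.normal([0.55, 0.60], 0.03, size=(50, 2))

best = None
for gamma in [1, 10, 50, 100]:
    m = OneClassSVM(kernel="rbf", nu=0.05, gamma=gamma).fit(target_train)
    # reward accepting targets AND rejecting the near outliers
    score = ((m.predict(target_train) == 1).mean()
             + (m.predict(near_outliers) == -1).mean())
    if best is None or score > best[0]:
        best = (score, gamma, m)

score, gamma, model = best
print(f"selected gamma={gamma}, combined score={score:.2f}")
```

Without the near outliers in the validation set, a loose boundary (small gamma) would score just as well, which is the failure mode of randomly chosen validation pixels that WVS-SVDD addresses.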
Chosen Aspects of the Production of the Basic Map Using Uav Imagery
NASA Astrophysics Data System (ADS)
Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.
2016-06-01
For several years there has been increasing interest in the use of unmanned aerial vehicles for acquiring image data from low altitudes. Considering the cost-effectiveness of the flight time of UAVs vs. conventional airplanes, the use of the former is advantageous when generating large-scale accurate orthophotos. Through the development of UAV imagery, we can update large-scale basic maps. These maps are cartographic products which are used for registration, economic, and strategic planning. On the basis of these maps other cartographic products are produced, for example maps used for building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery for upgrading the basic map. In the research a compact, non-metric camera mounted on a fixed wing powered by an electric motor was used. The tested area covered flat, agricultural and woodland terrain. The processing and analysis of the orthorectification were carried out with the INPHO UASMaster programme. Due to the effect of UAV instability on low-altitude imagery, the use of non-metric digital cameras, and the low-accuracy GPS-INS sensors, the geometry of the images is visibly poorer compared to conventional digital aerial photos (large values of the phi and kappa angles). Therefore, low-altitude images typically require large along- and across-track overlap - usually above 70%. As a result of the research, orthoimages were obtained with a resolution of 0.06 m and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of the orthoimages vs. the basic maps was estimated based on the study and on the available reference sources. It was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approx. 40%.
Reconstructing metastatic seeding patterns of human cancers
Reiter, Johannes G.; Makohon-Moore, Alvin P.; Gerold, Jeffrey M.; Bozic, Ivana; Chatterjee, Krishnendu; Iacobuzio-Donahue, Christine A.; Vogelstein, Bert; Nowak, Martin A.
2017-01-01
Reconstructing the evolutionary history of metastases is critical for understanding their basic biological principles and has profound clinical implications. Genome-wide sequencing data has enabled modern phylogenomic methods to accurately dissect subclones and their phylogenies from noisy and impure bulk tumour samples at unprecedented depth. However, existing methods are not designed to infer metastatic seeding patterns. Here we develop a tool, called Treeomics, to reconstruct the phylogeny of metastases and map subclones to their anatomic locations. Treeomics infers comprehensive seeding patterns for pancreatic, ovarian, and prostate cancers. Moreover, Treeomics correctly disambiguates true seeding patterns from sequencing artifacts; 7% of variants were misclassified by conventional statistical methods. These artifacts can skew phylogenies by creating illusory tumour heterogeneity among distinct samples. In silico benchmarking on simulated tumour phylogenies across a wide range of sample purities (15–95%) and sequencing depths (25-800 × ) demonstrates the accuracy of Treeomics compared with existing methods. PMID:28139641
Kozakov, Dima; Hall, David R.; Napoleon, Raeanne L.; Yueh, Christine; Whitty, Adrian; Vajda, Sandor
2016-01-01
A powerful early approach to evaluating the druggability of proteins involved determining the hit rate in NMR-based screening of a library of small compounds. Here we show that a computational analog of this method, based on mapping proteins using small molecules as probes, can reliably reproduce druggability results from NMR-based screening, and can provide a more meaningful assessment in cases where the two approaches disagree. We apply the method to a large set of proteins. The results show that, because the method is based on the biophysics of binding rather than on empirical parameterization, meaningful information can be gained about classes of proteins and classes of compounds beyond those resembling validated targets and conventionally druglike ligands. In particular, the method identifies targets that, while not druggable by druglike compounds, may become druggable using compound classes such as macrocycles or other large molecules beyond the rule-of-five limit. PMID:26230724
MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow
NASA Astrophysics Data System (ADS)
Samani, N.; Kompani-Zare, M.; Barry, D. A.
2004-01-01
Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
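A minimal numeric illustration of why mapping the radial equation onto a rectilinear grid works: for steady confined radial flow, substituting x = ln(r) turns d/dr(r dh/dr) = 0 into d²h/dx² = 0, which a uniform Cartesian grid solves essentially exactly. This shows the general log-transform idea only, not the paper's full set of derived scales, and the numbers are illustrative.

```python
# Steady confined radial flow on a uniform grid in x = ln(r), checked against
# the Thiem solution h(r) = h_w + Q/(2*pi*T) * ln(r/r_w).
import numpy as np

Q, T = 100.0, 50.0                  # pumping rate and transmissivity (consistent units)
r_w, r_e, h_w = 0.1, 1000.0, 10.0   # well radius, outer radius, head at the well

n = 101
x = np.linspace(np.log(r_w), np.log(r_e), n)   # uniform rectilinear grid in x = ln r

# Thiem head at the outer boundary closes the problem
h_e = h_w + Q / (2 * np.pi * T) * np.log(r_e / r_w)

# Standard 3-point finite differences for d2h/dx2 = 0 with Dirichlet ends
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = h_w, h_e
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
h_fd = np.linalg.solve(A, b)

h_thiem = h_w + Q / (2 * np.pi * T) * np.log(np.exp(x) / r_w)
print("max abs error vs Thiem:", np.max(np.abs(h_fd - h_thiem)))
```

The head is linear in ln(r), so the log-spaced grid resolves the steep gradient at the well with uniform spacing; a uniform grid in r would need very fine cells near r_w to do the same.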
NASA Astrophysics Data System (ADS)
Tavakkoli Estahbanat, A.; Dehghani, M.
2017-09-01
In interferometry, phases are wrapped into the interval 0-2π; recovering the integer number of phase cycles lost in wrapping is the goal of unwrapping algorithms. Although the density of points in conventional interferometry is high, this does not help in some cases, such as large temporal baselines or noisy interferograms: noisy pixels not only fail to improve the results but also introduce unwrapping errors during interferogram unwrapping. In the PS technique, the sparsity of the PS pixels makes phase unwrapping difficult, and because the data are irregularly spaced, conventional methods are ineffective. Unwrapping techniques are divided into path-independent and path-dependent approaches. A region-growing method, which is path-dependent, has been used to unwrap PS data. In this paper, the idea of the extended Kalman filter (EKF) is generalized to PS data; the algorithm accounts for the nonlinearity of the PS unwrapping problem as well as of the conventional unwrapping problem. A pulse-pair method enhanced with singular value decomposition (SVD) is used to estimate the spectral shift from the interferometric power spectral density in 7×7 local windows. Furthermore, a hybrid cost map is used to guide the unwrapping path. The algorithm was implemented on simulated PS data: to form a sparse data set, a few points were randomly selected from a regular grid, and the RMSE between the results and the true unambiguous phases is presented to validate the approach. The results of the algorithm were essentially identical to the true unwrapped phases.
Phase unwrapping with a virtual Hartmann-Shack wavefront sensor.
Akondi, Vyas; Falldorf, Claas; Marcos, Susana; Vohnsen, Brian
2015-10-05
The use of a spatial light modulator for implementing a digital phase-shifting (PS) point diffraction interferometer (PDI) allows tunability of the fringe spacing and achieves phase shifting without mechanically moving parts. However, a small amount of detector or scatter noise can affect the accuracy of wavefront sensing. Here, a novel method of wavefront reconstruction incorporating a virtual Hartmann-Shack (HS) wavefront sensor is proposed that allows easy tuning of several wavefront sensor parameters. The proposed method was tested and compared with a Fourier unwrapping method implemented on a digital PS PDI. Rewrapping the Fourier-reconstructed wavefronts produced phase maps that matched the original wrapped phase well, and the performance was found to be more stable and accurate than that of conventional methods. Through simulation studies, the superiority of the proposed virtual HS phase unwrapping method over the Fourier unwrapping method in the presence of noise is shown. Further, combining the two methods could improve accuracy when the signal-to-noise ratio is sufficiently high.
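A minimal 1-D illustration of the problem both methods address: wrapped phases lose the integer multiples of 2π, and unwrapping restores them. NumPy's simple Itoh-style unwrap suffices on this clean toy ramp; the paper's virtual HS and Fourier methods are far more noise-robust 2-D schemes.

```python
# Wrapping maps the phase into (-pi, pi]; unwrapping re-adds the 2*pi jumps.
import numpy as np

true_phase = np.linspace(0.0, 8 * np.pi, 200)   # smooth ramp over 4 cycles
wrapped = np.angle(np.exp(1j * true_phase))     # wrapped into (-pi, pi]
recovered = np.unwrap(wrapped)                  # Itoh-style 1-D unwrapping

print("max abs error:", np.max(np.abs(recovered - true_phase)))
```

This simple approach assumes neighbouring samples differ by less than π; detector or scatter noise breaks that assumption, which is precisely where more robust unwrapping schemes are needed.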
Serag, Maged F.; Abadi, Maram; Habuchi, Satoshi
2014-01-01
Single-molecule localization and tracking has been used to translate spatiotemporal information of individual molecules to map their diffusion behaviours. However, accurate analysis of diffusion behaviours and including other parameters, such as the conformation and size of molecules, remain as limitations to the method. Here, we report a method that addresses the limitations of existing single-molecular localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields. PMID:25283876
Fluorescence spectroscopy using indocyanine green for lymph node mapping
NASA Astrophysics Data System (ADS)
Haj-Hosseini, Neda; Behm, Pascal; Shabo, Ivan; Wârdell, Karin
2014-02-01
The principle of cancer treatment has for years been radical resection of the primary tumor. In oncologic surgeries where the affected cancer site is close to the lymphatic system, it is equally important to examine the draining lymph nodes for metastasis (lymph node mapping). As a replacement for conventional radioactive labeling, indocyanine green (ICG) has shown successful results in lymph node mapping; however, most ICG fluorescence detection techniques developed so far are based on camera imaging. In this work, fluorescence spectroscopy using a fiber-optical probe was evaluated on a tissue-like ICG phantom with ICG concentrations of 6-64 μM and on breast tissue from five patients. Fiber-optic spectroscopy was able to detect ICG fluorescence at low intensities; it is therefore expected to improve on the detection threshold of conventional imaging systems when used intraoperatively. The probe allows spectral characterization of the fluorescence and navigation in the tissue, as opposed to camera imaging, which is limited to a view of the surface of the tissue.
A spectroscopic search for faint secondaries in cataclysmic variables
NASA Astrophysics Data System (ADS)
Vande Putte, D.; Smith, Robert Connon; Hawkins, N. A.; Martin, J. S.
2003-06-01
The secondary in cataclysmic variables (CVs) is usually detected by cross-correlation of the CV spectrum with that of a K or M dwarf template, to produce a radial velocity curve. Although this method has demonstrated its power, it has its limits in the case of noisy spectra, such as are found when the secondary is faint. A method of coadding spectra, called skew mapping, has been proposed in the past. Gradually, examples of its application are being published; none the less, so far no journal article has described the technique in detail. To answer this need, this paper explores in detail the capabilities of skew mapping when determining the amplitude of the radial velocity for faint secondaries. It demonstrates the power of the method over techniques that are more conventional, when the signal-to-noise ratio is poor. The paper suggests an approach to assessing the quality of results. This leads in the case of the investigated objects to a first tier of results, where we find K2 = 127 ± 23 km s⁻¹ for SY Cnc, K2 = 144 ± 18 km s⁻¹ for RW Sex and K2 = 262 ± 14 km s⁻¹ for UX UMa. These we believe to be the first direct determinations of K2 for these objects. Furthermore, we also obtain K2 = 263 ± 30 km s⁻¹ for RW Tri, close to a skew mapping result obtained elsewhere. In the first three cases, we use these results to derive the mass of the white dwarf companion. A second tier of results includes UU Aqr, EX Hya and LX Ser, for which we propose more tentative values of K2. Clear failures of the method are also discussed (EF Eri, VV Pup and SW Sex).
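The conventional cross-correlation step that skew mapping builds on can be sketched in a few lines: a Doppler shift between spectrum and template appears as the lag of the cross-correlation peak. The line profile, wavelength grid and imposed shift below are invented for illustration, and the lag is recovered in pixels rather than converted to km s⁻¹.

```python
# Recover a spectral shift as the lag of the cross-correlation peak.
import numpy as np

wav = np.linspace(6000.0, 6100.0, 2000)   # wavelength grid, Angstrom (toy)
template = 1.0 - 0.6 * np.exp(-0.5 * ((wav - 6050.0) / 0.8) ** 2)  # absorption line

shift_pix = 30                             # imposed Doppler shift, in pixels
spectrum = np.roll(template, shift_pix)    # "observed" shifted spectrum

a = template - template.mean()             # remove the continuum offset
b = spectrum - spectrum.mean()
xcorr = np.correlate(b, a, mode="full")
lag = int(np.argmax(xcorr)) - (len(a) - 1) # peak position -> shift in pixels
print("recovered shift:", lag)
```

With a faint, noisy secondary the per-spectrum peak becomes unreliable; skew mapping effectively coadds the cross-correlation information over the orbit instead of trusting each spectrum individually.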
Wang, Jingbo; Templeton, Dennise C.; Harris, David B.
2015-07-30
Using empirical matched field processing (MFP), we compare 4 yr of continuous seismic data to a set of 195 master templates from within an active geothermal field and identify over 140 per cent more events than were identified using traditional detection and location techniques alone. In managed underground reservoirs, a substantial fraction of seismic events can be excluded from the official catalogue due to an inability to clearly identify seismic-phase onsets. Empirical MFP can improve the effectiveness of current seismic detection and location methodologies by using conventionally located events with higher signal-to-noise ratios as master events to define wavefield templates that could then be used to map normally discarded indistinct seismicity. Since MFP does not require picking, it can be carried out automatically and rapidly once suitable templates are defined. In this application, we extend MFP by constructing local-distance empirical master templates using Southern California Earthquake Data Center archived waveform data of events originating within the Salton Sea Geothermal Field. We compare the empirical templates to continuous seismic data collected between 1 January 2008 and 31 December 2011. The empirical MFP method successfully identifies 6249 additional events, while the original catalogue reported 4352 events. The majority of these new events are lower-magnitude events with magnitudes between M0.2 and M0.8. The increased spatial-temporal resolution of the microseismicity map within the geothermal field illustrates how empirical MFP, when combined with conventional methods, can significantly improve seismic network detection capabilities, which can aid in long-term sustainability and monitoring of managed underground reservoirs.
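A toy single-channel analogue of template-based detection (empirical MFP generalises this to multichannel wavefield templates): scan a noisy continuous record for a buried copy of a master-event waveform by correlating the template along the record. Waveform, noise level and onset are all synthetic.

```python
# Matched-filter scan: the correlation score peaks where the record contains
# a (noisy) copy of the master-event template, without any phase picking.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(200)
template = np.sin(2 * np.pi * t / 20.0) * np.exp(-t / 50.0)  # master waveform

record = 0.2 * rng.standard_normal(5000)       # continuous "background" record
onset = 3210
record[onset:onset + len(template)] += template  # hidden low-SNR event

tz = (template - template.mean()) / template.std()
scores = np.array([np.dot(record[i:i + len(template)], tz)
                   for i in range(len(record) - len(template))])
detected = int(np.argmax(scores))
print("detected onset:", detected)
```

Thresholding the score trace instead of taking a single argmax yields a catalogue of candidate events, which is how the template set recovers the low-magnitude events that picking-based detection discards.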
Miura, Naoki; Kucho, Ken-Ichi; Noguchi, Michiko; Miyoshi, Noriaki; Uchiumi, Toshiki; Kawaguchi, Hiroaki; Tanimoto, Akihide
2014-01-01
The microminipig, which weighs less than 10 kg at an early stage of maturity, has been reported as a potential experimental model animal. Its extremely small size and other distinct characteristics suggest the possibility of a number of differences between the genome of the microminipig and that of conventional pigs. In this study, we analyzed the genomes of two healthy microminipigs using the SOLiD™ next-generation sequencing system. We then compared the obtained genomic sequences with a genomic database for the domestic pig (Sus scrofa). The mapping coverage of sequenced tags from the microminipig to conventional pig genomic sequences was greater than 96%, and we detected no clear, substantial genomic variance in these data. These results may indicate that the distinct characteristics of the microminipig derive from small-scale alterations in the genome, such as single nucleotide polymorphisms or translational modifications, rather than large-scale deletion or insertion polymorphisms. Further investigation of the entire genomic sequence of the microminipig with methods enabling deeper coverage is required to elucidate the genetic basis of its distinct phenotypic traits. Copyright © 2014 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Secure positioning technique based on the encrypted visible light map
NASA Astrophysics Data System (ADS)
Lee, Y. U.; Jung, G.
2017-01-01
To overcome the performance degradation of the conventional visible light (VL) positioning system, which is caused by co-channel interference from adjacent lights and by the irregularity of the VL reception position in the three-dimensional (3-D) VL channel, this paper proposes a secure positioning technique based on a two-dimensional (2-D) encrypted VL map, implements it as a prototype on a specific embedded positioning system, and verifies it through performance tests. The test results show that the proposed technique achieves a performance enhancement of more than 21.7% over the conventional approach in a real positioning environment, and that the well-known PN code is the optimal stream-encryption key for good VL positioning.
Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim
2017-06-01
Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
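The statistical backbone of the approach, the ACF of a mass-density map, can be sketched with the Wiener-Khinchin theorem: the autocorrelation is the inverse Fourier transform of the power spectrum, azimuthally averaged under the isotropy assumption. This is a much-simplified 2-D stand-in for the paper's STEM-projection plus AFM-thickness reconstruction; the band-limited random field is a synthetic placeholder for real data.

```python
import numpy as np

def radial_acf(image):
    """Autocorrelation of a 2-D map via the Wiener-Khinchin theorem,
    azimuthally averaged under the isotropy assumption."""
    f = image - image.mean()
    power = np.abs(np.fft.fft2(f)) ** 2
    acf = np.fft.ifft2(power).real
    acf /= acf[0, 0]                      # normalize by the zero-lag value
    acf = np.fft.fftshift(acf)            # move zero lag to the center
    cy, cx = acf.shape[0] // 2, acf.shape[1] // 2
    y, x = np.indices(acf.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # average all pixels sharing the same integer lag distance
    return np.bincount(r.ravel(), acf.ravel()) / np.bincount(r.ravel())

# Demo: a smooth (band-limited) random field has a decaying radial ACF
rng = np.random.default_rng(3)
noise = rng.standard_normal((64, 64))
kx = np.fft.fftfreq(64)[None, :]
ky = np.fft.fftfreq(64)[:, None]
field = np.fft.ifft2(np.fft.fft2(noise)
                     * np.exp(-(kx ** 2 + ky ** 2) / (2 * 0.05 ** 2))).real
acf_r = radial_acf(field)
print(acf_r[0])  # 1.0 at zero lag, decaying with lag distance
```

The azimuthal average is where the isotropy assumption enters: a single statistically isotropic realization suffices to estimate the full radial ACF, which is what frees the method from tomographic reconstruction.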
NASA Astrophysics Data System (ADS)
Kilic, Veli Tayfun; Unal, Emre; Demir, Hilmi Volkan
2017-05-01
In this work, we investigate a method for vessel detection and coil powering in an all-surface inductive heating system composed of outer-squircle coils. Besides conventional circular coils, coils with different shapes, such as outer-squircle coils, are used to enable efficient all-surface inductive heating. The validity of the method, which relies on measuring the inductance and resistance values of a loaded coil at different frequencies, is experimentally demonstrated for a coil whose shape differs from the conventional circular coil. A simple setup with a small coil was constructed to model an all-surface inductive heating system. Inductance and resistance maps were generated by measuring the coil's inductance and resistance at different frequencies while it was loaded by a plate made of different materials and located at various positions. The results show that, in an induction hob with various coil geometries, it is possible to detect a vessel's presence, identify its material type and specify its position on the hob surface by considering the inductance and resistance of the coil measured at no fewer than two different frequencies. The studied method is important for enabling safe, efficient and flexible heating in an all-surface inductive heating system by automatically detecting the vessel's presence and powering only the coils that are loaded by the vessel, with predetermined current levels.
NASA Astrophysics Data System (ADS)
Liu, Yu; Shi, Zhanjie; Wang, Bangbing; Yu, Tianxiang
2018-01-01
As a high-resolution method, GPR has been extensively used in archaeological surveys. However, a conventional GPR profile can only provide limited geometric information, such as the shape or location of an interface, and cannot give the distribution of physical properties, which could help identify historical remains more directly. A common way for GPR to map parameter distributions is common-midpoint velocity analysis, but it provides limited resolution. Another research hotspot, full-waveform inversion, is unstable and relatively dependent on the initial model. Coring can give direct information at the drilling site, but accurate results are limited to the few boreholes. In this paper, we propose a new scheme to enhance the imaging and characterization of archaeological targets by fusing GPR and coring data. The scheme mainly involves impedance inversion of conventional common-offset GPR data, which uses well logs to compensate the GPR data and finally obtains a high-resolution estimate of permittivity. The core analysis results also contribute to the interpretation of the inversion result. To test this method, we performed a case study at the Mudu city site in Suzhou, China. The results provide clear images of the ancient city's moat and wall subsurface and improve the characterization of archaeological targets. This shows that the method is effective and feasible for archaeological exploration.
Visualizing Similarity of Appearance by Arrangement of Cards
Nakatsuji, Nao; Ihara, Hisayasu; Seno, Takeharu; Ito, Hiroshi
2016-01-01
This study proposes a novel method to extract the configuration of a psychological space by directly measuring subjects' similarity ratings without computational work. Although multidimensional scaling (MDS) is well known as a conventional method for extracting psychological spaces, it requires many pairwise evaluations: the time taken increases in proportion to the square of the number of objects. The proposed method asks subjects to arrange cards on a poster sheet according to the degree of similarity of the objects. To compare the performance of the proposed method with the conventional one, we developed similarity maps of typefaces through the proposed method and through non-metric MDS. We calculated the trace correlation coefficient among all combinations of the configurations for both methods to evaluate the degree of similarity of the obtained configurations. The threshold value of the trace correlation coefficient for statistically discriminating similar configurations was decided based on random data. The ratio of trace correlation coefficients exceeding the threshold value was 62.0%, indicating that the configurations of the typefaces obtained by the proposed method closely resembled those obtained by non-metric MDS. The required duration for the proposed method was approximately one third of that of non-metric MDS. In addition, all distances between objects in all the data for both methods were calculated. The frequency of short distances in the proposed method was lower than that of non-metric MDS, so relatively small differences among objects were likely to be emphasized in the configurations produced by the proposed method. The card-arrangement method we propose here thus serves as an easier and time-saving tool for obtaining psychological structures in fields related to similarity of appearance. PMID:27242611
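For readers unfamiliar with MDS, the classical (metric) variant can be sketched in a few lines: double-center the squared-distance matrix and take the top eigenvectors. The paper uses non-metric MDS, which optimizes rank order rather than distances; this simpler textbook cousin is shown only to make the pairwise-distance-to-configuration step concrete.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (metric) MDS: embed n objects in k dimensions from an
    n x n pairwise-distance matrix D via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Demo: recover a planar configuration from its distance matrix
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D)
D2 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D2))  # True: distances reproduced up to rotation
```

The recovered configuration matches the original only up to rotation and reflection, which is why configuration comparisons (as in the study) use rotation-invariant statistics such as the trace correlation coefficient.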
Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Fan, Rukun; Geng, Weidong
We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music-to-motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music-to-motion matching quality rating function by learning the music-to-motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performances. To generate an optimal sequence of dance motion segments to match a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music-to-motion matching quality and the visual smoothness of the resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results from several peer methods, using motions captured from professional human dancers' performances as the gold standard. We also conducted several medium-scale user studies to explore how, perceptually, our dance motion synthesis method can outperform existing methods in synthesizing dance motions to match a piece of music. These user studies produced very positive results in our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.
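The segment-selection step described above is, at its core, a Viterbi-style dynamic program over motion segments: each step trades off how well a segment matches the current stretch of music against how smoothly it joins the previous segment. The sketch below is a generic illustration with made-up score tables, not the authors' constraint-based procedure.

```python
def best_motion_sequence(match, smooth):
    """Viterbi-style DP: match[t][s] scores motion segment s against
    music unit t; smooth[p][s] scores the transition p -> s.
    Returns the segment sequence maximizing total score."""
    T, S = len(match), len(match[0])
    score = [list(match[0])]
    back = []
    for t in range(1, T):
        row, brow = [], []
        for s in range(S):
            prev = max(range(S), key=lambda p: score[-1][p] + smooth[p][s])
            row.append(score[-1][prev] + smooth[prev][s] + match[t][s])
            brow.append(prev)
        score.append(row)
        back.append(brow)
    s = max(range(S), key=lambda q: score[-1][q])
    seq = [s]
    for brow in reversed(back):        # backtrack through the pointers
        s = brow[s]
        seq.append(s)
    return seq[::-1]

# Demo: 3 music units, 2 segments, neutral transitions
match = [[1, 0], [0, 1], [1, 0]]
smooth = [[0, 0], [0, 0]]
seq = best_motion_sequence(match, smooth)
print(seq)  # [0, 1, 0]
```

With nonzero transition scores, the same recursion starts preferring smoother joins over locally best matches, which is exactly the trade-off the paper's procedure balances.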
NASA Astrophysics Data System (ADS)
Han, Yang; Wang, Shutao; Payen, Thomas; Konofagou, Elisa
2017-04-01
The successful clinical application of high intensity focused ultrasound (HIFU) ablation depends on reliable monitoring of the lesion formation. Harmonic motion imaging guided focused ultrasound (HMIgFUS) is an ultrasound-based elasticity imaging technique, which monitors HIFU ablation based on the stiffness change of the tissue instead of the echo intensity change in conventional B-mode monitoring, rendering it potentially more sensitive to lesion development. Our group has shown that predicting the lesion location based on the radiation force-excited region is feasible during HMIgFUS. In this study, the feasibility of a fast lesion mapping method is explored to directly monitor the lesion map during HIFU. The harmonic motion imaging (HMI) lesion map was generated by subtracting the reference HMI image from the present HMI peak-to-peak displacement map, as streamed on the computer display. The dimensions of the HMIgFUS lesions were compared against gross pathology. Excellent agreement was found between the lesion depth (r 2 = 0.81, slope = 0.90), width (r 2 = 0.85, slope = 1.12) and area (r 2 = 0.58, slope = 0.75). In vivo feasibility was assessed in a mouse with a pancreatic tumor. These findings demonstrate that HMIgFUS can successfully map thermal lesions and monitor lesion development in real time in vitro and in vivo. The HMIgFUS technique may therefore constitute a novel clinical tool for HIFU treatment monitoring.
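The real-time lesion-mapping step reduces to an image subtraction: compare the current HMI peak-to-peak displacement map against a pre-ablation reference, since ablated (stiffer) tissue moves less. The sketch below is a minimal illustration; the 30% displacement-drop threshold and the toy 4x4 maps are invented, not the paper's calibrated values.

```python
import numpy as np

def hmi_lesion_map(ref_disp, cur_disp, drop=0.3):
    """Lesion mask from the relative decrease in HMI peak-to-peak
    displacement versus a pre-ablation reference map."""
    change = (ref_disp - cur_disp) / ref_disp
    return change > drop            # True where tissue stiffened

ref = np.full((4, 4), 10.0)         # microns, before ablation
cur = ref.copy()
cur[1:3, 1:3] = 5.0                 # ablated core moves only half as much
mask = hmi_lesion_map(ref, cur)
print(int(mask.sum()))  # 4 lesion pixels
```

Because each frame needs only a subtraction and a threshold against the streamed displacement map, the lesion map can be updated at the imaging frame rate, which is what makes the monitoring real-time.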
NASA Astrophysics Data System (ADS)
Swartwout, Michael Alden
New paradigms in space missions require radical changes in spacecraft operations. In the past, operations were insulated from competitive pressures of cost, quality and time by system infrastructures, technological limitations and historical precedent. However, modern demands now require that operations meet competitive performance goals. One target for improvement is the telemetry downlink, where significant resources are invested to acquire thousands of measurements for human interpretation. This cost-intensive method is used because conventional operations are not based on formal methodologies but on experiential reasoning and incrementally adapted procedures. Therefore, to improve the telemetry downlink it is first necessary to invent a rational framework for discussing operations. This research explores operations as a feedback control problem, develops the conceptual basis for the use of spacecraft telemetry, and presents a method to improve performance. The method is called summarization, a process to make vehicle data more useful to operators. Summarization enables rational trades for telemetry downlink by defining and quantitatively ranking these elements: all operational decisions, the knowledge needed to inform each decision, and all possible sensor mappings to acquire that knowledge. Summarization methods were implemented for the Sapphire microsatellite; conceptual health management and system models were developed and a degree-of-observability metric was defined. An automated tool was created to generate summarization methods from these models. Methods generated using a Sapphire model were compared against the conventional operations plan. Summarization was shown to identify the key decisions and isolate the most appropriate sensors. Secondly, a form of summarization called beacon monitoring was experimentally verified. Beacon monitoring automates the anomaly detection and notification tasks and migrates these responsibilities to the space segment. 
A set of experiments using Sapphire demonstrated significant cost and time savings compared to conventional operations. Summarization is based on rational concepts for defining and understanding operations. Therefore, it enables additional trade studies that were formerly not possible and also can form the basis for future detailed research into spacecraft operations.
Seismic hazard assessment of Syria using seismicity, DEM, slope, active tectonic and GIS
NASA Astrophysics Data System (ADS)
Ahmad, Raed; Adris, Ahmad; Singh, Ramesh
2016-07-01
In the present work, we discuss the use of integrated remote sensing and Geographical Information System (GIS) techniques for the evaluation of seismic hazard areas in Syria. The present study is the first effort to create a seismic hazard map of Syria with the help of GIS. In the proposed approach, we have used Aster satellite data, digital elevation data (30 m resolution), earthquake data, and active tectonic maps. Many important factors for the evaluation of seismic hazard were identified, and the corresponding thematic data layers (past earthquake epicenters, active faults, digital elevation model, and slope) were generated. A numerical rating scheme was developed for spatial data analysis using GIS to rank the parameters to be included in the evaluation of seismic hazard. The resulting earthquake potential map delineates the area into different relative susceptibility classes: high, moderate, low and very low. The potential earthquake map was validated by correlating the obtained classes with the local probabilities produced by conventional analysis of observed earthquakes. Using earthquake data for Syria, peak ground acceleration (PGA) data were introduced into the model to develop the final seismic hazard map, based on Gutenberg-Richter parameters (a and b values) and the concepts of local probability and recurrence time. The application of the proposed technique to the Syrian region indicates that this method provides a good estimate of the seismic hazard compared to maps developed with traditional techniques (deterministic (DSHA) and probabilistic (PSHA) seismic hazard analysis). For the first time, we have used numerous remote sensing and GIS parameters in the preparation of a seismic hazard map, which is found to be very realistic.
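The numerical rating scheme amounts to a weighted overlay: each thematic layer is scored, the scores are combined with weights, and the combined score is cut into susceptibility classes. The weights, layer names, and class breaks below are illustrative placeholders, not the study's values.

```python
import numpy as np

def susceptibility(layers, weights, breaks=(0.25, 0.5, 0.75)):
    """Weighted GIS overlay: layers are 0-1 scored rasters; the weighted
    mean is cut into classes 0=very low, 1=low, 2=moderate, 3=high."""
    score = sum(w * l for w, l in zip(weights, layers)) / sum(weights)
    return np.digitize(score, breaks)

# Demo with three random 0-1 scored thematic rasters
rng = np.random.default_rng(2)
fault_proximity = rng.random((3, 3))      # scored from active-fault map
epicenter_density = rng.random((3, 3))    # scored from earthquake catalog
slope = rng.random((3, 3))                # scored from the DEM
classes = susceptibility([fault_proximity, epicenter_density, slope],
                         weights=[0.5, 0.3, 0.2])
print(classes)  # integer class raster, values in 0..3
```

Validation, as in the study, would then cross-tabulate these classes against independently derived local probabilities from the observed seismicity.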
Bayless, E. Randall; Arihood, Leslie D.; Reeves, Howard W.; Sperl, Benjamin J.S.; Qi, Sharon L.; Stipe, Valerie E.; Bunch, Aubrey R.
2017-01-18
As part of the National Water Availability and Use Program established by the U.S. Geological Survey (USGS) in 2005, this study took advantage of about 14 million records from State-managed collections of water-well drillers’ records and created a database of hydrogeologic properties for the glaciated United States. The water-well drillers’ records were standardized to be relatively complete and error-free and to provide consistent variables and naming conventions that span all State boundaries. Maps and geospatial grids were developed for (1) total thickness of glacial deposits, (2) total thickness of coarse-grained deposits, (3) specific-capacity-based transmissivity and hydraulic conductivity, and (4) texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity. The information included in these maps and grids is required for most assessments of groundwater availability, in addition to having applications to studies of groundwater flow and transport. The texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity were based on an assumed range of hydraulic conductivity values for coarse- and fine-grained deposits and should only be used with complete awareness of the methods used to create them. However, the maps and grids of texture-based estimated equivalent hydraulic conductivity and transmissivity may be useful for application to areas where a range of measured values is available for re-scaling. Maps of hydrogeologic information for some States are presented as examples in this report, but maps and grids for all States are available electronically at the project Web site (USGS Glacial Aquifer System Groundwater Availability Study, http://mi.water.usgs.gov/projects/WaterSmart/Map-SIR2015-5105.html) and the Science Base Web site, https://www.sciencebase.gov/catalog/item/58756c7ee4b0a829a3276352.
NASA Astrophysics Data System (ADS)
Lutich, Andrey
2017-07-01
This research considers the problem of generating compact vector representations of physical design patterns for analytics purposes in the semiconductor patterning domain. PatterNet uses a deep artificial neural network to learn a mapping of physical design patterns to a compact Euclidean hyperspace. Distances among mapped patterns in this space correspond to dissimilarities among patterns defined at the time of network training. Once the mapping network has been trained, PatterNet embeddings can be used as feature vectors with standard machine learning algorithms, and pattern search, comparison, and clustering become trivial problems. PatterNet is inspired by concepts developed within the framework of generative adversarial networks as well as by FaceNet. Our method lets a deep neural network (DNN) learn the compact representation directly, by supplying it with pairs of design patterns and the dissimilarity between these patterns as defined by a user. In the simplest case, the dissimilarity is represented by the area of the XOR of the two patterns. It is important to realize that our PatterNet approach is very different from the methods developed for deep learning on image data. In contrast to "conventional" pictures, patterns in the CAD world are lists of polygon vertex coordinates. The method relies solely on the promise of deep learning to discover the internal structure of the incoming data and learn its hierarchical representations. The artificial intelligence arising from the combination of PatterNet and clustering analysis very precisely follows the intuition of patterning/optical proximity correction experts, paving the way toward human-like and human-friendly engineering tools.
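The simplest training label mentioned above, the XOR area of two patterns, is easy to make concrete once patterns are rasterized to binary masks. The rasterization itself (from polygon vertex lists) is elided here; the two shifted squares are a hypothetical example.

```python
import numpy as np

def xor_dissimilarity(a, b):
    """Dissimilarity between two rasterized layout patterns: the number
    of pixels covered by exactly one of them (the XOR area)."""
    return int(np.logical_xor(a, b).sum())

a = np.zeros((8, 8), bool)
a[2:6, 2:6] = True          # a 4x4 square feature
b = np.zeros((8, 8), bool)
b[2:6, 3:7] = True          # the same square shifted right by one pixel
print(xor_dissimilarity(a, b))  # 8: one column uncovered, one newly covered
print(xor_dissimilarity(a, a))  # 0: identical patterns
```

Pairs of patterns labeled with this scalar are what the embedding network is trained to reproduce as Euclidean distances in the learned hyperspace.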
Orthogonal on-off control of radar pulses for the suppression of mutual interference
NASA Astrophysics Data System (ADS)
Kim, Yong Cheol
1998-10-01
Intelligent vehicles of the future will be guided by radars and other sensors to avoid obstacles. When multiple vehicles move simultaneously in autonomous navigation mode, mutual interference among car radars becomes a serious problem. An obstacle is illuminated with electromagnetic pulses from several radars, so the signal at a radar receiver is actually a mixture of the self-reflection and the reflections of interfering pulses emitted by others. When standardized pulse-type radars are employed on vehicles for obstacle avoidance, so that the self-pulse and interfering pulses have identical pulse repetition intervals, this synchronous interference (SI) is very difficult to separate from the true reflection. We present a method of suppressing such synchronous interference. By controlling the pulse emission of a radar in a binary orthogonal ON/OFF pattern, the true self-reflection can be separated from the false one. Two range maps are generated: the TRM (true-reflection map) and the SIM (synchronous-interference map). The TRM is updated during every ON interval and the SIM during every OFF interval of the self-radar. The SIM represents the SI of interfering radars, while the TRM records a mixture of the true self-reflection and SI. Hence the true obstacles can be identified by a set subtraction operation. The performance of the proposed method is compared with that of the conventional M-of-N method. Bayesian analysis shows that the probability of false alarm is improved by roughly three to six orders of magnitude (10^3 to 10^6), while the deterioration in the probability of detection is negligible.
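The TRM/SIM separation described above reduces to a set subtraction once each map is modeled as a set of detected ranges. The range values below are made up for the demo; real maps would be accumulated over many ON/OFF intervals.

```python
def true_obstacles(trm, sim):
    """Orthogonal ON/OFF detection: the TRM (gathered while the
    self-radar transmits) mixes self-reflections with synchronous
    interference; the SIM (gathered while it is silent) holds
    interference only. Set subtraction leaves genuine obstacles."""
    return trm - sim

trm = {12.5, 30.0, 47.5}   # ranges (m) detected during ON intervals
sim = {30.0}               # ranges detected during OFF intervals
print(sorted(true_obstacles(trm, sim)))  # [12.5, 47.5]
```

The 30.0 m return appears in both maps, so it is attributed to an interfering radar and removed; only the ranges unique to the TRM survive as true obstacles.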
Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty
Lu, Yang; Loizou, Philipos C.
2011-01-01
Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it is binary: it takes the value 1 if the local SNR exceeds 0 dB and the value 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods incorporating SNR uncertainty were derived. In particular, the soft masking method that weights the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
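The two gain functions named in the abstract are simple enough to write out directly: the binary MAP/IdBM gain passes a time-frequency bin only when its local SNR exceeds 0 dB (a power ratio of 1), while the Wiener gain snr/(1+snr) is its soft counterpart. SNRs here are linear power ratios; the sample values are arbitrary.

```python
import numpy as np

def binary_mask_gain(snr):
    """MAP / ideal-binary-mask gain: 1 where the local SNR exceeds
    0 dB (linear power ratio > 1), else 0."""
    return (snr > 1.0).astype(float)

def wiener_gain(snr):
    """Soft mask: the Wiener gain snr/(1+snr), which the paper shows
    equals weighting by the probability that the local SNR exceeds
    0 dB under its F-distribution model."""
    return snr / (1.0 + snr)

snr = np.array([0.1, 1.0, 4.0, 10.0])      # linear power ratios
print(binary_mask_gain(snr))               # [0. 0. 1. 1.]
print(np.round(wiener_gain(snr), 3))       # [0.091 0.5   0.8   0.909]
```

Applied to the noisy magnitude-squared spectrum bin by bin, the binary gain discards uncertain bins outright, whereas the Wiener gain attenuates them in proportion to the estimated SNR, which is the "SNR uncertainty" behavior the paper formalizes.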
Varga-Szemes, Akos; Simor, Tamas; Lenkey, Zsofia; van der Geest, Rob J; Kirschner, Robert; Toth, Levente; Brott, Brigitta C.; Ada, Elgavish; Elgavish, Gabriel A.
2014-01-01
Purpose: To study the feasibility of a myocardial infarct (MI) quantification method (signal-intensity-based percent infarct mapping, SI-PIM) that can evaluate not only the size but also the density distribution of the MI. Methods: In 14 male swine, MI was generated by 90 minutes of closed-chest balloon occlusion followed by reperfusion. Seven (n=7) or 56 (n=7) days after reperfusion, Gd-DTPA bolus- and continuous-infusion-enhanced late gadolinium enhancement (LGE) MRI and R1 mapping were carried out, and post-mortem triphenyl-tetrazolium-chloride (TTC) staining was performed. MI was quantified using binary (2 or 5 standard deviations, SD), SI-PIM and R1-PIM methods. The infarct fraction (IF) and infarct-involved voxel fraction (IIVF) were determined by each MRI method. The bias of each method was compared to the TTC technique. Results: The accuracy of MI quantification did not depend on the method of contrast administration or the age of the MI. IFs obtained by either of the two PIM methods were not statistically different from the IFs derived from the TTC measurements at either MI age. IFs obtained from the binary 2SD method overestimated the IF obtained from TTC. IIVF did not vary among the three different PIM methods, but with the binary methods the IIVF gradually decreased as the threshold limit increased. Conclusions: The advantage of SI-PIM over the conventional binary method is its ability to represent not only the IF but also the density distribution of the MI. Since the SI-PIM methods are based on a single LGE acquisition, the bolus-data-based SI-PIM method can effortlessly be incorporated into the clinical image post-processing procedure. PMID:24718787
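The contrast between the binary kSD method and percent-infarct mapping can be sketched numerically: the binary method counts every voxel above a threshold as 100% infarct, while a PIM-style method assigns each voxel a fractional infarct content. The linear interpolation between reference intensities below is a simplified stand-in for the paper's SI-PIM calibration, and all intensity values are invented.

```python
import numpy as np

def infarct_fraction_binary(si, remote_mean, remote_sd, k=2):
    """Binary kSD method: voxels brighter than mean + k*SD of remote
    (healthy) myocardium count as fully infarcted."""
    return float((si > remote_mean + k * remote_sd).mean())

def infarct_fraction_pim(si, healthy_si, infarct_si):
    """PIM-style method: each voxel carries a fractional infarct
    content, here by linear interpolation between a healthy and a
    fully infarcted reference intensity (simplified sketch)."""
    p = np.clip((si - healthy_si) / (infarct_si - healthy_si), 0.0, 1.0)
    return float(p.mean())

si = np.array([0.0, 25.0, 50.0, 100.0])    # voxel signal intensities
print(infarct_fraction_binary(si, remote_mean=0.0, remote_sd=10.0))  # 0.75
print(infarct_fraction_pim(si, healthy_si=0.0, infarct_si=100.0))    # 0.4375
```

The toy numbers reproduce the reported behavior in miniature: the binary 2SD count (0.75) exceeds the density-weighted fraction (0.4375) because partially infarcted voxels are counted as whole.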
Injection molding lens metrology using software configurable optical test system
NASA Astrophysics Data System (ADS)
Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian
2016-10-01
Optical plastic lenses produced by injection molding machines possess numerous advantages: light weight, impact resistance, low cost, etc. The measuring methods in the optical shop are mainly interferometry and profilometry. However, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, and can be used to measure large-diameter mirrors, aspheric and freeform surfaces rapidly, robustly and accurately. In addition to the conventional phase-shifting method, we propose another data collection method called dot-matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting distortion-mapping method is not only simple to operate but also offers high conversion precision. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dot-matrix projection method has high accuracy and that SCOTS has important significance for on-line detection in the optical shop.
Comparative analysis of ROS-based monocular SLAM methods for indoor navigation
NASA Astrophysics Data System (ADS)
Buyval, Alexander; Afanasyev, Ilya; Magid, Evgeni
2017-03-01
This paper presents a comparison of the four most recent ROS-based monocular SLAM-related methods: ORB-SLAM, REMODE, LSD-SLAM, and DPPTAM, and analyzes their feasibility for a mobile robot application in an indoor environment. We tested these methods using video data recorded from a conventional wide-angle full HD webcam with a rolling shutter. The camera was mounted on a human-operated prototype of an unmanned ground vehicle, which followed a closed-loop trajectory. Both feature-based methods (ORB-SLAM, REMODE) and direct SLAM-related algorithms (LSD-SLAM, DPPTAM) demonstrated reasonably good results in the detection of volumetric objects, corners, obstacles and other local features. However, we encountered difficulties in reconstructing the homogeneously colored walls typical of offices, since all of these methods left empty spaces in the reconstructed sparse 3D scene. This may cause collisions of an autonomously guided robot with featureless walls and thus limits the applicability of maps obtained by the considered monocular SLAM-related methods for indoor robot navigation.
Juchem, Christoph; Umesh Rudrapatna, S; Nixon, Terence W; de Graaf, Robin A
2015-01-15
Gradient-echo echo-planar imaging (EPI) is the primary method of choice in functional MRI and other methods relying on fast MRI to image brain activation and connectivity. However, the high susceptibility of EPI towards B0 magnetic field inhomogeneity poses serious challenges. Conventional magnetic field shimming with low-order spherical harmonic (SH) functions is capable of compensating shallow field distortions, but performs poorly for global brain shimming or on specific areas with strong susceptibility-induced B0 distortions such as the prefrontal cortex (PFC). Excellent B0 homogeneity has been demonstrated recently in the human brain at 7 Tesla with the DYNAmic Multi-coIl TEchnique (DYNAMITE) for magnetic field shimming (J Magn Reson (2011) 212:280-288). Here, we report the benefits of DYNAMITE shimming for multi-slice EPI and T2* mapping. A standard deviation of 13 Hz was achieved for the residual B0 distribution in the human brain at 7 Tesla with DYNAMITE shimming, 60% lower than with conventional shimming that employs static zero through third order SH shapes. The residual field inhomogeneity with SH shimming led to an average 8 mm shift at acquisition parameters commonly used for fMRI, which was reduced to 1.5-3 mm with DYNAMITE shimming. T2* values obtained from the prefrontal and temporal cortices with DYNAMITE shimming were 10-50% longer than those measured with SH shimming. The reduction of the confounding macroscopic B0 field gradients with DYNAMITE shimming thereby promises improved access to the relevant microscopic T2* effects. The combination of high spatial resolution and DYNAMITE shimming allows largely artifact-free EPI and T2* mapping throughout the brain, including prefrontal and temporal lobe areas. 
DYNAMITE shimming is expected to critically benefit a wide range of MRI applications that rely on excellent B0 magnetic field conditions including EPI-based fMRI to study various cognitive processes and assessing large-scale brain connectivity in vivo. As such, DYNAMITE shimming has the potential to replace conventional SH shim systems in human MR scanners. Copyright © 2014 Elsevier Inc. All rights reserved.
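The T2* values discussed above come from fitting a mono-exponential decay S(TE) = S0 * exp(-TE / T2*) to multi-echo gradient-echo data. A minimal per-voxel sketch of that fit, using hypothetical echo times and a synthetic signal (not the study's data):

```python
import numpy as np

def fit_t2star(te, signal):
    """Fit S(TE) = S0 * exp(-TE / T2*) via log-linear least squares.

    te: echo times in seconds, shape (n,)
    signal: signal magnitudes at each TE, shape (n,)
    Returns (s0, t2star).
    """
    # Log-transform turns the decay into a line: ln S = ln S0 - TE / T2*
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return np.exp(intercept), -1.0 / slope

# Synthetic voxel: S0 = 100, T2* = 30 ms, echoes at 5-40 ms (illustrative values)
te = np.array([0.005, 0.010, 0.020, 0.030, 0.040])
signal = 100.0 * np.exp(-te / 0.030)
s0, t2star = fit_t2star(te, signal)
```

In practice this fit is repeated for every voxel to produce the T2* map; residual macroscopic B0 gradients shorten the apparent T2*, which is why better shimming lengthens the measured values.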
MO-F-CAMPUS-I-05: Quantitative ADC Measurement of Esophageal Cancer Before and After Chemoradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; UT MD Anderson Cancer Center, Houston, TX; Son, JB
2015-06-15
Purpose: We investigated whether quantitative diffusion imaging can be used as an imaging biomarker for early prediction of treatment response in esophageal cancer. Methods: Eight patients with esophageal cancer underwent baseline and interim MRI studies during chemoradiation on a 3T whole-body MRI scanner with an 8-channel torso phased-array coil. Each MRI study contained two axial diffusion-weighted imaging (DWI) series: a conventional DWI sequence and a reduced field-of-view DWI sequence (FOCUS), each with varying b-values. ADC maps with two b-values were computed from conventional DWI images using a mono-exponential model. For each DWI sequence, a separate ADCall was computed by fitting the signal intensity of images with all the b-values to a single exponential model. For the FOCUS sequence, a bi-exponential model was used to extract perfusion and diffusion coefficients (ADCperf and ADCdiff) and their contributions to the signal decay. A board-certified radiologist contoured the tumor region, and mean ADC values and standard deviations of tumor and muscle ROIs were recorded from the different ADC maps. Results: Our results showed that (1) the magnitude of ADCs from the same ROIs obtained by the different analysis methods can be substantially different. (2) For a given method, the change between the baseline and interim muscle ADCs was relatively small (≤10%). In contrast, the change between the baseline and interim tumor ADCs was substantially larger, with the change in ADCdiff by FOCUS DWI showing the largest percentage change of 73.2%. (3) The range of the relative change of a specific parameter also differed across patients. Conclusion: Presently, we do not have the final pathological confirmation of the treatment response for all the patients. However, for the few patients whose surgical specimens are available, the quantitative ADC changes have been found to be useful as a potential predictor of treatment response.
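The two-b-value mono-exponential ADC referred to above has a closed-form expression: with S(b) = S0 * exp(-b * ADC), two measurements give ADC = ln(S_low / S_high) / (b_high - b_low). A minimal sketch with hypothetical b-values and signal intensities (illustrative, not the study's data):

```python
import numpy as np

def adc_two_point(s_low, s_high, b_low, b_high):
    """Mono-exponential ADC from two b-values:
    S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S_low / S_high) / (b_high - b_low)
    """
    return np.log(s_low / s_high) / (b_high - b_low)

# Synthetic signals for ADC = 1.5e-3 mm^2/s (a typical tissue-range value),
# measured at b = 0 and b = 800 s/mm^2
adc_true = 1.5e-3
s0, s800 = 1000.0, 1000.0 * np.exp(-800 * adc_true)
adc = adc_two_point(s0, s800, 0, 800)
```

The ADCall and bi-exponential (ADCperf/ADCdiff) fits mentioned in the abstract generalize this to all b-values via nonlinear least squares rather than a two-point formula.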
Christen, T; Pannetier, N A; Ni, W W; Qiu, D; Moseley, M E; Schuff, N; Zaharchuk, G
2014-04-01
In the present study, we describe a fingerprinting approach to analyze the time evolution of the MR signal and retrieve quantitative information about the microvascular network. We used a Gradient Echo Sampling of the Free Induction Decay and Spin Echo (GESFIDE) sequence and defined a fingerprint as the ratio of signals acquired pre- and post-injection of an iron-based contrast agent. We then simulated the same experiment with an advanced numerical tool that takes a virtual voxel containing blood vessels as input, then computes microscopic magnetic fields and water diffusion effects, and eventually derives the expected MR signal evolution. The parameter inputs of the simulations (cerebral blood volume [CBV], mean vessel radius [R], and blood oxygen saturation [SO2]) were varied to obtain a dictionary of all possible signal evolutions. The best fit between the observed fingerprint and the dictionary was then determined by using least square minimization. This approach was evaluated in 5 normal subjects and the results were compared to those obtained by using more conventional MR methods, steady-state contrast imaging for CBV and R and a global measure of oxygenation obtained from the superior sagittal sinus for SO2. The fingerprinting method enabled the creation of high-resolution parametric maps of the microvascular network showing expected contrast and fine details. Numerical values in gray matter (CBV=3.1±0.7%, R=12.6±2.4μm, SO2=59.5±4.7%) are consistent with literature reports and correlated with conventional MR approaches. SO2 values in white matter (53.0±4.0%) were slightly lower than expected. Numerous improvements can easily be made and the method should be useful to study brain pathologies. Copyright © 2013 Elsevier Inc. All rights reserved.
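The dictionary-matching step described above (least-squares comparison of the observed fingerprint against all simulated signal evolutions) can be sketched as follows. The parameter grid and dictionary entries here are random stand-ins, not the authors' numerically simulated signals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter grid (CBV %, R um, SO2 %) and a stand-in dictionary:
# each row plays the role of one simulated pre/post-contrast signal-ratio curve
params = [(cbv, r, so2)
          for cbv in (2.0, 3.0, 4.0)
          for r in (10.0, 12.0, 15.0)
          for so2 in (55.0, 60.0, 65.0)]
dictionary = rng.normal(size=(len(params), 64))

def match_fingerprint(measured, dictionary, params):
    """Least-squares dictionary match: return the parameter triple of the
    entry minimizing the sum of squared residuals to the measurement."""
    residuals = np.sum((dictionary - measured) ** 2, axis=1)
    return params[int(np.argmin(residuals))]

# A measurement equal to entry 5 plus small noise should map back to entry 5
measured = dictionary[5] + 0.01 * rng.normal(size=64)
best = match_fingerprint(measured, dictionary, params)
```

Repeating the match voxel by voxel yields the parametric CBV, R, and SO2 maps; the grid spacing of the dictionary sets the quantization of the recovered parameters.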
Landsat for practical forest type mapping - A test case
NASA Technical Reports Server (NTRS)
Bryant, E.; Dodge, A. G., Jr.; Warren, S. D.
1980-01-01
Computer-classified Landsat maps are compared with a recent conventional inventory of forest lands in northern Maine. Over the 196,000-hectare area mapped, estimates of the areas of softwood, mixed wood, and hardwood forest obtained by a supervised classification of the Landsat data and by a standard inventory (based on aerial photointerpretation, probability-proportional-to-prediction field sampling, and a standard forest measurement program) are found to agree to within 5%. The cost of the Landsat maps is estimated at $0.065/hectare. It is concluded that satellite techniques are worth developing for forest inventories, although they are not yet refined enough to be incorporated into current practical inventories.
floodX: urban flash flood experiments monitored with conventional and alternative sensors
NASA Astrophysics Data System (ADS)
Moy de Vitry, Matthew; Dicht, Simon; Leitão, João P.
2017-09-01
The data sets described in this paper provide a basis for developing and testing new methods for monitoring and modelling urban pluvial flash floods. Pluvial flash floods are a growing hazard to property and inhabitants' well-being in urban areas. However, the lack of appropriate data collection methods is often cited as an impediment for reliable flood modelling, thereby hindering the improvement of flood risk mapping and early warning systems. The potential of surveillance infrastructure and social media is starting to draw attention for this purpose. In the floodX project, 22 controlled urban flash floods were generated in a flood response training facility and monitored with state-of-the-art sensors as well as standard surveillance cameras. With these data, it is possible to explore the use of video data and computer vision for urban flood monitoring and modelling. The floodX project stands out as the largest documented flood experiment of its kind, providing both conventional measurements and video data in parallel and at high temporal resolution. The data set used in this paper is available at https://doi.org/10.5281/zenodo.830513.
Cartagena, Alexander; Hernando-Pérez, Mercedes; Carrascosa, José L; de Pablo, Pedro J; Raman, Arvind
2013-06-07
Understanding the relationships between viral material properties (stiffness, strength, charge density, adhesion, hydration, viscosity, etc.), structure (protein sub-units, genome, surface receptors, appendages), and functions (self-assembly, stability, disassembly, infection) is of significant importance in physical virology and nanomedicine. Conventional Atomic Force Microscopy (AFM) methods have measured a single physical property such as the stiffness of the entire virus from nano-indentation at a few points which severely limits the study of structure-property-function relationships. We present an in vitro dynamic AFM technique operating in the intermittent contact regime which synthesizes anharmonic Lorentz-force excited AFM cantilevers to map quantitatively at nanometer resolution the local electro-mechanical force gradient, adhesion, and hydration layer viscosity within individual φ29 virions. Furthermore, the changes in material properties over the entire φ29 virion provoked by the local disruption of its shell are studied, providing evidence of bacteriophage depressurization. The technique significantly generalizes recent multi-harmonic theory (A. Raman, et al., Nat. Nanotechnol., 2011, 6, 809-814) and enables high-resolution in vitro quantitative mapping of multiple material properties within weakly bonded viruses and nanoparticles with complex structure that otherwise cannot be observed using standard AFM techniques.
Dispersible oxygen microsensors map oxygen gradients in three-dimensional cell cultures.
Lesher-Pérez, Sasha Cai; Kim, Ge-Ah; Kuo, Chuan-Hsien; Leung, Brendan M; Mong, Sanda; Kojima, Taisuke; Moraes, Christopher; Thouless, M D; Luker, Gary D; Takayama, Shuichi
2017-09-26
Phase fluorimetry, unlike the more commonly used intensity-based measurement, is not affected by differences in light paths from culture vessels or by optical attenuation through dense 3D cell cultures and hydrogels, thereby minimizing dependence on signal intensity for accurate measurements. This work describes the use of phase fluorimetry on oxygen-sensor microbeads to perform oxygen measurements in different microtissue culture environments. In one example, cell spheroids were observed to deplete oxygen from the cell-culture medium filling the bottom of conventional microwells within minutes, whereas oxygen concentrations remained close to ambient levels for several days in hanging-drop cultures. By dispersing multiple oxygen microsensors in cell-laden hydrogels, we also mapped cell-generated oxygen gradients. The spatial oxygen mapping was sufficiently precise to enable the use of computational models of oxygen diffusion and uptake to give estimates of the cellular oxygen uptake rate and the half-saturation constant. The results show the importance of integrated design and analysis of 3D cell cultures from both biomaterial and oxygen supply aspects. While this paper specifically tests spheroids and cell-laden gel cultures, the described methods should be useful for measuring pericellular oxygen concentrations in a variety of biomaterials and culture formats.
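An uptake rate together with a half-saturation constant, as estimated above, implies a Michaelis-Menten-type uptake term in the diffusion-uptake model. A minimal sketch of that term with hypothetical parameter values (v_max and k_half below are illustrative, not the paper's estimates):

```python
def mm_uptake(c, v_max, k_half):
    """Michaelis-Menten-type cellular oxygen uptake:
    rate = v_max * c / (k_half + c).
    At c == k_half the rate is half of v_max; at high c it saturates."""
    return v_max * c / (k_half + c)

# Hypothetical kinetics: v_max = 2.0, k_half = 5.0 (arbitrary units)
half_rate = mm_uptake(5.0, v_max=2.0, k_half=5.0)   # exactly v_max / 2
near_max = mm_uptake(500.0, v_max=2.0, k_half=5.0)  # approaches v_max
```

Fitting this term inside a reaction-diffusion model to the measured concentration field is what lets the dispersed microsensor readings constrain both parameters at once.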
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
The increasing complexity of industrial products and manufacturing processes challenges conventional statistics-based quality management approaches under dynamic production. An integrated Bayesian network and big data analytics approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, quality-related data of large volume and variety generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification, and reasoning, are embedded into the Reduce process. Relying on the Bayesian network's ability to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, a Bayesian network of factors affecting quality is built from a prior probability distribution and modified with the posterior probability distribution. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the increase in computing nodes. The proposed model is also shown to be feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the Bayesian network method offers a whole new perspective on manufacturing quality control.
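The map/reduce split described above can be illustrated with plain Python in place of Hadoop. The quality records, keys, and the mean aggregation below are hypothetical stand-ins; the paper embeds Bayesian network learning, not a simple mean, in the Reduce phase:

```python
from collections import defaultdict

# Hypothetical quality records: (process_step, measured_deviation)
records = [("welding", 0.12), ("cutting", 0.05), ("welding", 0.20),
           ("painting", 0.02), ("cutting", 0.07), ("welding", 0.10)]

def map_phase(record):
    """Map: emit a (key, value) pair keyed by process step."""
    step, deviation = record
    return (step, deviation)

def reduce_phase(pairs):
    """Reduce: group values by key and aggregate per group (here a mean;
    the full approach would run Bayesian network learning here instead)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in grouped.items()}

means = reduce_phase(map(map_phase, records))
```

In a real Hadoop deployment the map and reduce functions run in parallel across nodes, which is where the reported near-linear speed-up with node count comes from.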
NASA Astrophysics Data System (ADS)
Ramirez-Lopez, Leonardo; Alexandre Dematte, Jose
2010-05-01
There is consensus in the scientific community about the great need for spatial soil information. Conventional mapping methods are time consuming and involve high costs. Digital soil mapping has emerged as an area in which soil mapping is optimized by the application of mathematical and statistical approaches, together with expert knowledge in pedology. In this sense, the objective of the study was to develop a methodology for the spatial prediction of soil classes by simultaneously using soil spectroscopy methodologies tied to fieldwork, spectral data from a satellite image, and terrain attributes. The studied area is located in São Paulo State and comprised 473 ha covered by a regular grid (100 x 100 m). At each grid node, soil samples were collected at two depths (layers A and B). A total of 206 samples were extracted from transect sections and submitted to soil analysis (clay, Al2O3, Fe2O3, SiO2, TiO2, and a weathering index). The first analog soil class map (ASC-N) contains only soil information ranging from orders to subgroups of the USDA Soil Taxonomy System. The second map (ASC-H) contains additional information related to soil attributes such as color, ferric levels, and base sum. For the elaboration of the digital soil maps, the data were divided into three groups: i) predicted soil attributes of layer B (related to soil weathering), obtained using a local soil spectral library; ii) spectral band data extracted from a Landsat image; and iii) terrain parameters. This information was summarized by a principal component analysis (PCA) within each group. Digital soil maps were generated by supervised classification using a maximum likelihood method. The training information for this classification was extracted from five toposequences based on the analog soil class maps. The spectral models of soil weathering attributes showed high predictive performance with low error (R2 = 0.71 to 0.90). 
The spatial prediction of these attributes also showed high performance (validations with R2 > 0.78). These models allowed an increase in the spatial resolution of soil weathering information. On the other hand, the comparison between the analog and digital soil maps showed a global accuracy of 69% for the ASC-N map and 62% for the ASC-H map, with kappa indices of 0.52 and 0.45, respectively.
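The PCA-plus-maximum-likelihood classification pipeline described above can be sketched with NumPy. The two synthetic covariate classes below are illustrative stand-ins, not the study's spectral or terrain data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training pixels: two soil classes with distinct covariate means
class_a = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.3, size=(50, 3))
class_b = rng.normal(loc=[2.0, 2.0, 2.0], scale=0.3, size=(50, 3))
X = np.vstack([class_a, class_b])

# PCA: project centered covariates onto their two leading principal components
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt[:2].T

def ml_classify(x, stats):
    """Gaussian maximum-likelihood rule: assign x to the class whose
    per-class normal model gives the highest log-likelihood."""
    best, best_ll = None, -np.inf
    for label, (mu, cov) in stats.items():
        diff = x - mu
        ll = -0.5 * (np.log(np.linalg.det(cov))
                     + diff @ np.linalg.solve(cov, diff))
        if ll > best_ll:
            best, best_ll = label, ll
    return best

stats = {
    "A": (scores[:50].mean(axis=0), np.cov(scores[:50].T)),
    "B": (scores[50:].mean(axis=0), np.cov(scores[50:].T)),
}
pred = [ml_classify(s, stats) for s in scores]
accuracy = np.mean([p == t for p, t in zip(pred, ["A"] * 50 + ["B"] * 50)])
```

In the study, the per-class statistics come from training pixels along the five toposequences, and the rule is applied to every grid cell to produce the digital soil class map.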
NASA Astrophysics Data System (ADS)
Simatos, N.; Perivolaropoulos, L.
2001-01-01
We use the publicly available code CMBFAST, as modified by Pogosian and Vachaspati, to simulate the effects of wiggly cosmic strings on the cosmic microwave background (CMB). Using the modified CMBFAST code, which takes into account vector modes and models wiggly cosmic strings by the one-scale model, we go beyond the angular power spectrum to construct CMB temperature maps with a resolution of a few degrees. The statistics of these maps are then studied using conventional and recently proposed statistical tests optimized for the detection of hidden temperature discontinuities induced by the Gott-Kaiser-Stebbins effect. We show, however, that these realistic maps cannot be distinguished in a statistically significant way from purely Gaussian maps with an identical power spectrum.
Gambi, Cecilia M C; Vannoni, Maurizio; Sordini, Andrea; Molesini, Giuseppe
2014-02-01
An interferometric method to monitor the thinning process of vertical soap films from a water solution of surfactant materials is reported. Raw data maps of the optical path difference introduced by the film are obtained by conventional phase-shift interferometry. Off-line re-processing of such raw data, taking into account the layered structure of soap films, leads to an accurate measurement of the geometrical thickness. As an example of data acquisition and processing, the measuring chain is demonstrated on perfluoropolyether surfactants; the section profile of vertical films is monitored from drawing to the black-film state, and quantitative data on the dynamics of the thinning process are presented. The interferometric method proves effective for the task and lends itself to further investigation of the physical properties of soap films.
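Conventional phase-shift interferometry, as used above, recovers the wrapped phase from several frames acquired with known phase shifts. A minimal four-step sketch with a synthetic interferogram model (the bias, modulation, and phase value are hypothetical):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Four-step phase-shifting formula: with frames at 0, 90, 180, and 270
    degree shifts, the wrapped phase is atan2(I4 - I2, I1 - I3)."""
    return math.atan2(i4 - i2, i1 - i3)

def intensity(phi, shift, bias=2.0, mod=1.0):
    # Interferogram model: I = bias + mod * cos(phi + shift)
    return bias + mod * math.cos(phi + shift)

phi_true = 0.7
frames = [intensity(phi_true, s)
          for s in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
phi = four_step_phase(*frames)
```

The recovered phase map is proportional to the optical path difference; converting it to geometrical thickness requires the layered-film refractive model described in the abstract.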
Lai, Yiu Wai; Krause, Michael; Savan, Alan; Thienhaus, Sigurd; Koukourakis, Nektarios; Hofmann, Martin R; Ludwig, Alfred
2011-10-01
A high-throughput characterization technique based on digital holography for mapping film thickness in thin-film materials libraries was developed. Digital holographic microscopy is used for fully automatic measurements of the thickness of patterned films with nanometer resolution. The method has several significant advantages over conventional stylus profilometry: it is contactless and fast, substrate bending is compensated, and the experimental setup is simple. Patterned films prepared by different combinatorial thin-film approaches were characterized to investigate and demonstrate this method. The results show that this technique is valuable for the quick, reliable and high-throughput determination of the film thickness distribution in combinatorial materials research. Importantly, it can also be applied to thin films that have been structured by shadow masking.
3D chemical imaging in the laboratory by hyperspectral X-ray computed tomography
Egan, C. K.; Jacques, S. D. M.; Wilson, M. D.; Veale, M. C.; Seller, P.; Beale, A. M.; Pattrick, R. A. D.; Withers, P. J.; Cernik, R. J.
2015-01-01
We report the development of laboratory based hyperspectral X-ray computed tomography which allows the internal elemental chemistry of an object to be reconstructed and visualised in three dimensions. The method employs a spectroscopic X-ray imaging detector with sufficient energy resolution to distinguish individual elemental absorption edges. Elemental distributions can then be made by K-edge subtraction, or alternatively by voxel-wise spectral fitting to give relative atomic concentrations. We demonstrate its application to two material systems: studying the distribution of catalyst material on porous substrates for industrial scale chemical processing; and mapping of minerals and inclusion phases inside a mineralised ore sample. The method makes use of a standard laboratory X-ray source with measurement times similar to that required for conventional computed tomography. PMID:26514938
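The K-edge subtraction step mentioned above amounts to a voxel-wise difference of attenuation maps reconstructed from energy bins just below and just above an element's K-edge; voxels with a large positive jump are attributed to that element. The tiny maps and threshold below are hypothetical:

```python
import numpy as np

def k_edge_subtraction(mu_below, mu_above):
    """K-edge subtraction: the jump in attenuation across an element's
    K-edge isolates that element's contribution to each voxel."""
    return mu_above - mu_below

# Hypothetical 2x2 voxel attenuation maps just below/above a K-edge (a.u.)
mu_below = np.array([[1.0, 1.1], [0.9, 1.0]])
mu_above = np.array([[1.0, 2.6], [0.9, 2.4]])   # edge jump in two voxels
element_map = k_edge_subtraction(mu_below, mu_above)
mask = element_map > 0.5   # threshold picks out voxels containing the element
```

The voxel-wise spectral fitting alternative described in the abstract replaces this two-bin difference with a fit over the full measured spectrum, yielding relative atomic concentrations rather than a binary mask.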
NASA Astrophysics Data System (ADS)
Shi, J.; Donahue, N. M.; Klima, K.; Blackhurst, M.
2016-12-01
To trade off the global impacts of greenhouse gases against the highly local impacts of conventional air pollution, researchers require a method to compare global and regional impacts. Unfortunately, we are not aware of a method that allows these to be compared "apples-to-apples". In this research we propose a three-step model to compare possible city-wide actions to reduce greenhouse gases and conventional air pollutants. We focus on Pittsburgh, PA, a city with consistently poor air quality that is interested in reducing both greenhouse gases and conventional air pollutants. First, we use the 2013 Pittsburgh Greenhouse Gas Inventory to update the Blackhurst et al. model and estimate the greenhouse gas abatement potentials and implementation costs of proposed greenhouse gas reduction efforts. Second, we use field tests for PM2.5, NOx, SOx, organic carbon (OC), and elemental carbon (EC) to inform a land-use regression model for local air pollution at a 100 m x 100 m spatial resolution, which, combined with a social cost of air pollution model (EASIUR), allows us to calculate economic social damages. Third, we combine these two models into a three-dimensional greenhouse gas cost abatement curve to understand the implementation costs and social benefits, in terms of air quality improvement and greenhouse gas abatement, of each potential intervention. We anticipate that such results could provide policy-makers with insights for green city development.
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, going beyond the two scales of conventional coarse-graining strategies; moreover, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
Bai, Yu; Iwasaki, Yuki; Kanaya, Shigehiko; Zhao, Yue; Ikemura, Toshimichi
2014-01-01
With the remarkable increase in genomic sequence data from a wide range of species, novel tools are needed for comprehensive analyses of these big sequence data. The Self-Organizing Map (SOM) is an effective tool for clustering and visualizing high-dimensional data, such as oligonucleotide composition, on a single map. By modifying the conventional SOM, we previously developed the Batch-Learning SOM (BLSOM), which allows classification of sequence fragments according to species based solely on oligonucleotide composition. In the present study, we introduce the oligonucleotide BLSOM used for characterization of vertebrate genome sequences. We first analyzed pentanucleotide compositions in 100 kb sequences derived from a wide range of vertebrate genomes, and then the compositions in the human and mouse genomes, in order to investigate an efficient method for detecting differences between closely related genomes. BLSOM can recognize the species-specific key combination of oligonucleotide frequencies in each genome, called a "genome signature," as well as regions specifically enriched in transcription-factor-binding sequences. Because its classification and visualization power is very high, BLSOM is an efficient and powerful tool for extracting a wide range of information from massive amounts of genomic sequence (i.e., big sequence data).
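The core SOM update (pull the best-matching node and its Gaussian-weighted neighbors toward each input vector) can be sketched as follows. The 1-D map size, training schedule, and toy "genome" frequency vectors are all illustrative and far simpler than BLSOM's batch-learning scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(data, n_nodes=10, epochs=50, lr=0.5, radius=1.0):
    """Minimal 1-D self-organizing map: each input vector pulls its
    best-matching node (and neighbors, Gaussian-weighted) toward itself."""
    weights = rng.random((n_nodes, data.shape[1]))
    idx = np.arange(n_nodes)
    for epoch in range(epochs):
        a = lr * (1.0 - epoch / epochs)        # decaying learning rate
        for x in data:
            bmu = int(np.argmin(np.sum((weights - x) ** 2, axis=1)))
            h = np.exp(-((idx - bmu) ** 2) / (2.0 * radius ** 2))
            weights += a * h[:, None] * (x - weights)
    return weights

def bmu_index(x, weights):
    return int(np.argmin(np.sum((weights - x) ** 2, axis=1)))

# Hypothetical oligonucleotide-frequency vectors from two "genomes",
# clustered near 0.2 and 0.8 in every dimension
genome_a = rng.normal(0.2, 0.02, size=(30, 8))
genome_b = rng.normal(0.8, 0.02, size=(30, 8))
weights = train_som(np.vstack([genome_a, genome_b]))

# Fragments from different genomes should map to different nodes
node_a = bmu_index(genome_a.mean(axis=0), weights)
node_b = bmu_index(genome_b.mean(axis=0), weights)
```

BLSOM replaces this sequential update with an order-independent batch update, which is what makes the result reproducible and suitable for parallel processing of massive sequence sets.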
Particle displacement tracking applied to air flows
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD-array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates the registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640 x 480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 s. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
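The pairing step of a PDT-style tracker can be approximated by nearest-neighbor matching of particle centroids between consecutive frames. The centroids, search radius, and uniform displacement below are hypothetical:

```python
import numpy as np

def track_displacements(frame1, frame2, max_disp=5.0):
    """Match each particle in frame 1 to its nearest neighbor in frame 2
    within max_disp pixels, returning displacement vectors (a simple
    nearest-neighbor stand-in for the PDT pairing step)."""
    vectors = []
    for p in frame1:
        d = np.linalg.norm(frame2 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            vectors.append(frame2[j] - p)
    return np.array(vectors)

# Hypothetical particle centroids (pixels); uniform flow of (+3, +1) px/frame
frame1 = np.array([[10.0, 10.0], [40.0, 25.0], [70.0, 50.0]])
frame2 = frame1 + np.array([3.0, 1.0])
vectors = track_displacements(frame1, frame2)
```

Dividing the displacement vectors by the interframe time and the optical magnification converts pixels per frame into physical velocities, and the sign of each vector gives the unambiguous flow direction the abstract highlights.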