Sample records for performance map applied

  1. Mining Specific and General Features in Both Positive and Negative Relevance Feedback. QUT E-Discovery Lab at the TREC 2009 Relevance Feedback Track

    DTIC Science & Technology

    2009-11-01

    relevance feedback algorithm. Four methods, εMap [1], MapA, P10A, and StatAP [2], were used in the track to measure the performance of Phase 2 runs...εMap and StatAP were applied to the runs using the testing set of only ClueWeb09 Category-B, whereas MapA and P10A were applied to those using the...whole ClueWeb09 English set. Because our experiments were based on only ClueWeb09 Category-B, measuring our performance by MapA and P10A might not

  2. Classification of fMRI resting-state maps using machine learning techniques: A comparative study

    NASA Astrophysics Data System (ADS)

    Gallos, Ioannis; Siettos, Constantinos

    2017-11-01

    We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls, based on fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial independent component analysis (ICA) to reduce (a) the noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support vector machines (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
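
    The pipeline described above, a low-dimensional embedding followed by a standard classifier, can be sketched compactly. The following Python fragment is a hypothetical illustration using scikit-learn; diffusion maps are omitted since scikit-learn ships no implementation, and the data shapes and parameter values are assumptions, not the study's settings.

    ```python
    # Hypothetical sketch: linear (PCA) vs. nonlinear (Isomap) embeddings
    # feeding SVM and k-NN classifiers; shapes and parameters are assumed.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import Isomap
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 190))    # 40 subjects; flattened upper triangle
                                      # of a 20 x 20 ICA cross-correlation matrix
    y = rng.integers(0, 2, size=40)   # synthetic patient/control labels

    for emb_name, embed in [("PCA", PCA(n_components=5)),
                            ("Isomap", Isomap(n_components=5, n_neighbors=8))]:
        Z = embed.fit_transform(X)    # low-dimensional embedding
        for clf_name, clf in [("SVM", SVC(kernel="rbf")),
                              ("k-NN", KNeighborsClassifier(n_neighbors=3))]:
            acc = cross_val_score(clf, Z, y, cv=5).mean()
            print(f"{emb_name} + {clf_name}: CV accuracy = {acc:.2f}")
    ```

    For brevity the embedding is fit on all subjects before cross-validation, which leaks information; the abstract does not detail the study's validation protocol.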

  3. A Multivariate Methodological Workflow for the Analysis of FTIR Chemical Mapping Applied on Historic Paint Stratigraphies

    PubMed Central

    Sciutto, Giorgia; Oliveri, Paolo; Catelli, Emilio; Bonacini, Irene

    2017-01-01

    In the field of applied research in heritage science, the use of multivariate approaches is still quite limited, and the chemometric results obtained are often underinterpreted. Within this scenario, the present paper aims at disseminating the use of suitable multivariate methodologies and proposes a procedural workflow, applied to a representative group of case studies of considerable importance for conservation purposes, as a sort of guideline on the processing and interpretation of FTIR data. Initially, principal component analysis (PCA) is performed and the score values are converted into chemical maps. Subsequently, the brushing approach is applied, demonstrating its usefulness for a deep understanding of the relationships between the multivariate map and the PC score space, as well as for the identification of the spectral bands mainly involved in the definition of each area localised within the score maps. PMID:29333162

  4. Detecting Corresponding Vertex Pairs between Planar Tessellation Datasets with Agglomerative Hierarchical Cell-Set Matching.

    PubMed

    Huh, Yong; Yu, Kiyun; Park, Woojin

    2016-01-01

    This paper proposes a method to detect corresponding vertex pairs between planar tessellation datasets. Applying agglomerative hierarchical co-clustering, the method finds geometrically corresponding cell-set pairs, from which corresponding vertex pairs are detected. The map transformation is then performed with the vertex pairs. Since these pairs are detected independently for each corresponding cell-set pair, the method delivers improved matching performance regardless of locally uneven positional discrepancies between the datasets. The proposed method was applied to complicated synthetic cell datasets assumed to be a cadastral map and a topographical map, and showed an improved result, with an F-measure of 0.84 compared to 0.48 for a previous matching method.

  5. Using remote sensing in support of environmental management: A framework for selecting products, algorithms and methods.

    PubMed

    de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn

    2016-11-01

    Traditionally, to map environmental features using remote sensing, practitioners use training data to develop models on various satellite data sets with a number of classification approaches, and use test data to select a single 'best performer' from which the final map is made. We instead use a combination of an omission/commission plot to evaluate the various results and a probability map compiled from models that perform consistently strongly across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate this approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 (used for its relatively fine spatial resolution) or Landsat 8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine classifiers, applied to raw spectral data, the first three PCA summaries of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e., combination of classification algorithm and data set), which agrees with the literature that classifier performance varies with data properties. This supports our suggestion that, rather than identifying a 'single best' model and basing the map on that result alone, a probability map built from the range of consistently top-performing models provides a rigorous solution to environmental mapping. Copyright © 2016 Elsevier Ltd. All rights reserved.
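
    The probability-map idea above reduces to per-pixel agreement among the consistently top-performing models. A minimal sketch, with synthetic maps and an illustrative 0.8 threshold that are placeholders rather than values from the study:

    ```python
    # Minimal sketch of the probability-map idea: per-pixel agreement among
    # consistently strong models becomes the mapped probability. Maps and the
    # 0.8 threshold are synthetic placeholders, not values from the study.
    import numpy as np

    rng = np.random.default_rng(1)
    # Binary presence/absence maps from five consistently top-performing models.
    model_maps = [rng.integers(0, 2, size=(100, 100)) for _ in range(5)]

    probability_map = np.mean(model_maps, axis=0)   # fraction of models agreeing
    high_confidence = probability_map >= 0.8        # e.g., >= 4 of 5 models agree
    print(high_confidence.mean())
    ```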

  6. Hyperspectral feature mapping classification based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli

    2016-03-01

    This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers, and the spectral feature mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm and the binary encoding mapping algorithm. The experimental results show that the proposed algorithm performs better than the other algorithms under the same conditions and has higher classification accuracy.

  7. Intra-operative multi-site stimulation: Expanding methodology for cortical brain mapping of language functions

    PubMed Central

    Korn, Akiva; Kirschner, Adi; Perry, Daniella; Hendler, Talma; Ram, Zvi

    2017-01-01

    Direct cortical stimulation (DCS) is considered the gold-standard for functional cortical mapping during awake surgery for brain tumor resection. DCS is performed by stimulating one local cortical area at a time. We present a feasibility study using an intra-operative technique aimed at improving our ability to map brain functions which rely on activity in distributed cortical regions. Following standard DCS, Multi-Site Stimulation (MSS) was performed in 15 patients by applying simultaneous cortical stimulations at multiple locations. Language functioning was chosen as a case-cognitive domain due to its relatively well-known cortical organization. MSS, performed at sites that did not produce disruption when applied in a single stimulation point, revealed additional language dysfunction in 73% of the patients. Functional regions identified by this technique were presumed to be significant to language circuitry and were spared during surgery. No new neurological deficits were observed in any of the patients following surgery. Though the neuro-electrical effects of MSS need further investigation, this feasibility study may provide a first step towards sophistication of intra-operative cortical mapping. PMID:28700619

  8. Intra-operative multi-site stimulation: Expanding methodology for cortical brain mapping of language functions.

    PubMed

    Gonen, Tal; Gazit, Tomer; Korn, Akiva; Kirschner, Adi; Perry, Daniella; Hendler, Talma; Ram, Zvi

    2017-01-01

    Direct cortical stimulation (DCS) is considered the gold-standard for functional cortical mapping during awake surgery for brain tumor resection. DCS is performed by stimulating one local cortical area at a time. We present a feasibility study using an intra-operative technique aimed at improving our ability to map brain functions which rely on activity in distributed cortical regions. Following standard DCS, Multi-Site Stimulation (MSS) was performed in 15 patients by applying simultaneous cortical stimulations at multiple locations. Language functioning was chosen as a case-cognitive domain due to its relatively well-known cortical organization. MSS, performed at sites that did not produce disruption when applied in a single stimulation point, revealed additional language dysfunction in 73% of the patients. Functional regions identified by this technique were presumed to be significant to language circuitry and were spared during surgery. No new neurological deficits were observed in any of the patients following surgery. Though the neuro-electrical effects of MSS need further investigation, this feasibility study may provide a first step towards sophistication of intra-operative cortical mapping.

  9. Evaluation of a PCR assay on overgrown environmental samples cultured for Mycobacterium avium subsp. paratuberculosis.

    PubMed

    Arango-Sabogal, Juan C; Labrecque, Olivia; Paré, Julie; Fairbrother, Julie-Hélène; Roy, Jean-Philippe; Wellemans, Vincent; Fecteau, Gilles

    2016-11-01

    Culture of Mycobacterium avium subsp. paratuberculosis (MAP) is the definitive antemortem test method for paratuberculosis. Microbial overgrowth is a challenge for MAP culture, as it complicates, delays, and increases the cost of the process. Additionally, herd status determination is impeded when noninterpretable (NI) results are obtained. The performance of PCR is comparable to that of fecal culture, so it may serve as a complementary detection tool to classify NI samples. Our study aimed to determine whether MAP DNA can be identified by PCR performed on NI environmental samples and to evaluate the performance of PCR before and after the culture of these samples in liquid media. A total of 154 environmental samples (62 NI, 62 negative, and 30 positive) were analyzed by PCR before being incubated in an automated system. Growth was confirmed by acid-fast bacilli stain, and the same PCR method was then applied to the incubated samples, regardless of culture and stain results. Change in MAP DNA after incubation was assessed by converting the PCR quantification cycle (Cq) values into fold change using the 2^(-ΔCq) method (ΔCq = Cq after culture - Cq before culture). A total of 1.6% (standard error [SE] = 1.6) of the NI environmental samples had detectable MAP DNA. The PCR had a significantly better performance when applied after culture than before culture (p = 0.004). After culture, a 66-fold change (SE = 17.1) in MAP DNA was observed on average. Performing PCR on NI samples thus improves the yield of MAP culturing. The PCR method used in our study is a reliable and consistent method to classify NI environmental samples. © 2016 The Author(s).
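
    For concreteness, the fold-change arithmetic works as follows; the Cq values below are illustrative, not taken from the study:

    ```python
    # Worked example of the 2^(-ΔCq) fold-change calculation (illustrative values).
    cq_before = 32.0                    # Cq before liquid culture
    cq_after = 26.0                     # Cq after liquid culture
    delta_cq = cq_after - cq_before     # ΔCq = -6.0
    fold_change = 2 ** (-delta_cq)      # 2^6 = 64-fold increase in MAP DNA
    print(fold_change)                  # the ~66-fold average reported above
                                        # corresponds to ΔCq ≈ -6.04
    ```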

  10. Planning or something else? Examining neuropsychological predictors of Zoo Map performance.

    PubMed

    Oosterman, Joukje M; Wijers, Marijn; Kessels, Roy P C

    2013-01-01

    The Zoo Map Test of the Behavioral Assessment of the Dysexecutive Syndrome battery is often applied to measure planning ability as part of executive function. Successful performance on this test, however, depends on various cognitive functions; deficient Zoo Map performance therefore does not necessarily imply selectively disrupted planning ability. To address this important issue, we examined whether planning is still the most important predictor of Zoo Map performance in a heterogeneous sample of neurologic and psychiatric outpatients (N = 71). In addition to the Zoo Map Test, the patients completed other neuropsychological tests of planning, inhibition, processing speed, and episodic memory. Planning was the strongest predictor of the total raw score and of inappropriate places visited, and no additional contribution of other cognitive scores was found. One exception was the total time, which was associated with processing speed. Overall, our findings indicate that the Zoo Map Test is a valid indicator of planning ability in a heterogeneous patient sample.

  11. Dynamic Assessment of EFL Learners' Listening Comprehension via Computerized Concept Mapping

    ERIC Educational Resources Information Center

    Ebadi, Saman; Latif, Shokoufeh Vakili

    2015-01-01

    In Vygotsky's theory, a learner's Zone of Proximal Development (ZPD) and autonomous performance can be further developed through social interaction with an expert. Computerized concept mapping enjoys the advantage of meeting learners' differences and can therefore be applied as a scaffold to support the learning process. Taking a dynamic assessment…

  12. Imaging and elemental mapping of biological specimens with a dual-EDS dedicated scanning transmission electron microscope

    PubMed Central

    Wu, J.S.; Kim, A. M.; Bleher, R.; Myers, B.D.; Marvin, R. G.; Inada, H.; Nakamura, K.; Zhang, X.F.; Roth, E.; Li, S.Y.; Woodruff, T. K.; O'Halloran, T. V.; Dravid, Vinayak P.

    2013-01-01

    A dedicated analytical scanning transmission electron microscope (STEM) with dual energy dispersive spectroscopy (EDS) detectors has been designed for complementary high performance imaging as well as high sensitivity elemental analysis and mapping of biological structures. The performance of this new design, based on a Hitachi HD-2300A model, was evaluated using a variety of biological specimens. With three imaging detectors, both the surface and internal structure of cells can be examined simultaneously. The whole-cell elemental mapping, especially of heavier metal species that have low cross-section for electron energy loss spectroscopy (EELS), can be faithfully obtained. Optimization of STEM imaging conditions is applied to thick sections as well as thin sections of biological cells under low-dose conditions at room- and cryogenic temperatures. Such multimodal capabilities applied to soft/biological structures usher a new era for analytical studies in biological systems. PMID:23500508

  13. A Voxel-by-Voxel Comparison of Deformable Vector Fields Obtained by Three Deformable Image Registration Algorithms Applied to 4DCT Lung Studies.

    PubMed

    Fatyga, Mirek; Dogan, Nesrin; Weiss, Elizabeth; Sleeman, William C; Zhang, Baoshe; Lehman, William J; Williamson, Jeffrey F; Wijesooriya, Krishni; Christensen, Gary E

    2015-01-01

    Commonly used methods of assessing the accuracy of deformable image registration (DIR) rely on image segmentation or landmark selection. These methods are very labor intensive and are thus limited to a relatively small number of image pairs. Direct voxel-by-voxel comparison, by contrast, can be automated to examine fluctuations in DIR quality on a long series of image pairs. A voxel-by-voxel comparison of three DIR algorithms applied to lung patients is presented. Registrations are compared by comparing volume histograms formed both with individual DIR maps and with a voxel-by-voxel subtraction of the two maps. When two DIR maps agree, one concludes that both maps are interchangeable in treatment planning applications, though one cannot conclude that either one agrees with the ground truth. If two DIR maps significantly disagree, one concludes that at least one of the maps deviates from the ground truth. We use the method to compare three DIR algorithms applied to peak inhale-peak exhale registrations of 4DFBCT data obtained from 13 patients. All three algorithms appear to be nearly equivalent when compared using DICE similarity coefficients. A comparison based on Jacobian volume histograms shows that all three algorithms measure changes in total lung volume with reasonable accuracy, but show large differences in the variance of the Jacobian distribution on contoured structures. Analysis of the voxel-by-voxel subtraction of DIR maps shows differences between algorithms that exceed a centimeter for some registrations. Deformation maps produced by DIR algorithms must be treated as mathematical approximations of physical tissue deformation that are not self-consistent and may thus be useful only in applications for which they have been specifically validated. The three algorithms tested in this work perform fairly robustly for the task of contour propagation, but produce potentially unreliable results for the task of DVH accumulation or measurement of local volume change. Because the performance of DIR algorithms varies significantly from one image pair to the next, validation efforts that are exhaustive but performed on a small number of image pairs may not reflect the performance of the same algorithm in practical clinical situations. Such efforts should be supplemented by validation based on a longer series of images of clinical quality.
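
    The voxel-by-voxel machinery above reduces to two simple computations: the magnitude of the difference between two deformation vector fields, and the Jacobian determinant of each field. A hedged numpy sketch on synthetic fields, assuming unit (voxel-sized) spacing:

    ```python
    # Hedged numpy sketch: voxel-wise disagreement between two deformation
    # vector fields (DVFs) and the Jacobian determinant of one field.
    # Fields are synthetic; unit (voxel-sized) spacing is assumed.
    import numpy as np

    shape = (64, 64, 64)
    rng = np.random.default_rng(0)
    dvf_a = rng.normal(0.0, 0.5, size=shape + (3,))   # algorithm A (mm)
    dvf_b = rng.normal(0.0, 0.5, size=shape + (3,))   # algorithm B (mm)

    # Voxel-wise magnitude of the difference between the two maps.
    diff_mag = np.linalg.norm(dvf_a - dvf_b, axis=-1)
    print("fraction of voxels disagreeing by >1 mm:", (diff_mag > 1.0).mean())

    # Jacobian determinant of x -> x + u(x): local volume-change ratio.
    grads = [np.gradient(dvf_a[..., i]) for i in range(3)]   # du_i/dx_j
    jac = np.zeros(shape + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j]
    jac += np.eye(3)                                  # identity from the x term
    jac_det = np.linalg.det(jac)
    hist, edges = np.histogram(jac_det, bins=50)      # Jacobian volume histogram
    ```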

  14. Towards the Optimal Pixel Size of dem for Automatic Mapping of Landslide Areas

    NASA Astrophysics Data System (ADS)

    Pawłuszek, K.; Borkowski, A.; Tarolli, P.

    2017-05-01

    Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution, and various DEM resolutions can be applicable for diverse landslide applications. This study therefore aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely a feed-forward neural network (FFNN) and maximum likelihood classification (ML), was applied, which additionally allowed us to determine the impact of the classification method on the selection of DEM resolution. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing the confusion matrix. The results of this study suggest that the finest DEM scale is not always the best fit, although working at 1 m DEM resolution on the micro-topography scale can show different results. The best performance was found using 5 m DEM resolution for FFNN and 1 m DEM resolution for ML classification.

  15. Mapping edge-based traffic measurements onto the internal links in MPLS network

    NASA Astrophysics Data System (ADS)

    Zhao, Guofeng; Tang, Hong; Zhang, Yi

    2004-09-01

    Applying multi-protocol label switching (MPLS) techniques to IP-based backbones for traffic engineering goals has proven advantageous. Obtaining the volume of load on each internal link of the network is crucial for applying traffic engineering. Though measurements can be collected for each link, for example with a traditional SNMP scheme, that approach may impose a heavy processing load and sharply degrade the throughput of the core routers. Monitoring only at the edge of the network and mapping the measurements onto the core therefore provides a good alternative. In this paper, we explore a scheme for traffic mapping with edge-based measurements in an MPLS network, in which the volume of traffic on each internal link over the domain is inferred from measurements available only at ingress nodes. We apply path-based measurements at ingress nodes without enabling measurements in the core of the network, and we propose a method that can infer a path from the ingress to the egress node using the label distribution protocol without collecting routing data from core routers. Based on flow theory and queuing theory, we prove that our approach is effective and present the algorithm for traffic mapping. We also show performance simulation results that indicate the potential of our approach.

  16. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
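
    The joint entropy (JE) similarity at the core of both priors can be illustrated in a few lines. The sketch below computes only the plain intensity-based JE on a joint histogram, not the paper's wavelet-subband (WJE) extension, and the images are synthetic stand-ins:

    ```python
    # Minimal sketch of the joint entropy (JE) similarity used as the prior:
    # JE from the joint intensity histogram of a PET estimate and an MR image.
    # Intensity-only JE, not the wavelet-subband (WJE) variant; synthetic images.
    import numpy as np

    rng = np.random.default_rng(0)
    pet = rng.random((128, 128))
    mri = pet + 0.1 * rng.random((128, 128))   # anatomically correlated image

    hist2d, _, _ = np.histogram2d(pet.ravel(), mri.ravel(), bins=32)
    p = hist2d / hist2d.sum()       # joint probability estimate
    p = p[p > 0]                    # drop empty bins before taking the log
    joint_entropy = -np.sum(p * np.log(p))
    print(joint_entropy)            # lower JE = stronger anato-functional match
    ```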

  17. Study of Abrasive Wear Volume Map for PTFE and PTFE Composites

    NASA Astrophysics Data System (ADS)

    Unal, H.; Sen, U.; Mimaroglu, A.

    2007-11-01

    The potential of this work lies in the use of wear volume maps for evaluating the abrasive wear performance of polytetrafluoroethylene (PTFE) and PTFE composites. The fillers used in the composites are 25% bronze, 35% graphite and 17% glass fibre (GFR). The influence of filler material, abrasion surface roughness and applied load on the abrasive wear performance of PTFE and PTFE composites was studied and evaluated. Experimental abrasive wear tests were carried out at atmospheric conditions on a pin-on-disc wear tribometer. Tests were performed under load values of 4, 6, 8 and 10 N, a travelling speed of 1 m/s and abrasion surface roughness values of 5, 20 and 45 µm. Wear volume maps were obtained, and the results showed that the lowest wear volume rate for PTFE is reached using the GFR filler. Furthermore, the results also showed that the higher the applied load and the roughness of the abrasion surface, the higher the wear rate. Finally, it is concluded that the abrasive wear process includes ploughing and cutting mechanisms.

  18. Mapping Applications Center, National Mapping Division, U.S. Geological Survey

    USGS Publications Warehouse

    ,

    1996-01-01

    The Mapping Applications Center (MAC), National Mapping Division (NMD), is the eastern regional center for coordinating the production, distribution, and sale of maps and digital products of the U.S. Geological Survey (USGS). It is located in the John Wesley Powell Federal Building in Reston, Va. The MAC's major functions are to (1) establish and manage cooperative mapping programs with State and Federal agencies; (2) perform new research in preparing and applying geospatial information; (3) prepare digital cartographic data, special purpose maps, and standard maps from traditional and classified source materials; (4) maintain the domestic names program of the United States; (5) manage the National Aerial Photography Program (NAPP); (6) coordinate the NMD's publications and outreach programs; and (7) direct the USGS map printing operations.

  19. A real time QRS detection using delay-coordinate mapping for the microcontroller implementation.

    PubMed

    Lee, Jeong-Whan; Kim, Kyeong-Seop; Lee, Bongsoo; Lee, Byungchae; Lee, Myoung-Ho

    2002-01-01

    In this article, we propose a new algorithm for real-time detection of QRS complexes in ECG signals that uses the characteristics of phase portraits reconstructed by delay-coordinate mapping utilizing lag rotundity. In reconstructing the phase portrait, the mapping parameters, time delay and mapping dimension, play important roles in shaping the portraits drawn in the new embedding space. Experimentally, the optimal mapping time delay for detection of QRS complexes turned out to be 20 ms. To explore the meaning of this time delay and the proper mapping dimension, we applied the fill factor, mutual information, and autocorrelation function algorithms that are generally used to analyze the chaotic characteristics of sampled signals. From these results, we found that the performance of our proposed algorithm relies mainly on a geometrical property, namely the area of the reconstructed phase portrait. As a real application, we used our algorithm in the design of a small cardiac event recorder that records patients' ECG and R-R intervals for 1 h, to investigate the HRV characteristics of patients with vasovagal syncope symptoms. For evaluation, we implemented our algorithm in C and applied it to the MIT/BIH arrhythmia database of 48 subjects. Our proposed algorithm achieved a 99.58% detection rate of QRS complexes.
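
    A hedged sketch of the delay-coordinate idea: embed the ECG with the 20 ms lag reported above and use the spread of the 2D phase portrait in a sliding window as a QRS feature. The signal, window length, and threshold below are illustrative, not the paper's exact design:

    ```python
    # Hedged sketch: delay-coordinate embedding of an ECG with a 20 ms lag;
    # the extent of the 2D phase portrait in a sliding window serves as a
    # QRS feature. Signal, window, and threshold are illustrative only.
    import numpy as np

    fs = 250                        # sampling rate (Hz)
    lag = int(0.020 * fs)           # 20 ms delay, as reported above
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t) ** 63   # crude spiky stand-in for QRS

    x, y = ecg[:-lag], ecg[lag:]    # 2D reconstructed phase portrait

    win = int(0.12 * fs)            # ~120 ms analysis window
    feature = np.array([np.ptp(x[i:i + win]) * np.ptp(y[i:i + win])
                        for i in range(len(x) - win)])  # bounding-box area proxy
    qrs_candidates = np.where(feature > 0.5 * feature.max())[0]
    ```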

  20. High-resolution geological mapping at 3D Environments: A case study from the fold-and-thrust belt in northern Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Y. C.; Shih, N. C.; Hsieh, Y. C.

    2016-12-01

    Geologic maps provide fundamental information for many scientific and engineering applications, and they directly influence the reliability of research results and the robustness of engineering projects. In the past, geologic maps were mainly produced by field geologists through direct field investigations and 2D topographic maps. However, the quality of traditional geologic maps was significantly compromised by field conditions, particularly when the map area is covered by heavy forest canopies. Recent developments in airborne LiDAR technology can virtually remove trees or buildings, thus providing a useful data set for improving geological mapping. Because high-quality topographic information still needs to be interpreted in terms of geology, many fundamental questions remain regarding how best to apply such data sets to high-resolution geological mapping. In this study, we test the quality and reliability of high-resolution geologic maps produced by recent technological methods through an example from the fold-and-thrust belt in northern Taiwan. We performed the geological mapping by applying the LiDAR-derived DEM, self-developed program tools and many layers of relevant information in interactive 3D environments. Our mapping results indicate that the proposed methods considerably improve the quality and consistency of the geologic maps. The study also shows that, to obtain consistent mapping results, future high-resolution geologic maps should be produced in interactive 3D environments on the basis of existing geologic maps.

  1. Imaging and elemental mapping of biological specimens with a dual-EDS dedicated scanning transmission electron microscope.

    PubMed

    Wu, J S; Kim, A M; Bleher, R; Myers, B D; Marvin, R G; Inada, H; Nakamura, K; Zhang, X F; Roth, E; Li, S Y; Woodruff, T K; O'Halloran, T V; Dravid, Vinayak P

    2013-05-01

    A dedicated analytical scanning transmission electron microscope (STEM) with dual energy dispersive spectroscopy (EDS) detectors has been designed for complementary high performance imaging as well as high sensitivity elemental analysis and mapping of biological structures. The performance of this new design, based on a Hitachi HD-2300A model, was evaluated using a variety of biological specimens. With three imaging detectors, both the surface and internal structure of cells can be examined simultaneously. The whole-cell elemental mapping, especially of heavier metal species that have low cross-section for electron energy loss spectroscopy (EELS), can be faithfully obtained. Optimization of STEM imaging conditions is applied to thick sections as well as thin sections of biological cells under low-dose conditions at room and cryogenic temperatures. Such multimodal capabilities applied to soft/biological structures usher a new era for analytical studies in biological systems. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Regional Geological Mapping in the Graham Land of Antarctic Peninsula Using LANDSAT-8 Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Pour, A. B.; Hashim, M.; Park, Y.

    2017-10-01

    Geological investigations in Antarctica confront many difficulties due to its remoteness and extreme environmental conditions. In this study, the applications of Landsat-8 data were investigated to extract geological information for lithological and alteration mineral mapping in poorly exposed lithologies in inaccessible domains such as Antarctica. The north-eastern Graham Land, Antarctic Peninsula (AP), was selected for conducting a satellite-based remote sensing mapping technique. The Continuum Removal (CR) spectral mapping tool and Independent Component Analysis (ICA) were applied to Landsat-8 spectral bands to map poorly exposed lithologies at regional scale. Pixels composed of distinctive absorption features of alteration mineral assemblages associated with poorly exposed lithological units were detected by applying the CR mapping tool to the VNIR and SWIR bands of Landsat-8. Pixels related to Si-O bond emission minima features were identified by applying the CR mapping tool to the TIR bands in poorly mapped and unmapped zones in north-eastern Graham Land at regional scale. Anomaly pixels in the ICA image maps related to spectral features of Al-O-H, Fe, Mg-O-H and CO3 groups, with well-constrained lithological attributions from felsic to mafic rocks, were detected using the VNIR, SWIR and TIR datasets of Landsat-8. The approach used in this study performed very well for lithological and alteration mineral mapping with little available geological data or without prior information about the study region.

  3. Spectral Unmixing Based Construction of Lunar Mineral Abundance Maps

    NASA Astrophysics Data System (ADS)

    Bernhardt, V.; Grumpe, A.; Wöhler, C.

    2017-07-01

    In this study we apply a nonlinear spectral unmixing algorithm to a nearly global lunar spectral reflectance mosaic derived from hyper-spectral image data acquired by the Moon Mineralogy Mapper (M3) instrument. Corrections for topographic effects and for thermal emission were performed. A set of 19 laboratory-based reflectance spectra of lunar samples published by the Lunar Soil Characterization Consortium (LSCC) were used as a catalog of potential endmember spectra. For a given spectrum, the multi-population population-based incremental learning (MPBIL) algorithm was used to determine the subset of endmembers actually contained in it. However, as the MPBIL algorithm is computationally expensive, it cannot be applied to all pixels of the reflectance mosaic. Hence, the reflectance mosaic was clustered into a set of 64 prototype spectra, and the MPBIL algorithm was applied to each prototype spectrum. Each pixel of the mosaic was assigned to the most similar prototype, and the set of endmembers previously determined for that prototype was used for pixel-wise nonlinear spectral unmixing using the Hapke model, implemented as linear unmixing of the single-scattering albedo spectrum. This procedure yields maps of the fractional abundances of the 19 endmembers. Based on the known modal abundances of a variety of mineral species in the LSCC samples, a conversion from endmember abundances to mineral abundances was performed. We present maps of the fractional abundances of plagioclase, pyroxene and olivine and compare our results with previously published lunar mineral abundance maps.
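
    The per-pixel unmixing step can be approximated as non-negative least squares on single-scattering albedo spectra. The sketch below assumes the reflectance-to-albedo (Hapke) conversion has already been performed and uses random stand-ins for the 19 LSCC endmember spectra:

    ```python
    # Hedged sketch: per-pixel abundance estimation by non-negative least
    # squares on single-scattering albedo spectra. The Hapke conversion from
    # reflectance to albedo is omitted; endmember spectra are random stand-ins.
    import numpy as np
    from scipy.optimize import nnls

    n_bands, n_endmembers = 85, 19
    rng = np.random.default_rng(0)
    E = rng.random((n_bands, n_endmembers))   # columns: endmember albedo spectra
    true_f = np.array([0.6, 0.3, 0.1] + [0.0] * 16)
    pixel = E @ true_f                        # simulated observed albedo spectrum

    fractions, _ = nnls(E, pixel)             # non-negative endmember abundances
    fractions /= fractions.sum()              # normalize to sum to one
    print(np.round(fractions[:3], 3))         # ~[0.6, 0.3, 0.1]
    ```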

  4. Situation Venice: Towards a Performative "Ex-Planation" of a City

    ERIC Educational Resources Information Center

    Whybrow, Nicolas

    2011-01-01

    The article's main concern is to analyse theoretical and artistic factors influencing the attempt by a group of undergraduate students (at the University of Warwick, UK) to produce a "performative mapping" of the city of Venice. In other words, it asks what kind of performance-based strategies might usefully be applied in the process of…

  5. Fluorescent in situ hybridisation to amphioxus chromosomes.

    PubMed

    Castro, Luis Filipe Costa; Holland, Peter William Harold

    2002-12-01

    We describe an efficient protocol for mapping genes and other DNA sequences to amphioxus chromosomes using fluorescent in situ hybridisation. We apply this method to identify the number and location of ribosomal DNA gene clusters and telomere sequences in metaphase spreads of Branchiostoma floridae. We also describe how the locations of two single copy genes can be mapped relative to each other, and demonstrate this by mapping an amphioxus Pax gene relative to a homologue of the Notch gene. These methods have great potential for performing comparative genomics between amphioxus and vertebrates.

  6. Chaotic map clustering algorithm for EEG analysis

    NASA Astrophysics Data System (ADS)

    Bellotti, R.; De Carlo, F.; Stramaglia, S.

    2004-03-01

    The non-parametric chaotic map clustering algorithm has been applied to the analysis of electroencephalographic signals in order to recognize Huntington's disease, one of the most dangerous pathologies of the central nervous system. The performance of the method has been compared with those obtained through parametric algorithms, such as K-means and deterministic annealing, and a supervised multi-layer perceptron. While supervised neural networks need a training phase performed by means of data tagged by the genetic test, and the parametric methods require a prior choice of the number of classes to find, chaotic map clustering gives natural evidence of the pathological class, without any training or supervision, thus providing a new efficient methodology for the recognition of patterns affected by Huntington's disease.

  7. Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2016-06-01

    Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of the resulting landslide susceptibility maps, with particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km²), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results, while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that expressed geomorphically implausible relationships, indicating that the predictive performance of a model might be misleading when a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces, while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performance cannot be equated with high plausibility and applicability of the resulting landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
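
    The contrast the study exploits between random holdout and spatial cross-validation can be sketched as follows; the data are synthetic, and with real, spatially autocorrelated inventories the spatial estimate would typically come out lower than the random one:

    ```python
    # Synthetic sketch: random k-fold CV vs. spatial cross-validation
    # (GroupKFold over spatial blocks). Data and block ids are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))            # terrain predictors
    y = rng.integers(0, 2, size=1000)         # landslide presence/absence
    blocks = rng.integers(0, 10, size=1000)   # spatial block id per sample

    model = LogisticRegression(max_iter=1000)
    auc_random = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    auc_spatial = cross_val_score(model, X, y, scoring="roc_auc",
                                  cv=GroupKFold(n_splits=5), groups=blocks).mean()
    print(f"random CV AUROC:  {auc_random:.2f}")
    print(f"spatial CV AUROC: {auc_spatial:.2f}")
    ```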

  8. Design and application of star map simulation system for star sensors

    NASA Astrophysics Data System (ADS)

    Wu, Feng; Shen, Weimin; Zhu, Xifang; Chen, Yuheng; Xu, Qinquan

    2013-12-01

    Modern star sensors measure attitude automatically, helping to assure the reliable performance of spacecraft. They achieve very accurate attitudes by applying algorithms that process star maps obtained by the star camera mounted on them. Star maps therefore play an important role in designing star cameras and developing processing algorithms. Furthermore, star maps provide significant support for examining the performance of star sensors thoroughly before launch. However, it is not always convenient to supply abundant star maps by taking pictures of the sky; thus, star map simulation with the aid of a computer attracts much interest by virtue of its low cost and convenience. A method is proposed to simulate star maps by programming and extending the functionality of the optical design program ZEMAX, and a star map simulation system is established. First, based on an analysis of the working procedures star sensors use to measure attitude and the basic method of designing optical systems in ZEMAX, the principle of simulating star sensor imaging is given in detail. The theory for adding false stars and noise and for outputting maps is discussed, and the corresponding approaches are proposed. Then, by external programming, the star map simulation program is designed and produced, and its user interface and operation are introduced. Applications of the star map simulation method in evaluating the optical system, the star image extraction algorithm and the star identification algorithm, and in calibrating system errors, are presented. The proposed simulation method was shown to provide strong support for the study of star sensors and to improve their performance efficiently.

  9. LPmerge: an R package for merging genetic maps by linear programming.

    PubMed

    Endelman, Jeffrey B; Plomion, Christophe

    2014-06-01

    Consensus genetic maps constructed from multiple populations are an important resource for both basic and applied research, including genome-wide association analysis, genome sequence assembly and studies of evolution. The LPmerge software uses linear programming to efficiently minimize the mean absolute error between the consensus map and the linkage maps from each population. This minimization is performed subject to linear inequality constraints that ensure the ordering of the markers in the linkage maps is preserved. When marker order is inconsistent between linkage maps, a minimum set of ordinal constraints is deleted to resolve the conflicts. LPmerge is on CRAN at http://cran.r-project.org/web/packages/LPmerge. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
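
    A toy re-creation of the underlying optimization, not the package's actual formulation: minimize the summed absolute error between consensus positions and each population's map, subject to preserving within-map marker order. Conflict resolution and per-map scaling are omitted, and the marker data are invented:

    ```python
    # Toy linear program in the spirit of LPmerge (not its actual formulation):
    # choose consensus positions x minimizing the summed absolute error to each
    # population's map while preserving within-map marker order. Data invented.
    import numpy as np
    from scipy.optimize import linprog

    maps = [{"m1": 0.0, "m2": 5.0, "m3": 12.0},
            {"m1": 0.0, "m2": 6.5, "m4": 10.0},
            {"m2": 4.0, "m3": 11.0, "m4": 15.0}]
    markers = sorted({m for lm in maps for m in lm})
    idx = {m: i for i, m in enumerate(markers)}
    obs = [(idx[m], p) for lm in maps for m, p in lm.items()]

    n, k = len(markers), len(obs)     # variables: positions x (n), errors e (k)
    c = np.concatenate([np.zeros(n), np.ones(k)])   # minimize sum of errors

    A_ub, b_ub = [], []
    for r, (j, p) in enumerate(obs):  # e_r >= |x_j - p| via two inequalities
        for sign in (+1.0, -1.0):
            row = np.zeros(n + k)
            row[j], row[n + r] = sign, -1.0
            A_ub.append(row)
            b_ub.append(sign * p)
    for lm in maps:                   # keep each map's marker order: x_a <= x_b
        order = sorted(lm, key=lm.get)
        for a, b in zip(order, order[1:]):
            row = np.zeros(n + k)
            row[idx[a]], row[idx[b]] = 1.0, -1.0
            A_ub.append(row)
            b_ub.append(0.0)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n + [(0, None)] * k, method="highs")
    print({m: round(res.x[idx[m]], 2) for m in markers})
    ```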

  10. Filtering Non-Linear Transfer Functions on Surfaces.

    PubMed

    Heitz, Eric; Nowrouzezahrai, Derek; Poulin, Pierre; Neyret, Fabrice

    2014-07-01

    Applying non-linear transfer functions and look-up tables to procedural functions (such as noise), surface attributes, or even surface geometry is a common strategy used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient transfer function filtering remains an open problem for several reasons: transfer functions are complex and non-linear, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on the fly, yielding very fast performance. We investigate the case where the transfer function to filter is a color map applied to (macroscale) surface textures (like noise), as well as color maps applied according to (microscale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color-mapped microsurface details, our filtering is view- and light-dependent and capable of correctly handling masking and occlusion effects. Our approach can be generalized to filter other physically based rendering quantities, and we propose an application to shading with irradiance environment maps over large terrains. Our framework is also compatible with the case of transfer functions used to warp surface geometry, as long as the transformations can be represented with Gaussian statistics, leading to proper view- and light-dependent filtering results. Our results match ground truth, and our solution is well suited to real-time applications: it requires only a few lines of shader code (provided in supplemental material, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TVCG.2013.102), is high performance, and has a negligible memory footprint.
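
    The central idea, filtering the transfer function through the footprint's Gaussian distribution rather than applying it to the footprint mean, can be shown in a few lines. The color map below is a toy stand-in, not the paper's shader code:

    ```python
    # Toy illustration: the filtered color is the expectation of the color map
    # under the Gaussian distribution of the underlying scalar over a pixel
    # footprint, not the color map applied to the mean scalar.
    import numpy as np

    def colormap(t):
        """Toy non-linear transfer function mapping a scalar to RGB."""
        t = np.clip(t, 0.0, 1.0)
        return np.stack([t ** 3, np.sin(np.pi * t), 1.0 - t], axis=-1)

    def filtered_color(mu, sigma, n=256):
        # Numerical expectation E[colormap(t)] with t ~ N(mu, sigma^2).
        t = np.linspace(mu - 4 * sigma, mu + 4 * sigma, n)
        w = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
        w /= w.sum()
        return (w[:, None] * colormap(t)).sum(axis=0)

    print(colormap(0.5))             # naive: map the mean scalar
    print(filtered_color(0.5, 0.3))  # filtered: mean of the mapped colors
    ```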

  11. Textural and Mineralogical Analysis of Volcanic Rocks by µ-XRF Mapping.

    PubMed

    Germinario, Luigi; Cossio, Roberto; Maritan, Lara; Borghi, Alessandro; Mazzoli, Claudio

    2016-06-01

    In this study, µ-XRF was applied as a novel surface technique for quick acquisition of elemental X-ray maps of rocks, image analysis of which provides quantitative information on texture and rock-forming minerals. Bench-top µ-XRF is cost-effective, fast, and non-destructive, can be applied to both large (up to a few tens of cm) and fragile samples, and yields major and trace element analysis with good sensitivity. Here, X-ray mapping was performed with a resolution of 103.5 µm and spot size of 30 µm over sample areas of about 5×4 cm of Euganean trachyte, a volcanic porphyritic rock from the Euganean Hills (NE Italy) traditionally used in cultural heritage. The relative abundance of phenocrysts and groundmass, as well as the size and shape of the various mineral phases, were obtained from image analysis of the elemental maps. The quantified petrographic features allowed identification of various extraction sites, revealing an objective method for archaeometric provenance studies exploiting µ-XRF imaging.

  12. Use of an Annular Silicon Drift Detector (SDD) Versus a Conventional SDD Makes Phase Mapping a Practical Solution for Rare Earth Mineral Characterization.

    PubMed

    Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald

    2018-06-04

    A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most of them are limited to global analysis due to their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at low accelerating voltage identify phases at the micron scale. In comparisons with a conventional SDD, the aSDD operated at optimized conditions makes phase mapping a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.

  13. Distortion correction of echo planar images applying the concept of finite rate of innovation to point spread function mapping (FRIP).

    PubMed

    Nunes, Rita G; Hajnal, Joseph V

    2018-06-01

    Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.

  14. Advanced Map For Real-Time Process Control

    NASA Astrophysics Data System (ADS)

    Shiobara, Yasuhisa; Matsudaira, Takayuki; Sashida, Yoshio; Chikuma, Makoto

    1987-10-01

    MAP, a communications protocol for factory automation proposed by General Motors [1], has been accepted by users throughout the world and is rapidly becoming a user standard. In fact, it is now a LAN standard for factory automation. MAP is intended to interconnect different devices, such as computers and programmable devices, made by different manufacturers, enabling them to exchange information. It is based on the OSI intercomputer communications protocol standard under development by the ISO. With progress and standardization, MAP is being investigated for application to process control fields other than factory automation [2]. The transmission response time of the network system and centralized management of data exchanged with various devices for distributed control are important in the case of real-time process control with programmable controllers, computers, and instruments connected to a LAN system. MAP/EPA and MINI MAP aim at reduced overhead in protocol processing and enhanced transmission response. If applied to real-time process control, a protocol based on point-to-point and request-response transactions limits throughput and transmission response. This paper describes an advanced MAP LAN system applied to real-time process control by adding a new data transmission control that performs multicasting communication voluntarily and periodically in the priority order of the data to be exchanged.

  15. HARMONIC IN-PAINTING OF COSMIC MICROWAVE BACKGROUND SKY BY CONSTRAINED GAUSSIAN REALIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jaiseung; Naselsky, Pavel; Mandolesi, Nazzareno, E-mail: jkim@nbi.dk

    The presence of astrophysical emissions between the last scattering surface and our vantage point requires us to apply a foreground mask to cosmic microwave background (CMB) sky maps, leading to large cuts around the Galactic equator and numerous holes. Since many CMB analyses, in particular on the largest angular scales, may be performed on a whole-sky map in a more straightforward and reliable manner, it is of utmost importance to develop an efficient method to fill in the masked pixels in a way compliant with the expected statistical properties and the unmasked pixels. In this Letter, we consider Monte Carlo simulation of a constrained Gaussian field and derive it for CMB anisotropy in harmonic space, where a feasible implementation is possible with good approximation. We applied our method to simulated data, which shows that our method produces a plausible whole-sky map, given the unmasked pixels and a theoretical expectation. Subsequently, we applied our method to the Wilkinson Microwave Anisotropy Probe foreground-reduced maps and investigated the anomalous alignment between quadrupole and octupole components. From our investigation, we find that the alignment in the foreground-reduced maps is even higher than in the Internal Linear Combination map. We also find that the V-band map has higher alignment than other bands, despite the expectation that the V-band map has less foreground contamination. Therefore, we find it hard to attribute the alignment to residual foregrounds. Our method is complementary to other efforts on in-painting or reconstructing the masked CMB data, and of great use to the Planck surveyor and future missions.

  16. On the efficiency of the image encryption and decryption by using logistic-sine chaotic system and logistic-tent chaotic system

    NASA Astrophysics Data System (ADS)

    Chiun, Lee Chia; Mandangan, Arif; Daud, Muhamad Azlan; Hussin, Che Haziqah Che

    2017-04-01

    We may secure the content of text, audio, images and video during their transmission from one party to another via an open channel such as the internet by using cryptography. The Logistic-Sine System (LSS) is a combination of two 1D chaotic maps, the Logistic Map and the Sine Map. By applying the LSS in cryptography, image encryption and decryption can be performed. This study focuses on performance testing of the image encryption and decryption processes using the LSS. For comparison, we evaluate encryption and decryption with two different chaotic systems, the LSS and the Logistic-Tent System (LTS). The results show that the system with LSS is less efficient than LTS in terms of encryption time, but both systems have similar efficiency in terms of decryption time.
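
    A brief sketch of LSS-based encryption: iterate the map to produce a byte keystream and XOR it with the image. The LSS form used here, x_{n+1} = (r·x(1-x) + (4-r)·sin(πx)/4) mod 1, follows the common definition in the chaotic-map literature, and the seed and parameter values are illustrative:

    ```python
    # Sketch of LSS keystream generation and XOR image encryption. The LSS form
    # x_{n+1} = (r*x*(1-x) + (4-r)*sin(pi*x)/4) mod 1 follows the common
    # definition in the literature; seed and parameter are illustrative.
    import numpy as np

    def lss_keystream(x0, r, n):
        x, out = x0, np.empty(n)
        for i in range(n):
            x = (r * x * (1.0 - x) + (4.0 - r) * np.sin(np.pi * x) / 4.0) % 1.0
            out[i] = x
        return (out * 256).astype(np.uint8)   # quantize to a byte keystream

    image = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
    key = lss_keystream(x0=0.37, r=3.99, n=image.size).reshape(image.shape)

    cipher = image ^ key    # encryption
    plain = cipher ^ key    # decryption: XOR is its own inverse
    assert np.array_equal(plain, image)
    ```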

  17. Brain structural changes following adaptive cognitive training assessed by Tensor-Based Morphometry (TBM)

    PubMed Central

    Colom, Roberto; Hua, Xue; Martínez, Kenia; Burgaleta, Miguel; Román, Francisco J.; Gunter, Jeffrey L.; Carmona, Susanna; Jaeggi, Susanne M.; Thompson, Paul M.

    2016-01-01

    Tensor-Based Morphometry (TBM) allows the automatic mapping of brain changes across time by building 3D deformation maps. This technique has been applied for tracking brain degeneration in Alzheimer's and other neurodegenerative diseases with high sensitivity and reliability. Here we applied TBM to quantify changes in brain structure after participants completed a challenging adaptive cognitive training program based on the n-back task. Twenty-six young women completed twenty-four training sessions across twelve weeks and showed, on average, large cognitive improvements. High-resolution MRI scans were obtained before and after training. The computed longitudinal deformation maps were analyzed to answer three questions: (a) Are there differential brain structural changes in the training group as compared with a matched control group? (b) Are these changes related to performance differences in the training program? (c) Are standardized changes in a set of psychological factors (fluid and crystallized intelligence, working memory, and attention control), measured before and after training, related to structural changes in the brain? Results showed (a) greater structural changes for the training group in the temporal lobe, (b) a negative correlation between these changes and performance across training sessions (the greater the structural change, the lower the cognitive performance improvements), and (c) negligible effects regarding the psychological factors measured before and after training. PMID:27477628

  18. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    PubMed

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

    Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method that inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM); our new SR method is thus called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, used only one simple yet coarse linear mapping per patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experimental results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN15). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance, with on average 0.79 dB higher PSNR, and can be used for scale factors of 3 or higher.

  19. Simultaneous identification of natural dyes in the collection of drawings and maps from The Royal Chancellery Archives in Granada (Spain) by CE.

    PubMed

    López-Montes, Ana; Blanc García, Rosario; Espejo, Teresa; Huertas-Perez, José F; Navalón, Alberto; Vílchez, José Luis

    2007-04-01

    A simple and rapid capillary electrophoretic method with UV detection (CE-UV) has been developed for the identification of five natural dyes, namely carmine, indigo, saffron, gamboge and Rubia tinctoria root. The separation was performed in a fused-silica capillary of 64.5 cm length and 50 μm i.d. The running buffer was 40 mM sodium tetraborate buffer solution (pH 9.25). The applied potential was 30 kV, the temperature was 25 °C, and detection was performed at 196, 232, 252, 300 and 356 nm. Injections were performed under a pressure of 50 mbar for 13 s. The method was applied to the identification of carminic acid, gambogic acid, crocetin, indigotin, alizarin and purpurin in the collection of drawings and maps at the Royal Chancellery Archives in Granada (Spain). The method was validated by using HPLC as a reference method.

  20. Detection of Brain Reorganization in Pediatric Multiple Sclerosis Using Functional MRI

    DTIC Science & Technology

    2015-10-01

    accomplish this, we apply comparative assessments of fMRI mappings of language, memory, and motor function, and performance on clinical neurocognitive...community at a target rate of 13 volunteers per quarter period; acquire fMRI data for language, memory, and visual-motor functions (months 3-12). c...consensus fMRI activation maps for language, memory, and visual-motor tasks (months 8-12). f) Subtask 1f. Prepare publication to disseminate our

  1. The creation of digital thematic soil maps at the regional level (with the map of soil carbon pools in the Usa River basin as an example)

    NASA Astrophysics Data System (ADS)

    Pastukhov, A. V.; Kaverin, D. A.; Shchanov, V. M.

    2016-09-01

    A digital map of soil carbon pools was created for the forest-tundra ecotone in the Usa River basin with the use of ERDAS Imagine 2014 and ArcGIS 10.2 software. Supervised classification and thematic interpretation of satellite images and digital terrain models with the use of a georeferenced database on soil profiles were applied. Expert assessment of the natural diversity and representativeness of random samples for different soil groups was performed, and the minimal necessary size of the statistical sample was determined.

  2. Improving Terminology Mapping in Clinical Text with Context-Sensitive Spelling Correction.

    PubMed

    Dziadek, Juliusz; Henriksson, Aron; Duneld, Martin

    2017-01-01

    The mapping of unstructured clinical text to an ontology facilitates meaningful secondary use of health records but is non-trivial due to lexical variation and the abundance of misspellings in hurriedly produced notes. Here, we apply several spelling correction methods to Swedish medical text and evaluate their impact on SNOMED CT mapping; first in a controlled evaluation using medical literature text with induced errors, followed by a partial evaluation on clinical notes. It is shown that the best-performing method is context-sensitive, taking into account trigram frequencies and utilizing a corpus-based dictionary.
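
    As a rough illustration of the context-sensitive idea, the sketch below ranks in-lexicon candidates within edit distance one of a misspelled token by the corpus frequency of the trigrams they would form with their neighbours. It is a toy stand-in, not the authors' Swedish pipeline; the alphabet, the `lexicon` set and the `trigrams` Counter are placeholder resources.

```python
from collections import Counter

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyzåäö"):
    """All strings within edit distance 1 of `word` (deletions, substitutions, insertions)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + replaces + inserts)

def correct(tokens, i, lexicon, trigrams):
    """Replace tokens[i] by the in-lexicon candidate whose surrounding trigrams are most frequent."""
    word = tokens[i]
    if word in lexicon:
        return word
    candidates = [c for c in edits1(word) if c in lexicon] or [word]
    def context_score(c):
        left = tuple(tokens[i - 2:i] + [c])       # trigram ending at the candidate
        right = tuple([c] + tokens[i + 1:i + 3])  # trigram starting at the candidate
        return trigrams[left] + trigrams[right]   # Counter returns 0 for unseen trigrams
    return max(candidates, key=context_score)
```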

  3. Grids in topographic maps reduce distortions in the recall of learned object locations.

    PubMed

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2014-01-01

    To date, it has been shown that cognitive map representations based on cartographic visualisations are systematically distorted. The grid is a traditional element of map graphics that has rarely been considered in research on perception-based spatial distortions. Grids not only support the map reader in finding coordinates or locations of objects; they also provide a systematic structure for clustering visual map information ("spatial chunks"). The aim of this study was to examine whether different cartographic kinds of grids reduce spatial distortions and improve recall memory for object locations. Recall performance was measured as both the percentage of correctly recalled objects (hit rate) and the mean distance errors of correctly recalled objects (spatial accuracy). Different kinds of grids (continuous lines, dashed lines, crosses) were applied to topographic maps. These maps were also varied in their type of characteristic areas (LANDSCAPE) and different information layer compositions (DENSITY) to examine the effects of map complexity. The study, involving 144 participants, shows that all experimental cartographic factors (GRID, LANDSCAPE, DENSITY) improve recall performance and spatial accuracy of learned object locations. Overlaying a topographic map with a grid significantly reduces the mean distance errors of correctly recalled map objects. The paper includes a discussion of a square grid's usefulness for object location memory, independent of whether the grid is clearly visible (continuous or dashed lines) or only indicated by crosses.

  4. Task-evoked brain functional magnetic susceptibility mapping by independent component analysis (χICA).

    PubMed

    Chen, Zikuan; Calhoun, Vince D

    2016-03-01

    Conventionally, independent component analysis (ICA) is performed on an fMRI magnitude dataset to analyze brain functional mapping (AICA). By solving the inverse problem of fMRI, we can reconstruct the brain magnetic susceptibility (χ) functional states. On the reconstructed χ dataspace, we propose an ICA-based brain functional χ mapping method (χICA) to extract task-evoked brain functional maps. A complex division algorithm is applied to a timeseries of fMRI phase images to extract temporal phase changes (relative to an OFF-state snapshot). A computed inverse MRI (CIMRI) model is used to reconstruct a 4D brain χ response dataset. χICA is implemented by applying a spatial InfoMax ICA algorithm to the reconstructed 4D χ dataspace. In finger-tapping experiments on a 7T system, the χICA-extracted χ-depicted functional map is similar to the SPM-inferred functional χ map, with a spatial correlation of 0.67 ± 0.05. In comparison, the AICA-extracted magnitude-depicted map is correlated with the SPM magnitude map by 0.81 ± 0.05. Understanding why χICA underperforms AICA for task-evoked functional mapping is an ongoing research topic. For task-evoked brain functional mapping, we compare the data-driven ICA method with the task-correlated SPM method. In particular, we compare χICA with AICA for extracting task-correlated timecourses and functional maps. χICA can extract a χ-depicted task-evoked brain functional map from a reconstructed χ dataspace without prior knowledge of brain hemodynamic responses. The χICA-extracted brain functional χ map reveals a bidirectional BOLD response pattern that is unavailable from (or different in) AICA. Copyright © 2016 Elsevier B.V. All rights reserved.
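
    The core decomposition step is easy to reproduce with off-the-shelf tools. The sketch below runs spatial ICA on a synthetic 4D χ dataspace; scikit-learn's FastICA is used here as a convenient stand-in for the spatial InfoMax algorithm named in the abstract, and all array sizes are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder 4D chi dataspace (x, y, z, t); in the paper this comes from CIMRI.
chi = np.random.randn(32, 32, 16, 120)
X = chi.reshape(-1, chi.shape[-1])      # (voxels, timepoints)

# Spatial ICA: voxels are the samples, so the recovered sources are spatial maps
# mixed over time (FastICA used as a stand-in for spatial InfoMax).
ica = FastICA(n_components=20, random_state=0)
S = ica.fit_transform(X)                # (voxels, 20): independent spatial maps
A = ica.mixing_                         # (timepoints, 20): component timecourses

maps = S.T.reshape(20, *chi.shape[:3])  # one 3D functional map per component
```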

  5. Recent developments in machine learning applications in landslide susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Lun, Na Kai; Liew, Mohd Shahir; Matori, Abdul Nasir; Zawawi, Noor Amila Wan Abdullah

    2017-11-01

    While the prediction of the spatial distribution of potential landslide occurrences is a primary interest in landslide hazard mitigation, it remains a challenging task. To overcome the scarcity of complete, sufficiently detailed geomorphological attributes and environmental conditions, various machine-learning techniques are increasingly applied to effectively map landslide susceptibility for large regions. Nevertheless, few review papers are devoted to this field, particularly to the various domain-specific applications of machine learning techniques. The available literature often reports relatively good predictive performance; however, papers discussing the limitations of each approach are quite uncommon. The foremost aim of this paper is to narrow these gaps in the literature and to review up-to-date machine learning and ensemble learning techniques applied in landslide susceptibility mapping. It provides new readers with an introductory understanding of the subject matter, and researchers with a contemporary review of machine learning advancements alongside the future direction of these techniques in the landslide mitigation field.

  6. The Performance Analysis of an Indoor Mobile Mapping System with RGB-D Sensor

    NASA Astrophysics Data System (ADS)

    Tsai, G. J.; Chiang, K. W.; Chu, C. H.; Chen, Y. L.; El-Sheimy, N.; Habib, A.

    2015-08-01

    Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber city modeling, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry, and multi-sensor integrated mapping technology has been clearly established to achieve this integration. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors and generally used in robotic systems, is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, especially to reduce costs, enhance the efficiency of data collection, and validate direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), a Kinect RGB-D sensor, a light detection and ranging (LIDAR) sensor, and a robot platform. In summary, this paper designs the payload for an indoor MMS to generate floor plans. The first part concentrates on comparing different positioning algorithms in the indoor environment. Next, the indoor plans are generated by the two sensors mounted on the robot, the Kinect RGB-D sensor and the LIDAR. Finally, the generated floor plan is compared with the known plan for both validation and verification.

  7. Strategy generalization across orientation tasks: testing a computational cognitive model.

    PubMed

    Gunzelmann, Glenn

    2008-07-08

    Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human performance was measured on an orientation task requiring participants to identify the location of a target either on a map (find-on-map) or within an egocentric view of a space (find-in-scene). A general strategy instantiated in a computational cognitive model of the find-on-map task, based on the results from Gunzelmann and Anderson (2006), was adapted to perform both tasks and used to generate performance predictions for a new study. The qualitative fit of the model to the human data supports the view that participants were able to tailor a general strategy to the requirements of particular spatial tasks. The quantitative differences between the predictions of the model and the performance of human participants in the new experiment expose individual differences in sample populations. The model provides a means of accounting for those differences and a framework for understanding how human spatial abilities are applied to naturalistic spatial tasks that involve reasoning with maps. 2008 Cognitive Science Society, Inc.

  8. Transition Funding for the Shallow Water Integrated Mapping System SWIMS and Modular Microstructure Profiler MMP

    DTIC Science & Technology

    2013-09-30

    the performance of operational and climate models, as well as for understanding local problems such as pollutant dispersal and biological...Mapping System (SWIMS) and Modular Microstructure Profiler (MMP) Matthew H. Alford Applied Physics Laboratory 1013 NE 40th Street Seattle, WA...in Juan de Fuca Submarine Canyon . Measurements were successful. In the next few weeks we will be testing MMP from our local work boat, the R/V Jack

  9. Integration of Genetic Algorithms and Fuzzy Logic for Urban Growth Modeling

    NASA Astrophysics Data System (ADS)

    Foroutan, E.; Delavar, M. R.; Araabi, B. N.

    2012-07-01

    The urban growth phenomenon, as a continuous spatio-temporal process, is subject to spatial uncertainty. This inherent uncertainty cannot be fully addressed by conventional methods based on Boolean algebra. Fuzzy logic can be employed to overcome this limitation: it preserves the spatial continuity of dynamic urban growth through the choice of fuzzy membership functions, fuzzy rules and the fuzzification-defuzzification process. Fuzzy membership functions and fuzzy rule sets, the heart of fuzzy logic, are rather subjective and expert-dependent. Due to the lack of a definite method for determining the membership function parameters, optimization is needed to tune the parameters and improve the performance of the model. This paper integrates genetic algorithms and fuzzy logic as a genetic fuzzy system (GFS) for modeling dynamic urban growth, and applies the proposed approach to the Tehran Metropolitan Area in Iran. Historical land use/cover data of the Tehran Metropolitan Area, extracted from the 1988 and 1999 Landsat ETM+ images, are employed to simulate urban growth. The land use classes extracted for 1988 include urban areas, streets and vegetated areas; slope and elevation are used as physical driving forces of urban growth. The Relative Operating Characteristic (ROC) curve is used as the fitness function to evaluate the performance of the GFS algorithm, and the optimal membership function parameters are applied to generate a suitability map for urban growth. Comparing the suitability map with the real land use map of 1999 gives the threshold value for the best suitability map, which can simulate the land use map of 1999. The simulation outcomes, with a kappa of 89.13% and an overall map accuracy of 95.58%, demonstrate the efficiency and reliability of the proposed model.
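
    A minimal sketch of the GFS idea follows, under heavy assumptions: two placeholder driving-force layers, a single fuzzy rule with Gaussian membership functions combined by a product t-norm, and a bare-bones GA (truncation selection plus Gaussian mutation) whose fitness is the area under the ROC curve, mirroring the paper's ROC-based fitness. The real model's rule base and operators are certainly richer.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder driving forces (e.g. normalized distance-to-urban, slope) and
# synthetic observed growth labels; real inputs come from the Landsat maps.
X = rng.random((5000, 2))
y = (X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.standard_normal(5000) > 0.4).astype(int)

def suitability(params, X):
    """One fuzzy rule: growth is likely where both factors are 'favourable'
    (Gaussian membership functions, product t-norm)."""
    c0, s0, c1, s1 = params
    mu0 = np.exp(-((X[:, 0] - c0) / s0) ** 2)
    mu1 = np.exp(-((X[:, 1] - c1) / s1) ** 2)
    return mu0 * mu1

def fitness(params):
    return roc_auc_score(y, suitability(params, X))  # ROC-based fitness, as in the paper

# Tiny GA over the membership-function parameters.
pop = rng.uniform(0.05, 1.0, size=(40, 4))
for gen in range(30):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]              # keep the fitter half
    children = parents + 0.05 * rng.standard_normal(parents.shape)
    pop = np.vstack([parents, np.clip(children, 0.01, 1.5)])
best = pop[np.argmax([fitness(p) for p in pop])]         # tuned MF parameters
```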

  10. Velocity filtering applied to optical flow calculations

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1990-01-01

    Optical flow is a method by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. Passive ranging via optical flow is applied here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based method to determine range to all pixels in the initial image. The theoretical understanding and performance analysis of velocity filtering as applied to optical flow are expanded, and experimental results are presented.

  11. Urban local climate zone mapping and its application in urban environment study

    NASA Astrophysics Data System (ADS)

    He, Shan; Zhang, Yunwei; Zhang, Jili

    2018-02-01

    The local climate zone (LCZ) scheme is considered a powerful tool for urban climate mapping. However, LCZ division methods and results differ among cities in different countries and regions, so targeted studies are needed. In the current work, an LCZ mapping method is proposed that is convenient in operation and oriented toward city planning. In this proposed method, the local climate zoning types were first adjusted to the characteristics of Chinese cities, which typically feature taller buildings and higher densities. Then the classification method proposed by WUDAPT, based on remote sensing data, was applied to the city of Xi'an as an example of LCZ mapping. Combined with the city road network, the zoning results were expressed in a form suited to city planning, in which land parcels are usually treated as the basic unit. The proposed method was validated against actual land use and construction data surveyed in Xi'an, with results indicating its feasibility for urban LCZ mapping in China.

  12. E-Index for Differentiating Complex Dynamic Traits

    PubMed Central

    Qi, Jiandong; Sun, Jianfeng; Wang, Jianxin

    2016-01-01

    While it is a daunting challenge in current biology to understand how the underlying network of genes regulates complex dynamic traits, functional mapping, a tool for mapping quantitative trait loci (QTLs) and single nucleotide polymorphisms (SNPs), has been applied in a variety of cases to tackle this challenge. Though useful and powerful, functional mapping performs well only when one or more model parameters are clearly responsible for the developmental trajectory, typically a logistic curve. Moreover, it does not work when the curves are more complex than that, especially when they are not monotonic. To overcome this limitation, we propose a mathematical-biological concept and measurement, the E-index (earliness-index), which cumulatively measures the degree of earliness with which a variable (or a dynamic trait) increases or decreases its value. Theoretical proofs and simulation studies show that the E-index is more general than functional mapping and can be applied to any complex dynamic trait, including those with logistic curves and those with nonmonotonic curves. An E-index vector is proposed as well, to capture more subtle differences in developmental patterns. PMID:27064292

  13. Investigation of inversion polymorphisms in the human genome using principal components analysis.

    PubMed

    Ma, Jianzhong; Amos, Christopher I

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases.
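
    The local-PCA signal is simple to demonstrate. In the sketch below, synthetic unphased genotypes inside a putative inversion region separate into three clusters along the first principal component, which are then ordered and labelled as the two orientation homozygotes (NI/NI, I/I) and the heterozygotes (NI/I). The genotype simulation is purely illustrative, and k-means is one possible way, not necessarily the authors', of cutting PC1 into three groups.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n, m = 300, 400
# Hidden inversion genotype (0/1/2 copies of the inverted arrangement) and
# placeholder 0/1/2 SNP genotypes whose allele frequencies track it; real input
# would be HapMap-style genotype data for SNPs inside a candidate region.
inv = rng.choice([0, 1, 2], size=n, p=[0.49, 0.42, 0.09])
p_alt = 0.1 + 0.35 * inv[:, None] / 2
geno = (rng.random((n, m)) < p_alt).astype(float) + (rng.random((n, m)) < p_alt)

# Local PCA: suppressed recombination makes the region behave like two
# "populations" plus their 1:1 admixture, i.e. three clusters along PC1.
pc1 = PCA(n_components=1).fit_transform(geno - geno.mean(axis=0)).ravel()
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pc1.reshape(-1, 1))

# Order clusters along PC1: the extremes are the two orientation homozygotes,
# the middle cluster the heterozygotes.
order = np.argsort([pc1[labels == k].mean() for k in range(3)])
call = {order[0]: "NI/NI", order[1]: "NI/I", order[2]: "I/I"}
genotypes = [call[k] for k in labels]
```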

  14. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    NASA Astrophysics Data System (ADS)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
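
    For reference, the CHO statistic itself is compact: with channel outputs v = U^T g, the detectability is SNR = sqrt(dv^T Kv^-1 dv). The sketch below estimates it from sample reconstructions; note that the paper's contribution is to evaluate the means and covariances from a theoretical approximation rather than from samples as done here.

```python
import numpy as np

def cho_snr(g_signal, g_noise, U):
    """Channelized Hotelling observer SNR from sample images.

    g_signal, g_noise : (n_images, n_pixels) reconstructions with / without the signal
    U                 : (n_pixels, n_channels) channel templates (e.g. difference-of-Gaussians)
    """
    v1, v0 = g_signal @ U, g_noise @ U             # channel outputs for each class
    dv = v1.mean(axis=0) - v0.mean(axis=0)         # mean channel-output difference
    Kv = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(Kv, dv)                    # Hotelling template in channel space
    return float(np.sqrt(dv @ w))                  # SNR = sqrt(dv^T Kv^-1 dv)
```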

  15. Brain structural changes following adaptive cognitive training assessed by Tensor-Based Morphometry (TBM).

    PubMed

    Colom, Roberto; Hua, Xue; Martínez, Kenia; Burgaleta, Miguel; Román, Francisco J; Gunter, Jeffrey L; Carmona, Susanna; Jaeggi, Susanne M; Thompson, Paul M

    2016-10-01

    Tensor-Based Morphometry (TBM) allows the automatic mapping of brain changes across time by building 3D deformation maps. This technique has been applied for tracking brain degeneration in Alzheimer's and other neurodegenerative diseases with high sensitivity and reliability. Here we applied TBM to quantify changes in brain structure after completion of a challenging adaptive cognitive training program based on the n-back task. Twenty-six young women completed twenty-four training sessions across twelve weeks and showed, on average, large cognitive improvements. High-resolution MRI scans were obtained before and after training. The computed longitudinal deformation maps were analyzed to answer three questions: (a) Are there differential brain structural changes in the training group as compared with a matched control group? (b) Are these changes related to performance differences in the training program? (c) Are standardized changes in a set of psychological factors (fluid and crystallized intelligence, working memory, and attention control), measured before and after training, related to structural changes in the brain? Results showed (a) greater structural changes for the training group in the temporal lobe, (b) a negative correlation between these changes and performance across training sessions (the greater the structural change, the lower the cognitive performance improvement), and (c) negligible effects regarding the psychological factors measured before and after training. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Performance Comparison of Big Data Analytics With NEXUS and Giovanni

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Huang, T.; Lynnes, C.

    2016-12-01

    NEXUS is an emerging data-intensive analysis framework developed with a new approach for handling science data that enables large-scale data analysis; it is available as open source. We compare the performance of NEXUS and Giovanni for three statistics algorithms applied to NASA datasets. Giovanni is a statistics web service at NASA Distributed Active Archive Centers (DAACs). NEXUS is a cloud-computing environment developed at JPL and built on Apache Solr, Cassandra, and Spark. We compute a global time-averaged map, a correlation map, and an area-averaged time series. The first two algorithms average over time to produce a value for each pixel in a 2-D map; the third averages spatially to produce a single value for each time step. We report benchmark findings that indicate a 15x speedup with NEXUS over Giovanni in computing the area-averaged time series of daily precipitation rate from the Tropical Rainfall Measuring Mission (TRMM, 0.25 degree spatial resolution) over the Continental United States for 14 years (2000-2014), with 64-way parallelism and 545 tiles per granule. 16-way parallelism with 16 tiles per granule worked best with NEXUS for computing an 18-year (1998-2015) TRMM daily precipitation global time-averaged map (2.5x speedup) and an 18-year global map of the correlation between TRMM daily precipitation and TRMM real-time daily precipitation (7x speedup). These and other benchmark results will be presented, along with key lessons learned in applying the NEXUS tiling approach to big data analytics in the cloud.

  17. Development and Application of a Stepwise Assessment Process for Rational Redesign of Sequential Skills-Based Courses.

    PubMed

    Gallimore, Casey E; Porter, Andrea L; Barnett, Susanne G

    2016-10-25

    Objective. To develop and apply a stepwise process to assess achievement of course learning objectives related to advanced pharmacy practice experience (APPE) preparedness and to inform redesign of sequential skills-based courses. Design. Four steps comprised the assessment and redesign process: (1) identify skills critical for APPE preparedness; (2) utilize focus groups and course evaluations to determine student competence in skill performance; (3) apply course mapping to identify course deficits contributing to suboptimal skill performance; and (4) initiate course redesign to target exposed deficits. Assessment. Focus group participants perceived students were least prepared for skills within the Accreditation Council for Pharmacy Education's pre-APPE core domains of Identification and Assessment of Drug-related Problems and General Communication Abilities. Course mapping identified gaps in instruction, performance, and assessment of skills within the aforementioned domains. Conclusions. A stepwise process that identified strengths and weaknesses of a course was used to facilitate structured course redesign. Strengths of the process included input and corroboration from both preceptors and students. Limitations included feedback from a small number of pharmacy preceptors and increased workload on course coordinators.

  18. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Vladimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters, to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of the conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
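
    A single-resolution caricature of the gradient-domain step is given below: gradients are fused by picking, per pixel, the input with the stronger gradient magnitude, and an image is recovered from the fused gradient field with a plain Jacobi Poisson solver. This is only to make the mechanism concrete; the paper performs the selection at every level of a multiresolution pyramid and reconstructs through QMF-derived gradient filters, not a Poisson solve.

```python
import numpy as np

def fuse_gradients(img_a, img_b):
    """At each pixel, keep the gradient from whichever input has the stronger
    local gradient magnitude (single-scale feature selection in the gradient domain)."""
    def grads(im):
        gx = np.diff(im, axis=1, append=im[:, -1:])
        gy = np.diff(im, axis=0, append=im[-1:, :])
        return gx, gy
    ax, ay = grads(img_a)
    bx, by = grads(img_b)
    pick_a = (ax**2 + ay**2) >= (bx**2 + by**2)
    return np.where(pick_a, ax, bx), np.where(pick_a, ay, by)

def reconstruct(fx, fy, iters=500):
    """Recover an image (up to an additive constant) whose gradients match
    (fx, fy) via Jacobi iterations on the Poisson equation laplacian(u) = div(f)."""
    div = np.zeros_like(fx)
    div[:, 1:] += fx[:, 1:] - fx[:, :-1]
    div[1:, :] += fy[1:, :] - fy[:-1, :]
    u = np.zeros_like(fx)
    for _ in range(iters):
        up = np.pad(u, 1, mode="edge")
        u = 0.25 * (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - div)
    return u
```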

  19. Advancing precision cosmology with 21 cm intensity mapping

    NASA Astrophysics Data System (ADS)

    Masui, Kiyoshi Wesley

    In this thesis we make progress toward establishing the observational method of 21 cm intensity mapping as a sensitive and efficient method for mapping the large-scale structure of the Universe. In Part I we undertake theoretical studies to better understand the potential of intensity mapping. This includes forecasting the ability of intensity mapping experiments to constrain alternative explanations to dark energy for the Universe's accelerated expansion. We also considered how 21 cm observations of the neutral gas in the early Universe (after recombination but before reionization) could be used to detect primordial gravity waves, thus providing a window into cosmological inflation. Finally we showed that scientifically interesting measurements could in principle be performed using intensity mapping in the near term, using existing telescopes in pilot surveys or prototypes for larger dedicated surveys. Part II describes observational efforts to perform some of the first measurements using 21 cm intensity mapping. We develop a general data analysis pipeline for analyzing intensity mapping data from single dish radio telescopes. We then apply the pipeline to observations using the Green Bank Telescope. By cross-correlating the intensity mapping survey with a traditional galaxy redshift survey we put a lower bound on the amplitude of the 21 cm signal. The auto-correlation provides an upper bound on the signal amplitude and we thus constrain the signal from both above and below. This pilot survey represents a pioneering effort in establishing 21 cm intensity mapping as a probe of the Universe.

  20. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
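
    The annealing loop at the heart of CIMO can be caricatured as follows: segments are per-row leaf openings with weights, and random perturbations of either a leaf position or a weight are accepted by the Metropolis rule when they reduce the squared difference between the delivered and target intensity maps. The target map, segment count, move set and cooling schedule below are all placeholders, and clinical constraints (e.g. leaf interdigitation rules) are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((10, 10)) * 3.0        # placeholder optimized intensity map
n_seg = 5
rows, cols = target.shape

# One segment = a (left, right) leaf pair per row plus a weight; the delivered
# fluence is the weighted sum of the open apertures.
left = rng.integers(0, cols // 2, (n_seg, rows))
right = rng.integers(cols // 2, cols, (n_seg, rows)) + 1
w = np.full(n_seg, target.mean())

def delivered(left, right, w):
    f = np.zeros((rows, cols))
    for s in range(n_seg):
        for r in range(rows):
            f[r, left[s, r]:right[s, r]] += w[s]
    return f

def cost(left, right, w):
    return ((delivered(left, right, w) - target) ** 2).sum()

T, cur = 1.0, cost(left, right, w)
for _ in range(20000):                     # anneal shapes and weights simultaneously
    L, R, W = left.copy(), right.copy(), w.copy()
    if rng.random() < 0.5:                 # move 1: nudge one leaf position
        s, r = rng.integers(n_seg), rng.integers(rows)
        if rng.random() < 0.5:
            L[s, r] = np.clip(L[s, r] + rng.integers(-1, 2), 0, R[s, r] - 1)
        else:
            R[s, r] = np.clip(R[s, r] + rng.integers(-1, 2), L[s, r] + 1, cols)
    else:                                  # move 2: nudge one segment weight
        s = rng.integers(n_seg)
        W[s] = max(W[s] + 0.05 * rng.standard_normal(), 0.0)
    c = cost(L, R, W)
    if c < cur or rng.random() < np.exp((cur - c) / T):   # Metropolis acceptance
        left, right, w, cur = L, R, W, c
    T *= 0.9997                            # geometric cooling
```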

  1. Analysis of Multispectral Time Series for supporting Forest Management Plans

    NASA Astrophysics Data System (ADS)

    Simoniello, T.; Carone, M. T.; Costantini, G.; Frattegiani, M.; Lanfredi, M.; Macchiato, M.

    2010-05-01

    Adequate forest management requires specific plans based on updated and detailed mapping. Multispectral satellite time series have been widely applied to forest monitoring and studies at different scales thanks to their capability of providing synoptic information on some basic parameters descriptive of vegetation distribution and status. As a low-cost tool for supporting forest management plans in an operational context, we tested the use of Landsat-TM/ETM time series (1987-2006) in the high Agri Valley (Southern Italy) for planning field surveys as well as for the integration of existing cartography. As a preliminary step, no-change regression normalization was applied to the time series to make all scenes radiometrically consistent; then all the available data concerning forest maps, municipal boundaries, water basins, rivers, and roads were overlaid in a GIS environment. From the 2006 image we derived the NDVI map and analyzed its distribution for each land cover class. To separate physiological variability from anomalous areas, a threshold on the distributions was applied. To label the non-homogeneous areas, a multitemporal analysis was performed, separating heterogeneity due to cover changes from that linked to the mapping of basic units and to aggregations in classification labelling. A map of priority areas was then produced to support the field survey plan. To analyze the territorial evolution, historical land cover maps were produced by adopting a hybrid classification approach based on preliminary segmentation, the identification of training areas, and a subsequent maximum likelihood categorization. Such an analysis was fundamental for the general assessment of territorial dynamics and, in particular, for evaluating the efficacy of past intervention activities.
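
    The NDVI thresholding step is straightforward to express in code. The sketch below is a schematic reading of the procedure: NDVI is computed from red and near-infrared reflectance, and pixels falling outside a per-class tolerance band are flagged as anomalous. The band definition (mean ± k standard deviations) and the value of k are assumptions; the paper derives its threshold from the observed class distributions.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from near-infrared and red reflectance (Landsat TM/ETM bands 4 and 3)."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

def anomalous_areas(ndvi_map, class_map, k=2.0):
    """Flag pixels whose NDVI falls outside mean +/- k*std of their own land
    cover class, separating physiological variability from anomalies."""
    out = np.zeros_like(ndvi_map, dtype=bool)
    for c in np.unique(class_map):
        sel = class_map == c
        mu, sd = ndvi_map[sel].mean(), ndvi_map[sel].std()
        out |= sel & (np.abs(ndvi_map - mu) > k * sd)
    return out
```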

  2. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, natural resources have declined, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements in the quality of the available information are needed in order to obtain reliably classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of this study is to apply pixel- and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the thematic maps generated, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain) was chosen, Teide National Park, and WorldView-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied using pixel-based and object-based approaches, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier with the object-based approach to the image fused by the Weighted Wavelet `à trous' through Fractal Dimension Maps technique. Finally, we highlight the difficulty of classification in the Teide ecosystem due to its heterogeneity and the small size of the species. It is thus important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.

  3. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor-data-to-fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor-data-to-fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Second, the neural network which performs sensor fusion is structured to detect new, unknown faults for which training examples were not presented during training. These methods were tested on a fault diagnosis task employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  4. Quantified pH imaging with hyperpolarized (13)C-bicarbonate.

    PubMed

    Scholz, David Johannes; Janich, Martin A; Köllisch, Ulrich; Schulte, Rolf F; Ardenkjaer-Larsen, Jan H; Frank, Annette; Haase, Axel; Schwaiger, Markus; Menzel, Marion I

    2015-06-01

    Because pH plays a crucial role in several diseases, it is desirable to measure pH in vivo noninvasively and in a spatially localized manner. Spatial maps of pH were quantified in vitro, with a focus on method-based errors, and applied in vivo. In vitro and in vivo (13)C mapping were performed for various flip angles for bicarbonate (BiC) and CO2, with spectral-spatial excitation and spiral readout, in five slices in healthy Lewis rats. Acute subcutaneous sterile inflammation was induced with Concanavalin A in the right leg of Buffalo rats, and pH and proton images were measured 2 h after induction. After optimizing the signal-to-noise ratio of the hyperpolarized (13)C-bicarbonate, error estimation of the spectral-spatial excited spectrum revealed that the method covers the biologically relevant pH range of 6 to 8 with low pH error (< 0.2). Quantification of the pH maps shows a negligible impact of the residual bicarbonate signal. The pH maps reflect the induction of acute metabolic alkalosis, and inflamed, infected regions exhibit lower pH. Hyperpolarized (13)C-bicarbonate pH mapping was shown to be sensitive in the biologically relevant pH range. The pH mapping was applied to healthy organs in vivo and interpreted within inflammation and acute metabolic alkalosis models. © 2014 Wiley Periodicals, Inc.
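
    The quantification itself rests on the Henderson-Hasselbalch relation between the two hyperpolarized signals, pH = pKa + log10([HCO3-]/[CO2]). A minimal per-pixel sketch follows; the pKa of 6.17, commonly quoted for this system in vivo, and the simple signal-floor mask are assumptions rather than the paper's exact processing.

```python
import numpy as np

def ph_map(bic, co2, pka=6.17, snr_floor=0.0):
    """pH map from hyperpolarized 13C-bicarbonate and 13CO2 signal maps via
    Henderson-Hasselbalch: pH = pKa + log10(bicarbonate / CO2).
    Pixels where either signal falls below snr_floor are masked with NaN."""
    bic = np.asarray(bic, dtype=float)
    co2 = np.asarray(co2, dtype=float)
    ph = np.full(bic.shape, np.nan)
    ok = (bic > snr_floor) & (co2 > snr_floor)
    ph[ok] = pka + np.log10(bic[ok] / co2[ok])
    return ph
```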

  5. O5.01 Standards and Advanced Testing for Intraoperative Language and Cognitive Mapping

    PubMed Central

    Comi, A.; Riva, M.; Casarotti, A.; Fava, E.; Pessina, F.; Papagno, C.; Bello, L.

    2014-01-01

    Resection of tumors involving language pathways requires the intraoperative identification of the cortical and subcortical sites mediating the various language components, which determines the extent of resection (EOR). One of the critical points is which test(s) should be performed during subcortical mapping. Object naming is the most widely used, but it may miss other components of language, such as verb naming and generation or comprehension of words and sentences, potentially resulting in permanent postoperative deficits. Patients can be submitted intraoperatively to complex batteries of tests, but this limits performance and increases the chance of intraoperative fatigue, resulting in poor mapping. We reviewed our experience with subcortical language mapping in a series of patients with gliomas involving language pathways, in which two strategies for subcortical mapping were applied: in the first group, only object naming was used; in the second group, object naming was predominantly used but was integrated with other tests (verb naming and generation, comprehension of words and sentences, number recognition and calculation). Results were evaluated as immediate and permanent deficits, by applying a large battery of neuropsychological tests, and as EOR (on volumetric FLAIR or post-Gd T1 images). The first group comprised 221 gliomas (168 LGGs, 53 HGGs); 130 were frontal, 21 insular, 58 temporal, and 12 parietal. Object naming was applied for subcortical mapping in all cases; 198 patients had immediate postoperative deficits. Neuropsychological evaluation at 1 month showed complete recovery in 199 patients, while a mild impairment was documented in 22 patients (12 posterior temporal tumors, 6 parietal tumors, and 4 posterior insular tumors); at the 3-month evaluation, 15 patients still showed a mild impairment, mainly those whose tumors were located in posterior temporal and parietal areas. EOR was total and subtotal in 48.9% and 41.5% of cases, respectively. Fatigue was observed in 12% of patients, mainly those with large-volume tumors. The second group comprised 179 gliomas (155 LGGs, 24 HGGs); 61 were frontal, 38 insular, 45 temporal, and 11 parietal. Object naming was used for initial mapping and for locating the main subcortical tracts (IFOF, ARC, UNC); in addition, once the initial portion of these tracts was identified, the other tests were applied during subcortical mapping. 165 patients had immediate postoperative deficits; only 2 patients had a mild impairment at the 1- and 3-month evaluations. EOR was total and subtotal in 49.6% and 47.4% of cases, respectively. Fatigue was observed in 9% of patients. Object naming can be safely used during subcortical mapping for resection of tumors in the frontal lobe; resection of tumors in posterior temporal, insular and parietal areas requires a larger battery of tests, which neither reduced the chance of reaching a total or subtotal resection nor resulted in a higher rate of patient fatigue.

  6. 25 CFR 170.5 - What definitions apply to this part?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... program. The term includes— (1) Locating, surveying, and mapping (including establishing temporary and permanent geodetic markers in accordance with specifications of the U.S. Geological Survey); (2) Resurfacing... otherwise prohibited by law. Design means services performed by licensed design professionals related to...

  7. 25 CFR 170.5 - What definitions apply to this part?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... program. The term includes— (1) Locating, surveying, and mapping (including establishing temporary and permanent geodetic markers in accordance with specifications of the U.S. Geological Survey); (2) Resurfacing... otherwise prohibited by law. Design means services performed by licensed design professionals related to...

  8. 25 CFR 170.5 - What definitions apply to this part?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... program. The term includes— (1) Locating, surveying, and mapping (including establishing temporary and permanent geodetic markers in accordance with specifications of the U.S. Geological Survey); (2) Resurfacing... otherwise prohibited by law. Design means services performed by licensed design professionals related to...

  9. 25 CFR 170.5 - What definitions apply to this part?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... program. The term includes— (1) Locating, surveying, and mapping (including establishing temporary and permanent geodetic markers in accordance with specifications of the U.S. Geological Survey); (2) Resurfacing... otherwise prohibited by law. Design means services performed by licensed design professionals related to...

  10. 25 CFR 170.5 - What definitions apply to this part?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... program. The term includes— (1) Locating, surveying, and mapping (including establishing temporary and permanent geodetic markers in accordance with specifications of the U.S. Geological Survey); (2) Resurfacing... otherwise prohibited by law. Design means services performed by licensed design professionals related to...

  11. Promoting Health Equity And Eliminating Disparities Through Performance Measurement And Payment.

    PubMed

    Anderson, Andrew C; O'Rourke, Erin; Chin, Marshall H; Ponce, Ninez A; Bernheim, Susannah M; Burstin, Helen

    2018-03-01

    Current approaches to health care quality have failed to reduce health care disparities. Despite dramatic increases in the use of quality measurement and associated payment policies, there has been no notable implementation of measurement strategies to reduce health disparities. The National Quality Forum developed a road map to demonstrate how measurement and associated policies can contribute to eliminating disparities and promote health equity. Specifically, the road map presents a four-part strategy whose components are identifying and prioritizing areas to reduce health disparities, implementing evidence-based interventions to reduce disparities, investing in the development and use of health equity performance measures, and incentivizing the reduction of health disparities and achievement of health equity. To demonstrate how the road map can be applied, we present an example of how measurement and value-based payment can be used to reduce racial disparities in hypertension among African Americans.

  12. A comparison of two conformal mapping techniques applied to an aerobrake body

    NASA Technical Reports Server (NTRS)

    Hommel, Mark J.

    1987-01-01

    Conformal mapping is a classical technique which has been utilized for solving problems in aerodynamics and hydrodynamics, and it has been successfully applied in the construction of grids around airfoils, engine inlets and other aircraft configurations. Here, conformal mapping techniques were applied to an aerobrake body having an axis of symmetry. Two different approaches were utilized: (1) the Karman-Trefftz transformation; and (2) the point-wise Schwarz-Christoffel transformation. In both cases, the aerobrake body was mapped onto a near circle and a grid was generated in the mapped plane. The mapped body and grid were then mapped back into physical space, and the properties of the associated grids were examined. Advantages and disadvantages of both approaches are discussed.
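
    For the first approach, the transformation has a standard closed form, z = n c [(zeta + c)^n + (zeta - c)^n] / [(zeta + c)^n - (zeta - c)^n], with n = 2 - beta/pi set by the trailing-edge angle beta (n = 2 recovers the Joukowski map). The sketch below applies it to a circle passing through zeta = c, the same mechanism used to map a body onto a near circle and back; the circle parameters are illustrative, and principal-branch complex powers are used, which can leave branch-cut seams in less benign configurations.

```python
import numpy as np

def karman_trefftz(zeta, n=1.9, c=1.0):
    """Karman-Trefftz map; n = 2 - beta/pi encodes the trailing-edge angle beta,
    and n = 2 reduces to the Joukowski transform."""
    p, m = (zeta + c) ** n, (zeta - c) ** n   # principal-branch complex powers
    return n * c * (p + m) / (p - m)

# Map a circle passing through zeta = +c onto an airfoil-like contour.
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
center = -0.1 + 0.1j                          # illustrative circle center
radius = abs(1.0 - center)                    # chosen so the circle passes through +c
zeta = center + radius * np.exp(1j * theta)
z = karman_trefftz(zeta)                      # complex coordinates of the mapped contour
```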

  13. Elemental mapping of biofortified wheat grains using micro X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Ramos, I.; Pataco, I. M.; Mourinho, M. P.; Lidon, F.; Reboredo, F.; Pessoa, M. F.; Carvalho, M. L.; Santos, J. P.; Guerra, M.

    2016-06-01

    Micro X-ray fluorescence has been used to obtain elemental maps of biofortified wheat grains. Two varieties of wheat were used in the study, Triticum aestivum L. and Triticum durum Desf. Two treatments, with different nutrient concentrations, were applied to the plants during the whole plant growth cycle. From the obtained elemental maps it was possible to extract information regarding the plants' physiological processes under the biofortification procedures. Both macro- and micronutrients were mapped, providing useful insight into the subsequent food processing mechanisms of this biofortified staple food. We have also shown that this kind of study can now be performed with laboratory benchtop apparatus, rather than synchrotron radiation, increasing the overall attractiveness of micro X-ray fluorescence for the study of highly heterogeneous biological samples.

  14. Imaging interfacial electrical transport in graphene–MoS₂ heterostructures with electron-beam-induced-currents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, E. R., E-mail: ewhite@physics.ucla.edu; Kerelsky, Alexander; Hubbard, William A.

    2015-11-30

    Heterostructure devices with specific and extraordinary properties can be fabricated by stacking two-dimensional crystals. Cleanliness at the inter-crystal interfaces within a heterostructure is crucial for maximizing device performance. However, because these interfaces are buried, characterizing their impact on device function is challenging. Here, we show that electron-beam induced current (EBIC) mapping can be used to image interfacial contamination and to characterize the quality of buried heterostructure interfaces with nanometer-scale spatial resolution. We applied EBIC and photocurrent imaging to map photo-sensitive graphene-MoS₂ heterostructures. The EBIC maps, together with concurrently acquired scanning transmission electron microscopy images, reveal how a device's photocurrent collection efficiency is adversely affected by nanoscale debris invisible to optical-resolution photocurrent mapping.

  15. Mapping soil texture classes and optimization of the result by accuracy assessment

    NASA Astrophysics Data System (ADS)

    Laborczi, Annamária; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Pásztor, László

    2014-05-01

    There are increasing demands nowadays for spatial soil information to support environment-related and land use management decisions. The GlobalSoilMap.net (GSM) project aims to make a new digital soil map of the world using state-of-the-art and emerging technologies for soil mapping and predicting soil properties at fine resolution. Sand, silt and clay are among the mandatory GSM soil properties, and soil texture class information is input data for important agro-meteorological and hydrological models. Our present work aims to compare and evaluate different digital soil mapping methods and variables for producing the most accurate spatial prediction of texture classes in Hungary. In addition to the Hungarian Soil Information and Monitoring System as our basic data, a digital elevation model and its derived components, a geological database, and physical property maps of the Digital Kreybig Soil Information System were applied as auxiliary elements. Two approaches were applied in the mapping process. First, the sand, silt and clay rasters were computed independently using regression kriging (RK); from these rasters, following the USDA categories, we compiled the texture class map. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps; however, these results necessarily include the uncertainty of the three kriged rasters. Therefore, we applied data mining methods as the other approach to digital soil mapping: by building classification trees and random forests, we obtained the texture class maps directly, so that the various results could be compared with the RK maps. The performance of the different methods and data was examined by testing the accuracy of the geostatistically computed and the directly classified results, and the GSM methodology was used to assess the most predictive and accurate way of selecting the best among the several result maps. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
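
    Compiling a class map from the three kriged rasters amounts to a per-pixel lookup in the USDA texture triangle. The sketch below encodes only a handful of the twelve USDA classes to show the mechanics; the boundary values follow the published triangle but should be checked against the full definition before any real use.

```python
def usda_texture(sand, silt, clay):
    """Very reduced USDA texture-triangle lookup (a few of the 12 classes only).
    Inputs are percentages that should sum to approximately 100."""
    assert abs(sand + silt + clay - 100.0) < 1.0, "fractions must sum to ~100%"
    if clay >= 40 and sand <= 45 and silt < 40:
        return "clay"
    if sand >= 85 and silt + 1.5 * clay < 15:
        return "sand"
    if silt >= 80 and clay < 12:
        return "silt"
    if 7 <= clay < 27 and 28 <= silt < 50 and sand <= 52:
        return "loam"
    return "other (remaining classes omitted in this sketch)"
```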

  16. HOTEX: An Approach for Global Mapping of Human Built-Up and Settlement Extent

    NASA Technical Reports Server (NTRS)

    Wang, Panshi; Huang, Chengquan; Tilton, James C.; Tan, Bin; Brown De Colstoun, Eric C.

    2017-01-01

    Understanding the impacts of urbanization requires accurate and updatable urban extent maps. Here we present an algorithm for mapping urban extent at global scale using Landsat data. An innovative hierarchical object-based texture (HOTex) classification approach was designed to overcome spectral confusion between urban and nonurban land cover types. VIIRS nightlights data and MODIS vegetation index datasets are integrated as high-level features under an object-based framework. We applied the HOTex method to the GLS-2010 Landsat images to produce a global map of human built-up and settlement extent. As shown by visual assessments, our method could effectively map urban extent and generate consistent results using images with inconsistent acquisition time and vegetation phenology. Using scene-level cross validation for results in Europe, we assessed the performance of HOTex and achieved a kappa coefficient of 0.91, compared to 0.74 from a baseline method using per-pixel classification using spectral information.
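
    The kappa coefficient reported above is the standard chance-corrected agreement measure, kappa = (p_o - p_e) / (1 - p_e), with p_o the observed and p_e the chance-expected agreement. A minimal sketch of the scene-level check, with made-up labels, follows.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical validation samples for one scene: 1 = built-up, 0 = not built-up.
reference = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1])
mapped = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1])

kappa = cohen_kappa_score(reference, mapped)   # agreement beyond chance
print(f"kappa = {kappa:.2f}")
```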

  17. Automated grain mapping using wide angle convergent beam electron diffraction in transmission electron microscope for nanomaterials.

    PubMed

    Kumar, Vineet

    2011-12-01

    Grain size statistics, commonly derived from the grain map of a material sample, are important microstructure characteristics that greatly influence its properties. The grain map for nanomaterials is usually obtained manually, by visual inspection of transmission electron microscope (TEM) micrographs, because automated methods do not perform satisfactorily. While the visual inspection method provides reliable results, it is a labor-intensive process and is often prone to human error. In this article, an automated grain mapping method is developed using TEM diffraction patterns. The presented method uses wide angle convergent beam diffraction in the TEM. The automated technique was applied to a platinum thin film sample to obtain the grain map and subsequently derive grain size statistics from it. The grain size statistics obtained with the automated method were found to be in good agreement with those from the visual inspection method.

  18. Validating a method for transferring social values of ecosystem services between public lands in the Rocky Mountain region

    USGS Publications Warehouse

    Sherrouse, Benson C.; Semmens, Darius J.

    2014-01-01

    With growing pressures on ecosystem services, social values attributed to them are increasingly important to land management decisions. Social values, defined here as perceived values the public ascribes to ecosystem services, particularly cultural services, are generally not accounted for through economic markets or considered alongside economic and ecological values in ecosystem service assessments. Social-values data can be elicited through public value and preference surveys; however, limitations prevent them from being regularly collected. These limitations led to our three study objectives: (1) demonstrate an approach for applying benefit transfer, a nonmarket-valuation method, to spatially explicit social values; (2) validate the approach; and (3) identify potential improvements. We applied Social Values for Ecosystem Services (SolVES) to survey data for three national forests in Colorado and Wyoming. Social-value maps and models were generated, describing relationships between the maps and various combinations of environmental variables. Models from each forest were used to estimate social-value maps for the other forests via benefit transfer. Model performance was evaluated relative to the locally derived models. Performance varied with the number and type of environmental variables used, as well as differences in the forests' physical and social contexts. Enhanced metadata and better social-context matching could improve model transferability.

  19. Evaluation of a College Freshman Diversity Research Program in Astronomy

    NASA Astrophysics Data System (ADS)

    Tremmel, Michael J.; Garner, S. M.; Schmidt, S. J.; Wisniewski, J. P.; Agol, E.

    2014-01-01

    Graduate students in the astronomy department at the University of Washington began the Pre-Major in Astronomy Program (Pre-MAP) after recognizing that underrepresented students in STEM fields are not well retained after their transition from high school. Pre-MAP is a research and mentoring program that begins with a keystone seminar in which students learn astronomical research techniques that they apply to research projects conducted in small groups. Students also receive one-on-one mentoring and peer support for the duration of the academic year and beyond. Successful Pre-MAP students have declared astronomy and physics majors, expanded their research projects beyond the fall quarter, presented posters at the UW Undergraduate Research Symposium, and received research fellowships and summer internships. Here we examine the success of the program in attracting underrepresented minorities and in facilitating better STEM retention and academic performance among incoming UW students. We use the University of Washington Student Database to study both the performance of Pre-MAP students and the overall UW student body over the past 8 years. We show that Pre-MAP students are generally more diverse than the overall UW population and also enter with a variety of different math backgrounds, which we show to be an important factor in STEM performance for the overall UW population. We find that Pre-MAP students are both more academically successful and more likely to graduate in STEM fields than their UW peers, regardless of initial math placement.

  20. High resolution physical mapping of single gene fragments on pachytene chromosome 4 and 7 of Rosa.

    PubMed

    Kirov, Ilya V; Van Laere, Katrijn; Khrustaleva, Ludmila I

    2015-07-02

    Rosaceae is a family containing many economically important fruit and ornamental species. Although fluorescence in situ hybridization (FISH)-based physical mapping of plant genomes is a valuable tool for map-based cloning, comparative genomics and evolutionary studies, no studies using high resolution physical mapping have been performed in this family. Previously we proved that physical mapping of single-copy genes as small as 1.1 kb is possible on mitotic metaphase chromosomes of Rosa wichurana using Tyramide-FISH. In this study we aimed to further improve the physical map of Rosa wichurana by applying high resolution FISH to pachytene chromosomes. Using high resolution Tyramide-FISH and multicolor Tyramide-FISH, 7 genes (1.7-3 kb) were successfully mapped on pachytene chromosomes 4 and 7 of Rosa wichurana. Additionally, by using multicolor Tyramide-FISH, three closely located genes were simultaneously visualized on chromosome 7. A detailed map of heterochromatin/euchromatin patterns of chromosomes 4 and 7 was developed with indication of the physical position of these 7 genes. Comparison of the gene order between Rosa wichurana and Fragaria vesca revealed a poor collinearity for chromosome 7, but a perfect collinearity for chromosome 4. High resolution physical mapping of short probes on pachytene chromosomes of Rosa wichurana was successfully performed for the first time. Application of Tyramide-FISH on pachytene chromosomes allowed the mapping resolution to be increased up to 20 times compared to mitotic metaphase chromosomes. High resolution Tyramide-FISH and multicolor Tyramide-FISH might become useful tools for further physical mapping of single-copy genes and for the integration of physical and genetic maps of Rosa wichurana and other members of the Rosaceae.

  1. Low Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joe

    2001-01-01

    Yellowstone National Park (YNP) contains a diversity of land cover. YNP managers need site-specific land cover maps, which may be produced more effectively using high-resolution hyperspectral imagery. ISODATA clustering techniques have aided operational multispectral image classification and may benefit certain hyperspectral data applications if optimally applied. In response, a study was performed for an area in northeast YNP using 11 select bands of low-altitude AVIRIS data calibrated to ground reflectance. These data were subjected to ISODATA clustering and Maximum Likelihood Classification techniques to produce a moderately detailed land cover map. The latter has good apparent overall agreement with field surveys and aerial photo interpretation.
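
    As a rough illustration of this kind of unsupervised workflow, the sketch below clusters a calibrated band stack and returns a cluster map for an analyst to label. scikit-learn's KMeans is used here as a stand-in for ISODATA (which additionally splits and merges clusters), and all names and parameter values are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def unsupervised_cluster_map(cube, n_clusters=20, seed=0):
            # cube: (rows, cols, bands) reflectance stack, e.g. 11 selected AVIRIS bands
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands)
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
            # each spectral cluster is later assigned to a land cover class by an analyst
            return km.labels_.reshape(rows, cols)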

  2. Reconstruction of Thermographic Signals to Map Perforator Vessels in Humans

    PubMed Central

    Liu, Wei-Min; Maivelett, Jordan; Kato, Gregory J.; Taylor, James G.; Yang, Wen-Chin; Liu, Yun-Chung; Yang, You-Gang; Gorbach, Alexander M.

    2013-01-01

    Thermal representations on the surface of a human forearm of underlying perforator vessels have previously been mapped via recovery-enhanced infrared imaging, which is performed as skin blood flow recovers to baseline levels following cooling of the forearm. We noted that the same vessels could also be observed during reactive hyperaemia tests after complete 5-min occlusion of the forearm by an inflatable cuff. However, not all subjects showed vessels with acceptable contrast. Therefore, we applied a thermographic signal reconstruction algorithm to reactive hyperaemia testing, which substantially enhanced signal-to-noise ratios between perforator vessels and their surroundings, thereby enabling their mapping with higher accuracy and a shorter occlusion period. PMID:23667389
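
    The usual thermographic signal reconstruction step is a low-order polynomial fit of log(signal) versus log(time) for every pixel, which suppresses noise while preserving the decay shape. A minimal per-pixel version is sketched below; the array shapes, the clipping, and the polynomial degree are assumptions rather than the authors' exact settings.

        import numpy as np

        def tsr_reconstruct(frames, t, degree=5):
            # frames: (n_t, rows, cols) thermal image series; t: (n_t,) acquisition times
            lt = np.log(t)
            ls = np.log(np.clip(frames.reshape(len(t), -1), 1e-9, None))
            V = np.vander(lt, degree + 1)              # polynomial basis in log-time
            coefs, *_ = np.linalg.lstsq(V, ls, rcond=None)
            recon = np.exp(V @ coefs)                  # smoothed, noise-reduced series
            return recon.reshape(frames.shape)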

  3. Simultaneous orientation and thickness mapping in transmission electron microscopy

    DOE PAGES

    Tyutyunnikov, Dmitry; Özdöl, V. Burak; Koch, Christoph T.

    2014-12-04

    In this paper we introduce an approach for simultaneous thickness and orientation mapping of crystalline samples by means of transmission electron microscopy. We show that local thickness and orientation values can be extracted from experimental dark-field (DF) image data acquired at different specimen tilts. The method has been implemented to automatically acquire the necessary data and then map thickness and crystal orientation for a given region of interest. We have applied this technique to a specimen prepared from a commercial semiconductor device, containing multiple 22 nm technology transistor structures. The performance and limitations of our method are discussed and compared to those of other techniques available.

  4. Reply to the comment by B. Ghobadipour and B. Mojarradi "M. Abedi, S.A. Torabi, G.-H. Norouzi and M. Hamzeh; ELECTRE III: A knowledge-driven method for integration of geophysical data with geological and geochemical data in mineral prospectivity mapping"

    NASA Astrophysics Data System (ADS)

    Abedi, Maysam

    2015-06-01

    This reply discusses the results of two previously developed approaches in mineral prospectivity/potential mapping (MPM), i.e., ELECTRE III and PROMETHEE II, both well-known methods for multi-criteria decision-making (MCDM) problems. Various geo-data sets are integrated to prepare the MPM, and the generated maps match acceptably with the drilled boreholes. The two applied methods perform equally well in the studied case. Complementary information on these methods is also provided to help interested readers implement them in the MPM process.

  5. 3-D ultrafast Doppler imaging applied to the noninvasive mapping of blood vessels in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Demene, Charlie; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2015-08-01

    Ultrafast Doppler imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D ultrafast ultrasound imaging, a technique that can produce thousands of ultrasound volumes per second, based on 3-D plane and diverging wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that noninvasive 3-D ultrafast power Doppler, pulsed Doppler, and color Doppler imaging can be used to perform imaging of blood vessels in humans when using coherent compounding of 3-D tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D ultrafast imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, Tours, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. The proof of principle of 3-D ultrafast power Doppler imaging was first performed by imaging Tygon tubes of various diameters, and in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D color and pulsed Doppler imaging using compounded emissions was also applied in the carotid artery and the jugular vein in one healthy volunteer.
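
    Voxel-wise power Doppler processing reduces, in essence, to clutter filtering followed by integration of the remaining signal power. The sketch below uses an SVD clutter filter on the Casorati matrix, which is a common choice in ultrafast Doppler but an assumption here — the study may have used a different wall filter — and the array layout is likewise assumed.

        import numpy as np

        def power_doppler(iq, n_clutter=2):
            # iq: complex (n_frames, z, y, x) beamformed, coherently compounded volumes
            n = iq.shape[0]
            casorati = iq.reshape(n, -1)
            u, s, vh = np.linalg.svd(casorati, full_matrices=False)
            s[:n_clutter] = 0.0                        # drop the largest (tissue) components
            blood = (u * s) @ vh
            power = (np.abs(blood) ** 2).mean(axis=0)  # temporal mean of Doppler power
            return power.reshape(iq.shape[1:])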

  6. Temperature Mapping of Air Film-Cooled Thermal Barrier Coated Surfaces Using Phosphor Thermometry

    NASA Technical Reports Server (NTRS)

    Eldridge, Jeffrey I.

    2016-01-01

    While the effects of thermal barrier coating (TBC) thermal protection and air film cooling effectiveness for jet engine components are usually studied separately, their contributions to combined cooling effectiveness are interdependent and are not simply additive. Therefore, combined cooling effectiveness must be measured to achieve an optimum balance between TBC thermal protection and air film cooling. Phosphor thermometry offers several advantages for mapping temperatures of air film cooled surfaces. While infrared thermography has been typically applied to study air film cooling effectiveness, temperature accuracy depends on knowing surface emissivity (which may change) and correcting for effects of reflected radiation. Because decay time-based full-field phosphor thermometry is relatively immune to these effects, it can be applied advantageously to temperature mapping of air film-cooled TBC-coated surfaces. In this presentation, an overview will be given of efforts at NASA Glenn Research Center to perform temperature mapping of air film-cooled TBC-coated surfaces in a burner rig test environment. The effects of thermal background radiation and flame chemiluminescence on the measurements are investigated, and the strengths and limitations of this method for studying air film cooling effectiveness are discussed.
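
    Decay time-based phosphor thermometry maps temperature by fitting the luminescence decay per pixel and inverting a calibration curve tau(T). A minimal sketch under a single-exponential assumption follows; the log-linear fit and the interpolation of a monotonic calibration table are illustrative assumptions, not the NASA Glenn implementation.

        import numpy as np

        def decay_time_map(frames, t):
            # frames: (n_t, rows, cols) luminescence after pulsed excitation; t: (n_t,) s
            logI = np.log(np.clip(frames.reshape(len(t), -1), 1e-12, None))
            A = np.vstack([t, np.ones_like(t)]).T     # I(t) = I0 * exp(-t/tau), log-linear
            coef, *_ = np.linalg.lstsq(A, logI, rcond=None)
            tau = -1.0 / coef[0]
            return tau.reshape(frames.shape[1:])

        def temperature_from_tau(tau_map, tau_cal, T_cal):
            # tau_cal is assumed to decrease with temperature, so reverse for np.interp
            return np.interp(tau_map, tau_cal[::-1], T_cal[::-1])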

  7. A componential model of human interaction with graphs: 1. Linear regression modeling

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the values of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications of the MA-P model, alternative models, and design implications from the MA-P model.
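
    The two model versions amount to ordinary least squares with one or two step-count predictors. The sketch below uses made-up step counts and response times purely to show the fitting, since the article's data are not reproduced here.

        import numpy as np

        # hypothetical step counts per task and observed response times (ms)
        n_arith = np.array([0, 1, 2, 1, 3, 2], dtype=float)
        n_other = np.array([4, 3, 5, 6, 4, 5], dtype=float)
        rt_ms = np.array([2100.0, 2300.0, 3100.0, 3000.0, 3900.0, 3600.0])

        X1 = (n_arith + n_other)[:, None]         # one parameter: equal step weights
        X2 = np.column_stack([n_arith, n_other])  # two parameters: separate weights
        w1, *_ = np.linalg.lstsq(X1, rt_ms, rcond=None)
        w2, *_ = np.linalg.lstsq(X2, rt_ms, rcond=None)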

  8. Maps of averaged spectral deviations from soil lines and their comparison with traditional soil maps

    NASA Astrophysics Data System (ADS)

    Rukhovich, D. I.; Rukhovich, A. D.; Rukhovich, D. D.; Simakova, M. S.; Kulyanitsa, A. L.; Bryzzhev, A. V.; Koroleva, P. V.

    2016-07-01

    The analysis of 34 cloudless fragments of Landsat 5, 7, and 8 images (1985-2014) covering the Plavsk, Arsen'evsk, and Chern districts of Tula oblast has been performed. It is shown that the bare soil surface on the RED-NIR plots derived from the images cannot be described as a sector of the spectral plane, as can be done for NDVI values. The notion of the spectral neighborhood of the soil line (SNSL) is suggested. It is defined as the set of points of the RED-NIR spectral space that have the spectral characteristics of the bare soil used for constructing soil lines. A way of separating the SNSL along the line of the lowest concentration density of points in the RED-NIR spectral space is suggested. This line separates the bare soil surface from vegetating plants. The SNSL has been applied to construct a soil line (SL) for each of the 34 images and to delineate the bare soil surface on them. Distances from the points with averaged RED-NIR coordinates to the SL have been calculated using a moving-window method. These distances can be referred to as averaged spectral deviations (ASDs). The calculations have been performed strictly for the SNSL areas. As a result, 34 maps of ASDs have been created. These maps contain ASD values for 6036 points of a grid used in the study. Then, the integral map of normalized ASD values has been built with due account for the number of points participating in the calculation (i.e., lying in the SNSL) within the moving window. The integral map of ASD values has been compared with four traditional soil maps of the studied territory. It is shown that this integral map can be interpreted in terms of soil taxa: the areas of seven soil subtypes (soddy moderately podzolic, soddy slightly podzolic, light gray forest, gray forest, dark gray forest, podzolized chernozems, and leached chernozems) belonging to three soil types (soddy-podzolic, gray forest, and chernozemic soils) can be delineated on it.
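
    The core computation — fit a soil line in RED-NIR space and measure each pixel's deviation from it — can be sketched as below; the least-squares line fit and the signed perpendicular distance are assumptions about the exact formulation used.

        import numpy as np

        def soil_line_deviation(red, nir):
            # red, nir: 1D arrays of bare-soil pixel reflectances (the SNSL subset)
            a, b = np.polyfit(red, nir, 1)        # soil line: NIR = a * RED + b
            # signed perpendicular distance of each pixel from the soil line
            return (nir - (a * red + b)) / np.sqrt(1.0 + a ** 2)

    Averaging these deviations in a moving window (for instance with scipy.ndimage.uniform_filter) would then yield per-image ASD maps of the kind described above.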

  9. Performance map of a cluster detection test using extended power

    PubMed Central

    2013-01-01

    Background Conventional power studies possess limited ability to assess the performance of cluster detection tests. In particular, they cannot evaluate the accuracy of the cluster location, which is essential in such assessments. Furthermore, they usually estimate power for one or a few particular alternative hypotheses and thus cannot assess performance over an entire region. Takahashi and Tango developed the concept of extended power that indicates both the rate of null hypothesis rejection and the accuracy of the cluster location. We propose a systematic assessment method, using here extended power, to produce a map showing the performance of cluster detection tests over an entire region. Methods To explore the behavior of a cluster detection test on identical cluster types at any possible location, we successively applied four different spatial and epidemiological parameters. These parameters determined four cluster collections, each covering the entire study region. We simulated 1,000 datasets for each cluster and analyzed them with Kulldorff’s spatial scan statistic. From the area under the extended power curve, we constructed a map for each parameter set showing the performance of the test across the entire region. Results Consistent with previous studies, the performance of the spatial scan statistic increased with the baseline incidence of disease, the size of the at-risk population and the strength of the cluster (i.e., the relative risk). Performance was heterogeneous, however, even for very similar clusters (i.e., similar with respect to the aforementioned factors), suggesting the influence of other factors. Conclusions The area under the extended power curve is a single measure of performance and, although needing further exploration, it is suitable to conduct a systematic spatial evaluation of performance. The performance map we propose enables epidemiologists to assess cluster detection tests across an entire study region. PMID:24156765

  10. An Improved Map-Matching Technique Based on the Fréchet Distance Approach for Pedestrian Navigation Services

    PubMed Central

    Bang, Yoonsik; Kim, Jiyoung; Yu, Kiyun

    2016-01-01

    Wearable and smartphone technology innovations have propelled the growth of Pedestrian Navigation Services (PNS). PNS need a map-matching process to project a user’s locations onto maps. Many map-matching techniques have been developed for vehicle navigation services. These techniques are inappropriate for PNS because pedestrians move, stop, and turn in different ways compared to vehicles. In addition, the base map data for pedestrians are more complicated than for vehicles. This article proposes a new map-matching method for locating Global Positioning System (GPS) trajectories of pedestrians onto road network datasets. The theory underlying this approach is based on the Fréchet distance, one of the measures of geometric similarity between two curves. The Fréchet distance approach can provide reasonable matching results because two linear trajectories are parameterized with the time variable. Then we improved the method to be adaptive to the positional error of the GPS signal. We used an adaptation coefficient to adjust the search range for every input signal, based on the assumption of auto-correlation between consecutive GPS points. To reduce errors in matching, the reliability index was evaluated in real time for each match. To test the proposed map-matching method, we applied it to GPS trajectories of pedestrians and the road network data. We then assessed the performance by comparing the results with reference datasets. Our proposed method performed better with test data when compared to a conventional map-matching technique for vehicles. PMID:27782091
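
    For reference, the discrete Fréchet distance between two polylines can be computed with the classic Eiter-Mannila dynamic program; the sketch below is that generic version, not the adaptive, error-aware variant the article proposes.

        import numpy as np

        def discrete_frechet(P, Q):
            # P: (n, 2), Q: (m, 2) vertex arrays (e.g. GPS trace vs. road segment)
            P, Q = np.asarray(P, float), np.asarray(Q, float)
            n, m = len(P), len(Q)
            d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
            ca = np.empty((n, m))                 # ca[i, j]: coupling distance so far
            ca[0, 0] = d[0, 0]
            for j in range(1, m):
                ca[0, j] = max(ca[0, j - 1], d[0, j])
            for i in range(1, n):
                ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
                for j in range(1, m):
                    ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                                   d[i, j])
            return ca[-1, -1]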

  11. Mapping the Archives 1

    ERIC Educational Resources Information Center

    Jackson, Anthony

    2012-01-01

    With this issue, "RiDE" begins a new occasional series of short informational pieces on archives in the field of drama and theatre education and applied theatre and performance. Each instalment will include summaries of several collections of significant material in the field. Over time this will build into a readily accessible annotated directory…

  12. Mapping the Archives: 2

    ERIC Educational Resources Information Center

    Jackson, Anthony

    2012-01-01

    With this issue, "RiDE" continues its new occasional series of short informational pieces on archives in the field of drama and theatre education and applied theatre and performance. Each instalment includes summaries of one or more collections of significant material in the field. Over time this will build into a readily accessible…

  13. Mapping the Archives: 3

    ERIC Educational Resources Information Center

    Jackson, Anthony

    2013-01-01

    With this issue, "Research in Drama Education" (RiDE) continues its occasional series of short informational pieces on archives in the field of drama and theatre education and applied theatre and performance. Each instalment includes summaries of one or more collections of significant material in the field. Over time, this will build in…

  14. Progress in landslide susceptibility mapping over Europe using Tier-based approaches

    NASA Astrophysics Data System (ADS)

    Günther, Andreas; Hervás, Javier; Reichenbach, Paola; Malet, Jean-Philippe

    2010-05-01

    The European Thematic Strategy for Soil Protection aims, among other objectives, to ensure a sustainable use of soil. The legal instrument of the strategy, the proposed Framework Directive, suggests identifying priority areas of several soil threats including landslides using a coherent and compatible approach based on the use of common thematic data. In a first stage, this can be achieved through landslide susceptibility mapping using geographically nested, multi-step tiered approaches, where areas identified as of high susceptibility by a first, synoptic-scale Tier ("Tier 1") can then be further assessed and mapped at larger scale by successive Tiers. In order to identify areas prone to landslides at European scale ("Tier 1"), a number of thematic terrain and environmental data sets already available for the whole of Europe can be used as input for a continental scale susceptibility model. However, since no coherent landslide inventory data are available at the moment over the whole continent, qualitative heuristic zonation approaches are proposed. For "Tier 1" a preliminary, simplified model has been developed. It consists of an equally weighted combination of a reduced, continent-wide common dataset of landslide conditioning factors including soil parent material, slope angle and land cover, to derive a landslide susceptibility index using raster mapping units consisting of 1 x 1 km pixels. A preliminary European-wide susceptibility map has thus been produced at 1:1 Million scale, since this is compatible with that of the datasets used. The map has been validated by means of a ratio of effectiveness using samples from landslide inventories in Italy, Austria, Hungary, and the United Kingdom. Although not differentiated for specific geomorphological environments or specific landslide types, the experimental model reveals a relatively good performance in many European regions at a 1:1 Million scale. An additional "Tier 1" susceptibility map at the same scale and using the same or equivalent thematic data as for the one above has been generated for six French departments using a heuristic, weighting-based multi-criteria evaluation model applied also to raster-cell mapping units. In this experiment, thematic data class weights have been differentiated for two stratification areas, namely mountains and plains, and four main landslide types. Separate susceptibility maps for each landslide type and a combined map for all types have been produced. Results have been validated using BRGM's BDMvT landslide inventory. Unlike "Tier 1", "Tier 2" assessment requires landslide inventory data and additional thematic data on conditioning factors which may not be available for all European countries. For "Tier 2", a nation-wide quantitative landslide susceptibility assessment has been performed for Italy by applying a statistical model. In this assessment, multivariate analysis was applied using bedrock, soil and climate data together with a number of derivatives from the SRTM90 DEM. In addition, separate datasets from a historical landslide inventory were used for model training and validation respectively. The mapping units selected were based on administrative boundaries (municipalities). The performance of this nation-wide, quantitative susceptibility assessment has been evaluated using multi-temporal landslide inventory data.
Finally, model limitations for "Tier 1" are discussed, and recommendations for enhanced Tier 1 and Tier 2 models including additional thematic data for conditioning factors are drawn. This project is part of the collaborative research carried out within the European Landslide Expert Group coordinated by JRC in support to the EU Soil Thematic Strategy. It is also supported by the International Programme on Landslides of the International Consortium on Landslides.
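
    A "Tier 1"-style heuristic index reduces to a weighted overlay of reclassified raster factors. The sketch below shows that operation with the equal weights described above; the class scoring of each factor is left as an assumption.

        import numpy as np

        def susceptibility_index(slope_score, landcover_score, parent_material_score,
                                 weights=(1 / 3, 1 / 3, 1 / 3)):
            # each *_score: 2D raster of heuristic class scores on a common 1 x 1 km grid
            layers = (slope_score, landcover_score, parent_material_score)
            return sum(w * lyr for w, lyr in zip(weights, layers))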

  15. Automatic metro map layout using multicriteria optimization.

    PubMed

    Stott, Jonathan; Rodgers, Peter; Martínez-Ovando, Juan Carlos; Walker, Stephen G

    2011-01-01

    This paper describes an automatic mechanism for drawing metro maps. We apply multicriteria optimization to find effective placement of stations with a good line layout and to label the map unambiguously. A number of metrics are defined, which are used in a weighted sum to find a fitness value for a layout of the map. A hill climbing optimizer is used to reduce the fitness value and find improved map layouts. To avoid local minima, we apply clustering techniques to the map: the hill climber moves both stations and clusters when finding improved layouts. We show the method applied to a number of metro maps, and describe an empirical study that provides some quantitative evidence that automatically-drawn metro maps can help users to find routes more efficiently than either published maps or undistorted maps. Moreover, we have found that, in these cases, study subjects indicate a preference for automatically-drawn maps over the alternatives. © 2011 IEEE. Published by the IEEE Computer Society.
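
    The optimization loop itself is a plain hill climber over a weighted-sum fitness. A generic sketch is below; the metric functions, the move generator, and the accept-if-better rule are schematic assumptions rather than the paper's exact operators.

        def hill_climb(layout, metrics, weights, propose_move, iters=10000):
            # metrics: list of functions layout -> float; fitness is their weighted sum
            def fitness(l):
                return sum(w * m(l) for w, m in zip(weights, metrics))

            best = fitness(layout)
            for _ in range(iters):
                apply_move, undo_move = propose_move(layout)  # e.g. nudge a station/cluster
                apply_move()
                f = fitness(layout)
                if f < best:
                    best = f                  # keep improving moves
                else:
                    undo_move()               # revert worsening moves
            return layout, best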

  16. MetaMap: An atlas of metatranscriptomic reads in human disease-related RNA-seq data.

    PubMed

    Simon, L M; Karg, S; Westermann, A J; Engel, M; Elbehery, A H A; Hense, B; Heinig, M; Deng, L; Theis, F J

    2018-06-12

    With the advent of the age of big data in bioinformatics, large volumes of data and high performance computing power enable researchers to perform re-analyses of publicly available datasets at an unprecedented scale. Ever more studies imply the microbiome in both normal human physiology and a wide range of diseases. RNA sequencing technology (RNA-seq) is commonly used to infer global eukaryotic gene expression patterns under defined conditions, including human disease-related contexts, but its generic nature also enables the detection of microbial and viral transcripts. We developed a bioinformatic pipeline to screen existing human RNA-seq datasets for the presence of microbial and viral reads by re-inspecting the non-human-mapping read fraction. We validated this approach by recapitulating outcomes from 6 independent controlled infection experiments of cell line models and comparison with an alternative metatranscriptomic mapping strategy. We then applied the pipeline to close to 150 terabytes of publicly available raw RNA-seq data from >17,000 samples from >400 studies relevant to human disease using state-of-the-art high performance computing systems. The resulting data of this large-scale re-analysis are made available in the presented MetaMap resource. Our results demonstrate that common human RNA-seq data, including those archived in public repositories, might contain valuable information to correlate microbial and viral detection patterns with diverse diseases. The presented MetaMap database thus provides a rich resource for hypothesis generation towards the role of the microbiome in human disease. Additionally, codes to process new datasets and perform statistical analyses are made available at https://github.com/theislab/MetaMap.

  17. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the

  18. Landslide susceptibility mapping & prediction using Support Vector Machine for Mandakini River Basin, Garhwal Himalaya, India

    NASA Astrophysics Data System (ADS)

    Kumar, Deepak; Thakur, Manoj; Dubey, Chandra S.; Shukla, Dericks P.

    2017-10-01

    In recent years, various machine learning techniques have been applied for landslide susceptibility mapping. In this study, three different variants of the support vector machine, viz. SVM, Proximal Support Vector Machine (PSVM) and L2-Support Vector Machine - Modified Finite Newton (L2-SVM-MFN), have been applied to the Mandakini River Basin in Uttarakhand, India to carry out landslide susceptibility mapping. Eight thematic layers, such as elevation, slope, aspect, drainages, geology/lithology, buffer of thrusts/faults, buffer of streams and soil, along with the past landslide data, were mapped in a GIS environment and used for landslide susceptibility mapping in MATLAB. The study area covers 1625 km2, of which merely 0.11% is under landslides. Of the 2009 past-landslide pixels, 50% (1000) were used as the training set and the remaining 50% as the testing set. The performance of these techniques has been evaluated, and the computational results show that L2-SVM-MFN obtains a higher area under the receiver operating characteristic curve (AUC = 0.829) compared with 0.807 for the PSVM model and 0.79 for SVM. The results obtained from the L2-SVM-MFN model are thus superior to the other SVM prediction models and suggest the usefulness of this technique for landslide susceptibility mapping where training data are scarce. These techniques can be used for satisfactory determination of susceptible zones with these inputs.
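
    The evaluation pipeline is standard: train a classifier on the thematic-layer values of the training pixels and score the test pixels by AUC. The sketch below uses scikit-learn's generic RBF SVC as a stand-in for the three SVM variants compared in the study.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics import roc_auc_score

        def fit_and_score(X_train, y_train, X_test, y_test):
            # X_*: (n_pixels, 8) values of the eight thematic layers; y_*: 1 = landslide
            clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
            scores = clf.predict_proba(X_test)[:, 1]
            return roc_auc_score(y_test, scores)  # compare against the reported AUCs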

  19. Left Frontal Meningioangiomatosis Associated with Type IIIc Focal Cortical Dysplasia Causing Refractory Epilepsy and Literature Review.

    PubMed

    Roux, Alexandre; Mellerio, Charles; Lechapt-Zalcman, Emmanuelle; Still, Megan; Zerah, Michel; Bourgeois, Marie; Pallud, Johan

    2018-06-01

    We report the surgical management of a lesional drug-resistant epilepsy caused by a meningioangiomatosis associated with a type IIIc focal cortical dysplasia located in the left supplementary motor area in a young male patient. A first anatomically based partial surgical resection was performed on an 11-year-old under general anesthesia without intraoperative mapping, which allowed for postoperative seizure control (Engel IA) for 6 years. The patient then exhibited intractable right sensory and aphasic focal onset seizures despite 2 appropriate antiepileptic drugs. A second functional-based surgical resection was performed using intraoperative corticosubcortical functional mapping with direct electrical stimulation under awake conditions. A complete surgical resection was performed, and a left partial supplementary motor area syndrome was observed. At 6 months postoperatively, the patient is seizure free (Engel IA) with an ongoing decrease in antiepileptic drug therapy. Intraoperative functional brain mapping can be applied to preserve the brain function and networks around a meningioangiomatosis to facilitate the resection of potentially epileptogenic perilesional dysplastic cortex and to tailor the extent of resection to functional boundaries. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. A case study in programming a quantum annealer for hard operational planning problems

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor G.; Venturelli, Davide; O'Gorman, Bryan; Do, Minh B.; Prystay, Elicia M.; Smelyanskiy, Vadim N.

    2015-01-01

    We report on a case study in programming an early quantum annealer to attack optimization problems related to operational planning. While a number of studies have looked at the performance of quantum annealers on problems native to their architecture, and others have examined performance of select problems stemming from an application area, ours is one of the first studies of a quantum annealer's performance on parametrized families of hard problems from a practical domain. We explore two different general mappings of planning problems to quadratic unconstrained binary optimization (QUBO) problems, and apply them to two parametrized families of planning problems, navigation-type and scheduling-type. We also examine two more compact, but problem-type specific, mappings to QUBO, one for the navigation-type planning problems and one for the scheduling-type planning problems. We study embedding properties and parameter setting and examine their effect on the efficiency with which the quantum annealer solves these problems. From these results, we derive insights useful for the programming and design of future quantum annealers: problem choice, the mapping used, the properties of the embedding, and the annealing profile all matter, each significantly affecting the performance.
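
    A concrete flavor of such mappings: planning constraints like "exactly one action is chosen per time step" become quadratic penalty blocks in the QUBO matrix. The sketch below builds that block under the usual upper-triangular QUBO convention; it illustrates the generic construction, not the paper's specific planning encodings.

        import numpy as np

        def one_hot_qubo(n, penalty=2.0):
            # encodes penalty * (sum_i x_i - 1)^2 over binary x; since x_i^2 = x_i this
            # expands to -P * x_i on the diagonal and +2P * x_i * x_j for i < j
            Q = np.triu(np.full((n, n), 2.0 * penalty), k=1)
            np.fill_diagonal(Q, -penalty)
            return Q  # energy x^T Q x is minimized (at -P) exactly when one x_i = 1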

  1. Manifestation of a neuro-fuzzy model to produce landslide susceptibility map using remote sensing data derived parameters

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Lee, Saro; Buchroithner, Manfred

    Landslides are the most common natural hazards in Malaysia. Preparation of landslide susceptibility maps is important for engineering geologists and geomorphologists. However, due to the complex nature of landslides, producing a reliable susceptibility map is not easy. In this study, a new attempt is made to produce a landslide susceptibility map for a part of the Cameron Valley of Malaysia. This paper develops an adaptive neuro-fuzzy inference system (ANFIS) based on a geographic information system (GIS) environment for landslide susceptibility mapping. To obtain the neuro-fuzzy relations for producing the landslide susceptibility map, landslide locations were identified from interpretation of aerial photographs and high resolution satellite images, field surveys and historical inventory reports. Landslide conditioning factors such as slope, plan curvature, distance to drainage lines, soil texture, lithology, and distance to lineament were extracted from topographic, soil, and lineament maps. Landslide susceptible areas were analyzed by the ANFIS model and mapped using the conditioning factors. Furthermore, we applied various membership functions (MFs) and fuzzy relations to produce landslide susceptibility maps. The prediction performance of the susceptibility map is checked by considering actual landslides in the study area. Results show that triangular, trapezoidal, and polynomial MFs were the best individual MFs for modelling landslide susceptibility maps (86

  2. DyKOSMap: A framework for mapping adaptation between biomedical knowledge organization systems.

    PubMed

    Dos Reis, Julio Cesar; Pruski, Cédric; Da Silveira, Marcos; Reynaud-Delaître, Chantal

    2015-06-01

    Knowledge Organization Systems (KOS) and their associated mappings play a central role in several decision support systems. However, by virtue of knowledge evolution, KOS entities are modified over time, impacting mappings and potentially turning them invalid. This requires semi-automatic methods to maintain such semantic correspondences up-to-date at KOS evolution time. We define a complete and original framework based on formal heuristics that drives the adaptation of KOS mappings. Our approach takes into account the definition of established mappings, the evolution of KOS and the possible changes that can be applied to mappings. This study experimentally evaluates the proposed heuristics and the entire framework on realistic case studies borrowed from the biomedical domain, using official mappings between several biomedical KOSs. We demonstrate the overall performance of the approach over biomedical datasets of different characteristics and sizes. Our findings reveal the effectiveness in terms of precision, recall and F-measure of the suggested heuristics and methods defining the framework to adapt mappings affected by KOS evolution. The obtained results contribute and improve the quality of mappings over time. The proposed framework can adapt mappings largely automatically, facilitating thus the maintenance task. The implemented algorithms and tools support and minimize the work of users in charge of KOS mapping maintenance. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.

    PubMed

    Dick, Bernhard

    2014-01-14

    A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.

  4. Investigation of Inversion Polymorphisms in the Human Genome Using Principal Components Analysis

    PubMed Central

    Ma, Jianzhong; Amos, Christopher I.

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct “populations” of inversion homozygotes of different orientations and their 1∶1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases. PMID:22808122
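
    The detection step is literally a PCA restricted to the candidate region: inversion polymorphism shows up as three clusters along the first principal component (the two homozygous orientations at the extremes, heterozygotes in between). A minimal sketch, assuming a 0/1/2 genotype matrix:

        import numpy as np
        from sklearn.decomposition import PCA

        def local_pca(genotypes):
            # genotypes: (n_samples, n_snps) minor-allele counts within the region
            G = genotypes - genotypes.mean(axis=0)   # center each SNP
            pcs = PCA(n_components=2).fit_transform(G)
            # inspect pcs[:, 0] for three clusters indicating an inversion polymorphism
            return pcs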

  5. Evaluating Continuous-Time Slam Using a Predefined Trajectory Provided by a Robotic Arm

    NASA Astrophysics Data System (ADS)

    Koch, B.; Leblebici, R.; Martell, A.; Jörissen, S.; Schilling, K.; Nüchter, A.

    2017-09-01

    Recently published approaches to SLAM algorithms process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that allows comparing a precise map generated with an accurate ground truth trajectory to a map with a manipulated trajectory which was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved on a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud is reversible to its previous state and is slightly improved compared to the original version, since small errors that came into account by imprecise assumptions, sensor noise and calibration errors are removed as well.

  6. Application of Modern Design of Experiments to CARS Thermometry in a Model Scramjet Engine

    NASA Technical Reports Server (NTRS)

    Danehy, P. M.; DeLoach, R.; Cutler, A. D.

    2002-01-01

    We have applied formal experiment design and analysis to optimize the measurement of temperature in a supersonic combustor at NASA Langley Research Center. We used the coherent anti-Stokes Raman spectroscopy (CARS) technique to map the temperature distribution in the flowfield downstream of an 1160 K, Mach 2 freestream into which supersonic hydrogen fuel is injected at an angle of 30 degrees. CARS thermometry is inherently a single-point measurement technique; it was used to map the flow by translating the measurement volume through the flowfield. The method known as "Modern Design of Experiments" (MDOE) was used to estimate the data volume required, design the test matrix, perform the experiment and analyze the resulting data. MDOE allowed us to match the volume of data acquired to the precision requirements of the customer. Furthermore, one aspect of MDOE, known as response surface methodology, allowed us to develop precise maps of the flowfield temperature, allowing interpolation between measurement points. An analytic function in two spatial variables was fit to the data from a single measurement plane. Fitting with a Cosine Series Bivariate Function allowed the mean temperature to be mapped with 95% confidence interval half-widths of +/- 30 Kelvin, comfortably meeting the confidence of +/- 50 Kelvin specified prior to performing the experiments. We estimate that applying MDOE to the present experiment saved a factor of 5 in data volume acquired, compared to experiments executed in the traditional manner. Furthermore, the precision requirements could have been met with less than half the data acquired.
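
    The response-surface step amounts to linear least squares on a cosine basis. A sketch of such a fit is below; the exact basis the authors used is not reproduced here, so the separable cos(m*pi*x)*cos(n*pi*y) form and the orders are assumptions.

        import numpy as np

        def cosine_series_fit(x, y, temp, nx=3, ny=3):
            # x, y: measurement coordinates scaled to [0, 1]; temp: measured temperatures
            cols = [np.cos(m * np.pi * x) * np.cos(n * np.pi * y)
                    for m in range(nx + 1) for n in range(ny + 1)]
            A = np.stack(cols, axis=1)
            c, *_ = np.linalg.lstsq(A, temp, rcond=None)
            return c, A @ c          # coefficients and fitted (interpolable) surface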

  7. Mapping of the Resistance of a Superconducting Transition Edge Sensor as a Function of Temperature, Current, and Applied Magnetic Field

    NASA Technical Reports Server (NTRS)

    Zhang, Shou; Eckart, Megan E.; Jaeckel, Felix; Kripps, Kari L.; McCammon, Dan; Zhou, Yu; Morgan, Kelsey M.

    2017-01-01

    We have measured the resistance R(T, I, B_ext) of a superconducting transition edge sensor over the entire transition region on a fine scale, producing a four-dimensional map of the resistance surface. The dimensionless temperature and current sensitivities of the TES resistance, α ≡ ∂log R/∂log T|_I and β ≡ ∂log R/∂log I|_T, have been determined at each point. α and β are closely related to the sensor performance, but show a great deal of complex, large amplitude fine structure over large portions of the surface that is sensitive to the applied magnetic field. We discuss the relation of this structure to the presence of Josephson weak link fringes.

  8. Description of a user-oriented geographic information system - The resource analysis program

    NASA Technical Reports Server (NTRS)

    Tilmann, S. E.; Mokma, D. L.

    1980-01-01

    This paper describes the Resource Analysis Program, an applied geographic information system. Several applications are presented which utilized soil, and other natural resource data, to develop integrated maps and data analyses. These applications demonstrate the methods of analysis and the philosophy of approach used in the mapping system. The applications are evaluated in reference to four major needs of a functional mapping system: data capture, data libraries, data analysis, and mapping and data display. These four criteria are then used to describe an effort to develop the next generation of applied mapping systems. This approach uses inexpensive microcomputers for field applications and should prove to be a viable entry point for users heretofore unable or unwilling to venture into applied computer mapping.

  9. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics

    PubMed Central

    Chen, Wenan; Larrabee, Beth R.; Ovsyannikova, Inna G.; Kennedy, Richard B.; Haralambieva, Iana H.; Poland, Gregory A.; Schaid, Daniel J.

    2015-01-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564
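
    At the heart of these marginal-statistics methods is a multivariate-normal likelihood ratio: under the null the z-scores follow N(0, R), with R the LD (correlation) matrix, and under a causal configuration the covariance is inflated to R + R W R. The sketch below computes that log Bayes factor; the prior effect-size variance is a tuning assumption, not CAVIARBF's exact default.

        import numpy as np
        from scipy.stats import multivariate_normal

        def log_bayes_factor(z, R, causal_idx, prior_var=0.1):
            # z: (m,) marginal z-scores; R: (m, m) LD matrix; causal_idx: candidate set
            W = np.zeros_like(R)
            W[causal_idx, causal_idx] = prior_var    # prior variance on causal effects
            log_alt = multivariate_normal.logpdf(z, mean=np.zeros(len(z)),
                                                 cov=R + R @ W @ R)
            log_null = multivariate_normal.logpdf(z, mean=np.zeros(len(z)), cov=R)
            return log_alt - log_null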

  10. Measuring the quantum geometric tensor in two-dimensional photonic and exciton-polariton systems

    NASA Astrophysics Data System (ADS)

    Bleu, O.; Solnyshkov, D. D.; Malpuech, G.

    2018-05-01

    We theoretically propose a method that allows one to measure all the components of the quantum geometric tensor (the metric tensor and the Berry curvature) in a photonic system. The method is based on standard optical measurements. It applies to two-band systems, which can be mapped to a pseudospin, and to four-band systems, which can be described by two entangled pseudospins. We apply this method to several specific cases. We consider a 2D planar cavity with two polarization eigenmodes, where the pseudospin measurement can be performed via polarization-resolved photoluminescence. We also consider the s band of a staggered honeycomb lattice with polarization-degenerate modes (scalar photons), where the sublattice pseudospin can be measured by performing spatially resolved interferometric measurements. We finally consider the s band of a honeycomb lattice with polarized (spinor) photons as an example of a four-band model. We simulate realistic experimental situations in all cases. We find the photon eigenstates by solving the Schrödinger equation including pumping and finite lifetime, and then simulate the measurements to finally extract realistic mappings of the k-dependent tensor components.
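
    For a two-band system mapped to a unit pseudospin n(k), the tensor components follow directly from the measured pseudospin texture: g_ij = (1/4) ∂_i n · ∂_j n for the quantum metric and B = (1/2) n · (∂_kx n × ∂_ky n) for the Berry curvature. A finite-difference sketch of that post-processing (the k-space grid layout is an assumption):

        import numpy as np

        def qgt_from_pseudospin(n_field, dkx, dky):
            # n_field: (Nx, Ny, 3) unit pseudospin measured, e.g., from polarization-
            # resolved photoluminescence on a k-space grid with spacings dkx, dky
            dn_x = np.gradient(n_field, dkx, axis=0)
            dn_y = np.gradient(n_field, dky, axis=1)
            g_xx = 0.25 * np.sum(dn_x * dn_x, axis=-1)   # quantum metric components
            g_yy = 0.25 * np.sum(dn_y * dn_y, axis=-1)
            g_xy = 0.25 * np.sum(dn_x * dn_y, axis=-1)
            berry = 0.5 * np.sum(n_field * np.cross(dn_x, dn_y), axis=-1)
            return g_xx, g_xy, g_yy, berry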

  11. The energy ratio mapping algorithm: a tool to improve the energy-based detection of odontocete echolocation clicks.

    PubMed

    Klinck, Holger; Mellinger, David K

    2011-04-01

    The energy ratio mapping algorithm (ERMA) was developed to improve the performance of energy-based detection of odontocete echolocation clicks, especially for application in environments with limited computational power and energy such as acoustic gliders. ERMA systematically evaluates many frequency bands for energy ratio-based detection of echolocation clicks produced by a target species in the presence of the species mix in a given geographic area. To evaluate the performance of ERMA, a Teager-Kaiser energy operator was applied to the series of energy ratios as derived by ERMA. A noise-adaptive threshold was then applied to the Teager-Kaiser function to identify clicks in data sets. The method was tested for detecting clicks of Blainville's beaked whales while rejecting echolocation clicks of Risso's dolphins and pilot whales. Results showed that the ERMA-based detector correctly identified 81.6% of the beaked whale clicks in an extended evaluation data set. Average false-positive detection rate was 6.3% (3.4% for Risso's dolphins and 2.9% for pilot whales).
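
    The two ingredients — a band energy ratio and the Teager-Kaiser operator applied to its time series — can be sketched as follows. The filter design and the small stabilizing constant are assumptions; the band pairs are exactly what ERMA searches over.

        import numpy as np
        from scipy.signal import butter, sosfilt

        def tk_energy_ratio(x, fs, band_num, band_den, order=4):
            # x: audio samples; band_num / band_den: (lo, hi) Hz, e.g. target vs. reject
            def band_energy(lo, hi):
                sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
                return sosfilt(sos, x) ** 2

            ratio = band_energy(*band_num) / (band_energy(*band_den) + 1e-12)
            # Teager-Kaiser operator: psi[r](n) = r(n)^2 - r(n-1) * r(n+1)
            return ratio[1:-1] ** 2 - ratio[:-2] * ratio[2:]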

  12. Multi-class ERP-based BCI data analysis using a discriminant space self-organizing map.

    PubMed

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    Emotional or non-emotional image stimuli have recently been applied to event-related potential (ERP)-based brain computer interfaces (BCI). Although single-trial classification performance exceeds 80%, discrimination between those ERPs has not been considered. In this research we tried to clarify the discriminability of four-class ERP-based BCI target data elicited by desk, seal, and spider images and letter intensifications. A conventional self-organizing map (SOM) and a newly proposed discriminant space SOM (ds-SOM) were applied, and the discriminabilities were visualized. We also classified all pairs of those ERPs by stepwise linear discriminant analysis (SWLDA) to verify the visualization of discriminabilities. As a result, the ds-SOM showed understandable visualization of the data with a shorter computational time than the traditional SOM. We also confirmed a clear boundary between the letter cluster and the other clusters. The result was coherent with the classification performances by SWLDA. The method might be helpful not only for developing a new BCI paradigm, but also for big data analysis.

  13. Computer aided manufacturing for complex freeform optics

    NASA Astrophysics Data System (ADS)

    Wolfs, Franciscus; Fess, Ed; Johns, Dustin; LePage, Gabriel; Matthews, Greg

    2017-10-01

    Recently, the desire to use freeform optics has been increasing. Freeform optics can be used to expand the capabilities of optical systems and reduce the number of optics needed in an assembly. The traits that increase optical performance also present challenges in manufacturing. As tolerances on freeform optics become more stringent, it is necessary to continue to improve methods for how the grinding and polishing processes interact with metrology. To create these complex shapes, OptiPro has developed a computer aided manufacturing package called PROSurf. PROSurf generates the tool paths required for grinding and polishing freeform optics with multiple axes of motion. It also uses metrology feedback for deterministic corrections. PROSurf handles two key aspects of the manufacturing process that most other CAM systems struggle with. The first is the ability to support several input types (equations, CAD models, point clouds) and still create a uniform high-density surface map usable for generating a smooth tool path. The second is improving the accuracy of mapping a metrology file to the part surface. To do this, OptiPro uses 3D error maps instead of traditional 2D maps. The metrology error map drives the tool path adjustment applied during processing. For grinding, the error map adjusts the tool position to compensate for repeatable system error. For polishing, the error map drives the relative dwell times of the tool across the part surface. This paper will present the challenges associated with these issues and the solutions that we have created.

  14. Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images

    NASA Astrophysics Data System (ADS)

    Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.

    2016-07-01

    In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool which produces high definition, high quality color maps optimized for visual perception of extended features, like bubbles and filaments, against the high background variations. We project the map tiles of three selected bands onto a 3-channel panorama, which spans the central 130 degrees of galactic longitude times 2.8 degrees of galactic latitude, at the pixel scale of 3.2", in cartesian galactic coordinates. Then we process this image piecewise, applying a custom multi-scale local stretching algorithm, enforced by a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform an artifact-free details sharpening. Thanks to this tool, we have thus produced a stunning giga-pixel color image of the far-infrared Galactic Plane that we made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.

  15. Multi-gradient echo MR thermometry for monitoring of the near-field area during MR-guided high intensity focused ultrasound heating

    NASA Astrophysics Data System (ADS)

    Lam, Mie K.; de Greef, Martijn; Bouwman, Job G.; Moonen, Chrit T. W.; Viergever, Max A.; Bartels, Lambertus W.

    2015-10-01

    The multi-gradient echo MR thermometry (MGE MRT) method is proposed for use at the interface of the muscle and fat layers found in the abdominal wall, to monitor MR-HIFU heating. Because MGE MRT uses fat as a reference, it is corrected for field drift. Relative temperature maps were reconstructed by subtracting absolute temperature maps. Because the absolute temperature maps are reconstructed from individual scans, MGE MRT provides the flexibility of interleaved mapping of temperature changes between two arbitrary time points. The method's performance was assessed in an ex vivo water bath experiment. An ex vivo HIFU experiment was performed to show the method's ability to monitor heating of consecutive HIFU sonications and to estimate cooling time constants in the presence of field drift. The interleaved use between scans of a clinical protocol was demonstrated in vivo in a patient during a clinical uterine fibroid treatment. The relative temperature measurements were accurate (mean absolute error 0.3 °C) and provided excellent visualization of the heating of consecutive HIFU sonications. Maps of estimated cooling time constants were reconstructed, and mean ROI values could be well explained by the applied heating pattern. Heating upon HIFU sonication and subsequent cooling could be observed in the in vivo demonstration.

  16. Maximal Aerobic Power in Aging Men: Insights From a Record of 1-Hour Unaccompanied Cycling.

    PubMed

    Capelli, Carlo

    2018-01-01

    To analyze the best 1-h unaccompanied performances of master athletes aged 35-105 y to estimate the decay of maximal aerobic power (MAP) across the spectrum of age. MAP at the various ages was estimated by computing the metabolic power ([Formula: see text]) maintained to cover the distances during best 1-h unaccompanied performances established by master athletes of different classes of age and by assuming that they were able to maintain an [Formula: see text] equal to 88% of their MAP during 1 h of exhaustive exercise. MAP started monotonically decreasing at 47 y of age. Thereafter, it showed an average rate of decrease of ∼14% per decade up to 105 y of age, similar to other classes of master athletes. The results confirm, by extending the analysis to centennial subjects, that MAP seems to start declining from the middle of the 5th decade of age, with an average percentage decay that is faster than that traditionally reported, even when one maintains a very active lifestyle. The proposed approach may be applied to other types of human locomotion for which the relationship between speed and [Formula: see text] is known.

  17. Human Orbitofrontal Cortex Represents a Cognitive Map of State Space.

    PubMed

    Schuck, Nicolas W; Cai, Ming Bo; Wilson, Robert C; Niv, Yael

    2016-09-21

    Although the orbitofrontal cortex (OFC) has been studied intensely for decades, its precise functions have remained elusive. We recently hypothesized that the OFC contains a "cognitive map" of task space in which the current state of the task is represented, and this representation is especially critical for behavior when states are unobservable from sensory input. To test this idea, we apply pattern-classification techniques to neuroimaging data from humans performing a decision-making task with 16 states. We show that unobservable task states can be decoded from activity in OFC, and decoding accuracy is related to task performance and the occurrence of individual behavioral errors. Moreover, similarity between the neural representations of consecutive states correlates with behavioral accuracy in corresponding state transitions. These results support the idea that OFC represents a cognitive map of task space and establish the feasibility of decoding state representations in humans using non-invasive neuroimaging. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  19. Hyperbolic Harmonic Mapping for Surface Registration

    PubMed Central

    Shi, Rui; Zeng, Wei; Su, Zhengyu; Jiang, Jian; Damasio, Hanna; Lu, Zhonglin; Wang, Yalin; Yau, Shing-Tung; Gu, Xianfeng

    2016-01-01

    Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in the biometrics, medical imaging and motion capture industries. Although numerous studies have been devoted to harmonic map research, limited progress has been made on computing a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work addresses the problem by changing the Riemannian metric on the target surface to a hyperbolic metric, so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on Ricci flow and nonlinear heat diffusion methods. The approach is general and robust. We employ our algorithm to study the constrained surface registration problem, which applies to both computer vision and medical imaging applications. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular surface registration evaluation standards. PMID:27187948

  20. Pansharpening in coastal ecosystems using Worldview-2 imagery

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Marcello-Ruiz, Javier; Gonzalo-Martin, Consuelo

    2016-10-01

    Both climate change and anthropogenic pressures are producing a decline in ecosystem natural resources. In this work, a vulnerable coastal ecosystem, the Maspalomas Natural Reserve (Canary Islands, Spain), is analyzed. The development of advanced image processing techniques, applied to new satellites with very high resolution (VHR) sensors, is essential to obtain accurate and systematic information about such natural areas. Thus, remote sensing offers a practical and cost-effective means for good environmental management, although some improvements are needed through the application of pansharpening techniques. A preliminary assessment was performed selecting classical and new algorithms that could achieve good performance with WorldView-2 imagery. Moreover, different quality indices were used in order to assess which pansharpening technique gives the best fused image. A total of 7 pansharpening algorithms were analyzed using 6 spectral and spatial quality indices. The quality assessment was implemented for the whole set of multispectral bands, and separately for the bands covered by the wavelength range of the panchromatic image and those outside of it. After an extensive evaluation, the most suitable algorithm was the Weighted Wavelet 'à trous' through Fractal Dimension Maps technique, which provided the best compromise between spectral and spatial quality for the image. Finally, a Quality Map Analysis was performed in order to study the fusion in each band at the local level. In conclusion, a novel analysis has been conducted covering the evaluation of fusion methods in shallow water areas. Hence, the results provided by this study have been applied to the generation of challenging thematic maps of protected coastal and dune areas.

  1. Map based navigation for autonomous underwater vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuohy, S.T.; Leonard, J.J.; Bellingham, J.G.

    1995-12-31

    In this work, a map based navigation algorithm is developed wherein measured geophysical properties are matched to a priori maps. The objective is a complete algorithm applicable to a small, power-limited AUV which performs in real time to a required resolution with bounded position error. Interval B-Splines are introduced for the non-linear representation of two-dimensional geophysical parameters that have measurement uncertainty. Fine-scale position determination involves the solution of a system of nonlinear polynomial equations with interval coefficients. This system represents the complete set of possible vehicle locations and is formulated as the intersection of contours established on each map from the simultaneous measurement of associated geophysical parameters. A standard filter mechanism, based on a bounded interval error model, predicts the position of the vehicle and, therefore, screens extraneous solutions. When multiple solutions are found, a tracking mechanism is applied until a unique vehicle location is determined.

  2. NON-HOMOGENEOUS POISSON PROCESS MODEL FOR GENETIC CROSSOVER INTERFERENCE.

    PubMed

    Leu, Szu-Yun; Sen, Pranab K

    2014-01-01

    The genetic crossover interference is usually modeled with a stationary renewal process to construct the genetic map. We propose two non-homogeneous, also dependent, Poisson process models applied to the known physical map. The crossover process is assumed to start from an origin and to occur sequentially along the chromosome. The increment rate depends on the position of the markers and the number of crossover events occurring between the origin and the markers. We show how to obtain parameter estimates for the process and use simulation studies and real Drosophila data to examine the performance of the proposed models.
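
    As a concrete illustration of a position-dependent increment rate, a non-homogeneous Poisson process can be simulated by thinning. The sketch below omits the dependence on the number of prior crossovers that the proposed models add, and the rate function is hypothetical.

    ```python
    import numpy as np

    def simulate_nhpp(rate_fn, length, rate_max, rng=np.random.default_rng(0)):
        """Simulate a non-homogeneous Poisson process on [0, length] by
        thinning: draw candidates at the bounding rate rate_max and keep each
        with probability rate_fn(x) / rate_max."""
        n = rng.poisson(rate_max * length)
        candidates = np.sort(rng.uniform(0.0, length, n))
        keep = rng.uniform(0.0, rate_max, n) < rate_fn(candidates)
        return candidates[keep]

    # Hypothetical increment rate rising with physical position (arbitrary units)
    rate = lambda x: 0.01 + 0.0004 * x
    events = simulate_nhpp(rate, length=100.0, rate_max=0.06)
    print(f"{events.size} crossovers at positions {np.round(events, 1)}")
    ```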

  3. Probabilistic, Seismically-Induced Landslide Hazard Mapping of Western Oregon

    NASA Astrophysics Data System (ADS)

    Olsen, M. J.; Sharifi Mood, M.; Gillins, D. T.; Mahalingam, R.

    2015-12-01

    Earthquake-induced landslides can generate significant damage within urban communities by damaging structures, obstructing lifeline connection routes and utilities, generating various environmental impacts, and possibly resulting in loss of life. Reliable hazard and risk maps are important to assist agencies in efficiently allocating and managing limited resources to prepare for such events. This research presents a new methodology for communicating site-specific landslide hazard assessments in a large-scale, regional map. Implementation of the proposed methodology results in seismically induced landslide hazard maps that depict the probabilities of exceeding landslide displacement thresholds (e.g. 0.1, 0.3, 1.0 and 10 meters). These maps integrate a variety of data sources including recent landslide inventories, LIDAR and photogrammetric topographic data, geologic maps, mapped NEHRP site classifications based on available shear wave velocity data in each geologic unit, and USGS probabilistic seismic hazard curves. Soil strength estimates were obtained by evaluating slopes present along landslide scarps and deposits for major geologic units. Code was then developed to integrate these layers and perform a rigid, sliding-block analysis to determine the amount and associated probabilities of displacement for each bin of peak ground acceleration in the seismic hazard curve at each pixel. The methodology was applied to western Oregon, which contains weak, weathered, and often wet soils on steep slopes. Such conditions present a high landslide hazard even without seismic events. A series of landslide hazard maps highlighting the probabilities of exceeding the aforementioned thresholds were generated for the study area. These output maps were then utilized in a performance-based design framework, enabling them to be analyzed in conjunction with other hazards for fully probabilistic hazard evaluation and risk assessment.
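
    The per-pixel computation can be illustrated compactly: each hazard-curve bin contributes its occurrence rate to the exceedance probability whenever the predicted sliding-block displacement exceeds the threshold. The sketch below uses a Jibson-style displacement regression with illustrative coefficients; it is not the study's own model.

    ```python
    import numpy as np

    def p_exceed_displacement(pga_bins_g, annual_rates, ac_g, threshold_m):
        """Annual probability that rigid sliding-block displacement exceeds a
        threshold, combining a binned PGA hazard curve with a displacement model.

        pga_bins_g   -- bin-center peak ground accelerations (g)
        annual_rates -- annual occurrence rate of shaking in each bin
        ac_g         -- critical (yield) acceleration of the slope (g)
        """
        ratio = np.clip(ac_g / pga_bins_g, 1e-6, 1.0 - 1e-9)
        # Illustrative Jibson-style regression for Newmark displacement (cm)
        log10_d_cm = 0.215 + np.log10((1 - ratio) ** 2.341 * ratio ** -1.438)
        d_m = 10.0 ** log10_d_cm / 100.0
        lam = annual_rates[d_m > threshold_m].sum()   # total exceedance rate
        return 1.0 - np.exp(-lam)                     # rate -> annual probability

    pga = np.array([0.1, 0.2, 0.4, 0.8])
    rates = np.array([1e-2, 3e-3, 8e-4, 1e-4])
    print(p_exceed_displacement(pga, rates, ac_g=0.15, threshold_m=0.1))
    ```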

  4. Molecular surface mesh generation by filtering electron density map.

    PubMed

    Giard, Joachim; Macq, Benoît

    2010-01-01

    Bioinformatics applied to macromolecules is now widespread and in continuous expansion. In this context, representing external molecular surfaces such as the Van der Waals Surface or the Solvent Excluded Surface can be useful for several applications. We propose a fast and parameterizable algorithm giving good visual quality meshes representing molecular surfaces. It is obtained by isosurfacing a filtered electron density map. The density map is the result of the maximum of Gaussian functions placed around atom centers. This map is filtered by an ideal low-pass filter applied to the Fourier transform of the density map. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
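
    The pipeline (Gaussian density map, ideal low-pass filter in the Fourier domain, marching cubes) is short enough to sketch end to end. The grid size, Gaussian width and cutoff below are arbitrary illustration values, not the paper's parameters.

    ```python
    import numpy as np
    from skimage import measure

    def molecular_surface_mesh(centers, grid_n=64, box=20.0, sigma=1.0,
                               cutoff=0.25, iso=0.5):
        """Mesh a molecular surface by isosurfacing a low-pass-filtered density map.

        centers -- (n_atoms, 3) atom coordinates inside a cubic box of side `box`
        cutoff  -- ideal low-pass cutoff as a fraction of the Nyquist frequency
        """
        ax = np.linspace(0.0, box, grid_n)
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        # Density map: maximum of Gaussians placed at atom centers
        rho = np.zeros((grid_n,) * 3)
        for c in centers:
            r2 = (X - c[0]) ** 2 + (Y - c[1]) ** 2 + (Z - c[2]) ** 2
            rho = np.maximum(rho, np.exp(-r2 / (2.0 * sigma ** 2)))
        # Ideal low-pass filter applied to the Fourier transform of the map
        f = np.fft.fftfreq(grid_n)
        K = np.sqrt(sum(g ** 2 for g in np.meshgrid(f, f, f, indexing="ij")))
        rho_f = np.fft.ifftn(np.fft.fftn(rho) * (K <= cutoff * 0.5)).real
        # Marching cubes on the filtered map yields the surface mesh
        verts, faces, _, _ = measure.marching_cubes(rho_f, level=iso * rho_f.max())
        return verts * (box / (grid_n - 1)), faces

    verts, faces = molecular_surface_mesh(np.array([[8.0, 10.0, 10.0],
                                                    [12.0, 10.0, 10.0]]))
    print(verts.shape, faces.shape)
    ```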

  5. Comparison of performance and quantitative descriptive analysis sensory profiling and its relationship to consumer liking between the artisanal cheese producers panel and the descriptive trained panel.

    PubMed

    Ramírez-Rivera, Emmanuel de Jesús; Díaz-Rivera, Pablo; Guadalupe Ramón-Canul, Lorena; Juárez-Barrientos, José Manuel; Rodríguez-Miranda, Jesús; Herman-Lara, Erasmo; Prinyawiwatkul, Witoon; Herrera-Corredor, José Andrés

    2018-07-01

    The aim of this research was to compare the performance and sensory profiling of a panel of artisanal cheese producers against a trained panel and their relationship to consumer liking (external preference mapping). Performance was analyzed statistically at an individual level using Fisher's test (F) for discrimination, the mean square error for repeatability, and Manhattan plots for visualizing the intra-panel homogeneity. At group level, performance was evaluated using ANOVA. The external preference mapping technique was applied to determine the efficiency of each sensory profile. Results showed that the producers panel was discriminating and repeatable, with a performance similar to that of the trained panel. Manhattan plots showed that the performance of artisanal cheese producers was more homogeneous than that of trained panelists. The correlation between sensory profiles (Rv = 0.95) demonstrated similarities in the generation and use of sensory profiles. The external preference maps generated individually with the profiles of each panel were also similar. Recruiting individuals familiar with the production of artisanal cheeses as panelists is a viable strategy for sensory characterization of artisanal cheeses within their context of origin, because their results were similar to those from the trained panel and can be correlated with consumer liking data. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Application of a Fully Numerical Guidance to Mars Aerocapture

    NASA Technical Reports Server (NTRS)

    Matz, Daniel A.; Lu, Ping; Mendeck, Gavin F.; Sostaric, Ronald R.

    2017-01-01

    An advanced guidance algorithm, Fully Numerical Predictor-corrector Aerocapture Guidance (FNPAG), has been developed to perform aerocapture maneuvers in an optimal manner. It is a model-based, numerical guidance that benefits from requiring few adjustments across a variety of different hypersonic vehicle lift-to-drag ratios, ballistic coefficients, and atmospheric entry conditions. In this paper, FNPAG is first applied to the Mars Rigid Vehicle (MRV) mid lift-to-drag ratio concept. Then the study is generalized to a design map of potential Mars aerocapture missions and vehicles, ranging from the scale and requirements of recent robotic missions to potential human and precursor missions. The design map results show the versatility of FNPAG and provide insight for the design of Mars aerocapture vehicles and atmospheric entry conditions to achieve desired performance.

  7. A Hybrid Indoor Localization and Navigation System with Map Matching for Pedestrians Using Smartphones.

    PubMed

    Tian, Qinglin; Salcic, Zoran; Wang, Kevin I-Kai; Pan, Yun

    2015-12-05

    Pedestrian dead reckoning is a common technique applied in indoor inertial navigation systems that is able to provide accurate tracking performance within short distances. Sensor drift is the main bottleneck in extending the system to long-distance and long-term tracking. In this paper, a hybrid system integrating traditional pedestrian dead reckoning based on the use of inertial measurement units, short-range radio frequency systems and particle filter map matching is proposed. The system is a drift-free pedestrian navigation system in which position error and sensor drift are regularly corrected, and it is able to provide long-term accurate and reliable tracking. Moreover, the whole system is implemented on a commercial off-the-shelf smartphone and achieves real-time positioning and tracking performance with satisfactory accuracy.
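
    A minimal sketch of the core loop, combining a noisy dead-reckoning step update with particle-filter map matching (particles that cross a wall are resampled away); the noise levels and corridor map are invented for illustration, and the radio-frequency ranging correction is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pdr_particle_step(particles, step_len, heading, walkable):
        """Propagate particles by one detected step, then resample by map constraint.

        particles -- (N, 2) positions in metres
        walkable  -- function (x, y) -> bool, True where the map is traversable
        """
        noise = rng.normal(0.0, [0.1, 0.05], particles.shape)  # stride, heading
        theta = heading + noise[:, 1]
        step = np.column_stack([np.cos(theta), np.sin(theta)])
        moved = particles + (step_len + noise[:, :1]) * step
        # Map matching: particles that walked through a wall get zero weight
        w = np.array([walkable(x, y) for x, y in moved], dtype=float)
        if w.sum() == 0.0:
            return moved                      # degenerate case; keep the cloud
        idx = rng.choice(len(moved), len(moved), p=w / w.sum())
        return moved[idx]

    walkable = lambda x, y: 0.0 <= y <= 2.0   # a 2 m wide corridor along x
    p = np.zeros((500, 2))
    for _ in range(20):                       # 20 steps heading east
        p = pdr_particle_step(p, step_len=0.7, heading=0.0, walkable=walkable)
    print(p.mean(axis=0))                     # ~14 m travelled, y inside corridor
    ```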

  8. Active and Passive Remote Sensing Data Time Series for Flood Detection and Surface Water Mapping

    NASA Astrophysics Data System (ADS)

    Bioresita, Filsa; Puissant, Anne; Stumpf, André; Malet, Jean-Philippe

    2017-04-01

    As a consequence of environmental change, surface waters are changing in time and space. Better knowledge of the spatial and temporal distribution of surface water resources is essential to support sustainable policies and development activities, especially because surface waters are not only a vital freshwater resource but can also pose hazards to human settlements and infrastructure through flooding. Floods are among the most frequent disasters worldwide and can cause huge material losses. Detecting and mapping their spatial distribution is fundamental for ascertaining damage and for relief efforts. Spaceborne Synthetic Aperture Radar (SAR) is an effective way to monitor surface water bodies over large areas since it provides excellent temporal coverage and all-weather, day-and-night imaging capabilities. However, emergent vegetation, trees, wind or flow turbulence can increase radar backscatter returns and pose problems for the delineation of inundated areas. In such areas, passive remote sensing data can be used to identify vegetated areas and support the interpretation of SAR data. The availability of new Earth Observation products, for example Sentinel-1 (active) and Sentinel-2 (passive) imagery, with both high spatial and temporal resolution, has the potential to facilitate flood detection and the monitoring of surface water changes, which are very dynamic in space and time. In this context, the research consists of two parts. In the first part, the objective is to propose generic and reproducible methodologies for the analysis of Sentinel-1 time series data for flood detection and surface water mapping. The processing chain comprises a series of pre-processing steps and the statistical modeling of the pixel value distribution to produce probabilistic maps of the presence of surface water. Pre-processing of all Sentinel-1 images comprises the reduction of SAR effects such as orbit errors, speckle noise, and geometric distortions. A modified Split Based Approach (MSBA) is used in order to focus on surface water areas automatically and to facilitate the estimation of class models for water and non-water areas. A Finite Mixture Model is employed as the underlying statistical model to produce probabilistic maps. Subsequently, bilateral filtering is applied to take into account spatial neighborhood relationships in the generation of the final map. Shadow effects are eliminated in a post-processing step. The processing chain is tested on three case studies. The first case is a flood event in central Ireland, the second is located in Yorkshire, Great Britain, and the third covers a recent flood event in northern Italy. The tests showed that the modified SBA step and the Finite Mixture Models can be applied for automatic surface water detection in a variety of test cases. An evaluation against Copernicus products derived from very-high-resolution imagery was performed and showed a high overall accuracy and F-measure for the obtained maps. This evaluation also showed that the use of probability maps and bilateral filtering improved the accuracy of classification results significantly. Based on this quantitative evaluation, it is concluded that the processing chain can be applied for flood mapping from Sentinel-1 data. To estimate robust statistical distributions, the method requires sufficient surface water areas in the observed zone and sufficient contrast between surface waters and other land use classes.
    Ongoing research addresses the fusion of Sentinel-1 and passive remote sensing data (e.g. Sentinel-2) in order to reduce the current shortcomings of the developed processing chain. In this work, fusion is performed at the feature level to better account for the differing image properties of SAR and optical sensors. Further, the processing chain is currently being optimized in terms of calculation time for integration as a flood mapping service on the A2S (Alsace Aval Sentinel) high-performance computing infrastructure of the University of Strasbourg.
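
    The finite-mixture step can be sketched with a two-component Gaussian mixture on backscatter values: the darker mode is taken as water, and its posterior probability gives the probabilistic map. This is only an illustration of the idea on synthetic data, not the paper's exact MSBA and bilateral-filtering chain.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic sigma0 backscatter (dB): water is dark and peaked, land brighter
    water = rng.normal(-22.0, 1.5, 4000)
    land = rng.normal(-11.0, 3.0, 16000)
    sigma0 = np.concatenate([water, land]).reshape(-1, 1)

    # Two-component finite mixture model of the pixel-value distribution
    gmm = GaussianMixture(n_components=2, random_state=0).fit(sigma0)
    water_comp = int(np.argmin(gmm.means_))       # darker mode = water

    # Posterior probability of water per pixel -> probabilistic surface-water map
    p_water = gmm.predict_proba(sigma0)[:, water_comp]
    print(f"pixels with P(water) > 0.5: {(p_water > 0.5).sum()}")
    ```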

  9. White Matter Fiber-based Analysis of T1w/T2w Ratio Map.

    PubMed

    Chen, Haiwei; Budin, Francois; Noel, Jean; Prieto, Juan Carlos; Gilmore, John; Rasmussen, Jerod; Wadhwa, Pathik D; Entringer, Sonja; Buss, Claudia; Styner, Martin

    2017-02-01

    To develop, test, evaluate and apply a novel tool for the white matter fiber-based analysis of T1w/T2w ratio maps quantifying myelin content. The cerebral white matter in the human brain develops from a mostly non-myelinated state to a nearly fully mature white matter myelination within the first few years of life. High resolution T1w/T2w ratio maps are believed to be effective in quantitatively estimating myelin content on a voxel-wise basis. We propose the use of a fiber-tract-based analysis of such T1w/T2w ratio data, as it allows us to separate fiber bundles that a common regional analysis imprecisely groups together, and to associate effects to specific tracts rather than large, broad regions. We developed an intuitive, open source tool to facilitate such fiber-based studies of T1w/T2w ratio maps. Via its Graphical User Interface (GUI) the tool is accessible to non-technical users. The framework uses calibrated T1w/T2w ratio maps and a prior fiber atlas as an input to generate profiles of T1w/T2w values. The resulting fiber profiles are used in a statistical analysis that performs along-tract functional statistical analysis. We applied this approach to a preliminary study of early brain development in neonates. We developed an open-source tool for the fiber based analysis of T1w/T2w ratio maps and tested it in a study of brain development.

  10. White matter fiber-based analysis of T1w/T2w ratio map

    NASA Astrophysics Data System (ADS)

    Chen, Haiwei; Budin, Francois; Noel, Jean; Prieto, Juan Carlos; Gilmore, John; Rasmussen, Jerod; Wadhwa, Pathik D.; Entringer, Sonja; Buss, Claudia; Styner, Martin

    2017-02-01

    Purpose: To develop, test, evaluate and apply a novel tool for the white matter fiber-based analysis of T1w/T2w ratio maps quantifying myelin content. Background: The cerebral white matter in the human brain develops from a mostly non-myelinated state to a nearly fully mature white matter myelination within the first few years of life. High resolution T1w/T2w ratio maps are believed to be effective in quantitatively estimating myelin content on a voxel-wise basis. We propose the use of a fiber-tract-based analysis of such T1w/T2w ratio data, as it allows us to separate fiber bundles that a common regional analysis imprecisely groups together, and to associate effects to specific tracts rather than large, broad regions. Methods: We developed an intuitive, open source tool to facilitate such fiber-based studies of T1w/T2w ratio maps. Via its Graphical User Interface (GUI) the tool is accessible to non-technical users. The framework uses calibrated T1w/T2w ratio maps and a prior fiber atlas as an input to generate profiles of T1w/T2w values. The resulting fiber profiles are used in a statistical analysis that performs along-tract functional statistical analysis. We applied this approach to a preliminary study of early brain development in neonates. Results: We developed an open-source tool for the fiber based analysis of T1w/T2w ratio maps and tested it in a study of brain development.

  11. Performance back-deduction from a loading to flow coefficient map: Application to radial turbine

    NASA Astrophysics Data System (ADS)

    Carbonneau, Xavier; Binder, Nicolas

    2012-12-01

    Radial turbine stages are often used for applications requiring off-design operation, such as turbocharging. The off-design ability of such stages is commonly analyzed through the traditional turbine map, plotting the reduced mass flow against the pressure ratio for reduced-speed lines. However, some alternatives are possible, such as the flow-coefficient (Ψ) to loading-coefficient (φ) diagram, in which the pressure-ratio lines are actually straight lines, a very convenient property for prediction. A robust method re-creating the traditional map from a predicted Ψ-φ diagram is needed. Recent work has shown that the quality of this back-deduction, without the use of any loss models, depends on the knowledge of an intermediate pressure ratio. A model of this parameter is then proposed. The comparison with both experimental and CFD results is presented, with quite good agreement for mass flow rate and rotational speed, and for the intermediate pressure ratio. The last part of the paper applies this knowledge of the intermediate pressure ratio to improve the deduction of the pressure-ratio lines in the Ψ-φ diagram. Besides this improvement, the back-deduction method for the classical map is structured, applied and evaluated.

  12. Detection of myocardial ischemia by automated, motion-corrected, color-encoded perfusion maps compared with visual analysis of adenosine stress cardiovascular magnetic resonance imaging at 3 T: a pilot study.

    PubMed

    Doesch, Christina; Papavassiliu, Theano; Michaely, Henrik J; Attenberger, Ulrike I; Glielmi, Christopher; Süselbeck, Tim; Fink, Christian; Borggrefe, Martin; Schoenberg, Stefan O

    2013-09-01

    The purpose of this study was to compare automated, motion-corrected, color-encoded (AMC) perfusion maps with qualitative visual analysis of adenosine stress cardiovascular magnetic resonance imaging for detection of flow-limiting stenoses. Myocardial perfusion measurements applying the standard adenosine stress imaging protocol and a saturation-recovery temporal generalized autocalibrating partially parallel acquisition (t-GRAPPA) turbo fast low angle shot (Turbo FLASH) magnetic resonance imaging sequence were performed in 25 patients using a 3.0-T MAGNETOM Skyra (Siemens Healthcare Sector, Erlangen, Germany). Perfusion studies were analyzed using AMC perfusion maps and qualitative visual analysis. Angiographically detected coronary artery (CA) stenoses greater than 75% or 50% or more with a myocardial perfusion reserve index less than 1.5 were considered as hemodynamically relevant. Diagnostic performance and time requirement for both methods were compared. Interobserver and intraobserver reliability were also assessed. A total of 29 CA stenoses were included in the analysis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for detection of ischemia on a per-patient basis were comparable using the AMC perfusion maps compared to visual analysis. On a per-CA territory basis, the attribution of an ischemia to the respective vessel was facilitated using the AMC perfusion maps. Interobserver and intraobserver reliability were better for the AMC perfusion maps (concordance correlation coefficient, 0.94 and 0.93, respectively) compared to visual analysis (concordance correlation coefficient, 0.73 and 0.79, respectively). In addition, in comparison to visual analysis, the AMC perfusion maps were able to significantly reduce analysis time from 7.7 (3.1) to 3.2 (1.9) minutes (P < 0.0001). The AMC perfusion maps yielded a diagnostic performance on a per-patient and on a per-CA territory basis comparable with the visual analysis. Furthermore, this approach demonstrated higher interobserver and intraobserver reliability as well as a better time efficiency when compared to visual analysis.

  13. Acoustic-articulatory mapping in vowels by locally weighted regression

    PubMed Central

    McGowan, Richard S.; Berger, Michael A.

    2009-01-01

    A method for mapping between simultaneously measured articulatory and acoustic data is proposed. The method uses principal components analysis on the articulatory and acoustic variables, and mapping between the domains by locally weighted linear regression, or loess [Cleveland, W. S. (1979). J. Am. Stat. Assoc. 74, 829–836]. The latter method permits local variation in the slopes of the linear regression, assuming that the function being approximated is smooth. The methodology is applied to vowels of four speakers in the Wisconsin X-ray Microbeam Speech Production Database, with formant analysis. Results are examined in terms of (1) examples of forward (articulation-to-acoustics) mappings and inverse mappings, (2) distributions of local slopes and constants, (3) examples of correlations among slopes and constants, (4) root-mean-square error, and (5) sensitivity of formant frequencies to articulatory change. It is shown that the results are qualitatively correct and that loess performs better than global regression. The forward mappings show different root-mean-square error properties than the inverse mappings indicating that this method is better suited for the forward mappings than the inverse mappings, at least for the data chosen for the current study. Some preliminary results on sensitivity of the first two formant frequencies to the two most important articulatory principal components are presented. PMID:19813812
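
    A one-dimensional version of the locally weighted regression at the heart of the method makes the mechanics clear (the study maps between multidimensional principal-component scores, but the weighting scheme is the same). A minimal sketch with a tricube kernel:

    ```python
    import numpy as np

    def loess_1d(x_train, y_train, x_query, frac=0.4):
        """Locally weighted linear regression (loess) with tricube weights:
        for each query point, fit a weighted straight line to the nearest
        fraction `frac` of the data and evaluate it at the query."""
        n_local = max(2, int(frac * len(x_train)))
        y_hat = np.empty(len(x_query))
        for i, x0 in enumerate(x_query):
            d = np.abs(x_train - x0)
            h = max(np.sort(d)[n_local - 1], 1e-12)         # local bandwidth
            w = np.clip(1.0 - (d / h) ** 3, 0.0, 1.0) ** 3  # tricube kernel
            A = np.vstack([np.ones_like(x_train), x_train]).T
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(A * sw[:, None], y_train * sw, rcond=None)
            y_hat[i] = beta[0] + beta[1] * x0  # local constant + local slope
        return y_hat

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 3.0, 80)
    y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(80)
    print(loess_1d(x, y, np.array([0.5, 1.5, 2.5])))
    ```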

  14. EnGeoMAP - geological applications within the EnMAP hyperspectral satellite science program

    NASA Astrophysics Data System (ADS)

    Boesche, N. K.; Mielke, C.; Rogass, C.; Guanter, L.

    2016-12-01

    Hyperspectral investigations from near field to space contribute substantially to geological exploration and mining monitoring of raw material and mineral deposits. Due to their spectral characteristics, large mineral occurrences and mine fields can be identified from space, and the spatial distribution of distinct proxy minerals can be mapped. Within the framework of the EnMAP hyperspectral satellite science program, a mineral and elemental mapping tool was developed - EnGeoMAP. It contains a basic mineral mapping and a rare earth element mapping approach. This study shows the performance of EnGeoMAP based on simulated EnMAP data of the rare earth element bearing Mountain Pass Carbonatite Complex, USA, and the Rodalquilar and Lomilla Calderas, Spain, which host economically relevant gold-silver, lead-zinc-silver-gold and alunite deposits. The Mountain Pass image data were simulated on the basis of AVIRIS Next Generation images, while the Rodalquilar data are based on HyMap images. The EnGeoMAP - Base approach was applied to both images, while the Mountain Pass image data were additionally analysed using the EnGeoMAP - REE software tool. The results are mineral and elemental maps that serve as proxies for the regional lithology and deposit types. The validation of the maps is based on chemical analyses of field samples. Current airborne sensors meet the spatial and spectral requirements for detailed mineral mapping, and future hyperspectral spaceborne missions will additionally provide large coverage. For those hyperspectral missions, EnGeoMAP is a rapid data analysis tool that is provided to spectral geologists working in mineral exploration.

  15. Improving depth maps of plants by using a set of five cameras

    NASA Astrophysics Data System (ADS)

    Kaczmarek, Adam L.

    2015-03-01

    Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on the use of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps called multiple similar areas (MSA) is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and a stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). The algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
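
    The SSSD baseline is easy to sketch: each side image contributes an SSD cost volume against the central image, and the per-pixel disparity minimizes the summed cost. The shift directions and sign conventions below depend on the camera geometry and are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sssd_disparity(center, sides, directions, max_disp=32, win=9):
        """Winner-take-all disparity from the sum of per-pair SSD costs.

        center     -- image from the central camera (2-D float array)
        sides      -- images from the side cameras (same shape)
        directions -- pixel shift per unit disparity for each side camera,
                      e.g. (0, 1) right, (0, -1) left, (1, 0) down, (-1, 0) up
        """
        h, w = center.shape
        cost = np.empty((max_disp, h, w))
        for d in range(max_disp):
            total = np.zeros((h, w))
            for img, (dy, dx) in zip(sides, directions):
                shifted = np.roll(img, (d * dy, d * dx), axis=(0, 1))
                # windowed SSD = box-filtered squared difference
                total += uniform_filter((center - shifted) ** 2, size=win)
            cost[d] = total                 # aggregate cost over all four pairs
        return cost.argmin(axis=0)          # disparity minimizing the SSSD
    ```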

  16. Applied photo interpretation for airbrush cartography

    NASA Technical Reports Server (NTRS)

    Inge, J. L.; Bridges, P. M.

    1976-01-01

    Lunar and planetary exploration has required the development of new techniques of cartographic portrayal. Conventional photo-interpretive methods employing size, shape, shadow, tone, pattern, and texture are applied to computer-processed satellite television images. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The portrayal of tonal densities in a relief illustration is performed using a unique airbrush technique derived from hill-shading of contour maps. The control of tone and line quality is essential because the mid-gray to dark tone densities must be finalized prior to the addition of highlights to the drawing; the highlights are then added with an electric eraser until the drawing is completed. The drawing density is controlled with a reflectance-reading densitometer to meet certain density guidelines. The versatility of planetary photo-interpretive methods for airbrushed map portrayals is demonstrated by the application of these techniques to the synthesis of nonrelief data.

  17. Approximating Long-Term Statistics Early in the Global Precipitation Measurement Era

    NASA Technical Reports Server (NTRS)

    Stanley, Thomas; Kirschbaum, Dalia B.; Huffman, George J.; Adler, Robert F.

    2017-01-01

    Long-term precipitation records are vital to many applications, especially the study of extreme events. The Tropical Rainfall Measuring Mission (TRMM) has served this need, but TRMM's successor mission, Global Precipitation Measurement (GPM), does not yet provide a long-term record. Quantile mapping, the conversion of values across paired empirical distributions, offers a simple, established means to approximate such long-term statistics, but only within appropriately defined domains. This method was applied to a case study in Central America, demonstrating that quantile mapping between TRMM and GPM data maintains the performance of a real-time landslide model. Use of quantile mapping could bring the benefits of the latest satellite-based precipitation dataset to existing user communities such as those for hazard assessment, crop forecasting, numerical weather prediction, and disease tracking.
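
    Quantile mapping itself needs only the two empirical distributions. A minimal sketch; the variable names and the synthetic gamma-distributed rain rates are illustrative.

    ```python
    import numpy as np

    def quantile_map(values, src_sample, dst_sample, n_q=100):
        """Map `values` from the source distribution onto the target distribution
        by matching empirical quantiles (e.g. GPM rain rates onto TRMM's CDF)."""
        q = np.linspace(0.0, 100.0, n_q + 1)
        src_q = np.percentile(src_sample, q)
        dst_q = np.percentile(dst_sample, q)
        return np.interp(values, src_q, dst_q)

    rng = np.random.default_rng(3)
    trmm = rng.gamma(0.6, 8.0, 20000)   # long historical record (mm/day)
    gpm = rng.gamma(0.7, 7.0, 2000)     # short record from the newer sensor
    print(quantile_map(np.array([5.0, 30.0, 80.0]), gpm, trmm))
    ```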

  18. Performance of Low-Density Parity-Check Coded Modulation

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift-keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
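
    The paper does not spell out which approximation that is, but the widely used max-log approximation has the same "one equation for every modulation" character and serves as a sketch:

    ```python
    import numpy as np

    def max_log_llr(y, constellation, bit_labels, noise_var):
        """Max-log LLR for each bit of one received complex symbol:
        LLR_i ~ (min_{s: b_i=1} |y - s|^2 - min_{s: b_i=0} |y - s|^2) / noise_var
        """
        d2 = np.abs(y - constellation) ** 2
        llr = np.empty(bit_labels.shape[1])
        for i in range(bit_labels.shape[1]):
            d0 = d2[bit_labels[:, i] == 0].min()   # closest symbol with bit = 0
            d1 = d2[bit_labels[:, i] == 1].min()   # closest symbol with bit = 1
            llr[i] = (d1 - d0) / noise_var
        return llr

    # Gray-labelled QPSK as a toy example; the same function handles any
    # constellation (8-PSK, 16-APSK, ...) given its points and bit labels.
    const = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2.0)
    labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
    print(max_log_llr(0.9 + 0.2j, const, labels, noise_var=0.5))
    ```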

  19. Fusion of Location Fingerprinting and Trilateration Based on the Example of Differential Wi-Fi Positioning

    NASA Astrophysics Data System (ADS)

    Retscher, G.

    2017-09-01

    Positioning of mobile users in indoor environments with Wireless Fidelity (Wi-Fi) has become very popular, with location fingerprinting and trilateration being the most commonly employed methods. In both, the received signal strengths (RSS) of the surrounding access points (APs) are scanned and used to estimate the user's position. Within the scope of this study, the advantageous qualities of both methods are identified and selected to benefit from their combination. By fusing these technologies, a higher performance for Wi-Fi positioning is achievable. For that purpose, a novel approach based on the well-known Differential GPS (DGPS) principle of operation is developed and applied. This approach to user localization and tracking is termed Differential Wi-Fi (DWi-Fi) by analogy with DGPS. From reference stations deployed in the area of interest, differential measurement corrections are derived and applied at the mobile user side. Hence, range or coordinate corrections can be estimated from a network of reference station observations, as is done in common CORS GNSS networks. A low-cost realization with Raspberry Pi units is employed for these reference stations. These units serve both as APs broadcasting Wi-Fi signals and as reference stations scanning the receivable Wi-Fi signals of the surrounding APs. As the RSS measurements are carried out continuously at the reference stations, dynamically changing maps of RSS distributions, so-called radio maps, are derived. As in location fingerprinting, these radio maps represent the RSS fingerprints at certain locations. From the areal modelling of the correction parameters in combination with the dynamically updated radio maps, the location of the user can be estimated in real time. The novel approach is presented and its performance demonstrated in this paper.
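
    By analogy with DGPS range corrections, the core of the differential step can be sketched as a per-AP offset observed at a reference station and subtracted from the rover's scan. The dictionary layout and values below are invented for illustration and simplify the paper's areal modelling of corrections to a single reference station.

    ```python
    def apply_dwifi_corrections(rover_rss, ref_rss, ref_radio_map):
        """Correct a rover's RSS scan using the drift observed at a reference
        station with a known position (all dicts map AP id -> RSS in dBm)."""
        corrected = {}
        for ap, rss in rover_rss.items():
            if ap in ref_rss and ap in ref_radio_map:
                drift = ref_rss[ap] - ref_radio_map[ap]  # DGPS-style correction
            else:
                drift = 0.0                              # no reference for this AP
            corrected[ap] = rss - drift
        return corrected

    rover = {"ap1": -61.0, "ap2": -74.0}
    ref_now = {"ap1": -48.0, "ap2": -55.0}      # live scan at the reference
    ref_map = {"ap1": -50.0, "ap2": -55.0}      # radio-map value at the reference
    print(apply_dwifi_corrections(rover, ref_now, ref_map))
    ```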

  20. Re-examining data-intensive surface water models with high-resolution topography derived from unmanned aerial system photogrammetry

    NASA Astrophysics Data System (ADS)

    Pai, H.; Tyler, S.

    2017-12-01

    Small, unmanned aerial systems (sUAS) are quickly becoming a cost-effective and easily deployable tool for high-spatial-resolution environmental sensing. Land surface studies from sUAS imagery have largely focused on accurate topographic mapping, quantifying geomorphologic changes, and classification/identification of vegetation, sediment, and water quality tracers. In this work, we explore a further application of sUAS-derived topographic mapping to a two-dimensional (2-d), depth-averaged river hydraulic model (Flow and Sediment Transport with Morphological Evolution of Channels, FaSTMECH) along a short, meandering reach of East River, Colorado. On August 8, 2016, we flew a sUAS carrying a consumer-grade visible camera as part of the Center for Transformative Environmental Monitoring Programs and created a digital elevation map (1.5 cm resolution; 5 cm accuracy; 500 m long river corridor) with Agisoft Photoscan software. From the elevation map, we created a longitudinal water surface elevation (WSE) profile by manually delineating the bank-water interface, and we estimated river bathymetry by applying refraction corrections for more accurate water depth estimates, an area of ongoing research for shallow and clear river systems. We tested both uncorrected and refraction-corrected bathymetries with the steady-state, 2-d model, applying sensitivities for dissipation parameters (bed roughness and eddy characteristics). Model performance was judged against the WSE data and measured stream velocities. While the models converged, performance and insights from model output could be improved with better bed roughness characterization and additional water depth cross-validation for refraction corrections. Overall, this work shows the applicability of sUAS-derived products to a multidimensional river model, where bathymetric data of high resolution and accuracy are key model input requirements.
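
    For near-nadir viewing, the standard small-angle refraction correction multiplies the apparent (through-water) depth by the refractive index of water, which is the kind of correction referred to above. A minimal sketch:

    ```python
    def refraction_corrected_depth(apparent_depth_m, n_water=1.337):
        """Small-angle refraction correction for through-water photogrammetry:
        true depth ~ apparent depth times the refractive index of water."""
        return apparent_depth_m * n_water

    # An apparent depth of 0.45 m corresponds to ~0.60 m of true water depth
    print(refraction_corrected_depth(0.45))
    ```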

  1. Design and evaluation of a freeform lens by using a method of luminous intensity mapping and a differential equation

    NASA Astrophysics Data System (ADS)

    Essameldin, Mahmoud; Fleischmann, Friedrich; Henning, Thomas; Lang, Walter

    2017-02-01

    Freeform optical systems are playing an important role in the field of illumination engineering for redistributing the light intensity, because of their capability of achieving accurate and efficient results. The authors presented the basic idea of the freeform lens design method at the 117th annual meeting of the German Society of Applied Optics (DGAO Proceedings). Here, we demonstrate the feasibility of the design method by designing and evaluating a freeform lens. The concepts of luminous intensity mapping, energy conservation and a differential equation are combined in designing a lens for non-imaging applications. The procedures required to design a lens, including the simulations, are explained in detail. The optical performance is investigated by using a numerical simulation of optical ray tracing. For evaluation, the results are compared with another recently published design method, showing the accurate performance of the proposed method using a reduced number of mapping angles. As a part of the tolerance analyses of the fabrication processes, the influence of light source misalignments (translation and orientation) on the beam-shaping performance is presented. Finally, the importance of considering the extended light source while designing a freeform lens using the proposed method is discussed.

  2. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764

  3. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.

    PubMed

    Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J

    2015-07-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. Copyright © 2015 by the Genetics Society of America.

  4. Estimating costs and performance of systems for machine processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ballard, R. J.; Eastwood, L. F., Jr.

    1977-01-01

    This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.

  5. Mapping landslide source and transport areas in VHR images with Object-Based Analysis and Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Heleno, Sandra; Matias, Magda; Pina, Pedro

    2015-04-01

    Visual interpretation of satellite imagery remains extremely demanding in terms of resources and time, especially when dealing with numerous multi-scale landslides affecting wide areas, such as is the case of rainfall-induced shallow landslides. Applying automated methods can contribute to more efficient landslide mapping and updating of existing inventories, and in recent years the number and variety of approaches has been increasing rapidly. Very High Resolution (VHR) images, acquired by space-borne sensors with sub-metric precision, such as Ikonos, Quickbird, GeoEye and Worldview, are increasingly being considered the best option for landslide mapping, but these new levels of spatial detail also present new challenges to state-of-the-art image analysis tools, calling for automated methods specifically suited to mapping landslide events on VHR optical images. In this work we develop and test a methodology for semi-automatic landslide recognition and mapping of landslide source and transport areas. The method combines object-based image analysis and a Support Vector Machine supervised learning algorithm, and was tested using a GeoEye-1 multispectral image, sensed 3 days after a damaging landslide event in Madeira Island, together with a pre-event LiDAR DEM. Our approach proved successful in the recognition of landslides over a 15 km2 study area, with 81 out of 85 landslides detected in its validation regions. The classifier also showed reasonable performance in the internal mapping of landslide source and transport areas (true positive rate above 60% and false positive rate below 36% in both validation regions), in particular on the sunnier east-facing slopes. In the less illuminated areas the classifier is still able to accurately map the source areas, but performs poorly in the mapping of landslide transport areas.

  6. Anodal tDCS applied during multitasking training leads to transferable performance gains.

    PubMed

    Filmer, Hannah L; Lyons, Maxwell; Mattingley, Jason B; Dux, Paul E

    2017-10-11

    Cognitive training can lead to performance improvements that are specific to the tasks trained. Recent research has suggested that transcranial direct current stimulation (tDCS) applied during training of a simple response-selection paradigm can broaden performance benefits to an untrained task. Here we assessed the impact of combined tDCS and training on multitasking, stimulus-response mapping specificity, response-inhibition, and spatial attention performance in a cohort of healthy adults. Participants trained over four days with concurrent tDCS - anodal, cathodal, or sham - applied to the left prefrontal cortex. Immediately prior to, 1 day after, and 2 weeks after training, performance was assessed on the trained multitasking paradigm, an untrained multitasking paradigm, a go/no-go inhibition task, and a visual search task. Training combined with anodal tDCS, compared with training plus cathodal or sham stimulation, enhanced performance for the untrained multitasking paradigm and visual search tasks. By contrast, there were no training benefits for the go/no-go task. Our findings demonstrate that anodal tDCS combined with multitasking training can extend to untrained multitasking paradigms as well as spatial attention, but with no extension to the domain of response inhibition.

  7. An approach to improve the spatial resolution of a force mapping sensing system

    NASA Astrophysics Data System (ADS)

    Negri, Lucas Hermann; Manfron Schiefer, Elberth; Sade Paterno, Aleksander; Muller, Marcia; Luís Fabris, José

    2016-02-01

    This paper proposes a smart sensor system capable of detecting sparse forces applied to different positions of a metal plate. The sensing is performed with strain transducers based on fiber Bragg gratings (FBG) distributed under the plate. Forces actuating in nine squared regions of the plate, resulting from up to three different loads applied simultaneously to the plate, were monitored with seven transducers. The system determines the magnitude of the force/pressure applied on each specific area, even in the absence of a dedicated transducer for that area. The set of strain transducers with coupled responses and a compressive sensing algorithm are employed to solve the underdetermined inverse problem which emerges from mapping the force. In this configuration, experimental results have shown that the system is capable of recovering the value of the load distributed on the plate with a signal-to-noise ratio better than 12 dB, when the plate is submitted to three simultaneous test loads. The proposed method is a practical illustration of compressive sensing algorithms for the reduction of the number of FBG-based transducers used in a quasi-distributed configuration.
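
    The abstract does not name the specific compressive sensing solver; iterative shrinkage-thresholding (ISTA) is one standard choice for the underlying L1-regularized least-squares problem and is sketched below, with an invented 7-by-9 sensing matrix standing in for the coupled transducer responses.

    ```python
    import numpy as np

    def ista(A, y, lam=0.1, n_iter=500):
        """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
        where x holds the loads on the plate regions and A maps loads to the
        FBG strain readings."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - step * A.T @ (A @ x - y)              # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
        return x

    rng = np.random.default_rng(4)
    A = rng.normal(size=(7, 9))            # 7 transducers, 9 plate regions
    x_true = np.zeros(9)
    x_true[[2, 5]] = [1.0, 0.6]            # two sparse loads on the plate
    y = A @ x_true + 0.01 * rng.normal(size=7)
    print(np.round(ista(A, y), 2))         # recovers the two loaded regions
    ```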

  8. The October 2015 flash-floods in south eastern France: hydrological analyses, inundation mapping and impact estimations

    NASA Astrophysics Data System (ADS)

    Payrastre, Olivier; Bourgin, François; Lebouc, Laurent; Le Bihan, Guillaume; Gaume, Eric

    2017-04-01

    The October 2015 flash floods in south eastern France caused more than twenty fatalities, heavy damage and large economic losses in high-density urban areas of the Mediterranean coast, including the cities of Mandelieu-La Napoule, Cannes and Antibes. Following a post-event survey and preliminary analyses conducted within the framework of the Hymex project, we set up an entire simulation chain at the regional scale to better understand this outstanding event. Rainfall-runoff simulations, inundation mapping and a first estimation of the impacts are conducted following the approach developed and successfully applied to two large flash-flood events in two different French regions (Gard in 2002 and Var in 2010) by Le Bihan (2016). A distributed rainfall-runoff model applied at high resolution over the whole area - including numerous small ungauged basins - is used to feed a semi-automatic hydraulic approach (the Cartino method) applied along the river network - including small tributaries. Estimation of the impacts is then performed based on the delineation of the flooded areas and on geographic databases identifying buildings and population at risk.

  9. Recombination patterns reveal information about centromere location on linkage maps.

    PubMed

    Limborg, Morten T; McKinney, Garrett J; Seeb, Lisa W; Seeb, James E

    2016-05-01

    Linkage mapping is often used to identify genes associated with phenotypic traits and for aiding genome assemblies. Still, many emerging maps do not locate centromeres - an essential component of the genomic landscape. Here, we demonstrate that for genomes with strong chiasma interference, approximate centromere placement is possible by phasing the same data used to generate linkage maps. Assuming one obligate crossover per chromosome arm, information about centromere location can be revealed by tracking the accumulated recombination frequency along linkage groups, similar to half-tetrad analyses. We validate the method on a linkage map for sockeye salmon (Oncorhynchus nerka) with known centromeric regions. Further tests suggest that the method will work well in other salmonids and other eukaryotes. However, the method performed weakly when applied to a male linkage map (rainbow trout; O. mykiss) characterized by low and unevenly distributed recombination - a general feature of male meiosis in many species. Further, a high frequency of double crossovers along chromosome arms in barley reduced resolution for locating centromeric regions on most linkage groups. Despite these limitations, our method should work well for high-density maps in species with strong recombination interference and will enrich many existing and future mapping resources. © 2015 The Authors. Molecular Ecology Resources published by John Wiley & Sons Ltd.

  10. Evaluation of four supervised learning methods for groundwater spring potential mapping in Khalkhal region (Iran) using GIS-based features

    NASA Astrophysics Data System (ADS)

    Naghibi, Seyed Amir; Moradi Dashtpagerdi, Mostafa

    2017-01-01

    One important tool for water resources management in arid and semi-arid areas is groundwater potential mapping. In this study, four data-mining models including K-nearest neighbor (KNN), linear discriminant analysis (LDA), multivariate adaptive regression splines (MARS), and quadratic discriminant analysis (QDA) were used for groundwater potential mapping to get better and more accurate groundwater potential maps (GPMs). For this purpose, 14 groundwater influence factors were considered, such as altitude, slope angle, slope aspect, plan curvature, profile curvature, slope length, topographic wetness index (TWI), stream power index, distance from rivers, river density, distance from faults, fault density, land use, and lithology. From 842 springs in the study area, in the Khalkhal region of Iran, 70 % (589 springs) were considered for training and 30 % (253 springs) were used as a validation dataset. Then, KNN, LDA, MARS, and QDA models were applied in the R statistical software and the results were mapped as GPMs. Finally, receiver operating characteristic (ROC) curves were used to evaluate the performance of the models. According to the results, the areas under the ROC curves were calculated as 81.4, 80.5, 79.6, and 79.2 % for MARS, QDA, KNN, and LDA, respectively. So, it can be concluded that the performances of KNN and LDA were acceptable and the performances of MARS and QDA were excellent. Also, the results showed high contributions of altitude, TWI, slope angle, and fault density, while plan curvature and land use were the least important factors.
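
    The study was carried out in R; for consistency with the other sketches here, a Python analogue of the evaluation step on synthetic data is shown below (MARS is omitted because it requires a third-party package, and the feature construction is invented).

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(842, 14))                # 14 GIS-derived factors
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=842) > 0).astype(int)
    train = np.arange(589)                        # 70 % for training
    test = np.arange(589, 842)                    # 30 % for validation

    models = {"KNN": KNeighborsClassifier(),
              "LDA": LinearDiscriminantAnalysis(),
              "QDA": QuadraticDiscriminantAnalysis()}
    for name, model in models.items():
        model.fit(X[train], y[train])
        auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
        print(f"{name}: AUC = {auc:.3f}")
    ```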

  11. Combined interpretation of multiple geophysical techniques: an archaeological case study

    NASA Astrophysics Data System (ADS)

    Riedl, S.; Reichmann, S.; Tronicke, J.; Lück, E.

    2009-04-01

    In order to locate and ascertain the dimensions of an ancient orangery, we explored an area of about 70 m x 60 m in the Rheinsberg Palace Garden (Germany) with multiple geophysical techniques. The Rheinsberg Park, situated about 100 km northwest of Berlin, Germany, was established by the Prussian rulers in the 18th century. Due to redesign of the architecture and the landscaping during the past 300 years, buildings were dismantled and detailed knowledge about some original buildings was lost. We surveyed an area close to a gazebo where, according to historical sources, an orangery was planned around the year 1740. However, today it is not clear to what extent this plan was realized and whether remains of this building are still buried in the subsurface. The applied geophysical techniques include magnetic gradiometry, frequency domain electromagnetic (FDEM) and direct current (DC) resistivity mapping as well as ground penetrating radar (GPR). To get an overview of the site, we performed FDEM electrical conductivity mapping using an EM38 instrument and magnetic gradiometry with caesium magnetometers. Both data sets were collected with an in- and crossline data point spacing of ca. 10 cm and 50 cm, respectively. DC resistivity surveying was performed using a pole-pole electrode configuration with an electrode spacing of 1.5 m and a spacing of 1.0 m between individual readings. A 3-D GPR survey was conducted using 200 MHz antennae and in- and crossline spacing of ca. 10 cm and 40 cm, respectively. A standard processing sequence including 3-D migration was applied. A combined interpretation of all collected data sets shows that the magnetic gradient and EM38 conductivity maps are dominated by anomalies from metallic water pipes belonging to the irrigation system of the park. The DC resistivity map outlines a rectangular area which might indicate the extension of a former building south of the gazebo. The 3-D GPR data set provides further insights into subsurface structures and relevant geometries. From this data set, we interpret the depth and the extent of foundation and wall remains in the southern and central part of the site, indicating the extent of the old orangery. This case study clearly illustrates the benefit of using multiple geophysical methods in archaeological studies. It further illustrates the advantage of 3-D GPR surveying at sites where anthropogenic disturbances (such as metallic pipes and other utilities) might limit the applicability of commonly applied mapping techniques such as magnetic gradiometry or EM38 conductivity mapping.

  12. Neural networks for learning and prediction with applications to remote sensing and speech perception

    NASA Astrophysics Data System (ADS)

    Gjaja, Marin N.

    1997-11-01

    Neural networks for supervised and unsupervised learning are developed and applied to problems in remote sensing, continuous map learning, and speech perception. Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART networks synthesize fuzzy logic and neural networks, and supervised ARTMAP networks incorporate ART modules for prediction and classification. New ART and ARTMAP methods resulting from analyses of data structure, parameter specification, and category selection are developed. Architectural modifications providing flexibility for a variety of applications are also introduced and explored. A new methodology for automatic mapping from Landsat Thematic Mapper (TM) and terrain data, based on fuzzy ARTMAP, is developed. System capabilities are tested on a challenging remote sensing problem, prediction of vegetation classes in the Cleveland National Forest from spectral and terrain features. After training at the pixel level, performance is tested at the stand level, using sites not seen during training. Results are compared to those of maximum likelihood classifiers, back propagation neural networks, and K-nearest neighbor algorithms. Best performance is obtained using a hybrid system based on a convex combination of fuzzy ARTMAP and maximum likelihood predictions. This work forms the foundation for additional studies exploring fuzzy ARTMAP's capability to estimate class mixture composition for non-homogeneous sites. Exploratory simulations apply ARTMAP to the problem of learning continuous multidimensional mappings. A novel system architecture retains basic ARTMAP properties of incremental and fast learning in an on-line setting while adding components to solve this class of problems. The perceptual magnet effect is a language-specific phenomenon arising early in infant speech development that is characterized by a warping of speech sound perception. An unsupervised neural network model is proposed that embodies two principal hypotheses supported by experimental data--that sensory experience guides language-specific development of an auditory neural map and that a population vector can predict psychological phenomena based on map cell activities. Model simulations show how a nonuniform distribution of map cell firing preferences can develop from language-specific input and give rise to the magnet effect.

  13. Analysis of multiplex gene expression maps obtained by voxelation.

    PubMed

    An, Li; Xie, Hongbo; Chin, Mark H; Obradovic, Zoran; Smith, Desmond J; Megalooikonomou, Vasileios

    2009-04-29

    Gene expression signatures in the mammalian brain hold the key to understanding neural development and neurological disease. Researchers have previously used voxelation in combination with microarrays to acquire genome-wide atlases of expression patterns in the mouse brain. By contrast, some work has studied gene functions without taking into account the location of a gene's expression in the mouse brain. In this paper, we present an approach for identifying the relation between gene expression maps obtained by voxelation and gene functions. To analyze the dataset, we chose typical genes as queries and aimed at discovering similar gene groups. Gene similarity was determined using wavelet features extracted from the gene expression maps averaged over the left and right hemispheres, with the Euclidean distance between each pair of feature vectors as the dissimilarity measure. We also applied a multiple clustering approach, combined with hierarchical clustering, to the gene expression maps. Within each group of similar genes and each cluster, gene function similarity was measured by calculating the average gene function distance in the gene ontology structure. By applying our methodology to find genes similar to certain target genes, we were able to improve our understanding of gene expression patterns and gene functions. By applying the clustering analysis method, we obtained significant clusters which have both very similar gene expression maps and very similar gene functions with respect to their corresponding gene ontologies. The cellular component ontology resulted in prominent clusters expressed in cortex and corpus callosum. The molecular function ontology gave prominent clusters in cortex, corpus callosum and hypothalamus. The biological process ontology resulted in clusters in cortex, hypothalamus and choroid plexus. Clusters from all three ontologies combined were most prominently expressed in cortex and corpus callosum. The experimental results confirm the hypothesis that genes with similar gene expression maps tend to have similar gene functions. The voxelation data take into account the location of gene expression in the mouse brain, which is novel in related research. The proposed approach can potentially be used to predict gene functions and provide helpful suggestions to biologists.
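
    A minimal sketch of the similarity computation described above, using PyWavelets on hypothetical equal-sized 2-D expression maps; the wavelet family and decomposition level are illustrative choices rather than the study's reported settings:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_features(expr_map, wavelet="haar", level=2):
        """Flatten a 2-D wavelet decomposition of an expression map into a feature vector."""
        coeffs = pywt.wavedec2(expr_map, wavelet, level=level)
        arr, _ = pywt.coeffs_to_array(coeffs)
        return arr.ravel()

    def gene_distance(map_a, map_b):
        """Euclidean distance between the wavelet feature vectors of two maps."""
        return float(np.linalg.norm(wavelet_features(map_a) - wavelet_features(map_b)))

    # Hypothetical 16x16 hemisphere-averaged expression maps for two genes
    rng = np.random.default_rng(0)
    gene_a, gene_b = rng.random((16, 16)), rng.random((16, 16))
    print(gene_distance(gene_a, gene_b))
    ```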

  14. Public Health Analysis Transport Optimization Model v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyeler, Walt; Finley, Patrick; Walser, Alex

    PHANTOM models the logistics functions of national public health systems. The system enables public health officials to visualize and coordinate options for public health surveillance, diagnosis, response and administration in an integrated analytical environment. Users may simulate and analyze system performance by applying scenarios that represent current conditions, future contingencies, or what-if analyses of potential systemic improvements. Public health networks are visualized as interactive maps, with graphical displays of relevant system performance metrics as calculated by the simulation modeling components.

  15. Mapping photopolarimeter spectrometer instrument feasibility study for future planetary flight missions

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Evaluations directed towards defining optimal instrumentation for performing planetary polarization measurements from a spacecraft platform are summarized. An overview of the science rationale for polarimetric measurements is given to point out the importance of such measurements for future studies and exploration of the outer planets. The key instrument features required to perform the needed measurements are discussed and applied to the requirements of the Cassini mission to Saturn. The resultant conceptual design of a spectro-polarimeter photometer for Cassini is described in detail.

  16. Beethoven's last piano sonata and those who follow crocodiles: cross-domain mappings of auditory pitch in a musical context.

    PubMed

    Eitan, Zohar; Timmers, Renee

    2010-03-01

    Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of other domains. We collected a total of 35 pitch mappings and investigated in four experiments how these mappings are used and structured. In particular, we inquired (1) how Western subjects apply Western and non-Western metaphors to "high" and "low" pitches, (2) whether mappings applied in an abstract conceptual task are similarly applied by listeners to actual music, (3) how mappings of spatial height relate to these pitch mappings, and (4) how mappings of "high" and "low" pitch associate with other dimensions, in particular quantity, size, intensity and valence. The results show strong agreement among Western participants in applying familiar and unfamiliar metaphors for pitch, in both an abstract conceptual task (Exp. 1) and a music listening task (Exp. 2), indicating that diverse cross-domain mappings for pitch exist latently besides the common verticality metaphor. Furthermore, limited overlap between mappings of spatial height and pitch height was found, suggesting that, the ubiquity of the verticality metaphor in Western usage notwithstanding, cross-domain pitch mappings are largely independent of that metaphor and seem to be based upon other underlying dimensions. Part of the discrepancy between spatial height and pitch height is that, for pitch, "up" is not necessarily "more," nor is it necessarily "good." High pitch is "more" only for height, intensity and brightness; it is "less" for mass, size and quantity. We discuss the implications of these findings for music and speech prosody, and their relevance to notions of embodied cognition and cross-domain magnitude representation.

  17. Process mapping as a framework for performance improvement in emergency general surgery.

    PubMed

    DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad

    2017-12-01

    Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.

  19. Satellite SAR interferometric techniques applied to emergency mapping

    NASA Astrophysics Data System (ADS)

    Stefanova Vassileva, Magdalena; Riccardi, Paolo; Lecci, Daniele; Giulio Tonolo, Fabio; Boccardo, Piero; Chiesa, Giuliana; Angeluccetti, Irene

    2017-04-01

    This paper aims to investigate the capabilities of the currently available SAR interferometric algorithms in the field of emergency mapping. Several tests have been performed on Copernicus Sentinel-1 data using the COTS software ENVI/SARscape 5.3. Emergency mapping can be defined as the "creation of maps, geo-information products and spatial analyses dedicated to providing situational awareness for emergency management and immediate crisis information for response by means of extraction of reference (pre-event) and crisis (post-event) geographic information/data from satellite or aerial imagery". The conventional differential SAR interferometric technique (DInSAR) and the two currently available multi-temporal SAR interferometric approaches, i.e. Permanent Scatterer Interferometry (PSI) and Small BAseline Subset (SBAS), have been applied to provide crisis information useful for emergency management activities. Depending on the emergency management phase considered, a distinction can be made between rapid mapping, i.e. fast provision of geospatial data on the affected area for immediate emergency response, and monitoring mapping, i.e. detection of phenomena for risk prevention and mitigation activities. In order to evaluate the potential and limitations of the aforementioned SAR interferometric approaches for rapid and monitoring mapping applications, several main factors have been taken into account: the crisis information extracted, the input data required, the processing time and the expected accuracy. The results highlight that DInSAR has the capacity to delineate areas affected by large and sudden deformations and fulfills most of the immediate response requirements. The main limiting factor of interferometry is the availability of a suitable SAR acquisition immediately after the event (e.g. the Sentinel-1 mission, characterized by a 6-day revisit time, may not always satisfy the immediate emergency request). PSI and SBAS techniques are suitable for producing monitoring maps for risk prevention and mitigation purposes. Nevertheless, multi-temporal techniques require large SAR temporal datasets, i.e. 20 or more images. Since the Sentinel-1 missions have been operational only since April 2014, multi-mission SAR datasets should therefore be exploited to carry out historical analyses.

  20. Automated crystallographic ligand building using the medial axis transform of an electron-density isosurface.

    PubMed

    Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T

    2005-10-01

    Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed using an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.
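
    The matching step above compares a skeleton graph of the isosurface against a graph of the molecular model. A toy sketch of that kind of topological comparison via subgraph isomorphism in networkx, with hypothetical graphs; the paper's actual matching and scoring procedure is more elaborate:

    ```python
    import networkx as nx
    from networkx.algorithms import isomorphism

    # Hypothetical medial-axis graph of a density isosurface (skeleton junctions/ends)
    axis_graph = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4)])

    # Hypothetical ligand graph (nodes = atoms, edges = bonds)
    ligand_graph = nx.Graph([("C1", "C2"), ("C2", "C3"), ("C2", "O1")])

    # Can the ligand topology be embedded in the skeleton topology?
    gm = isomorphism.GraphMatcher(axis_graph, ligand_graph)
    print(gm.subgraph_is_isomorphic())              # True if an embedding exists
    for mapping in gm.subgraph_isomorphisms_iter():
        print(mapping)                              # skeleton node -> atom pairing
        break
    ```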

  1. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performance for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with their corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.
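
    A minimal sketch of the marker-selection step, assuming per-pixel posterior probabilities from the pixelwise classifier; the fixed threshold is an illustrative stand-in for the paper's reliability criterion:

    ```python
    import numpy as np

    def select_markers(max_posterior, labels, threshold=0.9):
        """Keep only reliably classified pixels as markers.

        max_posterior: (n_pixels,) maximum class probability per pixel
        labels:        (n_pixels,) class assigned by the pixelwise classifier
        Returns marker labels, with unreliable pixels set to 0 (unmarked).
        """
        return np.where(max_posterior >= threshold, labels, 0)

    probs = np.array([0.95, 0.55, 0.99, 0.80])
    labels = np.array([2, 1, 3, 2])
    print(select_markers(probs, labels))  # -> [2 0 3 0]
    ```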

  2. Surface inspection using FTIR spectroscopy

    NASA Technical Reports Server (NTRS)

    Powell, G. L.; Smyrl, N. R.; Williams, D. M.; Meyers, H. M., III; Barber, T. E.; Marrero-Rivera, M.

    1995-01-01

    The use of reflectance Fourier transform infrared (FTIR) spectroscopy as a tool for surface inspection is described. Laboratory and portable instruments can support remote sensing probes that map chemical contaminants on surfaces, with detection limits under the best of conditions in the sub-nanometer range (i.e., near-absolute cleanliness), excellent performance in the sub-micrometer range, and useful performance for films tens of micrometers thick. Examples are given of discovering and quantifying contamination such as mineral oils and greases, vegetable oils, and silicone oils on aluminum foil, galvanized sheet steel, smooth aluminum tubing, and sandblasted 7075 aluminum alloy and D6AC steel. The ability to map in time and space the distribution of oil stains on metals is demonstrated. Techniques for quantitatively applying oils to metals and subsequently verifying the application are described, as are the non-linear relationships between reflectance and the quantity of oil.

  3. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon's diversity and edge density.

    PubMed

    Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran

    2010-05-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and edge density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already-mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte Carlo simulation was applied to study the performance of different designs. Random and systematic sampling were applied for four sample sizes and five buffer widths. The latter feature is relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease with increasing sample size. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived, showing that point sampling can be a competitive alternative to complete wall-to-wall mapping.
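
    For reference, the point-sampling idea for Shannon's diversity reduces to estimating class proportions from sampled points and applying H' = -sum(p_i ln p_i). A minimal sketch on a hypothetical raster landscape; the NILS sampling designs, buffer-based edge estimator, and bias adjustments are omitted:

    ```python
    import numpy as np

    def shannon_from_points(landcover, n_points, seed=None):
        """Estimate Shannon's diversity from random point samples of a mapped landscape."""
        rng = np.random.default_rng(seed)
        rows = rng.integers(0, landcover.shape[0], n_points)
        cols = rng.integers(0, landcover.shape[1], n_points)
        _, counts = np.unique(landcover[rows, cols], return_counts=True)
        p = counts / counts.sum()                 # estimated class proportions
        return -np.sum(p * np.log(p))

    # Hypothetical 100x100 landscape with 7 land cover classes
    landscape = np.random.default_rng(42).integers(0, 7, size=(100, 100))
    print(shannon_from_points(landscape, n_points=500, seed=1))
    ```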

  4. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.

    PubMed

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-07

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves, TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm, which incorporates a quadratic penalty function, on the other. The parametric images were then calculated using voxel-wise weighted least-squares fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min⁻¹·ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that, at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
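
    A minimal sketch of the indirect method's voxel-wise kinetic fit, assuming a hypothetical input function and a plain one-tissue compartment model; the study's spillover terms and weighting scheme are omitted:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical time grid (min) and arterial input function C_a(t)
    t = np.linspace(0.0, 5.0, 16)
    dt = t[1] - t[0]
    Ca = 10.0 * t * np.exp(-2.0 * t)              # illustrative bolus shape

    def one_tissue_tac(t, K1, k2):
        """One-tissue compartment model: C_T(t) = K1 * (C_a conv exp(-k2 t))."""
        response = np.exp(-k2 * t)
        return K1 * np.convolve(Ca, response)[: len(t)] * dt

    # Simulate a noisy myocardial TAC, then fit it as the indirect method would per voxel
    tac = one_tissue_tac(t, 0.8, 0.4)
    tac += np.random.default_rng(0).normal(0.0, 0.02, tac.shape)
    (K1_hat, k2_hat), _ = curve_fit(one_tissue_tac, t, tac, p0=[0.5, 0.5])
    print(K1_hat, k2_hat)                         # close to the simulated 0.8, 0.4
    ```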

  7. Quantitative assessment of a spatial multicriteria model for highly pathogenic avian influenza H5N1 in Thailand, and application in Cambodia

    PubMed Central

    Paul, Mathilde C.; Goutard, Flavie L.; Roulleau, Floriane; Holl, Davun; Thanapongtharm, Weerapong; Roger, François L.; Tran, Annelise

    2016-01-01

    The Highly Pathogenic Avian Influenza H5N1 (HPAI) virus is now considered endemic in several Asian countries. In Cambodia, the virus has been circulating in the poultry population since 2004, with a dramatic effect on farmers’ livelihoods and public health. In Thailand, surveillance and control are still important to prevent any new H5N1 incursion. Risk mapping can contribute effectively to disease surveillance and control systems, but is a very challenging task in the absence of reliable disease data. In this work, we used spatial multicriteria decision analysis (MCDA) to produce risk maps for HPAI H5N1 in poultry. We aimed to i) evaluate the performance of the MCDA approach to predict areas suitable for H5N1 based on a dataset from Thailand, comparing the predictive capacities of two sources of a priori knowledge (literature and experts), and ii) apply the best method to produce a risk map for H5N1 in poultry in Cambodia. Our results showed that the expert-based model had a very high predictive capacity in Thailand (AUC = 0.97). Applied in Cambodia, MCDA mapping made it possible to identify hotspots suitable for HPAI H5N1 in the Tonlé Sap watershed, around the cities of Battambang and Kampong Cham, and along the Vietnamese border. PMID:27489997
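
    A minimal sketch of MCDA risk mapping by weighted linear combination, with hypothetical factor rasters, weights, and outbreak data (the study's factors and expert-elicited weights are not reproduced here), validated with the same AUC criterion:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def mcda_risk_map(factors, weights):
        """Weighted linear combination of min-max normalized risk-factor rasters."""
        risk = np.zeros_like(factors[0], dtype=float)
        for f, w in zip(factors, weights):
            f_norm = (f - f.min()) / (f.max() - f.min())  # rescale to [0, 1]
            risk += w * f_norm
        return risk

    # Hypothetical factor rasters (e.g. poultry density, distance to water, ...)
    rng = np.random.default_rng(3)
    factors = [rng.random((50, 50)) for _ in range(3)]
    risk = mcda_risk_map(factors, weights=[0.5, 0.3, 0.2])

    # Predictive capacity against hypothetical known outbreak cells, as an AUC
    outbreaks = (rng.random((50, 50)) < 0.1).astype(int)
    print(roc_auc_score(outbreaks.ravel(), risk.ravel()))
    ```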

  8. Patient-specific coronary territory maps

    NASA Astrophysics Data System (ADS)

    Beliveau, Pascale; Setser, Randolph; Cheriet, Farida; O'Donnell, Thomas

    2007-03-01

    It is standard practice for physicians to rely on empirical, population-based models to define the relationship between regions of left ventricular (LV) myocardium and the coronary arteries which supply them with blood. Physicians use these models to infer the presence and location of disease within the coronary arteries based on the condition of the myocardium within their distribution (which can be established non-invasively using imaging techniques such as ultrasound or magnetic resonance imaging). However, coronary artery anatomy often varies from the assumed model distribution in the individual patient; thus, a non-invasive method to determine the correspondence between coronary artery anatomy and LV myocardium would have immediate clinical impact. This paper introduces an image-based rendering technique for visualizing maps of coronary distribution in a patient-specific manner. From an image volume derived from computed tomography (CT) images, a segmentation of the LV epicardial surface, as well as the paths of the coronary arteries, is obtained. These paths form seed points for a competitive region growing algorithm applied to the surface of the LV. A ray casting procedure in spherical coordinates from the center of the LV is then performed. The cast rays are mapped to a two-dimensional circular surface, forming our coronary distribution map. We applied our technique to a patient with known coronary artery disease, and a qualitative evaluation by an expert in coronary cardiac anatomy showed promising results.
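
    A minimal sketch of competitive region growing from artery seed points, written as a multi-source breadth-first search on a flat grid standing in for the segmented LV epicardial surface:

    ```python
    import numpy as np
    from collections import deque

    def competitive_region_growing(shape, seeds):
        """Each cell is claimed by the first artery front to reach it.

        seeds: dict mapping artery label -> list of (row, col) seed points
        Returns a label map assigning every cell to one artery territory.
        """
        labels = np.zeros(shape, dtype=int)       # 0 = unclaimed
        queue = deque()
        for label, points in seeds.items():
            for r, c in points:
                labels[r, c] = label
                queue.append((r, c))
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < shape[0] and 0 <= cc < shape[1] and labels[rr, cc] == 0:
                    labels[rr, cc] = labels[r, c]
                    queue.append((rr, cc))
        return labels

    # Hypothetical seeds for three coronary arteries (1 = LAD, 2 = LCx, 3 = RCA)
    territories = competitive_region_growing(
        (64, 64), {1: [(5, 10)], 2: [(30, 55)], 3: [(60, 20)]})
    print(np.unique(territories))                 # -> [1 2 3]
    ```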

  9. Direct estimation of tracer-kinetic parameter maps from highly undersampled brain dynamic contrast enhanced MRI.

    PubMed

    Guo, Yi; Lingala, Sajan Goud; Zhu, Yinghua; Lebel, R Marc; Nayak, Krishna S

    2017-10-01

    The purpose of this work was to develop and evaluate a T1-weighted dynamic contrast enhanced (DCE) MRI methodology in which tracer-kinetic (TK) parameter maps are directly estimated from undersampled (k,t)-space data. The proposed reconstruction involves solving a nonlinear least squares optimization problem that includes explicit use of a full forward model to convert parameter maps to (k,t)-space, utilizing the Patlak TK model. The proposed scheme is compared against an indirect method that creates intermediate images by parallel imaging and compressed sensing prior to TK modeling. Thirteen fully sampled brain tumor DCE-MRI scans with 5-second temporal resolution were retrospectively undersampled at rates R = 20, 40, 60, 80, and 100 for each dynamic frame. TK maps are quantitatively compared based on root mean squared error (rMSE) and Bland-Altman analysis. The approach is also applied to four prospectively R = 30 undersampled whole-brain DCE-MRI data sets. In the retrospective study, the proposed method performed statistically better than the indirect method at R ≥ 80 for all 13 cases. This approach provided restoration of TK parameter values with smaller errors in tumor regions of interest, an improvement compared to a state-of-the-art indirect method. Applied prospectively, the proposed method provided whole-brain, high-resolution TK maps with good image quality. Model-based direct estimation of TK maps from (k,t)-space DCE-MRI data is feasible and is compatible with up to 100-fold undersampling. Magn Reson Med 78:1566-1578, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
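
    For reference, the Patlak TK model named above is linear in its parameters, so the image-domain (indirect) fit reduces to ordinary least squares. A minimal sketch with hypothetical curves:

    ```python
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def patlak_fit(Ct, Cp, t):
        """Estimate Patlak parameters (Ktrans, vp) by linear least squares.

        Model: Ct(t) = Ktrans * integral_0^t Cp(tau) dtau + vp * Cp(t)
        """
        int_Cp = cumulative_trapezoid(Cp, t, initial=0.0)
        A = np.column_stack([int_Cp, Cp])         # design matrix
        params, *_ = np.linalg.lstsq(A, Ct, rcond=None)
        return params                             # [Ktrans, vp]

    # Hypothetical input function sampled at the 5-second frame rate
    t = np.arange(0.0, 300.0, 5.0)                # seconds
    Cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)     # illustrative AIF
    int_Cp = cumulative_trapezoid(Cp, t, initial=0.0)
    Ct = 0.002 * int_Cp + 0.05 * Cp               # tissue curve: Ktrans = 0.002/s, vp = 0.05
    print(patlak_fit(Ct, Cp, t))                  # recovers ~[0.002, 0.05]
    ```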

  10. Force-field functor theory: classical force-fields which reproduce equilibrium quantum distributions

    PubMed Central

    Babbush, Ryan; Parkhill, John; Aspuru-Guzik, Alán

    2013-01-01

    Feynman and Hibbs were the first to variationally determine an effective potential whose associated classical canonical ensemble approximates the exact quantum partition function. We examine the existence of a map between the local potential and an effective classical potential which matches the exact quantum equilibrium density and partition function. The usefulness of such a mapping rests in its ability to readily improve Born-Oppenheimer potentials for use with classical sampling. We show that such a map is unique and must exist. To explore the feasibility of using this result to improve classical molecular mechanics, we numerically produce a map from a library of randomly generated one-dimensional potential/effective potential pairs and then evaluate its performance on independent test problems. We also apply the map to simulate liquid para-hydrogen, finding that the resulting radial pair distribution functions agree well with path integral Monte Carlo simulations. The surprising accessibility and transferability of the technique suggest a quantitative route to adapting Born-Oppenheimer potentials, with a motivation similar in spirit to the powerful ideas and approximations of density functional theory. PMID:24790954

  11. Probabilistic flood extent estimates from social media flood observations

    NASA Astrophysics Data System (ADS)

    Brouwer, Tom; Eilander, Dirk; van Loenen, Arnejan; Booij, Martijn J.; Wijnberg, Kathelijne M.; Verkade, Jan S.; Wagemaker, Jurjen

    2017-05-01

    The increasing number and severity of floods, driven by phenomena such as urbanization, deforestation, subsidence and climate change, create a growing need for accurate and timely flood maps. In this paper we present and evaluate a method to create deterministic and probabilistic flood maps from Twitter messages that mention locations of flooding. A deterministic flood map created for the December 2015 flood in the city of York (UK) showed good performance (F2 = 0.69, a statistic ranging from 0 to 1, with 1 expressing a perfect fit with the validation data). The probabilistic flood maps we created showed that, in the York case study, the uncertainty in flood extent was mainly induced by errors in the precise locations of flood observations as derived from Twitter data. Errors in the terrain elevation data or in the parameters of the applied algorithm contributed less to flood extent uncertainty. Although these maps tended to overestimate the actual probability of flooding, they gave a reasonable representation of flood extent uncertainty in the area. This study illustrates that inherently uncertain data from social media can be used to derive information about flooding.
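
    The F2 statistic quoted above is the F-beta score with beta = 2, which weights recall more heavily than precision. A minimal sketch from hypothetical confusion counts between a predicted flood extent and a validation map:

    ```python
    def f_beta(tp, fp, fn, beta=2.0):
        """F-beta score; beta = 2 favors recall over precision."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

    # Hypothetical pixel counts: true positives, false positives, false negatives
    print(f_beta(tp=690, fp=220, fn=180))
    ```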

  12. AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation

    PubMed Central

    Yuan, Xin; Martínez-Ortega, José-Fernán; Fernández, José Antonio Sánchez; Eckert, Martina

    2017-01-01

    In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. A detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is to enhance the accuracy and robustness of SLAM-based navigation for underwater robotics at low computational cost. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Hereby, the prediction and update stages (which exist in the conventional EKF as well) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, the AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. In the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, which avoids the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method achieves reliable map revisiting and consistent map updating on loop closure. PMID:28531135
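
    A minimal sketch of the state-augmentation idea, in which a newly observed landmark is appended to the SLAM state vector and covariance; the AEKF's propagation of cross-covariances through the measurement Jacobians is omitted here:

    ```python
    import numpy as np

    def augment_state(x, P, landmark, R_landmark):
        """Append a new landmark estimate to the state and covariance.

        x: state vector [robot pose..., landmarks...]
        P: state covariance
        landmark: initial (x, y) estimate of the new landmark
        R_landmark: 2x2 initial covariance of that estimate
        """
        n = x.size
        x_aug = np.concatenate([x, landmark])
        P_aug = np.zeros((n + 2, n + 2))
        P_aug[:n, :n] = P
        P_aug[n:, n:] = R_landmark        # cross-correlation initialized to zero here
        return x_aug, P_aug

    x = np.array([1.0, 2.0, 0.1])         # robot pose (x, y, heading)
    P = np.eye(3) * 0.01
    x_aug, P_aug = augment_state(x, P, np.array([5.0, 7.0]), np.eye(2) * 0.5)
    print(x_aug.shape, P_aug.shape)       # (5,) (5, 5)
    ```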

  13. Lateral spread hazard mapping of the northern Salt Lake Valley, Utah, for a M7.0 scenario earthquake

    USGS Publications Warehouse

    Olsen, M.J.; Bartlett, S.F.; Solomon, B.J.

    2007-01-01

    This paper describes the methodology used to develop a lateral spread-displacement hazard map for the northern Salt Lake Valley, Utah, using a scenario M7.0 earthquake occurring on the Salt Lake City segment of the Wasatch fault. The mapping effort is supported by a substantial amount of geotechnical, geologic, and topographic data compiled for the Salt Lake Valley, Utah. ArcGIS routines created for the mapping project input this information to perform site-specific lateral spread analyses using methods developed by Bartlett and Youd (1992) and Youd et al. (2002) at individual borehole locations. The distributions of predicted lateral spread displacements from the boreholes located spatially within a geologic unit were subsequently used to map the hazard for that particular unit. The mapped displacement zones consist of low hazard (0-0.1 m), moderate hazard (0.1-0.3 m), high hazard (0.3-1.0 m), and very high hazard (>1.0 m). As expected, the produced map shows the highest hazard in the alluvial deposits at the center of the valley and in sandy deposits close to the fault. This mapping effort is currently being applied to the southern part of the Salt Lake Valley, Utah, and probabilistic maps are being developed for the entire valley. © 2007, Earthquake Engineering Research Institute.
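
    A minimal sketch of assigning predicted displacements to the four mapped hazard zones, using the displacement thresholds given above:

    ```python
    import numpy as np

    # Zone boundaries (m) from the mapping scheme:
    # low (0-0.1), moderate (0.1-0.3), high (0.3-1.0), very high (>1.0)
    edges = [0.1, 0.3, 1.0]
    zones = np.array(["low", "moderate", "high", "very high"])

    displacements = np.array([0.05, 0.25, 0.7, 1.8])   # hypothetical predictions (m)
    print(zones[np.digitize(displacements, edges)])
    # -> ['low' 'moderate' 'high' 'very high']
    ```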

  14. An Adaptive Scheme for Robot Localization and Mapping with Dynamically Configurable Inter-Beacon Range Measurements

    PubMed Central

    Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal

    2014-01-01

    This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption. PMID:24776938

  16. Shadow Detection from Very High Resolution Satellite Image Using GrabCut Segmentation and Ratio-Band Algorithms

    NASA Astrophysics Data System (ADS)

    Kadhim, N. M. S. M.; Mourshed, M.; Bray, M. T.

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can further understanding of the built environment. However, to exploit shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopt two approaches which are considered current state-of-the-art shadow detection and segmentation algorithms, using WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refined the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, whose performance in detecting the shadow regions of urban objects was examined using the true-colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises only the visible spectrum range (RGB true colour), the results demonstrate that the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the shadow map derived from the Quickbird image indicates significant performance of the ratio algorithm. The differences in the characteristics of the two satellite imageries in terms of spatial and spectral resolution can play an important role in the estimation and detection of the shadows of urban objects.
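
    A minimal sketch of the ratio-band idea, assuming co-registered NIR and visible bands scaled to reflectance; the threshold direction and value are scene-dependent illustrative choices rather than the paper's calibrated settings:

    ```python
    import numpy as np

    def ratio_band_shadow_mask(nir, visible, threshold=0.8):
        """Flag pixels with a low NIR/visible ratio as shadow candidates.

        Dark-but-lit surfaces such as vegetation stay relatively bright in NIR,
        so their ratio is high; shadows are dark in both bands and score low.
        """
        ratio = nir / (visible + 1e-6)    # epsilon avoids division by zero
        return ratio < threshold

    rng = np.random.default_rng(7)
    nir, red = rng.random((100, 100)), rng.random((100, 100))
    mask = ratio_band_shadow_mask(nir, red)
    print(mask.mean())                    # fraction of shadow-candidate pixels
    ```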

  17. Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.

    PubMed

    Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas

    2008-01-01

    In this paper, we propose a generic framework for intersubject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points, as opposed to spatial registration, which solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we simultaneously perform pairwise registrations of corresponding time-points with the constraint of mapping the same physical points over time. We show that this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the inter-subject non-linear registration of 4D cardiac CT sequences.

  18. Analysis on bilateral hindlimb mapping in motor cortex of the rat by an intracortical microstimulation method.

    PubMed

    Seong, Han Yu; Cho, Ji Young; Choi, Byeong Sam; Min, Joong Kee; Kim, Yong Hwan; Roh, Sung Woo; Kim, Jeong Hoon; Jeon, Sang Ryong

    2014-04-01

    Intracortical microstimulation (ICMS) is a technique that was developed to derive the movement representation of the motor cortex. Although rats are now commonly used in motor mapping studies, the precise characteristics of the rat motor map, including its symmetry and consistency across animals, and the possibility of repeated stimulation have not yet been established. We performed bilateral hindlimb mapping of the motor cortex in six Sprague-Dawley rats using ICMS. ICMS was applied to the left and right cerebral hemispheres at 0.3 mm intervals vertically and horizontally from the bregma, and any movement of the hindlimbs was noted. The majority (80% ± 11%) of responses were not restricted to a single joint but occurred simultaneously at two or three hindlimb joints. The size and shape of the hindlimb motor cortex were variable among rats, but it was located on the convex side of the cerebral hemisphere in all rats. The results did not show symmetry according to specific joints in individual rats. In conclusion, the hindlimb representation in the rat motor cortex was conveniently mapped using ICMS, but its characteristics and inter-individual variability suggest that precise individual mapping is needed to clarify motor distribution in rats.

  19. Application of LiDAR Data to Assess the Landslide Susceptibility Map Using the Weights of Evidence Method - An Example from the Podhale Region (Southern Poland)

    NASA Astrophysics Data System (ADS)

    Kamiński, Mirosław

    2016-06-01

    Podhale is a region in southern Poland that forms the northernmost part of the Central Carpathian Mountains. It is characterized by a large number of landslides that threaten the local infrastructure. This article presents the application of LiDAR data and geostatistical methods to produce a landslide susceptibility map. A landslide inventory map was compiled using LiDAR data and field work. The Weights of Evidence method was applied to assess landslide susceptibility. The factors used for modeling were slope gradient, slope aspect, elevation, drainage density, fault density, lithology and curvature. All maps were subdivided into classes and then converted to grid format in ArcGIS 10.0. A conditional independence test was carried out to determine the factors that are conditionally independent of each other with respect to the landslides. As a result of the chi-square test, only five factors were used in the further GIS analysis: slope gradient, slope aspect, elevation, drainage density and lithology. From the final prediction results, it is concluded that the susceptibility map gives useful information both on the present instability of the area and on its possible future evolution, in agreement with the morphological evolution of the area.
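
    A minimal sketch of the Weights of Evidence computation for one binary factor layer, using the standard weights W+ = ln[P(B|L)/P(B|not L)] and W- = ln[P(not B|L)/P(not B|not L)], on hypothetical rasters:

    ```python
    import numpy as np

    def weights_of_evidence(factor, landslides):
        """Compute W+ and W- for a binary evidence layer against a landslide inventory."""
        p_b_l = (factor & landslides).sum() / landslides.sum()       # P(B | L)
        p_b_nl = (factor & ~landslides).sum() / (~landslides).sum()  # P(B | not L)
        w_plus = np.log(p_b_l / p_b_nl)
        w_minus = np.log((1.0 - p_b_l) / (1.0 - p_b_nl))
        return w_plus, w_minus

    rng = np.random.default_rng(5)
    steep = rng.random((200, 200)) > 0.6      # hypothetical slope-gradient class
    slides = rng.random((200, 200)) > 0.98    # hypothetical inventory cells
    print(weights_of_evidence(steep, slides))
    ```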

  20. Classifying the Diversity of Bus Mapping Systems

    NASA Astrophysics Data System (ADS)

    Said, Mohd Shahmy Mohd; Forrest, David

    2018-05-01

    This study represents the first stage of an investigation into understanding the nature of different approaches to mapping bus routes and bus networks, and how they may best be applied in different public transport situations. In many cities, bus services represent an important facet of easing traffic congestion and reducing pollution. However, with the entrenched car culture in many countries, persuading people to change their mode of transport is a major challenge. To promote this modal shift, people need to know what services are available and where (and when) they go. Bus service maps provide an invaluable element of suitable public transport information, but are often overlooked by transport planners and are under-researched by cartographers. The method consists of the creation of a map evaluation form and the assessment of published bus network maps. The analyses were completed by a combination of quantitative and qualitative analysis of various aspects of cartographic design and classification. This paper focuses on the resulting classification, which is illustrated by a series of examples. This classification will facilitate more in-depth investigations into the details of cartographic design for such maps and help direct areas for user evaluation.

  1. An Interactive Immersive Serious Game Application for Kunyu Quantu World Map

    NASA Astrophysics Data System (ADS)

    Peng, S.-T.; Hsu, S.-Y.; Hsieh, K.-C.

    2015-08-01

    In recent years, more and more digital technologies and innovative concepts are being applied to museum education. One of the concepts applied is the "serious game." A serious game is not designed for entertainment purposes but allows users to learn real-world cultural and educational knowledge in a virtual world through game-experiencing. The technologies applied in serious games are identical to those applied in entertainment games. Nowadays, interactive technology applications that track users' movements and gestures in physical space are developing rapidly and are extensively used in entertainment games, such as Kinect-based games. The ability to explore space via Kinect-based games can be incorporated into the design of a serious game. The ancient world map Kunyu Quantu, from the collection of the National Palace Museum, is therefore applied in serious game development. In general, an ancient world map does not only provide geographical information, but also contains museum knowledge. This particular ancient world map is excellent content to apply in games as teaching material. In the 17th century, it was first used by a missionary as a medium to teach the Kangxi Emperor the latest geographical and scientific knowledge from the West. The map also includes written biological and climatic knowledge. Therefore, this research aims to present the design of an interactive and immersive serious-game-based installation developed from the rich content of the Kunyu Quantu World Map, and to analyse visitors' experience in terms of real-world cultural knowledge learning and interactive responses.

  2. Image Fusion Applied to Satellite Imagery for the Improved Mapping and Monitoring of Coral Reefs: a Proposal

    NASA Astrophysics Data System (ADS)

    Gholoum, M.; Bruce, D.; Hazeam, S. Al

    2012-07-01

    A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is biologically diverse and immense. It plays an important role in maintaining vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits such as fisheries, ecological goods and services, and recreational activities to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats, there is a strong management need for the mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water make it difficult to identify and classify their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems and proposes improved methods based on image fusion techniques. These image fusion techniques will be applied to satellite images exhibiting high spatial and low-to-medium spectral resolution together with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands while retaining the spatial detail needed to delineate corresponding spatial objects, which will potentially help to classify the image data accurately. Accuracy assessment using ground truth will be performed for the selected methods to determine the quality of the information derived from image classification. The research will be applied to Kuwait's southern coral reefs: Kubbar and Um Al-Maradim.

  3. Innovation and application of ANN in Europe demonstrated by Kohonen maps

    NASA Technical Reports Server (NTRS)

    Goser, Karl

    1994-01-01

    One of the most important contributions to neural networks comes from Kohonen, Helsinki/Espoo, Finland, who had the idea of self-organizing maps in 1981. He verified his idea with an algorithm that many applications now make use of. The impetus for this idea came from biology, a field where Europeans have always been very active at several research laboratories. The challenge was to model the self-organization found in the brain. Today one goal is the development of more sophisticated neurons which model biological neurons more exactly. These should lead to better performance of neural nets with only a few complex neurons instead of many simple ones. A lot of application concepts arise from this idea: Kohonen himself applied it to speech recognition, but at that time the project did not get much beyond the recognition of the numerals one to ten. A more promising application for self-organizing maps is process control and process monitoring. Several proposals were made concerning parameter classification of semiconductor technologies, design of integrated circuits, and control of chemical processes. Self-organizing maps were applied to robotics, and the neural concept was introduced into electric power systems. At Dortmund we are working on a system which monitors the quality and the reliability of gears and electrical motors in equipment installed in coal mines. The results are promising and the probability of applying the system in the field is very high. A special feature of the system is that linguistic rules embedded in a fuzzy controller analyze the data of the self-organizing map with regard to the life expectancy of the gears. It seems that the fuzzy technique will introduce the technology of neural networks in a tandem mode. These technologies, together with genetic algorithms, are starting to form the attractive field of computational intelligence.
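    For reference, the sketch below is a minimal numpy implementation of the self-organizing map algorithm the article discusses: for each input, find the best-matching unit and pull it and its grid neighbours toward the input while the learning rate and neighbourhood shrink. Grid size, schedules and data are illustrative choices, not values from the article.

```python
# Minimal self-organizing map (SOM) training loop; parameters are illustrative.
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    order = rng.permutation(n_steps) % len(data)       # shuffled presentation order
    for t, x in enumerate(data[order]):
        frac = t / n_steps
        lr = lr0 * (1.0 - frac)                        # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5            # shrinking neighbourhood
        # Best-matching unit: the neuron whose weight vector is closest to x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid)
        # A Gaussian neighbourhood pulls nearby neurons toward the input.
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        weights += lr * np.exp(-d2 / (2 * sigma ** 2))[..., None] * (x - weights)
    return weights

som = train_som(np.random.rand(500, 3))  # e.g. organize RGB colours on a 10x10 map
```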

  4. Mapping High Dimensional Sparse Customer Requirements into Product Configurations

    NASA Astrophysics Data System (ADS)

    Jiao, Yao; Yang, Yu; Zhang, Hongshan

    2017-10-01

    Mapping customer requirements into product configurations is a crucial step in product design, but customers express their needs ambiguously and partially due to a lack of domain knowledge. The data mining of customer requirements may therefore yield fragmentary information with high dimensional sparsity, exposing the mapping procedure to uncertainty and complexity. Expert judgment is widely applied against that background, since it imposes no formal requirements for systematic or structured data; however, there are concerns about its repeatability and bias. In this study, an integrated method combining an adjusted Local Linear Embedding (LLE) with a Naïve Bayes (NB) classifier is proposed to map high dimensional sparse customer requirements to product configurations. The integrated method adjusts classical LLE to preprocess the high dimensional sparse dataset so that it satisfies the prerequisites of NB for classifying different customer requirements into the corresponding product configurations. Compared with expert judgment, the adjusted LLE with NB performs much better, in both accuracy and robustness, in a real-world Tablet PC design case.
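    A minimal sketch of such a pipeline is shown below, using scikit-learn's standard LLE (not the paper's adjusted variant) followed by Gaussian NB; the synthetic sparse dataset, dimensions and neighbour counts are placeholders.

```python
# LLE dimensionality reduction followed by Naive Bayes classification (sketch).
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = (rng.random((300, 80)) < 0.05).astype(float)  # high-dimensional, sparse requirements
y = rng.integers(0, 4, 300)                       # four hypothetical product configurations

# Embed the sparse requirement vectors into a low-dimensional space ...
Z = LocallyLinearEmbedding(n_neighbors=10, n_components=5).fit_transform(X)

# ... then classify the embedded requirements into configurations with NB.
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0)
clf = GaussianNB().fit(Z_tr, y_tr)
print("accuracy:", clf.score(Z_te, y_te))  # near chance here, since labels are random
```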

  5. The importance of correctly characterising the English spelling system when devising and evaluating methods of reading instruction: Comment on Taylor, Davis, and Rastle (2017).

    PubMed

    Bowers, Jeffrey S; Bowers, Peter N

    2018-05-01

    Taylor, Davis, and Rastle employed an artificial language learning paradigm to compare phonics and meaning-based approaches to reading instruction. Adults were taught consonant-vowel-consonant (CVC) words composed of novel letters in which the mappings between letters and sounds were completely systematic and the mappings between letters and meaning were completely arbitrary. At test, performance on naming tasks was better following training that emphasised the phonological rather than the semantic mappings, whereas performance on semantic tasks was similar in the two conditions. The authors concluded that these findings support phonics for early reading instruction in English. However, in our view, these conclusions are not justified given that the artificial language mischaracterised both the phonological and semantic mappings in English. Furthermore, the way participants studied the arbitrary letter-meaning correspondences bears little relation to the meaning-based strategies used in schools. To compare phonics with meaning-based instruction, it must be determined whether phonics is better than alternative forms of instruction that fully exploit the regularities within the semantic route. This is rarely assessed because of a widespread and mistaken assumption that underpins so much basic and applied research, namely, that the main function of spellings is to represent sounds.

  6. Correction of Gradient Nonlinearity Bias in Quantitative Diffusion Parameters of Renal Tissue with Intra Voxel Incoherent Motion.

    PubMed

    Malyarenko, Dariya I; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K; Ross, Brian D; Chenevert, Thomas L

    2015-12-01

    Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from the magnet isocenter. Our previously described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near the isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted in both a shift and a broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to the isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements of an isotropic gel-flood phantom. The ADC bias in off-center measurements of both the phantom and renal tissue was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for the IVIM perfusion fraction.
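    The correction itself reduces, per voxel, to rescaling the measured diffusion parameter by a precomputed spatial bias factor. The sketch below illustrates that step under the assumption that the bias acts as a multiplicative factor c(r) on the ADC; the array names, shapes and values are placeholders, not the study's actual corrector maps.

```python
# Applying a precomputed 3D gradient-nonlinearity bias map to an ADC volume.
import numpy as np

def correct_adc(adc_measured: np.ndarray, bias_map: np.ndarray) -> np.ndarray:
    """Remove spatially varying DW bias: ADC_true ~= ADC_measured / c(r)."""
    return adc_measured / np.clip(bias_map, 1e-6, None)

true_adc = 1.8e-3                                          # mm^2/s, uniform phantom
bias = np.linspace(0.9, 1.1, 32)[None, None, :] * np.ones((64, 64, 32))
measured = true_adc * bias                                 # GNL skews off-center values
corrected = correct_adc(measured, bias)
print(measured.std(), corrected.std())                     # spread collapses to ~0
```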

  7. Correction of Gradient Nonlinearity Bias in Quantitative Diffusion Parameters of Renal Tissue with Intra Voxel Incoherent Motion

    PubMed Central

    Malyarenko, Dariya I.; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K.; Ross, Brian D.; Chenevert, Thomas L.

    2015-01-01

    Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from the magnet isocenter. Our previously described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near the isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted in both a shift and a broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to the isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements of an isotropic gel-flood phantom. The ADC bias in off-center measurements of both the phantom and renal tissue was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for the IVIM perfusion fraction. PMID:26811845

  8. Comparing the performance of various digital soil mapping approaches to map physical soil properties

    NASA Astrophysics Data System (ADS)

    Laborczi, Annamária; Takács, Katalin; Pásztor, László

    2015-04-01

    Spatial information on physical soil properties is in strong demand to support environment-related and land use management decisions. One of the most widely used properties to characterize soils physically is particle size distribution (PSD), which determines soil water management and cultivability. According to their size, different particles can be categorized as clay, silt, or sand. The size intervals are defined by national or international textural classification systems. The relative percentages of sand, silt, and clay in the soil constitute the textural classes, which are likewise specified differently across national and/or specialty systems. The most commonly used is the classification system of the United States Department of Agriculture (USDA). Soil texture information is essential input data in meteorological, hydrological and agricultural prediction modelling. Although Hungary has a great deal of legacy soil maps and other relevant soil information, it often occurs that no map exists for a certain characteristic with the required thematic and/or spatial representation. The recent developments in digital soil mapping (DSM), however, provide wide opportunities for the elaboration of object specific soil maps (OSSM) with predefined parameters (resolution, accuracy, reliability etc.). Due to the simultaneous richness of available Hungarian legacy soil data, spatial inference methods and auxiliary environmental information, there is great versatility in the possible approaches for the compilation of a given soil map. This suggests an opportunity for optimization. For the creation of an OSSM one might intend to identify the optimum set of soil data, method and auxiliary co-variables optimized for the resources (data costs, computation requirements etc.). We started a comprehensive analysis of the effects of the various DSM components on the accuracy of the output maps on pilot areas. The aim of this study is to compare and evaluate different digital soil mapping methods and sets of ancillary variables for producing the most accurate spatial prediction of texture classes in a given area of interest. Both legacy and recently collected data on PSD were used as reference information. The predictor variable data set consisted of a digital elevation model and its derivatives, lithology, land use maps as well as various bands and indices of satellite images. Two conceptually different approaches can be applied in the mapping process. Textural classification can be carried out after the particle size data have been spatially extended by a proper geostatistical method. Alternatively, the textural classification is carried out first, followed by the spatial extension through a suitable data mining method. According to the first approach, maps of sand, silt and clay percentage have been computed through regression kriging (RK). Since the three maps are compositional (their sum must be 100%), we applied the Additive Log-Ratio (alr) transformation, instead of kriging them independently. Finally, the texture class map has been compiled according to the USDA categories from the three maps. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps. Following the second approach, the PSD data were first classified into the USDA categories, and the texture class maps were then compiled directly by data mining methods (classification trees and random forests). The various results were compared to each other as well as to the RK maps.
The performance of the different methods and data sets has been examined by testing the accuracy of the geostatistically computed and the directly classified results to assess the most predictive and accurate method. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
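    The compositional step mentioned above (kriging sand/silt/clay so the predictions still sum to 100%) can be made concrete with the additive log-ratio transformation: interpolate in alr space, then back-transform. The sketch below shows the forward and inverse transforms with a trivial stand-in for the spatial interpolation; the data and the choice of clay as the divisor component are illustrative.

```python
# Additive log-ratio (alr) transform for compositional sand/silt/clay data.
import numpy as np

def alr(comp: np.ndarray) -> np.ndarray:
    """comp: (n, 3) sand/silt/clay fractions summing to 1; clay is the divisor."""
    return np.log(comp[:, :2] / comp[:, 2:3])

def alr_inv(z: np.ndarray) -> np.ndarray:
    """Back-transform alr coordinates into fractions that sum to 1 again."""
    expz = np.exp(z)
    denom = 1.0 + expz.sum(axis=1, keepdims=True)
    return np.hstack([expz / denom, 1.0 / denom])

comp = np.array([[0.40, 0.40, 0.20],
                 [0.60, 0.25, 0.15]])
z = alr(comp)                              # interpolate these coordinates spatially,
z_pred = z.mean(axis=0, keepdims=True)     # e.g. by regression kriging (placeholder)
print(alr_inv(z_pred), alr_inv(z_pred).sum())   # valid composition, sums to 1
```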

  9. Applied cartographic communication: map symbolization for atlases.

    USGS Publications Warehouse

    Morrison, J.L.

    1984-01-01

    A detailed investigation of the symbolization used on general-purpose atlas reference maps, indicating how theories of cartographic communication can be put into practice. Two major points emerge: first, that a logical scheme can be constructed from existing cartographic research and applied to an analysis of the choice of symbolization on a map; second, that the same structure appears to allow the cartographer to specify symbolization as part of map design. An introductory review of cartographic communication is followed by an analysis of selected maps' use of point, area and line symbols, boundaries, text and colour.

  10. Applications systems verification and transfer project. Volume 8: Satellite snow mapping and runoff prediction handbook

    NASA Technical Reports Server (NTRS)

    Bowley, C. J.; Barnes, J. C.; Rango, A.

    1981-01-01

    The purpose of the handbook is to update the various snowcover interpretation techniques, document the snow mapping techniques used in the various ASVT study areas, and describe the ways snowcover data have been applied to runoff prediction. Through documentation in handbook form, the methodology developed in the Snow Mapping ASVT can be applied to other areas.

  11. Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale

    DOE PAGES

    Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...

    2017-01-26

    Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.

  12. Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.

    Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.

  13. Scaling and Graphical Transport-Map Analysis of Ambipolar Schottky-Barrier Thin-Film Transistors Based on a Parallel Array of Si Nanowires.

    PubMed

    Jeon, Dae-Young; Pregl, Sebastian; Park, So Jeong; Baraban, Larysa; Cuniberti, Gianaurelio; Mikolajick, Thomas; Weber, Walter M

    2015-07-08

    Si nanowire (Si-NW) based thin-film transistors (TFTs) have been considered a promising candidate for next-generation flexible and wearable electronics as well as high-performance sensor applications. Here, we have fabricated ambipolar Schottky-barrier (SB) TFTs consisting of a parallel array of Si-NWs and performed an in-depth study of their electrical performance and operation mechanism through several electrical parameters extracted with a channel-length-scaling-based method. In particular, the newly suggested current-voltage (I-V) contour map clearly elucidates the unique operation mechanism of the ambipolar SB-TFTs, governed by the Schottky junction between NiSi2 and Si-NW. Further, it reveals, for the first time in SB-based FETs, the important internal electrostatic coupling between the channel and externally applied voltages. This work provides helpful information for the realization of practical circuits with ambipolar SB-TFTs that can be transferred to different substrate technologies and applications.

  14. A Kinect based sign language recognition system using spatio-temporal features

    NASA Astrophysics Data System (ADS)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion difference and accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images and the temporal-domain features are transformed into the spatial domain. These processes are performed on both RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is applied. The performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed sign language recognition system achieves promising success rates.
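    The feature extraction chain described above (frame differencing, motion accumulation, 2D DCT, zigzag selection) can be sketched compactly; frame counts, image size, the number of retained coefficients, and the zigzag ordering below are illustrative, not the paper's exact settings.

```python
# Accumulated-motion image -> 2D DCT -> zigzag feature vector (sketch).
import numpy as np
from scipy.fftpack import dct

def accumulate_motion(frames: np.ndarray) -> np.ndarray:
    """Sum absolute differences of successive frames into one motion image."""
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

def zigzag(block: np.ndarray, k: int) -> np.ndarray:
    """Return the first k coefficients scanned diagonally from the top-left."""
    h, w = block.shape
    idx = sorted(((i, j) for i in range(h) for j in range(w)),
                 key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in idx[:k]])

frames = np.random.rand(30, 64, 64)                  # one sign gesture clip
motion = accumulate_motion(frames)
coeffs = dct(dct(motion, axis=0, norm="ortho"), axis=1, norm="ortho")
feature = zigzag(coeffs, k=64)                       # compact descriptor
print(feature.shape)                                 # (64,) -> k-NN, Manhattan distance
```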

  15. Synthetic vision display evaluation studies

    NASA Technical Reports Server (NTRS)

    Regal, David M.; Whittington, David H.

    1994-01-01

    The goal of this research was to help us understand the display requirements for a synthetic vision system for the High Speed Civil Transport (HSCT). Four experiments were conducted to examine the effects of different levels of perceptual cue complexity in displays used by pilots in a flare and landing task. Increased levels of texture mapping of terrain and runway produced mixed results, including harder but shorter landings and a lower flare initiation altitude. Under higher workload conditions, increased texture resulted in an improvement in performance. An increase in familiar size cues did not result in improved performance. Only a small difference was found between displays using two patterns of high resolution texture mapping. The effects of increased perceptual cue complexity on performance were not as strong as would be predicted from the pilots' subjective reports or from the related literature. A description of the role of a synthetic vision system in the High Speed Civil Transport is provided, along with a literature review covering applied research related to perceptual cue usage in aircraft displays.

  16. Sentinel node mapping for gastric cancer: a prospective multicenter trial in Japan.

    PubMed

    Kitagawa, Yuko; Takeuchi, Hiroya; Takagi, Yu; Natsugoe, Shoji; Terashima, Masanori; Murakami, Nozomu; Fujimura, Takashi; Tsujimoto, Hironori; Hayashi, Hideki; Yoshimizu, Nobunari; Takagane, Akinori; Mohri, Yasuhiko; Nabeshima, Kazuhito; Uenosono, Yoshikazu; Kinami, Shinichi; Sakamoto, Junichi; Morita, Satoshi; Aikou, Takashi; Miwa, Koichi; Kitajima, Masaki

    2013-10-10

    Complicated gastric lymphatic drainage potentially undermines the utility of sentinel node (SN) biopsy in patients with gastric cancer. Encouraged by several favorable single-institution reports, we conducted a multicenter, single-arm, phase II study of SN mapping that used a standardized dual tracer endoscopic injection technique. Patients with previously untreated cT1 or cT2 gastric adenocarcinomas < 4 cm in gross diameter were eligible for inclusion in this study. SN mapping was performed by using a standardized dual tracer endoscopic injection technique. Following biopsy of the identified SNs, mandatory comprehensive D2 or modified D2 gastrectomy was performed according to current Japanese Gastric Cancer Association guidelines. Among 433 patients who gave preoperative consent, 397 were deemed eligible on the basis of surgical findings. SN biopsy was performed in all patients, and the SN detection rate was 97.5% (387 of 397). Of 57 patients with lymph node metastasis by conventional hematoxylin and eosin staining, 93% (53 of 57) had positive SNs, and the accuracy of nodal evaluation for metastasis was 99% (383 of 387). Only four false-negative SN biopsies were observed, and pathologic analysis revealed that three of those biopsies were pT2 or tumors > 4 cm. We observed no serious adverse effects related to endoscopic tracer injection or the SN mapping procedure. The endoscopic dual tracer method for SN biopsy was confirmed as safe and effective when applied to the superficial, relatively small gastric adenocarcinomas included in this study.

  17. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.

    PubMed

    Zhong, Jiandan; Lei, Tao; Yao, Guangle

    2017-11-24

    Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on the sliding-window paradigm were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied to aerial image data because existing CNN-based models struggle with small-size object detection and precise localization. To improve detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of deep and shallow convolutional layers, the first network performs well at locating small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and the Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.

  18. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks

    PubMed Central

    Zhong, Jiandan; Lei, Tao; Yao, Guangle

    2017-01-01

    Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on the sliding-window paradigm were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied to aerial image data because existing CNN-based models struggle with small-size object detection and precise localization. To improve detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of deep and shallow convolutional layers, the first network performs well at locating small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and the Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed. PMID:29186756

  19. Mapping cerebrovascular reactivity using concurrent fMRI and near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Tong, Yunjie; Bergethon, Peter R.; Frederick, Blaise d.

    2011-02-01

    Cerebrovascular reactivity (CVR) reflects the compensatory dilatory capacity of the cerebral vasculature to a dilatory stimulus and is an important indicator of brain vascular reserve. fMRI has proven to be an effective imaging technique for obtaining the CVR map when subjects perform CO2 inhalation or a breath holding task (BH). However, the traditional data analysis inaccurately models the BOLD response using a boxcar function with a fixed time delay. We propose a novel way to process fMRI data obtained during a blocked BH by using simultaneously collected near infrared spectroscopy (NIRS) data as the regressor. In this concurrent NIRS and fMRI study, 6 healthy subjects performed a blocked BH (5 breath holds of 20 s duration separated by 40 s of regular breathing). A NIRS probe with two sources and two detectors separated by 3 cm was placed on the right prefrontal area of the subjects. The time course of changes in oxy-hemoglobin (Δ[HbO]) was calculated from the NIRS data, shifted in time by various amounts, and resampled to the fMRI acquisition rate. Each shifted time course was used as a regressor in FEAT (the analysis tool in FSL). The resulting z-statistic maps were concatenated in time and the maximal value over time was taken for each voxel to generate a 3-D CVR map. The new method produces more accurate and thorough CVR maps; moreover, it enables us to produce a comparable baseline cerebral vascular map if applied to resting state (RS) data.
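    The core of the method (sweep the NIRS regressor over a range of lags, fit each voxel, keep the best statistic) can be sketched as below. This uses a simple correlation-style statistic and circular shifts in plain numpy rather than FEAT's GLM machinery; the lag range and array names are assumptions.

```python
# Lagged-regressor CVR mapping sketch: best fit statistic per voxel across lags.
import numpy as np

def cvr_map(bold: np.ndarray, hbo: np.ndarray, lags: range) -> np.ndarray:
    """bold: (T, V) voxel time series; hbo: (T,) regressor at the fMRI rate."""
    T, V = bold.shape
    best = np.full(V, -np.inf)
    centered = bold - bold.mean(axis=0)
    for lag in lags:
        reg = np.roll(hbo, lag)                        # circular shift as a simple lag
        reg = (reg - reg.mean()) / reg.std()
        stat = centered.T @ reg / (T * bold.std(axis=0) + 1e-12)  # correlation-like
        best = np.maximum(best, stat)                  # keep the maximum across lags
    return best

bold = np.random.randn(200, 5000)                      # 200 volumes, 5000 voxels
hbo = np.random.randn(200)                             # resampled Δ[HbO] time course
print(cvr_map(bold, hbo, range(-10, 11)).shape)        # one CVR value per voxel
```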

  20. Towards improved hardware component attenuation correction in PET/MR hybrid imaging

    NASA Astrophysics Data System (ADS)

    Paulus, D. H.; Tellmann, L.; Quick, H. H.

    2013-11-01

    In positron emission tomography/computed tomography (PET/CT) hybrid imaging, attenuation correction (AC) of the patient tissue and patient table is performed by converting the CT-based Hounsfield units (HU) to the linear attenuation coefficients (LAC) of PET. When applied to the new field of hardware component AC in PET/magnetic resonance (MR) hybrid imaging, this conversion method may result in local overcorrection of PET activity values. The aim of this study was thus to optimize the conversion parameters for CT-based AC of hardware components in PET/MR. Systematic evaluation and optimization of the HU-to-LAC conversion parameters was performed for the hardware component attenuation map (µ-map) of a flexible radiofrequency (RF) coil used in PET/MR imaging. Furthermore, spatial misregistration of this RF coil to its µ-map was simulated by shifting the µ-map in different directions, and the effect on PET quantification was evaluated. Measurements of a PET NEMA standard emission phantom were performed on an integrated hybrid PET/MR system. Various CT parameters were used to calculate different µ-maps for the flexible RF coil and to evaluate the impact on the PET activity concentration. A 511 keV transmission scan of the local RF coil was used as the standard of reference to adapt the slope of the conversion from HUs to LACs at 511 keV. The average underestimation of the PET activity concentration due to the non-attenuation-corrected RF coil in place was calculated to be 5.0% in the overall phantom. When considering attenuation only in the upper volume of the phantom, the average difference from the reference scan without the RF coil is 11.0%. When the PET/CT conversion is applied, an average overestimation of 3.1% (without extended CT scale) and 4.2% (with extended CT scale) is observed in the top volume of the NEMA phantom. Using the adapted conversion resulting from this study, the deviation in the top volume of the phantom is reduced to -0.5% with the lowest standard deviation inside the phantom in comparison to all other conversions. Simulation of a µ-map misregistration shows acceptable results for shifts below 5 mm for the flexible surface RF coil. The adapted conversion from HUs to LACs at 511 keV can thus improve hardware component AC in PET/MR hybrid imaging, as shown for a flexible RF surface coil. Furthermore, these results have a direct impact on the improvement of the hardware component AC of the examined flexible RF coil in conjunction with position determination.
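    The HU-to-LAC conversion at the heart of this problem is conventionally piecewise linear. The sketch below shows a generic bilinear conversion with typical literature-style constants for 511 keV; the breakpoint and slopes are assumptions for illustration, not the adapted coefficients derived in the study.

```python
# Generic piecewise-linear (bilinear) HU -> LAC conversion at 511 keV (sketch).
import numpy as np

def hu_to_lac(hu: np.ndarray) -> np.ndarray:
    """Convert CT Hounsfield units to 511 keV linear attenuation (1/cm)."""
    mu_water = 0.096                        # assumed LAC of water at 511 keV, 1/cm
    lac = mu_water * (1.0 + hu / 1000.0)    # air-to-water segment
    bone = hu > 0                           # separate slope for bone-like HU
    lac[bone] = mu_water + hu[bone] * 4.43e-5   # assumed bone-segment slope
    return np.clip(lac, 0.0, None)

hu = np.array([-1000.0, -100.0, 0.0, 500.0, 1500.0])
print(hu_to_lac(hu))    # air -> ~0, water -> 0.096, bone -> larger LAC
```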

  1. Intraoperative language localization in multilingual patients with gliomas.

    PubMed

    Bello, Lorenzo; Acerbi, Francesco; Giussani, Carlo; Baratta, Pietro; Taccone, Paolo; Songa, Valeria; Fava, Marica; Stocchetti, Nino; Papagno, Costanza; Gaini, Sergio M

    2006-07-01

    Intraoperative localization of speech is problematic in patients who are fluent in different languages. Previous studies have generated varying results depending on the series of patients studied, the type of language, and the sensitivity of the tasks applied. It is not clear whether languages are mediated by multiple separate cortical areas or shared by common areas. Taken together, previous studies recommended performing multiple intraoperative mappings for all the languages in which the patient is fluent. The aim of this work was to study the feasibility of performing intraoperative multiple language mapping in a group of multilingual patients with a glioma undergoing awake craniotomy for tumor removal, and to describe the intraoperative cortical and subcortical findings in the area of craniotomy, with the final goal of maximally preserving patients' functional language. Seven late, highly proficient multilingual patients with a left frontal glioma underwent preoperatively a battery of tests to evaluate oral language production, comprehension, and repetition. Each language was tested serially, starting from the first acquired language. Items that were correctly named during these tests were used to build personalized blocks for intraoperative use. Language mapping was undertaken during awake craniotomies with an Ojemann cortical stimulator during counting and oral naming tasks. Subcortical stimulation using the same current threshold and the same tests was applied during tumor resection, in a back-and-forth fashion. Cortical sites essential for oral naming were found in 87.5% of patients, those for the first acquired language in one to four sites, and those for the other languages in one to three sites. Sites for each language were distinct and separate. The number and location of sites were not predictable, being randomly and widely distributed in the cortex around or, less frequently, over the tumor area. Subcortical stimulation found tracts for the first acquired language in four patients and for the other languages in three patients. Three of these patients showed decreased fluency immediately after surgery, affecting the first acquired language, which recovered fully in two patients and partially in one. The procedure was agile and well tolerated by the patients. These findings show that multiple cortical and subcortical language mapping during awake craniotomy for tumor removal is a feasible procedure. They support the concept that intraoperative mapping should be performed for all the languages in which the patient is fluent, to preserve functional integrity.

  2. Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Mandrake, Lukas; Green, Robert O.

    2013-01-01

    Mapping localized spectral features in large images demands sensitive and robust detection algorithms. Two aspects of large images that can harm matched-filter detection performance are addressed simultaneously. First, multimodal backgrounds may thwart the typical Gaussian model. Second, outlier features can trigger false detections through large projections onto the target vector. Two state-of-the-art approaches that independently address outlier false positives and multimodal backgrounds are combined: background clustering models multimodal backgrounds, and the mixture-tuned matched filter (MT-MF) addresses outliers. Combining the two methods captures significant additional performance benefits. The resulting mixture-tuned clutter matched filter (MT-CMF) shows effective performance on simulated and airborne datasets. The classical MNF transform was applied, followed by k-means clustering. Then, each cluster's mean, covariance, and the corresponding eigenvalues were estimated. This yields a cluster-specific matched filter estimate as well as a cluster-specific feasibility score to flag outlier false positives. The technology described is a proof of concept that may be employed in future target detection and mapping applications for remote imaging spectrometers. It is of most direct relevance to JPL proposals for airborne and orbital hyperspectral instruments. Applications include subpixel target detection in hyperspectral scenes for military surveillance. Earth science applications include mineralogical mapping, species discrimination for ecosystem health monitoring, and land use classification.
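    A minimal sketch of the cluster-conditional part of this idea is given below: cluster the background, then apply a per-cluster matched filter built from that cluster's mean and covariance. The mixture-tuning feasibility score is omitted, and the cluster count, data and target signature are placeholders.

```python
# Cluster-wise (clutter) matched filter sketch; feasibility scoring omitted.
import numpy as np
from sklearn.cluster import KMeans

def clutter_matched_filter(X, target, n_clusters=3):
    """X: (n, bands) pixels; target: (bands,) signature. Returns MF scores."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    scores = np.zeros(len(X))
    for k in range(n_clusters):
        pix = X[labels == k]
        mu = pix.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(pix, rowvar=False))
        d = target - mu
        w = cov_inv @ d / (d @ cov_inv @ d)      # cluster-specific MF weights,
        scores[labels == k] = (pix - mu) @ w     # normalized to unit target response
    return scores

X = np.random.randn(2000, 20)                    # synthetic background, 20 bands
target = np.ones(20)                             # hypothetical target signature
print(clutter_matched_filter(X, target).max())
```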

  3. Mapping Environmental Contaminants at Ray Mine, AZ

    NASA Technical Reports Server (NTRS)

    McCubbin, Ian; Lang, Harold

    2000-01-01

    Airborne Visible and InfraRed Imaging Spectrometer (AVIRIS) data were collected over Ray Mine as part of a demonstration project for the Environmental Protection Agency (EPA) through the Advanced Measurement Initiative (AMI). The overall goal of AMI is to accelerate the adoption and application of advanced measurement technologies for cost-effective environmental monitoring. The site was selected to demonstrate the benefit to EPA of using advanced remote sensing technologies for the detection of environmental contaminants from the mineral extraction industry. The role of the Jet Propulsion Laboratory in this pilot study is to provide data as well as to perform calibration, data analysis, and validation of the AVIRIS results. EPA is also interested in developing protocols that use commercial software to perform such work on other high-priority EPA sites. Reflectance retrieval was performed using outputs generated by the MODTRAN radiative transfer model and field spectra collected for calibration purposes. We present advanced applications of the ENVI software package using n-Dimensional Partial Unmixing to identify image-derived endmembers that best match target materials' reference spectra from multiple spectral libraries. Upon identification of the image endmembers, the Mixture Tuned Match Filter algorithm was applied to map the endmembers within each scene. Using this technique it was possible to map four different mineral classes associated with mine-generated acid waste.

  4. High-resolution melt analysis to identify and map sequence-tagged site anchor points onto linkage maps: a white lupin (Lupinus albus) map as an exemplar.

    PubMed

    Croxford, Adam E; Rogers, Tom; Caligari, Peter D S; Wilkinson, Michael J

    2008-01-01

    * The provision of sequence-tagged site (STS) anchor points allows meaningful comparisons between mapping studies but can be a time-consuming process for nonmodel species or orphan crops. * Here, the first use of high-resolution melt analysis (HRM) to generate STS markers for use in linkage mapping is described. This strategy is rapid and low-cost, and circumvents the need for labelled primers or amplicon fractionation. * Using white lupin (Lupinus albus, x = 25) as a case study, HRM analysis was applied to identify 91 polymorphic markers from expressed sequence tag (EST)-derived and genomic libraries. Of these, 77 generated STS anchor points in the first fully resolved linkage map of the species. The map also included 230 amplified fragment length polymorphisms (AFLP) loci, spanned 1916 cM (84.2% coverage) and divided into the expected 25 linkage groups. * Quantitative trait loci (QTL) analyses performed on the population revealed genomic regions associated with several traits, including the agronomically important time to flowering (tf), alkaloid synthesis and stem height (Ph). Use of HRM-STS markers also allowed us to make direct comparisons between our map and that of the related crop, Lupinus angustifolius, based on the conversion of RFLP, microsatellite and single nucleotide polymorphism (SNP) markers into HRM markers.

  5. Condition Number as a Measure of Noise Performance of Diffusion Tensor Data Acquisition Schemes with MRI

    NASA Astrophysics Data System (ADS)

    Skare, Stefan; Hedehus, Maj; Moseley, Michael E.; Li, Tie-Qiang

    2000-12-01

    Diffusion tensor mapping with MRI can noninvasively track neural connectivity and has great potential for neural scientific research and clinical applications. For each diffusion tensor imaging (DTI) data acquisition scheme, the diffusion tensor is related to the measured apparent diffusion coefficients (ADC) by a transformation matrix. With theoretical analysis we demonstrate that the noise performance of a DTI scheme is dependent on the condition number of the transformation matrix. To test the theoretical framework, we compared the noise performances of different DTI schemes using Monte-Carlo computer simulations and experimental DTI measurements. Both the simulation and the experimental results confirmed that the noise performances of different DTI schemes are significantly correlated with the condition number of the associated transformation matrices. We therefore applied numerical algorithms to optimize a DTI scheme by minimizing the condition number, hence improving the robustness to experimental noise. In the determination of anisotropic diffusion tensors with different orientations, MRI data acquisitions using a single optimum b value based on the mean diffusivity can produce ADC maps with regional differences in noise level. This will give rise to rotational variances of eigenvalues and anisotropy when diffusion tensor mapping is performed using a DTI scheme with a limited number of diffusion-weighting gradient directions. To reduce this type of artifact, a DTI scheme with not only a small condition number but also a large number of evenly distributed diffusion-weighting gradients in 3D is preferable.
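    As an illustration of this criterion, the sketch below builds the transformation matrix that maps the six unique diffusion tensor elements to the ADCs measured along each gradient direction, and evaluates its condition number. The six-direction icosahedral scheme shown is a commonly cited example; its coordinates here are rounded, illustrative values.

```python
# Condition number of a DTI encoding scheme's transformation matrix (sketch).
import numpy as np

def transform_matrix(dirs: np.ndarray) -> np.ndarray:
    """Rows map (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz) to the ADC along each direction."""
    g = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    x, y, z = g[:, 0], g[:, 1], g[:, 2]
    return np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z])

# Six directions along icosahedron vertices (a low-condition-number scheme).
icosa = np.array([[0.0, 0.526, 0.851], [0.0, -0.526, 0.851],
                  [0.526, 0.851, 0.0], [-0.526, 0.851, 0.0],
                  [0.851, 0.0, 0.526], [0.851, 0.0, -0.526]])
print(np.linalg.cond(transform_matrix(icosa)))  # lower = more robust to noise
```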

  6. Phase retrieval in digital speckle pattern interferometry by use of a smoothed space-frequency distribution.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2003-12-10

    We evaluate the use of a smoothed space-frequency distribution (SSFD) to retrieve optical phase maps in digital speckle pattern interferometry (DSPI). The performance of this method is tested on computer-simulated DSPI fringes. Phase gradients are found along a pixel path from a single DSPI image, and the phase map is finally determined by integration. This technique does not need the application of a phase unwrapping algorithm or the introduction of carrier fringes in the interferometer. It is shown that a Wigner-Ville distribution with a smoothing Gaussian kernel gives more accurate results than methods based on the continuous wavelet transform. We also discuss the influence of filtering on the smoothing of the DSPI fringes and some additional limitations that emerge when this technique is applied. The performance of the SSFD method for processing experimental data is then illustrated.

  7. GPR Use and Activities in Denmark

    NASA Astrophysics Data System (ADS)

    Ringgaard, Jørgen; Wisén, Roger

    2014-05-01

    Academic work on GPR in Denmark is performed both by the Technical University of Denmark (DTU) and the University of Copenhagen (KU). The work at DTU includes development of antennas and systems, e.g. an airborne ice-sounder GPR system (POLARIS) that today is in frequent use for monitoring ice thickness in Greenland. DTU often collaborates with ESA (European Space Agency) on electromagnetic development projects. At KU there is ongoing work with GPR applied to water resources; the main objective is to study the flux of water and matter across different hydrological domains. There are several recent publications from KU describing research on data analysis and modelling as well as hydrogeophysical applications. The Geological Survey of Denmark and Greenland (GEUS) also performs frequent geological mapping with GPR. There have been mainly two actors on the Danish commercial market for several years: FalkGeo and Ramboll. FalkGeo has been active for many years, acquiring data for several different applications such as archeology, utilities and roads. Their equipment pool comprises both a multichannel Terravision system from GSSI and a 2D system from Mala Geoscience with a comprehensive range of antennas. Ramboll has performed GPR surveys for two decades, mainly with 2D systems from GSSI. In recent years Ramboll has also obtained a system with RTA antennas from Mala Geoscience and a multichannel system from 3D-Radar. These systems have opened markets both for deeper geological mapping and for shallow mapping. The geological mapping with the Mala system has often been combined with resistivity imaging (CVES) and refraction seismics. The 3D system has been applied in airports and on roads for mapping of layer thicknesses and delamination and for control of asphalt works. Other areas comprise bridge deck evaluation and utility mapping. Ramboll also acts as client advisor for BaneDanmark, a state-owned company that operates and develops the Danish state railway network; for this, Ramboll has written a guideline for the application of GPR on BaneDanmark railways. There are no national guidelines or test sites in Denmark. The use of GPR on roads is very limited in Denmark compared to our neighboring countries. This is possibly due to conservatism in the industry and to the fact that Denmark decided not to participate in a collaboration between some of our neighboring countries, the Mara Nord Project, on the preparation of guidelines for the application of GPR on roads. Improved accuracy and more automated routines for mapping delamination and stripping would also widen the market for GPR in airports and on roads. International guidelines for the application of GPR in several fields would likewise help authorities recognize it as a valid complement and alternative to other established methods. This abstract is a contribution to COST Action TU1208.

  8. Quantitative water content mapping at clinically relevant field strengths: a comparative study at 1.5 T and 3 T.

    PubMed

    Abbas, Zaheer; Gras, Vincent; Möllenhoff, Klaus; Oros-Peusquens, Ana-Maria; Shah, Nadim Joni

    2015-02-01

    Quantitative water content mapping in vivo using MRI is a very valuable technique to detect, monitor and understand diseases of the brain. At 1.5 T, this technology has already been used successfully, but it has only recently been applied at 3 T because of the significantly increased RF field inhomogeneity at the higher field strength. To validate the technology at 3 T, we estimate and compare in vivo quantitative water content maps at 1.5 T and 3 T obtained with a protocol proposed recently for 3 T MRI. The proposed MRI protocol was applied to twenty healthy subjects at 1.5 T and 3 T; the same post-processing algorithms were used to estimate the water content maps. The 1.5 T and 3 T maps were subsequently aligned and compared on a voxel-by-voxel basis. Statistical analysis was performed to detect possible differences between the estimated 1.5 T and 3 T water maps. Our analysis indicates that the water content values obtained at 1.5 T and 3 T did not show significant systematic differences. On average, the difference did not exceed the standard deviation of the water content at 1.5 T. Furthermore, the contrast-to-noise ratio (CNR) of the estimated water content map was increased at 3 T by a factor of at least 1.5. Vulnerability to RF inhomogeneity increases dramatically with increasing static magnetic field strength. However, using advanced corrections for the sensitivity profile of the MR coils, it is possible to preserve quantitative accuracy while benefiting from the increased CNR at the higher field strength. Indeed, there was no significant difference in the water content values obtained in the brain at 1.5 T and 3 T.

  9. Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies

    NASA Astrophysics Data System (ADS)

    Perez Hoyos, Isabel Cristina

    The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect of the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to estimate the probability that an ecosystem is groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and the main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting the training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in model performance when these variables are not used as input. Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is great potential in using the power of an ensemble of trees for the prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrates that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability that an ecosystem is groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA to develop a systematic approach for the identification of GDEs, and it is then applied to the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold-independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures.
The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
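    A minimal sketch of the probability-mapping step is shown below: a random forest is trained on labeled cells with WTD and AI as predictors, per-cell GDE probabilities are read off, and a threshold turns them into a GDE / NON-GDE map. All data, the labeling rule and the 0.5 threshold are synthetic placeholders.

```python
# Random-forest GDE probability map and thresholded binary map (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
wtd = rng.uniform(0, 50, 1000)                 # water table depth per cell, m
ai = rng.uniform(0.05, 2.0, 1000)              # aridity index per cell
# Hypothetical rule for synthetic labels: shallow water tables in dry climates
# tend to host groundwater dependent ecosystems.
is_gde = ((wtd < 10) & (ai < 0.5)).astype(int)

X = np.column_stack([wtd, ai])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, is_gde)

p_gde = rf.predict_proba(X)[:, 1]              # GDE probability per 1 km cell
binary_map = (p_gde >= 0.5).astype(int)        # the threshold choice drives the map
print(binary_map.mean())
```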

  10. Unsupervised Multi-Scale Change Detection from SAR Imagery for Monitoring Natural and Anthropogenic Disasters

    NASA Astrophysics Data System (ADS)

    Ajadi, Olaniyi A.

    Radar remote sensing can play a critical role in operational monitoring of natural and anthropogenic disasters. Despite its all-weather capabilities and its high performance in mapping and monitoring change, the application of radar remote sensing in operational monitoring activities has been limited. This has largely been due to: (1) the historically high costs associated with obtaining radar data; (2) slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radar satellites. Recent advances in the capabilities of spaceborne Synthetic Aperture Radar (SAR) sensors have created an environment that now allows SAR to make significant contributions to disaster monitoring. New SAR processing strategies that can take full advantage of these new sensor capabilities are currently being developed. Hence, with this PhD dissertation, I aim to: (i) investigate unsupervised change detection techniques that can reliably extract signatures from time series of SAR images and provide the necessary flexibility for application to a variety of natural and anthropogenic hazard situations; (ii) investigate effective methods to reduce the effects of speckle and other noise on change detection performance; (iii) automate change detection algorithms using probabilistic Bayesian inferencing; and (iv) ensure that the developed technology is applicable to current and future SAR sensors to maximize temporal sampling of a hazardous event. This is achieved by developing new algorithms that rely on image amplitude information only, the sole image parameter that is available for every single SAR acquisition. The motivation and implementation of the change detection concept are described in detail in Chapter 3. In the same chapter, I demonstrate the technique's performance using synthetic data as well as a real-data application to map wildfire progression. I applied Radiometric Terrain Correction (RTC) to the data to increase the sampling frequency, while the developed multiscale-driven approach reliably identified changes embedded in largely stationary background scenes. With this technique, I was able to identify the extent of burn scars with high accuracy. I then applied the change detection technology to oil spill mapping. The analysis highlights that the approach described in Chapter 3 can be applied to this drastically different change detection problem with only little modification. While the core of the change detection technique remained unchanged, I made modifications to the pre-processing step to enable change detection from scenes with continuously varying backgrounds. I introduced the Lipschitz regularity (LR) transformation as a technique to normalize the typically dynamic ocean surface, facilitating high-performance oil spill detection independent of the environmental conditions during image acquisition. For instance, I showed that LR processing reduces the sensitivity of change detection performance to variations in surface winds, which is a known limitation in oil spill detection from SAR. Finally, I applied the change detection technique to aufeis flood mapping along the Sagavanirktok River. Due to the complex nature of aufeis flooded areas, I substituted the resolution-preserving speckle filter used in Chapter 3 with curvelet filters.
In addition to validating the performance of the change detection results, I also provide evidence of the wealth of information that can be extracted about aufeis flooding events once a time series of change detection information was extracted from SAR imagery. A summary of the developed change detection techniques is conducted and suggested future work is presented in Chapter 6.
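    As a point of reference for amplitude-only change detection, the sketch below applies the common log-ratio operator to two co-registered SAR amplitude scenes and thresholds the result at k standard deviations. This is a generic baseline under synthetic speckle, not the multiscale Bayesian method developed in the dissertation.

```python
# Log-ratio change detection between two SAR amplitude images (generic sketch).
import numpy as np

def log_ratio_change(img1: np.ndarray, img2: np.ndarray, k: float = 2.0):
    """Return a boolean change mask from two co-registered amplitude images."""
    lr = np.log((img2 + 1e-6) / (img1 + 1e-6))    # the ratio suppresses speckle's
                                                  # multiplicative character
    return np.abs(lr - lr.mean()) > k * lr.std()  # flag pixels far from the bulk

rng = np.random.default_rng(0)
pre = rng.gamma(16.0, 6.0, (512, 512))                  # multilooked pre-event scene
post = pre * rng.gamma(16.0, 1.0 / 16.0, (512, 512))    # post-event speckle
post[100:150, 200:300] *= 3.0                           # simulated change (burn scar)
mask = log_ratio_change(pre, post)
print(mask[100:150, 200:300].mean(), mask.mean())       # high inside the change patch
```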

  11. Comparison of Pixel-Based and Object-Based Classification Using Parameters and Non-Parameters Approach for the Pattern Consistency of Multi Scale Landcover

    NASA Astrophysics Data System (ADS)

    Juniati, E.; Arrofiqoh, E. N.

    2017-09-01

    Information extraction from remote sensing data, especially land cover, can be achieved by digital classification. In practice, some people are more comfortable using visual interpretation to retrieve land cover information. However, visual interpretation is highly influenced by the subjectivity and knowledge of the interpreter, and it is time-consuming. Digital classification can be done in several ways, depending on the chosen mapping approach and the assumptions made about the data distribution. This study compared several classification methods applied to different data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 imagery and orthophotos. In practice, these data are used to produce land cover maps at 1:50,000 scale for Landsat, 1:25,000 scale for SPOT and 1:5,000 scale for orthophotos, but with visual interpretation used to retrieve the information. Maximum likelihood classifiers (MLC), which follow a pixel-based, parametric approach, were applied to these data, as were artificial neural network classifiers, which follow a pixel-based, non-parametric approach. Moreover, this study applied object-based classifiers to the data. The classification system implemented is the land cover classification of the Indonesian topographic map. The classification was applied to each data source, with the aim of recognizing patterns and assessing the consistency of the land cover maps produced from each dataset. Furthermore, the study analyses the benefits and limitations of the methods used, as illustrated by the parametric classifier sketched below.
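    Of the classifiers named above, the parametric pixel-based one is the easiest to make concrete: a Gaussian maximum likelihood classifier fits one multivariate normal per class from training pixels and assigns each pixel to the class with the highest likelihood. Band counts, class names and training data below are placeholders.

```python
# Gaussian maximum likelihood classifier (MLC) for pixel-based land cover (sketch).
import numpy as np
from scipy.stats import multivariate_normal

def mlc_fit(samples: dict) -> dict:
    """samples: {class_name: (n_i, bands) training pixels} -> fitted Gaussians."""
    return {c: multivariate_normal(X.mean(axis=0), np.cov(X, rowvar=False))
            for c, X in samples.items()}

def mlc_predict(models: dict, pixels: np.ndarray) -> np.ndarray:
    names = list(models)
    loglik = np.column_stack([models[c].logpdf(pixels) for c in names])
    return np.array(names)[loglik.argmax(axis=1)]   # most likely class per pixel

rng = np.random.default_rng(0)
train = {"water": rng.normal(0.1, 0.02, (100, 4)),
         "forest": rng.normal(0.3, 0.05, (100, 4)),
         "urban": rng.normal(0.6, 0.08, (100, 4))}
models = mlc_fit(train)
print(mlc_predict(models, rng.normal(0.3, 0.05, (5, 4))))  # mostly "forest"
```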

  12. On the potential of long wavelength imaging radars for mapping vegetation types and woody biomass in tropical rain forests

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J.; Zimmermann, Reiner; Oren, Ram

    1995-01-01

    In the tropical rain forests of Manu, in Peru, where forest biomass ranges from 4 kg/sq m in young forest succession up to 100 kg/sq m in old, undisturbed floodplain stands, the P-band polarimetric radar data gathered in June of 1993 by the AIRSAR (Airborne Synthetic Aperture Radar) instrument separate most major vegetation formations and also perform better than expected in estimating woody biomass. The worldwide need for large scale, updated biomass estimates, achieved with a uniformly applied method, as well as reliable maps of land cover, justifies a more in-depth exploration of long wavelength imaging radar applications for tropical forests inventories.

  13. Distributed coaxial cable crack sensors for crack mapping in RC

    NASA Astrophysics Data System (ADS)

    Greene, Gary G.; Belarbi, Abdeldjelil; Chen, Genda; McDaniel, Ryan

    2005-05-01

    A new type of distributed coaxial cable sensor for health monitoring of large-scale civil infrastructure was recently proposed and developed by the authors. This paper shows the results and performance of such sensors mounted near the surface of two flexural beams and a large-scale reinforced concrete box girder that was subjected to twenty cycles of combined shear and torsion. The main objectives of this health monitoring study were to correlate the sensor's response to strain in the member and to show that the magnitude of the signal's reflection coefficient is related to increases in applied load, repeated cycles, cracking, crack mapping, and yielding. The effects of multiple adjacent cracks and signal loss were also investigated.

  14. A Survey of Variable Extragalactic Sources with XTE's All Sky Monitor (ASM)

    NASA Technical Reports Server (NTRS)

    Jernigan, Garrett

    1998-01-01

    The original goal of the project was the near real-time detection of AGN utilizing SSC 3 of the ASM on XTE, which performs a deep integration on one 100-square-degree region of the sky. Although the SSC never performed well enough for this goal to be achieved, the work on the project led to the development of a new analysis method for coded aperture systems, which has now been applied to ASM data for mapping regions near clusters of galaxies such as the Perseus Cluster and the Coma Cluster. Publications are in preparation that describe both the new method and the results from mapping clusters of galaxies.

  15. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation were downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microsec.

  16. Pixel-based skin segmentation in psoriasis images.

    PubMed

    George, Y; Aldeen, M; Garnavi, R

    2016-08-01

    In this paper, we present a detailed comparison study of skin segmentation methods for psoriasis images. Different techniques are modified and then applied to a set of psoriasis images acquired from the Royal Melbourne Hospital, Melbourne, Australia, with the aim of finding the technique best suited for application to psoriasis images. We investigate the effect of different colour transformations on skin detection performance. In this respect, explicit skin thresholding is evaluated with three different decision boundaries (CbCr, HS and rgHSV). A histogram-based Bayesian classifier is applied to extract skin probability maps (SPMs) for different colour channels. This is then followed by using different approaches to derive a binary skin map (SM) image from the SPMs. The approaches used include a binary decision tree (DT) and Otsu's thresholding. Finally, a set of morphological operations is implemented to refine the resulting SM image. The paper provides a detailed analysis and comparison of the performance of the Bayesian classifier in five different colour spaces (YCbCr, HSV, RGB, XYZ and CIELab). The results show that the histogram-based Bayesian classifier is more effective than explicit thresholding when applied to psoriasis images. It is also found that the CbCr decision boundary outperforms HS and rgHSV. Another finding is that the SPMs of the Cb, Cr, H and B-CIELab colour bands yield the best SMs for psoriasis images. In this study, we used a set of 100 psoriasis images for training and testing the presented methods. True Positive (TP) and True Negative (TN) rates are used as statistical evaluation measures.
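
    As a concrete illustration of the histogram-based Bayesian step described above, the sketch below estimates class-conditional colour histograms from labelled training pixels and converts an image's CbCr values into a skin probability map. It is a minimal sketch under stated assumptions (the bin count, prior, synthetic data, and all function names are hypothetical), and the resulting map would still need the paper's thresholding and morphological refinement.

    import numpy as np

    def train_histograms(pixels, labels, bins=32):
        # Histogram-based likelihoods P(colour | skin) and P(colour | background)
        # from labelled training pixels; `pixels` is (N, 2) CbCr values in 0..255.
        rng = [(0, 256), (0, 256)]
        h_skin, _ = np.histogramdd(pixels[labels == 1], bins=bins, range=rng)
        h_bg, _ = np.histogramdd(pixels[labels == 0], bins=bins, range=rng)
        return h_skin / max(h_skin.sum(), 1), h_bg / max(h_bg.sum(), 1)

    def skin_probability_map(cbcr, h_skin, h_bg, prior=0.5):
        # Bayes posterior P(skin | colour) for every pixel of an (H, W, 2) image.
        step = 256 // h_skin.shape[0]
        idx = np.clip(cbcr.astype(int) // step, 0, h_skin.shape[0] - 1)
        p_s = h_skin[idx[..., 0], idx[..., 1]]
        p_b = h_bg[idx[..., 0], idx[..., 1]]
        return p_s * prior / (p_s * prior + p_b * (1 - prior) + 1e-12)

    # hypothetical training pixels and a test image (CbCr channels only)
    train_px = np.random.randint(0, 256, (5000, 2))
    train_lb = np.random.randint(0, 2, 5000)
    h_s, h_b = train_histograms(train_px, train_lb)
    spm = skin_probability_map(np.random.randint(0, 256, (64, 64, 2)), h_s, h_b)
    # threshold the SPM (e.g. with Otsu or a decision tree) and refine morphologically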

  17. A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map

    NASA Astrophysics Data System (ADS)

    Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad

    2016-06-01

    In recent years, there has been increasing interest in the security of digital images. This study focuses on gray-scale image encryption using dynamic harmony search (DHS). In this research, a chaotic map is first used to create cipher images, and then the maximum entropy and minimum correlation coefficient are obtained by applying a harmony search algorithm to them. This process is divided into two steps. In the first step, the diffusion of a plain image using DHS to maximize the entropy as a fitness function is performed. In the second step, a horizontal and vertical permutation is applied to the best cipher image obtained in the previous step; here, DHS is used to minimize the correlation coefficient as the fitness function. The simulation results show that with the proposed method, a maximum entropy of approximately 7.9998 and a minimum correlation coefficient of approximately 0.0001 have been obtained.
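
    The two fitness functions named in the abstract, entropy maximization and adjacent-pixel correlation minimization, are easy to state in code. The following sketch computes both measures for an 8-bit cipher image; it illustrates only the evaluation side, not the DHS search or the chaotic map itself, and the function names are mine.

    import numpy as np

    def entropy(img):
        # Shannon entropy of an 8-bit image; 8.0 is the theoretical maximum.
        hist = np.bincount(img.ravel(), minlength=256) / img.size
        nz = hist[hist > 0]
        return -np.sum(nz * np.log2(nz))

    def adjacent_correlation(img, axis=1):
        # Correlation between horizontally (axis=1) or vertically (axis=0)
        # adjacent pixel pairs; values near 0 indicate a good cipher image.
        a = img.take(np.arange(img.shape[axis] - 1), axis=axis).ravel().astype(float)
        b = img.take(np.arange(1, img.shape[axis]), axis=axis).ravel().astype(float)
        return np.corrcoef(a, b)[0, 1]

    cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    print(entropy(cipher), adjacent_correlation(cipher))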

  18. Mapping cell populations in flow cytometry data for cross-sample comparison using the Friedman-Rafsky test statistic as a distance measure.

    PubMed

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu; Scheuermann, Richard H

    2016-01-01

    Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of the Kullback-Leibler divergence measure used in a previous population-matching method on both simulated and real data. The FR statistic outperforms the symmetric version of the KL-distance in distinguishing equivalent from nonequivalent cell populations. FlowMap-FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F-measure of 0.88 was obtained, indicating high precision and recall of the FR-based population matching results. FlowMap-FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC.
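
    To make the FR statistic tangible: it pools two multivariate samples, builds a Euclidean minimum spanning tree over the pooled points, and counts edges that connect points from different samples; a low cross-sample edge count signals differing distributions. Below is a minimal sketch of that counting step (not the FlowMap-FR implementation, which is an R/Bioconductor package); the sample sizes and synthetic data are assumptions.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def fr_cross_count(sample_a, sample_b):
        # Friedman-Rafsky statistic: build the Euclidean minimum spanning tree
        # over the pooled samples, then count edges joining points from
        # different samples; few cross edges means the distributions differ.
        pooled = np.vstack([sample_a, sample_b])
        labels = np.r_[np.zeros(len(sample_a)), np.ones(len(sample_b))]
        mst = minimum_spanning_tree(squareform(pdist(pooled))).tocoo()
        return int(np.sum(labels[mst.row] != labels[mst.col]))

    a = np.random.randn(100, 3)          # cell population 1 (100 cells, 3 markers)
    b = np.random.randn(100, 3) + 2.0    # position-shifted population
    print(fr_cross_count(a[:50], a[50:]), fr_cross_count(a, b))  # high vs. low count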

  19. Limited Dissolved Phosphorus Runoff Losses from Layered Double Hydroxide and Struvite Fertilizers in a Rainfall Simulation Study.

    PubMed

    Everaert, Maarten; da Silva, Rodrigo C; Degryse, Fien; McLaughlin, Mike J; Smolders, Erik

    2018-03-01

    The enrichment of P in surface waters has been linked to P runoff from agricultural fields amended with fertilizers. Novel slow-release mineral fertilizers, such as struvite and P-exchanged layered double hydroxides (LDHs), have received increasing attention for P recycling from waste streams, and these fertilizers may potentially reduce the risk of runoff losses. Here, a rainfall simulation experiment was performed to evaluate P runoff associated with the application of recycled slow-release fertilizers relative to that of a soluble fertilizer. Monoammonium phosphate (MAP), struvite, and LDH granular fertilizers were broadcast at equal total P doses on soil packed in trays (5% slope) and covered with perennial ryegrass (Lolium perenne L.). Four rainfall simulation events of 30 min were performed at 1, 5, 15, and 30 d after the fertilizer application. Runoff water from the trays was collected, filtered, and analyzed for dissolved P. For the MAP treatment, P runoff losses were high in the first two rain events and leveled off in later rain events. In total, 42% of the applied P in the MAP treatment was lost to runoff. In the slow-release fertilizer treatments, P runoff losses were limited to 1.9% (struvite) and 2.4% (LDH) of the applied doses and were more evenly distributed over the different rain events. The use of these novel P fertilizer forms could be beneficial in areas with a high risk of surface water eutrophication and a history of intensive fertilization. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  20. Mapping microbubble viscosity using fluorescence lifetime imaging of molecular rotors

    PubMed Central

    Hosny, Neveen A.; Mohamedi, Graciela; Rademeyer, Paul; Owen, Joshua; Wu, Yilei; Tang, Meng-Xing; Eckersley, Robert J.; Stride, Eleanor; Kuimova, Marina K.

    2013-01-01

    Encapsulated microbubbles are well established as highly effective contrast agents for ultrasound imaging. There remain, however, some significant challenges to fully realize the potential of microbubbles in advanced applications such as perfusion mapping, targeted drug delivery, and gene therapy. A key requirement is accurate characterization of the viscoelastic surface properties of the microbubbles, but methods for independent, nondestructive quantification and mapping of these properties are currently lacking. We present here a strategy for performing these measurements that uses a small fluorophore termed a “molecular rotor” embedded in the microbubble surface, whose fluorescence lifetime is directly related to the viscosity of its surroundings. We apply fluorescence lifetime imaging to show that shell viscosities vary widely across the population of the microbubbles and are influenced by the shell composition and the manufacturing process. We also demonstrate that heterogeneous viscosity distributions exist within individual microbubble shells even with a single surfactant component. PMID:23690599

  1. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
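
    One plausible reading of the mixout unit described above is sketched below under explicit assumptions: the k parallel feature maps of a unit are pooled either by their maximum or by their softmax-weighted expectation, with a Bernoulli gate deciding per output which of the two is used. This is an interpretation of the abstract, not the authors' code, and the gate probability p and all names are hypothetical.

    import numpy as np

    def mixout(features, p=0.5, rng=np.random.default_rng(0)):
        # Pool k parallel feature maps (shape (k, ...)): a softmax-weighted
        # expectation lets non-maximal features contribute, and a Bernoulli
        # gate mixes it with the plain maximum per output unit.
        w = np.exp(features - features.max(axis=0))          # stable softmax weights
        expected = (w * features).sum(axis=0) / w.sum(axis=0)
        maximum = features.max(axis=0)
        gate = rng.random(maximum.shape) < p                 # Bernoulli(p) per unit
        return np.where(gate, maximum, expected)

    k_maps = np.random.randn(4, 8, 8)   # k=4 convolutional responses of one unit
    pooled = mixout(k_maps)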

  2. Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.

    PubMed

    Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi

    2017-07-01

    We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

    The gaining popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone-mapping methods for appropriate visualization on conventional, inexpensive LDR displays. These methods can produce completely different visualizations, raising several privacy-intrusion issues: some tone-mapping operators allow perceptual recognition of the individuals shown, while others reveal no identity at all. Given that perceptual recognition is sometimes possible, a natural question arises: how will computer-based recognition perform on tone-mapped images? In this paper, we present a study in which automatic face recognition using sparse representation is tested on images produced by common tone-mapping operators applied to HDR images, and we describe its ability to recognize face identity. Furthermore, typical LDR images are used for the face recognition training.

  4. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    NASA Astrophysics Data System (ADS)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

    There are many technical methods for integrating various factors in flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are considered as criteria, and each element unit is considered as an alternative. A scheme that finds the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore, TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty because simulation results vary with the flood scenario and topographical conditions. This kind of ambiguity in the indices can cause uncertainty in the flood hazard map. To handle the ambiguity and uncertainty of the criteria, fuzzy logic, which can treat ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade in the resulting integrated flood hazard map, and the produced map can be compared with the existing flood risk maps. We also expect that applying the flood hazard mapping methodology suggested in this paper to the production of current flood risk maps would yield new flood hazard maps that consider the priorities of hazard areas and carry more varied and important information than before. Keywords: flood hazard map; levee breach analysis; 2D analysis; MCDM; Fuzzy TOPSIS. Acknowledgement: This research was supported by a grant (17AWMP-B079625-04) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
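
    For readers unfamiliar with TOPSIS, the crisp core that the fuzzy variant builds on ranks each alternative by its relative closeness to an ideal solution. The sketch below scores hypothetical grid cells on the three criteria named in the abstract; the weights, the example values, and the treatment of travel time as a cost criterion are illustrative assumptions, and the fuzzy extension (fuzzy numbers for criteria and weights) is not shown.

    import numpy as np

    def topsis(matrix, weights, benefit):
        # Classic (crisp) TOPSIS: rank alternatives by relative closeness to
        # the ideal solution. `matrix` is (alternatives x criteria);
        # benefit[j] is True when larger values of criterion j mean more hazard.
        norm = matrix / np.linalg.norm(matrix, axis=0)       # vector normalisation
        v = norm * weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.linalg.norm(v - ideal, axis=1)
        d_neg = np.linalg.norm(v - anti, axis=1)
        return d_neg / (d_pos + d_neg)   # closeness in [0, 1]; higher = riskier

    # grid cells scored on max depth (m), max velocity (m/s), travel time (h);
    # travel time is a "cost" criterion here: a shorter time is more hazardous
    cells = np.array([[2.1, 1.4, 0.5],
                      [0.3, 0.2, 3.0],
                      [1.0, 0.9, 1.2]])
    score = topsis(cells, np.array([0.5, 0.3, 0.2]), np.array([True, True, False]))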

  5. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    NASA Astrophysics Data System (ADS)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, in recent years diverse methods have been proposed for filling gaps in observed precipitation series, progressively more advanced than traditional approaches. The present study validates 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data simultaneously in multiple incomplete series of a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (eastern Iberian Peninsula), an area characterized by high spatial irregularity and difficult rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and a quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods; notably, the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (the hybrid approach) yielded the best results.
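
    The quantile-mapping post-processing mentioned above admits a very short sketch: imputed values are remapped so that their empirical distribution matches that of the observed record. The example below is schematic (synthetic data and an arbitrary quantile grid) and is not the study's implementation.

    import numpy as np

    def quantile_map(imputed, observed):
        # Quantile-mapping adjustment: transform imputed values so their
        # empirical distribution matches the observed one.
        q = np.linspace(0, 1, 101)
        src = np.quantile(imputed, q)    # quantiles of the imputed series
        dst = np.quantile(observed, q)   # quantiles of the observed series
        return np.interp(imputed, src, dst)

    observed = np.random.gamma(0.6, 12.0, 2000)   # skewed daily rainfall record
    imputed = np.random.normal(7.0, 5.0, 300)     # raw imputation output
    adjusted = quantile_map(imputed, observed)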

  6. Application of Contraction Mappings to the Control of Nonlinear Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Killingsworth, W. R., Jr.

    1972-01-01

    The theoretical and applied aspects of successive approximation techniques are considered for the determination of controls for nonlinear dynamical systems. Particular emphasis is placed upon the methods of contraction mappings and modified contraction mappings. It is shown that application of the Pontryagin principle to the optimal nonlinear regulator problem results in necessary conditions for optimality in the form of a two point boundary value problem (TPBVP). The TPBVP is represented by an operator equation and functional analytic results on the iterative solution of operator equations are applied. The general convergence theorems are translated and applied to those operators arising from the optimal regulation of nonlinear systems. It is shown that simply structured matrices and similarity transformations may be used to facilitate the calculation of the matrix Green functions and the evaluation of the convergence criteria. A controllability theory based on the integral representation of TPBVP's, the implicit function theorem, and contraction mappings is developed for nonlinear dynamical systems. Contraction mappings are theoretically and practically applied to a nonlinear control problem with bounded input control and the Lipschitz norm is used to prove convergence for the nondifferentiable operator. A dynamic model representing community drug usage is developed and the contraction mappings method is used to study the optimal regulation of the nonlinear system.

  7. Scanner qualification with IntenCD based reticle error correction

    NASA Astrophysics Data System (ADS)

    Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan

    2010-03-01

    Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance metrics during the factory acceptance test (FAT) and later during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance metrics established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements that can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently used to create across-mask CDU maps. By subtracting these maps from their final wafer-measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors, within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2TM mask inspection tool with IntenCDTM technology can scan the mask at high speed, offering full mask coverage and accurate assessment of all mask-induced sources of error simultaneously, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map the mask-induced CD non-uniformity. We present the results of six scanners in production and discuss the benefits of the new method.

  8. The Geographic Information System applied to study schistosomiasis in Pernambuco

    PubMed Central

    Barbosa, Verônica Santos; Loyo, Rodrigo Moraes; Guimarães, Ricardo José de Paula Souza e; Barbosa, Constança Simões

    2017-01-01

    ABSTRACT OBJECTIVE To diagnose risk environments for schistosomiasis in coastal localities of Pernambuco using geoprocessing techniques. METHODS Coproscopic and malacological surveys were carried out in the Forte Orange and Serrambi areas. Environmental variables (temperature, salinity, pH, total dissolved solids and water fecal coliform dosage) were collected from Biomphalaria breeding sites or foci. The spatial analysis was performed using ArcGis 10.1 software, applying the kernel estimator, elevation map, and distance map. RESULTS In Forte Orange, 4.3% of the population had S. mansoni, and two B. glabrata and 26 B. straminea breeding sites were found. The breeding sites had temperatures of 25ºC to 41ºC, pH of 6.9 to 11.1, total dissolved solids between 148 and 661, and salinity of 1,000 d. In Serrambi, 4.4% of the population had S. mansoni, and seven B. straminea and seven B. glabrata breeding sites were found. Breeding sites had temperatures of 24ºC to 36ºC, pH of 7.1 to 9.8, total dissolved solids between 116 and 855, and salinity of 1,000 d. The kernel estimator shows the clusters of positive patients and foci of Biomphalaria, and the digital elevation map indicates areas of rainwater concentration. The distance map shows the proximity of the snail foci to schools and health facilities. CONCLUSIONS Geoprocessing techniques prove to be a competent tool for locating and scaling the risk areas for schistosomiasis and can subsidize the control actions of health services. PMID:29166439

  9. The efficacy of the 'mind map' study technique.

    PubMed

    Farrand, Paul; Hussain, Fearzana; Hennessy, Enid

    2002-05-01

    To examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information. To obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to form two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval the self-selected study technique group were exposed to the same passage of text previously seen and told to apply existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and a week later. Measures of motivation were taken. Barts and the London School of Medicine and Dentistry, University of London. 50 second- and third-year medical students. Recall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However this improvement was only robust after a week for those in the mind map group. At 1 week, the factual knowledge in the mind map group was greater by 10% (adjusting for baseline) (95% CI -1% to 22%). However motivation for the technique used was lower in the mind map group; if motivation could have been made equal in the groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%). Mind maps provide an effective study technique when applied to written material. However before mind maps are generally adopted as a study technique, consideration has to be given towards ways of improving motivation amongst users.

  10. Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools

    NASA Astrophysics Data System (ADS)

    Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.

    2017-12-01

    The spatial variability evaluation of the water table of an aquifer provides useful information for water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram is very important for optimal method performance. This work compares three different criteria to assess the theoretical variogram that fits the experimental one: the Least Squares Sum method, the Akaike Information Criterion and Cressie's Indicator. Moreover, variable distance metrics such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis are applied to calculate the distance between the observation and the prediction points, which affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the power-law variogram model and the Manhattan distance metric within ordinary kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at the local scale. Finally, maps of hydraulic head spatial variability and of prediction uncertainty are constructed for the area with the two different approaches, comparing their advantages and drawbacks.
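
    As a sketch of the variogram side of this workflow, the code below bins pairwise semivariances into an empirical variogram, with a swappable distance metric (Manhattan, Euclidean, Minkowski, and so on, as in the study), and fits the power-law model by least squares. The synthetic data, bin count, and function names are illustrative assumptions; the Fuzzy Logic neighbor selection is not shown.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.spatial.distance import pdist

    def power_model(h, c, alpha):
        # Power-law variogram gamma(h) = c * h**alpha (0 < alpha < 2).
        return c * h ** alpha

    def empirical_variogram(coords, values, n_bins=15, metric="cityblock"):
        # Binned empirical semivariogram; `metric` selects the lag distance:
        # "cityblock" is Manhattan, "euclidean" and "minkowski" also work.
        d = pdist(coords, metric=metric)
        g = 0.5 * pdist(values[:, None], metric="sqeuclidean")   # semivariances
        edges = np.linspace(0.0, d.max(), n_bins + 1)
        idx = np.digitize(d, edges)
        keep = [i for i in range(1, n_bins + 1) if np.any(idx == i)]
        lags = np.array([d[idx == i].mean() for i in keep])
        gamma = np.array([g[idx == i].mean() for i in keep])
        return lags, gamma

    coords = np.random.rand(250, 2) * 1000.0                  # well locations (m)
    heads = 50 + 0.01 * coords[:, 0] + np.random.randn(250)   # hydraulic heads (m)
    lags, gamma = empirical_variogram(coords, heads)
    (c, alpha), _ = curve_fit(power_model, lags, gamma, p0=[1.0, 1.0])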

  11. Effectiveness of higher order thinking skills (HOTS) based i-Think map concept towards primary students

    NASA Astrophysics Data System (ADS)

    Ping, Owi Wei; Ahmad, Azhar; Adnan, Mazlini; Hua, Ang Kean

    2017-05-01

    Higher Order Thinking Skills (HOTS) is a new education reform concept based on Bloom's Taxonomy. The concept concentrates on students' understanding in the learning process based on their own methods. HOTS questions are able to train students to think creatively, critically and innovatively. The aim of this study was to identify students' proficiency in solving HOTS mathematics questions by using the i-Think map. The research took place in Sabak Bernam, Selangor. The method applied is a quantitative approach involving nearly all of the Standard Five students. A pre- and post-test was conducted before and after the intervention of using the i-Think map to solve HOTS questions. The results indicate a significant improvement in the post-test, which proves that applying the i-Think map enhances the students' ability to solve HOTS questions. The survey analysis showed that 90% of the students agreed that the i-Think map helped them analyse the questions carefully and use keywords in the map to solve them. In conclusion, this process helps students minimize mistakes when solving the questions. Therefore, teachers should guide students in applying the appropriate i-Think map and methods when analysing questions by finding the keywords.

  12. Self-organizing feature maps for dynamic control of radio resources in CDMA microcellular networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1998-03-01

    The application of artificial neural networks to the channel assignment problem for cellular code-division multiple access (CDMA) cellular networks has previously been investigated. CDMA takes advantage of voice activity and spatial isolation because its capacity is only interference limited, unlike time-division multiple access (TDMA) and frequency-division multiple access (FDMA) where capacities are bandwidth-limited. Any reduction in interference in CDMA translates linearly into increased capacity. To satisfy the high demands for new services and improved connectivity for mobile communications, microcellular and picocellular systems are being introduced. For these systems, there is a need to develop robust and efficient management procedures for the allocation of power and spectrum to maximize radio capacity. Topology-conserving mappings play an important role in the biological processing of sensory inputs. The same principles underlying Kohonen's self-organizing feature maps (SOFMs) are applied to the adaptive control of radio resources to minimize interference, hence, maximize capacity in direct-sequence (DS) CDMA networks. The approach based on SOFMs is applied to some published examples of both theoretical and empirical models of DS/CDMA microcellular networks in metropolitan areas. The results of the approach for these examples are informally compared to the performance of algorithms, based on Hopfield- Tank neural networks and on genetic algorithms, for the channel assignment problem.
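
    The topology-conserving mapping underlying Kohonen's SOFM can be sketched in a few lines: each input pulls its best-matching unit (BMU) and that unit's grid neighbours toward it, with the learning rate and neighbourhood radius decaying over time. The following is a generic illustration, not the radio-resource controller of the paper; the grid size, decay schedules, and three-feature input are assumptions.

    import numpy as np

    def train_som(data, grid=(10, 10), epochs=30, lr0=0.5, sigma0=3.0):
        # Plain Kohonen SOFM: move the BMU and its map neighbours toward each
        # input; learning rate and neighbourhood radius shrink over epochs.
        rng = np.random.default_rng(0)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        gy, gx = np.mgrid[0:h, 0:w]
        for t in range(epochs):
            lr = lr0 * np.exp(-t / epochs)
            sigma = sigma0 * np.exp(-t / epochs)
            for x in rng.permutation(data):
                d = ((weights - x) ** 2).sum(axis=2)
                by, bx = np.unravel_index(d.argmin(), d.shape)       # BMU
                nbhd = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
                weights += lr * nbhd[..., None] * (x - weights)
        return weights

    som = train_som(np.random.rand(500, 3))   # e.g. 3-feature resource states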

  13. Some Experience Using SEN2COR

    NASA Astrophysics Data System (ADS)

    Pflug, Bringfried; Bieniarz, Jakub; Debaecker, Vincent; Louis, Jérôme; Müller-Wilms, Uwe

    2016-04-01

    ESA has developed and launched the Sentinel-2A optical imaging mission that delivers optical data products designed to feed downstream services mainly related to land monitoring, emergency management and security. Many of these applications require accurate correction of satellite images for atmospheric effects to ensure the highest quality of scientific exploitation of Sentinel-2 data. Therefore the atmospheric correction processor Sen2Cor was developed by TPZ V on behalf of ESA. TPZ F and DLR have teamed up in order to provide the calibration and validation of the Level-2A processor Sen2Cor. Level-2A processing is applied to Top-Of-Atmosphere (TOA) Level-1C ortho-image reflectance products. Level-2A main output is the Bottom-Of-Atmosphere (BOA) corrected reflectance product. Additional outputs are an Aerosol Optical Thickness (AOT) map, a Water Vapour (WV) map and a Scene Classification (SC) map with Quality Indicators for cloud and snow probabilities. The poster will present some processing examples of Sen2Cor applied to Sentinel-2A data together with first performance investigations. Different situations will be covered like processing with and without DEM (Digital Elevation Model). Sen2Cor processing is controlled by several configuration parameters. Some examples will be presented demonstrating the influence of different settings of some parameters.

  14. Redwing: A MOOSE application for coupling MPACT and BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frederick N. Gleicher; Michael Rose; Tom Downar

    Fuel performance and whole core neutron transport programs are often used to analyze fuel behavior as it is depleted in a reactor. For fuel performance programs, internal models provide the local intra-pin power density, fast neutron flux, burnup, and fission rate density, which are needed for a fuel performance analysis. The fuel performance internal models have a number of limitations. These include effects on the intra-pin power distribution by nearby assembly elements, such as water channels and control rods, and the further limitation of applicability to a specified fuel type such as low enriched UO2. In addition, whole core neutron transport codes need an accurate intra-pin temperature distribution in order to calculate neutron cross sections. Fuel performance simulations are able to model the intra-pin fuel displacement as the fuel expands and densifies. These displacements must be accurately modeled in order to capture the eventual mechanical contact of the fuel and the clad; the correct radial gap width is needed for an accurate calculation of the temperature distribution of the fuel rod. Redwing is a MOOSE-based application that couples MPACT and BISON for combined neutron transport and fuel performance analysis. MPACT is a 3D neutron transport and reactor core simulator based on the method of characteristics (MOC). The development of MPACT began at the University of Michigan (UM) and it is now under the joint development of ORNL and UM as part of the DOE CASL Simulation Hub. MPACT is able to model the effects of local assembly elements and is able to calculate intra-pin quantities such as the local power density on a volumetric mesh for any fuel type. BISON is a fuel performance application of the Multi-physics Object Oriented Simulation Environment (MOOSE), which is under development at Idaho National Laboratory. BISON is able to solve the nonlinearly coupled mechanical deformation and heat transfer finite element equations that model a fuel element as it is depleted in a nuclear reactor. Redwing couples BISON and MPACT in a single application. Redwing maps and transfers the individual intra-pin quantities such as fission rate density, power density, and fast neutron flux from the MPACT volumetric mesh to the individual BISON finite element meshes. For a two-way coupling, Redwing maps and transfers the individual pin temperature field and axially dependent coolant densities from the BISON mesh to the MPACT volumetric mesh. Details of the mapping are given. Redwing advances the simulation with the MPACT solution for each depletion time step and then advances the multiple BISON simulations for fuel performance calculations. Sub-cycle advancement can be applied to the individual BISON simulations and allows multiple time steps to be applied to the fuel performance simulations. Currently, only loose coupling, where data from the previous time step is applied to the current time step, is performed.

  15. Land subsidence susceptibility and hazard mapping: the case of Amyntaio Basin, Greece

    NASA Astrophysics Data System (ADS)

    Tzampoglou, P.; Loupasakis, C.

    2017-09-01

    Landslide susceptibility and hazard mapping has been applied for more than 20 years, enabling the assessment of landslide risk and the mitigation of the phenomena. In contrast, equivalent maps aiming to study and mitigate land subsidence phenomena caused by the overexploitation of aquifers are absent from the international literature. The current study focuses on the Amyntaio basin, located in the Florina prefecture of West Macedonia. As numerous studies have shown, the wider area has been severely affected by the overexploitation of the aquifers, caused by mining and agricultural activities. The intensive groundwater level drop has triggered extensive land subsidence phenomena, especially at the perimeter of the open-pit coal mine operating at the site, causing damage to settlements and infrastructure. The land subsidence susceptibility and risk maps were produced by applying the semi-quantitative WLC (Weighted Linear Combination) method, specially calibrated for this particular catastrophic event. The results were evaluated using detailed field mapping data on the spatial distribution of the surface ruptures caused by the subsidence. The high correlation between the produced maps and the field mapping data proves the value of the maps and of the applied technique for the management and mitigation of the phenomena. These maps can therefore be safely used by decision-making authorities for future safe urban development.

  16. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937.9% for the 500 mA FBP, 25 mA SIR, and 25 mA FBP, respectively. In numerical simulations, SIR mitigated streak artifacts in the low dose data and yielded flow maps with mean error <7% and standard deviation <9% of mean, for 30×30 pixel ROIs (12.9 × 12.9 mm²). In comparison, low dose FBP flow errors were −38% to +258%, and standard deviation was 6%–93%. Additionally, low dose SIR achieved a 4.6 times improvement in flow map CNR² per unit input dose compared to low dose FBP. Conclusions: SIR reconstruction can reduce image noise and mitigate streaking artifacts caused by photon starvation in dynamic CT myocardial perfusion data sets acquired at low dose (low tube current), and improve perfusion map quality in comparison to FBP reconstruction at the same dose.

  17. A Three-Dimensional Foil Bearing Performance Map Applied to Oil-Free Turbomachinery

    DTIC Science & Technology

    2009-04-01

    in diesel engine turbochargers, auxiliary power units (APUs), and selected hot section bearings in gas turbines (7–9). While these Oil-Free...film. Regardless of the strategy, research suggests proper thermal management is a key fundamental necessity for the successful deployment and...Turbo Expo 2006, Barcelona, Spain, GT2006-90572, 2006. 7. Heshmat, H.; Walton, II, J. F.; DellaCorte, C.; Valco, M. Oil-Free Turbocharger

  18. Cloud E-Learning Service Strategies for Improving E-Learning Innovation Performance in a Fuzzy Environment by Using a New Hybrid Fuzzy Multiple Attribute Decision-Making Model

    ERIC Educational Resources Information Center

    Su, Chiu Hung; Tzeng, Gwo-Hshiung; Hu, Shu-Kung

    2016-01-01

    The purpose of this study was to address this problem by applying a new hybrid fuzzy multiple criteria decision-making model including (a) using the fuzzy decision-making trial and evaluation laboratory (DEMATEL) technique to construct the fuzzy scope influential network relationship map (FSINRM) and determine the fuzzy influential weights of the…

  19. Multislice CT perfusion imaging of the lung in detection of pulmonary embolism

    NASA Astrophysics Data System (ADS)

    Hong, Helen; Lee, Jeongjin

    2006-03-01

    We propose a new subtraction technique for accurately imaging lung perfusion and efficiently detecting pulmonary embolism in chest MDCT angiography. Our method is composed of five stages. First, an optimal segmentation technique is performed to extract the same volume of the lungs, major airway and vascular structures from pre- and post-contrast images with different lung density. Second, an initial registration based on the apex, hilar point and center of inertia (COI) of each unilateral lung is proposed to correct the gross translational mismatch. Third, the initial alignment is refined by iterative surface registration. For fast and robust convergence of the distance measure to the optimal value, a 3D distance map is generated by narrow-band distance propagation. Fourth, a 3D nonlinear filter is applied to the lung parenchyma to compensate for residual spiral artifacts and artifacts caused by heart motion. Fifth, enhanced vessels are visualized by subtracting the registered pre-contrast images from the post-contrast images. To facilitate visualization of parenchyma enhancement, color-coded mapping and image fusion are used. Our method has been successfully applied to pre- and post-contrast images of ten patients in chest MDCT angiography. Experimental results show that the performance of our method is very promising compared with conventional methods with respect to visual inspection, accuracy and processing time.

  20. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in the automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University (Erlangen-Nuremberg, Germany). The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods at a competitively faster speed on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. Its efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in the automated analysis of retinal images.
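
    The multi-scale Hessian step described above can be sketched as follows: at each scale, second derivatives of the Gaussian-smoothed green channel are combined into the larger Hessian eigenvalue, which responds strongly on dark elongated vessels, and the map keeps the scale-normalised maximum over all scales. This is a simplified stand-in for the paper's probability map (the entropy thresholding and rule-based step are omitted), and the scales and function names are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def vessel_probability(green, scales=(1.0, 2.0, 4.0)):
        # Multi-scale Hessian response: dark tubular vessels yield one large
        # positive eigenvalue of the Hessian of the Gaussian-smoothed image.
        img = green.astype(float)
        vmap = np.zeros_like(img)
        for s in scales:
            gxx = gaussian_filter(img, s, order=(0, 2))   # d2/dx2 (axis 1)
            gyy = gaussian_filter(img, s, order=(2, 0))   # d2/dy2 (axis 0)
            gxy = gaussian_filter(img, s, order=(1, 1))   # mixed derivative
            # larger eigenvalue of [[gxx, gxy], [gxy, gyy]] in closed form
            lam = 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
            vmap = np.maximum(vmap, (s ** 2) * lam)       # scale normalisation
        return vmap / (vmap.max() + 1e-12)

    green_channel = np.random.rand(128, 128)   # stand-in for a fundus green channel
    prob_map = vessel_probability(green_channel)
    # a threshold (the paper uses second-order local entropy) then yields vessels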

  1. On the sensitivity of geospatial low impact development locations to the centralized sewer network.

    PubMed

    Zischg, Jonatan; Zeisl, Peter; Winkler, Daniel; Rauch, Wolfgang; Sitzenfrei, Robert

    2018-04-01

    In the future, infrastructure systems will have to become smarter, more sustainable, and more resilient requiring new methods of urban infrastructure design. In the field of urban drainage, green infrastructure is a promising design concept with proven benefits to runoff reduction, stormwater retention, pollution removal, and/or the creation of attractive living spaces. Such 'near-nature' concepts are usually distributed over the catchment area in small scale units. In many cases, these above-ground structures interact with the existing underground pipe infrastructure, resulting in hybrid solutions. In this work, we investigate the effect of different placement strategies for low impact development (LID) structures on hydraulic network performance of existing drainage networks. Based on a sensitivity analysis, geo-referenced maps are created which identify the most effective LID positions within the city framework (e.g. to improve network resilience). The methodology is applied to a case study to test the effectiveness of the approach and compare different placement strategies. The results show that with a simple targeted LID placement strategy, the flood performance is improved by an additional 34% as compared to a random placement strategy. The developed map is easy to communicate and can be rapidly applied by decision makers when deciding on stormwater policies.

  2. Profile-Based LC-MS Data Alignment—A Bayesian Approach

    PubMed Central

    Tsai, Tsung-Heng; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.

    2014-01-01

    A Bayesian alignment model (BAM) is proposed for alignment of liquid chromatography-mass spectrometry (LC-MS) data. BAM belongs to the category of profile-based approaches, which are composed of two major components: a prototype function and a set of mapping functions. Appropriate estimation of these functions is crucial for good alignment results. BAM uses Markov chain Monte Carlo (MCMC) methods to draw inference on the model parameters and improves on existing MCMC-based alignment methods through 1) the implementation of an efficient MCMC sampler and 2) an adaptive selection of knots. A block Metropolis-Hastings algorithm that mitigates the problem of the MCMC sampler getting stuck at local modes of the posterior distribution is used for the update of the mapping function coefficients. In addition, a stochastic search variable selection (SSVS) methodology is used to determine the number and positions of knots. We applied BAM to a simulated data set, an LC-MS proteomic data set, and two LC-MS metabolomic data sets, and compared its performance with the Bayesian hierarchical curve registration (BHCR) model, the dynamic time-warping (DTW) model, and the continuous profile model (CPM). The advantage of applying appropriate profile-based retention time correction prior to performing a feature-based approach is also demonstrated through the metabolomic data sets. PMID:23929872

  3. Computer-assisted photogrammetric mapping systems for geologic studies-A progress report

    USGS Publications Warehouse

    Pillmore, C.L.; Dueholm, K.S.; Jepsen, H.S.; Schuch, C.H.

    1981-01-01

    Photogrammetry has played an important role in geologic mapping for many years; however, only recently have attempts been made to automate mapping functions for geology. Computer-assisted photogrammetric mapping systems for geologic studies have been developed and are currently in use in offices of the Geological Survey of Greenland at Copenhagen, Denmark, and the U.S. Geological Survey at Denver, Colorado. Though differing somewhat, the systems are similar in that they integrate Kern PG-2 photogrammetric plotting instruments and small desk-top computers that are programmed to perform special geologic functions and operate flat-bed plotters by means of specially designed hardware and software. A z-drive capability, in which stepping motors control the z-motions of the PG-2 plotters, is an integral part of both systems. This feature enables the computer to automatically position the floating mark on computer-calculated, previously defined geologic planes, such as contacts or the base of coal beds, throughout the stereoscopic model in order to improve the mapping capabilities of the instrument and to aid in correlation and tracing of geologic units. The common goal is to enhance the capabilities of the PG-2 plotter and provide a means by which geologists can make conventional geologic maps more efficiently and explore ways to apply computer technology to geologic studies. ?? 1981.

  4. Metric Optimization for Surface Analysis in the Laplace-Beltrami Embedding Space

    PubMed Central

    Lai, Rongjie; Wang, Danny J.J.; Pelletier, Daniel; Mohr, David; Sicotte, Nancy; Toga, Arthur W.

    2014-01-01

    In this paper we present a novel approach for the intrinsic mapping of anatomical surfaces and its application in brain mapping research. Using the Laplace-Beltrami eigen-system, we represent each surface with an isometry invariant embedding in a high dimensional space. The key idea in our system is that we realize surface deformation in the embedding space via the iterative optimization of a conformal metric without explicitly perturbing the surface or its embedding. By minimizing a distance measure in the embedding space with metric optimization, our method generates a conformal map directly between surfaces with highly uniform metric distortion and the ability of aligning salient geometric features. Besides pairwise surface maps, we also extend the metric optimization approach for group-wise atlas construction and multi-atlas cortical label fusion. In experimental results, we demonstrate the robustness and generality of our method by applying it to map both cortical and hippocampal surfaces in population studies. For cortical labeling, our method achieves excellent performance in a cross-validation experiment with 40 manually labeled surfaces, and successfully models localized brain development in a pediatric study of 80 subjects. For hippocampal mapping, our method produces much more significant results than two popular tools on a multiple sclerosis study of 109 subjects. PMID:24686245

  5. Capability of Integrated MODIS Imagery and ALOS for Oil Palm, Rubber and Forest Areas Mapping in Tropical Forest Regions

    PubMed Central

    Razali, Sheriza Mohd; Marin, Arnaldo; Nuruddin, Ahmad Ainuddin; Shafri, Helmi Zulhaidi Mohd; Hamid, Hazandy Abdul

    2014-01-01

    Various classification methods have been applied for low-resolution mapping of the entire Earth's surface from recorded satellite images, but insufficient study has determined which method, for which satellite data, is economically viable for tropical forest land use mapping. This study employed the Iterative Self Organizing Data Analysis Technique (ISODATA) and K-Means classification techniques to classify Moderate Resolution Imaging Spectroradiometer (MODIS) Surface Reflectance satellite imagery into forests, oil palm groves, rubber plantations, mixed horticulture, mixed oil palm and rubber, and mixed forest and rubber. Even though frequent cloud cover has been a challenge for mapping tropical forests, the 2008 ISODATA-1 land use classification performed well, with an overall accuracy of 94% and the highest Producer's Accuracy for Forest at 86%, and was consistent with MODIS Land Cover 2008 (MOD12Q1). The MODIS land use classification was able to distinguish young oil palm groves from open areas, rubber and mature oil palm plantations on the Advanced Land Observing Satellite (ALOS) map, whereas rubber was more easily distinguished from an open area than from mixed rubber and forest. This study provides insight into the potential for integrating regional databases and temporal MODIS data in order to map land use in tropical forest regions. PMID:24811079

  6. Capability of integrated MODIS imagery and ALOS for oil palm, rubber and forest areas mapping in tropical forest regions.

    PubMed

    Razali, Sheriza Mohd; Marin, Arnaldo; Nuruddin, Ahmad Ainuddin; Shafri, Helmi Zulhaidi Mohd; Hamid, Hazandy Abdul

    2014-05-07

    Various classification methods have been applied for low-resolution mapping of the entire Earth's surface from recorded satellite images, but insufficient study has determined which method, for which satellite data, is economically viable for tropical forest land use mapping. This study employed the Iterative Self Organizing Data Analysis Technique (ISODATA) and K-Means classification techniques to classify Moderate Resolution Imaging Spectroradiometer (MODIS) Surface Reflectance satellite imagery into forests, oil palm groves, rubber plantations, mixed horticulture, mixed oil palm and rubber, and mixed forest and rubber. Even though frequent cloud cover has been a challenge for mapping tropical forests, the 2008 ISODATA-1 land use classification performed well, with an overall accuracy of 94% and the highest Producer's Accuracy for Forest at 86%, and was consistent with MODIS Land Cover 2008 (MOD12Q1). The MODIS land use classification was able to distinguish young oil palm groves from open areas, rubber and mature oil palm plantations on the Advanced Land Observing Satellite (ALOS) map, whereas rubber was more easily distinguished from an open area than from mixed rubber and forest. This study provides insight into the potential for integrating regional databases and temporal MODIS data in order to map land use in tropical forest regions.

  7. Improvements on mapping soil liquefaction at a regional scale

    NASA Astrophysics Data System (ADS)

    Zhu, Jing

    Earthquake-induced soil liquefaction is an important secondary hazard during earthquakes and can lead to significant damage to infrastructure. Mapping liquefaction hazard is important both in planning for earthquake events and in guiding relief efforts by positioning resources once an event has occurred. This dissertation addresses two aspects of liquefaction hazard mapping at a regional scale: 1) predictive liquefaction hazard mapping and 2) post-liquefaction cataloging. First, current predictive liquefaction hazard mapping relies on detailed geologic maps and geotechnical data, which are not always available in at-risk regions. This dissertation improves predictive liquefaction hazard mapping through the development and validation of geospatial liquefaction models (Chapters 2 and 3) that predict liquefaction extent and are appropriate for global application. The geospatial liquefaction models are developed using logistic regression from a liquefaction database consisting of data from 27 earthquake events in six countries. The model that performs best over the entire dataset includes peak ground velocity (PGV), VS30, distance to river, distance to coast, and precipitation. The model that performs best over the noncoastal dataset includes PGV, VS30, water table depth, distance to water body, and precipitation. Second, post-earthquake liquefaction cataloging historically relies on field investigation, which is often limited by time and expense and therefore results in limited and incomplete liquefaction inventories. This dissertation improves post-earthquake cataloging through the development and validation of a remote sensing-based method that can be quickly applied over a broad region after an earthquake and provide a detailed map of liquefaction surface effects (Chapter 4). Our method uses optical satellite images acquired before and after an earthquake event by the WorldView-2 satellite, with 2 m spatial resolution and eight spectral bands, and relies on changes in spectral variables that are sensitive to surface moisture and soil characteristics, paired with a supervised classification.
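
    Since the geospatial models above are fitted by logistic regression, a minimal sketch may clarify the mechanics: gradient descent on the log-loss yields coefficients that turn standardised proxies into a liquefaction probability per grid cell. The variable list mirrors the abstract, but the synthetic data, learning rate, and function names are assumptions, not the dissertation's code.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit_logistic(X, y, lr=0.1, epochs=2000):
        # Logistic regression by batch gradient descent on the log-loss;
        # returns an intercept plus one coefficient per geospatial proxy.
        Xb = np.c_[np.ones(len(X)), X]
        beta = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            beta -= lr * Xb.T @ (sigmoid(Xb @ beta) - y) / len(y)
        return beta

    # hypothetical cell table: columns = [ln PGV, VS30, dist. to river, precip.]
    X = np.random.randn(1000, 4)
    true_beta = np.array([1.2, -0.8, -0.5, 0.6])
    y = (np.random.rand(1000) < sigmoid(X @ true_beta)).astype(float)

    Xs = (X - X.mean(axis=0)) / X.std(axis=0)             # standardise proxies
    beta = fit_logistic(Xs, y)
    p_liq = sigmoid(np.c_[np.ones(len(Xs)), Xs] @ beta)   # probability per cell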

  8. The Use of Kosher Phenotyping for Mapping QTL Affecting Susceptibility to Bovine Respiratory Disease

    PubMed Central

    Eitam, Harel; Yishay, Moran; Schiavini, Fausta; Soller, Morris; Bagnato, Alessandro; Shabtay, Ariel

    2016-01-01

    Bovine respiratory disease (BRD) is the leading cause of morbidity and mortality in feedlot cattle, caused by multiple pathogens that become more virulent in response to stress. As clinical signs often go undetected and various preventive strategies have failed, identification of genes affecting BRD is essential for selection for resistance. Selective DNA pooling (SDP) was applied in a genome-wide association study (GWAS) to map BRD QTLs in Israeli Holstein male calves. Kosher scoring of lung adhesions was used to allocate 122 and 62 animals to High (Glatt Kosher) and Low (Non-Kosher) resistance groups, respectively. Genotyping was performed using the Illumina BovineHD BeadChip according to the Infinium protocol. A moving average of -logP was used to map QTLs, and a log-drop criterion was used to define their boundaries (QTL regions, QTLRs). The combined procedure was efficient for high-resolution mapping. Nineteen QTLRs distributed over 13 autosomes were found, some overlapping previous studies. The QTLRs contain polymorphic functional and expression candidate genes that may affect kosher status, with putative immunological and wound-healing activities. Kosher phenotyping was shown to be a reliable means of mapping QTLs affecting BRD morbidity. PMID:27077383
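
    As a rough illustration of the mapping statistics described (a moving average of -logP with log-drop boundaries), here is a minimal sketch on synthetic p-values; the window size and drop threshold are assumptions, not the paper's values.

    ```python
    # Illustrative sketch: smooth -log10(p) along a chromosome with a moving
    # average, then bound the QTL region around a peak where the smoothed
    # score falls a fixed "log drop" below the peak value.
    import numpy as np

    def moving_average(scores, window=11):
        kernel = np.ones(window) / window
        return np.convolve(scores, kernel, mode="same")

    rng = np.random.default_rng(2)
    p_values = rng.uniform(1e-8, 1.0, 5000)      # synthetic per-SNP p-values
    smooth = moving_average(-np.log10(p_values))

    peak = int(smooth.argmax())
    log_drop = 1.0                               # assumed boundary threshold
    left = peak
    while left > 0 and smooth[left - 1] >= smooth[peak] - log_drop:
        left -= 1
    right = peak
    while right < len(smooth) - 1 and smooth[right + 1] >= smooth[peak] - log_drop:
        right += 1
    print("QTLR spans SNP indices", left, "to", right)
    ```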

  9. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.

    PubMed

    Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei

    2018-01-01

    Appropriate Site-Specific Weed Management (SSWM) is crucial to ensuring crop yields. For SSWM over large areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted-aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing imagery of high spatial resolution, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field in South China. A Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and a skip architecture was applied to increase prediction accuracy. The performance of the FCN architecture was then compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that the FCN method outperformed both, in terms of accuracy as well as efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, indicating that the algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
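
    The skip architecture mentioned above fuses coarse, semantically strong predictions with finer shallow-layer predictions before upsampling to full resolution. A minimal PyTorch sketch of that pattern follows; the layer sizes and two-class output are illustrative assumptions, not the authors' network.

    ```python
    # Tiny FCN-style sketch with one skip connection (PyTorch).
    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                      nn.MaxPool2d(2))        # 1/2 resolution
            self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                      nn.MaxPool2d(2))        # 1/4 resolution
            self.score2 = nn.Conv2d(32, n_classes, 1)         # coarse prediction
            self.score1 = nn.Conv2d(16, n_classes, 1)         # skip prediction
            self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                                  align_corners=False)

        def forward(self, x):
            f1 = self.enc1(x)                                 # fine features (skip source)
            f2 = self.enc2(f1)                                # coarse features
            fused = self.up(self.score2(f2)) + self.score1(f1)  # skip fusion
            return self.up(fused)                             # back to input resolution

    logits = TinyFCN()(torch.randn(1, 3, 64, 64))  # -> shape (1, 2, 64, 64)
    ```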

  10. A fine structure genomic map of the region of 12q13 containing SAS and CDK4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linder, C.Y.; Elkahloun, A.G.; Su, Y.A.

    1994-09-01

    We have recently adapted a method, originally described by Rackwitz, to the rapid restriction mapping of multiple cosmid DNA samples. Linearization of the cosmids at the lambda cohesive site using lambda terminase is followed by partial digestion with selected restriction enzymes and hybridization to oligonucleotides specific for the right- or left-hand termini. Partial digestions are performed in a microtiter plate, allowing up to 12 cosmid clones to be digested with one restriction enzyme. We have applied this rapid restriction mapping method to cosmids derived from a region of chromosome 12q13 that has recently been shown to be amplified in a variety of cancers, including malignant fibrous histiocytoma, fibrosarcoma, liposarcoma, osteosarcoma and brain tumors. A small segment of this amplification unit containing three genes, SAS (a membrane protein), CDK4 (a cyclin-dependent kinase) and OS-9 (a recently described cDNA), has been analyzed with the system described above. This fine structure genomic map will be useful for completing the expression map of this region as well as for characterizing its pattern of amplification in tumor specimens.

  11. Phytoplankton global mapping from space with a support vector machine algorithm

    NASA Astrophysics Data System (ADS)

    de Boissieu, Florian; Menkes, Christophe; Dupouy, Cécile; Rodier, Martin; Bonnet, Sophie; Mangeas, Morgan; Frouin, Robert J.

    2014-11-01

    In recent years great progress has been made in the global mapping of phytoplankton from space. Two main trends have emerged: the recognition of phytoplankton functional types (PFT) based on reflectance normalized to chlorophyll-a concentration, and the recognition of phytoplankton size classes (PSC) based on the relationship between cell size and chlorophyll-a concentration. However, PFTs and PSCs are not decorrelated, and one approach can complement the other in a recognition task. In this paper, we explore the recognition of several dominant PFTs by combining reflectance anomalies, chlorophyll-a concentration, and other environmental parameters such as sea surface temperature and wind speed. Remote sensing pixels are labeled using coincident in-situ pigment data from the GeP&CO, NOMAD and MAREDAT datasets, covering various oceanographic environments. The recognition is performed with a supervised Support Vector Machine classifier trained on the labeled pixels. This algorithm enables a non-linear separation of the classes in the input space and is especially suited to small training datasets such as those available here. Moreover, it provides a class probability estimate, allowing one to enhance the robustness of the classification results through the choice of a minimum probability threshold. A greedy feature selection associated with a 10-fold cross-validation procedure is applied to select the most discriminative input features and evaluate the classification performance. The best classifiers are finally applied to daily remote sensing datasets (SeaWiFS, MODISA) and the resulting dominant PFT maps are compared with other studies. Several conclusions are drawn: (1) the feature selection highlights the weight of the temperature, chlorophyll-a and wind speed variables in phytoplankton recognition; (2) the classifiers show good results and dominant PFT maps in agreement with knowledge of phytoplankton distribution; (3) classification on MODISA data seems to perform better than on SeaWiFS data; and (4) the probability threshold correctly screens out the areas of lowest confidence, such as the interclass regions.
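
    The probability-thresholding step can be sketched with any classifier exposing class probabilities; here is a minimal scikit-learn illustration with synthetic features (the threshold value and feature set are assumptions, not the study's).

    ```python
    # Hedged sketch: SVM with class-probability estimates, leaving pixels
    # unclassified when the maximum probability falls below a threshold.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 5))     # reflectance anomalies, chl-a, SST, wind...
    y = rng.integers(0, 4, 300)       # dominant-PFT labels from in-situ pigments

    clf = SVC(kernel="rbf", probability=True).fit(X, y)
    proba = clf.predict_proba(X)

    threshold = 0.5                               # assumed minimum-probability cut-off
    labels = proba.argmax(axis=1)
    labels[proba.max(axis=1) < threshold] = -1    # -1 = low-confidence / unclassified
    ```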

  12. Key issues in decomposing fMRI during naturalistic and continuous music experience with independent component analysis.

    PubMed

    Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani

    2014-02-15

    Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, owing to its advantages for blind source separation. For fMRI data acquired during free-listening experiences, only a few exploratory studies have applied ICA. To process the fMRI data elicited by a 512-s piece of modern tango, an FFT-based band-pass filter was first used to further pre-process the fMRI data and remove noise and sources of no interest. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. The extracted spatial maps (by the new ICA approach) common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and were found to be associated with the musical features. Compared with the conventional ICA approach, more participants were found to share the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for the individual ICA. Pre-processing the fMRI data with a reasonable band-pass digital filter can greatly benefit the subsequent model order selection and ICA of fMRI data from naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps. Copyright © 2013 Elsevier B.V. All rights reserved.
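
    An FFT-based band-pass filter of the kind described can be sketched in a few lines: transform each voxel time course, zero the out-of-band frequency bins, and invert. The sampling rate and pass band below are assumptions, not the study's values.

    ```python
    # Minimal FFT band-pass sketch for one voxel time course.
    import numpy as np

    def bandpass(signal, fs, low, high):
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        spectrum[(freqs < low) | (freqs > high)] = 0.0  # zero out-of-band bins
        return np.fft.irfft(spectrum, n=signal.size)

    fs = 0.5                                   # sampling rate in Hz (TR = 2 s, assumed)
    tc = np.random.default_rng(4).normal(size=256)
    filtered = bandpass(tc, fs, low=0.01, high=0.1)
    ```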

  13. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a “Big Data” problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high-resolution (30 m) crop classification map for a large territory (~28,100 km2, with 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex workflows of satellite data processing required by large-scale applications such as crop mapping. The study discusses strengths and weaknesses of the classifiers, assesses the accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine, covering the Kyiv region (North of Ukraine) in 2013. We found that GEE provides very good performance in terms of enabling access to remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
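
    For readers unfamiliar with GEE, the comparison workflow has roughly this shape in the Earth Engine Python API: sample a labeled composite, train competing classifiers, and inspect an error matrix. The asset IDs and label property below are placeholders, and the accuracy shown is resubstitution accuracy only, so this is a sketch of the API pattern rather than the study's pipeline.

    ```python
    # Hedged sketch of a GEE classifier comparison (earthengine-api).
    import ee
    ee.Initialize()

    image = ee.Image("users/example/multitemporal_composite")    # placeholder asset
    points = ee.FeatureCollection("users/example/crop_labels")   # placeholder labels
    bands = image.bandNames()

    training = image.sampleRegions(collection=points,
                                   properties=["crop"], scale=30)

    for name, clf in [("random forest", ee.Classifier.smileRandomForest(100)),
                      ("SVM", ee.Classifier.libsvm())]:
        trained = clf.train(training, "crop", bands)
        cm = training.classify(trained).errorMatrix("crop", "classification")
        print(name, cm.accuracy().getInfo())   # resubstitution accuracy only

    crop_map = image.classify(ee.Classifier.smileRandomForest(100)
                              .train(training, "crop", bands))
    ```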

  14. Exploring MALDI-TOF MS approach for a rapid identification of Mycobacterium avium ssp. paratuberculosis field isolates.

    PubMed

    Ricchi, M; Mazzarelli, A; Piscini, A; Di Caro, A; Cannas, A; Leo, S; Russo, S; Arrigoni, N

    2017-03-01

    The aim of the study was to explore the suitability of matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) for the rapid and correct identification of Mycobacterium avium ssp. paratuberculosis (MAP) field isolates. The MALDI-TOF MS approach, which has been shown to be fast and reliable, is becoming one of the most popular tests for the identification of intact bacterial cells. For this purpose, 36 MAP field isolates were analysed through MALDI-TOF MS and the spectra compared with two different databases: one provided by the vendor of the system employed (Biotyper ver. 3.0; Bruker Daltonics) and a homemade database containing spectra from both tuberculous and nontuberculous Mycobacteria. Moreover, a principal component analysis procedure was employed to confirm the ability of MALDI-TOF MS to discriminate between very closely related subspecies. Our results suggest MAP can be differentiated from other Mycobacterium species, both when the species are very close (M. intracellulare) and when the isolates belong to different subspecies of the same species (M. avium ssp. avium and M. avium ssp. silvaticum). The procedure applied is fast, easy to perform, and achieves an earlier accurate species identification of MAP and nontuberculous Mycobacteria in comparison with other procedures. The gold standard test for the diagnosis of paratuberculosis is still the isolation of MAP by cultural methods, but additional assays, such as qPCR and subculturing for determination of mycobactin dependency, are required to confirm its identification. We have provided here evidence of the usefulness of the MALDI-TOF MS approach for the rapid identification of this mycobacterium among other members of the M. avium complex. © 2016 The Society for Applied Microbiology.

  15. Automated matching of multiple terrestrial laser scans for stem mapping without the use of artificial references

    NASA Astrophysics Data System (ADS)

    Liu, Jingbin; Liang, Xinlian; Hyyppä, Juha; Yu, Xiaowei; Lehtomäki, Matti; Pyörälä, Jiri; Zhu, Lingli; Wang, Yunsheng; Chen, Ruizhi

    2017-04-01

    Terrestrial laser scanning has been widely used to analyze the 3D structure of a forest in detail and to generate data at the level of a reference plot for forest inventories without destructive measurements. Multi-scan terrestrial laser scanning is more commonly applied to collect plot-level data so that all of the stems can be detected and analyzed. However, automated processing requires the point clouds of multiple scans to be matched into a single point cloud; mismatches between datasets lead to errors when processing multi-scan data. Classic registration methods based on flat surfaces cannot be directly applied in forest environments; therefore, artificial reference objects have conventionally been used to assist with scan matching. The use of artificial references requires additional labor and expertise, greatly increasing the cost. In this study, we present an automated processing method for plot-level stem mapping that matches multiple scans without artificial references. In contrast to previous studies, the registration method developed here exploits the natural geometric characteristics of a set of tree stems in a plot and combines the point clouds of multiple scans into a unified coordinate system. Integrating multiple scans improves the overall performance of stem mapping in terms of the correctness of tree detection as well as the bias and root-mean-square errors of forest attributes such as diameter at breast height and tree height. In addition, the automated processing method makes stem mapping more reliable and consistent among plots, reduces the costs associated with plot-based stem mapping, and enhances efficiency.

  16. Concept mapping: Impact on content and organization of technical writing in science

    NASA Astrophysics Data System (ADS)

    Conklin, Elaine

    The purpose of this quasi-experimental study was to examine the relationship between concept mapping and the content and organization of technical writing by ninth-grade biology students. All students in the study completed a prewriting assessment. The experimental group received concept map instruction while the control group performed alternate tasks. After instruction, both groups completed the postwriting assessment, and mean differences were compared using the t statistic for independent measures. Additionally, scores on the concept map were correlated with scores on the postwriting assessment using the Pearson correlation coefficient. Finally, attitudes toward using concept mapping as a prewriting strategy were analyzed using the t statistic for repeated measures. Concept mapping significantly improved the depth of content; however, no statistical significance was detected for organization. Students had a significantly positive change in attitude toward using concept mapping to plan a writing assessment, organize information, and think creatively. The findings indicated concept mapping had a positive effect on the students' abilities to select concepts appropriate for responding to a writing prompt, integrate facts into complete thoughts and ideas, and apply them in novel situations. Concept maps appeared to facilitate learning how to process information and transform it into expository writing. Sustained practice in designing concept maps may influence organization as well as content. Developing a systematic approach to synthesizing well-organized and coherent arguments in response to a writing task is an invaluable communication skill that has implications for learners across disciplines and prepares them for higher education and the workforce.
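
    For concreteness, the two statistical comparisons described (an independent-measures t-test and a Pearson correlation) look like this in SciPy; the score arrays below are synthetic stand-ins, not the study's data.

    ```python
    # Hedged sketch of the study's statistical analyses with synthetic scores.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    experimental = rng.normal(78, 8, 30)   # post-writing scores, mapping group
    control = rng.normal(72, 8, 30)        # post-writing scores, control group

    t, p = stats.ttest_ind(experimental, control)        # independent measures
    map_scores = rng.normal(80, 10, 30)
    r, p_r = stats.pearsonr(map_scores, experimental)    # map vs. writing scores
    print(f"t={t:.2f} (p={p:.3f}), r={r:.2f} (p={p_r:.3f})")
    ```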

  17. New Approaches to Tsunami Hazard Mitigation Demonstrated in Oregon

    NASA Astrophysics Data System (ADS)

    Priest, G. R.; Rizzo, A.; Madin, I.; Lyles Smith, R.; Stimely, L.

    2012-12-01

    The Oregon Department of Geology and Mineral Industries and Oregon Emergency Management collaborated over the last four years to increase tsunami preparedness for residents of and visitors to the Oregon coast. Utilizing support from the National Tsunami Hazard Mitigation Program (NTHMP), new approaches to outreach and tsunami hazard assessment were developed and then applied. Hazard assessment was approached by first doing two pilot studies aimed at calibrating theoretical models to direct observations of tsunami inundation gleaned from historical and prehistoric (paleoseismic/paleotsunami) data. The results of these studies were then submitted to peer-reviewed journals and translated into 1:10,000-12,000-scale inundation maps. The inundation maps utilize a powerful new tsunami model, SELFE, developed by Joseph Zhang at the Oregon Health & Science University. SELFE uses unstructured computational grids and parallel processing techniques to achieve fast, accurate simulation of tsunami interactions with fine-scale coastal morphology. The inundation maps were simplified into tsunami evacuation zones accessed as map brochures and an interactive mapping portal at http://www.oregongeology.org/tsuclearinghouse/. Unique in the world are new evacuation maps that show separate evacuation zones for distant versus locally generated tsunamis. The brochure maps explain that evacuation time is four hours or more for distant tsunamis but 15-20 minutes for local tsunamis, which are invariably accompanied by strong ground shaking. Since distant tsunamis occur much more frequently than local tsunamis, the two-zone maps avoid the needless over-evacuation (and expense) caused by one-zone maps. Inundation mapping for the entire Oregon coast will be complete by ~2014. Educational outreach was accomplished by first doing a pilot study to measure the effectiveness of various approaches using before-and-after polling, and then applying the most effective methods. In descending order, the most effective methods were: (1) door-to-door (person-to-person) education, (2) evacuation drills, (3) outreach to K-12 schools, (4) media events, and (5) workshops targeted to key audiences (lodging facilities, teachers, and local officials). Community organizers were hired to apply these five methods to clusters of small communities, measuring performance by before-and-after polling. Organizers were encouraged to approach the top priority, person-to-person education, by developing Community Emergency Response Teams (CERT) or CERT-like organizations in each community, thereby leaving behind a functioning volunteer-based group that will continue the outreach program and build long-term resiliency. One of the most effective person-to-person educational tools was the Map Your Neighborhood program, which brings people together to sketch the basic layout of their neighborhoods, depicting key earthquake and tsunami hazards and mitigation solutions. The various person-to-person volunteer efforts and supporting outreach activities are knitting communities together and creating a permanent culture of tsunami and earthquake preparedness. All major Oregon coastal population centers will have been covered by this intensive outreach program by ~2014.

  18. High-confidence coding and noncoding transcriptome maps

    PubMed Central

    2017-01-01

    The advent of high-throughput RNA sequencing (RNA-seq) has led to the discovery of unprecedentedly immense transcriptomes encoded by eukaryotic genomes. However, transcriptome maps are still incomplete, partly because they were mostly reconstructed from RNA-seq reads that lack orientation (known as unstranded reads) and certain boundary information. Methods that expand the usability of unstranded RNA-seq data by predetermining the orientation of the reads and precisely determining the boundaries of assembled transcripts could significantly improve the quality of the resulting transcriptome maps. Here, we present a high-performing transcriptome assembly pipeline, called CAFE, that significantly improves assemblies built from stranded and/or unstranded RNA-seq data by orienting unstranded reads using maximum likelihood estimation and by integrating information about transcription start sites and cleavage and polyadenylation sites. Applying large-scale transcriptomic data comprising 230 billion RNA-seq reads from the ENCODE, Human BodyMap 2.0, The Cancer Genome Atlas, and GTEx projects, CAFE enabled us to predict the orientations of about 220 billion unstranded reads, which led to the construction of more accurate transcriptome maps, comparable to the manually curated map, and a comprehensive lncRNA catalog that includes thousands of novel lncRNAs. Our pipeline should help not only to build comprehensive, precise transcriptome maps from complex genomes but also to expand the universe of noncoding genomes. PMID:28396519

  19. Robot map building based on fuzzy-extending DSmT

    NASA Astrophysics Data System (ADS)

    Li, Xinde; Huang, Xinhan; Wu, Zuyu; Peng, Gang; Wang, Min; Xiong, Youlun

    2007-11-01

    With the extensive application of mobile robots in many different fields, map building in unknown environments has become one of the principal issues in intelligent mobile robotics. However, the information acquired during map building is uncertain, imprecise, and sometimes highly conflicting, especially when building grid maps with sonar sensors. In this paper, we extend DSmT with fuzzy theory by considering different fuzzy T-norm operators (the Algebraic Product, Bounded Product, Einstein Product, and Default minimum operators), in order to develop a more general and flexible combination rule for wider application. At the same time, we apply fuzzy-extended DSmT to mobile robot map building with the help of a new self-localization method based on neighboring field appearance matching ( -NFAM), making the new tool more robust in very complex environments. An experiment was conducted in which the map of an indoor environment was reconstructed with the new tool, comparing the map-building performance of the four T-norm operators as a Pioneer II mobile robot ran along the same trace. Finally, we conclude that this study develops a new way to extend DSmT, provides a new approach for autonomous navigation of mobile robots, and offers a human-computer interface to manage and manipulate the robot remotely.
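
    The core idea of the fuzzy extension, replacing the product in a conjunctive combination rule with other T-norms, can be sketched generically as below; the frame, masses, and unnormalized rule are illustrative and do not reproduce the paper's full fuzzy-extended DSmT formulation.

    ```python
    # Generic conjunctive combination of two mass functions with a pluggable
    # T-norm; conflicting mass accumulates on the empty set (no normalization).
    from itertools import product

    def combine(m1, m2, tnorm):
        """Mass of C sums tnorm(m1(A), m2(B)) over all A, B with A & B == C."""
        out = {}
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            c = a & b
            out[c] = out.get(c, 0.0) + tnorm(wa, wb)
        return out

    algebraic = lambda x, y: x * y                   # algebraic product T-norm
    bounded = lambda x, y: max(0.0, x + y - 1.0)     # bounded product T-norm

    occ, emp = frozenset({"occupied"}), frozenset({"empty"})
    m_scan1 = {occ: 0.6, emp: 0.1, occ | emp: 0.3}   # grid-cell evidence, scan 1
    m_scan2 = {occ: 0.5, emp: 0.2, occ | emp: 0.3}   # grid-cell evidence, scan 2

    print(combine(m_scan1, m_scan2, algebraic))
    print(combine(m_scan1, m_scan2, bounded))
    ```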

  20. Navigation experience and mental representations of the environment: do pilots build better cognitive maps?

    PubMed

    Sutton, Jennifer E; Buset, Melanie; Keller, Mikayla

    2014-01-01

    A number of careers involve tasks that place demands on spatial cognition, but it is still unclear how and whether skills acquired in such applied experiences transfer to other spatial tasks. The current study investigated the association between pilot training and the ability to form a mental survey representation, or cognitive map, of a novel, ground-based, virtual environment. Undergraduate students who were engaged in general aviation pilot training and controls matched to the pilots on gender and video game usage freely explored a virtual town. Subsequently, participants performed a direction estimation task that tested the accuracy of their cognitive map representation of the town. In addition, participants completed the Object Perspective Test and rated their spatial abilities. Pilots were significantly more accurate than controls at estimating directions but did not differ from controls on the Object Perspective Test. Locations in the town were visited at a similar rate by the two groups, indicating that controls' relatively lower accuracy was not due to failure to fully explore the town. Pilots' superior performance is likely due to better online cognitive processing during exploration, suggesting the spatial updating they engage in during flight transfers to a non-aviation context.

  2. MonoSLAM: real-time single camera SLAM.

    PubMed

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real-time yet drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work not only extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
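
    At the heart of MonoSLAM-style probabilistic SLAM is a Kalman-type predict/update cycle over a state governed by a smooth motion model. The sketch below shows that cycle for a toy linear constant-velocity state rather than the full nonlinear camera-plus-landmark EKF, so it illustrates the filtering pattern, not the paper's algorithm.

    ```python
    # Minimal linear Kalman filter: constant-velocity model, position-only
    # measurements, running at an assumed 30 Hz as in the paper.
    import numpy as np

    dt = 1.0 / 30.0
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = np.diag([1e-4, 1e-2])                # process noise (smooth-motion prior)
    R = np.array([[1e-2]])                   # measurement noise

    x = np.zeros(2)                          # state: [position, velocity]
    P = np.eye(2)                            # state covariance

    def step(x, P, z):
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        S = H @ P @ H.T + R                              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
        x = x + (K @ (z - H @ x)).ravel()                # update
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in np.sin(np.linspace(0, 2, 60)):              # synthetic measurements
        x, P = step(x, P, np.array([z]))
    ```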

  3. Frequency Count Attribute Oriented Induction of Corporate Network Data for Mapping Business Activity

    NASA Astrophysics Data System (ADS)

    Tanutama, Lukas

    2014-03-01

    Companies increasingly rely on the Internet for effective and efficient business communication. As the Information Technology infrastructure backbone for business activities, the corporate network connects the company to the Internet and enables its activities globally. It carries the data packets generated by users performing their business tasks. Traditionally, infrastructure operations mainly maintain data-carrying capacity and network device performance. It would be advantageous for a company to know what activities are running in its network. This research provides a simple method for mapping the business activity reflected by network data. To map corporate users' activities, a slightly modified Attribute Oriented Induction (AOI) approach was applied to mine the network data, and the frequency with which each protocol was invoked was counted to show what the user intended to do. Because of the enormous number of data packets generated, the data analyzed were samples taken within a certain sampling period. Only Internet-related protocols were of interest; intranet protocols were ignored. It can be concluded that the method can provide management with a general overview of the usage of its infrastructure and lead to an efficient, effective and secure ICT infrastructure.
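
    The frequency-count step reduces to tallying the protocol of each sampled packet; a minimal sketch follows, with the record fields assumed for illustration.

    ```python
    # Tally application protocols from sampled packet records, keeping only
    # Internet-related protocols as described above.
    from collections import Counter

    sampled_packets = [
        {"src": "10.0.0.12", "proto": "HTTP"},
        {"src": "10.0.0.12", "proto": "HTTPS"},
        {"src": "10.0.0.31", "proto": "SMTP"},
        {"src": "10.0.0.12", "proto": "HTTP"},
    ]

    internet_protocols = {"HTTP", "HTTPS", "SMTP", "FTP", "DNS"}  # intranet ignored
    counts = Counter(p["proto"] for p in sampled_packets
                     if p["proto"] in internet_protocols)
    print(counts.most_common())   # e.g. [('HTTP', 2), ('HTTPS', 1), ('SMTP', 1)]
    ```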

  4. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella

    2018-03-01

    The multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms, inspired by the multi-verse theory in physics. However, like most optimization algorithms, MVO suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field, namely the three-bar truss, speed reducer design, pressure vessel problem, spring design, welded beam, rolling element bearing, and multiple disc clutch brake. In the current study, a modified feasibility-based mechanism is employed to handle constraints. In this mechanism, four rules are used to handle the specific constraint problem by maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. The results also reveal that the sine chaotic map is the most appropriate map for significantly boosting MVO's performance.
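
    The sine chaotic map singled out above is easy to state: in one common form, x_{k+1} = (a/4)·sin(πx_k) with a = 4. A sketch of generating such a sequence for use in place of uniform random draws inside a meta-heuristic follows; the coupling into MVO is an assumption, not the paper's exact CMVO update.

    ```python
    # Generate a sine chaotic map sequence on (0, 1).
    import numpy as np

    def sine_map(x0, n, a=4.0):
        xs = np.empty(n)
        x = x0
        for i in range(n):
            x = (a / 4.0) * np.sin(np.pi * x)   # sine chaotic map iteration
            xs[i] = x
        return xs

    chaos = sine_map(x0=0.7, n=1000)
    # e.g. use chaos[t] in place of rand() when deciding wormhole moves in MVO
    ```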

  5. Ultra-high resolution, polarization sensitive transversal optical coherence tomography for structural analysis and strain mapping

    NASA Astrophysics Data System (ADS)

    Wiesauer, Karin; Pircher, Michael; Goetzinger, Erich; Hitzenberger, Christoph K.; Engelke, Rainer; Ahrens, Gisela; Pfeiffer, Karl; Ostrzinski, Ute; Gruetzner, Gabi; Oster, Reinhold; Stifter, David

    2006-02-01

    Optical coherence tomography (OCT) is a contactless and non-invasive technique applied nearly exclusively to bio-medical imaging of tissues. Besides the internal structure, strains within the sample can additionally be mapped when OCT is performed in a polarization-sensitive (PS) way. In this work, we demonstrate the benefits of PS-OCT imaging for non-biological applications. We have developed the OCT technique beyond the state of the art: based on transversal ultra-high resolution (UHR-)OCT, where an axial resolution below 2 μm within materials is obtained using a femtosecond laser as light source, we have modified the setup for polarization-sensitive measurements (transversal UHR-PS-OCT). We perform structural analysis and strain mapping for different types of samples: for a highly strained elastomer specimen we demonstrate the necessity of UHR imaging. Furthermore, we investigate epoxy waveguide structures, photoresist moulds for the fabrication of micro-electromechanical parts (MEMS), and the glass-fibre composite outer shell of helicopter rotor blades in which cracks are present. For these examples, transversal scanning UHR-PS-OCT is shown to provide important information about the structural properties and the strain distribution within the samples.

  6. Calibration and Validation of Tundra Plant Functional Type Fractional Cover Mapping

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Nelson, P.; Frost, G. V., Jr.

    2017-12-01

    Fractional cover maps are being developed for selected tundra plant functional types (PFTs) across >500,000 sq. km of arctic Alaska and adjacent Canada at 30 m resolution. Training and validation data include a field-based training dataset collected with a point-intercept sampling method at hundreds of plots spanning bioclimatic and geomorphic gradients. We also compiled 50 blocks of 1-5 cm resolution RGB image mosaics in Alaska (White Mountains, North Slope, and Yukon-Kuskokwim Delta) and the Yukon Territory. The mosaics and associated surface and canopy height models were developed using a consumer drone and structure-from-motion processing. We summarized both the in situ measurements and the drone imagery to determine cover of two PFTs: Low and Tall Deciduous Shrub, and Light Fruticose/Foliose Lichen. We applied these data to train 2 m (limited extent) and 30 m (wall-to-wall) maps of PFT fractional cover for shrubs and lichen. Predictors for the 2 m models were commercial satellite imagery such as WorldView-2 and WorldView-3, analyzed on the ABoVE Science Cloud. Predictors for the 30 m models were primarily reflectance composites and spectral metrics developed from Landsat imagery, using Google Earth Engine. We compared the performance of models developed from the in situ and drone-derived training data and identified best practices to improve the performance and efficiency of arctic PFT fractional cover mapping.

  7. An evidence map of the effect of Tai Chi on health outcomes.

    PubMed

    Solloway, Michele R; Taylor, Stephanie L; Shekelle, Paul G; Miake-Lye, Isomi M; Beroes, Jessica M; Shanman, Roberta M; Hempel, Susanne

    2016-07-27

    This evidence map describes the volume and focus of Tai Chi research reporting health outcomes. Originally developed as a martial art, Tai Chi is typically taught as a series of slow, low-impact movements that integrate the breath, mind, and physical activity to achieve greater awareness and a sense of well-being. The evidence map is based on a systematic review of systematic reviews. We searched 11 electronic databases from inception to February 2014, screened reviews of reviews, and consulted with topic experts. We used a bubble plot to graphically display clinical topics, literature size, number of reviews, and a broad estimate of effectiveness. The map is based on 107 systematic reviews. Two thirds of the reviews were published in the last five years. The topics with the largest number of published randomized controlled trials (RCTs) were general health benefits (51 RCTs), psychological well-being (37 RCTs), interventions for older adults (31 RCTs), balance (27 RCTs), hypertension (18 RCTs), fall prevention (15 RCTs), and cognitive performance (11 RCTs). The map identified a number of areas with evidence of a potentially positive treatment effect on patient outcomes, including Tai Chi for hypertension, fall prevention outside of institutions, cognitive performance, osteoarthritis, depression, chronic obstructive pulmonary disease, pain, balance confidence, and muscle strength. However, identified reviews cautioned that firm conclusions cannot be drawn due to methodological limitations in the original studies and/or an insufficient number of existing research studies. Tai Chi has been applied in diverse clinical areas, and for a number of these, systematic reviews have indicated promising results. The evidence map provides a visual overview of Tai Chi research volume and content. PROSPERO CRD42014009907.

  8. Using rule-based natural language processing to improve disease normalization in biomedical text.

    PubMed

    Kang, Ning; Singh, Bharat; Afzal, Zubair; van Mulligen, Erik M; Kors, Jan A

    2013-01-01

    In order for computers to extract useful information from unstructured text, a concept normalization system is needed to link relevant concepts in a text to sources that contain further information about the concept. Popular concept normalization tools in the biomedical field are dictionary-based. In this study we investigate the usefulness of natural language processing (NLP) as an adjunct to dictionary-based concept normalization. We compared the performance of two biomedical concept normalization systems, MetaMap and Peregrine, on the Arizona Disease Corpus, with and without the use of a rule-based NLP module. Performance was assessed for exact and inexact boundary matching of the system annotations with those of the gold standard and for concept identifier matching. Without the NLP module, MetaMap and Peregrine attained F-scores of 61.0% and 63.9%, respectively, for exact boundary matching, and 55.1% and 56.9% for concept identifier matching. With the aid of the NLP module, the F-scores of MetaMap and Peregrine improved to 73.3% and 78.0% for boundary matching, and to 66.2% and 69.8% for concept identifier matching. For inexact boundary matching, performances further increased to 85.5% and 85.4%, and to 73.6% and 73.3% for concept identifier matching. We have shown the added value of NLP for the recognition and normalization of diseases with MetaMap and Peregrine. The NLP module is general and can be applied in combination with any concept normalization system. Whether its use for concept types other than disease is equally advantageous remains to be investigated.

  9. An advanced method for classifying atmospheric circulation types based on prototypes connectivity graph

    NASA Astrophysics Data System (ADS)

    Zagouras, Athanassios; Argiriou, Athanassios A.; Flocas, Helena A.; Economou, George; Fotopoulos, Spiros

    2012-11-01

    The classification of weather maps at various isobaric levels has been used as a methodological tool for many years in problems related to meteorology, climatology, atmospheric pollution, and other fields. Initially the classification was performed manually, the criteria being features of isobars or isopleths of geopotential height, depending on the type of maps to be classified. Although manual classifications integrate the perceptual experience and other unquantifiable qualities of the meteorology specialists involved, they are typically subjective and time-consuming. In recent years, automated approaches to atmospheric circulation classification have been proposed, presenting so-called objective classifications. In this paper a new method for the atmospheric circulation classification of isobaric maps is presented. The method is based on graph theory: it starts with an intelligent prototype selection using an over-partitioning mode of the fuzzy c-means (FCM) algorithm, proceeds to a graph formulation for the entire dataset, and produces the clusters using the contemporary dominant sets clustering method. Graph theory allows a more efficient representation of spatially correlated data than the classical Euclidean space representations used in conventional classification methods. The method has been applied to the classification of the 850 hPa atmospheric circulation over the Eastern Mediterranean. The automated method is evaluated with statistical indexes; the results indicate that the classification compares well with other state-of-the-art automated map classification methods for a variable number of clusters.
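
    The pipeline's overall shape (over-partition into prototypes, build a similarity graph, cluster the graph) can be imitated with off-the-shelf tools, as sketched below. Note the deliberate substitutions: K-Means stands in for fuzzy c-means and spectral clustering for dominant sets, so this is an analogy to the structure of the method, not the method itself.

    ```python
    # Prototype selection, affinity graph, and graph-based clustering sketch.
    import numpy as np
    from sklearn.cluster import KMeans, SpectralClustering

    rng = np.random.default_rng(6)
    maps = rng.normal(size=(1000, 25 * 25))       # flattened 850 hPa grids (toy)

    # Over-partition the daily maps into many prototypes
    prototypes = KMeans(n_clusters=50, n_init=10,
                        random_state=0).fit(maps).cluster_centers_

    # Gaussian affinity graph over prototypes, then graph-based clustering
    d2 = ((prototypes[:, None] - prototypes[None, :]) ** 2).sum(-1)
    affinity = np.exp(-d2 / d2.mean())
    types = SpectralClustering(n_clusters=8, affinity="precomputed",
                               random_state=0).fit_predict(affinity)
    ```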

  10. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
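
    The kernel part of the model can be sketched as building a sparse, row-normalized kernel matrix K from MR-derived voxel features via k-nearest neighbours with Gaussian weights, so that the emission image is represented as K·α. The sizes, feature choice, and parameters below are toy assumptions, not the study's settings.

    ```python
    # Build a sparse MR-guided kernel matrix for kernelized reconstruction.
    import numpy as np
    from scipy.sparse import csr_matrix, diags
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(7)
    mr_features = rng.normal(size=(4096, 3))     # e.g. local MR patch statistics

    k, sigma = 8, 1.0
    nn = NearestNeighbors(n_neighbors=k).fit(mr_features)
    dist, idx = nn.kneighbors(mr_features)

    weights = np.exp(-dist**2 / (2 * sigma**2))  # Gaussian affinity to neighbours
    rows = np.repeat(np.arange(len(mr_features)), k)
    K = csr_matrix((weights.ravel(), (rows, idx.ravel())),
                   shape=(len(mr_features),) * 2)

    row_sums = np.asarray(K.sum(axis=1)).ravel()
    K = diags(1.0 / row_sums) @ K                # row-normalised kernel
    # PET image model: lam = K @ alpha, with alpha estimated by EM
    ```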

  12. Characterizing the local optoelectronic performance of organic solar cells with scanning-probe microscopy

    NASA Astrophysics Data System (ADS)

    Coffey, David C.

    2007-12-01

    Conjugated polymers, small molecules, and colloidal semiconductor nanocrystals are promising materials for use in low-cost, thin-film solar cells. The photovoltaic performance of these materials, however, is highly dependent on film structure, and directly correlating local film structures with device performance remains challenging. This dissertation describes several techniques we have developed to probe and control the local optoelectronic properties of organic semiconducting films. First, with the aim of rapidly fabricating photovoltaic films with varying morphology, we demonstrate that Dip-Pen Nanolithography (DPN) can be used to control nanoscale phase separation with sub-150 nm lateral resolution in polymer films that are 20-80 nm thick. This control is based on writing monolayer chemical templates that nucleate phase separation, and we use the technique to study heterogeneous nucleation in thin films. Second, we use time-resolved electrostatic force microscopy (trEFM) to measure photoexcited charge in polymer films with a resolution of 100 nm and 100 μs. We show that such data can predict the external quantum efficiencies of polymer photodiodes, and can thus link device performance with local optoelectronic properties. When applied to the study of blended polyfluorene films, we show that domain centers can build up charge faster than domain interfaces, which indicates that polymer/polymer blend devices should be modeled as having impure donor/acceptor domains. Third, we use photoconductive atomic force microscopy (pcAFM) to map local photocurrents with 20 nm resolution in polymer/fullerene solar cells, achieving an order of magnitude better resolution than previous techniques. We present photocurrent maps under short-circuit conditions (zero applied bias), as well as under various applied voltages. We find significant variations in the short-circuit current between regions that appear identical in AFM topography. These variations occur from one domain to another, as well as on larger length scales incorporating multiple domains. Our results suggest that organic solar cells can be significantly improved with better donor/acceptor structuring.

  13. Exploring nonlinear feature space dimension reduction and data representation in breast Cadx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: ultrasound (U.S.) with 1126 cases, dynamic contrast-enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high-dimensional feature spaces to more human-interpretable lower-dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced-dimension mapped feature output as input into both linear and nonlinear classifiers: a Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination (ARD) and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD-selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate that the new methods can match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower-dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
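
    As a minimal illustration of the DR step, here is a t-SNE embedding (alongside scikit-learn's SpectralEmbedding, which implements Laplacian eigenmaps) of a synthetic feature matrix sized like the ultrasound set. Note that scikit-learn's t-SNE cannot transform unseen cases, so a deployable classifier pipeline needs more care than this sketch.

    ```python
    # Nonlinear DR of a high-dimensional CADx feature space for inspection.
    import numpy as np
    from sklearn.manifold import TSNE, SpectralEmbedding

    rng = np.random.default_rng(8)
    features = rng.normal(size=(1126, 81))   # stand-in for 81 lesion features
    labels = rng.integers(0, 2, 1126)        # benign / malignant (toy)

    tsne_2d = TSNE(n_components=2, perplexity=30,
                   random_state=0).fit_transform(features)
    le_2d = SpectralEmbedding(n_components=2).fit_transform(features)
    # scatter-plot either embedding coloured by `labels` to judge class structure
    ```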

  14. Spectroscopic Ellipsometry Studies of Thin Film a-Si:H/nc-Si:H Micromorph Solar Cell Fabrication in the p-i-n Superstrate Configuration

    NASA Astrophysics Data System (ADS)

    Huang, Zhiquan

    Spectroscopic ellipsometry (SE) is a non-invasive optical probe that is capable of accurately and precisely measuring the structure of thin films, such as their thicknesses and void volume fractions, and in addition their optical properties, typically defined by the index of refraction and extinction coefficient spectra. Because multichannel detection systems integrated into SE instrumentation have been available for some time now, the data acquisition time possible for complete SE spectra has been reduced significantly. As a result, real time spectroscopic ellipsometry (RTSE) has become feasible for monitoring thin film nucleation and growth during the deposition of thin films as well as during their removal in processes of thin film etching. Also because of the reduced acquisition time, mapping SE is possible by mounting an SE instrument with a multichannel detector onto a mechanical translation stage. Such an SE system is capable of mapping the thin film structure and its optical properties over the substrate area, and thereby evaluating the spatial uniformity of the component layers. In thin film photovoltaics, such structural and optical property measurements mapped over the substrate area can be applied to guide device optimization by correlating small area device performance with the associated local properties. In this thesis, a detailed ex-situ SE study of hydrogenated amorphous silicon (a-Si:H) thin films and solar cells prepared by plasma enhanced chemical vapor deposition (PECVD) has been presented. An SE analysis procedure with step-by-step error minimization has been applied to obtain accurate measures of the structural and optical properties of the component layers of the solar cells. Growth evolution diagrams were developed as functions of the deposition parameters in PECVD for both p-type and n-type layers to characterize the regimes of accumulated thickness over which a-Si:H, hydrogenated nanocrystalline silicon (nc-Si:H) and mixed phase (a+nc)-Si:H thin films are obtained. The underlying materials for these depositions were newly-deposited intrinsic a-Si:H layers on thermal oxide coated crystalline silicon wafers, designed to simulate specific device configurations. As a result, these growth evolution diagrams can be applied to both p-i-n and n-i-p solar cell optimization. In this thesis, the n-layer growth evolution diagram expressed in terms of hydrogen dilution ratio was applied in correlations with the performance of p-i-n single junction devices in order to optimize these devices. Moreover, ex-situ mapping SE was also employed over the area of multilayer structures in order to achieve better statistics for solar cell optimization by correlating structural parameters locally with small area solar cell performance parameters. In the study of (a-Si:H p-i-n)/(nc-Si:H p-i-n) tandem solar cells, RTSE was successfully applied to monitor the fabrication of the top cell, and efforts to optimize the nanocrystalline p-layer and i-layer of the bottom cell were initiated.

  15. Rapid computation of single PET scan rest-stress myocardial blood flow parametric images by table look up.

    PubMed

    Guehl, Nicolas J; Normandin, Marc D; Wooten, Dustin W; Rozen, Guy; Ruskin, Jeremy N; Shoup, Timothy M; Woo, Jonghye; Ptaszek, Leon M; Fakhri, Georges El; Alpert, Nathaniel M

    2017-09-01

    We have recently reported a method for measuring rest-stress myocardial blood flow (MBF) using a single, relatively short, PET scan session. The method requires two IV tracer injections, one to initiate rest imaging and one at peak stress. We previously validated absolute flow quantitation in mL/min/cc for standard bull's eye, segmental analysis. In this work, we extend the method for fast computation of rest-stress MBF parametric images. We provide an analytic solution to the single-scan rest-stress flow model, which is then solved using a two-dimensional table lookup method (LM). Simulations were performed to compare the accuracy and precision of the lookup method with the original nonlinear method (NLM). Then the method was applied to 16 single-scan rest/stress measurements made in 12 pigs: seven studied after infarction of the left anterior descending artery (LAD) territory, and nine imaged in the native state. Parametric maps of rest and stress MBF as well as maps of left (fLV) and right (fRV) ventricular spill-over fractions were generated. Regions of interest (ROIs) for 17 myocardial segments were defined in bull's eye fashion on the parametric maps. The mean of each ROI was then compared to the rest (K1r) and stress (K1s) MBF estimates obtained from fitting the 17 regional TACs with the NLM. In simulation, the LM performed as well as the NLM in terms of precision and accuracy, and no bias was found to be introduced by the use of a predefined two-dimensional lookup table. In experimental data, the parametric maps demonstrated good statistical quality and the LM was computationally much more efficient than the original NLM. Very good agreement was obtained between the mean MBF calculated on the parametric maps for each of the 17 ROIs and the regional MBF values estimated by the NLM (K1map,LM = 1.019 × K1ROI,NLM + 0.019, R2 = 0.986; mean difference = 0.034 ± 0.036 mL/min/cc). We developed a table lookup method for fast computation of parametric images of rest and stress MBF. Our results show the feasibility of obtaining good-quality MBF maps with modest computational resources, demonstrating that the method can be applied in a clinical environment to obtain fully quantitative MBF information. © 2017 American Association of Physicists in Medicine.
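
    The table look-up idea generalizes easily: precompute the forward model on a parameter grid once, then invert each voxel's measured summaries by nearest grid entry instead of running a nonlinear fit per voxel. The forward model below is a toy stand-in for the analytic single-scan rest-stress solution, and the grid range is an assumption.

    ```python
    # Generic 2D table look-up sketch for fast parametric inversion.
    import numpy as np

    def model(k1r, k1s):
        # toy forward model mapping (rest, stress) flow to two measured summaries
        return np.array([k1r + 0.2 * k1s, 0.3 * k1r + k1s])

    grid = np.linspace(0.1, 3.0, 200)
    table = np.array([[model(a, b) for b in grid] for a in grid])  # (200, 200, 2)

    def lookup(measured):
        err = ((table - measured) ** 2).sum(axis=-1)   # squared error per cell
        i, j = np.unravel_index(err.argmin(), err.shape)
        return grid[i], grid[j]                        # (K1 rest, K1 stress)

    print(lookup(np.array([1.1, 0.9])))   # recovers the nearest (K1r, K1s) pair
    ```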

  16. Soft bilateral filtering volumetric shadows using cube shadow maps

    PubMed Central

    Ali, Hatam H.; Sunar, Mohd Shahrizal; Kolivand, Hoshang

    2017-01-01

    Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while preserving crisp boundaries. This research presents a new technique for generating high-quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling when calculating the ray marching. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points used to evaluate light scattering and then introducing bilateral interpolation to improve the volumetric shadows, significantly removing the inherent deficiencies of shadow maps. The technique produces soft, high-quality volumetric shadows with good performance, which shows its potential for interactive applications. PMID:28632740

  17. Active edge maps for medical image registration

    NASA Astrophysics Data System (ADS)

    Kerwin, William; Yuan, Chun

    2001-07-01

    Applying edge detection prior to performing image registration yields several advantages over raw intensity-based registration. Advantages include the ability to register multicontrast or multimodality images, immunity to intensity variations, and the potential for computationally efficient algorithms. In this work, a common framework for edge-based image registration is formulated as an adaptation of snakes used in boundary detection. Called active edge maps, the new formulation finds a one-to-one transformation T(x) that maps points in a source image to corresponding locations in a target image using an energy minimization approach. The energy consists of an image component that is small when edge features are well matched in the two images, and an internal term that restricts T(x) to allowable configurations. The active edge map formulation is illustrated here with a specific example developed for affine registration of carotid artery magnetic resonance images. In this example, edges are identified using a magnitude-of-gradient operator, image energy is determined using a Gaussian weighted distance function, and the internal energy includes separate, adjustable components that control volume preservation and rigidity.
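
    A rough sketch of the kind of energy such a formulation minimizes is shown below, with a Gaussian-weighted distance image term and a simple rigidity penalty. The specific terms, names, and parameters are illustrative assumptions patterned on the abstract, not the authors' exact functional.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def edge_energy(T, src_edge_pts, tgt_edge_map, sigma=5.0, lam=1.0):
        """Energy of an affine map T (2x3) for edge-based registration: an
        image term that is small where warped source edges land near target
        edges (Gaussian-weighted distance), plus a simple rigidity penalty."""
        dist = distance_transform_edt(~tgt_edge_map.astype(bool))  # px to nearest edge
        pts = np.c_[src_edge_pts, np.ones(len(src_edge_pts))] @ T.T  # warp (x, y)
        ij = np.clip(np.round(pts).astype(int), 0,
                     np.array(tgt_edge_map.shape)[::-1] - 1)
        image_term = (1.0 - np.exp(-dist[ij[:, 1], ij[:, 0]] ** 2
                                   / (2.0 * sigma**2))).mean()
        rigidity = np.linalg.norm(T[:, :2].T @ T[:, :2] - np.eye(2))
        return image_term + lam * rigidity  # minimize over T
    ```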

  18. MAP as a model for practice-based learning and improvement in child psychiatry training.

    PubMed

    Kataoka, Sheryl H; Podell, Jennifer L; Zima, Bonnie T; Best, Karin; Sidhu, Shawn; Jura, Martha Bates

    2014-01-01

    There is not only a growing literature demonstrating the positive outcomes that result from implementing evidence-based treatments (EBTs), but also a body of studies suggesting that these EBTs are often not delivered in "usual care" practices. One way to address this deficit is to improve the quality of psychotherapy teaching for clinicians-in-training. The Accreditation Council for Graduate Medical Education (ACGME) requires all training programs to assess residents in a number of competencies, including Practice-Based Learning and Improvement (PBLI). This article describes the piloting of Managing and Adapting Practice (MAP) for child psychiatry fellows, to teach them both EBT and PBLI skills. Eight child psychiatry trainees received 5 full days of MAP training and are delivering MAP in a year-long outpatient teaching clinic. In this setting, MAP is applied to the complex, multiply diagnosed psychiatric patients who present to this clinic. This article describes how MAP tools and resources assist in teaching trainees each of the eight required competency components of PBLI, including identifying deficits in expertise, setting learning goals, performing learning activities, conducting quality improvement methods in practice, incorporating formative feedback, using scientific studies to inform practice, using technology for learning, and participating in patient education. A case example illustrates the use of MAP in teaching PBLI. MAP provides a unique way to teach important quality improvement and practice-based learning skills to trainees while training them in important psychotherapy competence.

  19. Surface analysis and mechanical behaviour mapping of vertically aligned CNT forest array through nanoindentation

    NASA Astrophysics Data System (ADS)

    Koumoulos, Elias P.; Charitidis, C. A.

    2017-02-01

    Carbon nanotube (CNT) based architectures have attracted increasing scientific interest owing to their exceptional performance, rendering them promising candidates for advanced industrial applications in the nanotechnology field. Although individual CNTs are considered among the strongest known materials, much less is known about other CNT forms, such as CNT arrays, in terms of their mechanical performance (integrity). In this work, the thermal chemical vapor deposition (CVD) method is employed to produce vertically aligned multiwall (VA-MW) CNT carpets. Their structural properties were studied by means of scanning electron microscopy (SEM), X-ray diffraction (XRD) and Raman spectroscopy, while their hydrophobic behavior was investigated via contact angle measurements. The resistance to indentation deformation of VA-MWCNT carpets was investigated through the nanoindentation technique. The synthesized VA-MWCNT carpets consisted of well-aligned MWCNTs. Static contact angle measurements were performed with water and glycerol, revealing a rather super-hydrophobic behavior. The structural analysis, hydrophobic behavior and indentation response of VA-MWCNT carpets synthesized via the CVD method are clearly demonstrated. Additionally, cyclic indentation load-depth curves were acquired, and hysteresis loops were observed in the indenter loading-unloading cycle due to the local stress distribution. Hardness (as resistance to applied load) and modulus mapping, at 200 nm of displacement for a 70 μm² grid, is presented. In the maps, the resistance is clearly divided into two regions, namely the MWCNT probing area and the in-between MWCNT-MWCNT interface area.

  20. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    NASA Astrophysics Data System (ADS)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

    An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps; second, local performance parameters such as the temperature at component interfaces and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of the adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of the local adaptation factors were generated based on the difference between the first adapted engine model and performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on performance test data in the sea-level static condition. In flight condition at 20,000 ft and 0.9 Mach number, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5% to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), with the difference reduced from 3.2% to 0.4%.
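
    Map adaptation of this kind is commonly implemented as multiplicative scaling of the stored compressor map quantities. The sketch below shows that idea under assumed factor names; the paper derives its adaptation factors by optimization (MIGA) against performance test data, which is not reproduced here.

    ```python
    def adapt_compressor_map(mass_flow, pressure_ratio, efficiency,
                             f_w=1.0, f_pr=1.0, f_eta=1.0):
        """Apply multiplicative adaptation (scaling) factors to compressor map
        quantities; pressure ratio is scaled about 1 so the map's zero-work
        line is preserved. Factor names are illustrative assumptions."""
        return (mass_flow * f_w,
                1.0 + (pressure_ratio - 1.0) * f_pr,
                efficiency * f_eta)
    ```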

  1. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.

  2. Mapping and monitoring changes in vegetation communities of Jasper Ridge, CA, using spectral fractions derived from AVIRIS images

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Roberts, Dar A.; Adams, John B.; Smith, Milton O.

    1993-01-01

    An important application of remote sensing is to map and monitor changes over large areas of the land surface. This is particularly significant given the current interest in monitoring vegetation communities. Most traditional methods for mapping different types of plant communities are based upon statistical classification techniques (e.g., parallelepiped, nearest-neighbor) applied to uncalibrated multispectral data. Classes from these techniques are typically difficult to interpret (particularly for a field ecologist/botanist). Also, classes derived for one image can be very different from those derived from another image of the same area, making interpretation of observed temporal changes nearly impossible. More recently, neural networks have been applied to classification. Neural network classification, based upon spectral matching, is weak in dealing with spectral mixtures (a condition prevalent in images of natural surfaces). Another approach to mapping vegetation communities is based on spectral mixture analysis, which can provide a consistent framework for image interpretation. Roberts et al. (1990) mapped vegetation using the band residuals from a simple mixing model (the same spectral endmembers applied to all image pixels). Sabol et al. (1992b) and Roberts et al. (1992) used different methods to apply the most appropriate spectral endmembers to each image pixel, thereby allowing mapping of vegetation based upon the different endmember spectra. In this paper, we describe a new approach to classification of vegetation communities based upon the spectral fractions derived from spectral mixture analysis. This approach was applied to three 1992 AVIRIS images of Jasper Ridge, California to observe seasonal changes in surface composition.
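
    Spectral mixture analysis of this kind rests on the linear mixing model, in which each pixel spectrum is a nonnegative combination of endmember spectra. A minimal sketch follows; the nonnegativity constraint and the solver choice are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix(pixel_spectrum, endmembers):
        """Linear spectral unmixing of one pixel: solve
        pixel ≈ endmembers @ fractions with fractions >= 0.

        endmembers: (n_bands, n_endmembers) matrix of endmember spectra.
        Returns the fraction vector and the residual norm."""
        fractions, resid = nnls(endmembers, pixel_spectrum)
        return fractions, resid
    ```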

  3. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hub, Martina; Thieke, Christian; Kessler, Marc L.

    2012-04-15

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well.

  4. Testing random forest classification for identifying lava flows and mapping age groups on a single Landsat 8 image

    NASA Astrophysics Data System (ADS)

    Li, Long; Solana, Carmen; Canters, Frank; Kervyn, Matthieu

    2017-10-01

    Mapping lava flows using satellite images is an important application of remote sensing in volcanology. Several volcanoes have been mapped through remote sensing using a wide range of data, from optical to thermal infrared and radar images, using techniques such as manual mapping, supervised/unsupervised classification, and elevation subtraction. So far, spectral-based mapping applications have mainly focused on traditional pixel-based classifiers, without much investigation into the added value of object-based approaches or the advantages of machine learning algorithms. In this study, Nyamuragira, characterized by a series of >20 overlapping lava flows erupted over the last century, was used as a case study. The random forest classifier was tested to map lava flows based on pixels and objects. Image classification was conducted for the 20 individual flows and for 8 groups of flows of similar age using a Landsat 8 image and a DEM of the volcano, both at 30 m spatial resolution. Results show that object-based classification produces maps with continuous and homogeneous lava surfaces, in agreement with the physical characteristics of lava flows, while lava flows mapped through pixel-based classification are heterogeneous and fragmented, with much "salt-and-pepper" noise. In terms of accuracy, both pixel-based and object-based classifications perform well, but the former results in higher accuracies than the latter, except for mapping lava flow age groups without using topographic features. It is concluded that despite spectral similarity, lava flows of contrasting age can be well discriminated and mapped by means of image classification. The classification approach demonstrated in this study requires only easily accessible image data and can be applied to other volcanoes as well, provided there is sufficient information to calibrate the mapping.
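
    For concreteness, a minimal pixel-based random forest sketch is shown below, assuming per-pixel feature vectors stacked from the Landsat 8 bands and DEM derivatives; the function name, feature layout, and hyperparameters are illustrative assumptions, not the study's configuration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_and_map(X_train, y_train, X_image):
        """Pixel-based random forest classification sketch.

        X_train: (n_train, n_features) Landsat 8 bands + DEM derivatives;
        y_train: lava-flow (or age-group) labels; X_image: all image pixels."""
        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        rf.fit(X_train, y_train)
        return rf.predict(X_image)  # reshape to the image grid afterwards
    ```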

  5. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    PubMed Central

    Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.

    2012-01-01

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well. PMID:22482640

  6. Easy monitoring of velocity fields in microfluidic devices using spatiotemporal image correlation spectroscopy.

    PubMed

    Travagliati, Marco; Girardo, Salvatore; Pisignano, Dario; Beltram, Fabio; Cecchini, Marco

    2013-09-03

    Spatiotemporal image correlation spectroscopy (STICS) is a simple and powerful technique, well established as a tool to probe protein dynamics in cells. Recently, its potential as a tool to map velocity fields in lab-on-a-chip systems was discussed. However, the lack of studies on its performance has prevented its use in microfluidics applications. Here, we systematically and quantitatively explore STICS microvelocimetry in microfluidic devices. We exploit a simple experimental setup, based on a standard bright-field inverted microscope (no fluorescence required) and a high-fps camera, and apply STICS to map liquid flow in polydimethylsiloxane (PDMS) microchannels. Our data demonstrate optimal 2D velocimetry for flows up to 10 mm/s and spatial resolution down to 5 μm.
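
    The core of correlation-based velocimetry is that the spatial cross-correlation of two frames separated in time peaks at the mean displacement of the imaged pattern. The sketch below illustrates this with an FFT-based correlation; it is a simplified stand-in for full STICS (which correlates over many frames and time lags), and the sign convention of the peak offset should be validated against a known flow.

    ```python
    import numpy as np

    def correlation_velocity(frame_a, frame_b, dt, pixel_size):
        """Estimate a mean flow vector from the spatial cross-correlation
        of two frames separated by dt: the correlation peak offset from
        the center is the mean displacement in pixels."""
        a = frame_a - frame_a.mean()
        b = frame_b - frame_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        corr = np.fft.fftshift(corr)
        peak = np.array(np.unravel_index(corr.argmax(), corr.shape))
        dy, dx = peak - np.array(corr.shape) // 2
        return np.array([dx, dy]) * pixel_size / dt  # velocity, length/time
    ```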

  7. Comparison of tissue viability imaging and colorimetry: skin blanching.

    PubMed

    Zhai, Hongbo; Chan, Heidi P; Farahmand, Sara; Nilsson, Gert E; Maibach, Howard I

    2009-02-01

    Operator-independent assessment of skin blanching is important in the development and evaluation of topically applied steroids. Spectroscopic instruments based on hand-held probes, however, include elements of operator dependence such as differences in applied pressure and probe misalignment, while laser Doppler-based methods are better suited for demonstrating skin vasodilatation than vasoconstriction. To demonstrate the potential of the emerging technology of Tissue Viability Imaging (TiVi) for objective and operator-independent assessment of skin blanching, the WheelsBridge TiVi600 Tissue Viability Imager was used for quantification of human skin blanching, with the Minolta chromameter CR 200 as an independent colorimeter reference method. Desoximetasone gel 0.05% was applied topically on the volar side of the forearm under occlusion for 6 h in four healthy adults. In a separate study, the induction of blanching in the occlusion phase was mapped using a transparent occlusion cover. The relative uncertainty in the blanching estimate produced by the Tissue Viability Imager was about 5%, similar to that of the chromameter operated by a single user and taking the a* parameter as a measure of blanching. Estimation of skin blanching could also be performed in the presence of a transient paradoxical erythema using the integrated TiVi software. The successive induction of skin blanching during the occlusion phase could readily be mapped by the Tissue Viability Imager. TiVi seems suitable for operator-independent and remote mapping of human skin blanching, eliminating the main disadvantages of methods based on hand-held probes.

  8. Adapting sensory data for multiple robots performing spill cleanup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storjohann, K.; Saltzen, E.

    1990-09-01

    This paper describes a possible method of converting a single-performing-robot algorithm into a multiple-performing-robot algorithm without the need to modify previously written code. The algorithm to be converted involves spill detection and clean-up by the HERMIES-III mobile robot. In order to achieve the goal of multiple performing robots with this algorithm, two steps are taken. First, the task is formally divided into two sub-tasks, spill detection and spill clean-up, the former of which is allocated to the added performing robot, HERMIES-IIB. Second, an inverse perspective mapping is applied to the data acquired by the new performing robot (HERMIES-IIB), allowing the data to be processed by the previously written algorithm without rewriting the code. 6 refs., 4 figs.
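
    Inverse perspective mapping is typically realized as a homography that warps the camera view to a top-down ground-plane view. The sketch below shows that idea; the four point correspondences are placeholder values, and this is an illustrative sketch rather than the paper's implementation.

    ```python
    import cv2
    import numpy as np

    def inverse_perspective(image):
        """Warp a forward-looking camera frame to a top-down ground-plane
        view. The correspondences below are placeholders; in practice they
        come from known ground-plane reference points in the workspace."""
        src = np.float32([[220, 480], [420, 480], [400, 300], [240, 300]])  # image px
        dst = np.float32([[0, 400], [200, 400], [200, 0], [0, 0]])          # ground px
        H, _ = cv2.findHomography(src, dst)
        return cv2.warpPerspective(image, H, (200, 400))  # top-down raster
    ```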

  9. Bit-level plane image encryption based on coupled map lattice with time-varying delay

    NASA Astrophysics Data System (ADS)

    Lv, Xiupin; Liao, Xiaofeng; Yang, Bo

    2018-04-01

    Most existing image encryption algorithms have two basic properties, confusion and diffusion, in a pixel-level plane based on various chaotic systems. However, permutation in a pixel-level plane cannot change the statistical characteristics of an image, and many existing color image encryption schemes use the same method to encrypt the R, G and B components, meaning the three color components of a color image are processed three times independently. Additionally, the dynamical performance of a single chaotic system degrades greatly under finite precision in computer simulations. In this paper, a novel coupled map lattice with time-varying delay is therefore applied to bit-level plane encryption of color images to address these issues. A spatiotemporal chaotic system, with both a much longer period under digitization and excellent cryptographic performance, is recommended. The time-varying delay embedded in the coupled map lattice enhances the dynamical behavior of the system. The bit-level plane image encryption algorithm greatly reduces the statistical characteristics of an image through the scrambling processing. The R, G and B components cross and mix with one another, which reduces the correlation among the three components. Finally, simulations are carried out, and all the experimental results illustrate that the proposed image encryption algorithm is highly secure and at the same time demonstrates superior performance.
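
    To make the keystream generator concrete, here is a minimal sketch of a coupled map lattice whose diffusive coupling uses a delayed lattice state with a time-varying delay. The local logistic map, the delay schedule, and all parameter values are illustrative assumptions, not the paper's exact system.

    ```python
    import numpy as np

    def logistic(x, mu=3.99):
        return mu * x * (1.0 - x)  # local chaotic map; values stay in [0, 1]

    def cml_time_varying_delay(L=64, steps=500, eps=0.3, tau_max=5, seed=1):
        """Iterate a coupled map lattice whose coupling term uses a delayed
        lattice state with a time-varying delay. The final state could then
        drive bit-plane scrambling of the R, G and B components."""
        rng = np.random.default_rng(seed)
        hist = [rng.random(L) for _ in range(tau_max + 1)]  # state history
        for n in range(steps):
            x = hist[-1]
            tau = 1 + (n % tau_max)          # assumed time-varying delay
            xd = hist[-1 - tau]              # delayed lattice state
            coupled = 0.5 * (np.roll(logistic(xd), 1) + np.roll(logistic(xd), -1))
            hist.append((1.0 - eps) * logistic(x) + eps * coupled)
            hist = hist[-(tau_max + 1):]     # keep only the needed history
        return hist[-1]
    ```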

  10. A case study of a precision fertilizer application task generation for wheat based on classified hyperspectral data from UAV combined with farm history data

    NASA Astrophysics Data System (ADS)

    Kaivosoja, Jere; Pesonen, Liisa; Kleemola, Jouko; Pölönen, Ilkka; Salo, Heikki; Honkavaara, Eija; Saari, Heikki; Mäkynen, Jussi; Rajala, Ari

    2013-10-01

    Different remote sensing methods for detecting variations in agricultural fields have been studied in the last two decades. Systems already exist for planning and applying, e.g., nitrogen fertilizers to cereal crop fields. However, they have disadvantages such as high costs, and adaptability, reliability, and resolution issues, as well as problems with the dissemination of final products. With unmanned aerial vehicle (UAV) based airborne methods, data collection can be performed cost-efficiently with the desired spatial and temporal resolutions, below clouds and under diverse weather conditions. A new Fabry-Perot interferometer based hyperspectral imaging technology implemented in a UAV has been introduced. In this research, we studied the possibilities of exploiting classified raster maps from hyperspectral data to produce a work task for a precision fertilizer application. The UAV flight campaign was performed in a wheat test field in Finland in the summer of 2012. Based on the campaign, we produced classified raster maps estimating the biomass and nitrogen contents at approximately stage 34 on the Zadoks scale. We combined the classified maps with farm history data such as previous yield maps. We then generalized the combined results and transformed them into a vectorized zonal task map suitable for farm machinery. We present the selected weights for each dataset in the processing chain and the resulting variable rate application (VRA) task. The additional fertilization according to the generated task was shown to be beneficial for yield. However, our study indicates that there are still many uncertainties within the process chain.

  11. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing

    PubMed Central

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-01-01

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently, consumer-grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratios of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently, data of both high spatial and spectral resolution are not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications. PMID:26437410
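
    The band-ratio search at the heart of OBRA can be sketched compactly: for every band pair, regress depth against the log band ratio and keep the pair with the highest R². The sketch below assumes strictly positive reflectance values; function and variable names are illustrative.

    ```python
    import numpy as np

    def obra(reflectance, depth):
        """Optimal Band Ratio Analysis sketch: regress depth against
        ln(band_i / band_j) for every band pair and keep the best R^2.

        reflectance: (n_samples, n_bands), strictly positive;
        depth: (n_samples,) field-measured submergence depths."""
        n_bands = reflectance.shape[1]
        best_pair, best_r2 = None, -np.inf
        for i in range(n_bands):
            for j in range(n_bands):
                if i == j:
                    continue
                x = np.log(reflectance[:, i] / reflectance[:, j])
                slope, intercept = np.polyfit(x, depth, 1)
                resid = depth - (slope * x + intercept)
                r2 = 1.0 - (resid**2).sum() / ((depth - depth.mean()) ** 2).sum()
                if r2 > best_r2:
                    best_pair, best_r2 = (i, j, slope, intercept), r2
        return best_pair, best_r2
    ```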

  12. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing.

    PubMed

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-09-30

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently, consumer-grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratios of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently, data of both high spatial and spectral resolution are not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications.

  13. Application of nonlinear transformations to automatic flight control

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Su, R.; Hunt, L. R.

    1984-01-01

    The theory of transformations of nonlinear systems to linear ones is applied to the design of an automatic flight controller for the UH-1H helicopter. The helicopter mathematical model is described and it is shown to satisfy the necessary and sufficient conditions for transformability. The mapping is constructed, taking the nonlinear model to canonical form. The performance of the automatic control system in a detailed simulation on the flight computer is summarized.

  14. Deductive Sensemaking Principles Using Personal Constructs of the Field Commanders

    DTIC Science & Technology

    2007-06-01

    The virtue of defining and measuring the commander's performance solely... seeing the world in which he lives; (b) the individual builds constructs and tries them on evolving contexts; (c) the construct can be applied to... ability to (a) anticipate future states of nature; (b) cope with state changes in the battlefield; and (c) replicate past experiences and map them into

  15. An Annotated Bibliography on Tactical Map Display Symbology

    DTIC Science & Technology

    1989-08-01

    Failure of attention to be focused on one element selectively in filtering tasks where only that one element was relevant to the discrimination. Failure of... The present study evaluates a class of models of human information processing made popular by Broadbent. A brief tachistoscopic display of one or two... 213-219. Two experiments were performed to test Neisser's two-stage model of recognition as applied to matching. Evidence of parallel processing was

  16. Applying Lidar and High-Resolution Multispectral Imagery for Improved Quantification and Mapping of Tundra Vegetation Structure and Distribution in the Alaskan Arctic

    NASA Astrophysics Data System (ADS)

    Greaves, Heather E.

    Climate change is disproportionately affecting high northern latitudes, and the extreme temperatures, remoteness, and sheer size of the Arctic tundra biome have always posed challenges that make application of remote sensing technology especially appropriate. Advances in high-resolution remote sensing continually improve our ability to measure characteristics of tundra vegetation communities, which have been difficult to characterize previously due to their low stature and their distribution in complex, heterogeneous patches across large landscapes. In this work, I apply terrestrial lidar, airborne lidar, and high-resolution airborne multispectral imagery to estimate tundra vegetation characteristics for a research area near Toolik Lake, Alaska. Initially, I explored methods for estimating shrub biomass from terrestrial lidar point clouds, finding that a canopy-volume based algorithm performed best. Although shrub biomass estimates derived from airborne lidar data were less accurate than those from terrestrial lidar data, algorithm parameters used to derive biomass estimates were similar for both datasets. Additionally, I found that airborne lidar-based shrub biomass estimates were just as accurate whether calibrated against terrestrial lidar data or harvested shrub biomass, suggesting that terrestrial lidar could potentially replace destructive biomass harvest. Along with the smoothed Normalized Difference Vegetation Index (NDVI) derived from airborne imagery, airborne lidar-derived canopy volume was an important predictor in a Random Forest model trained to estimate shrub biomass across the 12.5 km² covered by our lidar and imagery data. The resulting 0.80 m resolution shrub biomass maps should provide important benchmarks for change detection in the Toolik area, especially as deciduous shrubs continue to expand in tundra regions. Finally, I applied 33 lidar- and imagery-derived predictor layers in a validated Random Forest modeling approach to map vegetation community distribution at 20 cm resolution across the data collection area, creating maps that will enable validation of coarser maps, as well as study of fine-scale ecological processes in the area. These projects have pushed the limits of what can be accomplished for vegetation mapping using airborne remote sensing in a challenging but important region; it is my hope that the methods explored here will illuminate potential paths forward as landscapes and technologies inevitably continue to change.

  17. Mapping forested wetlands in the Great Zhan River Basin through integrating optical, radar, and topographical data classification techniques.

    PubMed

    Na, X D; Zang, S Y; Wu, C S; Li, W L

    2015-11-01

    Knowledge of the spatial extent of forested wetlands is essential to many studies, including wetland functioning assessment, greenhouse gas flux estimation, and suitable wildlife habitat identification. For discriminating forested wetlands from their adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. While these efforts have had some success, there is still no consensus on the optimal approach for mapping forested wetlands. To address this problem, we examined two machine learning approaches, the random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied both within pixel-based and object-based classification frameworks. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than the per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference in their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions for forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. As for the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficient between results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object-based classification using the KNN algorithm showed noticeable commissions for forested wetlands and omissions for agricultural land. This research demonstrates that object-based classification with RF using optical, radar, and topographical data improves the mapping accuracy of land covers and provides a feasible approach to discriminating forested wetlands from the other land cover types in forested areas.

  18. Enabling high-quality observations of surface imperviousness for water runoff modelling from unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Tokarczyk, Piotr; Leitao, Joao Paulo; Rieckermann, Jörg; Schindler, Konrad; Blumensaat, Frank

    2015-04-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the area. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases, as in many parts of the globe accurate land-use information is generally lacking because detailed image data are unavailable. Modern unmanned air vehicles (UAVs) allow acquiring high-resolution images at a local level at comparably low cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual sub-catchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model's channel flow prediction performance through a cross-comparison with reference flow measured at the catchment outlet. We show that imperviousness maps generated using UAV imagery processed with modern classification methods achieve accuracy comparable with standard, off-the-shelf aerial imagery. In the examined case study, we find that the different imperviousness maps only have a limited influence on modelled surface runoff and pipe flows. We conclude that UAV imagery represents a valuable alternative data source for urban drainage model applications due to the possibility of flexibly acquiring up-to-date aerial images at a superior quality and a competitive price. Our analyses furthermore suggest that spatially more detailed urban drainage models can even better benefit from the full detail of UAV imagery.

  19. High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.

    2015-01-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the area. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases, as in many parts of the globe accurate land-use information is generally lacking because detailed image data are unavailable. Modern unmanned air vehicles (UAVs) allow acquiring high-resolution images at a local level at comparably low cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model's channel flow prediction performance through a cross-comparison with reference flow measured at the catchment outlet. We show that imperviousness maps generated using UAV imagery processed with modern classification methods achieve accuracy comparable with standard, off-the-shelf aerial imagery. In the examined case study, we find that the different imperviousness maps only have a limited influence on modelled surface runoff and pipe flows. We conclude that UAV imagery represents a valuable alternative data source for urban drainage model applications due to the possibility of flexibly acquiring up-to-date aerial images at a superior quality and a competitive price. Our analyses furthermore suggest that spatially more detailed urban drainage models can even better benefit from the full detail of UAV imagery.

  20. Constrained energy minimization applied to apparent reflectance and single-scattering albedo spectra: a comparison

    NASA Astrophysics Data System (ADS)

    Resmini, Ronald G.; Graver, William R.; Kappus, Mary E.; Anderson, Mark E.

    1996-11-01

    Constrained energy minimization (CEM) has been applied to mapping the quantitative areal distribution of the mineral alunite in an approximately 1.8 km² area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping that requires only the spectrum of the mineral to be mapped; a priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single-scattering albedo (SSA) spectra. The radiance data were acquired by the 210-channel, 0.4 μm to 2.5 μm airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid, as surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures, and thus spectral mixing analyses must take account of nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The alunite maps produced from the SSA and AR spectra are similar though not identical. Alunite is slightly more widespread based on processing with the SSA spectra. Further, fractional abundances derived from the SSA spectra are, in general, higher than those derived from the AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
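
    The CEM filter has a well-known closed form: the weight vector minimizes the average output energy over the scene subject to a unit response to the target spectrum. The sketch below shows that computation; the function and array names are illustrative.

    ```python
    import numpy as np

    def cem(cube, target):
        """Constrained energy minimization: find w minimizing the average
        filter output energy subject to w @ target == 1, then apply it.

        cube: (n_pixels, n_bands) spectra; target: (n_bands,) mineral
        spectrum. Returns a per-pixel abundance-like score."""
        R = cube.T @ cube / cube.shape[0]    # sample correlation matrix
        rinv_t = np.linalg.solve(R, target)
        w = rinv_t / (target @ rinv_t)       # closed-form CEM weights
        return cube @ w
    ```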

  1. A Voxel-Based Approach to Explore Local Dose Differences Associated With Radiation-Induced Lung Damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palma, Giuseppe; Monti, Serena; D'Avino, Vittoria

    Purpose: To apply a voxel-based (VB) approach aimed at exploring local dose differences associated with late radiation-induced lung damage (RILD). Methods and Materials: An interinstitutional database of 98 patients who were Hodgkin lymphoma (HL) survivors treated with postchemotherapy supradiaphragmatic radiation therapy was analyzed in the study. Eighteen patients experienced late RILD, classified according to the Radiation Therapy Oncology Group scoring system. Each patient's computed tomographic (CT) scan was normalized to a single reference case anatomy (common coordinate system, CCS) through a log-diffeomorphic approach. The obtained deformation fields were used to map the dose of each patient into the CCS. The coregistration robustness and the dose mapping accuracy were evaluated by geometric and dose scores. Two different statistical mapping schemes for nonparametric multiple permutation inference on dose maps were applied, and the corresponding P<.05 significance lung subregions were generated. A receiver operating characteristic (ROC)-based test was performed on the mean dose extracted from each subregion. Results: The coregistration process resulted in a geometrically robust and accurate dose warping. A significantly higher dose was consistently delivered to RILD patients in voxel clusters near the peripheral medial-basal portion of the lungs. The area under the ROC curves (AUC) from the mean dose of the voxel clusters was higher than the corresponding AUC derived from the total lung mean dose. Conclusions: We implemented a framework including a robust registration process and a VB approach accounting for the multiple comparison problem in dose-response modeling, and applied it to a cohort of HL survivors to explore a local dose–RILD relationship in the lungs. Patients with RILD received a significantly greater dose in parenchymal regions where low doses (∼6 Gy) were delivered. Interestingly, the relation between differences in the high-dose range and RILD seems to lack a clear spatial signature.

  2. Temporal Stability of Rotors and Atrial Activation Patterns in Persistent Human Atrial Fibrillation: A High-Density Epicardial Mapping Study of Prolonged Recordings.

    PubMed

    Walters, Tomos E; Lee, Geoffrey; Morris, Gwilym; Spence, Steven; Larobina, Marco; Atkinson, Victoria; Antippa, Phillip; Goldblatt, John; Royse, Alistair; O'Keefe, Michael; Sanders, Prashanthan; Morton, Joseph B; Kistler, Peter M; Kalman, Jonathan M

    This study aimed to determine the spatiotemporal stability of rotors and other atrial activation patterns over 10 min in longstanding, persistent atrial fibrillation (AF), along with the relationship of rotors to short cycle-length (CL) activity. The prevalence, stability, and mechanistic importance of rotors in human AF remain unclear. Epicardial mapping was performed in 10 patients undergoing cardiac surgery, with bipolar electrograms recorded over 10 min using a triangular plaque (area: 6.75 cm²; 117 bipoles; spacing: 2.5 mm) applied to the left atrial posterior wall (n = 9) and the right atrial free wall (n = 4). Activations were identified throughout 6 discrete 10-s segments of AF spanning 10 min, and dynamic activation mapping was performed. The distributions of 4,557 generated activation patterns within each mapped region were compared between the 6 segments. The dominant activation pattern was the simultaneous presence of multiple narrow wave fronts (26%). Twelve percent of activations represented transient rotors, seen in 85% of mapped regions, with a median duration of 3 rotations. A total of 87% were centered on an area of short CL activity (<100 ms), although such activity had a positive predictive value for rotors of only 0.12. The distribution of activation patterns and wave-front directionality were highly stable over time, with a single dominant pattern within a 10-s AF segment recurring across all 6 segments in 62% of mapped regions. In patients with longstanding, persistent AF, activation patterns are spatiotemporally stable over 10 min. Transient rotors can be demonstrated in the majority of mapped regions, are spatiotemporally associated with short CL activity, and, when recurrent, demonstrate anatomical determinism. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  3. Harmonisation of geological data to support geohazard mapping: the case of eENVplus project

    NASA Astrophysics Data System (ADS)

    Cipolloni, Carlo; Krivic, Matija; Novak, Matevž; Pantaloni, Marco; Šinigoj, Jasna

    2014-05-01

    In the eENVplus project, which aims to unlock huge amounts of environmental data managed by national and regional environmental agencies and other public and private organisations, we have developed a cross-border pilot on geological data harmonisation through the integration and harmonisation of existing services. The pilot analyses the methodology and results of the OneGeology-Europe project, elaborated at the scale of 1:1M, to point out difficulties and unsolved problems highlighted during that project. This preliminary analysis is followed by a comparison of two geological maps provided by the neighbouring countries, with the objective of identifying the geometric and semantic anomalous contacts between geological polygons and lines in the maps. This phase will be followed by a detailed-scale geological map analysis aimed at resolving the anomalies identified in the previous phase. The two Geological Surveys involved in the pilot will discuss the problems highlighted during this phase. Subsequently, the semantic description will be redefined and the geometry of the polygons in the geological maps will be redrawn or adjusted according to a lithostratigraphic approach that takes into account the homogeneity of age, lithology, depositional environment and consolidation degree of geological units. The two Geological Surveys have decided to apply the harmonisation process to two different datasets: the first is the Geological Map at the scale of 1:1,000,000, partially harmonised within the OneGeology-Europe project, which will be re-aligned with the GE INSPIRE data model to produce data and services compliant with the INSPIRE target schema. The main target of the Geological Surveys is to produce data and web services compliant with the wider international schema, where there are more options to provide data with the specific attributes needed to obtain the geohazard map, as in the case of this pilot project; we have therefore decided to apply the GeoSciML 3.2 schema to the dataset representing the Geological Map at the scale of 1:100,000. Within the pilot, two main geohazard examples will be realised with a semi-automated procedure based on a specific tool component integrated in the client: a landslide susceptibility map and a potential flooding map. In this work we present the first results obtained with the use-case geoprocessing procedure in the first test phase, where we developed a dataset compliant with GE INSPIRE to produce the landslide and flooding susceptibility maps.

  4. A Framework for the Validation of Probabilistic Seismic Hazard Analysis Maps Using Strong Ground Motion Data

    NASA Astrophysics Data System (ADS)

    Bydlon, S. A.; Beroza, G. C.

    2015-12-01

    Recent debate on the efficacy of Probabilistic Seismic Hazard Analysis (PSHA) and the utility of hazard maps (e.g., Stein et al., 2011; Hanks et al., 2012) has prompted a need for validation of such maps using recorded strong ground motion data. Unfortunately, strong motion records are limited spatially and temporally relative to the areas and time windows hazard maps encompass. We develop a framework to test the predictive power of PSHA maps that is flexible with respect to a map's specified probability of exceedance and time window, and to the strong motion receiver coverage. Using a combination of recorded and interpolated strong motion records produced through the ShakeMap environment, we compile a record of ground motion intensity measures for California from 2002 to present. We use this information to perform an area-based test of California PSHA maps inspired by the work of Ward (1995). Though this framework is flexible in that it can be applied to seismically active areas where ShakeMap-like ground shaking interpolations have been or can be produced, the testing procedure is limited by the relatively short lifetime of strong motion recordings and by the desire to test only with data collected after the development of the PSHA map under scrutiny. To account for this, we use the assumption that PSHA maps are time-independent to adapt the testing procedure for periods of recorded data shorter than the lifetime of a map. We note that the accuracy of this testing procedure will only improve as more data are collected, or as the time horizon of interest is reduced, as has been proposed for maps of areas experiencing induced seismicity. We believe that this procedure can be used to determine whether PSHA maps are accurately portraying seismic hazard and whether discrepancies are localized or systemic.

  5. Making a georeferenced mosaic of historical map series using constrained polynomial fit

    NASA Astrophysics Data System (ADS)

    Molnár, G.

    2009-04-01

    Present-day GIS software packages make it possible to handle several hundred rasterised map sheets. For proper usage of such datasets we usually have two requirements: first, the map sheets should be georeferenced; second, the georeferenced maps should fit together properly, without overlaps or gaps. Both requirements can be fulfilled easily if the geodetic background of the map series is accurate and the projection of the map series is known. In this case the individual map sheets should be georeferenced in the projected coordinate system of the map series, i.e., every map sheet is georeferenced using the overprinted coordinate grid or the projected coordinates of the image corners as ground control points (GCPs). If after this georeferencing procedure the map sheets do not fit together (for example because a different projection was used for every map sheet, as in the case of the Third Military Survey), a common projection can be chosen and all the georeferenced maps transformed to this common projection using a map-to-map transformation. If the geodetic background is not so strong, i.e., there are distortions inside the map sheets, a polynomial (linear, quadratic or cubic) fit can be used for georeferencing the map sheets. Finding identical surface objects (as GCPs) on the historical map and on a present-day cartographic map lets us determine a transformation between raw image coordinates (x, y) and projected coordinates (Easting, Northing; E, N). This means that several GCPs should be found for every map sheet (for linear, quadratic or cubic transformations at least 3, 5 or 10, respectively), and every map sheet transformed individually to a present-day coordinate system using these GCPs. The disadvantage of this method is that after the transformation the individual transformed map sheets no longer necessarily fit together properly. Reversing the order of the procedure does not help either: if we build the mosaic first (e.g., graphically) and apply the polynomial fit to this mosaic afterwards, we still cannot reduce the error caused by the internal inaccuracy of the map sheets. We can overcome this problem by calculating the parameters of the polynomial fit with constraints (Mikhail, 1976). The constraint is that the common edge of two neighbouring map sheets should transform identically, i.e., the right edge of the left image and the left edge of the right image should fit together after the transformation. This condition should hold for all internal (not only vertical but also horizontal) edges of the mosaic. Constraints are expressed as relationships between parameters: the parameters of the polynomial transformation should fulfil not only the least-squares adjustment criteria but also the constraint that the transformed coordinates be identical along the image edges. (In the example above, for image points in the rightmost column of the left image the transformed coordinates should be the same as for the image points in the leftmost column of the right image, and these transformed coordinates may depend on the line-number image coordinate of the raster point.) The normal equation system can be solved with Lagrange multipliers. The resulting set of parameters for all map sheets should then be applied to transform the images. This parameter set cannot be applied directly in GIS software for the transformation.
The simplest solution applying this parameters is ‘simulating' GCPs for every image, and applying these simulated GCPs for the georeferencing of the individual map sheets. This method is applied on a set of map-sheets of the First military Survey of the Habsburg Empire with acceptable results. Reference: Mikhail, E. M.: Observations and Least Squares. IEP—A Dun-Donnelley Publisher, New York, 1976. 497 pp.
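
    For illustration, a minimal numpy sketch of such a constrained least-squares adjustment (the bordered normal-equation system solved with Lagrange multipliers); the function and the affine-only edge constraint are our simplification, not the author's code:

    ```python
    import numpy as np

    def constrained_fit(A, b, C, d):
        """Minimize ||A p - b||^2 subject to C p = d via Lagrange multipliers.

        Bordered (KKT) normal-equation system:
            [ 2 A^T A   C^T ] [ p   ]   [ 2 A^T b ]
            [   C        0  ] [ lam ] = [    d    ]
        """
        n, m = A.shape[1], C.shape[0]
        K = np.block([[2.0 * A.T @ A, C.T],
                      [C, np.zeros((m, m))]])
        rhs = np.concatenate([2.0 * A.T @ b, d])
        return np.linalg.solve(K, rhs)[:n]

    # Toy setup for two neighbouring sheets, Easting only: E = a0 + a1*x + a2*y.
    # p stacks both sheets' parameters; A is block-diagonal in each sheet's GCP
    # observations; each row of C is the basis [1, x, y] at a sampled edge point
    # of the left sheet minus the basis at the matching point of the right sheet,
    # with d = 0, forcing identical transformed coordinates along the common edge.
    basis = lambda x, y: np.array([1.0, x, y])
    A = np.zeros((8, 6)); b = np.zeros(8)
    gcps_left = [(10, 20, 500.0), (90, 30, 580.0), (50, 80, 540.0), (95, 85, 585.0)]
    gcps_right = [(5, 25, 585.0), (80, 35, 660.0), (40, 75, 620.0), (90, 90, 670.0)]
    for i, (x, y, E) in enumerate(gcps_left):
        A[i, :3] = basis(x, y); b[i] = E
    for i, (x, y, E) in enumerate(gcps_right):
        A[4 + i, 3:] = basis(x, y); b[4 + i] = E
    # the sheets share the edge x=100 (left) == x=0 (right), sampled at three rows
    C = np.array([np.concatenate([basis(100, y), -basis(0, y)]) for y in (0, 50, 100)])
    p = constrained_fit(A, b, C, np.zeros(3))
    ```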

  6. Mapping disease at an approximated individual level using aggregate data: a case study of mapping New Hampshire birth defects.

    PubMed

    Shi, Xun; Miller, Stephanie; Mwenda, Kevin; Onda, Akikazu; Reese, Judy; Onega, Tracy; Gui, Jiang; Karagas, Margret; Demidenko, Eugene; Moeschler, John

    2013-09-06

    Limited by data availability, most disease maps in the literature are for relatively large and subjectively-defined areal units, which are subject to problems associated with polygon maps. High resolution maps based on objective spatial units are needed to more precisely detect associations between disease and environmental factors. We propose to use a Restricted and Controlled Monte Carlo (RCMC) process to disaggregate polygon-level location data to achieve mapping aggregate data at an approximated individual level. RCMC assigns a random point location to a polygon-level location, in which the randomization is restricted by the polygon and controlled by the background (e.g., population at risk). RCMC allows analytical processes designed for individual data to be applied, and generates high-resolution raster maps. We applied RCMC to the town-level birth defect data for New Hampshire and generated raster maps at the resolution of 100 m. Besides the map of significance of birth defect risk represented by p-value, the output also includes a map of spatial uncertainty and a map of hot spots. RCMC is an effective method to disaggregate aggregate data. An RCMC-based disease mapping maximizes the use of available spatial information, and explicitly estimates the spatial uncertainty resulting from aggregation.
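
    As a toy illustration of the restricted-and-controlled randomization step (our sketch, not the authors' implementation; `density` and `dmax` are hypothetical inputs standing in for the population-at-risk background):

    ```python
    import numpy as np
    from shapely.geometry import Point, Polygon

    def rcmc_point(polygon, density, dmax, rng, max_tries=100_000):
        """Draw one random location for a polygon-level case: restricted to the
        reporting polygon, controlled (weighted) by the background density."""
        minx, miny, maxx, maxy = polygon.bounds
        for _ in range(max_tries):
            x, y = rng.uniform(minx, maxx), rng.uniform(miny, maxy)
            if not polygon.contains(Point(x, y)):
                continue                   # restriction: stay inside the town polygon
            if rng.uniform() < density(x, y) / dmax:
                return x, y                # control: accept in proportion to population
        raise RuntimeError("no point accepted; check density/dmax")

    rng = np.random.default_rng(7)
    town = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
    density = lambda x, y: 1.0 + 9.0 * np.exp(-((x - 2) ** 2 + (y - 2) ** 2) / 4.0)
    cases = [rcmc_point(town, density, dmax=10.0, rng=rng) for _ in range(100)]
    ```

    Repeating this assignment over many Monte Carlo runs and rasterizing the accepted points is what yields high-resolution maps of the kind described above.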

  7. A Generic Framework of Performance Measurement in Networked Enterprises

    NASA Astrophysics Data System (ADS)

    Kim, Duk-Hyun; Kim, Cheolhan

    Performance measurement (PM) is essential for managing networked enterprises (NEs) because it greatly affects the effectiveness of collaboration among members of an NE. PM in an NE requires somewhat different approaches from PM in a single enterprise because of the heterogeneity, dynamism, and complexity of NEs. This paper introduces a generic framework of PM in NE (we call it NEPM) based on the Balanced Scorecard (BSC) approach. In NEPM, key performance indicators (KPIs) and cause-and-effect relationships among them are defined in a generic strategy map. NEPM can be applied to various types of NEs after specializing the KPIs and the relationships among them. The effectiveness of NEPM is shown through a case study of several Korean NEs.

  8. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracy are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and is thus attractive for remote sensing image analysis.

  9. An Active Learning Activity to Reinforce the Design Components of the Corticosteroids

    PubMed Central

    Mandela, Prashant

    2018-01-01

    Despite the popularity of active learning applications over the past few decades, few activities have been reported for the field of medicinal chemistry. The purpose of this study is to report a new active learning activity, describe participant contributions, and examine participant performance on the assessment questions mapped to the objective covered by the activity. In this particular activity, students are asked to design two novel corticosteroids as a group (6–8 students per group) based on the design characteristics of marketed corticosteroids covered in lecture coupled with their pharmaceutics knowledge from the previous semester and then defend their design to the class through an interactive presentation model. Although class performance on the objective mapped to this material on the assessment did not reach statistical significance, use of this activity has allowed fruitful discussion of misunderstood concepts and facilitated multiple changes to the lecture presentation. As pharmacy schools continue to emphasize alternative learning pedagogies, publication of previously implemented activities demonstrating their use will help others apply similar methodologies. PMID:29401733

  10. An Active Learning Activity to Reinforce the Design Components of the Corticosteroids.

    PubMed

    Slauson, Stephen R; Mandela, Prashant

    2018-02-05

    Despite the popularity of active learning applications over the past few decades, few activities have been reported for the field of medicinal chemistry. The purpose of this study is to report a new active learning activity, describe participant contributions, and examine participant performance on the assessment questions mapped to the objective covered by the activity. In this particular activity, students are asked to design two novel corticosteroids as a group (6-8 students per group) based on the design characteristics of marketed corticosteroids covered in lecture coupled with their pharmaceutics knowledge from the previous semester and then defend their design to the class through an interactive presentation model. Although class performance on the objective mapped to this material on the assessment did not reach statistical significance, use of this activity has allowed fruitful discussion of misunderstood concepts and facilitated multiple changes to the lecture presentation. As pharmacy schools continue to emphasize alternative learning pedagogies, publication of previously implemented activities demonstrating their use will help others apply similar methodologies.

  11. Residual strain mapping of Roman styli from Iulia Concordia, Italy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvemini, Filomena, E-mail: floriana.salvemini@fi.isc.cnr.it; Università degli Studi di Firenze, Dipartimento di Scienze della Terra; Grazzi, Francesco

    Iulia Concordia is an important Roman settlement known for the production of iron objects and weapons during the Roman Empire. A huge number of well-preserved styli were found in the past century in the bed of an old channel. In order to shed light on the production processes used by the Romans for stylus manufacturing, a neutron diffraction residual strain analysis was performed on the POLDI materials science diffractometer at the Paul Scherrer Institut in Switzerland. Here, we present results from our investigation of 11 samples, allowing us to define, in a non-invasive way, the residual strain map related to the ancient Roman working techniques. - Highlights: • We examined 11 Roman styli from the settlement of Iulia Concordia, Italy. • We performed a neutron diffraction residual strain analysis on POLDI at PSI (CH). • We identified the production processes used by the Romans for stylus manufacturing. • We clarified the manner and direction of working applied for different classes of styli.

  12. Follow-up of solar lentigo depigmentation with a retinaldehyde-based cream by clinical evaluation and calibrated colour imaging.

    PubMed

    Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G

    2015-05-01

    To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women randomly applied RA-cream to lentigines of one hand and a control cream to the other, once daily for 3 months. A specific method enabling a reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed using clinical evaluation by the Physician Global Assessment score and image analysis. Luminance determination on the numeric images was performed either on the basis of five independent experts' consensus borders or by probability map analysis via an algorithm automatically detecting the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after a 3-month treatment with RA-cream, in agreement with the single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability map analysis is a fast and precise method to follow lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Gender differences in working memory networks: A BrainMap meta-analysis

    PubMed Central

    Hill, Ashley C.; Laird, Angela R.; Robinson, Jennifer L.

    2014-01-01

    Gender differences in psychological processes have been of great interest in a variety of fields. While the majority of research in this area has focused on specific differences in relation to test performance, this study sought to determine the underlying neurofunctional differences observed during working memory, a pivotal cognitive process shown to be predictive of academic achievement and intelligence. Using the BrainMap database, we performed a meta-analysis and applied activation likelihood estimation to our search set. Our results demonstrate consistent working memory networks across genders, but also provide evidence for gender-specific networks whereby females consistently activate more limbic (e.g., amygdala and hippocampus) and prefrontal structures (e.g., right inferior frontal gyrus), and males activate a distributed network inclusive of more parietal regions. These data provide a framework for future investigation using functional or effective connectivity methods to elucidate the underpinnings of gender differences in neural network recruitment during working memory tasks. PMID:25042764

  14. Gender differences in working memory networks: a BrainMap meta-analysis.

    PubMed

    Hill, Ashley C; Laird, Angela R; Robinson, Jennifer L

    2014-10-01

    Gender differences in psychological processes have been of great interest in a variety of fields. While the majority of research in this area has focused on specific differences in relation to test performance, this study sought to determine the underlying neurofunctional differences observed during working memory, a pivotal cognitive process shown to be predictive of academic achievement and intelligence. Using the BrainMap database, we performed a meta-analysis and applied activation likelihood estimation to our search set. Our results demonstrate consistent working memory networks across genders, but also provide evidence for gender-specific networks whereby females consistently activate more limbic (e.g., amygdala and hippocampus) and prefrontal structures (e.g., right inferior frontal gyrus), and males activate a distributed network inclusive of more parietal regions. These data provide a framework for future investigations using functional or effective connectivity methods to elucidate the underpinnings of gender differences in neural network recruitment during working memory tasks. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Alternative Fuels Data Center: Maps and Data

    Science.gov Websites

    BioFuels Atlas is an interactive map for comparing biomass feedstocks and biofuels by location. This tool helps users select from and apply biomass data layers to a map, as well as query and download data. Companion map resources include biodiesel fueling stations and E85 fueling station locations by state.

  16. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here in a normal-mode framework. In addition to imaging, the extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented along with the feature extraction and target classification via the RVM.

  17. Unsupervised Domain Adaptation with Multiple Acoustic Models

    DTIC Science & Technology

    2010-12-01

    Discriminative MAP Adaptation. Standard ML-MAP has been extended to incorporate discriminative training criteria such as MMI and MPE [10]. Discriminative MAP ... smoothing variable I. For example, the MMI-MAP mean is given by μ_jm^(mmi-map) = [θ_jm^num(O) − θ_jm^den(O) + D_jm μ̂_jm + I μ_jm^(ml-map)] / [γ_jm^num − γ_jm^den + D_jm + I], where the γ terms are the numerator and denominator occupancies from MMI training, and D_jm is the Gaussian-dependent parameter for the extended Baum-Welch (EBW) algorithm. MMI-MAP has been successfully applied in ...

  18. Mapping Venus: Modeling the Magellan Mission.

    ERIC Educational Resources Information Center

    Richardson, Doug

    1997-01-01

    Provides details of an activity designed to help students understand the relationship between astronomy and geology. Applies concepts of space research and map-making technology to the construction of a topographic map of a simulated section of Venus. (DDR)

  19. A low-frequency near-field interferometric-TOA 3-D Lightning Mapping Array

    NASA Astrophysics Data System (ADS)

    Lyu, Fanchao; Cummer, Steven A.; Solanki, Rahulkumar; Weinert, Joel; McTague, Lindsay; Katko, Alex; Barrett, John; Zigoneanu, Lucian; Xie, Yangbo; Wang, Wenqi

    2014-11-01

    We report on the development of an easily deployable LF near-field interferometric-time of arrival (TOA) 3-D Lightning Mapping Array applied to imaging of entire lightning flashes. An interferometric cross-correlation technique is applied in our system to compute windowed two-sensor time differences with submicrosecond time resolution before TOA is used for source location. Compared to previously reported LF lightning location systems, our system captures many more LF sources. This is due mainly to the improved mapping of continuous lightning processes by using this type of hybrid interferometry/TOA processing method. We show with five station measurements that the array detects and maps different lightning processes, such as stepped and dart leaders, during both in-cloud and cloud-to-ground flashes. Lightning images mapped by our LF system are remarkably similar to those created by VHF mapping systems, which may suggest some special links between LF and VHF emission during lightning processes.
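
    A minimal sketch of the windowed two-sensor time-difference step (cross-correlation with parabolic sub-sample peak interpolation); this is our illustration of the general technique, not the authors' processing code:

    ```python
    import numpy as np

    def window_tdoa(s1, s2, fs):
        """Estimate the arrival-time difference between two sensors for one
        window of LF waveform data, with sub-sample peak interpolation."""
        c = np.correlate(s1 - s1.mean(), s2 - s2.mean(), mode="full")
        k = int(np.argmax(c))
        # parabolic interpolation around the correlation peak
        if 0 < k < len(c) - 1:
            denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
            delta = 0.5 * (c[k - 1] - c[k + 1]) / denom if denom != 0 else 0.0
        else:
            delta = 0.0
        lag = (k - (len(s2) - 1)) + delta   # samples by which s1 lags s2
        return lag / fs                     # time difference in seconds
    ```

    Feeding such per-window time differences from multiple sensor pairs into a TOA solver is what locates each source in 3-D.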

  20. Human cDNA mapping using fluorescence in situ hybridization. Final progress report, April 1, 1994--July 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korenberg, J.R.

    The ultimate goal of this research is to generate and apply novel technologies to speed completion and integration of the human genome map and sequence with biomedical problems. To do this, techniques were developed and genome-wide resources generated. This includes a genome-wide Mapped and Integrated BAC/PAC Resource that has been used for gene finding, map completion and anchoring, breakpoint definition and sequencing. In the last period of the grant, the Human Mapped BAC/PAC Resource was also applied to determine regions of human variation and to develop a novel paradigm of primate evolution through to humans. Further, in order to more rapidly evaluate animal models of human disease, a BAC map of the mouse was generated in collaboration with the MTI Genome Center, Dr. Bruce Birren.

  1. Mapping urban environmental noise: a land use regression method.

    PubMed

    Xie, Dan; Liu, Yi; Chen, Jining

    2011-09-01

    Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and the corresponding noise change. Therefore, these conventional methods can hardly forecast urban noise for a given development layout. This paper, for the first time, introduces a land use regression method, which has been applied to simulating urban air quality for a decade, to construct an urban noise model (LUNOS) in Dalian Municipality, Northeast China. The LUNOS model describes noise as a dependent variable of the surrounding land-use areas via a regressive function. The results suggest that a linear model performs better in fitting the monitoring data, and that there is no significant difference in the LUNOS outputs when applied at different spatial scales. As LUNOS facilitates a better understanding of the association between land use and urban environmental noise in comparison to conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and for aiding smart decision-making.
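
    In its simplest linear form, a land use regression of this kind reduces to regressing measured noise levels on the surrounding land-use areas; a hedged sketch with synthetic data (all coefficients and class names are invented for illustration):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_sites = 40
    # X: per monitoring site, hypothetical land-use areas (m^2) within a buffer:
    # road, industrial, residential, green space
    X = rng.uniform(0, 1e4, size=(n_sites, 4))
    coef_true = np.array([2.0e-3, 1.2e-3, 0.5e-3, -0.8e-3])
    y = 50 + X @ coef_true + rng.normal(0, 1.0, n_sites)   # synthetic noise (dBA)

    lunos = LinearRegression().fit(X, y)                   # LUNOS-style linear model
    print(lunos.intercept_, lunos.coef_)
    # forecast noise for a planned layout (new land-use areas around a location)
    print(lunos.predict([[8e3, 0.0, 3e3, 5e3]]))
    ```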

  2. Mapping species distribution of Canarian Monteverde forest by field spectroradiometry and satellite imagery

    NASA Astrophysics Data System (ADS)

    Martín-Luis, Antonio; Arbelo, Manuel; Hernández-Leal, Pedro; Arbelo-Bayó, Manuel

    2016-10-01

    Reliable and updated maps of vegetation in protected natural areas are essential for proper management and conservation, and remote sensing is a valid tool for this purpose. In this study, a methodology based on a WorldView-2 (WV-2) satellite image and in situ spectral signature measurements was applied to map the Canarian Monteverde ecosystem located in the north of Tenerife Island (Canary Islands, Spain). Due to the high spectral similarity of vegetation species in the study zone, a Multiple Endmember Spectral Mixture Analysis (MESMA) was performed. MESMA determines the fractional cover of different components within one pixel and allows the endmembers to vary from pixel to pixel. Two libraries of endmembers were collected for the most abundant species in the test area. The first library was collected from in situ spectral signatures measured with an ASD spectroradiometer during a field campaign in June 2015. The second library was obtained from pure pixels identified in the satellite image for the same species. The accuracy of the mapping process was assessed from a set of independent validation plots. The overall accuracy for the ASD-based method was 60.51%, compared to the 86.67% reached by the WV-2-based mapping. The results suggest the possibility of using WV-2 images for monitoring and regularly updating maps of the Monteverde forest on the island of Tenerife.
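
    The core of MESMA is a per-pixel search over endmember combinations, keeping the combination whose unmixing leaves the smallest residual; a simplified non-negative least-squares sketch (the full method additionally enforces sum-to-one and fraction bounds):

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.optimize import nnls

    def mesma_pixel(pixel, library, n_members=2):
        """Try every `n_members`-combination of library endmembers (columns of
        `library`, shape (n_bands, n_spectra)) and keep the best-fitting one."""
        best = None
        for idx in combinations(range(library.shape[1]), n_members):
            fractions, resid = nnls(library[:, list(idx)], pixel)
            if best is None or resid < best[2]:
                best = (idx, fractions, resid)
        idx, fractions, resid = best
        s = fractions.sum()
        # normalize to fractional covers (a simplification of the constraint)
        return idx, (fractions / s if s > 0 else fractions), resid
    ```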

  3. Semisupervised GDTW kernel-based fuzzy c-means algorithm for mapping vegetation dynamics in mining region using normalized difference vegetation index time series

    NASA Astrophysics Data System (ADS)

    Jia, Duo; Wang, Cangjiao; Lei, Shaogang

    2018-01-01

    Mapping vegetation dynamic types in mining areas is significant for revealing the mechanisms of environmental damage and for guiding ecological construction. Dynamic types of vegetation can be identified by applying interannual normalized difference vegetation index (NDVI) time series. However, phase differences and time shifts in interannual time series decrease mapping accuracy in mining regions. To overcome these problems and to increase the accuracy of mapping vegetation dynamics, an interannual Landsat time series for optimum vegetation growing status was constructed first by using the enhanced spatial and temporal adaptive reflectance fusion model algorithm. We then proposed a Markov random field optimized semisupervised Gaussian dynamic time warping kernel-based fuzzy c-means (FCM) cluster algorithm for interannual NDVI time series to map dynamic vegetation types in mining regions. The proposed algorithm has been tested in the Shengli mining region and Shendong mining region, which are typical representatives of China's open-pit and underground mining regions, respectively. Experiments show that the proposed algorithm can solve the problems of phase differences and time shifts to achieve better performance when mapping vegetation dynamic types. The overall accuracies for the Shengli and Shendong mining regions were 93.32% and 89.60%, respectively, with improvements of 7.32% and 25.84% when compared with the original semisupervised FCM algorithm.

  4. Page layout analysis and classification for complex scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2011-09-01

    A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. First, a text detection module based on wavelet analysis and the Run Length Encoding (RLE) technique is employed. Local and global energy maps in the high-frequency bands of the wavelet domain are generated and used as initial text maps; further analysis using RLE yields a final text map. The second module detects image/photo and pictorial regions in the input document. A block-based classifier using basis-vector projections is employed to identify photo candidate regions. A final photo map is then obtained by applying a probabilistic model based on Markov random field (MRF) maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs very effectively on a variety of simple and complex scanned document types obtained from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content-based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.

  5. Localization and Mapping Using a Non-Central Catadioptric Camera System

    NASA Astrophysics Data System (ADS)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in the navigation and mapping of robotic platforms owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be considered one monolithic system. The mathematical model for localizing the system was derived using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.

  6. Local adaptive tone mapping for video enhancement

    NASA Astrophysics Data System (ADS)

    Lachine, Vladimir; Dai, Min

    2015-03-01

    As new technologies like High Dynamic Range cameras, AMOLED and high-resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality for mobile devices. Tone Mapping (TM) is a popular technique to enhance visual quality. However, the traditional implementation of the Tone Mapping procedure is limited to pixel value-to-value mapping, and its performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency-based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low-pass-filtered (LPF) and high-pass-filtered (HPF) bands. A Tone Mapping (TM) function is applied to the LPF band to improve the global contrast/brightness, and the HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. The colorfulness of the original image may be preserved or enhanced by correcting the chroma components by means of a saturation function. Localized content adaptation is further improved by dividing an image into a set of non-overlapping regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. The corresponding hardware circuit may be integrated in a camera, video or display pipeline with a minimal hardware budget.
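
    A compact sketch of the LPF/HPF split described above (parameter values and the simple gamma curve are illustrative; the paper's coring and saturation functions are more elaborate):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_tone_map(y, sigma=8.0, gamma=0.5, coring=0.02):
        """Spatial-frequency tone mapping: apply a global TM curve to the
        low-pass band, core the high-pass band, and add it back to keep
        local contrast. `y` is an intensity image scaled to [0, 1]."""
        lpf = gaussian_filter(y, sigma)
        hpf = y - lpf
        # coring: suppress tiny high-pass amplitudes to avoid boosting noise
        hpf = np.where(np.abs(hpf) < coring, 0.0, hpf)
        mapped = lpf ** gamma          # global brightness/contrast curve on LPF band
        return np.clip(mapped + hpf, 0.0, 1.0)
    ```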

  7. Symplectic maps and chromatic optics in particle accelerators

    DOE PAGES

    Cai, Yunhai

    2015-07-06

    Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to the third order of the momentum deviation.

  8. Building phenotype networks to improve QTL detection: a comparative analysis of fatty acid and fat traits in pigs.

    PubMed

    Yang, B; Navarro, N; Noguera, J L; Muñoz, M; Guo, T F; Yang, K X; Ma, J W; Folch, J M; Huang, L S; Pérez-Enciso, M

    2011-10-01

    Models in QTL mapping can be improved by considering all potential variables, i.e. using the remaining traits other than the trait under study as potential predictors. QTL mapping is often conducted by correcting for a few fixed effects or covariates (e.g. sex, age), although many traits with potential causal relationships between them are recorded. In this work, we evaluate by simulation several procedures to identify optimum models in QTL scans: forward selection, the undirected dependency graph and the QTL-directed dependency graph (QDG). The latter, QDG, performed better in terms of power and false discovery rate and was applied to fatty acid (FA) composition and fat deposition traits in two pig F2 crosses from China and Spain. Compared with typical QTL mapping, the QDG approach revealed several new QTL. By contrast, several FA QTL on chromosome 4 (e.g. palmitic, C16:0; stearic, C18:0) detected by typical mapping vanished after adjusting for phenotypic covariates in QDG mapping. This suggests that the QTL detected by typical mapping could be indirect. When a QTL is supported by both approaches, there is increased confidence that the QTL has a primary effect on the corresponding trait. An example is a QTL for C16:1 on chromosome 8. In conclusion, mapping QTL based on causal phenotypic networks can increase power and help to formulate more biologically sound hypotheses on the genetic architecture of complex traits. © 2011 Blackwell Verlag GmbH.

  9. PSF mapping-based correction of eddy-current-induced distortions in diffusion-weighted echo-planar imaging.

    PubMed

    In, Myung-Ho; Posnansky, Oleg; Speck, Oliver

    2016-05-01

    To accurately correct diffusion-encoding direction-dependent eddy-current-induced geometric distortions in diffusion-weighted echo-planar imaging (DW-EPI) and to minimize the calibration time at 7 Tesla (T). A point spread function (PSF) mapping-based eddy-current calibration method is presented to determine eddy-current-induced geometric distortions, including nonlinear eddy-current effects within the readout acquisition window. To evaluate the temporal stability of eddy-current maps, calibration was performed four times within 3 months. Furthermore, spatial variations of measured eddy-current maps versus their linear superposition were investigated to enable correction in DW-EPIs with arbitrary diffusion directions without direct calibration. For comparison, an image-based eddy-current correction method was additionally applied. Finally, this method was combined with a PSF-based susceptibility-induced distortion correction approach proposed previously to correct both susceptibility- and eddy-current-induced distortions in DW-EPIs. Very fast eddy-current calibration in a three-dimensional volume is possible with the proposed method. The measured eddy-current maps are very stable over time, and very similar maps can be obtained by linear superposition of principal-axes eddy-current maps. High-resolution in vivo brain results demonstrate that the proposed method allows more efficient eddy-current correction than the image-based method. The combination of both PSF-based approaches yields distortion-free images, which permit reliable analysis in diffusion tensor imaging applications at 7T. © 2015 Wiley Periodicals, Inc.

  10. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameter that controls earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. The resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  11. Target-specific digital soil mapping supporting terroir mapping in Tokaj Wine Region, Hungary

    NASA Astrophysics Data System (ADS)

    Takács, Katalin; Szabó, József; Laborczi, Annamária; Szatmári, Gábor; László, Péter; Koós, Sándor; Bakacsi, Zsófia; Pásztor, László

    2016-04-01

    Tokaj Wine Region - located in Northeast Hungary, at Hegyalja, in the Tokaj Mountains - is a historical region for botrytized dessert wine making. Very recently, sustainable quality wine production in the region was targeted, which requires a detailed, terroir-based characterization of viticultural land and a survey of the state of the vineyards. A terroir is a homogeneous area shaped by both environmental and cultural factors that influence grape and wine quality. Soil plays a dominant role in determining viticultural potential and in terroir delineation. According to viticultural experts, the most relevant soil properties are drainage, water holding capacity, soil depth and pH. Not all of these soil characteristics can be directly measured; therefore a synthesis of observed soil properties is needed to satisfy the requirements of terroir mapping. The sampling strategy was designed to be representative of the combinations of basic environmental parameters (slope, aspect and geology) which determine the main soil properties of the vineyards. The field survey was carried out in two steps. First, soil samples were collected from 200 sites to obtain a general view of the pedology of the area. In the second stage, a further 650 samples were collected; this sampling was designed with a spatial annealing technique, taking into consideration the results of the preliminary survey and the local characteristics of the vineyards. The data collection covered soil type, soil depth, parent material, rate of erosion, organic matter content and further physical and chemical soil properties which support the inference of the proper soil parameters. In the framework of the recent project, 33 primary and secondary soil property, soil class and soil function maps were compiled. A subset of the resulting maps supports the demands of the Hungarian standard viticultural potential assessment, while the majority of the maps are intended to be applied to terroir delineation. The spatial extension was performed by two different methods which are widely applied in digital soil mapping: regression kriging was used for creating continuous soil property maps, while category-type soil maps were compiled with the classification trees method. Accuracy assessment was also provided for all of the soil map products. Our poster will present a summary of the project workflow - the design of the sampling strategy, the field survey and the digital soil mapping process - and some examples of the resulting soil property maps, indicating their applicability in terroir delineation. Acknowledgement: The authors are grateful to the Tokaj Kereskedöház Ltd., which has been supporting the project for the survey of the state of vineyards. Digital soil mapping was partly supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).

  12. The mapping of yeast's G-protein coupled receptor with an atomic force microscope

    NASA Astrophysics Data System (ADS)

    Takenaka, Musashi; Miyachi, Yusuke; Ishii, Jun; Ogino, Chiaki; Kondo, Akihiko

    2015-03-01

    An atomic force microscope (AFM) can measure the adhesion force between a sample and a cantilever while simultaneously applying a rupture force during imaging of the sample. An AFM should therefore be useful for targeting specific proteins on a cell surface. The present study proposes the use of an AFM to measure the adhesion force between targeted receptors and their ligands, and to map the targeted receptors. In this study, Ste2p, one of the G protein-coupled receptors (GPCRs), was chosen as the target receptor. The specific force between Ste2p on a yeast cell surface and a cantilever modified with its ligand, α-factor, was measured and found to be approximately 250 pN. In addition, through continuous measurement of the cell surface, a mapping of the receptors on the cell surface could be performed, which indicated differences in Ste2p expression levels. Therefore, the proposed AFM system is suitable for accurate cell diagnosis.

  13. Improving deep convolutional neural networks with mixed maxout units

    PubMed Central

    Liu, Fu-xian; Li, Long-yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737
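
    Our reading of the mixout computation, as a minimal numpy sketch for the k feature mappings at one position (the Bernoulli probability p is an assumed hyperparameter):

    ```python
    import numpy as np

    def mixout(z, p=0.5, rng=None):
        """Mixout for one spatial position: `z` has shape (k,), holding the k
        affine feature mappings that a maxout unit would reduce with max().
        We softmax them into exponential probabilities, take the expected
        value, and mix max/expected values with a Bernoulli draw."""
        rng = rng or np.random.default_rng()
        e = np.exp(z - z.max())
        probs = e / e.sum()            # exponential probabilities of the mappings
        expected = float(probs @ z)    # expected value under those probabilities
        # Bernoulli balance between the maximal and the expected feature value
        return float(z.max()) if rng.uniform() < p else expected
    ```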

  14. Automatic co-registration of 3D multi-sensor point clouds

    NASA Astrophysics Data System (ADS)

    Persad, Ravi Ancil; Armenakis, Costas

    2017-08-01

    We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.

  15. Experimental setup for evaluating an adaptive user interface for teleoperation control

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.

    2017-05-01

    A vital part of human interaction with a machine is the control interface, which single-handedly can define user satisfaction and the efficiency of performing a task. This paper elaborates the implementation of an experimental setup to study an adaptive algorithm that can help the user better tele-operate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and the associated results that were used to validate the adaptive interface for a differential drive robot with two different input devices: a joystick and a Myo gesture control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.

  16. Cosmic Microwave Background Mapmaking with a Messenger Field

    NASA Astrophysics Data System (ADS)

    Huffenberger, Kevin M.; Næss, Sigurd K.

    2018-01-01

    We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
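
    A schematic of one messenger iteration under simplifying assumptions (a single time-ordered data vector, noise diagonal in the Fourier domain, nearest-pixel pointing, and the cooling parameter fixed at 1); this follows the published structure of the method rather than the authors' pipeline code:

    ```python
    import numpy as np

    def messenger_map(d, pix, npix, noise_ps, n_iter=30):
        """Minimum-variance mapmaking with a messenger field t of covariance
        T = tau*I, tau = minimum noise power. Each iteration mixes the data
        and the current map in Fourier space, then bins the messenger into
        pixels. `noise_ps`: noise power per rfft mode (length d.size//2 + 1)."""
        tau = noise_ps.min()
        w_d = tau / noise_ps                   # Fourier-domain weight on the data
        hits = np.bincount(pix, minlength=npix).astype(float)
        m = np.zeros(npix)
        for _ in range(n_iter):
            proj = m[pix]                      # A m: the map read out along the scan
            t_f = w_d * np.fft.rfft(d) + (1.0 - w_d) * np.fft.rfft(proj)
            t = np.fft.irfft(t_f, n=d.size)    # messenger field in the time domain
            m = np.bincount(pix, weights=t, minlength=npix) / np.maximum(hits, 1.0)
        return m
    ```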

  17. Multiple-component Decomposition from Millimeter Single-channel Data

    NASA Astrophysics Data System (ADS)

    Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros

    2018-03-01

    We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made by single-channel instruments. In order to make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources; we then apply the same methodology to the Aztronomical Thermal Emission Camera/ASTE survey of the Great Observatories Origins Deep Survey–South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving the signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one of them is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth is an extended emission that can be interpreted as the confusion background of faint sources.

  18. Polarization-Sensitive Hyperspectral Imaging in vivo: A Multimode Dermoscope for Skin Analysis

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf B.; Durkin, Anthony J.; Chave, Robert; Lindsley, Erik H.; Farkas, Daniel L.

    2014-05-01

    Attempts to understand the changes in the structure and physiology of human skin abnormalities by non-invasive optical imaging are aided by spectroscopic methods that quantify, at the molecular level, variations in tissue oxygenation and melanin distribution. However, current commercial and research systems to map hemoglobin and melanin do not correlate well with pathology for pigmented lesions or darker skin. We developed a multimode dermoscope that combines polarization and hyperspectral imaging with an efficient analytical model to map the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models. For this system's proof of concept, human skin measurements on melanocytic nevus, vitiligo, and venous occlusion conditions were performed in volunteers. The resulting molecular distribution maps matched physiological and anatomical expectations, confirming a technologic approach that can be applied to next generation dermoscopes and having biological plausibility that is likely to appeal to dermatologists.

  19. A comparative assessment of GIS-based data mining models and a novel ensemble model in groundwater well potential mapping

    NASA Astrophysics Data System (ADS)

    Naghibi, Seyed Amir; Moghaddam, Davood Davoodi; Kalantar, Bahareh; Pradhan, Biswajeet; Kisi, Ozgur

    2017-05-01

    In recent years, the application of ensemble models has increased tremendously in various types of natural hazard assessment, such as landslides and floods. However, the application of this kind of robust model in groundwater potential mapping is relatively new. This study applied four data mining algorithms, including AdaBoost, Bagging, generalized additive model (GAM), and Naive Bayes (NB), to map groundwater potential. Then, a novel frequency ratio data mining ensemble model (FREM) was introduced and evaluated. For this purpose, eleven groundwater conditioning factors (GCFs), including altitude, slope aspect, slope angle, plan curvature, stream power index (SPI), river density, distance from rivers, topographic wetness index (TWI), land use, normalized difference vegetation index (NDVI), and lithology, were mapped. A total of 281 well locations with high potential were selected. Wells were randomly partitioned into two classes for training the models (70% or 197) and validating them (30% or 84). The AdaBoost, Bagging, GAM, and NB algorithms were employed to produce groundwater potential maps (GPMs), which were categorized into potential classes using the natural breaks classification scheme. In the next stage, a frequency ratio (FR) value was calculated for the output of each of the four aforementioned models; these were summed, and finally a GPM was produced using FREM. For validating the models, the area under the receiver operating characteristic (ROC) curve was calculated. The area under the ROC curve for the prediction dataset was 94.8, 93.5, 92.6, 92.0, and 84.4% for the FREM, Bagging, AdaBoost, GAM, and NB models, respectively. The results indicated that FREM had the best performance among all the models. The better performance of the FREM model can be related to the reduction of overfitting and possible errors. The other models, AdaBoost, Bagging, GAM, and NB, also produced acceptable performance in groundwater modelling. The GPMs produced in the current study may facilitate groundwater exploitation by determining high and very high groundwater potential zones.
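
    The frequency-ratio ensembling step reduces to a small computation: for each model's classed potential map, score every class by its wells-share over area-share ratio, then sum the scored maps; a hedged sketch (array names are ours):

    ```python
    import numpy as np

    def frequency_ratio(classed_map, well_mask):
        """FR per potential class: the share of wells falling in the class
        divided by the share of area the class occupies."""
        fr = np.zeros_like(classed_map, dtype=float)
        n_wells = well_mask.sum()
        for c in np.unique(classed_map):
            in_c = classed_map == c
            area_share = in_c.mean()
            well_share = well_mask[in_c].sum() / n_wells
            fr[in_c] = well_share / area_share
        return fr

    # FREM: sum the FR-transformed outputs of the four classifiers, e.g.
    # frem = sum(frequency_ratio(gpm, wells) for gpm in (ada, bag, gam, nb))
    ```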

  20. A Round-Efficient Authenticated Key Agreement Scheme Based on Extended Chaotic Maps for Group Cloud Meeting.

    PubMed

    Lin, Tsung-Hung; Tsung, Chen-Kun; Lee, Tian-Fu; Wang, Zeng-Bo

    2017-12-03

    Security is a critical issue for business purposes. For example, a cloud meeting must provide strong security to maintain communication privacy. Considering the cloud-meeting scenario, we apply an extended chaotic map to present a passwordless group authentication key agreement, termed Passwordless Group Authentication Key Agreement (PL-GAKA). PL-GAKA improves the computation efficiency of the simple group password-based authenticated key agreement (SGPAKE) proposed by Lee et al. in terms of computing the session key. Since the extended chaotic map has a security level equivalent to that of the Diffie-Hellman key exchange scheme applied by SGPAKE, the security of PL-GAKA is not sacrificed when improving the computation efficiency. Moreover, PL-GAKA is a passwordless scheme, so password maintenance is not necessary. Short-term authentication is considered, hence communication security is stronger than in other protocols because the session key is generated dynamically in each cloud meeting. In our analysis, we first prove that each meeting member can get the correct information during the meeting. We analyze common security issues for the proposed PL-GAKA in terms of session key security, mutual authentication, perfect forward security, and data integrity. Moreover, we also demonstrate that communication in PL-GAKA remains secure under replay attacks, impersonation attacks, privileged insider attacks, and stolen-verifier attacks. Finally, an overall comparison shows the performance of PL-GAKA relative to SGPAKE and related solutions.
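
    The key-agreement ingredient behind extended-chaotic-map schemes is the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x) = T_s(T_r(x)); a toy numeric demonstration (real-valued Chebyshev maps as shown here are insecure in practice, which is why "extended" variants over larger domains are used):

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def T(n, x):
        """Chebyshev polynomial T_n evaluated at x."""
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0
        return C.chebval(x, coeffs)

    x = 0.37            # public seed
    r, s = 11, 17       # the two parties' private keys
    # each party publishes T_key(x); both reach the same shared secret
    shared_rs = T(r, T(s, x))
    shared_sr = T(s, T(r, x))
    assert np.isclose(shared_rs, shared_sr) and np.isclose(shared_rs, T(r * s, x))
    ```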

  1. A new method for automated high-dimensional lesion segmentation evaluated in vascular injury and applied to the human occipital lobe.

    PubMed

    Mah, Yee-Haur; Jager, Rolf; Kennard, Christopher; Husain, Masud; Nachev, Parashkev

    2014-07-01

    Making robust inferences about the functional neuroanatomy of the brain is critically dependent on experimental techniques that examine the consequences of focal loss of brain function. Unfortunately, the use of the most comprehensive such technique, lesion-function mapping, is complicated by the need for time-consuming and subjective manual delineation of the lesions, greatly limiting the practicability of the approach. Here we exploit a recently-described general measure of statistical anomaly, zeta, to devise a fully-automated, high-dimensional algorithm for identifying the parameters of lesions within a brain image given a reference set of normal brain images. We proceed to evaluate such an algorithm in the context of diffusion-weighted imaging of the commonest type of lesion used in neuroanatomical research: ischaemic damage. Summary performance metrics exceed those previously published for diffusion-weighted imaging and approach the current gold standard, manual segmentation, sufficiently closely for fully-automated lesion-mapping studies to become a possibility. We apply the new method to 435 unselected images of patients with ischaemic stroke to derive a probabilistic map of the pattern of damage in lesions involving the occipital lobe, demonstrating the variation of anatomical resolvability of occipital areas so as to guide future lesion-function studies of the region. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Evaluation of multi-resolution satellite sensors for assessing water quality and bottom depth of Lake Garda.

    PubMed

    Giardino, Claudia; Bresciani, Mariano; Cazzaniga, Ilaria; Schenk, Karin; Rieger, Patrizia; Braga, Federica; Matta, Erica; Brando, Vittorio E

    2014-12-15

    In this study we evaluate the capabilities of three satellite sensors for assessing water composition and bottom depth in Lake Garda, Italy. A consistent physics-based processing chain was applied to Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat-8 Operational Land Imager (OLI) and RapidEye. Images gathered on 10 June 2014 were corrected for the atmospheric effects with the 6SV code. The computed remote sensing reflectance (Rrs) from MODIS and OLI were converted into water quality parameters by adopting a spectral inversion procedure based on a bio-optical model calibrated with optical properties of the lake. The same spectral inversion procedure was applied to RapidEye and to OLI data to map bottom depth. In situ measurements of Rrs and of concentrations of water quality parameters collected in five locations were used to evaluate the models. The bottom depth maps from OLI and RapidEye showed similar gradients up to 7 m (r = 0.72). The results indicate that: (1) the spatial and radiometric resolutions of OLI enabled mapping water constituents and bottom properties; (2) MODIS was appropriate for assessing water quality in the pelagic areas at a coarser spatial resolution; and (3) RapidEye had the capability to retrieve bottom depth at high spatial resolution. Future work should evaluate the performance of the three sensors in different bio-optical conditions.

  3. Vegetation mapping from high-resolution satellite images in the heterogeneous arid environments of Socotra Island (Yemen)

    NASA Astrophysics Data System (ADS)

    Malatesta, Luca; Attorre, Fabio; Altobelli, Alfredo; Adeeb, Ahmed; De Sanctis, Michele; Taleb, Nadim M.; Scholte, Paul T.; Vitale, Marcello

    2013-01-01

    Socotra Island (Yemen), a global biodiversity hotspot, is characterized by high geomorphological and biological diversity. In this study, we present a high-resolution vegetation map of the island based on combining vegetation analysis and classification with remote sensing. Two different image classification approaches were tested to assess the most accurate one in mapping the vegetation mosaic of Socotra. Spectral signatures of the vegetation classes were obtained through a Gaussian mixture distribution model, and a sequential maximum a posteriori (SMAP) classification was applied to account for the heterogeneity and the complex spatial pattern of the arid vegetation. This approach was compared to the traditional maximum likelihood (ML) classification. Satellite data were represented by a RapidEye image with 5 m pixel resolution and five spectral bands. Classified vegetation relevés were used to obtain the training and evaluation sets for the main plant communities. Postclassification sorting was performed to adjust the classification through various rule-based operations. Twenty-eight classes were mapped, and SMAP, with an accuracy of 87%, proved to be more effective than ML (accuracy: 66%). The resulting map will represent an important instrument for the elaboration of conservation strategies and the sustainable use of natural resources in the island.

  4. Direct Volume Rendering with Shading via Three-Dimensional Textures

    NASA Technical Reports Server (NTRS)

    VanGelder, Allen; Kim, Kwansik

    1996-01-01

    A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.

  5. Genetic Mapping of Quantitative Trait Loci Controlling Growth and Wood Quality Traits in Eucalyptus Grandis Using a Maternal Half-Sib Family and RAPD Markers

    PubMed Central

    Grattapaglia, D.; Bertolucci, FLG.; Penchel, R.; Sederoff, R. R.

    1996-01-01

    Quantitative trait loci (QTL) mapping of forest productivity traits was performed using an open pollinated half-sib family of Eucalyptus grandis. For volume growth, a sequential QTL mapping approach was applied using bulk segregant analysis (BSA), selective genotyping (SG) and cosegregation analysis (CSA). Despite the low heritability of this trait and the heterogeneous genetic background employed for mapping, BSA detected one putative QTL, and SG two of the three later found by CSA. The three putative QTL for volume growth were found to control 13.7% of the phenotypic variation, corresponding to an estimated 43.7% of the genetic variation. For wood specific gravity, five QTL were identified, controlling 24.7% of the phenotypic variation, corresponding to 49% of the genetic variation. Overlapping QTL for CBH, WSG and percentage dry weight of bark were observed. A significant case of digenic epistasis was found, involving unlinked QTL for volume. Our results demonstrate the applicability of the within-half-sib design for QTL mapping in forest trees and indicate the existence of major genes involved in the expression of economically important traits related to forest productivity in Eucalyptus grandis. These findings have important implications for marker-assisted tree breeding. PMID:8913761

  6. A VS30 map for California with geologic and topographic constraints

    USGS Publications Warehouse

    Thompson, Eric; Wald, David J.; Worden, Charles

    2014-01-01

    For many earthquake engineering applications, site response is estimated through empirical correlations with the time‐averaged shear‐wave velocity to 30 m depth (VS30). These applications therefore depend on the availability of either site‐specific VS30 measurements or VS30 maps at local, regional, and global scales. Because VS30 measurements are sparse, a proxy frequently is needed to estimate VS30 at unsampled locations. We present a new VS30 map for California, which accounts for observational constraints from multiple sources and spatial scales, such as geology, topography, and site‐specific VS30 measurements. We apply the geostatistical approach of regression kriging (RK) to combine these constraints for predicting VS30. For the VS30 trend, we start with geology‐based VS30 values and identify two distinct trends between topographic gradient and the residuals from the geology VS30 model. One trend applies to deep and fine Quaternary alluvium, whereas the second trend is slightly stronger and applies to Pleistocene sedimentary units. The RK framework ensures that the resulting map of California is locally refined to reflect the rapidly expanding database of VS30 measurements throughout California. We compare the accuracy of the new mapping method to a previously developed map of VS30 for California. We also illustrate the sensitivity of ground motions to the new VS30 map by comparing real and scenario ShakeMaps with VS30 values from our new map to those for existing VS30 maps.
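    The RK idea, a deterministic trend plus spatially correlated residuals, can be sketched as follows. This is an assumption-laden illustration, not the authors' implementation: GP regression with a Matern kernel stands in for ordinary kriging of the residuals, and the trend covariates (e.g., geology-based VS30 and topographic gradient) are hypothetical column layouts.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def regression_kriging(X_trend, coords, y, X_new, coords_new):
        """Predict y at new sites: linear trend + kriged residuals."""
        trend = LinearRegression().fit(X_trend, y)
        resid = y - trend.predict(X_trend)            # de-trended VS30
        gp = GaussianProcessRegressor(kernel=Matern(length_scale=10.0, nu=1.5),
                                      normalize_y=True).fit(coords, resid)
        return trend.predict(X_new) + gp.predict(coords_new)
    ```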

  7. APPS-IV Civil Works Data Extraction/Data Base Application Study

    DTIC Science & Technology

    1982-09-01

    APPS-IV Civil Works Data Extraction/Data Base Application Study, Final Report (Phase I), report number 900-0081, by Jonathan C. Howland, Autometric, Incorporated, under contract DAAK70-81-C-0261. The CAPIR system was applied to a flood damage potential study; the particular applications were structure mapping and land use interpretation.

  8. Integration of magnetometric, gpr and geoelectric measurements applied to the study of the new Viggiano archaeological site (Southern Italy).

    NASA Astrophysics Data System (ADS)

    Rizzo, E.; Chianese, D.; Lapenna, V.; Piscitelli, S.

    2003-04-01

    In the frame of a collaboration with the Archaeological Superintendence of the Basilicata Region (Southern Italy), the Geophysical Lab of IMAA-CNR planned a multidisciplinary investigation of the archaeological site of Viggiano, integrating magnetic mapping, ground penetrating radar (GPR) profiling and 3D electrical resistivity imaging. The archaeological site, located in the Agri Valley (Basilicata, Southern Italy), is an ancient structure developed in successive phases between the IV and III century B.C. Shovel tests had identified archaeological remnants in the western part of the area, and the archaeologists subsequently hypothesized the presence of buried structures in the eastern part as well, where we performed a geophysical survey. In particular, a magnetic map was acquired by means of a caesium vapour magnetometer (GEOMETRICS G-858) to find the external perimeter; more than 50 georadar profiles were collected with a SIR 2000 instrument to delineate the internal buried structures; and the electrical resistivity method was applied to estimate the depth of the buried structures. In agreement with the archaeological hypothesis, significant wall structures were identified in the eastern part. In conclusion, the integration of different geophysical techniques allowed us to obtain valuable information about the shape, dimension and depth of the eastern buried wall structures, contributing to the development of a new hypothesis about the history of the archaeological site of Viggiano.

  9. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  10. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation

    NASA Astrophysics Data System (ADS)

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. The test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded through chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted to the data recorded from the marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enables estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.

  11. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation.

    PubMed

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. The test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded through chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted to the data recorded from the marmosets. Further, a computer simulation confirmed the reliability of the algorithm. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enables estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
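    The flavor of the stochastic threshold estimation can be conveyed with a small maximum-likelihood fit: given (stimulus intensity, evoked/not evoked) pairs, fit a fixed-relative-slope cumulative-Gaussian response curve and take its midpoint as the threshold. A hypothetical sketch; the published procedure (a modified Awiszus-style hunting algorithm) additionally chooses each next stimulus intensity adaptively.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    def estimate_threshold(intensities, responses, spread=0.1):
        """MLE of the threshold from binary MEP outcomes (responses in {0,1})."""
        x = np.asarray(intensities, float)
        r = np.asarray(responses, float)

        def neg_log_lik(th):
            p = np.clip(norm.cdf(x, loc=th, scale=spread * th), 1e-9, 1 - 1e-9)
            return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

        res = minimize_scalar(neg_log_lik, bounds=(x.min(), x.max()),
                              method='bounded')
        return res.x
    ```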

  12. Hematocrit Measurement with R2* and Quantitative Susceptibility Mapping in Postmortem Brain.

    PubMed

    Walsh, A J; Sun, H; Emery, D J; Wilman, A H

    2018-05-24

    Noninvasive venous oxygenation quantification with MR imaging will improve the neurophysiologic investigation and the understanding of the pathophysiology in neurologic diseases. Available MR imaging methods are limited by sensitivity to flow and often require assumptions of the hematocrit level. In situ postmortem imaging enables evaluation of methods in a fully deoxygenated environment without flow artifacts, allowing direct calculation of hematocrit. This study compares 2 venous oxygenation quantification methods in in situ postmortem subjects. Transverse relaxation (R2*) mapping and quantitative susceptibility mapping were performed on a whole-body 4.7T MR imaging system. Intravenous measurements in major draining intracranial veins were compared between the 2 methods in 3 postmortem subjects. The quantitative susceptibility mapping technique was also applied in 10 healthy control subjects and compared with reference venous oxygenation values. In 2 early postmortem subjects, R2* mapping and quantitative susceptibility mapping measurements within intracranial veins had a significant and strong correlation (R² = 0.805, P = .004 and R² = 0.836, P = .02). Higher R2* and susceptibility values were consistently demonstrated within gravitationally dependent venous segments during the early postmortem period. Hematocrit ranged from 0.102 to 0.580 in postmortem subjects, with R2* and susceptibility as large as 291 s⁻¹ and 1.75 ppm, respectively. Measurements of R2* and quantitative susceptibility mapping within large intracranial draining veins have a high correlation in early postmortem subjects. This study supports the use of quantitative susceptibility mapping for evaluation of in vivo venous oxygenation and postmortem hematocrit concentrations. © 2018 by American Journal of Neuroradiology.

  13. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  14. Make Your Own Mashup Maps

    ERIC Educational Resources Information Center

    Lucking, Robert A.; Christmann, Edwin P.; Whiting, Mervyn J.

    2008-01-01

    "Mashup" is a new technology term used to describe a web application that combines data or technology from several different sources. You can apply this concept in your classroom by having students create their own mashup maps. Google Maps provides you with the simple tools, map databases, and online help you'll need to quickly master this…

  15. An Analysis of Prospective Teachers' Knowledge for Constructing Concept Maps

    ERIC Educational Resources Information Center

    Subramaniam, Karthigeyan; Esprívalo Harrell, Pamela

    2015-01-01

    Background: Literature contends that a teacher's knowledge of concept map-based tasks influence how their students perceive the task and execute the creation of acceptable concept maps. Teachers who are skilled concept mappers are able to (1) understand and apply the operational terms to construct a hierarchical/non-hierarchical concept map; (2)…

  16. Analyzing the Scientific Evolution of Social Work Using Science Mapping

    ERIC Educational Resources Information Center

    Martínez, Ma Angeles; Cobo, Manuel Jesús; Herrera, Manuel; Herrera-Viedma, Enrique

    2015-01-01

    Objectives: This article reports the first science mapping analysis of the social work field, which shows its conceptual structure and scientific evolution. Methods: Science Mapping Analysis Software Tool, a bibliometric science mapping tool based on co-word analysis and h-index, is applied using a sample of 18,794 research articles published from…

  17. Construct Maps: A Tool to Organize Validity Evidence

    ERIC Educational Resources Information Center

    McClarty, Katie Larsen

    2013-01-01

    The construct map is a promising tool for organizing the data standard-setting panelists interpret. The challenge in applying construct maps to standard-setting procedures will be the judicious selection of data to include within this organizing framework. Therefore, this commentary focuses on decisions about what to include in the construct map.…

  18. Using Mental Map Principles to Interpret American Indian Cartography

    ERIC Educational Resources Information Center

    Mitchell, Martin D.

    2014-01-01

    The understanding of maps drawn or significantly influenced by American Indians fosters critical thinking, cultural diversity, and awareness of a much-neglected topic in cartography. Line styles, scale depiction, and the sizing of individual entities are discussed in the context of applying principles from mental maps to American Indian maps and…

  19. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    NASA Astrophysics Data System (ADS)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and are expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for regional impact studies of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. The precipitation correction methods applied are linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while the temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show that (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased relative to observed meteorological data, their use for streamflow simulation results in large biases relative to observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, the PT and QM methods performed equally well, and best, in correcting the frequency-based indices (e.g., standard deviation, percentile values), while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R²); (4) for temperature, all correction methods performed equally well in correcting the raw temperature; and (5) for simulated streamflow, the precipitation correction methods have a more significant influence than the temperature correction methods, and the performance of the streamflow simulations is consistent with that of the corrected precipitation; i.e., the PT and QM methods performed best in correcting the flow duration curve and peak flow, while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
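    Among the methods compared, empirical quantile mapping is the simplest to sketch: each simulated value is mapped through the simulated CDF of the calibration period onto the corresponding observed quantile. A minimal illustration with names of our choosing:

    ```python
    import numpy as np

    def quantile_mapping(obs_cal, sim_cal, sim_raw):
        """Empirical QM: replace each raw simulated value with the observed
        value at the same empirical quantile (calibration period)."""
        q = np.linspace(0.0, 1.0, 101)
        sim_q = np.quantile(sim_cal, q)   # simulated quantiles
        obs_q = np.quantile(obs_cal, q)   # observed quantiles
        return np.interp(sim_raw, sim_q, obs_q)
    ```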

  20. Multi-Collinearity Based Model Selection for Landslide Susceptibility Mapping: A Case Study from Ulus District of Karabuk, Turkey

    NASA Astrophysics Data System (ADS)

    Sahin, E. K.; Colkesen, I.; Kavzoglu, T.

    2017-12-01

    Identification of localities prone to landslides plays an important role in emergency planning, disaster management and recovery planning. Given this importance for disaster management, producing accurate and up-to-date landslide susceptibility maps is essential for hazard mitigation and regional planning. The main objective of the present study was to apply a multi-collinearity based model selection approach to produce a landslide susceptibility map of the Ulus district of Karabuk, Turkey. When factors are highly correlated with each other, the data do not contain enough independent information to describe the problem under consideration. In such cases, choosing a subset of the original features will often lead to better performance. This paper presents a multi-collinearity based model selection approach to deal with the high correlation within the dataset. Two collinearity diagnostics (tolerance, TOL, and the variance inflation factor, VIF) are commonly used to identify multi-collinearity; VIF values that exceed 10.0 and TOL values less than 0.1 are often regarded as indicating multi-collinearity. Five causative factors (slope length, curvature, plan curvature, profile curvature and topographical roughness index) were found to be highly correlated with each other among the 15 factors available for the study area. As a result, the five correlated factors were removed from model estimation, and the performance of the model including the remaining 10 factors (aspect, drainage density, elevation, lithology, land use/land cover, NDVI, slope, sediment transport index, topographical position index and topographical wetness index) was evaluated using logistic regression. The prediction performance of the 10-factor model was compared to that of the 15-factor model using overall accuracy and the area under the ROC curve (AUC). Overall accuracy and AUC were 77.15% and 96.62%, respectively, for the model with the 10 selected factors, whereas they were 73.45% and 89.45%, respectively, for the model with all factors. The multi-collinearity based model thus outperformed the conventional model in mapping landslide susceptibility.
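    The collinearity screening itself is easy to reproduce: regress each factor on all the others, convert the resulting R² to VIF and TOL, and drop factors beyond the rule-of-thumb thresholds quoted above. A sketch with hypothetical names:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def collinearity_diagnostics(X, names):
        """VIF_i = 1 / (1 - R_i^2) from regressing factor i on the rest;
        TOL_i = 1 / VIF_i. X: (n_samples, n_factors)."""
        out = {}
        for i, name in enumerate(names):
            others = np.delete(X, i, axis=1)
            r2 = LinearRegression().fit(others, X[:, i]).score(others, X[:, i])
            vif = 1.0 / max(1.0 - r2, 1e-12)
            out[name] = (vif, 1.0 / vif)
        return out  # drop factors with VIF > 10 (TOL < 0.1)
    ```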

  1. Mapping monthly rainfall erosivity in Europe.

    PubMed

    Ballabio, Cristiano; Borrelli, Pasquale; Spinoni, Jonathan; Meusburger, Katrin; Michaelides, Silas; Beguería, Santiago; Klik, Andreas; Petan, Sašo; Janeček, Miloslav; Olsen, Preben; Aalto, Juha; Lakatos, Mónika; Rymszewicz, Anna; Dumitrescu, Alexandru; Tadić, Melita Perčec; Diodato, Nazzareno; Kostalova, Julia; Rousseva, Svetla; Banasik, Kazimierz; Alewell, Christine; Panagos, Panos

    2017-02-01

    Rainfall erosivity as a dynamic factor of soil loss by water erosion is modelled intra-annually for the first time at European scale. The development of the Rainfall Erosivity Database at European Scale (REDES) and its 2015 update with the extension to a monthly component allowed the development of monthly and seasonal R-factor maps and the assessment of rainfall erosivity both spatially and temporally. During winter months, significant rainfall erosivity is present only in part of the Mediterranean countries. A sudden increase of erosivity occurs in a major part of the European Union (except the Mediterranean basin, the western part of Britain, and Ireland) in May, and the highest values are registered during summer months. Starting from September, the R-factor shows a decreasing trend. The mean rainfall erosivity in summer is almost 4 times higher (315 MJ mm ha⁻¹ h⁻¹) compared to winter (87 MJ mm ha⁻¹ h⁻¹). The Cubist model was selected among various statistical models to perform the spatial interpolation due to its excellent performance, ability to model non-linearity and interpretability. Monthly prediction is substantially more difficult than annual prediction, as it is limited by the number of covariates and, for consistency, the sum of all months has to be close to the annual erosivity. The performance of the Cubist models proved to be generally high, resulting in R² values between 0.40 and 0.64 in cross-validation. The monthly maps show an increasing trend of erosivity from winter to summer, moving from western to eastern Europe. The maps also show a clear delineation of areas with different seasonal erosivity patterns, whose spatial outline was evidenced by cluster analysis. The monthly erosivity maps can be used to develop composite indicators that map both intra-annual variability and concentration of erosive events. Consequently, spatio-temporal mapping of rainfall erosivity permits identification of the months and the areas with the highest risk of soil loss, where conservation measures should be applied in different seasons of the year. Copyright © 2016 British Geological Survey, NERC. Published by Elsevier B.V. All rights reserved.

  2. Registration of interferometric SAR images

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Vesecky, John F.; Zebker, Howard A.

    1992-01-01

    Interferometric synthetic aperture radar (InSAR) is a new way of performing topographic mapping. Among the factors critical to mapping accuracy is the registration of the complex SAR images from repeated orbits. A new algorithm for registering interferometric SAR images is presented. A new figure of merit, the average fluctuation function of the phase difference image, is proposed to evaluate fringe pattern quality. The process of adjusting the registration parameters according to the fringe pattern quality is optimized through a downhill simplex minimization algorithm. The results of applying the proposed algorithm to register two pairs of Seasat SAR images, with a short baseline (75 m) and a long baseline (500 m), are shown. It is found that the average fluctuation function is a very stable measure of fringe pattern quality, allowing very accurate registration.
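    The abstract does not give the exact form of the average fluctuation function, so the sketch below uses the mean absolute gradient of the wrapped phase difference as a stand-in for fringe quality and minimizes it over a sub-pixel (row, column) offset with Nelder-Mead, the downhill simplex method mentioned above. All names are ours.

    ```python
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from scipy.optimize import minimize

    def fringe_fluctuation(offset, master, slave):
        """Fringe-quality cost for a trial (row, col) registration offset."""
        s = nd_shift(slave.real, offset) + 1j * nd_shift(slave.imag, offset)
        phase = np.angle(master * np.conj(s))      # interferometric phase
        dr, dc = np.gradient(phase)
        return np.mean(np.abs(dr) + np.abs(dc))    # lower = cleaner fringes

    def register(master, slave, x0=(0.0, 0.0)):
        res = minimize(fringe_fluctuation, x0, args=(master, slave),
                       method='Nelder-Mead')
        return res.x  # estimated sub-pixel offset
    ```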

  3. Performance analysis of a compact and low-cost mapping-grade mobile laser scanning system

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Vajakas, Toivo; Ellmann, Artu

    2017-10-01

    The performance of a low-cost, self-contained, compact, and easy-to-deploy mapping-grade mobile laser scanning (MLS) system, composed of a Velodyne VLP-16 light detection and ranging sensor and an SBG Systems Ellipse-D dual-antenna global navigation satellite system/inertial navigation system, is analyzed. The field tests were carried out in car-mounted and backpack modes for surveying road engineering structures (such as roads, parking lots, underpasses, and tunnels) and coastal erosion zones, respectively. The impact of the applied calculation principles on trajectory postprocessing, direct georeferencing, and the theoretical accuracy of the system is analyzed. A calibration method, based on Bound Optimization BY Quadratic Approximation (BOBYQA), for finding the boresight angles of an MLS system is proposed. The resulting MLS point clouds are compared with high-accuracy static terrestrial laser scanning data and survey-grade MLS data from a commercially manufactured MLS system. The vertical, horizontal, and relative accuracy are assessed: the root-mean-square error (RMSE) values were determined to be 8, 15, and 3 cm, respectively. Thus, the achieved mapping-grade accuracy demonstrates that this relatively compact and inexpensive self-assembled MLS can be successfully used for surveying the geometry and deformations of terrain, buildings, roads, and other engineering structures.

  4. Line fitting based feature extraction for object recognition

    NASA Astrophysics Data System (ADS)

    Li, Bing

    2014-06-01

    Image feature extraction plays a significant role in image-based pattern recognition applications. In this paper, we propose a new approach to generate hierarchical features. This new approach applies line fitting to adaptively divide regions based upon the amount of information and creates line-fitting features for each subsequent region. It overcomes the feature-wasting drawback of the wavelet-based approach and demonstrates high performance in real applications. For gray-scale images, we propose a diffusion equation approach to map information-rich pixels (pixels near edges and ridge pixels) to high values, and pixels in homogeneous regions to small values near zero, forming energy-map images. After the energy-map images are generated, we apply line fitting to divide regions recursively and create features for each region simultaneously. This new feature extraction approach is similar to wavelet-based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, avoiding the feature waste of the wavelet approach in homogeneous regions. Finally, experiments on handwriting word recognition show that the new method provides higher performance than a conventional handwriting word recognition approach.

  5. An electrical impedance tomography (EIT) multi-electrode needle-probe device for local assessment of heterogeneous tissue impeditivity.

    PubMed

    Meroni, Davide; Maglioli, Camilla Carpano; Bovio, Dario; Greco, Francesco G; Aliverti, Andrea

    2017-07-01

    Electrical impedance tomography (EIT) is an image reconstruction technique applied in medicine for the electrical imaging of living tissues. The literature provides evidence of large resistivity variations related to the differences between human tissues. Following this interest in the electrical characterization of biological samples, attention has recently also focused on identifying and characterizing human tissue by studying the homogeneity of its structure. An 8-electrode needle-probe device has been developed with the intent of identifying structural inhomogeneities under the surface layers. Ex vivo impeditivity measurements were performed by placing the needle-probe in 5 different patterns of fat and lean porcine tissue, and impeditivity maps were obtained with EIDORS, an open-source software package for image reconstruction in electrical impedance. The values composing the maps were analyzed, showing good tissue discrimination and conformity with the real images. We conclude that this device is able to produce impeditivity maps that match reality in position and orientation. In all five patterns presented, it is possible to identify and replicate correctly the heterogeneous tissue under test. This new procedure can help medical staff to fully characterize a biological sample in different unclear situations.

  6. Translational Research Principles Applied to Education: The Mapping Educational Specialist Knowhow (MESH) Initiative

    ERIC Educational Resources Information Center

    Burden, Kevin; Younie, Sarah; Leask, Marilyn

    2013-01-01

    The Mapping Educational Specialist Knowhow (MESH) Initiative is part of a research project applying knowledge management principles which are well known in other sectors, public and private, to the education sector. The goal is to develop and test out the new ways of working, now possible with digital technologies, which can address long standing…

  7. Boson mapping techniques applied to constant gauge fields in QCD

    NASA Technical Reports Server (NTRS)

    Hess, Peter Otto; Lopez, J. C.

    1995-01-01

    Pairs of coordinates and derivatives of the constant gluon modes are mapped to new gluon-pair fields and their derivatives. Applying this mapping to the Hamiltonian of constant gluon fields yields, for large coupling constants, an effective Hamiltonian which separates into one part describing a scalar field and another for a field with spin two. The ground state is dominated by pairs of gluons coupled to color and spin zero, with slight admixtures of color-zero and spin-two pairs. As the color group we used SU(2).

  8. Recognizing lexical and semantic change patterns in evolving life science ontologies to inform mapping adaptation.

    PubMed

    Dos Reis, Julio Cesar; Dinh, Duy; Da Silveira, Marcos; Pruski, Cédric; Reynaud-Delaître, Chantal

    2015-03-01

    Mappings established between life science ontologies require significant effort to keep them up to date, due to the size and frequent evolution of these ontologies. In consequence, automatic methods for applying modifications to mappings are highly demanded. The accuracy of such methods relies on the available description of ontology evolution, especially regarding the concepts involved in mappings. However, from one ontology version to another, a further understanding of the ontology changes relevant for supporting mapping adaptation is typically lacking. This research work defines a set of change patterns at the level of concept attributes, and proposes original methods to automatically recognize instances of these patterns based on the similarity between the attributes denoting the evolving concepts. This investigation evaluates the benefits of the proposed methods and the influence of the recognized change patterns on selecting strategies for mapping adaptation. The summary of the findings is as follows: (1) the Precision (>60%) and Recall (>35%) achieved by comparing manually identified change patterns with the automatically recognized ones; (2) a set of potential impacts of recognized change patterns on the way mappings are adapted; we found that the detected correlations cover ∼66% of the mapping adaptation actions with a positive impact; and (3) the influence of the similarity coefficient calculated between concept attributes on the performance of the recognition algorithms. The experimental evaluations conducted with real life science ontologies showed the effectiveness of our approach in accurately characterizing ontology evolution at the level of concept attributes. This investigation confirmed the relevance of the proposed change patterns to support decisions on mapping adaptation. Copyright © 2014 Elsevier B.V. All rights reserved.
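    The abstract does not name the similarity coefficient used; a token-set Jaccard measure over attribute strings is one plausible stand-in for comparing a concept's attributes across ontology versions:

    ```python
    def jaccard(attr_old, attr_new):
        """Token-set similarity between two attribute strings, in [0, 1]."""
        a, b = set(attr_old.lower().split()), set(attr_new.lower().split())
        return len(a & b) / len(a | b) if a | b else 1.0

    # e.g. jaccard("myocardial infarction",
    #              "acute myocardial infarction") == 2/3,
    # suggesting an enrichment-style change rather than a full replacement.
    ```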

  9. Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief.

    PubMed

    Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S

    2011-05-15

    Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared the accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters across each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with the contrasts belief > disbelief and disbelief > belief. Copyright © 2010 Elsevier Inc. All rights reserved.
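    The forward search over ranked ICs can be sketched with scikit-learn; this is an illustrative reconstruction (classifier settings and names are ours, and the original study compared several classifiers rather than only Random Forest):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def forward_ic_search(X, y, ranked_ics, max_ics=10):
        """Grow the feature set IC-by-IC in ranked order and record the
        cross-validated accuracy at each step."""
        scores = []
        for k in range(1, max_ics + 1):
            subset = X[:, ranked_ics[:k]]
            acc = cross_val_score(RandomForestClassifier(n_estimators=200),
                                  subset, y, cv=5).mean()
            scores.append((k, acc))
        return scores  # pick the smallest k where accuracy plateaus
    ```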

  10. Remote Sensing Information Science Research

    NASA Technical Reports Server (NTRS)

    Clarke, Keith C.; Scepan, Joseph; Hemphill, Jeffrey; Herold, Martin; Husak, Gregory; Kline, Karen; Knight, Kevin

    2002-01-01

    This document is the final report summarizing research conducted by the Remote Sensing Research Unit, Department of Geography, University of California, Santa Barbara under National Aeronautics and Space Administration Research Grant NAG5-10457. This document describes work performed during the period of 1 March 2001 through 30 September 2002. This report includes a survey of research proposed and performed within RSRU and the UCSB Geography Department during the past 25 years. A broad suite of RSRU research conducted under NAG5-10457 is also described under the themes of Applied Research Activities and Information Science Research. This research includes: 1. NASA ESA Research Grant Performance Metrics Reporting. 2. Global Data Set Thematic Accuracy Analysis. 3. ISCGM/Global Map Project Support. 4. Cooperative International Activities. 5. User Model Study of Global Environmental Data Sets. 6. Global Spatial Data Infrastructure. 7. CIESIN Collaboration. 8. On the Value of Coordinating Landsat Operations. 10. The California Marine Protected Areas Database: Compilation and Accuracy Issues. 11. Assessing Landslide Hazard Over a 130-Year Period for La Conchita, California. Remote Sensing and Spatial Metrics for Applied Urban Area Analysis, including: (1) IKONOS Data Processing for Urban Analysis. (2) Image Segmentation and Object Oriented Classification. (3) Spectral Properties of Urban Materials. (4) Spatial Scale in Urban Mapping. (5) Variable Scale Spatial and Temporal Urban Growth Signatures. (6) Interpretation and Verification of SLEUTH Modeling Results. (7) Spatial Land Cover Pattern Analysis for Representing Urban Land Use and Socioeconomic Structures. 12. Colorado River Flood Plain Remote Sensing Study Support. 13. African Rainfall Modeling and Assessment. 14. Remote Sensing and GIS Integration.

  11. Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.

    PubMed

    Han, Youkyung; Oh, Jaehong

    2018-05-17

    For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.

  12. A Computer-Aided Diagnosis System for Measuring Carotid Artery Intima-Media Thickness (IMT) Using Quaternion Vectors.

    PubMed

    Kutbay, Uğurhan; Hardalaç, Fırat; Akbulut, Mehmet; Akaslan, Ünsal; Serhatlıoğlu, Selami

    2016-06-01

    This study aims to investigate adjustable distant fuzzy c-means segmentation on carotid Doppler images, as well as quaternion-based convolution filters and saliency mapping procedures. We developed imaging software that simplifies the measurement of carotid artery intima-media thickness (IMT) on saliency mapping images. Additionally, specialists evaluated the present images and compared them with the saliency mapping images. In the present research, we conducted imaging studies of 25 carotid Doppler images obtained by the Department of Cardiology at Fırat University. After implementing fuzzy c-means segmentation and quaternion-based convolution on all Doppler images, we obtained a model that can be analyzed easily by doctors using a bottom-up saliency model. These methods were applied to 25 carotid Doppler images and then interpreted by specialists. In the present study, we used color-filtering methods to obtain carotid color images. Saliency mapping was performed on the obtained images, and the carotid artery IMT was detected and interpreted on the images from both methods; the raw images are shown in the Results. These results were also evaluated using the mean square error (MSE) against the raw IMT images, and the method giving the best performance was quaternion-based saliency mapping (QBSM). MSEs of 0.0014 and 0.000191 mm² were obtained for artery lumen diameters and plaque diameters in carotid arteries, respectively. We found that computer-based image processing methods applied to carotid Doppler images could aid doctors in their decision-making process. We developed software that could ease the process of measuring carotid IMT for cardiologists and help them evaluate their findings.

  13. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

    Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with the huge training sets that are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields (the continuous occupancy maps) using marching cubes. By doing so, we are able to build two types of map representation within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
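    The local-GP idea can be sketched as: cluster the training points, fit one GP classifier per cluster, and route each query to the GP of its nearest cluster. A toy illustration assuming every cluster contains both occupied and free training points; the paper's coarse-to-fine clustering and surface extraction are not reproduced.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.gaussian_process import GaussianProcessClassifier

    def fit_local_gpcs(X, y, n_clusters=20):
        """One GP classifier per spatial cluster, avoiding a global O(n^3) fit."""
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
        models = {c: GaussianProcessClassifier().fit(X[km.labels_ == c],
                                                     y[km.labels_ == c])
                  for c in range(n_clusters)}
        return km, models

    def predict_occupancy(km, models, X_test):
        """Query each test point against the GP of its nearest cluster."""
        labels = km.predict(X_test)
        p = np.empty(len(X_test))
        for c, model in models.items():
            mask = labels == c
            if mask.any():
                p[mask] = model.predict_proba(X_test[mask])[:, 1]
        return p  # continuous occupancy probability in [0, 1]
    ```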

  14. An experimental comparison of standard stereo matching algorithms applied to cloud top height estimation from satellite IR images

    NASA Astrophysics Data System (ADS)

    Anzalone, Anna; Isgrò, Francesco

    2016-10-01

    The JEM-EUSO (Japanese Experiment Module-Extreme Universe Space Observatory) telescope will measure Ultra High Energy Cosmic Ray properties by detecting the UV fluorescent light generated in the interaction between cosmic rays and the atmosphere. Cloud information is crucial for a proper interpretation of these data. The problem of recovering the cloud-top height from satellite images in the infrared has attracted attention over the last few decades as a valuable tool for atmospheric monitoring. A number of radiative methods exist, such as CO2 slicing and split-window algorithms, using one or more infrared bands. A different way to tackle the problem is, when possible, to exploit the availability of multiple views and recover the cloud-top height through stereo imaging and triangulation. A crucial step in the 3D reconstruction is the process that attempts to match a characteristic point or feature selected in one image with one of those detected in the second image. In this article the performance of a group of matching algorithms, including both area-based and global techniques, is tested. They are applied to stereo pairs of satellite IR images with the final aim of evaluating the cloud-top height. Cloudy images from SEVIRI on the geostationary Meteosat Second Generation satellites 9 and 10 (MSG-2, MSG-3) were selected. After applying the stereo matching algorithms to the cloudy scenes, the resulting disparity maps are transformed into height maps according to the geometry of the reference data system. As ground truth we used the height maps provided by the MODIS (Moderate Resolution Imaging Spectroradiometer) database from the Terra/Aqua polar satellites, which contains images quasi-synchronous with the imaging provided by MSG.
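    For the rectified textbook case, the disparity-to-depth step is a one-liner; the satellite stereo geometry used in the paper instead requires the actual viewing geometry of the two platforms, as the abstract notes. Purely illustrative:

    ```python
    def disparity_to_depth(disparity_px, baseline_m, focal_px):
        """Rectified-stereo triangulation: depth = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("non-positive disparity: point at or beyond infinity")
        return focal_px * baseline_m / disparity_px
    ```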

  15. Rapid gene identification in sugar beet using deep sequencing of DNA from phenotypic pools selected from breeding panels.

    PubMed

    Ries, David; Holtgräwe, Daniela; Viehöver, Prisca; Weisshaar, Bernd

    2016-03-15

    The combination of bulk segregant analysis (BSA) and next generation sequencing (NGS), also known as mapping by sequencing (MBS), has been shown to significantly accelerate the identification of causal mutations for species with a reference genome sequence. The usual approach is to cross homozygous parents that differ in the monogenic trait of interest, to perform deep sequencing of DNA from F2 plants pooled according to their phenotype, and subsequently to analyze the allele frequency distribution based on a marker table for the parents studied. The method has been successfully applied to EMS-induced mutations as well as natural variation. Here, we show that pooling genetically diverse breeding lines according to a contrasting phenotype also allows high-resolution mapping of the causal gene in a crop species. The test case was the monogenic locus causing red vs. green hypocotyl color in Beta vulgaris (the R locus). We determined the allele frequencies of polymorphic sequences using sequence data from two diverging phenotypic pools of 180 B. vulgaris accessions each. A single interval of about 31 kbp among the nine chromosomes was identified, which indeed contained the causative mutation. By applying a variation of the mapping by sequencing approach, we demonstrated that phenotype-based pooling of diverse accessions from breeding panels and subsequent direct determination of the allele frequency distribution can be successfully applied for gene identification in a crop species. Our approach made it possible to identify a small interval around the causative gene. Sequencing of parents or individual lines was not necessary. Whenever the appropriate plant material is available, the approach described saves time compared to the generation of an F2 population. In addition, we provide clues for planning similar experiments with regard to pool size and the sequencing depth required.
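    The core computation, per-SNP allele-frequency differences between the two phenotypic pools, reduces to a few lines; variable names are hypothetical, and in practice the signal is smoothed along each chromosome before calling the peak.

    ```python
    import numpy as np

    def allele_frequency_scan(alt_pool1, depth_pool1, alt_pool2, depth_pool2):
        """dAF per SNP between two pools, from alternative-allele read counts
        and total read depths; causal regions approach |dAF| -> 1."""
        af1 = np.asarray(alt_pool1) / np.asarray(depth_pool1)
        af2 = np.asarray(alt_pool2) / np.asarray(depth_pool2)
        return af1 - af2
    ```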

  16. Making maps of cosmic microwave background polarization for B-mode studies: the POLARBEAR example

    DOE PAGES

    Poletti, Davide; Fabbian, Giulio; Le Jeune, Maude; ...

    2017-03-30

    Analysis of cosmic microwave background (CMB) datasets typically requires some filtering of the raw time-ordered data. For instance, in the context of ground-based observations, filtering is frequently used to minimize the impact of low frequency noise, atmospheric contributions and/or scan synchronous signals on the resulting maps. In this paper, we have explicitly constructed a general filtering operator, which can unambiguously remove any set of unwanted modes in the data, and then amend the map-making procedure in order to incorporate and correct for it. We show that such an approach is mathematically equivalent to the solution of a problem in which the sky signal and unwanted modes are estimated simultaneously and the latter are marginalized over. We investigated the conditions under which this amended map-making procedure can render an unbiased estimate of the sky signal in realistic circumstances. We then discuss the potential implications of these observations on the choice of map-making and power spectrum estimation approaches in the context of B-mode polarization studies. Specifically, we have studied the effects of time-domain filtering on the noise correlation structure in the map domain, as well as the impact it may have on the performance of the popular pseudo-spectrum estimators. We conclude that although maps produced by the proposed estimators arguably provide the most faithful representation of the sky possible given the data, they may not straightforwardly lead to the best constraints on the power spectra of the underlying sky signal, and special care may need to be taken to ensure this is the case. By contrast, simplified map-makers which do not explicitly correct for time-domain filtering, but leave it to subsequent steps in the data analysis, may perform equally well and be easier and faster to implement. We focused on polarization-sensitive measurements targeting the B-mode component of the CMB signal and apply the proposed methods to realistic simulations based on characteristics of an actual CMB polarization experiment, POLARBEAR. Our analysis and conclusions are, however, more generally applicable.

  17. Making maps of cosmic microwave background polarization for B-mode studies: the POLARBEAR example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poletti, Davide; Fabbian, Giulio; Le Jeune, Maude

    Analysis of cosmic microwave background (CMB) datasets typically requires some filtering of the raw time-ordered data. For instance, in the context of ground-based observations, filtering is frequently used to minimize the impact of low frequency noise, atmospheric contributions and/or scan synchronous signals on the resulting maps. In this paper, we have explicitly constructed a general filtering operator, which can unambiguously remove any set of unwanted modes in the data, and then amend the map-making procedure in order to incorporate and correct for it. We show that such an approach is mathematically equivalent to the solution of a problem in which the sky signal and unwanted modes are estimated simultaneously and the latter are marginalized over. We investigated the conditions under which this amended map-making procedure can render an unbiased estimate of the sky signal in realistic circumstances. We then discuss the potential implications of these observations on the choice of map-making and power spectrum estimation approaches in the context of B-mode polarization studies. Specifically, we have studied the effects of time-domain filtering on the noise correlation structure in the map domain, as well as the impact it may have on the performance of the popular pseudo-spectrum estimators. We conclude that although maps produced by the proposed estimators arguably provide the most faithful representation of the sky possible given the data, they may not straightforwardly lead to the best constraints on the power spectra of the underlying sky signal, and special care may need to be taken to ensure this is the case. By contrast, simplified map-makers which do not explicitly correct for time-domain filtering, but leave it to subsequent steps in the data analysis, may perform equally well and be easier and faster to implement. We focused on polarization-sensitive measurements targeting the B-mode component of the CMB signal and apply the proposed methods to realistic simulations based on characteristics of an actual CMB polarization experiment, POLARBEAR. Our analysis and conclusions are, however, more generally applicable.

  18. Korean coastal water depth/sediment and land cover mapping (1:25,000) by computer analysis of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Park, K. Y.; Miller, L. D.

    1978-01-01

    Computer analysis was applied to single-date LANDSAT MSS imagery of a sample coastal area near Seoul, Korea, equivalent to a 1:50,000 topographic map. Supervised image processing yielded a test classification map from this sample image containing 12 classes: 5 water depth/sediment classes, 2 shoreline/tidal classes, and 5 coastal land cover classes, at a scale of 1:25,000 and with a training set accuracy of 76%. Unsupervised image classification was applied to a subportion of the site and produced spatially comparable classification maps. The results of this test indicated that it is feasible to produce such quantitative maps for detailed study of dynamic coastal processes, given a LANDSAT image data base at sufficiently frequent time intervals.

  19. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, demonstrating the clear superiority of the method. The proposed algorithm demonstrates both better edge detection performance and improved time performance.
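    The per-image kernel of the approach, Canny with its dual threshold derived from Otsu's global threshold, can be sketched with OpenCV; the MapReduce wrapper that distributes such calls over image splits on Hadoop is omitted here, and the 0.5 low/high ratio is a common convention rather than a value from the paper.

    ```python
    import cv2

    def otsu_canny(gray):
        """Canny edges with the dual threshold set from Otsu's threshold
        (expects an 8-bit single-channel image)."""
        high, _ = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.Canny(gray, 0.5 * high, high)
    ```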

  20. Short communication: Investigation into Mycobacterium avium ssp. paratuberculosis in pasteurized milk in Italy.

    PubMed

    Serraino, A; Bonilauri, P; Giacometti, F; Ricchi, M; Cammi, G; Piva, S; Zambrini, V; Canever, A; Arrigoni, N

    2017-01-01

    This study investigated the presence of viable Mycobacterium avium ssp. paratuberculosis (MAP) in pasteurized milk produced by Italian industrial dairy plants, to verify the prediction of a previously performed risk assessment. The study analyzed 160 one-liter bottles of pasteurized milk from 2 dairy plants located in 2 different regions. Traditional cultural protocols were applied to 500 mL of pasteurized milk for each sample. The investigation also covered the pasteurization parameters and data on the microbiological characteristics of raw milk (total bacterial count) and pasteurized milk (Enterobacteriaceae and Listeria monocytogenes). No sample was positive for MAP, the pasteurization parameters complied with European Union legislation, and the microbiological analysis of raw and pasteurized milk showed good microbiological quality. The results show that a 7-log (or >7) reduction could be a plausible value for commercial pasteurization. The combination of hygiene practices at farm level and commercial pasteurization yields very low or absent levels of MAP contamination in pasteurized milk, suggesting that pasteurized milk is not a significant source of human exposure to MAP in the dairies investigated. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  1. Connectopic mapping with resting-state fMRI.

    PubMed

    Haak, Koen V; Marquand, Andre F; Beckmann, Christian F

    2018-04-15

    Brain regions are often topographically connected: nearby locations within one brain area connect with nearby locations in another area. Mapping these connection topographies, or 'connectopies' for short, is crucial for understanding how information is processed in the brain. Here, we propose principled, fully data-driven methods for mapping connectopies using functional magnetic resonance imaging (fMRI) data acquired at rest, by combining spectral embedding of voxel-wise connectivity 'fingerprints' with a novel approach to spatial statistical inference. We apply the approach in human primary motor and visual cortex, and show that it can trace biologically plausible, overlapping connectopies in individual subjects that follow these regions' somatotopic and retinotopic maps. As a generic mechanism for performing inference over connectopies, the new spatial statistics approach enables rigorous statistical testing of hypotheses regarding the fine-grained spatial profile of functional connectivity and whether that profile differs between subjects or between experimental conditions. The combined framework offers a fundamental alternative to existing approaches to investigating functional connectivity in the brain, moving from voxel- or seed-pairwise characterizations of functional association towards a full, multivariate characterization of spatial topography. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
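
    A minimal sketch of the central computation, connectivity 'fingerprints' followed by spectral embedding, can be written with scikit-learn; the array shapes and the simple correlation-based affinity are illustrative assumptions, and the paper's spatial statistical inference layer is omitted.

      import numpy as np
      from sklearn.manifold import SpectralEmbedding

      ts_roi = np.random.randn(500, 200)    # stand-in: ROI voxels x timepoints
      ts_rest = np.random.randn(1000, 200)  # stand-in: target voxels x timepoints

      # Fingerprint of each ROI voxel: correlation with every target voxel
      fp = np.corrcoef(np.vstack([ts_roi, ts_rest]))[:500, 500:]

      # Affinity between ROI voxels: correlation of fingerprints, clipped >= 0
      affinity = np.clip(np.corrcoef(fp), 0.0, None)

      # Dominant spatial mode of connectivity change (the 'connectopy')
      connectopy = SpectralEmbedding(n_components=1,
                                     affinity='precomputed').fit_transform(affinity)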

  2. Phase gradient imaging for positive contrast generation to superparamagnetic iron oxide nanoparticle-labeled targets in magnetic resonance imaging.

    PubMed

    Zhu, Haitao; Demachi, Kazuyuki; Sekino, Masaki

    2011-09-01

    Positive contrast imaging methods produce enhanced signal at large magnetic field gradients in magnetic resonance imaging. Several postprocessing algorithms, such as susceptibility gradient mapping and phase gradient mapping methods, have been applied for positive contrast generation to detect cells targeted by superparamagnetic iron oxide nanoparticles. In phase gradient mapping methods, a smoothness condition has to be satisfied to keep the phase gradient unwrapped. Moreover, there has been no discussion of the truncation artifact associated with the differentiation algorithm, which is performed in k-space by multiplication with the frequency value. In this work, phase gradient methods are discussed for the case in which the smoothness condition is not satisfied and wrapping occurs. A region-growing unwrapping algorithm is used in the phase gradient image to solve the problem. To reduce the truncation artifact, the k-space data are multiplied by a cosine function that eliminates the abrupt change at the boundaries. Simulation, phantom, and in vivo experimental results demonstrate that the modified phase gradient mapping methods may produce improved positive contrast effects by reducing truncation or wrapping artifacts. Copyright © 2011 Elsevier Inc. All rights reserved.
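
    The k-space differentiation and cosine windowing can be sketched in a few lines of NumPy; this is an illustrative one-axis version under the smoothness assumption, not the paper's full region-growing unwrapping pipeline.

      import numpy as np

      def phase_gradient_x(phase):
          # Differentiation along x in k-space: multiply by i*2*pi*kx.
          kx = np.fft.fftfreq(phase.shape[1])
          # The cosine window tapers high |k| to zero, suppressing the
          # truncation (Gibbs) artifact from the abrupt k-space boundary.
          window = 0.5 * (1.0 + np.cos(2.0 * np.pi * kx))
          F = np.fft.fft(phase, axis=1)
          return np.fft.ifft(F * 1j * 2.0 * np.pi * kx * window, axis=1).real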

  3. Surface-Constrained Volumetric Brain Registration Using Harmonic Mappings

    PubMed Central

    Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.

    2015-01-01

    In order to compare anatomical and functional brain imaging data across subjects, the images must first be registered to a common coordinate system in which anatomical features are aligned. Intensity-based volume registration methods can align subcortical structures well, but the variability in sulcal folding patterns typically results in misalignment of the cortical surface. Conversely, surface-based registration using sulcal features can produce excellent cortical alignment but the mapping between brains is restricted to the cortical surface. Here we describe a method for volumetric registration that also produces an accurate one-to-one point correspondence between cortical surfaces. This is achieved by first parameterizing and aligning the cortical surfaces using sulcal landmarks. We then use a constrained harmonic mapping to extend this surface correspondence to the entire cortical volume. Finally, this mapping is refined using an intensity-based warp. We demonstrate the utility of the method by applying it to T1-weighted magnetic resonance images (MRI). We evaluate the performance of our proposed method relative to existing methods that use only intensity information; for this comparison we compute the inter-subject alignment of expert-labeled sub-cortical structures after registration. PMID:18092736
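
    The harmonic extension step amounts to solving Laplace's equation in the volume with the surface correspondence as a Dirichlet boundary condition; a toy two-dimensional Jacobi-iteration sketch (one displacement component, assumed grid setup) conveys the idea.

      import numpy as np

      def harmonic_extension(u, fixed, n_iter=2000):
          # u: displacement component on a grid; fixed: boolean mask marking
          # voxels whose values come from the surface registration.
          for _ in range(n_iter):
              avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                            np.roll(u, 1, 1) + np.roll(u, -1, 1))
              u = np.where(fixed, u, avg)   # relax interior voxels only
          return u                          # approximately harmonic inside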

  4. Effect of modified atmosphere packaging on Quality Index Method (QIM) scores of farmed gilthead seabream (Sparus aurata L.) at low and abused temperatures.

    PubMed

    Campus, Marco; Bonaglini, Elia; Cappuccinelli, Roberto; Porcu, Maria Cristina; Tonelli, Roberto; Roggio, Tonina

    2011-04-01

    A Quality Index Method (QIM) scheme was developed for modified atmosphere packaging (MAP) packed gilthead seabream, and the effect of MAP gas mixtures (60% CO2 and 40% N2; 60% CO2, 30% O2, and 10% N2), temperature (2, 4, and 8 °C), and time of storage on QI scores was assessed. QI scores were crossed with sensory evaluation of cooked fish according to a modified Torry scheme to establish the rejection point. In order to reduce redundant parameters, a principal component analysis was applied to the preliminary QIM parameter scores from the best-performing MAP among those tested. The final QIM scheme consists of 13 parameters and a maximum demerit score of 25. The maximum storage time was found to be 13 d at 4 °C for MAP 60% CO2 and 40% N2. Storage at 2 °C did not substantially improve sensory parameter scores, while storage under temperature abuse (8 °C) drastically accelerated the rate of increase of QI scores and reduced the maximum storage time to 6 d.

  5. Mapping underwater sound noise and assessing its sources by using a self-organizing maps method.

    PubMed

    Rako, Nikolina; Vilibić, Ivica; Mihanović, Hrvoje

    2013-03-01

    This study aims to provide an objective mapping of underwater noise and its sources over an Adriatic coastal marine habitat by applying the self-organizing maps (SOM) method. Systematic sampling of sea ambient noise (SAN) was carried out at ten predefined acoustic stations between 2007 and 2009. Analyses of noise levels were performed for 1/3 octave band standard centered frequencies in terms of instantaneous sound pressure levels averaged over 300 s to calculate the equivalent continuous sound pressure levels. Data on vessels' presence, type, and distance from the monitoring stations were also collected at each acoustic station during the acoustic sampling. Altogether, 69 noise surveys were introduced to the predefined 2 × 2 SOM array. The overall results of the analysis distinguished two dominant underwater soundscapes, associating them mainly with seasonal changes in nautical tourism and fishing activities within the study area and with wind and wave action. The analysis identified recreational vessels as the dominant anthropogenic source of underwater noise, particularly during the tourist season. The method proved to be an efficient tool for predicting SAN levels based on vessel distribution, also indicating the possibility of wider application in marine conservation.
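
    For reference, a 2 × 2 SOM of this kind can be reproduced with the open-source minisom package; the feature matrix below is a stand-in for the 69 surveys and their band levels, and the hyperparameters are illustrative.

      import numpy as np
      from minisom import MiniSom

      surveys = np.random.rand(69, 27)   # stand-in: 69 surveys x 27 band levels
      som = MiniSom(2, 2, surveys.shape[1], sigma=0.8,
                    learning_rate=0.5, random_seed=1)
      som.train_random(surveys, 5000)                 # fit the 2 x 2 array
      clusters = [som.winner(s) for s in surveys]     # unit assigned to each survey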

  6. Fast Shear Compounding Using Robust Two-dimensional Shear Wave Speed Calculation and Multi-directional Filtering

    PubMed Central

    Song, Pengfei; Manduca, Armando; Zhao, Heng; Urban, Matthew W.; Greenleaf, James F.; Chen, Shigao

    2014-01-01

    A fast shear compounding method was developed in this study using only one shear wave push-detect cycle, such that the shear wave imaging frame rate is preserved and motion artifacts are minimized. The proposed method is composed of the following steps: 1. applying a comb-push to produce multiple differently angled shear waves at different spatial locations simultaneously; 2. decomposing the complex shear wave field into individual shear wave fields with differently oriented shear waves using a multi-directional filter; 3. using a robust two-dimensional (2D) shear wave speed calculation to reconstruct 2D shear elasticity maps from each filter direction; 4. compounding these 2D maps from different directions into a final map. An inclusion phantom study showed that the fast shear compounding method could achieve comparable performance to conventional shear compounding without sacrificing the imaging frame rate. A multi-inclusion phantom experiment showed that the fast shear compounding method could provide a full field-of-view (FOV), 2D, and compounded shear elasticity map with three types of inclusions clearly resolved and stiffness measurements showing excellent agreement to the nominal values. PMID:24613636

  7. Intrinsic Resting-State Functional Connectivity in the Human Spinal Cord at 3.0 T.

    PubMed

    San Emeterio Nateras, Oscar; Yu, Fang; Muir, Eric R; Bazan, Carlos; Franklin, Crystal G; Li, Wei; Li, Jinqi; Lancaster, Jack L; Duong, Timothy Q

    2016-04-01

    To apply resting-state functional magnetic resonance (MR) imaging to map functional connectivity of the human spinal cord. Studies were performed in nine self-declared healthy volunteers with informed consent and institutional review board approval. Resting-state functional MR imaging was performed to map functional connectivity of the human cervical spinal cord from C1 to C4 at 1 × 1 × 3-mm resolution with a 3.0-T clinical MR imaging unit. Independent component analysis (ICA) was performed to derive resting-state functional MR imaging z-score maps rendered on two-dimensional and three-dimensional images. Seed-based analysis was performed for cross validation with ICA networks by using Pearson correlation. Reproducibility analysis of resting-state functional MR imaging maps from four repeated trials in a single participant yielded a mean z score of 6 ± 1 (P < .0001). The centroid coordinates across the four trials deviated by 2 in-plane voxels ± 2 mm (standard deviation) and up to one adjacent image section ± 3 mm. ICA of group resting-state functional MR imaging data revealed prominent functional connectivity patterns within the spinal cord gray matter. There were statistically significant (z score > 3, P < .001) bilateral, unilateral, and intersegmental correlations in the ventral horns, dorsal horns, and central spinal cord gray matter. Three-dimensional surface rendering provided visualization of these components along the length of the spinal cord. Seed-based analysis showed that many ICA components exhibited strong and significant (P < .05) correlations, corroborating the ICA results. Resting-state functional MR imaging connectivity networks are qualitatively consistent with known neuroanatomic and functional structures in the spinal cord. Resting-state functional MR imaging of the human cervical spinal cord with a 3.0-T clinical MR imaging unit and standard MR imaging protocols and hardware reveals prominent functional connectivity patterns within the spinal cord gray matter, consistent with known functional and anatomic layouts of the spinal cord.

  8. Analyzing the Use of Concept Maps in Computer Science: A Systematic Mapping Study

    ERIC Educational Resources Information Center

    dos Santos, Vinicius; de Souza, Érica F.; Felizardo, Katia R; Vijaykumar, Nandamudi L.

    2017-01-01

    Context: Concept Maps (CMs) enable the creation of a schematic representation of domain knowledge. For this reason, CMs have been applied in different research areas, including Computer Science. Objective: the objective of this paper is to present the results of a systematic mapping study conducted to collect and evaluate existing research on…

  9. Calculating Lyapunov Exponents: Applying Products and Evaluating Integrals

    ERIC Educational Resources Information Center

    McCartney, Mark

    2010-01-01

    Two common examples of one-dimensional maps (the tent map and the logistic map) are generalized to cases where they have more than one control parameter. In the case of the tent map, this still allows the global Lyapunov exponent to be found analytically, and permits various properties of the resulting global Lyapunov exponents to be investigated…
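
    For the standard logistic map x_{n+1} = r x_n (1 - x_n), the global Lyapunov exponent is the orbit average of ln|f'(x)| = ln|r(1 - 2x)|; the short numerical check below uses the one-parameter map, not the multi-parameter generalizations discussed in the article.

      import numpy as np

      def lyapunov_logistic(r, x0=0.3, n=100000, burn=1000):
          x, total = x0, 0.0
          for i in range(n + burn):
              if i >= burn:                 # discard the transient
                  total += np.log(abs(r * (1.0 - 2.0 * x)))
              x = r * x * (1.0 - x)
          return total / n

      print(lyapunov_logistic(4.0))         # approaches ln 2 ~ 0.693 at r = 4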

  10. Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps

    NASA Astrophysics Data System (ADS)

    Brooks, E. M.; Stein, S. A.; Spencer, B. D.

    2015-12-01

    The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered by arguing that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well these maps actually perform is unknown. We explore this hotly contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded at only a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that the factors controlling map performance are complicated, and that learning more about how maps perform and why would be valuable in making more effective policy.
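
    The two metrics contrasted here are easy to state concretely: one checks that the fraction of sites where observed shaking exceeded the mapped value matches the target probability p, the other scores the squared misfit between observed and predicted shaking. A schematic NumPy rendering (array and function names assumed):

      import numpy as np

      def exceedance_metric(observed_max, predicted, p):
          # |fraction of sites exceeding the map - design fraction p|
          return abs(np.mean(observed_max > predicted) - p)

      def squared_misfit_metric(observed_max, predicted):
          # mean squared difference between observed and predicted shaking
          return np.mean((observed_max - predicted) ** 2)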

  11. New Topographic Maps of Io Using Voyager and Galileo Stereo Imaging and Photoclinometry

    NASA Astrophysics Data System (ADS)

    White, O. L.; Schenk, P. M.; Hoogenboom, T.

    2012-03-01

    Stereo and photoclinometry processing have been applied to Voyager and Galileo images of Io in order to derive regional- and local-scale topographic maps of 20% of the moon’s surface to date. We present initial mapping results.

  12. The 2008 U.S. Geological Survey national seismic hazard models and maps for the central and eastern United States

    USGS Publications Warehouse

    Petersen, Mark D.; Frankel, Arthur D.; Harmsen, Stephen C.; Mueller, Charles S.; Boyd, Oliver S.; Luco, Nicolas; Wheeler, Russell L.; Rukstales, Kenneth S.; Haller, Kathleen M.

    2012-01-01

    In this paper, we describe the scientific basis for the source and ground-motion models applied in the 2008 National Seismic Hazard Maps, the development of new products that are used for building design and risk analyses, relationships between the hazard maps and design maps used in building codes, and potential future improvements to the hazard maps.

  13. In-situ microscale through-silicon via strain measurements by synchrotron x-ray microdiffraction exploring the physics behind data interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xi; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332; Thadesar, Paragkumar A.

    2014-09-15

    In-situ microscale thermomechanical strain measurements have been performed in combination with synchrotron x-ray microdiffraction to understand the fundamental cause of failures in microelectronics devices with through-silicon vias. The physics behind the raster scan and data analysis of the measured strain distribution maps is explored utilizing the energies of indexed reflections from the measured data and applying them for beam intensity analysis and effective penetration depth determination. Moreover, a statistical analysis is performed for the beam intensity and strain distributions along the beam penetration path to account for the factors affecting peak search and strain refinement procedure.

  14. Applying a rateless code in content delivery networks

    NASA Astrophysics Data System (ADS)

    Suherman; Zarlis, Muhammad; Parulian Sitorus, Sahat; Al-Akaidi, Marwan

    2017-09-01

    Content delivery networks (CDNs) allow internet providers to locate their services and to map their coverage onto networks without necessarily owning them. CDNs are part of the current internet infrastructure, supporting multi-server applications, especially social media. Various works have been proposed to improve CDN performance. Since accesses to social media servers tend to be short but frequent, adding redundancy to the transmitted packets, so that lost packets do not degrade information integrity, may improve service performance. This paper examines the implementation of a rateless code in the CDN infrastructure. The NS-2 evaluations show that the rateless code is able to reduce packet loss by up to 50%.
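
    A toy illustration of the rateless (fountain) idea: encoded packets are random XOR combinations of source blocks, and any sufficiently large subset of packets allows decoding by iteratively peeling degree-1 packets. This is a generic LT-style sketch with an arbitrary degree distribution, not the specific code evaluated in the paper.

      import random

      def lt_encode(blocks, n_packets, seed=1):
          # Each packet = XOR of a random subset ('degree') of source blocks.
          rng = random.Random(seed)
          k, packets = len(blocks), []
          for _ in range(n_packets):
              d = rng.choice([1, 1, 2, 2, 2, 3, 4])   # toy degree distribution
              idx = set(rng.sample(range(k), d))
              val = 0
              for i in idx:
                  val ^= blocks[i]
              packets.append((idx, val))
          return packets

      def lt_decode(packets, k):
          # Peeling decoder: strip known blocks, then resolve degree-1 packets.
          pkts = [[set(idx), val] for idx, val in packets]
          decoded, progress = {}, True
          while progress and len(decoded) < k:
              progress = False
              for p in pkts:
                  for i in [j for j in p[0] if j in decoded]:
                      p[0].discard(i)
                      p[1] ^= decoded[i]
                  if len(p[0]) == 1:
                      i = p[0].pop()
                      if i not in decoded:
                          decoded[i], progress = p[1], True
          return [decoded.get(i) for i in range(k)]

      rng = random.Random(7)
      blocks = [rng.randrange(256) for _ in range(10)]   # 10 source blocks
      received = [p for p in lt_encode(blocks, 30) if rng.random() > 0.5]
      print(lt_decode(received, 10) == blocks)  # True once enough packets survive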

  15. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE

    PubMed Central

    Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination (ARD) and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. data set, sample high-performance results include AUC_0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD-selected features and AUC_0.632+ = 0.87 with interval [0.817;0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC_0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate capability for the new methods to match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497
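
    A compressed sketch of the reduce-then-classify pipeline using scikit-learn, with t-SNE for the embedding and linear discriminant analysis standing in for the MCMC-BANN (which has no drop-in library equivalent); note that t-SNE learns a per-dataset embedding rather than a reusable transform, so this pattern suits exploration rather than deployment.

      import numpy as np
      from sklearn.manifold import TSNE
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      X = np.random.rand(300, 81)          # stand-in: 81 lesion features per case
      y = np.random.randint(0, 2, 300)     # stand-in benign/malignant labels

      # A 4D embedding as in the paper would need TSNE(..., method='exact')
      Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
      clf = LinearDiscriminantAnalysis().fit(Z, y)
      print(clf.score(Z, y))               # resubstitution accuracy (optimistic)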

  16. Technical Note: Independent component analysis for quality assurance in functional MRI.

    PubMed

    Astrakas, Loukas G; Kallistis, Nikolaos S; Kalef-Ezra, John A

    2016-02-01

    Independent component analysis (ICA) is an established method of analyzing human functional MRI (fMRI) data. Here, an ICA-based fMRI quality control (QC) tool, designed for use with a commercial phantom, was developed and used. In an attempt to assess the performance of the tool relative to preexisting alternatives, it was used seven weeks before and eight weeks after repair of a faulty gradient amplifier of a non-state-of-the-art MRI unit. More specifically, its performance was compared with the AAPM Report No. 100 acceptance testing and quality assurance protocol and with two fMRI QC protocols, proposed by Freidman et al. ["Report on a multicenter fMRI quality assurance protocol," J. Magn. Reson. Imaging 23, 827-839 (2006)] and Stocker et al. ["Automated quality assurance routines for fMRI data applied to a multicenter study," Hum. Brain Mapp. 25, 237-246 (2005)], respectively. The easily developed and applied ICA-based QC protocol provided fMRI QC indices and maps as sensitive to fMRI instabilities as the indices and maps of other established protocols. The ICA fMRI QC indices were highly correlated with indices of other fMRI QC protocols and in some cases theoretically related to them. Three or four independent components with slowly varying time series are detected under normal conditions. ICA applied to phantom measurements is an easy and efficient tool for fMRI QC. Additionally, it can protect against misinterpretation of artifact components as human brain activations. Evaluating fMRI QC indices in the central region of a phantom is not always the optimal choice.

  17. Application of fuzzy logic approach for wind erosion hazard mapping in Laghouat region (Algeria) using remote sensing and GIS

    NASA Astrophysics Data System (ADS)

    Saadoud, Djouher; Hassani, Mohamed; Martin Peinado, Francisco José; Guettouche, Mohamed Saïd

    2018-06-01

    Wind erosion is one of the most serious environmental problems in Algeria, threatening human activities and socio-economic development. The main goal of this study is to apply a fuzzy logic approach to wind erosion sensitivity mapping in the Laghouat region, Algeria. Six causative factors, obtained by applying fuzzy membership functions to each parameter used, are considered: soil, vegetation cover, wind factor, soil dryness, land topography and land cover sensitivity. Different fuzzy operators (AND, OR, SUM, PRODUCT, and GAMMA) are applied to generate the wind-erosion hazard map. Success rate curves reveal that the fuzzy gamma (γ) operator, with γ equal to 0.9, gives the best prediction accuracy, with an area under the curve of 85.2%. The resulting wind-erosion sensitivity map delineates the area into five relative sensitivity classes: very high, high, moderate, low and very low. The result was verified by field measurements and a highly significant chi-square test.
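
    The fuzzy GAMMA operator combines the fuzzy algebraic sum and fuzzy algebraic product; a direct NumPy transcription (membership rasters assumed to be arrays with values in [0, 1]) is:

      import numpy as np

      def fuzzy_gamma(memberships, gamma=0.9):
          # memberships: list of rasters holding fuzzy membership values in [0, 1]
          m = np.stack(memberships)
          fuzzy_sum = 1.0 - np.prod(1.0 - m, axis=0)   # fuzzy algebraic sum
          fuzzy_prod = np.prod(m, axis=0)              # fuzzy algebraic product
          return fuzzy_sum ** gamma * fuzzy_prod ** (1.0 - gamma)

    With gamma = 1 the operator reduces to the fuzzy sum (increasive) and with gamma = 0 to the fuzzy product (decreasive); the value 0.9 found here leans strongly toward the sum.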

  18. Multispectral assessment of skin malformations using a modified video-microscope

    NASA Astrophysics Data System (ADS)

    Bekina, A.; Diebele, I.; Rubins, U.; Zaharans, J.; Derjabo, A.; Spigulis, J.

    2012-10-01

    A simplified method is proposed for alternative clinical diagnostics of skin malformations. A modified digital microscope, additionally equipped with a four-colour LED (450 nm, 545 nm, 660 nm and 940 nm) sequential illumination system, was applied for assessment of skin cancerous lesions and cutaneous inflammations. Multispectral image analysis was performed to map distributions of the skin erythema index, bilirubin index, melanoma/nevus differentiation parameter, and a fluorescence indicator. The skin malformation monitoring has shown that it is possible to differentiate melanoma from other pathologies.

  19. Efficient method for computing the electronic transport properties of a multiterminal system

    NASA Astrophysics Data System (ADS)

    Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio

    2018-04-01

    We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.
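
    As a baseline for what such codes compute, the dense (non-recursive) two-probe Landauer-Büttiker calculation for a clean one-dimensional chain fits in a few lines, using the analytic surface self-energy of a semi-infinite lead; the paper's contribution, the adaptive-partition recursion for multiterminal geometries, is not reproduced here.

      import numpy as np

      def transmission(E, n=20, t=1.0, eta=1e-9):
          z = E + 1j * eta
          # Retarded surface Green's function of a semi-infinite 1D lead
          g_s = (z - np.sqrt(z * z - 4.0 * t * t)) / (2.0 * t * t)
          sigma = t * t * g_s                          # lead self-energy
          H = -t * (np.eye(n, k=1) + np.eye(n, k=-1))  # tight-binding chain
          S_L = np.zeros((n, n), complex); S_L[0, 0] = sigma
          S_R = np.zeros((n, n), complex); S_R[-1, -1] = sigma
          G = np.linalg.inv(z * np.eye(n) - H - S_L - S_R)
          gamma_L = 1j * (S_L - S_L.conj().T)          # lead broadening matrices
          gamma_R = 1j * (S_R - S_R.conj().T)
          return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

      print(transmission(0.5))   # ~1.0 inside the band of a perfect chain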

  20. Performance as a function of ability, resources invested, and strategy used.

    PubMed

    Botella, Juan; Peña, Daniel; Contreras, María José; Shih, Pei-Chun; Santacreu, José

    2009-01-01

    Computerized tasks allow a more fine-grained analysis of the strategy deployed in a task designed to map a specific ability than the usual assessment on the basis of only the level of performance. Manipulations expected to impair performance sometimes do not have that effect, probably because the level of performance alone can confound the assessment of the ability level if researchers ignore the strategy used. In a study with 1,872 participants, the authors applied the Spatial Orientation Dynamic Test-Revised (J. Santacreu, 1999) in single and dual task settings, identifying 3 different strategies. Strategy shifts were associated with the level of performance, as more apt individuals were more likely to shift to better strategies. Ignoring the strategies yields counterintuitive results that cannot be explained by simple, direct relations among the constructs involved.

  1. Issues in testing the new national seismic hazard model for Italy

    NASA Astrophysics Data System (ADS)

    Stein, S.; Peresan, A.; Kossobokov, V. G.; Brooks, E. M.; Spencer, B. D.

    2016-12-01

    It is important to bear in mind that we know little about how well earthquake hazard maps describe the shaking that will actually occur in the future, and that we have no agreed way of assessing how well a map performed in the past and, thus, whether one map performs better than another. Moreover, we should not forget that different maps can be useful for different end users, who may have different cost-and-benefit strategies. Thus, regardless of the specific tests we choose to use, the adopted testing approach should have several key features: We should assess map performance using all the available instrumental, paleoseismological, and historical intensity data. Instrumental data alone span a period much too short to capture the largest earthquakes - and thus strongest shaking - expected from most faults. We should investigate what causes systematic misfit, if any, between the longest record we have - historical intensity data available for the Italian territory from 217 B.C. to 2002 A.D. - and a given hazard map. We should compare how seismic hazard maps have developed over time. How do the most recent maps for Italy compare to earlier ones? It is important to understand local divergences, which show how the models have developed toward the most recent one. The temporal succession of maps is important: we have to learn from previous errors. We should use the many different tests that have been proposed. All are worth trying, because different metrics of performance show different aspects of how a hazard map performs and can be used. We should compare other maps to the ones we are testing. Maps can be made using a wide variety of assumptions, which will lead to different predicted shaking. It is possible that maps derived by other approaches may perform better. Although current Italian codes are based on probabilistic maps, it is important from both a scientific and societal perspective to look at all options, including deterministic scenario-based ones. Comparing what works better, according to different performance measures, will give valuable insight into key map parameters and assumptions. We may well find that different maps perform better in different applications.

  2. Quasi-conformal mapping with genetic algorithms applied to coordinate transformations

    NASA Astrophysics Data System (ADS)

    González-Matesanz, F. J.; Malpica, J. A.

    2006-11-01

    In this paper, piecewise conformal mapping for the transformation of geodetic coordinates is studied. An algorithm, which is an improved version of a previous algorithm published by Lippus [2004a. On some properties of piecewise conformal mappings. Eesti NSV Teaduste Akademmia Toimetised Füüsika-Matemaakika 53, 92-98; 2004b. Transformation of coordinates using piecewise conformal mapping. Journal of Geodesy 78 (1-2), 40] is presented; the improvement comes from using a genetic algorithm to partition the complex plane into convex polygons, whereas the original one did so manually. As a case study, the method is applied to the transformation of the Spanish datum ED50 and ETRS89, and both its advantages and disadvantages are discussed herein.

  3. Applied photo interpretation for airbrush cartography

    NASA Technical Reports Server (NTRS)

    Inge, J. L.; Bridges, P. M.

    1976-01-01

    New techniques of cartographic portrayal have been developed for the compilation of maps of lunar and planetary surfaces. Conventional photo interpretation methods utilizing size, shape, shadow, tone, pattern, and texture are applied to computer processed satellite television images. The variety of the image data allows the illustrator to interpret image details by inter-comparison and intra-comparison of photographs. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The validity of the interpretation process is tested by making a representational drawing by an airbrush portrayal technique. Production controls insure the consistency of a map series. Photo interpretive cartographic portrayal skills are used to prepare two kinds of map series and are adaptable to map products of different kinds and purposes.

  4. The Tofts model in frequency domain: fast and robust determination of pharmacokinetic maps for dynamic contrast enhancement MRI

    NASA Astrophysics Data System (ADS)

    Vajuvalli, Nithin N.; Chikkemenahally, Dharmendra Kumar K.; Nayak, Krupa N.; Bhosale, Manoj G.; Geethanath, Sairam

    2016-12-01

    Dynamic contrast enhancement magnetic resonance imaging (DCE-MRI) is a well-established method for non-invasive detection and therapeutic monitoring of pathologies through administration of an intravenous contrast agent. Quantification of pharmacokinetic (PK) maps can be achieved through application of compartmental models relevant to the pathophysiology of the tissue under interrogation. The determination of PK parameters involves fitting of time-concentration data to these models. In this work, the Tofts model in frequency domain (TM-FD) is applied to a weakly vascularized tissue such as the breast. It is derived as a convolution-free model from the conventional Tofts model in the time domain (TM-TD). This reduces the dimensionality of the curve-fitting problem from two to one. The TM-FD and TM-TD approaches were applied to two kinds of in silico phantoms and six in vivo breast DCE data sets, with and without the addition of noise. The results showed that the computational time taken to estimate PK maps using TM-FD was 16-25% less than with TM-TD. Normalized root mean square error (NRMSE) calculation and Pearson correlation analyses were performed to validate the robustness and accuracy of the TM-FD and TM-TD approaches. These were compared with ground-truth values in the phantom studies for four different temporal resolutions. Results showed that NRMSE values for TM-FD were significantly lower than those of TM-TD, as validated by a paired t-test, along with reduced computational time. This approach therefore enables online evaluation of PK maps by radiologists in a clinical setting, aiding in the evaluation of 3D and/or increased coverage of the tissue of interest.
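
    Because the Tofts impulse response Ktrans*exp(-kep*t) has the closed-form transform Ktrans/(i*omega + kep), the convolution becomes a pointwise product in the frequency domain; a NumPy sketch of the convolution-free forward model (the fit itself would wrap this in a one-dimensional optimizer) is:

      import numpy as np

      def tofts_fd(Cp, Ktrans, kep, dt):
          # Ct = Ktrans * Cp convolved with exp(-kep*t), done in frequency domain.
          m = 2 * len(Cp)                      # zero-pad against circular wrap-around
          w = 2.0 * np.pi * np.fft.fftfreq(m, d=dt)
          H = Ktrans / (1j * w + kep)          # analytic transform of the kernel
          Ct = np.fft.ifft(np.fft.fft(Cp, m) * H).real
          return Ct[:len(Cp)]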

  5. Spectral density mapping at multiple magnetic fields suitable for 13C NMR relaxation studies

    NASA Astrophysics Data System (ADS)

    Kadeřávek, Pavel; Zapletal, Vojtěch; Fiala, Radovan; Srb, Pavel; Padrta, Petr; Přecechtělová, Jana Pavlíková; Šoltésová, Mária; Kowalewski, Jozef; Widmalm, Göran; Chmelík, Josef; Sklenář, Vladimír; Žídek, Lukáš

    2016-05-01

    Standard spectral density mapping protocols, well suited for the analysis of 15N relaxation rates, introduce significant systematic errors when applied to 13C relaxation data, especially if the dynamics is dominated by motions with short correlation times (small molecules, dynamic residues of macromolecules). The possibility of improving the accuracy by employing cross-correlated relaxation rates and measurements taken at several magnetic fields has been examined. A suite of protocols for analyzing such data has been developed and their performance tested. Applicability of the proposed protocols is documented in two case studies: spectral density mapping of a uniformly labeled RNA hairpin and of a selectively labeled disaccharide exhibiting highly anisotropic tumbling. A combination of auto- and cross-correlated relaxation data acquired at three magnetic fields was applied in the former case in order to separate the effects of fast motions and conformational or chemical exchange. An approach using auto-correlated relaxation rates acquired at five magnetic fields, applicable to anisotropically moving molecules, was used in the latter case. The results were compared with a more advanced analysis of data obtained by interpolation of auto-correlated relaxation rates measured at seven magnetic fields, and with spectral density mapping of cross-correlated relaxation rates. The results showed that sufficiently accurate values of auto- and cross-correlated spectral density functions at zero and 13C frequencies can be obtained from data acquired at three magnetic fields for uniformly 13C-labeled molecules with a moderate anisotropy of the rotational diffusion tensor. Analysis of auto-correlated relaxation rates at five magnetic fields represents an alternative for molecules undergoing highly anisotropic motions.

  6. A Tool for Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.; Guzzetti, Fausto

    2014-05-01

    Triggers such as earthquakes or heavy rainfall can result in hundreds to thousands of landslides occurring across a region within a short space of time. These landslides can in turn result in blockages across the road network, impacting how people move about a region. Here, we show the development and application of a semi-stochastic model to simulate how landslides intersect with road networks during a triggered landslide event. This was performed by creating 'synthetic' triggered landslide inventory maps and overlaying these with a road network map to identify where road blockages occur. Our landslide-road model has been applied to two regions: (i) the Collazzone basin (79 km2) in Central Italy, where 422 landslides were triggered by rapid snowmelt in January 1997, and (ii) the Oat Mountain quadrangle (155 km2) in California, USA, where 1,350 landslides were triggered by the Northridge Earthquake (M = 6.7) in January 1994. For both regions, detailed landslide inventory maps for the triggered events were available, in addition to maps of landslide susceptibility and road networks of primary, secondary and tertiary roads. To create 'synthetic' landslide inventory maps, landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL. The number of landslide areas selected was based on the observed density of landslides (number of landslides km-2) in the triggered event inventories. Landslide shapes were approximated as ellipses, where the ratio of the major and minor axes varies with AL. Landslides were then dropped over the region semi-stochastically, conditioned by a landslide susceptibility map, resulting in a synthetic landslide inventory map. The originally available landslide susceptibility maps did not take into account susceptibility changes in the immediate vicinity of roads, so our landslide susceptibility map was adjusted to further reduce the susceptibility near each road based on the road level (primary, secondary, tertiary). For each model run, we superimposed the spatial locations of landslide drops on the road network, and recorded the number, size and location of road blockages, along with landslides within 50 and 100 m of the different road levels. Network analysis tools available in GRASS GIS were also applied to measure the impact upon the road network in terms of connectivity. The model was run 100 times in a Monte-Carlo simulation for each region. Initial results show reasonable agreement between model output and the observed landslide inventories in terms of the number of road blockages. In Collazzone (length of road network = 153 km, landslide density = 5.2 landslides km-2), the median number of modelled road blockages over 100 model runs was 5 (±2.5 standard deviation), compared to the observed number of 5 road blockages in the mapped inventory. In Northridge (length of road network = 780 km, landslide density = 8.7 landslides km-2), the median number of modelled road blockages over 100 model runs was 108 (±17.2 standard deviation), compared to the observed number of 48 road blockages in the mapped inventory. As we progress with model development, we believe this semi-stochastic modelling approach will potentially aid civil protection agencies in exploring different scenarios of potential road network damage resulting from landslide triggering events of different magnitudes.
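
    A condensed sketch of one Monte-Carlo realization of such a model (synthetic grid, circular rather than elliptical landslides, illustrative parameter values) can be written with SciPy's three-parameter inverse-gamma distribution:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_cells, cell_km = 200, 0.05                 # 10 km x 10 km toy region
      suscept = rng.random((n_cells, n_cells))     # stand-in susceptibility map
      road = np.zeros((n_cells, n_cells), bool)
      road[n_cells // 2, :] = True                 # one road crossing the grid

      # Landslide areas (km^2): inverse-gamma gives a power-law tail with an
      # exponential rollover at small areas; parameters here are illustrative.
      n_slides = 400
      areas = stats.invgamma.rvs(1.4, loc=0.0, scale=1e-3,
                                 size=n_slides, random_state=0)

      # Drop centroids with probability proportional to susceptibility
      cells = rng.choice(n_cells * n_cells, size=n_slides,
                         p=(suscept / suscept.sum()).ravel())
      rows, cols = np.unravel_index(cells, suscept.shape)

      radii = np.sqrt(areas / np.pi) / cell_km     # circle equivalent, in cells
      rr, cc = np.ogrid[:n_cells, :n_cells]
      blocked = 0
      for r, c, rad in zip(rows, cols, radii):
          covers = (rr - r) ** 2 + (cc - c) ** 2 <= rad ** 2
          if covers[road].any():
              blocked += 1
      print(blocked, "road blockages in this realization")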

  7. Design and performance of an ultra-high vacuum scanning tunneling microscope operating at dilution refrigerator temperatures and high magnetic fields.

    PubMed

    Misra, S; Zhou, B B; Drozdov, I K; Seo, J; Urban, L; Gyenis, A; Kingsley, S C J; Jones, H; Yazdani, A

    2013-10-01

    We describe the construction and performance of a scanning tunneling microscope capable of taking maps of the tunneling density of states with sub-atomic spatial resolution at dilution refrigerator temperatures and high (14 T) magnetic fields. The fully ultra-high vacuum system features visual access to a two-sample microscope stage at the end of a bottom-loading dilution refrigerator, which facilitates the transfer of in situ prepared tips and samples. The two-sample stage enables location of the best area of the sample under study and extends the experiment lifetime. The successful thermal anchoring of the microscope, described in detail, is confirmed through a base temperature reading of 20 mK, along with a measured electron temperature of 250 mK. Atomically resolved images, along with complementary vibration measurements, are presented to confirm the effectiveness of the vibration isolation scheme in this instrument. Finally, we demonstrate that the microscope is capable of the same level of performance as typical machines with more modest refrigeration by measuring spectroscopic maps at base temperature both at zero field and in an applied magnetic field.

  8. A Comparison of Phasing Algorithms for Trios and Unrelated Individuals

    PubMed Central

    Marchini, Jonathan; Cutler, David; Patterson, Nick; Stephens, Matthew; Eskin, Eleazar; Halperin, Eran; Lin, Shin; Qin, Zhaohui S.; Munro, Heather M.; Abecasis, Gonçalo R.; Donnelly, Peter

    2006-01-01

    Knowledge of haplotype phase is valuable for many analysis methods in the study of disease, population, and evolutionary genetics. Considerable research effort has been devoted to the development of statistical and computational methods that infer haplotype phase from genotype data. Although a substantial number of such methods have been developed, they have focused principally on inference from unrelated individuals, and comparisons between methods have been rather limited. Here, we describe the extension of five leading algorithms for phase inference for handling father-mother-child trios. We performed a comprehensive assessment of the methods applied to both trios and to unrelated individuals, with a focus on genomic-scale problems, using both simulated data and data from the HapMap project. The most accurate algorithm was PHASE (v2.1). For this method, the percentages of genotypes whose phase was incorrectly inferred were 0.12%, 0.05%, and 0.16% for trios from simulated data, HapMap Centre d'Etude du Polymorphisme Humain (CEPH) trios, and HapMap Yoruban trios, respectively, and 5.2% and 5.9% for unrelated individuals in simulated data and the HapMap CEPH data, respectively. The other methods considered in this work had comparable but slightly worse error rates. The error rates for trios are similar to the levels of genotyping error and missing data expected. We thus conclude that all the methods considered will provide highly accurate estimates of haplotypes when applied to trio data sets. Running times differ substantially between methods. Although it is one of the slowest methods, PHASE (v2.1) was used to infer haplotypes for the 1 million–SNP HapMap data set. Finally, we evaluated methods of estimating the value of r2 between a pair of SNPs and concluded that all methods estimated r2 well when the estimated value was ⩾0.8. PMID:16465620

  9. Biased Dropout and Crossmap Dropout: Learning towards effective Dropout regularization in convolutional neural network.

    PubMed

    Poernomo, Alvin; Kang, Dae-Ki

    2018-08-01

    Training a deep neural network with a large number of parameters often leads to overfitting. Recently, Dropout has been introduced as a simple, yet effective regularization approach to combat overfitting in such models. Although Dropout has shown remarkable results in many deep neural network cases, its actual effect on CNNs has not been thoroughly explored. Moreover, training a Dropout model significantly increases training time, as it takes longer to converge than a non-Dropout model with the same architecture. To deal with these issues, we propose Biased Dropout and Crossmap Dropout, two novel extensions of Dropout based on the behavior of hidden units in CNN models. Biased Dropout divides the hidden units in a certain layer into two groups based on their magnitude and applies a different Dropout rate to each group. Hidden units with higher activation values, which contribute more to the network's final performance, are retained with a lower Dropout rate, while units with lower activation values are exposed to a higher Dropout rate to compensate. The second approach is Crossmap Dropout, an extension of regular Dropout in the convolution layer. Feature maps in a convolution layer are strongly correlated with each other, particularly at identical pixel locations in each feature map. Crossmap Dropout tries to maintain this important correlation, yet at the same time break the correlation between adjacent pixels with respect to all feature maps, by applying the same Dropout mask to all feature maps, so that units in equivalent positions in each feature map are either all dropped or all active during training. Our experiments with various benchmark datasets show that our approaches provide better generalization than regular Dropout. Moreover, Biased Dropout converges faster during the training phase, suggesting that assigning noise appropriately to hidden units can lead to effective regularization. Copyright © 2018 Elsevier Ltd. All rights reserved.
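
    Minimal PyTorch sketches of the two ideas as described above; the tensor shapes, the median split, and the example rates are assumptions, since the original work defines its own grouping and rate schedules.

      import torch

      def crossmap_dropout(x, p=0.5, training=True):
          # x: (N, C, H, W). One spatial mask shared across all C feature maps,
          # so a pixel position is dropped or kept identically in every map.
          if not training or p == 0.0:
              return x
          mask = (torch.rand(x.size(0), 1, x.size(2), x.size(3),
                             device=x.device) >= p).float()
          return x * mask / (1.0 - p)            # inverted-dropout scaling

      def biased_dropout(x, p_high_act=0.3, p_low_act=0.7, training=True):
          # x: (N, F). Units above the median |activation| get the lower rate.
          if not training:
              return x
          thresh = x.abs().median(dim=1, keepdim=True).values
          p = torch.where(x.abs() >= thresh,
                          torch.full_like(x, p_high_act),
                          torch.full_like(x, p_low_act))
          mask = (torch.rand_like(x) >= p).float()
          return x * mask / (1.0 - p)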

  10. Asymmetric neighborhood functions accelerate ordering process of self-organizing maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ota, Kaiichiro; Aoki, Takaaki; Kurata, Koji

    2011-02-15

    A self-organizing map (SOM) algorithm can generate a topographic map from a high-dimensional stimulus space to a low-dimensional array of units. Because a topographic map preserves neighborhood relationships between the stimuli, the SOM can be applied to certain types of information processing such as data visualization. During the learning process, however, topological defects frequently emerge in the map. The presence of defects tends to drastically slow down the formation of a globally ordered topographic map. To remove such topological defects, it has been reported that an asymmetric neighborhood function is effective, but only in the simple case of mapping one-dimensional stimuli to a chain of units. In this paper, we demonstrate that even when high-dimensional stimuli are used, the asymmetric neighborhood function is effective for both artificial and real-world data. Our results suggest that applying the asymmetric neighborhood function to the SOM algorithm improves the reliability of the algorithm. In addition, it enables processing of complicated, high-dimensional data by using this algorithm.
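
    One way to render the idea in code is to displace the center of the Gaussian neighborhood away from the best-matching unit, so that updates pull the map preferentially in one direction and topological defects can drift out; the displacement form below is an illustrative choice, not the paper's exact function.

      import numpy as np

      def som_step_asymmetric(W, grid, x, lr=0.1, sigma=1.0, shift=(0.5, 0.0)):
          # W: (units, dim) weight vectors; grid: (units, 2) unit coordinates.
          bmu = np.argmin(((W - x) ** 2).sum(axis=1))
          center = grid[bmu] + np.asarray(shift)  # displaced neighborhood center
          h = np.exp(-((grid - center) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
          W += lr * h[:, None] * (x - W)          # standard SOM weight update
          return W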

  11. Subsite mapping of enzymes. Application of the depolymerase computer model to two alpha-amylases.

    PubMed Central

    Allen, J D; Thoma, J A

    1976-01-01

    In the preceding paper (Allen and Thoma, 1976) we developed a depolymerase computer model, which uses a minimization routine to establish a subsite map for a depolymerase. In the present paper we show how the model is applied to experimental data for two alpha-amylases. Michaelis parameters and bond-cleavage frequencies for substrates of chain lengths up to twelve glucosyl units have been reported for Bacillus amyloliquefaciens, and a subsite map has been proposed for this enzyme [Thoma et al. (1971) J. Biol. Chem. 246, 5621-5635]. By applying the computer model to the experimental data, we have arrived at a ten-subsite map. We find that a significant improvement in this map is achieved by allowing the hydrolytic rate coefficient to vary as a function of the number of occupied subsites comprising the enzyme-binding region. From the bond-cleavage frequencies, the second enzyme is found to have eight subsites. A partial subsite map is arrived at, but the entire binding region cannot be mapped because the Michaelis parameters are complicated by transglycosylation reactions. The hydrolytic rate coefficients for this enzyme are not constant. PMID:999630

  12. Natural texture retrieval based on perceptual similarity measurement

    NASA Astrophysics Data System (ADS)

    Gao, Ying; Dong, Junyu; Lou, Jianwen; Qi, Lin; Liu, Jun

    2018-04-01

    A typical texture retrieval system performs feature comparison and might not be able to make human-like judgments of image similarity. Meanwhile, it is commonly known that perceptual texture similarity is difficult to describe with traditional image features. In this paper, we propose a new texture retrieval scheme based on perceptual texture similarity. The key of the proposed scheme is that prediction of perceptual similarity is performed by learning a non-linear mapping from the image feature space to the perceptual texture space using Random Forest. We test the method on a natural texture dataset and apply it to a new wallpapers dataset. Experimental results demonstrate that the proposed texture retrieval scheme with perceptual similarity improves retrieval performance over traditional image features.
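
    The learned mapping can be prototyped directly with scikit-learn by regressing human similarity ratings on pairwise feature differences; the data layout and the absolute-difference pairing are assumptions for illustration.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      feats = np.random.rand(100, 64)               # stand-in image features
      i, j = np.random.randint(0, 100, (2, 500))    # 500 rated texture pairs
      human_sim = np.random.rand(500)               # stand-in perceptual ratings

      X = np.abs(feats[i] - feats[j])               # pairwise feature differences
      model = RandomForestRegressor(n_estimators=200,
                                    random_state=0).fit(X, human_sim)

      # Retrieval: rank database textures by predicted similarity to a query
      query = feats[0]
      scores = model.predict(np.abs(feats - query))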

  13. MOEMS Fabry-Pérot interferometer with point-anchored Si-air mirrors for middle infrared

    NASA Astrophysics Data System (ADS)

    Tuohiniemi, Mikko; Näsilä, Antti; Akujärvi, Altti; Blomberg, Martti

    2014-09-01

    We studied how a micromachined Fabry-Pérot interferometer, realized with wide point-anchored Si/air-gap reflectors, performs in the middle infrared. A computational analysis of the anchor mechanical behavior is also presented. Compared with solid-film reflectors, this technology features better index contrast, which enables a wider stop band and potentially higher resolution. In this work, we investigate whether the performance is improved according to the index-contrast benefit, or whether the mechanical differences play a role. For comparison, we manufactured and characterized another design that applies solid-film reflectors of Si/SiO2 structure. These data are exploited as a reference for a middle-infrared interferometer and as a template for mapping the performance from the simulation results to the measured data. The novel Si/air-gap device was realized as a non-tunable proof-of-concept version. The measured data are mapped into an estimate of the achievable performance of a tunable version. We present the measured transmission and resolution data and compare the simulation models that reproduce the data. The prediction for the tunable middle-infrared Si/air-gap device is then presented. The results indicate that the interferometer’s resolution is expected to improve twofold, with a much wider stop band compared with the prior art.

  14. Folding Digital Mapping into a Traditional Field Camp Program

    NASA Astrophysics Data System (ADS)

    Kelley, D. F.

    2011-12-01

    Louisiana State University runs a field camp with a permanent fixed base, which has operated continuously since 1928 in the Front Range just south of Colorado Springs, CO. The field camp program, which offers a 6-credit-hour course in Field Geology, follows a very traditional structure. The first week is spent collecting data for the construction of a detailed stratigraphic column of the local geology. The second week is spent learning the skills of geologic mapping, while the third applies these skills to a more geologically complicated mapping area. The final three weeks of the field camp program are spent studying and mapping igneous and metamorphic rocks as well as conducting a regional stratigraphic correlation exercise. Historically, there has been a lack of technology in this program: all mapping was done in the field without digital equipment, and all products were made in the office without computers. In the summer of 2011, the use of GPS units and GIS software was introduced to the program. The exercise chosen for this incorporation of technology was one in which metamorphic rocks are mapped within Golden Gate Canyon State Park in Colorado. The same mapping exercise was carried out during the 2010 field camp session with no GPS or GIS use. The students in both groups had similar geologic backgrounds, similar grade point averages, and similar overall performances at field camp. However, the group that used digital mapping techniques mapped the field area more quickly and reportedly with greater ease. Additionally, the students who used GPS and GIS included more detailed rock descriptions with their final maps, indicating that they spent less time in the field focusing on mapping contacts between units. The outcome was a better overall product. The use of GPS units also indirectly caused the students to produce better field maps. In addition to greater ease in mapping, the use of GIS software to create maps was rewarding to the students and gave them mapping experience that is in line with industry standards.

  15. SNP discovery by high-throughput sequencing in soybean

    PubMed Central

    2010-01-01

    Background With the advance of new massively parallel genotyping technologies, quantitative trait loci (QTL) fine mapping and map-based cloning become more achievable in identifying genes for important and complex traits. Development of high-density genetic markers in the QTL regions of specific mapping populations is essential for fine-mapping and map-based cloning of economically important genes. Single nucleotide polymorphisms (SNPs) are the most abundant form of genetic variation existing between the diverse genotypes that are usually used for QTL mapping studies. The massively parallel sequencing technologies (Roche GS/454, Illumina GA/Solexa, and ABI/SOLiD) have been widely applied to identify genome-wide sequence variations. However, it still remains unclear whether sequence data at a low sequencing depth are enough to detect the variations existing in any QTL regions of interest in a crop genome, and how to prepare sequencing samples for a complex genome such as soybean. Therefore, with the aims of identifying SNP markers in a cost-effective way for fine-mapping several QTL regions, and testing the validation rate of the putative SNPs predicted with Solexa short sequence reads at a low sequencing depth, we evaluated a pooled DNA fragment reduced representation library and SNP detection methods applied to short read sequences generated by Solexa high-throughput sequencing technology. Results A total of 39,022 putative SNPs were identified by the Illumina/Solexa sequencing system using a reduced representation DNA library of two parental lines of a mapping population. The validation rates of these putative SNPs predicted with low and high stringency were 72% and 85%, respectively. One hundred sixty-four SNP markers resulting from the validation of putative SNPs were selectively chosen to target a known QTL, thereby increasing the marker density of the targeted region to one marker per 42 kbp. Conclusions We have demonstrated how to quickly identify large numbers of SNPs for fine mapping of QTL regions by applying massively parallel sequencing combined with genome complexity reduction techniques. This SNP discovery approach is more efficient for targeting multiple QTL regions in the same genetic population and can be applied to other crops. PMID:20701770

  16. Land cover maps, BVOC emissions, and SOA burden in a global aerosol-climate model

    NASA Astrophysics Data System (ADS)

    Stanelle, Tanja; Henrot, Alexandra; Bey, Isaelle

    2015-04-01

    It has been reported that different land cover representations influence the emission of biogenic volatile organic compounds (BVOCs) (e.g. Guenther et al., 2006), but the land cover forcing used in model simulations is quite uncertain (e.g. Jung et al., 2006). As a consequence, the simulated emission of BVOCs depends on the applied land cover map. To test the sensitivity of global and regional estimates of BVOC emissions to the applied land cover map, we applied 3 different land cover maps in our global aerosol-climate model ECHAM6-HAM2.2. We found a high sensitivity for tropical regions. BVOCs are a very prominent precursor for the production of Secondary Organic Aerosols (SOA). Therefore, the sensitivity of BVOC emissions to land cover maps impacts the SOA burden in the atmosphere. With our model system we are able to quantify that impact. References: Guenther et al. (2006), Estimates of global terrestrial isoprene emissions using MEGAN, Atmos. Chem. Phys., 6, 3181-3210, doi:10.5194/acp-6-3181-2006. Jung et al. (2006), Exploiting synergies of global land cover products for carbon cycle modeling, Rem. Sens. Environm., 101, 534-553, doi:10.1016/j.rse.2006.01.020.

  17. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    PubMed

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  18. Sensor-Motor Maps for Describing Linear Reflex Composition in Hopping.

    PubMed

    Schumacher, Christian; Seyfarth, André

    2017-01-01

    In human and animal motor control, several sensory organs contribute to a network of sensory pathways that modulate motion, depending on the task and the phase of execution, to generate daily motor tasks such as locomotion. To better understand the individual and joint contribution of reflex pathways in locomotor tasks, we developed a neuromuscular model that describes hopping movements. In this model, we consider the influence of proprioceptive length (LFB), velocity (VFB), and force feedback (FFB) pathways of a leg extensor muscle on hopping stability, performance, and efficiency (metabolic effort). Therefore, we explore the space describing the blending of the monosynaptic reflex pathway gains. We call this reflex parameter space a sensor-motor map. The sensor-motor maps are used to visualize the functional contribution of sensory pathways in multisensory integration. We further evaluate the robustness of these sensor-motor maps to changes in tendon elasticity, body mass, segment length, and ground compliance. The model predicted that different reflex pathway compositions selectively optimize specific hopping characteristics (e.g., performance and efficiency). Both FFB and LFB were pathways that enable hopping. FFB resulted in the largest hopping heights, LFB enhanced hopping efficiency, and VFB had the ability to disable hopping. For the tested case, the topology of the sensor-motor maps as well as the location of functionally optimal compositions were invariant to changes in system design (tendon elasticity, body mass, segment length) or environmental parameters (ground compliance). Our results indicate that different feedback pathway compositions may serve different functional roles. The topology of the sensor-motor map was predicted to be robust against changes in the mechanical system design, indicating that the reflex system can use different morphological designs, which does not apply to most robotic systems (for which the control often follows a specific design). Consequently, variations in body mechanics are permitted with consistent compositions of sensory feedback pathways. Given the variability in human body morphology, such variations are highly relevant for human motor control.

  19. Mapping ecological risks with a portfolio-based technique: incorporating uncertainty and decision-making preferences

    Treesearch

    Denys Yemshanov; Frank H. Koch; Mark Ducey; Klaus Koehler

    2013-01-01

    Geographic mapping of risks is a useful analytical step in ecological risk assessments and in particular, in analyses aimed to estimate risks associated with introductions of invasive organisms. In this paper, we approach invasive species risk mapping as a portfolio allocation problem and apply techniques from decision theory to build an invasion risk map that combines...

  20. Method for the visualization of landform by mapping using low altitude UAV application

    NASA Astrophysics Data System (ADS)

    Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William

    2018-05-01

    Unmanned Aerial Vehicles (UAV) and digital photogrammetry are evolving rapidly in mapping technology. The significance of and necessity for digital landform mapping have grown over the years. In this study, a mapping workflow is applied to obtain two different input data sets, the orthophoto and the DSM. A fine flying technology is used to capture Low Altitude Aerial Photography (LAAP). A low-altitude UAV (drone) with a fixed advanced camera was utilized for imagery, while digital photogrammetric processing using PhotoScan was applied for cartographic information collection. Data processing through photogrammetry and orthomosaic generation constitutes the main application. High imagery quality is essential for the effectiveness and quality of typical mapping outputs such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM), and orthoimages. The accuracy of Ground Control Points (GCP), the flight altitude, and the resolution of the camera are essential for a good-quality DEM and orthophoto.

  1. Basin boundaries and focal points in a map coming from Bairstow's method.

    PubMed

    Gardini, Laura; Bischi, Gian-Italo; Fournier-Prunaret, Daniele

    1999-06-01

    This paper is devoted to the study of the global dynamical properties of a two-dimensional noninvertible map, with a denominator which can vanish, obtained by applying Bairstow's method to a cubic polynomial. It is shown that the complicated structure of the basins of attraction of the fixed points is due to the existence of singularities such as sets of nondefinition, focal points, and prefocal curves, which are specific to maps with a vanishing denominator and have recently been introduced in the literature. Some global bifurcations that change the qualitative structure of the basin boundaries are explained in terms of contacts among these singularities. The techniques used in this paper highlight some new dynamic behaviors and bifurcations that are peculiar to maps with a denominator; hence they can be applied to the analysis of other classes of maps arising from iterative algorithms (based on Newton's method, or others). (c) 1999 American Institute of Physics.
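
    For readers who want to experiment, here is a minimal sketch of the map in question, reconstructed from the standard Bairstow scheme for a cubic (notation ours, not necessarily the authors'):

        import numpy as np

        # For a cubic p(x) = x^3 + a*x^2 + b*x + c, Bairstow's method seeks a
        # quadratic factor x^2 + u*x + v. Synthetic division leaves a linear
        # remainder r(u, v)*x + s(u, v); one Newton step on (r, s) defines the
        # two-dimensional map (u, v) -> F(u, v) studied above.
        def bairstow_map(u, v, a, b, c):
            r = b - v - u * (a - u)              # remainder coefficients
            s = c - v * (a - u)
            det = (2 * u - a) * (u - a) + v      # the vanishing denominator
            du = (r * (u - a) + s) / det         # Newton update J^{-1} [r, s]
            dv = (-r * v + s * (2 * u - a)) / det
            return u - du, v - dv

        # Iterating from different initial (u, v) reveals the basin structure;
        # for x^3 + x^2 + x - 1 this orbit converges to a quadratic factor.
        u, v = 1.0, 1.0
        for _ in range(20):
            u, v = bairstow_map(u, v, a=1.0, b=1.0, c=-1.0)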

  2. Cloud GIS Based Watershed Management

    NASA Astrophysics Data System (ADS)

    Bediroğlu, G.; Colak, H. E.

    2017-11-01

    In this study, we developed a Cloud GIS based watershed management system using a cloud computing architecture. Cloud GIS is used as SaaS (Software as a Service) and DaaS (Data as a Service). We ran GIS analyses on the cloud to test the SaaS component and deployed GIS datasets on the cloud to test the DaaS component. We used a hybrid cloud computing model, making use of ready web-based mapping services hosted on the cloud (world topology, satellite imagery). After creating geodatabases covering hydrology (rivers, lakes), soil maps, climate maps, rain maps, geology, and land use, we uploaded them to the system. The watershed of the study area was delineated on the cloud using the ready-hosted topology maps. After uploading all the datasets to the system, we applied various GIS analyses and queries. Results showed that Cloud GIS technology brings speed and efficiency to watershed management studies. Moreover, the system can easily be implemented for similar land analysis and management studies.

  3. National Park Service Vegetation Mapping Inventory Program: Appalachian National Scenic Trail vegetation mapping project

    USGS Publications Warehouse

    Hop, Kevin D.; Strassman, Andrew C.; Hall, Mark; Menard, Shannon; Largay, Ery; Sattler, Stephanie; Hoy, Erin E.; Ruhser, Janis; Hlavacek, Enrika; Dieck, Jennifer

    2017-01-01

    The National Park Service (NPS) Vegetation Mapping Inventory (VMI) Program classifies, describes, and maps existing vegetation of national park units for the NPS Natural Resource Inventory and Monitoring (I&M) Program. The NPS VMI Program is managed by the NPS I&M Division and provides baseline vegetation information to the NPS Natural Resource I&M Program. The U.S. Geological Survey Upper Midwest Environmental Sciences Center, NatureServe, NPS Northeast Temperate Network, and NPS Appalachian National Scenic Trail (APPA) have completed vegetation classification and mapping of APPA for the NPS VMI Program. Mappers, ecologists, and botanists collaborated to affirm vegetation types within the U.S. National Vegetation Classification (USNVC) of APPA and to determine how best to map the vegetation types by using aerial imagery. Analyses of data from 1,618 vegetation plots were used to describe USNVC associations of APPA. Data from 289 verification sites were collected to test the field key to vegetation associations and the application of vegetation associations to a sample set of map polygons. Data from 269 validation sites were collected to assess vegetation mapping prior to submitting the vegetation map for accuracy assessment (AA). Data from 3,265 AA sites were collected, of which 3,204 were used to test accuracy of the vegetation map layer. Collectively, these datasets affirmed 280 USNVC associations for the APPA vegetation mapping project. To map the vegetation and land cover of APPA, 169 map classes were developed. The 169 map classes consist of 150 that represent natural (including ruderal) vegetation types in the USNVC, 11 that represent cultural (agricultural and developed) vegetation types in the USNVC, 5 that represent natural landscapes with catastrophic disturbance or some other modification to natural vegetation preventing accurate classification in the USNVC, and 3 that represent nonvegetated water (non-USNVC). Features were interpreted from viewing 4-band digital aerial imagery using digital onscreen three-dimensional stereoscopic workflow systems in geographic information systems (GIS). (Digital aerial imagery was collected each fall during 2009–11 to capture leaf-phenology change of hardwood trees across the latitudinal range of APPA.) The interpreted data were digitally and spatially referenced, thus making the spatial-database layers usable in GIS. Polygon units were mapped to either a 0.5-hectare (ha) or 0.25-ha minimum mapping unit, depending on vegetation type or scenario; however, polygon units were mapped to 0.1 ha for alpine vegetation. A geodatabase containing various feature-class layers and tables provides locations and supporting data for USNVC vegetation types (vegetation map layer), vegetation plots, verification sites, validation sites, AA sites, project boundary extent and zones, and aerial image centers and flight lines. The feature-class layer and related tables of the vegetation map layer provide 30,395 polygons of detailed attribute data covering 110,919.7 ha, with an average polygon size of 3.6 ha; the vegetation map coincides closely with the administrative boundary for APPA. Summary reports generated from the vegetation map layer of the map classes representing USNVC natural (including ruderal) vegetation types apply to 28,242 polygons (92.9% of polygons) and cover 106,413.0 ha (95.9%) of the map extent for APPA. The map layer indicates APPA to be 92.4% forest and woodland (102,480.8 ha), 1.7% shrubland (1,866.3 ha), and 1.8% herbaceous cover (2,065.9 ha). Map classes representing park-special vegetation (undefined in the USNVC) apply to 58 polygons (0.2% of polygons) and cover 404.3 ha (0.4%) of the map extent. Map classes representing USNVC cultural types apply to 1,777 polygons (5.8% of polygons) and cover 2,516.3 ha (2.3%) of the map extent. Map classes representing nonvegetated water (non-USNVC) apply to 332 polygons (1.1% of polygons) and cover 1,586.2 ha (1.4%) of the map extent.

  4. The Geometric Nature of the Flaschka Transformation

    NASA Astrophysics Data System (ADS)

    Bloch, Anthony M.; Gay-Balmaz, François; Ratiu, Tudor S.

    2017-06-01

    We show that the Flaschka map, originally introduced to analyze the dynamics of the integrable Toda lattice system, is the inverse of a momentum map. We discuss the geometric setting of the map and apply it to the generalized Toda lattice systems on semisimple Lie algebras, the rigid body system on Toda orbits, and to coadjoint orbits of semidirect product groups. In addition, we develop an infinite-dimensional generalization for the group of area-preserving diffeomorphisms of the annulus and apply it to the analysis of the dispersionless Toda lattice PDE and the solvable rigid body PDE.
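
    For reference, in one common convention (the paper's normalization may differ), the Flaschka transformation for the open Toda lattice and the equations of motion it induces read:

        \[
          a_i = \tfrac{1}{2}\, e^{(q_i - q_{i+1})/2}, \qquad
          b_i = -\tfrac{1}{2}\, p_i,
        \]
        \[
          \dot{a}_i = a_i\,(b_{i+1} - b_i), \qquad
          \dot{b}_i = 2\,\bigl(a_i^{2} - a_{i-1}^{2}\bigr).
        \]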

  5. Mind map learning for advanced engineering study: case study in system dynamics

    NASA Astrophysics Data System (ADS)

    Woradechjumroen, Denchai

    2018-01-01

    System Dynamics (SD) is one of the subjects used in learning automatic control systems in the dynamics and control field. Mathematical modeling and solving skills for engineering systems are expected outcomes of the course, which can be further used to efficiently study control systems and mechanical vibration; however, the fundamentals of SD require strong backgrounds in dynamics and differential equations, which suits students in governmental universities who have strong skills in mathematics and science. In private universities, students are weak in the above subjects since they obtained high vocational certificates from technical colleges or polytechnic schools, which emphasize practical learning content. To enhance their learning and improve their backgrounds, this paper applies mind-map-based problem-based learning to relate the essential mathematical and physical equations. Taking advantage of mind maps, each student is assigned to design individual mind maps for self-learning development after they attend the class and learn the overall picture of each chapter from the class instructor. Four problem-based mind-map assignments are given to each student. Each assignment is evaluated via mid-term and final examinations, which are issued in terms of learning concepts and applications. In the method testing, thirty students are tested and evaluated against their prior learning backgrounds. The results show that well-designed mind maps can improve learning performance based on outcome evaluation. In particular, mind maps can significantly reduce the time spent reviewing mathematics and physics in SD.

  6. Probabilistic mapping of flood-induced backscatter changes in SAR time series

    NASA Astrophysics Data System (ADS)

    Schlaffer, Stefan; Chini, Marco; Giustarini, Laura; Matgen, Patrick

    2017-04-01

    The information content of flood extent maps can be increased considerably by including information on the uncertainty of the flood area delineation. This additional information can be of benefit in flood forecasting and monitoring. Furthermore, flood probability maps can be converted to binary maps showing flooded and non-flooded areas by applying a threshold probability value pF = 0.5. In this study, a probabilistic change detection approach for flood mapping based on synthetic aperture radar (SAR) time series is proposed. For this purpose, conditional probability density functions (PDFs) for land and open water surfaces were estimated from ENVISAT ASAR Wide Swath (WS) time series containing >600 images using a reference mask of permanent water bodies. A pixel-wise harmonic model was used to account for seasonality in backscatter from land areas caused by soil moisture and vegetation dynamics. The approach was evaluated for a large-scale flood event along the River Severn, United Kingdom. The retrieved flood probability maps were compared to a reference flood mask derived from high-resolution aerial imagery by means of reliability diagrams. The obtained performance measures indicate both high reliability and confidence although there was a slight under-estimation of the flood extent, which may in part be attributed to topographically induced radar shadows along the edges of the floodplain. Furthermore, the results highlight the importance of local incidence angle for the separability between flooded and non-flooded areas as specular reflection properties of open water surfaces increase with a more oblique viewing geometry.
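
    A minimal sketch of the probabilistic classification step (illustrative Gaussian PDFs and invented parameter values; the study estimates the conditional PDFs empirically from the >600-image ENVISAT ASAR time series and a harmonic seasonal model):

        import numpy as np
        from scipy.stats import norm

        def flood_probability(sigma0, land_mu, land_sd, water_mu, water_sd,
                              prior=0.5):
            p_land = norm.pdf(sigma0, land_mu, land_sd)    # p(sigma0 | land)
            p_water = norm.pdf(sigma0, water_mu, water_sd) # p(sigma0 | water)
            return p_water * prior / (p_water * prior + p_land * (1 - prior))

        sigma0 = np.array([-8.0, -18.0])                   # toy backscatter, dB
        p_f = flood_probability(sigma0, land_mu=-9.0, land_sd=2.0,
                                water_mu=-17.0, water_sd=1.5)
        flood_mask = p_f >= 0.5          # binary flooded / non-flooded map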

  7. A complete mass spectrometric map for the analysis of the yeast proteome and its application to quantitative trait analysis

    PubMed Central

    Picotti, Paola; Clement-Ziza, Mathieu; Lam, Henry; Campbell, David S.; Schmidt, Alexander; Deutsch, Eric W.; Röst, Hannes; Sun, Zhi; Rinner, Oliver; Reiter, Lukas; Shen, Qin; Michaelson, Jacob J.; Frei, Andreas; Alberti, Simon; Kusebauch, Ulrike; Wollscheid, Bernd; Moritz, Robert; Beyer, Andreas; Aebersold, Ruedi

    2013-01-01

    Complete reference maps or datasets, like the genomic map of an organism, are highly beneficial tools for biological and biomedical research. Attempts to generate such reference datasets for a proteome have so far failed to reach complete proteome coverage, with saturation apparent at approximately two thirds of the proteomes tested, even for the most thoroughly characterized proteomes. Here, we used a strategy based on high-throughput peptide synthesis and mass spectrometry to generate a close to complete reference map (97% of the genome-predicted proteins) of the S. cerevisiae proteome. We generated two versions of this mass spectrometric map, one supporting discovery-driven (shotgun) and the other hypothesis-driven (targeted) proteomic measurements. The two versions of the map therefore constitute a complete set of proteomic assays to support most studies performed with contemporary proteomic technologies. The reference libraries can be browsed via a web-based repository and associated navigation tools. To demonstrate the utility of the reference libraries, we applied them to a protein quantitative trait locus (pQTL) analysis, which requires measurement of the same peptides over a large number of samples with high precision. Protein measurements over a set of 78 S. cerevisiae strains revealed a complex relationship between independent genetic loci, impacting the levels of related proteins. Our results suggest that selective pressure favors the acquisition of sets of polymorphisms that maintain the stoichiometry of protein complexes and pathways. PMID:23334424

  8. Incorporating Aptamers in the Multiple Analyte Profiling Assays (xMAP): Detection of C-Reactive Protein.

    PubMed

    Bernard, Elyse D; Nguyen, Kathy C; DeRosa, Maria C; Tayabali, Azam F; Aranda-Rodriguez, Rocio

    2017-01-01

    Aptamers are short oligonucleotide sequences used in detection systems because of their high-affinity binding to a variety of macromolecules. With the introduction of aptamers over 25 years ago came the exploration of their use in many different applications as a substitute for antibodies. Aptamers have several advantages; they are easy to synthesize, can bind to analytes for which it is difficult to obtain antibodies, and in some cases bind better than antibodies. As such, aptamer applications have significantly expanded as an adjunct to a variety of different immunoassay designs. The Multiple-Analyte Profiling (xMAP) technology developed by Luminex Corporation commonly uses antibodies for the detection of analytes in small sample volumes through the use of fluorescently coded microbeads. This technology permits the simultaneous detection of multiple analytes in each sample tested and hence could be applied in many research fields. Although little work has been performed adapting this technology for use with aptamers, optimizing aptamer-based xMAP assays would dramatically increase the versatility of analyte detection. We report herein on the development of an xMAP bead-based aptamer/antibody sandwich assay for a biomarker of inflammation (C-reactive protein, or CRP). Protocols for the coupling of aptamers to xMAP beads, validation of coupling, and for an aptamer/antibody sandwich-type assay for CRP are detailed. The optimized conditions, protocols, and findings described in this research could serve as a starting point for the development of new aptamer-based xMAP assays.

  9. Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data

    NASA Astrophysics Data System (ADS)

    Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.

    2016-06-01

    Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
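
    A simplified sketch of the per-point NDVI correction step (field names and the threshold are hypothetical; the paper's region growing is omitted):

        import numpy as np

        # Each LiDAR point has been associated with the red and near-infrared
        # reflectance of the co-registered satellite pixel it falls in.
        def ndvi(nir, red):
            return (nir - red) / (nir + red + 1e-9)

        def relabel_misclassified_buildings(labels, nir, red, threshold=0.4):
            """Points classified as 'building' but with high NDVI are
            relabeled as 'tree' before the NDVI-aware region growing."""
            v = ndvi(nir, red)
            out = labels.copy()
            out[(labels == "building") & (v > threshold)] = "tree"
            return out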

  10. Thermal Effects of Lunar Surface Roughness: Application for the 2008 LRO Diviner Lunar Radiometer Experiment

    NASA Astrophysics Data System (ADS)

    Greenhagen, B.; Paige, D. A.

    2007-12-01

    It is well known that surface roughness affects spectral slope in the infrared. For the first time, we applied a three-dimensional thermal model to a high-resolution lunar topography map to study the effects of surface roughness on lunar thermal emission spectra. We applied a numerical instrument model of the upcoming Diviner Lunar Radiometer Experiment (DLRE) to simulate the expected instrument response to surface roughness variations. DLRE will launch in late 2008 onboard the Lunar Reconnaissance Orbiter (LRO). DLRE is a nine-channel radiometer designed to study the thermal and petrologic properties of the lunar surface. DLRE has two solar channels (0.3-3.0 μm, high/low sensitivity), three mid-infrared petrology channels (7.55-8.05, 8.10-8.40, and 8.40-8.70 μm), and four thermal infrared channels (12.5-25, 25-50, 50-100, and 100-200 μm). The topographic data we used were selected from a USGS Hadley Rille DEM (from Apollo 15 Panoramic Camera data) with 10 m resolution (M. Rosiek, personal communication). To remove large-scale topographic features, we applied a 200 x 200 pixel boxcar high-pass filter to a relatively flat portion of the DEM. This "flattened" surface roughness map served as the basis for much of this study. We also examined the unaltered topography. Surface temperatures were calculated using a three-dimensional ray-tracing thermal model. We created temperature maps at numerous solar incidence angles with nadir viewing geometry. A DLRE instrument model, which includes filter spectral responses and detector fields of view, was applied to the high-resolution temperature maps. We studied both the thermal and petrologic effects of surface roughness. For the thermal study, the output of the optics model is a filter-specific temperature, scaled to a DLRE footprint of < 500 m. For the petrologic study, we examined the effect of the surface-roughness-induced spectral slope on DLRE's ability to locate the Christiansen Feature, which is a good compositional indicator. With multiple thermal infrared channels over a wide spectral range, DLRE will be well suited to measure temperature variations due to surface roughness. Any necessary compensation (e.g., correction for spectral slope) to the mid-infrared petrology data will be performed.
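
    The flattening step corresponds to a standard boxcar high-pass filter; a minimal sketch (file name hypothetical):

        import numpy as np
        from scipy.ndimage import uniform_filter

        # A 200 x 200 pixel boxcar high-pass filter removes large-scale
        # topography from the 10 m/pixel DEM, leaving surface roughness.
        dem = np.load("hadley_rille_dem.npy").astype(float)
        lowpass = uniform_filter(dem, size=200)   # 200 x 200 boxcar mean
        roughness = dem - lowpass                 # high-pass residual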

  11. Evaluation of Water Retention in Lumbar Intervertebral Disks Before and After Exercise Stress With T2 Mapping.

    PubMed

    Chokan, Kou; Murakami, Hideki; Endo, Hirooki; Mimata, Yoshikuni; Yamabe, Daisuke; Tsukimura, Itsuko; Oikawa, Ryosuke; Doita, Minoru

    2016-04-01

    T2 mapping was used to quantify moisture content of the lumbar spinal disk nucleus pulposus (NP) and annulus fibrosus before and after exercise stress, and after rest, to evaluate the intervertebral disk function. To clarify water retention in intervertebral disks of the lumbar vertebrae by performing magnetic resonance imaging before and after exercise stress and quantitatively measuring changes in moisture content of intervertebral disks with T2 mapping. To date, a few case studies describe functional evaluation of articular cartilage with T2 mapping; however, T2 mapping to the functional evaluation of intervertebral disks has rarely been applied. Using T2 mapping might help detect changes in the moisture content of intervertebral disks, including articular cartilage, before and after exercise stress, thus enabling the evaluation of changes in water retention shock absorber function. Subjects, comprising 40 healthy individuals (males: 26, females: 14), underwent magnetic resonance imaging T2 mapping before and after exercise stress and after rest. Image J image analysis software was then used to set regions of interest in the obtained images of the anterior annulus fibrosus, posterior annulus fibrosus, and NP. T2 values were measured and compared according to upper vertebrae position and degeneration grade. T2 values significantly decreased in the NP after exercise stress and significantly increased after rest. According to upper vertebrae position, in all of the upper vertebrae positions, T2 values for the NP significantly decreased after exercise stress and significantly increased after rest. According to the degeneration grade, in the NP of grade 1 and 2 cases, T2 values significantly decreased after exercise stress and significantly increased after rest. T2 mapping could be used to not only diagnose the degree of degeneration but also evaluate intervertebral disk function. 3.

  12. Landslide susceptibility mapping using a bivariate statistical model in a tropical hilly area of southeastern Brazil

    NASA Astrophysics Data System (ADS)

    Araújo, J. P. C.; DA Silva, L. M.; Dourado, F. A. D.; Fernandes, N.

    2015-12-01

    Landslides are the most damaging natural hazard in the mountainous region of Rio de Janeiro State in Brazil, responsible for thousands of deaths and significant financial and environmental losses. However, this region currently has few landslide susceptibility maps implemented on an adequate scale. Identification of landslide-susceptible areas is fundamental for successful land use planning and management practices to reduce risk. This paper applied Bayes' theorem through the weights-of-evidence (WoE) method, using 8 landslide-related factors in a geographic information system (GIS), for landslide susceptibility mapping. A total of 378 landslide locations, triggered by the January 2011 rainfall event, were identified and mapped in a selected basin in the city of Nova Friburgo. The landslide scars were divided into two subsets: a training and a validation subset. A chi-square test was performed on the 8 WoE-weighted factors to indicate which variables are conditionally independent of each other and can therefore be used in the final map. Finally, the maps of weighted factors were summed to construct the landslide susceptibility map, which was validated with the validation landslide subset. According to the results, slope, aspect, and contributing area showed the highest positive spatial correlation with landslides. In the landslide susceptibility map, 21% of the area presented very low and low susceptibility and contained 3% of the validation scars, 41% presented medium susceptibility with 22% of the validation scars, and 38% presented high and very high susceptibility with 75% of the validation scars. The very high susceptibility class accounts for 16% of the basin area and contains 54% of all scars. The approach used in this study can be considered very useful since 75% of the area affected by landslides was included in the high and very high susceptibility classes.
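
    As an illustration of the WoE weighting (our sketch, not the authors' code), the positive and negative weights for one binary factor map are log-likelihood ratios:

        import numpy as np

        # `factor` and `landslides` are boolean rasters of equal shape:
        # factor class present/absent, and training landslide cell yes/no.
        def woe_weights(factor, landslides):
            b_l = np.sum(factor & landslides)     # factor present, landslide
            b_nl = np.sum(factor & ~landslides)   # factor present, stable
            nb_l = np.sum(~factor & landslides)
            nb_nl = np.sum(~factor & ~landslides)
            n_l, n_nl = b_l + nb_l, b_nl + nb_nl
            w_plus = np.log((b_l / n_l) / (b_nl / n_nl))
            w_minus = np.log((nb_l / n_l) / (nb_nl / n_nl))
            return w_plus, w_minus                # contrast C = w_plus - w_minus

    Summing the weighted factor rasters cell by cell then yields the susceptibility index that is classed into the final map.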

  13. Clinical high-resolution mapping of the proteoglycan-bound water fraction in articular cartilage of the human knee joint.

    PubMed

    Bouhrara, Mustapha; Reiter, David A; Sexton, Kyle W; Bergeron, Christopher M; Zukley, Linda M; Spencer, Richard G

    2017-11-01

    We applied our recently introduced Bayesian analytic method to achieve clinically-feasible in-vivo mapping of the proteoglycan water fraction (PgWF) of human knee cartilage with improved spatial resolution and stability as compared to existing methods. Multicomponent driven equilibrium single-pulse observation of T 1 and T 2 (mcDESPOT) datasets were acquired from the knees of two healthy young subjects and one older subject with previous knee injury. Each dataset was processed using Bayesian Monte Carlo (BMC) analysis incorporating a two-component tissue model. We assessed the performance and reproducibility of BMC and of the conventional analysis of stochastic region contraction (SRC) in the estimation of PgWF. Stability of the BMC analysis of PgWF was tested by comparing independent high-resolution (HR) datasets from each of the two young subjects. Unlike SRC, the BMC-derived maps from the two HR datasets were essentially identical. Furthermore, SRC maps showed substantial random variation in estimated PgWF, and mean values that differed from those obtained using BMC. In addition, PgWF maps derived from conventional low-resolution (LR) datasets exhibited partial volume and magnetic susceptibility effects. These artifacts were absent in HR PgWF images. Finally, our analysis showed regional variation in PgWF estimates, and substantially higher values in the younger subjects as compared to the older subject. BMC-mcDESPOT permits HR in-vivo mapping of PgWF in human knee cartilage in a clinically-feasible acquisition time. HR mapping reduces the impact of partial volume and magnetic susceptibility artifacts compared to LR mapping. Finally, BMC-mcDESPOT demonstrated excellent reproducibility in the determination of PgWF. Published by Elsevier Inc.

  14. Multilayer apparent magnetization mapping approach and its application in mineral exploration

    NASA Astrophysics Data System (ADS)

    Guo, L.; Meng, X.; Chen, Z.

    2016-12-01

    Apparent magnetization mapping is a technique to estimate the magnetization distribution in the subsurface from observed magnetic data. It has been applied to geologic mapping and mineral exploration for decades. Apparent magnetization mapping usually models the magnetic layer as a collection of vertical, juxtaposed prisms in both horizontal directions, whose top and bottom surfaces are assumed to be horizontal or of variable depth, and then inverts or deconvolves the magnetic anomalies in the space or frequency domain to determine the magnetization of each prism. The conventional mapping approaches usually assume that magnetic sources contain no remanent magnetization. However, such assumptions are not always valid in mineral exploration of metallic ores. In this case, neglecting the remanence will result in large geologic deviations or the occurrence of negative magnetization. One alternative strategy is to transform the observed magnetic anomalies into quantities that are insensitive or only weakly sensitive to the remanence and to subsequently perform inversion on these quantities, without needing any a priori information about remanent magnetization. Such quantities include the amplitude of the magnetic total field anomaly (AMA) and the normalized magnetic source strength (NSS). Here, we present a space-domain inversion approach for multilayer magnetization mapping based on the AMA for reducing the effects of remanence. In the real world, magnetization usually varies vertically in the subsurface. If we use only a one-layer model for mapping, the result is simply the vertical superposition of different magnetization distributions. Hence, a multilayer model for mapping is a more realistic approach. We tested the approach on real data from a metallic deposit area in North China. The results demonstrate that our approach is feasible and recovers a plausible magnetization distribution from the top layer to the bottom layer of the subsurface.

  15. Factors influencing delivered mean airway pressure during nasal CPAP with the RAM cannula.

    PubMed

    Gerdes, Jeffrey S; Sivieri, Emidio M; Abbasi, Soraya

    2016-01-01

    To measure mean airway pressure (MAP) delivered through the RAM Cannula® when used with a ventilator in CPAP mode as a function of percent nares occlusion in a simulated nasal interface/test lung model and to compare the results to MAPs using a nasal continuous positive airway pressure (NCPAP) interface with nares fully occluded. An artificial airway model was connected to a spontaneous breathing lung model in which MAP was measured at set NCPAP levels between 4 and 8 cmH2O provided by a Dräger Evita XL® ventilator and delivered through three sizes of RAM cannulae. Measurements were performed with varying leakage at the nasal interface by decreasing occlusion from 100% to 29%, half-way prong insertion, and simulated mouth leakage. Comparison measurements were made using the Dräger BabyFlow® NCPAP interface with a full nasal seal. With simulated mouth closed, the Dräger interface delivered MAPs within 0.5 cmH2O of set CPAP levels. For the RAM cannula, with 60-80% nares occlusion, overall delivered MAPs were 60 ± 17% less than set CPAP levels (P < 0.001). Further, MAP decreased progressively with decreasing percent nares occlusion. The simulated open mouth condition resulted in significantly lower MAPs, to <1.7 cmH2O. The one-half prong insertion depth condition, with closed mouth, yielded MAPs approximately 35 ± 9% less than full insertion pressures (P < 0.001). In our bench tests, the RAM interface connected to a ventilator in NCPAP mode failed to deliver set CPAP levels when applied using the manufacturer-recommended 60-80% nares occlusion, even with closed mouth and full nasal prong insertion conditions. © 2015 Wiley Periodicals, Inc.

  16. Inversion of Acoustic and Electromagnetic Recordings for Mapping Current Flow in Lightning Strikes

    NASA Astrophysics Data System (ADS)

    Anderson, J.; Johnson, J.; Arechiga, R. O.; Thomas, R. J.

    2012-12-01

    Acoustic recordings can be used to map current-carrying conduits in lightning strikes. Unlike stepped leaders, whose very high frequency (VHF) radio emissions have short (meter-scale) wavelengths and can be located by lightning-mapping arrays, current pulses emit longer (kilometer-scale) waves and cannot be mapped precisely by electromagnetic observations alone. While current pulses are constrained to conductive channels created by stepped leaders, these leaders often branch as they propagate, and most branches fail to carry current. Here, we present a method to use thunder recordings to map current pulses, and we apply it to acoustic and VHF data recorded in 2009 in the Magdalena mountains in central New Mexico, USA. Thunder is produced by rapid heating and expansion of the atmosphere along conductive channels in response to current flow, and therefore can be used to recover the geometry of the current-carrying channel. Toward this goal, we use VHF pulse maps to identify candidate conductive channels where we treat each channel as a superposition of finely-spaced acoustic point sources. We apply ray tracing in variable atmospheric structures to forward model the thunder that our microphone network would record for each candidate channel. Because multiple channels could potentially carry current, a non-linear inversion is performed to determine the acoustic source strength of each channel. For each combination of acoustic source strengths, synthetic thunder is modeled as a superposition of thunder signals produced by each channel, and a power envelope of this stack is then calculated. The inversion iteratively minimizes the misfit between power envelopes of recorded and modeled thunder. Because the atmospheric sound speed structure through which the waves propagate during these events is unknown, we repeat the procedure on many plausible atmospheres to find an optimal fit. We then determine the candidate channel, or channels, that minimizes residuals between synthetic and acoustic recordings. We demonstrate the usefulness of this method on both intracloud and cloud-to-ground strikes, and discuss factors affecting our ability to replicate recorded thunder.
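
    Conceptually, once the candidate envelopes have been modeled, estimating the per-channel source strengths resembles a non-negative least-squares fit; a toy sketch (synthetic Gaussian envelopes stand in for modeled thunder, and the search over plausible atmospheres is omitted):

        import numpy as np
        from scipy.optimize import nnls

        # Each column of E is the modeled thunder power envelope of one
        # candidate VHF channel; obs is the recorded envelope at a microphone.
        t = np.linspace(0.0, 10.0, 500)
        env_ch1 = np.exp(-((t - 3.0) ** 2))
        env_ch2 = np.exp(-((t - 5.0) ** 2) / 2.0)
        obs = 2.0 * env_ch1 + 0.5 * env_ch2 + 0.01 * np.random.rand(t.size)

        E = np.column_stack([env_ch1, env_ch2])
        strengths, residual = nnls(E, obs)   # non-negative source strengths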

  17. Genotyping by Sequencing in Almond: SNP Discovery, Linkage Mapping, and Marker Design

    PubMed Central

    Goonetilleke, Shashi N.; March, Timothy J.; Wirthensohn, Michelle G.; Arús, Pere; Walker, Amanda R.; Mather, Diane E.

    2017-01-01

    In crop plant genetics, linkage maps provide the basis for the mapping of loci that affect important traits and for the selection of markers to be applied in crop improvement. In outcrossing species such as almond (Prunus dulcis Mill. D. A. Webb), application of a double pseudotestcross mapping approach to the F1 progeny of a biparental cross leads to the construction of a linkage map for each parent. Here, we report on the application of genotyping by sequencing to discover and map single nucleotide polymorphisms in the almond cultivars “Nonpareil” and “Lauranne.” Allele-specific marker assays were developed for 309 tag pairs. Application of these assays to 231 Nonpareil × Lauranne F1 progeny provided robust linkage maps for each parent. Analysis of phenotypic data for shell hardness demonstrated the utility of these maps for quantitative trait locus mapping. Comparison of these maps to the peach genome assembly confirmed high synteny and collinearity between the peach and almond genomes. The marker assays were applied to progeny from several other Nonpareil crosses, providing the basis for a composite linkage map of Nonpareil. Applications of the assays to a panel of almond clones and a panel of rootstocks used for almond production demonstrated the broad applicability of the markers and provide subsets of markers that could be used to discriminate among accessions. The sequence-based linkage maps and single nucleotide polymorphism assays presented here could be useful resources for the genetic analysis and genetic improvement of almond. PMID:29141988

  18. Performance analysis of different database in new internet mapping system

    NASA Astrophysics Data System (ADS)

    Yao, Xing; Su, Wei; Gao, Shuai

    2017-03-01

    In the mapping system of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. In order to better cope with large numbers of mapping-entry update and query requests, the mapping system of the New Internet must use a high-performance database. In this paper, we focus on the performance of Redis, SQLite, and MySQL, three typical databases, and the results show that mapping systems based on different databases can adapt to different needs according to the actual situation.
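
    A toy benchmark in the spirit of this comparison (our sketch, SQLite only; the same insert/query loop can be pointed at Redis via redis-py's set/get or at MySQL for a three-way comparison):

        import sqlite3, time

        # Time bulk insertion and sampled lookup of AID -> RID mapping entries.
        N = 100_000
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE mapping (aid TEXT PRIMARY KEY, rid TEXT)")

        t0 = time.perf_counter()
        db.executemany("INSERT INTO mapping VALUES (?, ?)",
                       ((f"aid{i}", f"rid{i % 1024}") for i in range(N)))
        db.commit()
        t1 = time.perf_counter()
        for i in range(0, N, 97):   # sample of query requests
            db.execute("SELECT rid FROM mapping WHERE aid = ?",
                       (f"aid{i}",)).fetchone()
        t2 = time.perf_counter()
        print(f"insert {t1 - t0:.2f}s, query {t2 - t1:.2f}s")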

  19. A study on aircraft map display location and orientation. [effects of map display location on manual piloting performance

    NASA Technical Reports Server (NTRS)

    Baty, D. L.; Wempe, T. E.; Huff, E. M.

    1973-01-01

    Six airline pilots participated in a fixed-base simulator study to determine the effects of two Horizontal Situation Display (HSD/map) panel locations relative to the Vertical Situation Display (VSD), and of three map orientations on manual piloting performance. Pilot comments and opinions were formally obtained. Significant performance differences were found between wind conditions, and among pilots, but not between map locations and orientations. The results also illustrate the potential tracking accuracy of such a display. Recommendations concerning display location and map orientation are made.

  20. Wavelength resolved neutron transmission analysis to identify single crystal particles in historical metallurgy

    NASA Astrophysics Data System (ADS)

    Barzagli, E.; Grazzi, F.; Salvemini, F.; Scherillo, A.; Sato, H.; Shinohara, T.; Kamiyama, T.; Kiyanagi, Y.; Tremsin, A.; Zoppi, Marco

    2014-07-01

    The phase composition and microstructure of four ferrous Japanese arrows of the Edo period (17th-19th century) have been determined through two complementary neutron techniques: position-sensitive wavelength-resolved neutron transmission analysis (PS-WRNTA) and time-of-flight neutron diffraction (ToF-ND). The standard ToF-ND technique was applied using the INES diffractometer at the ISIS pulsed neutron source in the UK, while the innovative PS-WRNTA measurements were performed at the J-PARC neutron source on the BL-10 NOBORU beamline using a high-spatial-, high-time-resolution neutron imaging detector. With ToF-ND we were able to obtain information about the quantitative distribution of the metal and non-metal phases, the texture level, the strain level, and the domain size of each of the samples, which are important parameters for assessing the technological level of Japanese weapon-making. Starting from this base of data, the more complex PS-WRNTA was applied to the same samples. This experimental technique exploits the presence of so-called Bragg edges in the time-of-flight spectrum of neutrons transmitted through crystalline materials to map the microstructural properties of samples. The two techniques are non-invasive and can easily be applied in archaeometry for accurate microstructure mapping of metal and ceramic artifacts.
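
    The Bragg-edge condition that PS-WRNTA exploits is a standard result: in transmission, coherent scattering from lattice planes {hkl} switches off once the neutron wavelength exceeds twice the plane spacing, producing a sharp edge at

        \[
          \lambda_{hkl} = 2\, d_{hkl},
        \]

    so the positions and shapes of the edges in the wavelength-resolved transmission spectrum encode phase content and microstructure pixel by pixel.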

  1. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.

  2. A statistical approach for validating eSOTER and digital soil maps in front of traditional soil maps

    NASA Astrophysics Data System (ADS)

    Bock, Michael; Baritz, Rainer; Köthe, Rüdiger; Melms, Stephan; Günther, Susann

    2015-04-01

    During the European research project eSOTER, three different Digital Soil Maps (DSM) were developed for the pilot area Chemnitz 1:250,000 (FP7 eSOTER project, grant agreement nr. 211578). The core task of the project was to revise the SOTER method for the interpretation of soil and terrain data. It was one of the working hypotheses that eSOTER does not only provide terrain data with typical soil profiles, but that the new products actually perform like a conceptual soil map. The three eSOTER maps for the pilot area differed considerably in spatial representation and content of soil classes. In this study we compare the three eSOTER maps against existing reconnaissance soil maps, keeping in mind that traditional soil maps have many subjective issues and intended bias regarding the overestimation and emphasis of certain features. Hence, a true validation of the proper representation of modeled soil maps is hardly possible; rather, a statistical comparison between modeled and empirical approaches is possible. If eSOTER data represent conceptual soil maps, then different eSOTER, DSM, and conventional maps from various sources and different regions could be harmonized towards consistent new data sets for large areas, including the whole European continent. One of the eSOTER maps was developed closely following the traditional SOTER method: terrain classification data (derived from the SRTM DEM) were combined with lithology data (a re-interpreted geological map); the corresponding terrain units were then extended with soil information: a very dense regional soil profile data set was used to define soil mapping units based on a statistical grouping of terrain units. The second map is a pure DSM map using continuous terrain parameters instead of terrain classification; radiospectrometric data were used to supplement parent material information from geology maps. The classification method Random Forest was used. The third approach predicts soil diagnostic properties based on covariates similar to DSM practices; in addition, multi-temporal MODIS data were used; the resulting soil map is the product of these diagnostic layers, producing a map of soil reference groups (classified according to WRB). Because the third approach was applied to a larger test area in central Europe and, compared to the first two approaches, worked with coarser input data, comparability is only partly fulfilled. To evaluate the usability of the three eSOTER maps, and to make a comparison among them, traditional soil maps 1:200,000 and 1:50,000 were used as reference data sets. Three statistical methods were applied: (i) in a moving window, the distribution of the soil classes of each DSM product was compared to that of the soil maps by calculating the corrected coefficient of contingency; (ii) the predictive power of each of the eSOTER maps was determined; and (iii) the degree of consistency was derived. The latter is based on a weighting of the match of occurring class combinations via expert knowledge and recalculating the proportions of map appearance with these weights. To re-check the validation results, a field study by local soil experts was conducted. The results show clearly that the first eSOTER approach, based on the terrain classification and reinterpreted parent material information, has the greatest similarity with traditional soil maps. The spatial differentiation offered by such an approach is well suited to serve as a conceptual soil map. Therefore, eSOTER can be a tool for soil mappers to generate conceptual soil maps in a faster and more consistent way. This conclusion is at least valid for overview scales such as 1:250,000.
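
    One common definition of the corrected coefficient of contingency used in step (i) (the abstract does not spell out its exact variant): with chi-square computed from the r x c table of co-occurring class pairs in a window of n cells, and k = min(r, c),

        \[
          C = \sqrt{\frac{\chi^{2}}{n + \chi^{2}}}, \qquad
          C_{\mathrm{corr}} = C \Big/ \sqrt{\frac{k - 1}{k}},
        \]

    so that the corrected coefficient approaches 1 for perfectly associated maps regardless of table size.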

  3. Integration of adaptive guided filtering, deep feature learning, and edge-detection techniques for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing

    2017-11-01

    The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is further applied to the HSI to extract the edges of the first principal component after dimension reduction. Moreover, a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, a majority voting mechanism is implemented to generate the final output. Extensive experiments showed that the proposed method achieves competitive performance compared with several traditional approaches.
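
    A simplified sketch of the final region-voting step (our paraphrase with hypothetical inputs: an integer per-pixel classification map and a boolean Prewitt edge image; the guided filter and autoencoder stages are omitted):

        import numpy as np
        from scipy import ndimage

        def region_majority_vote(class_map, edges):
            """Grow edge-free regions and assign each region the majority
            label of its pixels in the initial classification map."""
            regions, n = ndimage.label(~edges)    # connected edge-free regions
            out = class_map.copy()
            for r in range(1, n + 1):
                mask = regions == r
                labels, counts = np.unique(class_map[mask], return_counts=True)
                out[mask] = labels[np.argmax(counts)]   # majority label wins
            return out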

  4. Validation of a standardized mapping system of the hip joint for radial MRA sequencing.

    PubMed

    Klenke, Frank M; Hoffmann, Daniel B; Cross, Brian J; Siebenrock, Klaus A

    2015-03-01

    Intraarticular gadolinium-enhanced magnetic resonance arthrography (MRA) is commonly applied to characterize morphological disorders of the hip. However, the reproducibility of retrieving anatomic landmarks on MRA scans and their correlation with intraarticular pathologies is unknown. A precise mapping system for the exact localization of hip pathomorphologies with radial MRA sequences is lacking. Therefore, the purpose of the study was the establishment and validation of a reproducible mapping system for radial sequences of hip MRA. Sixty-nine consecutive intraarticular gadolinium-enhanced hip MRAs were evaluated. Radial sequencing consisted of 14 cuts orientated along the axis of the femoral neck. Three orthopedic surgeons read the radial sequences independently. Each MRI was read twice with a minimum interval of 7 days from the first reading. The intra- and inter-observer reliability of the mapping procedure was determined. A clockwise system for hip MRA was established. The teardrop figure served to determine the 6 o'clock position of the acetabulum; the center of the greater trochanter served to determine the 12 o'clock position of the femoral head-neck junction. The intra- and inter-observer ICCs to retrieve the correct 6/12 o'clock positions were 0.906-0.996 and 0.978-0.988, respectively. The established mapping system for radial sequences of hip joint MRA is reproducible and easy to perform.

  5. A Computational Solution to Automatically Map Metabolite Libraries in the Context of Genome Scale Metabolic Networks.

    PubMed

    Merlet, Benjamin; Paulhe, Nils; Vinson, Florence; Frainay, Clément; Chazalviel, Maxime; Poupin, Nathalie; Gloaguen, Yoann; Giacomoni, Franck; Jourdan, Fabien

    2016-01-01

    This article describes a generic programmatic method for mapping chemical compound libraries on organism-specific metabolic networks from various databases (KEGG, BioCyc) and flat file formats (SBML and Matlab files). We show how this pipeline was successfully applied to decipher the coverage of chemical libraries set up by two metabolomics facilities, MetaboHub (French National Infrastructure for Metabolomics and Fluxomics) and Glasgow Polyomics (GP), on the metabolic networks available in the MetExplore web server. The present generic protocol is designed to formalize and reduce the volume of information transfer between the library and the network database. Matching of metabolites between libraries and metabolic networks is based on InChIs or InChIKeys and therefore requires that these identifiers are specified in both libraries and networks. In addition to providing coverage statistics, this pipeline also allows the visualization of mapping results in the context of metabolic networks. In order to achieve this goal, we tackled issues of programmatic interaction between two servers, improvement of metabolite annotation in metabolic networks, and automatic loading of a mapping into the genome-scale metabolic network analysis tool MetExplore. It is important to note that this mapping can also be performed on a single organism or a selection of organisms of interest and is thus not limited to large facilities.
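
    The matching rule itself is simple; a minimal illustration (compound names, identifiers, and network IDs invented for the example):

        # Compounds are matched between a facility library and a metabolic
        # network solely on their InChIKeys.
        library = {"GLUCOSE": "WQZGKKKJIJFFOK-GASJEMHNSA-N",
                   "PYRUVATE": "LCTONWCANYUPML-UHFFFAOYSA-M"}
        network = {"WQZGKKKJIJFFOK-GASJEMHNSA-N": "M_glc__D_c"}

        mapped = {name: network[ik]            # library name -> network metabolite
                  for name, ik in library.items() if ik in network}
        coverage = len(mapped) / len(library)  # the coverage statistic above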

  6. Application of Raytracing Through the High Resolution Numerical Weather Model HIRLAM for the Analysis of European VLBI

    NASA Technical Reports Server (NTRS)

    Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco

    2010-01-01

    An important limitation on the precision of the results obtained by space geodetic techniques like VLBI and GPS is the tropospheric delay caused by the neutral atmosphere, see e.g. [1]. In recent years numerical weather models (NWM) have been applied to improve the mapping functions which are used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited-area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial (0.2 deg. x 0.2 deg.) and high temporal resolution (three hours in prediction mode).
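
    For context, the conventional model that the raytraced slant delays replace or refine expresses the slant delay at elevation angle epsilon through zenith delays and mapping functions (notation ours):

        \[
          \Delta L(\varepsilon) =
              m_h(\varepsilon)\, \Delta L_{h}^{z}
            + m_w(\varepsilon)\, \Delta L_{w}^{z},
        \]

    with hydrostatic (h) and wet (w) components; raytracing through HIRLAM instead yields the slant delay directly along each observed direction.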

  7. Imaging Electric Properties of Biological Tissues by RF Field Mapping in MRI

    PubMed Central

    Zhang, Xiaotong; Zhu, Shanan; He, Bin

    2010-01-01

    The electric properties (EPs) of biological tissue, i.e., the electric conductivity and permittivity, can provide important information in the diagnosis of various diseases. The EPs also play an important role in specific absorption rate (SAR) calculation, a major concern in high-field Magnetic Resonance Imaging (MRI), as well as in non-medical areas such as wireless telecommunications. The high-field MRI system is accompanied by significant wave propagation effects, and the radio frequency (RF) radiation is dependent on the EPs of biological tissue. Based on the measurement of the active transverse magnetic component of the applied RF field (known as the B1-mapping technique), we propose a dual-excitation algorithm, which uses two sets of measured B1 data to noninvasively reconstruct the electric properties of biological tissues. The Finite Element Method (FEM) was utilized in three-dimensional (3D) modeling and B1 field calculation. A series of computer simulations was conducted to evaluate the feasibility and performance of the proposed method on a 3D head model within a transverse electromagnetic (TEM) coil and a birdcage (BC) coil. Using a TEM coil, in the noise-free case, the reconstructed EP distribution of tissues in the brain has relative errors of 12% ∼ 28% and correlation coefficients greater than 0.91. Compared with other B1-mapping-based reconstruction algorithms, our approach provides superior performance without the need for iterative computations. The present simulation results suggest that good reconstruction of electric properties from B1 mapping can be achieved. PMID:20129847
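
    For orientation, a textbook simplification (not the dual-excitation algorithm itself): in a locally homogeneous region the time-harmonic fields obey a Helmholtz equation, so the complex permittivity can be read off the measured transmit field as

        \[
          \tilde{\varepsilon}(\mathbf{r})
            = -\,\frac{\nabla^{2} B_{1}^{+}(\mathbf{r})}
                      {\mu_{0}\, \omega^{2}\, B_{1}^{+}(\mathbf{r})},
          \qquad
          \tilde{\varepsilon} = \varepsilon - i\,\frac{\sigma}{\omega},
        \]

    from which conductivity and permittivity follow; the dual-excitation algorithm instead combines two measured B1 datasets to reconstruct the EPs.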

  8. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensors: an active trinocular vision system and a passive stereo vision system. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  9. Apriori Versions Based on MapReduce for Mining Frequent Patterns on Big Data.

    PubMed

    Luna, Jose Maria; Padillo, Francisco; Pechenizkiy, Mykola; Ventura, Sebastian

    2017-09-27

    Pattern mining is one of the most important tasks for extracting meaningful and useful information from raw data. This task aims to extract item-sets that represent any type of homogeneity and regularity in data. Although many efficient algorithms have been developed in this regard, the growing interest in data has caused the performance of existing pattern mining techniques to drop. The goal of this paper is to propose new efficient pattern mining algorithms to work on big data. To this aim, a series of algorithms based on the MapReduce framework and the Hadoop open-source implementation have been proposed. The proposed algorithms can be divided into three main groups. First, two algorithms [Apriori MapReduce (AprioriMR) and iterative AprioriMR] with no pruning strategy are proposed, which extract any existing item-set in data. Second, two algorithms (space pruning AprioriMR and top AprioriMR) that prune the search space by means of the well-known anti-monotone property are proposed. Finally, a last algorithm (maximal AprioriMR) is also proposed for mining condensed representations of frequent patterns. To test the performance of the proposed algorithms, a varied collection of big data datasets has been considered, comprising up to 3 · 10¹⁸ transactions and more than 5 million distinct single items. The experimental stage includes comparisons against highly efficient and well-known pattern mining algorithms. Results reveal the value of applying MapReduce versions when complex problems are considered, and also the unsuitability of this paradigm when dealing with small data.
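
    A conceptual sketch of the map/reduce split behind such algorithms (our illustration, not the paper's Hadoop code): mappers emit (candidate item-set, 1) pairs over their share of the transactions; the reducer sums the counts and applies the minimum-support threshold.

        from itertools import combinations
        from collections import Counter

        def mapper(transactions, k):
            # Emit a (k-itemset, 1) pair for every candidate in each transaction.
            for t in transactions:
                for itemset in combinations(sorted(t), k):
                    yield itemset, 1

        def reducer(pairs, min_support):
            # Sum counts per itemset and keep the frequent ones.
            counts = Counter()
            for itemset, one in pairs:
                counts[itemset] += one
            return {i: c for i, c in counts.items() if c >= min_support}

        data = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "c", "d"}]
        frequent = reducer(mapper(data, k=2), min_support=2)
        # {('a', 'c'): 3, ('b', 'c'): 2}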

  10. Dynamic prescription maps for site-specific variable rate irrigation of cotton

    USDA-ARS?s Scientific Manuscript database

    A prescription map is a set of instructions that controls a variable rate irrigation (VRI) system. These maps, which may be based on prior yield, soil texture, topography, or soil electrical conductivity data, are often manually applied at the beginning of an irrigation season and remain static. The...

  11. A Round-Efficient Authenticated Key Agreement Scheme Based on Extended Chaotic Maps for Group Cloud Meeting

    PubMed Central

    Lee, Tian-Fu; Wang, Zeng-Bo

    2017-01-01

    Security is a critical issue for business purposes. For example, a cloud meeting must provide strong security to maintain communication privacy. Considering the cloud meeting scenario, we apply an extended chaotic map to present a passwordless group authenticated key agreement scheme, termed Passwordless Group Authentication Key Agreement (PL-GAKA). PL-GAKA improves the computational efficiency of the simple group password-based authenticated key agreement (SGPAKE) proposed by Lee et al. in terms of computing the session key. Since the extended chaotic map has a security level equivalent to that of the Diffie–Hellman key exchange applied by SGPAKE, the security of PL-GAKA is not sacrificed for the improved computational efficiency. Moreover, PL-GAKA is a passwordless scheme, so password maintenance is unnecessary. Short-term authentication is considered, and a fresh session key is generated dynamically in each cloud meeting, making communication more secure than in related protocols. In our analysis, we first prove that each meeting member obtains the correct information during the meeting. We then analyze common security issues for the proposed PL-GAKA in terms of session key security, mutual authentication, perfect forward secrecy, and data integrity. Moreover, we demonstrate that communication in PL-GAKA remains secure under replay attacks, impersonation attacks, privileged insider attacks, and stolen-verifier attacks. Finally, an overall comparison shows the performance of PL-GAKA relative to SGPAKE and related solutions. PMID:29207509
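    Extended chaotic maps in this literature are typically Chebyshev polynomials evaluated modulo a large prime, whose semigroup property T_a(T_b(x)) = T_ab(x) supports a Diffie–Hellman-style exchange. A minimal Python sketch of that underlying primitive follows (toy parameters and a linear-time recurrence for clarity; this is the building block, not the PL-GAKA protocol itself):

```python
def cheb(n, x, p):
    """T_n(x) mod p via the recurrence T_k = 2*x*T_{k-1} - T_{k-2}."""
    t0, t1 = 1, x % p
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, (2 * x * t1 - t0) % p
    return t1

# Toy public parameters (illustrative only; real schemes use large primes
# and fast evaluation, e.g. by matrix exponentiation).
p, x = 10007, 1234
a, b = 4321, 8765              # Alice's and Bob's secret indices
A = cheb(a, x, p)              # Alice sends T_a(x) to Bob
B = cheb(b, x, p)              # Bob sends T_b(x) to Alice
# Semigroup property: T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)), the shared key.
assert cheb(a, B, p) == cheb(b, A, p)
```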

  12. Mapping lava morphology of the Galapagos Spreading Center at 92°W: fuzzy logic provides a classification of high-resolution bathymetry and backscatter

    NASA Astrophysics Data System (ADS)

    McClinton, J. T.; White, S. M.; Sinton, J. M.; Rubin, K. H.; Bowles, J. A.

    2010-12-01

    Differences in axial lava morphology along the Galapagos Spreading Center (GSC) can indicate variations in magma supply and emplacement dynamics due to the influence of the adjacent Galapagos hot spot. Unfortunately, the ability to discriminate fine-scale lava morphology has historically been limited to observations of the small coverage areas of towed camera surveys and submersible operations. This research presents a neuro-fuzzy approach to automated seafloor classification using spatially coincident, high-resolution bathymetry and backscatter data. The classification method implements a Sugeno-type fuzzy inference system trained by a multi-layered adaptive neural network and is capable of rapidly classifying seafloor morphology based on attributes of surface geometry and texture. The system has been applied to the 92°W segment of the western GSC in order to quantify coverage areas and distributions of pillow, lobate, and sheet lava morphology. An accuracy assessment has been performed on the classification results. The resulting classified maps provide a high-resolution view of GSC axial morphology and indicate the study area terrain is approximately 40% pillow flows, 40% lobate and sheet flows, and 10% fissured or faulted area, with about 10% of the study area unclassifiable. Fine-scale features such as eruptive fissures, tumuli, and individual pillowed lava flow fronts are also visible. Although this system has been applied to lava morphology, its design and implementation are applicable to other undersea mapping applications.
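    For readers unfamiliar with Sugeno-type inference, the sketch below shows the bare mechanics on two made-up terrain attributes: Gaussian memberships per rule, min-combination of rule antecedents, and a firing-strength-weighted average of crisp rule outputs. All membership parameters and rules are invented for illustration and are not the trained classifier from this study.

```python
import numpy as np

def gauss_mf(x, mean, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def sugeno_classify(slope, backscatter):
    """Zero-order Sugeno inference on two terrain attributes (toy rules)."""
    rules = [
        # (slope membership, backscatter membership, crisp class consequent)
        (gauss_mf(slope, 25.0, 8.0), gauss_mf(backscatter, -20.0, 5.0), 0.0),  # pillow
        (gauss_mf(slope,  8.0, 5.0), gauss_mf(backscatter, -25.0, 5.0), 1.0),  # lobate
        (gauss_mf(slope,  2.0, 2.0), gauss_mf(backscatter, -15.0, 4.0), 2.0),  # sheet
    ]
    w = np.array([min(ms, mb) for ms, mb, _ in rules])  # rule firing strengths
    z = np.array([c for _, _, c in rules])              # rule consequents
    return float(np.dot(w, z) / w.sum())                # weighted-average output

print(round(sugeno_classify(slope=10.0, backscatter=-24.0)))  # -> 1 (lobate)
```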

  13. Case-based fracture image retrieval.

    PubMed

    Zhou, Xin; Stern, Richard; Müller, Henning

    2012-05-01

    Case-based fracture image retrieval can assist surgeons in decisions regarding new cases by supplying visually similar past cases. This tool may guide fracture fixation and management through comparison of long-term outcomes in similar cases. A fracture image database collected over 10 years at the orthopedic service of the University Hospitals of Geneva was used. This database contains 2,690 fracture cases associated with 43 classes (based on the AO/OTA classification). A case-based retrieval engine was developed and evaluated using retrieval precision as a performance metric. Only cases in the same class as the query case are considered relevant. The scale-invariant feature transform (SIFT) is used for image analysis. Performance was evaluated in terms of mean average precision (MAP) and early precision (P10, P30). Retrieval results produced with the GNU image finding tool (GIFT) were used as a baseline. Two sampling strategies were evaluated: one used a dense 40 × 40 pixel grid sampling, and the second used the standard SIFT features. Based on dense pixel grid sampling, three unsupervised feature selection strategies were introduced to further improve retrieval performance. With dense pixel grid sampling, the image is divided into 1,600 (40 × 40) square blocks. The goal is to emphasize the salient regions (blocks) and ignore irrelevant regions. Regions are considered important when a high variance of the visual features is found. The first strategy calculates the variance of all descriptors over the global database. The second strategy calculates the variance of all descriptors for each case. The third strategy first performs thumbnail image clustering and then calculates the variance for each cluster. Finally, a fusion between the SIFT-based system and GIFT is performed. A first comparison of sampling strategies using SIFT features shows that dense sampling on a pixel grid (MAP = 0.18) outperformed the SIFT detector-based sampling approach (MAP = 0.10). In a second step, the three unsupervised feature selection strategies were evaluated. A grid parameter search was applied to optimize parameters for feature selection and clustering. Results show that using half of the regions (700 or 800) yields the best performance for all three strategies. Increasing the number of clusters can also improve retrieval performance. The SIFT descriptor variance in each case gave the best indication of region saliency (MAP = 0.23), better than the other two strategies (MAP = 0.20 and 0.21). Combining GIFT (MAP = 0.23) and the best SIFT strategy (MAP = 0.23) produced significantly better results (MAP = 0.27) than either system alone. A case-based fracture retrieval engine was developed and is available for online demonstration. SIFT is used to extract local features, and three feature selection strategies were introduced and evaluated. A baseline using the GIFT system was used to evaluate the salient point-based approaches. Without supervised learning, SIFT-based systems with optimized parameters slightly outperformed the GIFT system. A fusion of the two approaches shows that the information contained in them is complementary. Supervised learning on the feature space is foreseen as the next step of this study.
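    The dense-grid sampling and variance-based region selection can be approximated in a few lines of OpenCV/numpy. The sketch below is a simplified proxy (the file name, grid step, keep ratio, and the use of per-descriptor variance as the saliency score are our assumptions, not the paper's exact procedure):

```python
import cv2
import numpy as np

def dense_sift(gray, step=40):
    """Compute SIFT descriptors on a regular pixel grid (dense sampling)."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), float(step))
                 for y in range(step // 2, gray.shape[0], step)
                 for x in range(step // 2, gray.shape[1], step)]
    _, descriptors = sift.compute(gray, keypoints)
    return descriptors

def select_salient(descriptors, keep_ratio=0.5):
    """Keep the half of the blocks whose descriptors vary most, as a
    simplified stand-in for the variance-based selection strategies."""
    variance = descriptors.var(axis=1)
    order = np.argsort(variance)[::-1]
    return descriptors[order[: int(len(order) * keep_ratio)]]

gray = cv2.imread("fracture_xray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
salient = select_salient(dense_sift(gray))
```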

  14. The Effects of Reciprocal Teaching and Direct Instruction Approaches on Knowledge Map (K-Map) Generation Skill

    ERIC Educational Resources Information Center

    Görgen, Izzet

    2014-01-01

    The primary purpose of the present study is to investigate whether reciprocal teaching approach or direct instruction approach is more effective in the teaching of k-map generation skill. Secondary purpose of the study is to determine which of the k-map generation principles are more challenging for students to apply. The results of the study…

  16. Benthic Habitat Mapping by Combining Lyzenga’s Optical Model and Relative Water Depth Model in Lintea Island, Southeast Sulawesi

    NASA Astrophysics Data System (ADS)

    Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.

    2017-12-01

    Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging, as a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be addressed by applying the classification process separately to areas with different water depth levels. The water depth level can be extracted from satellite imagery using a Relative Water Depth Index (RWDI). This study proposes a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water-column correction method with the RWDI of Stumpf's method. The research was conducted on Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed approach, two different classification procedures were implemented. The first is the commonly applied method in benthic habitat mapping, in which the DII image is used as input for classifying the entire coastal area regardless of depth variation. The second is the proposed new approach, which begins by separating the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input, and the deep area was classified using the DII image as input. The classification maps of the two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map. The results show that the proposed approach can map benthic objects across all depth ranges and achieves better accuracy than the classification map produced using the DII alone.
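    The DII itself is a standard computation: after deep-water subtraction and a log transform, a band pair is combined using the attenuation-coefficient ratio estimated over a homogeneous bottom at varying depth. A minimal numpy sketch of the usual Lyzenga formulation follows (array names are placeholders; this is not the authors' code):

```python
import numpy as np

def depth_invariant_index(band_i, band_j, deep_i, deep_j):
    """Lyzenga depth-invariant index for one band pair.

    band_i, band_j : radiances over a homogeneous bottom (e.g. sand) at
                     varying depth, used here both to estimate the
                     attenuation ratio and to compute the index.
    deep_i, deep_j : deep-water radiances subtracted to remove the
                     atmospheric/surface signal.
    """
    xi = np.log(band_i - deep_i)
    xj = np.log(band_j - deep_j)
    # Attenuation-coefficient ratio k_i/k_j from the band (co)variances.
    a = (xi.var() - xj.var()) / (2.0 * np.cov(xi.ravel(), xj.ravel())[0, 1])
    ratio = a + np.sqrt(a * a + 1.0)
    return xi - ratio * xj
```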

  17. VIS/NIR mapping of TOC and extent of organic soils in the Nørre Å valley

    NASA Astrophysics Data System (ADS)

    Knadel, M.; Greve, M. H.; Thomsen, A.

    2009-04-01

    Organic soils represent a substantial pool of carbon in Denmark. The need for carbon stock assessment calls for more rapid and effective mapping methods. The aim of this study was to compare traditional soil mapping with maps produced from the results of a mobile VIS/NIR system, and to evaluate the ability to estimate TOC and map the extent of organic soils. The Veris mobile VIS/NIR spectroscopy system was compared to traditional manual sampling. The system is developed for in-situ near-surface measurements of soil carbon content. It measures diffuse reflectance in the 350-2200 nm region. The system consists of two spectrophotometers mounted on a toolbar and pulled by a tractor. Optical measurements are made through a sapphire window at the bottom of the shank. The shank was pulled at a depth of 5-7 cm at a speed of 4-5 km/h. The spectrometers acquired 20-25 spectra per second at 8 nm resolution. Measurements were made on transects spaced 10-12 m apart. The system also acquired soil electrical conductivity (EC) for two soil depths: shallow EC-SH (0-31 cm) and deep EC-DP (0-91 cm). The conductivity was recorded together with GPS coordinates and spectral data for subsequent construction of the calibration models. Two maps of organic soils in the Nørre Å valley (Central Jutland) were generated: (i) one based on a conventional 25 m grid with 162 sampling points and laboratory analysis of TOC, and (ii) one based on in-situ VIS/NIR measurements supported by chemometrics. Before regression analysis, the spectral information was compressed by calculating principal components. Outliers were identified by a Mahalanobis distance criterion and removed. Clustering using a fuzzy c-means algorithm was conducted; within each cluster, a location with minimal spatial variability was selected, and a map of 15 representative sample locations was proposed. The interpolation of the spectra into a single spectrum was performed using a Gaussian kernel weighting function, and spectra obtained near a sampled location were averaged. The collected spectra were correlated to the TOC of the 15 representative samples using multivariate regression techniques (Unscrambler 9.7; Camo ASA, Oslo, Norway). Two types of calibrations were performed: using only spectra, and using spectra together with the auxiliary data (EC-SH and EC-DP). The calibration equations were computed using PLS regression with segmented cross-validation on centred data (using the raw spectral data, log 1/R). Six spectral pre-treatments were compared: (1) raw spectra only, (2) Savitzky-Golay smoothing over 11 wavelength points, and transformation to the (3) first and (4) second Savitzky-Golay derivatives with a derivative interval of 21 wavelength points, (5) with or (6) without smoothing. The best treatment was taken to be the one with the lowest Root Mean Square Error of Prediction (RMSEP), the highest r2 between the VIS/NIR-predicted and measured values in the calibration model, and the lowest mean deviation of predicted TOC values. The best calibration model was obtained with pre-treatments including smoothing, the second derivative, and outlier removal. The two TOC maps were compared after interpolation using kriging and showed a similar pattern in the TOC distribution. Despite the unfavourable field conditions, the VIS/NIR system performed well in both low and high TOC areas. Water content exceeding field capacity in places in the lower parts of the investigated field did not seriously degrade the measurements. The present study represents the first attempt to apply the mobile Veris VIS/NIR system to the mapping of TOC in peat soils in Denmark. The results of this study show that a mobile VIS/NIR system can be applied to cost-effective TOC mapping of mineral and organic soils with highly varying water content. Key words: VIS/NIR spectroscopy, organic soils, TOC
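    The chemometric core of such a workflow, Savitzky-Golay pre-treatment followed by PLS regression with cross-validation, can be reproduced with standard Python tooling. The sketch below uses random stand-in data; the shapes, window length, and component count are illustrative, not the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: (n_samples, n_wavelengths) log(1/R) spectra; y: measured TOC values.
# Random stand-ins for the 15 representative samples.
rng = np.random.default_rng(0)
X = rng.random((15, 1850))
y = rng.random(15)

# Pre-treatment: smoothing plus a second Savitzky-Golay derivative.
X_d2 = savgol_filter(X, window_length=21, polyorder=3, deriv=2, axis=1)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X_d2, y, cv=5)        # segmented cross-validation
rmsep = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))   # model-selection criterion
print(rmsep)
```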

  18. Real-time tsunami inundation forecasting and damage mapping towards enhancing tsunami disaster resiliency

    NASA Astrophysics Data System (ADS)

    Koshimura, S.; Hino, R.; Ohta, Y.; Kobayashi, H.; Musa, A.; Murashima, Y.

    2014-12-01

    With the use of modern computing power and advanced sensor networks, a project is underway to establish a new system of real-time tsunami inundation forecasting, damage estimation, and mapping to enhance society's resilience in the aftermath of a major tsunami disaster. The system consists of a fusion of real-time crustal deformation monitoring and fault model estimation by Ohta et al. (2012), high-performance real-time tsunami propagation/inundation modeling with NEC's vector supercomputer SX-ACE, damage/loss estimation models (Koshimura et al., 2013), and geo-informatics. After a major (near-field) earthquake is triggered, the first response of the system is to identify the tsunami source model by applying the RAPiD algorithm (Ohta et al., 2012) to observed RTK-GPS time series at GEONET sites in Japan. As performed on the data obtained during the 2011 Tohoku event, we assume less than 10 minutes as the acquisition time of the source model. Given the tsunami source, the system runs a tsunami propagation and inundation model, optimized on the vector supercomputer SX-ACE, to obtain time series of tsunami heights at offshore/coastal tide gauges, tsunami travel and arrival times, the extent of the inundation zone, and the maximum flow depth distribution. The implemented tsunami numerical model is based on the non-linear shallow-water equations discretized by the finite difference method. The merged bathymetry and topography grids are prepared at 10 m resolution to better estimate tsunami inland penetration. Given the maximum flow depth distribution, the system performs GIS analysis to determine the numbers of exposed population and structures using census data, and then estimates the numbers of potential deaths and damaged structures by applying tsunami fragility curves (Koshimura et al., 2013). Once the tsunami source model is determined, the system is designed to complete the estimation within 10 minutes. The results are disseminated as mapping products to responders and stakeholders, e.g. national and regional municipalities, to be utilized in their emergency response activities. In 2014, the system was verified through case studies of the 2011 Tohoku event and potential earthquake scenarios along the Nankai Trough with regard to its capability and robustness.
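    As a deliberately minimal illustration of the modeling step, here is a 1-D, linearized analogue of a shallow-water solver on a staggered finite-difference grid (numpy; the grid size, depth, and Gaussian initial hump are invented, and the operational model is nonlinear, two-dimensional, and far more elaborate):

```python
import numpy as np

g = 9.81
nx, dx, dt = 200, 100.0, 0.5          # cells, spacing [m], time step [s]
depth = np.full(nx, 50.0)             # still-water depth [m]
eta = np.exp(-((np.arange(nx) - 50.0) / 5.0) ** 2)   # initial sea-surface hump [m]
q = np.zeros(nx + 1)                  # volume flux at cell faces

# Leapfrog update of the linear shallow-water equations on a staggered grid:
#   d(eta)/dt = -dq/dx,   dq/dt = -g * h * d(eta)/dx
for _ in range(500):
    q[1:-1] -= g * 0.5 * (depth[1:] + depth[:-1]) * dt / dx * np.diff(eta)
    eta -= dt / dx * np.diff(q)

print(eta.max())                      # amplitude after the wave splits and travels
```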

  19. Novel-word learning deficits in Mandarin-speaking preschool children with specific language impairments.

    PubMed

    Chen, Yuchun; Liu, Huei-Mei

    2014-01-01

    Children with SLI exhibit overall deficits in novel word learning compared to their age-matched peers. However, the manifestation of the word learning difficulty in SLI is not consistent across tasks, and the factors affecting learning performance have not yet been determined. Our aim is to examine the extent of word learning difficulties in Mandarin-speaking preschool children with SLI, and to explore the potential influence of existing lexical knowledge on the word learning process. Preschool children with SLI (n=37) and with typical language development (n=33) were exposed to novel words for unfamiliar objects embedded in stories. Word learning tasks including initial mapping and short-term repetitive learning were designed. Results revealed that Mandarin-speaking preschool children with SLI performed as well as their age peers in the initial form-meaning mapping task. Their word learning difficulty was evident only in the short-term repetitive learning task under a production demand, and their learning speed was slower than that of the control group. Children with SLI learned novel words with a semantic head better in both the initial mapping and repetitive learning tasks. Moderate correlations between word learning performance and scores on standardized vocabulary measures were found after controlling for children's age and nonverbal IQ. The results suggest that the word learning difficulty in children with SLI occurs in the process of establishing a robust phonological representation at the beginning stage of word learning. Also, implicit compound knowledge is applied to aid the word learning process for children with and without SLI. We also provide empirical data to validate the relationship between preschool children's word learning performance and their existing receptive vocabulary ability. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Adiabatic quantum simulation of quantum chemistry.

    PubMed

    Babbush, Ryan; Love, Peter J; Aspuru-Guzik, Alán

    2014-10-13

    We show how to apply the quantum adiabatic algorithm directly to the quantum computation of molecular properties. We describe a procedure to map electronic structure Hamiltonians to 2-body qubit Hamiltonians with a small set of physically realizable couplings. By combining the Bravyi-Kitaev construction to map fermions to qubits with perturbative gadgets to reduce the Hamiltonian to 2-body, we obtain precision requirements on the coupling strengths and a number of ancilla qubits that scale polynomially in the problem size. Hence our mapping is efficient. The required set of controllable interactions includes only two types of interaction beyond the Ising interactions required to apply the quantum adiabatic algorithm to combinatorial optimization problems. Our mapping may also be of interest to chemists directly as it defines a dictionary from electronic structure to spin Hamiltonians with physical interactions.
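    As a concrete, if anachronistic, illustration of the fermion-to-qubit step, the OpenFermion library (not used in the paper) exposes the Bravyi-Kitaev transform directly; here a toy hopping term stands in for an electronic-structure Hamiltonian:

```python
from openfermion.ops import FermionOperator
from openfermion.transforms import bravyi_kitaev

# Toy one-body hopping term c_0^dagger c_1 + h.c., a stand-in for a full
# electronic-structure Hamiltonian.
hopping = FermionOperator("0^ 1", 1.0) + FermionOperator("1^ 0", 1.0)

# Bravyi-Kitaev transform: fermionic operators -> qubit (Pauli) operators.
qubit_hamiltonian = bravyi_kitaev(hopping)
print(qubit_hamiltonian)
```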

  1. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon and in tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often applied, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the pixel-level prediction accuracy of tree species classification from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy for six species classes is about 75%.
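    A skeletal version of the final pipeline, spectral Gaussian smoothing followed by per-pixel SVM classification, might look as follows in scipy/scikit-learn (the cube, labels, sigma, and SVM settings are stand-ins, not the study's configuration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.svm import SVC

# cube: (rows, cols, bands) reflectance; labels: per-pixel species ids,
# with 0 marking unlabeled pixels. Both arrays are hypothetical.
rng = np.random.default_rng(0)
cube = rng.random((100, 100, 224))
labels = rng.integers(0, 7, (100, 100))

# Gaussian filter along the spectral axis to reduce interband noise.
cube_smooth = gaussian_filter1d(cube, sigma=2.0, axis=2)

X = cube_smooth.reshape(-1, cube.shape[2])
y = labels.ravel()
train = y > 0                                   # labeled pixels only
svm = SVC(kernel="rbf", C=10.0).fit(X[train], y[train])
species_map = svm.predict(X).reshape(labels.shape)
```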

  2. Improving CNN Performance Accuracies With Min-Max Objective.

    PubMed

    Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning

    2017-06-09

    We propose a novel method for improving the performance accuracy of a convolutional neural network (CNN) without increasing the network complexity. We accomplish this by applying the proposed Min-Max objective to a layer below the output layer of a CNN model during training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and can be applied to different CNNs with an insignificant increase in computation cost. Moreover, an incremental minibatch training procedure is proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets, covering both image classification and face verification tasks, reveal that employing the proposed Min-Max objective during training can remarkably improve the performance accuracy of a CNN model compared with the same model trained without this objective.
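    In spirit, the objective is a metric-learning penalty on an intermediate feature layer: same-class features are pulled together and different-class features pushed apart. Below is a simplified, margin-based rendering in PyTorch; the margin form, shapes, and batch are our invention, not the published formulation:

```python
import torch

def min_max_objective(features, labels, margin=1.0):
    """Simplified Min-Max-style penalty on a feature layer.

    Pulls same-class (within-manifold) feature pairs together and pushes
    different-class (between-manifold) pairs at least `margin` apart.
    Assumes the batch contains at least one same-class pair.
    """
    dists = torch.cdist(features, features)           # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool)
    within = dists[same & ~eye].pow(2).mean()         # minimize within-manifold
    between = torch.clamp(margin - dists[~same], min=0).pow(2).mean()
    return within + between

features = torch.randn(32, 128, requires_grad=True)   # layer below the output
labels = torch.randint(0, 10, (32,))
loss = min_max_objective(features, labels)            # added to the CNN loss
loss.backward()
```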

  3. Improving mapping and SNP-calling performance in multiplexed targeted next-generation sequencing

    PubMed Central

    2012-01-01

    Background Compared to classical genotyping, targeted next-generation sequencing (tNGS) can be custom-designed to interrogate entire genomic regions of interest, in order to detect novel as well as known variants. To bring down the per-sample cost, one approach is to pool barcoded NGS libraries before sample enrichment. Still, we lack a complete understanding of how this multiplexed tNGS approach and the varying performance of the ever-evolving analytical tools can affect the quality of variant discovery. Therefore, we evaluated the impact of different software tools and analytical approaches on the discovery of single nucleotide polymorphisms (SNPs) in multiplexed tNGS data. To generate our own test model, we combined a sequence capture method with NGS in three experimental stages of increasing complexity (E. coli genes, multiplexed E. coli, and multiplexed HapMap BRCA1/2 regions). Results We successfully enriched barcoded NGS libraries instead of genomic DNA, achieving reproducible coverage profiles (Pearson correlation coefficients of up to 0.99) across multiplexed samples, with <10% strand bias. However, the SNP calling quality was substantially affected by the choice of tools and mapping strategy. With the aim of reducing computational requirements, we compared conventional whole-genome mapping and SNP-calling with a new faster approach: target-region mapping with subsequent ‘read-backmapping’ to the whole genome to reduce the false detection rate. Consequently, we developed a combined mapping pipeline, which includes standard tools (BWA, SAMtools, etc.), and tested it on public HiSeq2000 exome data from the 1000 Genomes Project. Our pipeline saved 12 hours of run time per HiSeq2000 exome sample and detected ~5% more SNPs than the conventional whole genome approach. This suggests that more potential novel SNPs may be discovered using both approaches than with just the conventional approach. Conclusions We recommend applying our general ‘two-step’ mapping approach for more efficient SNP discovery in tNGS. Our study has also shown the benefit of computing inter-sample SNP-concordances and inspecting read alignments in order to attain more confident results. PMID:22913592

  4. Engineering fluid flow using sequenced microstructures

    NASA Astrophysics Data System (ADS)

    Amini, Hamed; Sollier, Elodie; Masaeli, Mahdokht; Xie, Yu; Ganapathysubramanian, Baskar; Stone, Howard A.; di Carlo, Dino

    2013-05-01

    Controlling the shape of fluid streams is important across scales: from industrial processing to control of biomolecular interactions. Previous approaches to control fluid streams have focused mainly on creating chaotic flows to enhance mixing. Here we develop an approach to apply order using sequences of fluid transformations rather than enhancing chaos. We investigate the inertial flow deformations around a library of single cylindrical pillars within a microfluidic channel and assemble these net fluid transformations to engineer fluid streams. As these transformations provide a deterministic mapping of fluid elements from upstream to downstream of a pillar, we can sequentially arrange pillars to apply the associated nested maps and, therefore, create complex fluid structures without additional numerical simulation. To show the range of capabilities, we present sequences that sculpt the cross-sectional shape of a stream into complex geometries, move and split a fluid stream, perform solution exchange and achieve particle separation. A general strategy to engineer fluid streams into a broad class of defined configurations, in which the complexity of the nonlinear equations of fluid motion is abstracted from the user, is a first step to programming streams of any desired shape, which would be useful for biological, chemical and materials automation.
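    Computationally, the key point is that each pillar contributes a precomputed, deterministic map of the channel cross-section, so a pillar sequence is plain function composition. A toy Python illustration follows (the shear-like pillar_map is invented and is not a real pillar's flow solution):

```python
import numpy as np

def pillar_map(points, diameter=0.4):
    """Hypothetical stand-in for one pillar's precomputed deformation map:
    a function sending cross-section positions upstream of the pillar to
    positions downstream (here, a toy lateral shear)."""
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([x + diameter * np.sin(np.pi * y), y])

def apply_sequence(points, maps):
    """Compose the per-pillar maps in order: the net transformation of a
    pillar sequence is the nested application of the individual maps."""
    for m in maps:
        points = m(points)
    return points

stream = np.column_stack([np.linspace(-0.5, 0.5, 101), np.zeros(101)])
sculpted = apply_sequence(stream, [pillar_map] * 8)   # eight identical pillars
```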

  5. SOHO EIT Carrington maps from synoptic full-disk data

    NASA Technical Reports Server (NTRS)

    Thompson, B. J.; Newmark, J. S.; Gurman, J. B.; Delaboudiniere, J. P.; Clette, F.; Gibson, S. E.

    1997-01-01

    The solar synoptic maps, obtained from observations carried out since May 1996 by the extreme-ultraviolet imaging telescope (EIT) onboard the Solar and Heliospheric Observatory (SOHO), are presented. The maps were constructed for each Carrington rotation with the calibrated data. The off-limb maps at 1.05 and 1.10 solar radii were generated for three coronal lines using the standard method applied to coronagraph synoptic maps. The maps reveal several aspects of the solar structure over the entire rotation and are used in the whole sun month modeling campaign.

  6. Fixed point theorems for generalized α-β-weakly contraction mappings in metric spaces and applications.

    PubMed

    Latif, Abdul; Mongkolkeha, Chirasak; Sintunavarat, Wutiphol

    2014-01-01

    We extend the notion of generalized weakly contraction mappings due to Choudhury et al. (2011) to generalized α-β-weakly contraction mappings. We show with examples that our new class of mappings is a real generalization of several known classes of mappings. We also establish fixed point results for such mappings in metric spaces. Applying our new results, we obtain fixed point results on ordinary metric spaces, metric spaces endowed with an arbitrary binary relation, and metric spaces endowed with graph.
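    For orientation, the classical weakly contractive condition that this line of work generalizes is shown below; the paper's actual α-β condition involves additional control functions not reproduced in the abstract, so this is background only:

```latex
% Weakly contractive mapping T on a metric space (X, d):
d(Tx, Ty) \le d(x, y) - \varphi\bigl(d(x, y)\bigr)
\qquad \text{for all } x, y \in X,
% where \varphi : [0,\infty) \to [0,\infty) is continuous, nondecreasing,
% and \varphi(t) = 0 if and only if t = 0.
```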

  7. Mapping land cover change over continental Africa using Landsat and Google Earth Engine cloud computing.

    PubMed

    Midekisa, Alemayehu; Holl, Felix; Savory, David J; Andrade-Pacheco, Ricardo; Gething, Peter W; Bennett, Adam; Sturrock, Hugh J W

    2017-01-01

    Quantifying and monitoring the spatial and temporal dynamics of the global land cover is critical for better understanding many of the Earth's land surface processes. However, the lack of regularly updated, continental-scale, and high spatial resolution (30 m) land cover data limit our ability to better understand the spatial extent and the temporal dynamics of land surface changes. Despite the free availability of high spatial resolution Landsat satellite data, continental-scale land cover mapping using high resolution Landsat satellite data was not feasible until now due to the need for high-performance computing to store, process, and analyze this large volume of high resolution satellite data. In this study, we present an approach to quantify continental land cover and impervious surface changes over a long period of time (15 years) using high resolution Landsat satellite observations and Google Earth Engine cloud computing platform. The approach applied here to overcome the computational challenges of handling big earth observation data by using cloud computing can help scientists and practitioners who lack high-performance computational resources.
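    A flavor of the Earth Engine workflow, building cloud-free annual Landsat composites server-side and differencing an index between epochs, is sketched below with the Python API. The collection id, bands, region, and index are illustrative stand-ins, not the study's land cover classifier:

```python
import ee

ee.Initialize()  # assumes authenticated Earth Engine credentials

# Sketch: median annual composites over Africa and an index difference.
africa = ee.Geometry.Rectangle([-18.0, -35.0, 52.0, 38.0])

def annual_composite(year):
    """Median composite of Landsat 5 surface reflectance for one year."""
    return (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
            .filterBounds(africa)
            .filterDate(f"{year}-01-01", f"{year}-12-31")
            .median())

def ndvi(image):
    """NDVI from the Landsat 5 Collection 2 NIR/red surface bands."""
    return image.normalizedDifference(["SR_B4", "SR_B3"])

change = ndvi(annual_composite(2010)).subtract(ndvi(annual_composite(2000)))
```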

  9. Investigation of the effect of alumina and compaction pressure on physical, electrical and tribological properties of Al-Fe-Cr-Al2O3 powder composites

    NASA Astrophysics Data System (ADS)

    Mohsin, Mohammad; Mohd, Aas; Suhaib, M.; Arif, Sajjad; Arif Siddiqui, M.

    2017-10-01

    In this experimental work, aluminium Al-20Fe-5Cr (in wt.%) matrix composites reinforced with varying wt.% of Al2O3 (0, 10, 20 and 30) and compacted at pressures of 470, 550 and 600 MPa were prepared by the powder metallurgy technique. The characterization of the composites was performed by scanning electron microscopy (SEM), X-ray diffraction (XRD), energy dispersive spectroscopy (EDS) and elemental mapping. Uniform distribution of Al2O3 in the aluminium matrix was observed by elemental mapping. The composites showed an increase in density and hardness with increasing alumina content and compaction pressure, while electrical conductivity decreased with the addition of alumina. The tribological study of the composites was performed on a pin-on-disc apparatus under sliding conditions (applied load 40 N, sliding speed 1.5 m s-1, sliding distance 300 m). The tribological properties of the composites improved with increasing alumina content and compaction pressure. SEM analysis was also carried out to understand the wear mechanism of the worn surfaces of the various fabricated composites and the aluminium matrix.

  10. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are a fundamental element in urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas is challenging, because urban water bodies are mainly small in size and spectral confusion widely exists between water and the complex features of the urban environment. The water index is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels by water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive, iterative selection of optimal neighboring land pixels, respectively; and (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatter plot smoothing is first applied to the histogram curve of the water index (WI) image. The Otsu threshold is then used as the starting point for selecting land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level. Under the assumption that the endmember signature of a target pixel should be most similar to those of adjacent pixels due to spatial dependence, the endmembers of water and land are determined from neighboring pure land or pure water pixels within a given distance. To obtain the most representative endmembers for SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity within a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) was applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. Results indicate that the water maps generated by the proposed method correspond closely with the ground-truth water maps at subpixel precision, and a comprehensive analysis of different accuracy indexes (RMSE and SE) shows that WISMA achieved the best performance in water mapping.
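    The unmixing step itself is compact: with one water and one land endmember, the water fraction of a mixed pixel is the least-squares solution of a two-endmember linear mixture, clipped to [0, 1]. A numpy sketch with made-up four-band spectra:

```python
import numpy as np

def water_fraction(pixel, water_em, land_em):
    """Two-endmember linear unmixing of one mixed pixel.

    Solves pixel ~= f*water_em + (1-f)*land_em for the water fraction f
    by least squares, then clips f to [0, 1].
    """
    d = water_em - land_em
    f = np.dot(pixel - land_em, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# Hypothetical endmember spectra taken from neighboring pure pixels.
water_em = np.array([0.06, 0.05, 0.03, 0.01])
land_em = np.array([0.10, 0.12, 0.20, 0.30])
print(water_fraction(np.array([0.08, 0.08, 0.11, 0.15]), water_em, land_em))
```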

  11. Automatic identification of IASLC-defined mediastinal lymph node stations on CT scans using multi-atlas organ segmentation

    NASA Astrophysics Data System (ADS)

    Hoffman, Joanne; Liu, Jiamin; Turkbey, Evrim; Kim, Lauren; Summers, Ronald M.

    2015-03-01

    Station labeling of mediastinal lymph nodes is typically performed to identify the location of enlarged nodes for cancer staging. Stations are usually assigned manually in clinical radiology practice by qualitative visual assessment on CT scans, which is time consuming and highly variable. In this paper, we developed a method that automatically recognizes the lymph node stations in thoracic CT scans based on the anatomical organs in the mediastinum. First, the trachea, lungs, and spine are automatically segmented to locate the mediastinum region. Then, eight more anatomical organs are simultaneously identified by multi-atlas segmentation. Finally, with the segmentation of those anatomical organs, we convert the text definitions of the International Association for the Study of Lung Cancer (IASLC) lymph node map into patient-specific color-coded CT image maps, so that a station is automatically assigned to each lymph node. We applied this system to CT scans of 86 patients with 336 mediastinal lymph nodes measuring 10 mm or greater; 84.8% of the mediastinal lymph nodes were correctly mapped to their stations.

  12. Integrating Depth and Image Sequences for Planetary Rover Mapping Using RGB-D Sensor

    NASA Astrophysics Data System (ADS)

    Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.

    2018-04-01

    RGB-D cameras allow the capture of depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera. Then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, using the registration parameters from ICP, the 3D scene from the RGB images is registered to the 3D scene from the depth images, and the fused point cloud is obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrate the feasibility of the proposed method.

  13. Isomap transform for segmenting human body shapes.

    PubMed

    Cerveri, P; Sarro, K J; Marchente, M; Barros, R M L

    2011-09-01

    Segmentation of the 3D human body is a very challenging problem in applications exploiting volume capture data. Direct clustering in the Euclidean space is usually complex or even unsolvable. This paper presents an original method based on the Isomap (isometric feature mapping) transform of the volume dataset. The 3D articulated posture is mapped by Isomap into the pose of Da Vinci's Vitruvian man: the limbs are unrolled from each other and separated from the trunk and pelvis, and the topology of the human body shape is recovered. In this configuration, Hoshen-Kopelman clustering applied to concentric spherical shells is used to automatically group points into labelled principal curves. Shepard interpolation is used to back-map points of the principal curves into the original volume space. Experimental results on many different postures have proved the validity of the proposed method. Accuracies better than 2 cm in the location of the joint centres and 3° in the direction of the rotation axes were obtained, which qualifies this procedure as a potential tool for markerless motion analysis.
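    With scikit-learn, the embedding step of such a pipeline is essentially a one-liner. The sketch below uses random stand-in points, invented neighbor/shell counts, and a crude radial shell decomposition in place of the full Hoshen-Kopelman stage:

```python
import numpy as np
from sklearn.manifold import Isomap

# points: (n, 3) voxel coordinates of the captured body volume
# (random stand-in data here).
points = np.random.default_rng(0).random((2000, 3))

# Isomap preserves geodesic distances along the body surface, unrolling
# the articulated posture into a canonical, Vitruvian-like layout.
embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(points)

# Concentric shells around the embedded centre, a stand-in for the shell
# decomposition that precedes Hoshen-Kopelman clustering.
radius = np.linalg.norm(embedding - embedding.mean(axis=0), axis=1)
shells = np.digitize(radius, np.linspace(0, radius.max(), 20))
```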

  14. Energy- and k -resolved mapping of the magnetic circular dichroism in threshold photoemission from Co films on Pt(111)

    NASA Astrophysics Data System (ADS)

    Staab, Maximilian; Kutnyakhov, Dmytro; Wallauer, Robert; Chernov, Sergey; Medjanik, Katerina; Elmers, Hans Joachim; Kläui, Mathias; Schönhense, Gerd

    2017-04-01

    The magnetic circular dichroism in threshold photoemission (TPMCD) for perpendicularly magnetized fcc Co films on Pt(111) has been revisited. A complete mapping of the spectral function I(EB, kx, ky) (binding energy EB, momentum parallel to the surface kx, ky) and the corresponding TPMCD asymmetry distribution AMCD(EB, kx, ky) has been performed for one-photon and two-photon photoemission using time-of-flight momentum microscopy. The experimental results allow direct transitions to be distinguished from indirect ones. The measurements reveal clear band features of direct transitions from bulk bands that show a nontrivial asymmetry pattern. A significant homogeneous background with substantial asymmetry, stemming from indirect transitions, superposes the direct transitions. Two-photon photoemission reveals enhanced emission intensity via an image potential state acting as an intermediate state; the image potential state enhances not only the intensity but also the asymmetry. The present results demonstrate that two-photon photoemission is a powerful method for mapping spin-polarized unoccupied band structures and point out pathways for applying TPMCD as a contrast mechanism for various classes of magnetic materials.

  15. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management, and accident discovery. With the increasing volume of WAMI collections and of features extracted from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks on WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each holding a small subset of the images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually on its assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.

  16. A combined triggering-propagation modeling approach for the assessment of rainfall induced debris flow susceptibility

    NASA Astrophysics Data System (ADS)

    Stancanelli, Laura Maria; Peres, David Johnny; Cancelliere, Antonino; Foti, Enrico

    2017-07-01

    Rainfall-induced shallow slides can evolve into debris flows that move rapidly downstream with devastating consequences. Mapping debris flow susceptibility is an important aid for risk mitigation. We propose a novel practical approach to derive debris flow inundation maps useful for susceptibility assessment, based on the integrated use of DEM-based, spatially distributed hydrological and slope stability models with debris flow propagation models. More specifically, the TRIGRS infiltration and infinite-slope stability model is combined with the FLO-2D model for the simulation of the related debris flow propagation and deposition. An empirical instability-to-debris-flow triggering threshold, calibrated on the basis of observed events, links the two models and determines the amount of unstable mass that develops into a debris flow. Calibration of the proposed methodology is carried out on data from the debris flow event that occurred on 1 October 2009 in the Peloritani mountains area (Italy). Model performance, assessed by receiver operating characteristic (ROC) indexes, shows fairly good reproduction of the observed event. A comparison with the traditional debris flow modeling procedure, in which sediment and water hydrographs are input as lumped sources at selected points on top of the streams, is also performed in order to quantify the limitations of that commonly applied approach. Results show that the proposed method, besides being more process-consistent than the traditional hydrograph-based approach, can potentially provide a more accurate simulation of debris flow phenomena, in terms of spatial patterns of erosion and deposition as well as quantification of mobilized volumes and depths, avoiding overestimation of the debris flow triggering volume and, thus, of the maximum inundation flow depths.
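    Performance assessment of this kind reduces to cell-by-cell contingency counts between simulated and observed inundation grids. A small numpy sketch of typical ROC-style indexes follows (a generic formulation, not necessarily the exact index set used in the paper):

```python
import numpy as np

def roc_indexes(simulated, observed):
    """ROC-style skill indexes for binary inundation grids (True = inundated).

    Both inputs are boolean arrays on the same grid; cells outside the
    study area should be masked out beforehand.
    """
    tp = np.sum(simulated & observed)      # hits
    fp = np.sum(simulated & ~observed)     # false alarms
    fn = np.sum(~simulated & observed)     # misses
    tn = np.sum(~simulated & ~observed)    # correct negatives
    return {
        "true_positive_rate": tp / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
        "critical_success_index": tp / (tp + fp + fn),
    }
```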

  17. Semi-automatic mapping of cultural heritage from airborne laser scanning using deep learning

    NASA Astrophysics Data System (ADS)

    Due Trier, Øivind; Salberg, Arnt-Børre; Holger Pilø, Lars; Tonning, Christer; Marius Johansen, Hans; Aarsten, Dagrun

    2016-04-01

    This paper proposes to use deep learning to improve semi-automatic mapping of cultural heritage from airborne laser scanning (ALS) data. Automatic detection methods based on traditional pattern recognition have been applied in a number of cultural heritage mapping projects in Norway over the past five years. Automatic detection of pits and heaps has been combined with visual interpretation of the ALS data for the mapping of deer hunting systems, iron production sites, grave mounds, and charcoal kilns. However, the performance of the automatic detection methods varies substantially between ALS datasets. For the mapping of deer hunting systems on flat gravel and sand sediment deposits, the automatic detection results were almost perfect. Some false detections appeared in the terrain outside the sediment deposits; these could be explained by other pit-like landscape features, such as parts of river courses, spaces between boulders, and modern terrain modifications. However, they were easy to spot during visual interpretation, and the number of missed individual pitfall traps remained low. For the mapping of grave mounds, the automatic method produced a large number of false detections, reducing the usefulness of the semi-automatic approach: the mound is a very common natural terrain feature, and grave mounds are less distinct in shape than pitfall traps. Still, applying automatic mound detection to an entire municipality led to the new discovery of an Iron Age grave field with more than 15 individual mounds. Automatic mound detection also proved useful for a detailed re-mapping of Norway's largest Iron Age grave yard, which contains almost 1000 individual graves. Combined pit and mound detection has been applied to the mapping of more than 1000 charcoal kilns that were used by an iron works 350-200 years ago. The majority of charcoal kilns were indirectly detected as pits on the circumference, a central mound, or both; kilns with a flat interior and a shallow ditch along the circumference were often missed. The success of automatic detection seems to depend on two factors: (1) the density of ALS ground hits on the cultural heritage structures being sought, and (2) the extent to which these structures stand out from natural terrain features. The first factor may, to some extent, be improved by using a higher number of ALS pulses per square meter. The second factor is difficult to change and highlights another challenge: how to build a general automatic method that is applicable in all types of terrain within a country. This mixed experience with traditional pattern recognition for semi-automatic mapping of cultural heritage led us to consider deep learning as an alternative approach. The main principle is that a general feature detector is trained on a large image database and then tailored to a specific task using a modest number of images of true and false examples of the features being sought. Results of using deep learning are compared with previous results using traditional pattern recognition.

  18. Comparing the Effect of Thinking Maps Training Package Developed by the Thinking Maps Method on the Reading Performance of Dyslexic Students.

    PubMed

    Faramarzi, Salar; Moradi, Mohammadreza; Abedi, Ahmad

    2018-06-01

    The present study aimed to develop a thinking maps training package and compare its training effect with the thinking maps method on the reading performance of second- and fifth-grade male dyslexic elementary school students. For this mixed-method exploratory study, 90 students from the above-mentioned grades in Isfahan who met the inclusion criteria were selected by multistage sampling and randomly assigned to six experimental and control groups. The data were collected with a reading and dyslexia test and the Wechsler Intelligence Scale for Children, fourth edition. The results of covariance analysis indicated a significant difference between the reading performance of the experimental groups (thinking maps training package and thinking maps method) and the control groups ([Formula: see text]). Moreover, there were significant differences between the thinking maps training package group and the thinking maps method group in some of the subtests ([Formula: see text]). It can be concluded that the thinking maps training package and the thinking maps method exert a positive influence on the reading performance of dyslexic students; therefore, thinking maps can be used as an effective training and treatment method.

  19. Shallow Geothermal Admissibility Maps: a Methodology to Achieve a Sustainable Development of Shallow Geothermal Energy with Regards to Groundwater Resources

    NASA Astrophysics Data System (ADS)

    Bréthaut, D.; Parriaux, A.; Tacher, L.

    2009-04-01

    Implantation and use of shallow geothermal systems may have environmental impacts. Traditionally, risks are divided into two categories: direct and indirect. Direct risks are linked to leakage of the circulating fluid (usually water with anti-freeze) of ground source heat pumps into the underground, which may be a source of contamination. Indirect risks are linked to the borehole itself and the operation of the systems, which can modify the groundwater flow, change groundwater temperature and chemistry, and create bypasses from the surface to the aquifers or between two aquifers. Groundwater source heat pumps (GWSHP) may provoke indirect risks, while ground source heat pumps (GSHP) may provoke both direct and indirect risks. To minimize these environmental risks, the implantation of shallow geothermal systems must be regulated. In 2007, more than 7000 GSHP were installed in Switzerland, representing 1.5 million drilled meters. In the canton of Vaud, each shallow geothermal project has to be approved by the Department of the Environment; approximately 1500 applications were treated during 2007, about 15 times more than in 1990. Mapping the restrictions on shallow geothermal system implantation due to environmental constraints makes it possible: 1) to optimize the management and planning of the systems, 2) to minimize their impact on groundwater resources, and 3) to facilitate the administrative procedures for treating implantation applications. Such maps are called admissibility maps. Here, a methodology to elaborate them is presented and tested. Interactions between shallow geothermal energy and groundwater resources have been investigated. Admissibility criteria are proposed and structured into a flow chart which provides a decision-making tool for shallow geothermal system implantation. This approach has been applied to three areas of western Switzerland ranging from 2 to 6 km2. For each area, a geological investigation was carried out and complementary territorial information (e.g. a map of contaminated areas) was gathered in order to produce the admissibility maps. For one area, a more detailed study was performed and a complete 3D geological model was constructed using an in-house modelling software called GeoShape. The model was then imported into a geographical information system, which was used to produce the admissibility map. The resulting maps were judged to be consistent and satisfactory. In a second part of the project, this method will be applied at a larger scale: an admissibility map of the canton of Vaud (3200 km2) will be created. Considering the fast growth in the number of installed GSHP and GWSHP throughout the world, it is clear that admissibility maps will play a major role in developing shallow geothermal energy as an environmentally friendly and sustainable resource.

  20. An overview of concept mapping in Dutch mental health care.

    PubMed

    Nabitz, Udo; van Randeraad-van der Zee, Carlijn; Kok, Ineke; van Bon-Martens, Marja; Serverens, Peter

    2017-02-01

    About 25 years ago, concept mapping was introduced in the Netherlands and applied in different fields. A collection of concept mapping projects conducted in the Netherlands was identified, in part from the archive of the Netherlands Institute of Mental Health and Addiction (Trimbos Institute). Some of the 90 identified projects have been published internationally. The 90 concept mapping projects reflect the changes in mental health care and can be grouped into 5-year periods and into five typologies. The studies range from conceptualizing the problems of the homeless to specifying quality indicators for treatment programs for patients with cystic fibrosis. The number of concept mapping projects has varied over time, with considerable growth in the last 5 years compared to the previous 5 years. Three case studies are described in detail with 12 characteristics and graphical representations. Concept mapping aligns well with the typical Dutch approach of the "Poldermodel." A broad introduction of concept mapping in European countries, in cooperation with other countries such as the United States and Canada, would strengthen the empirical basis for applying this approach in health care policy, quality, and clinical work. Copyright © 2016. Published by Elsevier Ltd.

  1. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform

    PubMed Central

    Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve both, we propose a parallel design and implementation of an Otsu-optimized Canny operator using the MapReduce parallel programming model running on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve edge detection performance, while the MapReduce model provides parallel processing for the Canny operator, addressing the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster of 5 nodes with a dataset of 60,000 images, and speeds up processing by approximately 3.4 times overall when handling large-scale datasets, demonstrating the clear superiority of our method. The proposed algorithm thus delivers both better edge detection performance and improved runtime. PMID:29861711
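    The per-image kernel that each map task would execute is small; in OpenCV it amounts to deriving the high threshold from Otsu's method and feeding it to Canny. This single-machine sketch uses an invented file name, and the 0.5 low-threshold factor is a common heuristic rather than the paper's exact rule:

```python
import cv2

def otsu_canny(gray):
    """Canny edge detection with Otsu-derived dual thresholds."""
    # cv2.threshold with THRESH_OTSU returns the Otsu threshold as retval.
    otsu_thresh, _ = cv2.threshold(gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(gray, 0.5 * otsu_thresh, otsu_thresh)

edges = otsu_canny(cv2.imread("voc_image.jpg", cv2.IMREAD_GRAYSCALE))
```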

  2. Performance seeking control excitation mode

    NASA Technical Reports Server (NTRS)

    Schkolnik, Gerard

    1995-01-01

    Flight testing of the performance seeking control (PSC) excitation mode was successfully completed at NASA Dryden on the F-15 highly integrated digital electronic control (HIDEC) aircraft. Although the excitation mode was not one of the original objectives of the PSC program, it was rapidly prototyped and implemented into the architecture of the PSC algorithm, allowing valuable and timely research data to be gathered. The primary flight test objective was to investigate the feasibility of a future measurement-based performance optimization algorithm. This future algorithm, called adaptive aircraft performance technology (AdAPT), generates and applies excitation inputs to selected control effectors. Fourier transformations are used to convert measured response and control effector data into frequency-domain models, which are mapped into state-space models using multiterm frequency matching. Formal optimization principles are then applied to produce an integrated, performance-optimal effector suite. The key technical challenge of the measurement-based approach is identifying the gradient of the performance index with respect to the selected control effector; this concern was addressed by the excitation mode flight test. The AdAPT feasibility study used the PSC excitation mode to apply separate sinusoidal excitation trims to two controls: one on the aircraft, the inlet first ramp (cowl), and one on the engine, the throat area. Aircraft control and response data were recorded using on-board instrumentation and analyzed post-flight. Sensor noise characteristics, axial acceleration performance gradients, and repeatability were determined, and the results were compared to pilot comments to assess ride quality. Flight test results indicate that performance gradients were identified at all flight conditions, sensor noise levels were acceptable at the frequencies of interest, and the excitations were generally not sensed by the pilot.
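
    The gradient-identification idea in this record, sinusoidal excitation of an effector followed by Fourier analysis of the measured response, can be illustrated with a minimal numerical sketch. All signals, rates, and values below are hypothetical placeholders, not flight data or the PSC implementation.

        # A minimal sketch, under stated assumptions, of measurement-based
        # gradient identification: a sinusoidal trim is applied to one effector,
        # and the performance gradient is estimated as the ratio of the Fourier
        # coefficients of response and excitation at the excitation frequency.
        import numpy as np

        fs = 50.0                    # sample rate, Hz (assumed)
        f_exc = 0.5                  # excitation frequency, Hz (assumed)
        t = np.arange(0.0, 60.0, 1.0 / fs)

        u = 0.02 * np.sin(2 * np.pi * f_exc * t)                 # effector trim
        true_gradient = -1.5
        y = true_gradient * u + 0.005 * np.random.randn(t.size)  # noisy response

        # Single-bin discrete Fourier transform at the excitation frequency.
        phasor = np.exp(-2j * np.pi * f_exc * t)
        U = np.sum(u * phasor)
        Y = np.sum(y * phasor)

        gradient = np.real(Y / U)    # in-phase response-to-excitation ratio
        print(f"estimated gradient: {gradient:.3f}")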

  3. Remote sensing and GIS for mapping groundwater recharge and discharge areas in salinity prone catchments, southeastern Australia

    NASA Astrophysics Data System (ADS)

    Tweed, Sarah O.; Leblanc, Marc; Webb, John A.; Lubczynski, Maciek W.

    2007-02-01

    Identifying groundwater recharge and discharge areas across catchments is critical for implementing effective strategies for salinity mitigation, surface-water and groundwater resource management, and ecosystem protection. In this study, a synergistic approach has been developed that applies a combination of remote sensing and geographic information system (GIS) techniques to map groundwater recharge and discharge areas. The approach is applied to an unconfined basalt aquifer covering ~11,500 km2 in an agriculturally intensive, salinity- and drought-prone region of southeastern Australia. A review of local hydrogeological processes allowed a series of surface and subsurface indicators of groundwater recharge and discharge areas to be established. Various remote sensing and GIS techniques were then used to map these surface indicators, including terrain analysis, monitoring of vegetation activity, and mapping of infiltration capacity. All regions where groundwater is not discharging to the surface were considered potential recharge areas. This approach, applied systematically across a catchment, provides a framework for mapping recharge and discharge areas. A key component in assigning surface and subsurface indicators is their relevance to the dominant recharge and discharge processes and the use of appropriate remote sensing and GIS techniques with the capacity to identify these processes.
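
    As one concrete illustration of the surface indicators listed above, vegetation activity is commonly mapped with the normalized difference vegetation index (NDVI); persistently green vegetation in dry periods can point to groundwater discharge. The sketch below is a minimal example with hypothetical band arrays and an assumed threshold; the study's actual indicator set and processing chain are not specified in this record.

        # A minimal NDVI sketch for the vegetation-activity indicator.
        # Band arrays and the activity threshold are hypothetical.
        import numpy as np

        def ndvi(nir, red, eps=1e-6):
            """Normalized Difference Vegetation Index, in [-1, 1]."""
            nir = nir.astype(np.float64)
            red = red.astype(np.float64)
            return (nir - red) / (nir + red + eps)

        # Hypothetical dry-season reflectance bands (e.g., read from a raster).
        nir_band = np.random.rand(512, 512)
        red_band = np.random.rand(512, 512)

        activity = ndvi(nir_band, red_band)
        potential_discharge = activity > 0.4  # assumed threshold for active vegetation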

  4. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait.

    PubMed

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-12-31

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits.

  5. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait

    PubMed Central

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-01-01

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits. PMID:14975142

  6. Conformational and functional analysis of molecular dynamics trajectories by Self-Organising Maps

    PubMed Central

    2011-01-01

    Background Molecular dynamics (MD) simulations are powerful tools for investigating the conformational dynamics of proteins, which is often a critical element of their function. Identification of functionally relevant conformations is generally done by clustering the large ensemble of structures that are generated. Recently, Self-Organising Maps (SOMs) were reported to perform more accurately and provide more consistent results than traditional clustering algorithms in various data mining problems. We present a novel strategy to analyse and compare conformational ensembles of protein domains using a two-level approach that combines SOMs and hierarchical clustering. Results The conformational dynamics of the α-spectrin SH3 protein domain and six single mutants were analysed by MD simulations. The Cartesian coordinates of the Cα atoms of conformations sampled in the essential space were used as input data vectors for SOM training; complete-linkage clustering was then performed on the SOM prototype vectors. A specific protocol to optimize a SOM for structural ensembles was proposed: the optimal SOM was selected by means of a Taguchi experimental design plan applied to different data sets, and the optimal sampling rate of the MD trajectory was selected. The proposed two-level approach was applied to single trajectories of the SH3 domain independently as well as to groups of them at the same time. The results demonstrated the potential of this approach in the analysis of large ensembles of molecular structures: the possibility of producing a topological mapping of the conformational space in a simple 2D visualisation, as well as of effectively highlighting differences in conformational dynamics directly related to biological functions. Conclusions The use of a two-level approach combining SOMs and hierarchical clustering for conformational analysis of structural ensembles of proteins was proposed. It can easily be extended to other study cases and to conformational ensembles from other sources. PMID:21569575
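
    A minimal sketch of the two-level scheme follows, using the third-party MiniSom package as a stand-in for the authors' SOM implementation; the map size, input data, and number of clusters are hypothetical.

        # Two-level conformational clustering: train a SOM on conformation
        # coordinate vectors, then apply complete-linkage clustering to the
        # SOM prototype vectors. Input data here are random placeholders.
        import numpy as np
        from minisom import MiniSom
        from scipy.cluster.hierarchy import fcluster, linkage

        # Hypothetical input: 5000 conformations x 3N Cartesian coordinates.
        frames = np.random.rand(5000, 180)

        som = MiniSom(10, 10, frames.shape[1], sigma=1.5, learning_rate=0.5)
        som.train_random(frames, 10000)

        # Second level: complete-linkage clustering of the 100 prototypes.
        prototypes = som.get_weights().reshape(-1, frames.shape[1])
        tree = linkage(prototypes, method="complete")
        labels = fcluster(tree, t=5, criterion="maxclust")  # e.g. 5 macro-states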

  7. Development of Competence and Performance in Cartographic Language by Children at the Concrete Level of Map-Reasoning.

    ERIC Educational Resources Information Center

    Gerber, Rodney Victor

    This dissertation examines the development of children's skills in map use and free-recall map sketching, with particular emphasis on map reasoning, competence in cartographic language, and performance in cartographic language. Cartographic language (the broad range of line, point, and area signs and map elements) is interpreted as the means by…

  8. Improved mapping of the travelling salesman problem for quantum annealing

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias; Heim, Bettina; Brown, Ethan; Wecker, David

    2015-03-01

    We consider the quantum adiabatic algorithm as applied to the travelling salesman problem (TSP). We introduce a novel mapping of TSP to an Ising spin glass Hamiltonian and compare it to previously known mappings. Through direct perturbative analysis, unitary evolution, and simulated quantum annealing, we show this new mapping to be significantly superior. We discuss how this advantage can translate to actual physical implementations of TSP on quantum annealers.
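
    The improved mapping itself is not given in this record, but the conventional time-indexed binary encoding it is compared against can be sketched. Below is a minimal QUBO construction for TSP under that standard formulation; the penalty weights and distance matrix are hypothetical.

        # Standard TSP-to-QUBO mapping: binary x[v, t] = 1 if city v is visited
        # at tour position t. One-hot penalties enforce a valid tour; the
        # objective sums distances between consecutive cities. The constant
        # offset from the penalty expansion is dropped.
        import numpy as np

        def tsp_qubo(dist, A=10.0, B=1.0):
            n = dist.shape[0]
            idx = lambda v, t: v * n + t   # flatten (city, position) -> variable
            Q = np.zeros((n * n, n * n))

            for v in range(n):
                for t in range(n):
                    Q[idx(v, t), idx(v, t)] += -2 * A  # linear penalty terms
                    # Each city appears at exactly one position.
                    for t2 in range(t + 1, n):
                        Q[idx(v, t), idx(v, t2)] += 2 * A
                    # Each position hosts exactly one city.
                    for v2 in range(v + 1, n):
                        Q[idx(v, t), idx(v2, t)] += 2 * A

            # Tour length: consecutive cities contribute their distance.
            for u in range(n):
                for v in range(n):
                    if u != v:
                        for t in range(n):
                            Q[idx(u, t), idx(v, (t + 1) % n)] += B * dist[u, v]
            return Q

        # Hypothetical 4-city instance; energy of a 0/1 vector x is x @ Q @ x.
        d = np.array([[0, 2, 9, 4], [2, 0, 6, 3], [9, 6, 0, 5], [4, 3, 5, 0]], float)
        Q = tsp_qubo(d)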

  9. Quality and Rigor of the Concept Mapping Methodology: A Pooled Study Analysis

    ERIC Educational Resources Information Center

    Rosas, Scott R.; Kane, Mary

    2012-01-01

    The use of concept mapping in research and evaluation has expanded dramatically over the past 20 years. Researchers in academic, organizational, and community-based settings have applied concept mapping successfully without the benefit of systematic analyses across studies to identify the features of a methodologically sound study. Quantitative…

  10. A Different Approach to Preparing Novakian Concept Maps: The Indexing Method

    ERIC Educational Resources Information Center

    Turan Oluk, Nurcan; Ekmekci, Güler

    2016-01-01

    People who claim that applying Novakian concept maps in Turkish is problematic base their arguments largely upon the structural differences between the English and Turkish languages. This study aims to introduce the indexing method to eliminate problems encountered in Turkish applications of Novakian maps and to share the preliminary results of…

  11. 44 CFR 65.17 - Review of determinations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... determination; and (5) A copy of the effective NFIP map (Flood Hazard Boundary Map (FHBM) or Flood Insurance...) The name of the NFIP community in which the building or manufactured home is located; (ii) The... applies; (iii) The NFIP map panel number and effective date upon which the determination is based; (iv) A...

  12. 44 CFR 65.17 - Review of determinations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... determination; and (5) A copy of the effective NFIP map (Flood Hazard Boundary Map (FHBM) or Flood Insurance...) The name of the NFIP community in which the building or manufactured home is located; (ii) The... applies; (iii) The NFIP map panel number and effective date upon which the determination is based; (iv) A...

  13. 44 CFR 65.17 - Review of determinations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... determination; and (5) A copy of the effective NFIP map (Flood Hazard Boundary Map (FHBM) or Flood Insurance...) The name of the NFIP community in which the building or manufactured home is located; (ii) The... applies; (iii) The NFIP map panel number and effective date upon which the determination is based; (iv) A...

  14. 44 CFR 65.17 - Review of determinations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... determination; and (5) A copy of the effective NFIP map (Flood Hazard Boundary Map (FHBM) or Flood Insurance...) The name of the NFIP community in which the building or manufactured home is located; (ii) The... applies; (iii) The NFIP map panel number and effective date upon which the determination is based; (iv) A...

  15. 44 CFR 65.17 - Review of determinations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... determination; and (5) A copy of the effective NFIP map (Flood Hazard Boundary Map (FHBM) or Flood Insurance...) The name of the NFIP community in which the building or manufactured home is located; (ii) The... applies; (iii) The NFIP map panel number and effective date upon which the determination is based; (iv) A...

  16. Modeling Research Project Risks with Fuzzy Maps

    ERIC Educational Resources Information Center

    Bodea, Constanta Nicoleta; Dascalu, Mariana Iuliana

    2009-01-01

    The authors propose a risk evaluation model for research projects. The model is based on fuzzy inference. The knowledge base for the fuzzy process is built from a causal, cognitive map of risks. The map was developed specifically for research projects, taking into account their typical lifecycle. The model was applied to an e-testing research…
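
    The record gives no implementation details, but the fuzzy-inference step it describes can be illustrated with a minimal sketch using the third-party scikit-fuzzy package. The variables, membership functions, and rules below are hypothetical stand-ins for the knowledge base the authors derive from their risk map.

        # A minimal fuzzy risk-evaluation sketch with invented variables.
        import numpy as np
        import skfuzzy as fuzz
        from skfuzzy import control as ctrl

        # Hypothetical risk factors on a 0-10 scale.
        likelihood = ctrl.Antecedent(np.arange(0, 11, 1), 'likelihood')
        impact = ctrl.Antecedent(np.arange(0, 11, 1), 'impact')
        risk = ctrl.Consequent(np.arange(0, 11, 1), 'risk')

        for var in (likelihood, impact, risk):
            var['low'] = fuzz.trimf(var.universe, [0, 0, 5])
            var['medium'] = fuzz.trimf(var.universe, [2, 5, 8])
            var['high'] = fuzz.trimf(var.universe, [5, 10, 10])

        rules = [
            ctrl.Rule(likelihood['high'] & impact['high'], risk['high']),
            ctrl.Rule(likelihood['medium'] | impact['medium'], risk['medium']),
            ctrl.Rule(likelihood['low'] & impact['low'], risk['low']),
        ]

        sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
        sim.input['likelihood'] = 7.0  # e.g., staff turnover in a project
        sim.input['impact'] = 8.0
        sim.compute()
        print(sim.output['risk'])      # defuzzified risk score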

  17. Conformal mapping for multiple terminals

    PubMed Central

    Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao

    2016-01-01

    Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods resort to assumptions or approximations; a general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples: an electrostatic actuator with three electrodes and a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method should help promote the application of conformal mapping to the analysis of practical problems. PMID:27830746
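
    For orientation, the classical two-terminal case that the abstract contrasts with can be shown in a few lines: the map w = log z sends the annulus between two coaxial electrodes to a parallel-plate strip, where the electrostatic potential is linear. The radii and voltage below are hypothetical; the paper's multi-terminal folding construction is not reproduced here.

        # Two-terminal conformal mapping sketch: coaxial electrodes via w = log z.
        import numpy as np

        r1, r2, V = 1.0, 3.0, 10.0   # inner/outer electrode radii, voltage (assumed)

        def potential(z):
            """Potential between the electrodes via the conformal map w = log z."""
            w = np.log(z)            # annulus -> strip, Re(w) in [ln r1, ln r2]
            return V * (w.real - np.log(r1)) / (np.log(r2) - np.log(r1))

        # Evaluate on each electrode and check the boundary conditions.
        theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        print(potential(r1 * np.exp(1j * theta)))  # ~0 on the inner electrode
        print(potential(r2 * np.exp(1j * theta)))  # ~V on the outer electrode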

  18. Mean template for tensor-based morphometry using deformation tensors.

    PubMed

    Leporé, Natasha; Brun, Caroline; Pennec, Xavier; Chou, Yi-Yu; Lopez, Oscar L; Aizenstein, Howard J; Becker, James T; Toga, Arthur W; Thompson, Paul M

    2007-01-01

    Tensor-based morphometry (TBM) studies anatomical differences between brain images statistically, to identify regions that differ between groups, over time, or correlate with cognitive or clinical measures. Using a nonlinear registration algorithm, all images are mapped to a common space, and statistics are most commonly performed on the Jacobian determinant (local expansion factor) of the deformation fields. It was previously shown that the detection sensitivity of the standard TBM approach can be increased by using the full deformation tensors in a multivariate statistical analysis. Here we set out to improve the common space itself, by choosing the shape that minimizes a natural metric on the deformation tensors from that space to the population of control subjects. This method avoids statistical bias and should ease nonlinear registration of new subjects' data to a template that is 'closest' to all subjects' anatomies. As deformation tensors are symmetric positive-definite matrices and do not form a vector space, all computations are performed in the log-Euclidean framework. The control brain B that is already closest to 'average' is found. A gradient descent algorithm is then used to perform the minimization that iteratively deforms this template and obtains the mean shape. We apply our method to map the profile of anatomical differences in a dataset of 26 HIV/AIDS patients and 14 controls, via a log-Euclidean Hotelling's T2 test on the deformation tensors. These results are compared to those found using the 'best' control, B. Statistics on both shapes are evaluated using cumulative distribution functions of the p-values in maps of inter-group differences.
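
    The log-Euclidean averaging step described above admits a compact sketch: symmetric positive-definite tensors are mapped to a vector space by the matrix logarithm, averaged there, and mapped back by the matrix exponential. The tensors below are randomly generated placeholders, not deformation data.

        # Log-Euclidean mean of symmetric positive-definite (SPD) matrices.
        import numpy as np
        from scipy.linalg import expm, logm

        def log_euclidean_mean(tensors):
            """Mean of SPD matrices in the log-Euclidean framework."""
            logs = [logm(S) for S in tensors]
            return expm(np.mean(logs, axis=0))

        # Hypothetical 3x3 SPD deformation tensors (A @ A.T + I is SPD).
        rng = np.random.default_rng(0)
        tensors = []
        for _ in range(5):
            A = rng.normal(size=(3, 3))
            tensors.append(A @ A.T + np.eye(3))

        print(log_euclidean_mean(tensors))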

  19. Surface density mapping of natural tissue by a scanning haptic microscope (SHM).

    PubMed

    Moriwaki, Takeshi; Oie, Tomonori; Takamizawa, Keiichi; Murayama, Yoshinobu; Fukuda, Toru; Omata, Sadao; Nakayama, Yasuhide

    2013-02-01

    To expand the performance capacity of the scanning haptic microscope (SHM) beyond surface mapping microscopy of elastic modulus or topography, surface density mapping of a natural tissue was performed by applying the measurement theory of the SHM, in which a frequency change occurs upon contact of the sample surface with the SHM sensor, a microtactile sensor (MTS) that vibrates at a pre-determined constant oscillation frequency. This change is mainly stiffness-dependent at a low oscillation frequency and density-dependent at a high oscillation frequency. Two paragon examples with extremely different densities but similar macroscopic elastic moduli in the range of natural soft tissues were selected: agar hydrogels, with extremely low density (less than 25 mg/cm3), and silicone organogels, with high density (ca. 1300 mg/cm3). Measurements were performed in saline solution near the second-order resonance frequency, which yielded the elastic modulus, and near the third-order resonance frequency. In agar gels, there was little difference in the frequency changes between the two resonance frequencies. In contrast, in silicone gels, a large frequency change upon MTS contact was observed near the third-order resonance frequency, indicating that the frequency change near the third-order resonance frequency reflected changes in both density and elastic modulus. A density image of the canine aortic wall was therefore obtained by subtracting the image observed near the second-order resonance frequency from that observed near the third-order resonance frequency. The elastin-rich region had a higher density than the collagen-rich region.

  20. Genotyping by Sequencing in Almond: SNP Discovery, Linkage Mapping, and Marker Design.

    PubMed

    Goonetilleke, Shashi N; March, Timothy J; Wirthensohn, Michelle G; Arús, Pere; Walker, Amanda R; Mather, Diane E

    2018-01-04

    In crop plant genetics, linkage maps provide the basis for the mapping of loci that affect important traits and for the selection of markers to be applied in crop improvement. In outcrossing species such as almond (Prunus dulcis (Mill.) D. A. Webb), application of a double pseudotestcross mapping approach to the F1 progeny of a biparental cross leads to the construction of a linkage map for each parent. Here, we report on the application of genotyping by sequencing to discover and map single nucleotide polymorphisms in the almond cultivars "Nonpareil" and "Lauranne." Allele-specific marker assays were developed for 309 tag pairs. Application of these assays to 231 Nonpareil × Lauranne F1 progeny provided robust linkage maps for each parent. Analysis of phenotypic data for shell hardness demonstrated the utility of these maps for quantitative trait locus mapping. Comparison of these maps to the peach genome assembly confirmed high synteny and collinearity between the peach and almond genomes. The marker assays were applied to progeny from several other Nonpareil crosses, providing the basis for a composite linkage map of Nonpareil. Application of the assays to a panel of almond clones and a panel of rootstocks used for almond production demonstrated the broad applicability of the markers and provides subsets of markers that could be used to discriminate among accessions. The sequence-based linkage maps and single nucleotide polymorphism assays presented here could be useful resources for the genetic analysis and genetic improvement of almond. Copyright © 2018 Goonetilleke et al.
