NASA Technical Reports Server (NTRS)
Macdonald, H.; Waite, W.; Elachi, C.; Babcock, R.; Konig, R.; Gattis, J.; Borengasser, M.; Tolman, D.
1980-01-01
Imaging radar was evaluated as an adjunct to conventional petroleum exploration techniques, especially linear mapping. Linear features were mapped from several remote sensor data sources including stereo photography, enhanced LANDSAT imagery, SLAR radar imagery, enhanced SAR radar imagery, and SAR radar/LANDSAT combinations. Linear feature maps were compared with surface joint data, subsurface and geophysical data, and gas production in the Arkansas part of the Arkoma basin. The best LANDSAT enhanced product for linear detection was found to be a winter scene, band 7, uniform distribution stretch. Of the individual SAR data products, the VH (cross polarized) SAR radar mosaic provides for detection of most linears; however, none of the SAR enhancements is significantly better than the others. Radar/LANDSAT merges may provide better linear detection than a single sensor mapping mode, but because of operator variability, the results are inconclusive. Radar/LANDSAT combinations appear promising as an optimum linear mapping technique, if the advantages and disadvantages of each remote sensor are considered.
Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.
ERIC Educational Resources Information Center
Gleason, John M.
1993-01-01
This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)
Hippocampus Segmentation Based on Local Linear Mapping
Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin
2017-01-01
We propose local linear mapping (LLM), a novel fusion framework for distance field (DF) to perform automatic hippocampus segmentation. A k-means cluster method is propose for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to present the test sample, and combine the DF dictionary using the corresponding coefficients derived from local linear representation procedure to predict the DF of the test sample. We then merge the overlapped predicted DF patch to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enabled us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bi-lateral hippocampus, respectively. PMID:28368016
Hippocampus Segmentation Based on Local Linear Mapping.
Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin
2017-04-03
We propose local linear mapping (LLM), a novel fusion framework for distance field (DF) to perform automatic hippocampus segmentation. A k-means cluster method is propose for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to present the test sample, and combine the DF dictionary using the corresponding coefficients derived from local linear representation procedure to predict the DF of the test sample. We then merge the overlapped predicted DF patch to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enabled us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bi-lateral hippocampus, respectively.
Hippocampus Segmentation Based on Local Linear Mapping
NASA Astrophysics Data System (ADS)
Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin
2017-04-01
We propose local linear mapping (LLM), a novel fusion framework for distance field (DF) to perform automatic hippocampus segmentation. A k-means cluster method is propose for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to present the test sample, and combine the DF dictionary using the corresponding coefficients derived from local linear representation procedure to predict the DF of the test sample. We then merge the overlapped predicted DF patch to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enabled us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bi-lateral hippocampus, respectively.
Reciprocal space mapping and single-crystal scattering rods.
Smilgies, Detlef M; Blasini, Daniel R; Hotta, Shu; Yanagi, Hisao
2005-11-01
Reciprocal space mapping using a linear gas detector in combination with a matching Soller collimator has been applied to map scattering rods of well oriented organic microcrystals grown on a solid surface. Formulae are provided to correct image distortions in angular space and to determine the required oscillation range, in order to measure properly integrated scattering intensities.
Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.
Choi, Jae-Seok; Kim, Munchurl
2017-03-01
Super-resolution (SR) has become more vital, because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to be implemented for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between Peak-Signal-to-Noise Ratio (PSNR) performances and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: Previously, linear-mapping-based conventional SR methods, including SI only used one simple yet coarse linear mapping to each patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experiment results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN15). Compared with the previous SI method that is limited with a scale factor of 2, GLM-SI shows superior performance with average 0.79 dB higher in PSNR, and can be used for scale factors of 3 or higher.
NASA Astrophysics Data System (ADS)
Syusina, O. M.; Chernitsov, A. M.; Tamarov, V. A.
2011-07-01
The combined method for mapping of the confidence region of the motion of the small bodies of Solar system at any point in time is proposed. In this method firstly one carries out linear mapping of the initial region at given point in time. If nonlinear-ity coefficient of evaluated region provided to be larger of permissible ones, that one carries out linear mapping of the initial region at limit moment for which this region be ellipsoidal. After that obtained region is mapping at given point in time by nonlinear method.
Hypothalamic stimulation and baroceptor reflex interaction on renal nerve activity.
NASA Technical Reports Server (NTRS)
Wilson, M. F.; Ninomiya, I.; Franz, G. N.; Judy, W. V.
1971-01-01
The basal level of mean renal nerve activity (MRNA-0) measured in anesthetized cats was found to be modified by the additive interaction of hypothalamic and baroceptor reflex influences. Data were collected with the four major baroceptor nerves either intact or cut, and with mean aortic pressure (MAP) either clamped with a reservoir or raised with l-epinephrine. With intact baroceptor nerves, MRNA stayed essentially constant at level MRNA-0 for MAP below an initial pressure P1, and fell approximately linearly to zero as MAP was raised to P2. Cutting the baroceptor nerves kept MRNA at MRNA-0 (assumed to represent basal central neural output) independent of MAP. The addition of hypothalamic stimulation produced nearly constant increments in MRNA for all pressure levels up to P2, with complete inhibition at some level above P2. The increments in MRNA depended on frequency and location of the stimulus. A piecewise linear model describes MRNA as a linear combination of hypothalamic, basal central neural, and baroceptor reflex activity.
Linear and Non-Linear Visual Feature Learning in Rat and Humans
Bossens, Christophe; Op de Beeck, Hans P.
2016-01-01
The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chown, R.; et al.
We present three maps of the millimeter-wave sky created by combining data from the South Pole Telescope (SPT) and the Planck satellite. We use data from the SPT-SZ survey, a survey of 2540 deg$^2$ of the the sky with arcminute resolution in three bands centered at 95, 150, and 220 GHz, and the full-mission Planck temperature data in the 100, 143, and 217 GHz bands. A linear combination of the SPT-SZ and Planck data is computed in spherical harmonic space, with weights derived from the noise of both instruments. This weighting scheme results in Planck data providing most of themore » large-angular-scale information in the combined maps, with the smaller-scale information coming from SPT-SZ data. A number of tests have been done on the maps. We find their angular power spectra to agree very well with theoretically predicted spectra and previously published results.« less
Intensity Mapping Foreground Cleaning with Generalized Needlet Internal Linear Combination
NASA Astrophysics Data System (ADS)
Olivari, L. C.; Remazeilles, M.; Dickinson, C.
2018-05-01
Intensity mapping (IM) is a new observational technique to survey the large-scale structure of matter using spectral emission lines. IM observations are contaminated by instrumental noise and astrophysical foregrounds. The foregrounds are at least three orders of magnitude larger than the searched signals. In this work, we apply the Generalized Needlet Internal Linear Combination (GNILC) method to subtract radio foregrounds and to recover the cosmological HI and CO signals within the IM context. For the HI IM case, we find that GNILC can reconstruct the HI plus noise power spectra with 7.0% accuracy for z = 0.13 - 0.48 (960 - 1260 MHz) and l <~ 400, while for the CO IM case, we find that it can reconstruct the CO plus noise power spectra with 6.7% accuracy for z = 2.4 - 3.4 (26 - 34 GHz) and l <~ 3000.
In, Myung-Ho; Posnansky, Oleg; Speck, Oliver
2016-05-01
To accurately correct diffusion-encoding direction-dependent eddy-current-induced geometric distortions in diffusion-weighted echo-planar imaging (DW-EPI) and to minimize the calibration time at 7 Tesla (T). A point spread function (PSF) mapping based eddy-current calibration method is newly presented to determine eddy-current-induced geometric distortions even including nonlinear eddy-current effects within the readout acquisition window. To evaluate the temporal stability of eddy-current maps, calibration was performed four times within 3 months. Furthermore, spatial variations of measured eddy-current maps versus their linear superposition were investigated to enable correction in DW-EPIs with arbitrary diffusion directions without direct calibration. For comparison, an image-based eddy-current correction method was additionally applied. Finally, this method was combined with a PSF-based susceptibility-induced distortion correction approach proposed previously to correct both susceptibility and eddy-current-induced distortions in DW-EPIs. Very fast eddy-current calibration in a three-dimensional volume is possible with the proposed method. The measured eddy-current maps are very stable over time and very similar maps can be obtained by linear superposition of principal-axes eddy-current maps. High resolution in vivo brain results demonstrate that the proposed method allows more efficient eddy-current correction than the image-based method. The combination of both PSF-based approaches allows distortion-free images, which permit reliable analysis in diffusion tensor imaging applications at 7T. © 2015 Wiley Periodicals, Inc.
Analysis and application of ERTS-1 data for regional geological mapping
NASA Technical Reports Server (NTRS)
Gold, D. P.; Parizek, R. R.; Alexander, S. A.
1973-01-01
Combined visual and digital techniques of analysing ERTS-1 data for geologic information have been tried on selected areas in Pennsylvania. The major physiolographic and structural provinces show up well. Supervised mapping, following the imaged expression of known geologic features on ERTS band 5 enlargements (1:250,000) of parts of eastern Pennsylvania, delimited the Diabase Sills and the Precambrian rocks of the Reading Prong with remarkable accuracy. From unsupervised mapping, transgressive linear features are apparent in unexpected density, and exhibit strong control over river valley and stream channel directions. They are unaffected by bedrock type, age, or primary structural boundaries, which suggests they are either rejuvenated basement joint directions on different scales, or they are a recently impressed structure possibly associated with a drifting North American plate. With ground mapping and underflight data, 6 scales of linear features have been recognized.
Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea
2016-01-01
Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.
Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea
2017-01-01
Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future. PMID:28167896
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-09-21
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units ([Formula: see text]) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into [Formula: see text] was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of [Formula: see text] corresponding to the PET scanner intrinsic resolution. As expected TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
NASA Astrophysics Data System (ADS)
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-10-01
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU ) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4~mm corresponding to the PET scanner intrinsic resolution. As expected TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
Interpreting linear support vector machine models with heat map molecule coloring
2011-01-01
Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides a strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines showed to have a convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
NASA Astrophysics Data System (ADS)
Akgun, Aykut; Dag, Serhat; Bulut, Fikri
2008-05-01
Landslides are very common natural problems in the Black Sea Region of Turkey due to the steep topography, improper use of land cover and adverse climatic conditions for landslides. In the western part of region, many studies have been carried out especially in the last decade for landslide susceptibility mapping using different evaluation methods such as deterministic approach, landslide distribution, qualitative, statistical and distribution-free analyses. The purpose of this study is to produce landslide susceptibility maps of a landslide-prone area (Findikli district, Rize) located at the eastern part of the Black Sea Region of Turkey by likelihood frequency ratio (LRM) model and weighted linear combination (WLC) model and to compare the results obtained. For this purpose, landslide inventory map of the area were prepared for the years of 1983 and 1995 by detailed field surveys and aerial-photography studies. Slope angle, slope aspect, lithology, distance from drainage lines, distance from roads and the land-cover of the study area are considered as the landslide-conditioning parameters. The differences between the susceptibility maps derived by the LRM and the WLC models are relatively minor when broad-based classifications are taken into account. However, the WLC map showed more details but the other map produced by LRM model produced weak results. The reason for this result is considered to be the fact that the majority of pixels in the LRM map have high values than the WLC-derived susceptibility map. In order to validate the two susceptibility maps, both of them were compared with the landslide inventory map. Although the landslides do not exist in the very high susceptibility class of the both maps, 79% of the landslides fall into the high and very high susceptibility zones of the WLC map while this is 49% for the LRM map. This shows that the WLC model exhibited higher performance than the LRM model.
Fast or slow? Compressions (or not) in number-to-line mappings.
Candia, Victor; Deprez, Paola; Wernery, Jannis; Núñez, Rafael
2015-01-01
We investigated, in a university student population, spontaneous (non-speeded) fast and slow number-to-line mapping responses using non-symbolic (dots) and symbolic (words) stimuli. Seeking for less conventionalized responses, we used anchors 0-130, rather than the standard 0-100. Slow responses to both types of stimuli only produced linear mappings with no evidence of non-linear compression. In contrast, fast responses revealed distinct patterns of non-linear compression for dots and words. A predicted logarithmic compression was observed in fast responses to dots in the 0-130 range, but not in the reduced 0-100 range, indicating compression in proximity of the upper anchor 130, not the standard 100. Moreover, fast responses to words revealed an unexpected significant negative compression in the reduced 0-100 range, but not in the 0-130 range, indicating compression in proximity to the lower anchor 0. Results show that fast responses help revealing the fundamentally distinct nature of symbolic and non-symbolic quantity representation. Whole number words, being intrinsically mediated by cultural phenomena such as language and education, emphasize the invariance of magnitude between them—essential for linear mappings, and therefore, unlike non-symbolic (psychophysical) stimuli, yield spatial mappings that don't seem to be influenced by the Weber-Fechner law of psychophysics. However, high levels of education (when combined with an absence of standard upper anchors) may lead fast responses to overestimate magnitude invariance on the lower end of word numerals.
Forest fire risk assessment-an integrated approach based on multicriteria evaluation.
Goleiji, Elham; Hosseini, Seyed Mohsen; Khorasani, Nematollah; Monavari, Seyed Masoud
2017-11-06
The present study deals with application of the weighted linear combination method for zoning of forest fire risk in Dohezar and Sehezar region of Mazandaran province in northern Iran. In this study, the effective criteria for fires were identified by the Delphi method, and these included ecological and socioeconomic parameters. In this regard, the first step comprised of digital layers; the required data were provided from databases, related centers, and field data collected in the region. Then, the map of criteria was digitized in a geographic information system, and all criteria and indexes were normalized by fuzzy logic. After that, the geographic information system (GIS 10.3) was integrated with the Weighted Linear Combination and the Analytical Network Process, to produce zonation of the forest fire risk map in the Dohezar and Sehezar region. In order to analyze accuracy of the evaluation, the results obtained from the study were compared to records of former fire incidents in the region. This was done using the Kappa coefficient test and a receiver operating characteristic curve. The model showing estimations for forest fire risk explained that the prepared map had accuracy of 90% determined by the Kappa coefficient test and the value of 0.924 by receiver operating characteristic. These results showed that the prepared map had high accuracy and efficacy.
Principal components colour display of ERTS imagery
NASA Technical Reports Server (NTRS)
Taylor, M. M.
1974-01-01
In the technique presented, colours are not derived from single bands, but rather from independent linear combinations of the bands. Using a simple model of the processing done by the visual system, three informationally independent linear combinations of the four ERTS bands are mapped onto the three visual colour dimensions of brightness, redness-greenness and blueness-yellowness. The technique permits user-specific transformations which enhance particular features, but this is not usually needed, since a single transformation provides a picture which conveys much of the information implicit in the ERTS data. Examples of experimental vector images with matched individual band images are shown.
On-line Adaptive Radiation Treatment of Prostate Cancer
2008-01-01
novel imaging system using a linear x-ray source and a linear detector . This imaging system may significantly improve the quality of online images...yielded the Euclidean voxel distances nside the ROI. The two distance maps were combined with ositive distances outside and negative distances inside...is reduced by 1cm. IMRT is more sensitive to organ motion. Large discrepancies of bladder and rectum doses were observed compared to the actual
Combination of Eight Alleles at Four Quantitative Trait Loci Determines Grain Length in Rice
Zeng, Yuxiang; Ji, Zhijuan; Wen, Zhihua; Liang, Yan; Yang, Changdeng
2016-01-01
Grain length is an important quantitative trait in rice (Oryza sativa L.) that influences both grain yield and exterior quality. Although many quantitative trait loci (QTLs) for grain length have been identified, it is still unclear how different alleles from different QTLs regulate grain length coordinately. To explore the mechanisms of QTL combination in the determination of grain length, five mapping populations, including two F2 populations, an F3 population, an F7 recombinant inbred line (RIL) population, and an F8 RIL population, were developed from the cross between the U.S. tropical japonica variety ‘Lemont’ and the Chinese indica variety ‘Yangdao 4’ and grown under different environmental conditions. Four QTLs (qGL-3-1, qGL-3-2, qGL-4, and qGL-7) for grain length were detected using both composite interval mapping and multiple interval mapping methods in the mapping populations. In each locus, there was an allele from one parent that increased grain length and another allele from another parent that decreased it. The eight alleles in the four QTLs were analyzed to determine whether these alleles act additively across loci, and lead to a linear relationship between the predicted breeding value of QTLs and phenotype. Linear regression analysis suggested that the combination of eight alleles determined grain length. Plants carrying more grain length-increasing alleles had longer grain length than those carrying more grain length-decreasing alleles. This trend was consistent in all five mapping populations and demonstrated the regulation of grain length by the four QTLs. Thus, these QTLs are ideal resources for modifying grain length in rice. PMID:26942914
Multiwavelength observations of magnetic fields and related activity on XI Bootis A
NASA Technical Reports Server (NTRS)
Saar, Steven H.; Huovelin, J.; Linsky, Jeffrey L.; Giampapa, Mark S.; Jordan, Carole
1988-01-01
Preliminary results of coordinated observations of magnetic fields and related activity on the active dwarf, Xi Boo A, are presented. Combining the magnetic fluxes with the linear polarization data, a simple map of the stellar active regions is constructed.
NASA Astrophysics Data System (ADS)
Karahaliou, A.; Vassiou, K.; Skiadopoulos, S.; Kanavou, T.; Yiakoumelos, A.; Costaridou, L.
2009-07-01
The current study investigates whether texture features extracted from lesion kinetics feature maps can be used for breast cancer diagnosis. Fifty five women with 57 breast lesions (27 benign, 30 malignant) were subjected to dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) on 1.5T system. A linear-slope model was fitted pixel-wise to a representative lesion slice time series and fitted parameters were used to create three kinetic maps (wash out, time to peak enhancement and peak enhancement). 28 grey level co-occurrence matrices features were extracted from each lesion kinetic map. The ability of texture features per map in discriminating malignant from benign lesions was investigated using a Probabilistic Neural Network classifier. Additional classification was performed by combining classification outputs of most discriminating feature subsets from the three maps, via majority voting. The combined scheme outperformed classification based on individual maps achieving area under Receiver Operating Characteristics curve 0.960±0.029. Results suggest that heterogeneity of breast lesion kinetics, as quantified by texture analysis, may contribute to computer assisted tissue characterization in DCE-MRI.
NASA Astrophysics Data System (ADS)
Qie, G.; Wang, G.; Wang, M.
2016-12-01
Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most of previous studies, these factors are often ignored, which thus result in underestimation of city vegetation carbon density. In this study we presented an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow remove analysis (LSRA) on remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and utilized and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables statistically significantly contributing to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with the ones using traditional methods which ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential on improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, Pramoda Kumar; Jain, Pankaj; Saha, Rajib
We estimate cosmic microwave background (CMB) polarization and temperature power spectra using Wilkinson Microwave Anisotropy Probe (WMAP) 5 year foreground contaminated maps. The power spectrum is estimated by using a model-independent method, which does not utilize directly the diffuse foreground templates nor the detector noise model. The method essentially consists of two steps: (1) removal of diffuse foregrounds contamination by making linear combination of individual maps in harmonic space and (2) cross-correlation of foreground cleaned maps to minimize detector noise bias. For the temperature power spectrum we also estimate and subtract residual unresolved point source contamination in the cross-power spectrummore » using the point source model provided by the WMAP science team. Our TT, TE, and EE power spectra are in good agreement with the published results of the WMAP science team. We perform detailed numerical simulations to test for bias in our procedure. We find that the bias is small in almost all cases. A negative bias at low l in TT power spectrum has been pointed out in an earlier publication. We find that the bias-corrected quadrupole power (l(l + 1)C{sub l} /2{pi}) is 532 {mu}K{sup 2}, approximately 2.5 times the estimate (213.4 {mu}K{sup 2}) made by the WMAP team.« less
Pesticide adsorption in relation to soil properties and soil type distribution in regional scale.
Kodešová, Radka; Kočárek, Martin; Kodeš, Vít; Drábek, Ondřej; Kozák, Josef; Hejtmánková, Kateřina
2011-02-15
Study was focused on the evaluation of pesticide adsorption in soils, as one of the parameters, which are necessary to know when assessing possible groundwater contamination caused by pesticides commonly used in agriculture. Batch sorption tests were performed for 11 selected pesticides and 13 representative soils. The Freundlich equations were used to describe adsorption isotherms. Multiple-linear regressions were used to predict the Freundlich adsorption coefficients from measured soil properties. Resulting functions and a soil map of the Czech Republic were used to generate maps of the coefficient distribution. The multiple linear regressions showed that the K(F) coefficient depended on: (a) combination of OM (organic matter content), pH(KCl) and CEC (cation exchange capacity), or OM, SCS (sorption complex saturation) and salinity (terbuthylazine), (b) combination of OM and pH(KCl), or OM, SCS and salinity (prometryne), (c) combination of OM and pH(KCl), or OM and ρ(z) (metribuzin), (d) combination of OM, CEC and clay content, or clay content, CEC and salinity (hexazinone), (e) combination of OM and pH(KCl), or OM and SCS (metolachlor), (f) OM or combination of OM and CaCO(3) (chlorotoluron), (g) OM (azoxystrobin), (h) combination of OM and pH(KCl) (trifluralin), (i) combination of OM and clay content (fipronil), (j) combination of OM and pH(KCl), or OM, pH(KCl) and CaCO(3) (thiacloprid), (k) combination of OM, pH(KCl) and CEC, or sand content, pH(KCl) and salinity (chlormequat chloride). Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, L.; Wang, G.
2017-12-01
Snow cover is one of key elements in the investigations of weather, climatic change, water resource, and snow hazard. Satellites observations from on-board optical sensors provides the ability to snow cover mapping through the discrimination of snow from other surface features and cloud. MODIS provides maximum of snow cover data using 8-day composition data in order to reduce the cloud obscuration impacts. However, snow cover mapping is often required to obtain at the temporal scale of less than one day, especially in the case of disasters. Geostationary satellites provide much higher temporal resolution measurements (typically at 15 min or half or one hour), which has a great potential to reduce cloud cover problem and observe ground surface for identifying snow. The proposed method in this work is that how to take the advantages of polar-orbiting and geostationary optical sensors to accurately map snow cover without data gaps due to cloud. FY-2 geostationary satellites have high temporal resolution observations, however, they are lacking enough spectral bands essential for snow cover monitoring, such as the 1.6 μm band. Based on our recent work (Wang et al., 2017), we improved FY-2/VISSR fractional snow cover estimation with a linear spectral unmixing analysis method. The linear approach is applied then using the reflectance observed at the certain hourly image of FY-2 to calculate pixel-wise snow cover fraction. The composition of daily factional snow cover employs the sun zenith angle, where the snow fraction under lowest sun zenith angle is considered as the most confident result. FY-2/VISSR fractional snow cover map has less cloud due to the composition of multi-temporal snow maps in a single day. In order to get an accurate and cloud-reduced fractional snow cover map, both of MODIS and FY-2/VISSR daily snow fraction maps are blended together. With the combination of FY-2E/VISSR and MODIS, there are still some cloud existing in the daily snow fraction map. Then the combination snow fraction map is temporally reconstructed using MATLAB Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) function to derive a completely daily cloud-free snow cover map under all the sky conditions.
NASA Technical Reports Server (NTRS)
Fagan, Matthew E.; Defries, Ruth S.; Sesnie, Steven E.; Arroyo-Mora, J. Pablo; Soto, Carlomagno; Singh, Aditya; Townsend, Philip A.; Chazdon, Robin L.
2015-01-01
An efficient means to map tree plantations is needed to detect tropical land use change and evaluate reforestation projects. To analyze recent tree plantation expansion in northeastern Costa Rica, we examined the potential of combining moderate-resolution hyperspectral imagery (2005 HyMap mosaic) with multitemporal, multispectral data (Landsat) to accurately classify (1) general forest types and (2) tree plantations by species composition. Following a linear discriminant analysis to reduce data dimensionality, we compared four Random Forest classification models: hyperspectral data (HD) alone; HD plus interannual spectral metrics; HD plus a multitemporal forest regrowth classification; and all three models combined. The fourth, combined model achieved overall accuracy of 88.5%. Adding multitemporal data significantly improved classification accuracy (p less than 0.0001) of all forest types, although the effect on tree plantation accuracy was modest. The hyperspectral data alone classified six species of tree plantations with 75% to 93% producer's accuracy; adding multitemporal spectral data increased accuracy only for two species with dense canopies. Non-native tree species had higher classification accuracy overall and made up the majority of tree plantations in this landscape. Our results indicate that combining occasionally acquired hyperspectral data with widely available multitemporal satellite imagery enhances mapping and monitoring of reforestation in tropical landscapes.
Resultant as the determinant of a Koszul complex
NASA Astrophysics Data System (ADS)
Anokhina, A. S.; Morozov, A. Yu.; Shakirov, Sh. R.
2009-09-01
The determinant is a very important characteristic of a linear map between vector spaces. Two generalizations of linear maps are intensively used in modern theory: linear complexes (nilpotent chains of linear maps) and nonlinear maps. The determinant of a complex and the resultant are then the corresponding generalizations of the determinant of a linear map. It turns out that these two quantities are related: the resultant of a nonlinear map is the determinant of the corresponding Koszul complex. We give an elementary introduction into these notions and relations, which will definitely play a role in the future development of theoretical physics.
Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian
2014-01-01
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered to the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT and MRI-based attenuation corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally as well as regionally.
Natural Resources Inventory and Land Evaluation in Switzerland
NASA Technical Reports Server (NTRS)
Haefner, H. (Principal Investigator)
1975-01-01
The author has identified the following significant results. A system was developed to operationally map and measure the areal extent of various land use categories for updating existing and producing new and actual thematic maps showing the latest state of rural and urban landscapes and its changes. The processing system includes: (1) preprocessing steps for radiometric and geometric corrections; (2) classification of the data by a multivariate procedure, using a stepwise linear discriminant analysis based on carefully selected training cells; and (3) output in form of color maps by printing black and white theme overlays of a selected scale with photomation system and its coloring and combination into a color composite.
Maximally Informative Statistics for Localization and Mapping
NASA Technical Reports Server (NTRS)
Deans, Matthew C.
2001-01-01
This paper presents an algorithm for localization and mapping for a mobile robot using monocular vision and odometry as its means of sensing. The approach uses the Variable State Dimension filtering (VSDF) framework to combine aspects of Extended Kalman filtering and nonlinear batch optimization. This paper describes two primary improvements to the VSDF. The first is to use an interpolation scheme based on Gaussian quadrature to linearize measurements rather than relying on analytic Jacobians. The second is to replace the inverse covariance matrix in the VSDF with its Cholesky factor to improve the computational complexity. Results of applying the filter to the problem of localization and mapping with omnidirectional vision are presented.
Use of Satellite Remote Sensing Data in the Mapping of Global Landslide Susceptibility
NASA Technical Reports Server (NTRS)
Hong, Yang; Adler, Robert F.; Huffman, George J.
2007-01-01
Satellite remote sensing data has significant potential use in analysis of natural hazards such as landslides. Relying on the recent advances in satellite remote sensing and geographic information system (GIS) techniques, this paper aims to map landslide susceptibility over most of the globe using a GIs-based weighted linear combination method. First , six relevant landslide-controlling factors are derived from geospatial remote sensing data and coded into a GIS system. Next, continuous susceptibility values from low to high are assigned to each of the six factors. Second, a continuous scale of a global landslide susceptibility index is derived using GIS weighted linear combination based on each factor's relative significance to the process of landslide occurrence (e.g., slope is the most important factor, soil types and soil texture are also primary-level parameters, while elevation, land cover types, and drainage density are secondary in importance). Finally, the continuous index map is further classified into six susceptibility categories. Results show the hot spots of landslide-prone regions include the Pacific Rim, the Himalayas and South Asia, Rocky Mountains, Appalachian Mountains, Alps, and parts of the Middle East and Africa. India, China, Nepal, Japan, the USA, and Peru are shown to have landslide-prone areas. This first-cut global landslide susceptibility map forms a starting point to provide a global view of landslide risks and may be used in conjunction with satellite-based precipitation information to potentially detect areas with significant landslide potential due to heavy rainfall. 1
Chuang, Li-Yeh; Moi, Sin-Hua; Lin, Yu-Da; Yang, Cheng-Hong
2016-10-01
Evolutionary algorithms could overcome the computational limitations for the statistical evaluation of large datasets for high-order single nucleotide polymorphism (SNP) barcodes. Previous studies have proposed several chaotic particle swarm optimization (CPSO) methods to detect SNP barcodes for disease analysis (e.g., for breast cancer and chronic diseases). This work evaluated additional chaotic maps combined with the particle swarm optimization (PSO) method to detect SNP barcodes using a high-dimensional dataset. Nine chaotic maps were used to improve PSO method results and compared the searching ability amongst all CPSO methods. The XOR and ZZ disease models were used to compare all chaotic maps combined with PSO method. Efficacy evaluations of CPSO methods were based on statistical values from the chi-square test (χ 2 ). The results showed that chaotic maps could improve the searching ability of PSO method when population are trapped in the local optimum. The minor allele frequency (MAF) indicated that, amongst all CPSO methods, the numbers of SNPs, sample size, and the highest χ 2 value in all datasets were found in the Sinai chaotic map combined with PSO method. We used the simple linear regression results of the gbest values in all generations to compare the all methods. Sinai chaotic map combined with PSO method provided the highest β values (β≥0.32 in XOR disease model and β≥0.04 in ZZ disease model) and the significant p-value (p-value<0.001 in both the XOR and ZZ disease models). The Sinai chaotic map was found to effectively enhance the fitness values (χ 2 ) of PSO method, indicating that the Sinai chaotic map combined with PSO method is more effective at detecting potential SNP barcodes in both the XOR and ZZ disease models. Copyright © 2016 Elsevier B.V. All rights reserved.
Locally linear regression for pose-invariant face recognition.
Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen
2007-07-01
The variation of facial appearance due to the viewpoint (/pose) degrades face recognition systems considerably, which is one of the bottlenecks in face recognition. One of the possible solutions is generating virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple, but efficient, novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show distinct advantage of the proposed method over Eigen light-field method.
Using Chaotic System in Encryption
NASA Astrophysics Data System (ADS)
Findik, Oğuz; Kahramanli, Şirzat
In this paper chaotic systems and RSA encryption algorithm are combined in order to develop an encryption algorithm which accomplishes the modern standards. E.Lorenz's weather forecast' equations which are used to simulate non-linear systems are utilized to create chaotic map. This equation can be used to generate random numbers. In order to achieve up-to-date standards and use online and offline status, a new encryption technique that combines chaotic systems and RSA encryption algorithm has been developed. The combination of RSA algorithm and chaotic systems makes encryption system.
NASA Astrophysics Data System (ADS)
Evans, Alan C.; Dai, Weiqian; Collins, D. Louis; Neelin, Peter; Marrett, Sean
1991-06-01
We describe the implementation, experience and preliminary results obtained with a 3-D computerized brain atlas for topographical and functional analysis of brain sub-regions. A volume-of-interest (VOI) atlas was produced by manual contouring on 64 adjacent 2 mm-thick MRI slices to yield 60 brain structures in each hemisphere which could be adjusted, originally by global affine transformation or local interactive adjustments, to match individual MRI datasets. We have now added a non-linear deformation (warp) capability (Bookstein, 1989) into the procedure for fitting the atlas to the brain data. Specific target points are identified in both atlas and MRI spaces which define a continuous 3-D warp transformation that maps the atlas on to the individual brain image. The procedure was used to fit MRI brain image volumes from 16 young normal volunteers. Regional volume and positional variability were determined, the latter in such a way as to assess the extent to which previous linear models of brain anatomical variability fail to account for the true variation among normal individuals. Using a linear model for atlas deformation yielded 3-D fits of the MRI data which, when pooled across subjects and brain regions, left a residual mis-match of 6 - 7 mm as compared to the non-linear model. The results indicate a substantial component of morphometric variability is not accounted for by linear scaling. This has profound implications for applications which employ stereotactic coordinate systems which map individual brains into a common reference frame: quantitative neuroradiology, stereotactic neurosurgery and cognitive mapping of normal brain function with PET. In the latter case, the combination of a non-linear deformation algorithm would allow for accurate measurement of individual anatomic variations and the inclusion of such variations in inter-subject averaging methodologies used for cognitive mapping with PET.
Information Processing Capacity of Dynamical Systems
NASA Astrophysics Data System (ADS)
Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge
2012-07-01
Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
Information Processing Capacity of Dynamical Systems
Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge
2012-01-01
Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing social, environmental and economic consequences raised from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically being inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. An approach for automated creation of revised maps was developed as a Bayesian Mapping model in this paper by integrating the knowledge extracted from sensors raw data and available statutory records. The combination of statutory records with the hypotheses from sensors was for initial estimation of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypotheses extraction and Bayesian classification techniques for segment-manhole connections. The model consisting of image segmentation algorithm and various Bayesian classification techniques (segment recognition and expectation maximization (EM) algorithm) provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
A 2500 deg 2 CMB Lensing Map from Combined South Pole Telescope and Planck Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omori, Y.; Chown, R.; Simard, G.
Here, we present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and Planck temperature data. The 150 GHz temperature data from the 2500 deg 2 SPT-SZ survey is combined with the Planck 143 GHz data in harmonic space to obtain a temperature map that has a broader ℓ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potentialmore » $${C}_{L}^{\\phi \\phi }$$, and compare it to the theoretical prediction for a ΛCDM cosmology consistent with the Planck 2015 data set, finding a best-fit amplitude of $${0.95}_{-0.06}^{+0.06}(\\mathrm{stat}.{)}_{-0.01}^{+0.01}(\\mathrm{sys}.)$$. The null hypothesis of no lensing is rejected at a significance of 24σ. One important use of such a lensing potential map is in cross-correlations with other dark matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum, $${C}_{L}^{\\phi G}$$, between the SPT+Planck lensing map and Wide-field Infrared Survey Explorer (WISE) galaxies. We fit $${C}_{L}^{\\phi G}$$ to a power law of the form $${p}_{L}=a{(L/{L}_{0})}^{-b}$$ with a, L 0, and b fixed, and find $${\\eta }^{\\phi G}={C}_{L}^{\\phi G}/{p}_{L}={0.94}_{-0.04}^{+0.04}$$, which is marginally lower, but in good agreement with $${\\eta }^{\\phi G}={1.00}_{-0.01}^{+0.02}$$, the best-fit amplitude for the cross-correlation of Planck-2015 CMB lensing and WISE galaxies over ~67% of the sky. Finally, the lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey, whose footprint nearly completely covers the SPT 2500 deg 2 field.« less
A 2500 deg 2 CMB Lensing Map from Combined South Pole Telescope and Planck Data
Omori, Y.; Chown, R.; Simard, G.; ...
2017-11-07
Here, we present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and Planck temperature data. The 150 GHz temperature data from the 2500 deg 2 SPT-SZ survey is combined with the Planck 143 GHz data in harmonic space to obtain a temperature map that has a broader ℓ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potentialmore » $${C}_{L}^{\\phi \\phi }$$, and compare it to the theoretical prediction for a ΛCDM cosmology consistent with the Planck 2015 data set, finding a best-fit amplitude of $${0.95}_{-0.06}^{+0.06}(\\mathrm{stat}.{)}_{-0.01}^{+0.01}(\\mathrm{sys}.)$$. The null hypothesis of no lensing is rejected at a significance of 24σ. One important use of such a lensing potential map is in cross-correlations with other dark matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum, $${C}_{L}^{\\phi G}$$, between the SPT+Planck lensing map and Wide-field Infrared Survey Explorer (WISE) galaxies. We fit $${C}_{L}^{\\phi G}$$ to a power law of the form $${p}_{L}=a{(L/{L}_{0})}^{-b}$$ with a, L 0, and b fixed, and find $${\\eta }^{\\phi G}={C}_{L}^{\\phi G}/{p}_{L}={0.94}_{-0.04}^{+0.04}$$, which is marginally lower, but in good agreement with $${\\eta }^{\\phi G}={1.00}_{-0.01}^{+0.02}$$, the best-fit amplitude for the cross-correlation of Planck-2015 CMB lensing and WISE galaxies over ~67% of the sky. Finally, the lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey, whose footprint nearly completely covers the SPT 2500 deg 2 field.« less
He, Bo; Zhang, Shujing; Yan, Tianhong; Zhang, Tao; Liang, Yan; Zhang, Hongjin
2011-01-01
Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to deal with the computational efficiency problem required for large scale simultaneous localization and mapping (SLAM) and its related accuracy and consistency. Among these methods, submap-based SLAM is a more effective one. By combining the strength of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and extended information filter (EIF), this paper presents a combined SLAM-an efficient submap-based solution to the SLAM problem in a large scale environment. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM can avoid linearization of the robot model during operating and provide a robust data association, while EIF-SLAM can improve the whole computational speed, and avoid the tendency of RBPF-SLAM to be over-confident. In order to further improve the computational speed in a real time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed combined SLAM algorithm significantly outperforms currently existing algorithms in terms of accuracy and consistency, as well as the computing efficiency. Finally, the combined SLAM algorithm is experimentally validated in a real environment by using the Victoria Park dataset.
NASA Astrophysics Data System (ADS)
Carucci, Isabella P.; Villaescusa-Navarro, Francisco; Viel, Matteo
2017-04-01
We investigate the cross-correlation signal between 21cm intensity mapping maps and the Lyα forest in the fully non-linear regime using state-of-the-art hydrodynamic simulations. The cross-correlation signal between the Lyα forest and 21cm maps can provide a coherent and comprehensive picture of the neutral hydrogen (HI) content of our Universe in the post-reionization era, probing both its mass content and volume distribution. We compute the auto-power spectra of both fields together with their cross-power spectrum at z = 2.4 and find that on large scales the fields are completely anti-correlated. This anti-correlation arises because regions with high (low) 21cm emission, such as those with a large (low) concentration of damped Lyα systems, will show up as regions with low (high) transmitted flux. We find that on scales smaller than k simeq 0.2 hMpc-1 the cross-correlation coefficient departs from -1, at a scale where non-linearities show up. We use the anisotropy of the power spectra in redshift-space to determine the values of the bias and of the redshift-space distortion parameters of both fields. We find that the errors on the value of the cosmological and astrophysical parameters could decrease by 30% when adding data from the cross-power spectrum, in a conservative analysis. Our results point out that linear theory is capable of reproducing the shape and amplitude of the cross-power up to rather non-linear scales. Finally, we find that the 21cm-Lyα cross-power spectrum can be detected by combining data from a BOSS-like survey together with 21cm intensity mapping observations by SKA1-MID with a S/N ratio higher than 3 in kin[0.06,1] hMpc-1. We emphasize that while the shape and amplitude of the 21cm auto-power spectrum can be severely affected by residual foreground contamination, cross-power spectra will be less sensitive to that and therefore can be used to identify systematics in the 21cm maps.
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1983-01-01
The geometric quality of the TM and MSS film products were evaluated by making selective photo measurements such as scale, linear and area determinations; and by measuring the coordinates of known features on both the film products and map products and then relating these paired observations using a standard linear least squares regression approach. Quantitative interpretation tests are described which evaluate the quality and utility of the TM film products and various band combinations for detecting and identifying important forest and agricultural features.
NASA Astrophysics Data System (ADS)
Cianciara, Aleksander J.; Anderson, Christopher J.; Chen, Xuelei; Chen, Zhiping; Geng, Jingchao; Li, Jixia; Liu, Chao; Liu, Tao; Lu, Wing; Peterson, Jeffrey B.; Shi, Huli; Steffel, Catherine N.; Stebbins, Albert; Stucky, Thomas; Sun, Shijie; Timbie, Peter T.; Wang, Yougang; Wu, Fengquan; Zhang, Juyong
A wide bandwidth, dual polarized, modified four-square antenna is presented as a feed antenna for radio astronomical measurements. A linear array of these antennas is used as a line-feed for cylindrical reflectors for Tianlai, a radio interferometer designed for 21cm intensity mapping. Simulations of the feed antenna beam patterns and scattering parameters are compared to experimental results at multiple frequencies across the 650-1420MHz range. Simulations of the beam patterns of the combined feed array/reflector are presented as well.
A componential model of human interaction with graphs: 1. Linear regression modeling
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert
1994-01-01
Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes-searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and repsonding; and (2) that the type of graph and user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limitation of spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widesperead in hyperspectral imagery. The traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To solve this question, this paper proposed a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics and the linear subpixel feature; the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined by using template matching method. Finally, the whole subpixel mapping results are iteratively optimized by binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Inverse full state hybrid projective synchronization for chaotic maps with different dimensions
NASA Astrophysics Data System (ADS)
Ouannas, Adel; Grassi, Giuseppe
2016-09-01
A new synchronization scheme for chaotic (hyperchaotic) maps with different dimensions is presented. Specifically, given a drive system map with dimension n and a response system with dimension m, the proposed approach enables each drive system state to be synchronized with a linear response combination of the response system states. The method, based on the Lyapunov stability theory and the pole placement technique, presents some useful features: (i) it enables synchronization to be achieved for both cases of n < m and n > m; (ii) it is rigorous, being based on theorems; (iii) it can be readily applied to any chaotic (hyperchaotic) maps defined to date. Finally, the capability of the approach is illustrated by synchronization examples between the two-dimensional Hénon map (as the drive system) and the three-dimensional hyperchaotic Wang map (as the response system), and the three-dimensional Hénon-like map (as the drive system) and the two-dimensional Lorenz discrete-time system (as the response system).
Mapping quantitative trait loci for traits defined as ratios.
Yang, Runqing; Li, Jiahan; Xu, Shizhong
2008-03-01
Many traits are defined as ratios of two quantitative traits. Methods of QTL mapping for regular quantitative traits are not optimal when applied to ratios due to lack of normality for traits defined as ratios. We develop a new method of QTL mapping for traits defined as ratios. The new method uses a special linear combination of the two component traits, and thus takes advantage of the normal property of the new variable. Simulation study shows that the new method can substantially increase the statistical power of QTL detection relative to the method which treats ratios as regular quantitative traits. The new method also outperforms the method that uses Box-Cox transformed ratio as the phenotype. A real example of QTL mapping for relative growth rate in soybean demonstrates that the new method can detect more QTL than existing methods of QTL mapping for traits defined as ratios.
Gritti, Fabrice
2016-11-18
An new class of gradient liquid chromatography (GLC) is proposed and its performance is analyzed from a theoretical viewpoint. During the course of such gradients, both the solvent strength and the column temperature are simultaneously changed in time and space. The solvent and temperature gradients propagate along the chromatographic column at their own and independent linear velocity. This class of gradient is called combined solvent- and temperature-programmed gradient liquid chromatography (CST-GLC). The general expressions of the retention time, retention factor, and of the temporal peak width of the analytes at elution in CST-GLC are derived for linear solvent strength (LSS) retention models, modified van't Hoff retention behavior, linear and non-distorted solvent gradients, and for linear temperature gradients. In these conditions, the theory predicts that CST-GLC is equivalent to a unique and apparent dynamic solvent gradient. The apparent solvent gradient steepness is the sum of the solvent and temperature steepness. The apparent solvent linear velocity is the reciprocal of the steepness-averaged sum of the reciprocal of the actual solvent and temperature linear velocities. The advantage of CST-GLC over conventional GLC is demonstrated for the resolution of protein digests (peptide mapping) when applying smooth, retained, and linear acetonitrile gradients in combination with a linear temperature gradient (from 20°C to 90°C) using 300μm×150mm capillary columns packed with sub-2 μm particles. The benefit of CST-GLC is demonstrated when the temperature gradient propagates at the same velocity as the chromatographic speed. The experimental proof-of-concept for the realization of temperature ramps propagating at a finite and constant linear velocity is also briefly described. Copyright © 2016 Elsevier B.V. All rights reserved.
Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian
2014-01-01
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this “Atlas-T1w-DUTE” approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered to the “silver standard”; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT and MRI-based attenuation corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally as well as regionally. PMID:24753982
Radar response to vegetation. [soil moisture mapping via microwave backscattering
NASA Technical Reports Server (NTRS)
Ulaby, F. T.
1975-01-01
Active microwave measurements of vegetation backscatter were conducted to determine the utility of radar in mapping soil moisture through vegetation and mapping crop types. Using a truck-mounted boom, spectral response data were obtained for four crop types (corn, milo, soybeans, and alfalfa) over the 4-8 GHz frequency band, at incidence angles of 0 to 70 degrees in 10-degree steps, and for all four linear polarization combinations. Based on a total of 125 data sets covering a wide range of soil moisture, content, system design criteria are proposed for each of the aforementioned objectives. Quantitative soil moisture determination was best achieved at the lower frequency end of the 4-8 GHz band using HH polarized waves in the 5- to 15-degree incidence angle range. A combination of low and high frequency measurements are suggested for classifying crop types. For crop discrimination, a dual-frequency dual-polarization (VV and cross) system operating at incidence angles above 40 degrees is suggested.
The morphing of geographical features by Fourier transformation.
Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps. They are conversion from vector data to Fourier series, generation of intermediate function by combination of the two Fourier series concerning a large scale and a small scale, and reverse conversion from combination function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and it can be used for vector map features' continuous scale transformation. The efficiency of this model is linearly related to the point number of shape boundary and the interceptive value n of Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
Symmetries of the Space of Linear Symplectic Connections
NASA Astrophysics Data System (ADS)
Fox, Daniel J. F.
2017-01-01
There is constructed a family of Lie algebras that act in a Hamiltonian way on the symplectic affine space of linear symplectic connections on a symplectic manifold. The associated equivariant moment map is a formal sum of the Cahen-Gutt moment map, the Ricci tensor, and a translational term. The critical points of a functional constructed from it interpolate between the equations for preferred symplectic connections and the equations for critical symplectic connections. The commutative algebra of formal sums of symmetric tensors on a symplectic manifold carries a pair of compatible Poisson structures, one induced from the canonical Poisson bracket on the space of functions on the cotangent bundle polynomial in the fibers, and the other induced from the algebraic fiberwise Schouten bracket on the symmetric algebra of each fiber of the cotangent bundle. These structures are shown to be compatible, and the required Lie algebras are constructed as central extensions of their! linear combinations restricted to formal sums of symmetric tensors whose first order term is a multiple of the differential of its zeroth order term.
More memory under evolutionary learning may lead to chaos
NASA Astrophysics Data System (ADS)
Diks, Cees; Hommes, Cars; Zeppini, Paolo
2013-02-01
We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.
A methodology for physically based rockfall hazard assessment
NASA Astrophysics Data System (ADS)
Crosta, G. B.; Agliardi, F.
Rockfall hazard assessment is not simple to achieve in practice and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell, are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at a regional and local scale both along linear features or within exposed areas. An objective approach based on three-dimensional matrixes providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrixes has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to constrain as objective as possible assessment criteria.
A 2500 deg 2 CMB Lensing Map from Combined South Pole Telescope and Planck Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omori, Y.; Chown, R.; Simard, G.
We present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and \\emph{Planck} temperature data. The 150 GHz temperature data from themore » $$2500\\ {\\rm deg}^{2}$$ SPT-SZ survey is combined with the \\emph{Planck} 143 GHz data in harmonic space, to obtain a temperature map that has a broader $$\\ell$$ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potential $$C_{L}^{\\phi\\phi}$$, and compare it to the theoretical prediction for a $$\\Lambda$$CDM cosmology consistent with the \\emph{Planck} 2015 data set, finding a best-fit amplitude of $$0.95_{-0.06}^{+0.06}({\\rm Stat.})\\! _{-0.01}^{+0.01}({\\rm Sys.})$$. The null hypothesis of no lensing is rejected at a significance of $$24\\,\\sigma$$. One important use of such a lensing potential map is in cross-correlations with other dark matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum, $$C_{L}^{\\phi G}$$, between the SPT+\\emph{Planck} lensing map and Wide-field Infrared Survey Explorer (\\emph{WISE}) galaxies. We fit $$C_{L}^{\\phi G}$$ to a power law of the form $$p_{L}=a(L/L_{0})^{-b}$$ with $$a=2.15 \\times 10^{-8}$$, $b=1.35$, $$L_{0}=490$$, and find $$\\eta^{\\phi G}=0.94^{+0.04}_{-0.04}$$, which is marginally lower, but in good agreement with $$\\eta^{\\phi G}=1.00^{+0.02}_{-0.01}$$, the best-fit amplitude for the cross-correlation of \\emph{Planck}-2015 CMB lensing and \\emph{WISE} galaxies over $$\\sim67\\%$$ of the sky. The lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey (DES), whose footprint nearly completely covers the SPT $$2500\\ {\\rm deg}^2$$ field.« less
A 2500 deg 2 CMB Lensing Map from Combined South Pole Telescope and Planck Data
Omori, Y.; Chown, R.; Simard, G.; ...
2017-11-07
We present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and \\emph{Planck} temperature data. The 150 GHz temperature data from themore » $$2500\\ {\\rm deg}^{2}$$ SPT-SZ survey is combined with the \\emph{Planck} 143 GHz data in harmonic space, to obtain a temperature map that has a broader $$\\ell$$ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potential $$C_{L}^{\\phi\\phi}$$, and compare it to the theoretical prediction for a $$\\Lambda$$CDM cosmology consistent with the \\emph{Planck} 2015 data set, finding a best-fit amplitude of $$0.95_{-0.06}^{+0.06}({\\rm Stat.})\\! _{-0.01}^{+0.01}({\\rm Sys.})$$. The null hypothesis of no lensing is rejected at a significance of $$24\\,\\sigma$$. One important use of such a lensing potential map is in cross-correlations with other dark matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum, $$C_{L}^{\\phi G}$$, between the SPT+\\emph{Planck} lensing map and Wide-field Infrared Survey Explorer (\\emph{WISE}) galaxies. We fit $$C_{L}^{\\phi G}$$ to a power law of the form $$p_{L}=a(L/L_{0})^{-b}$$ with $$a=2.15 \\times 10^{-8}$$, $b=1.35$, $$L_{0}=490$$, and find $$\\eta^{\\phi G}=0.94^{+0.04}_{-0.04}$$, which is marginally lower, but in good agreement with $$\\eta^{\\phi G}=1.00^{+0.02}_{-0.01}$$, the best-fit amplitude for the cross-correlation of \\emph{Planck}-2015 CMB lensing and \\emph{WISE} galaxies over $$\\sim67\\%$$ of the sky. The lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey (DES), whose footprint nearly completely covers the SPT $$2500\\ {\\rm deg}^2$$ field.« less
An Isometric Mapping Based Co-Location Decision Tree Algorithm
NASA Astrophysics Data System (ADS)
Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.
2018-05-01
Decision tree (DT) induction has been widely used in different pattern classification. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (ie, spectral information) as a result of classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location and decision tree to solve the above the above-mentioned traditional decision tree problems. Cl-DT overcomes the shortcomings of the existing DT algorithms, which create a node for each value of a given attribute, which has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome these shortcomings, this paper proposes an isometric mapping method based on Cl-DT (called, (Isomap-based Cl-DT), which is a method that combines heterogeneous and Cl-DT together. Because isometric mapping methods use geodetic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyzes show that: (1) The extraction method of exposed carbonate rocks is of high accuracy. (2) The proposed method has many advantages, because the total number of nodes, the number of leaf nodes and the number of nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap -based Cl-DT algorithm can construct a more accurate and faster decision tree.
Fusion of pan-tropical biomass maps using weighted averaging and regional calibration data
NASA Astrophysics Data System (ADS)
Ge, Yong; Avitabile, Valerio; Heuvelink, Gerard B. M.; Wang, Jianghao; Herold, Martin
2014-09-01
Biomass is a key environmental variable that influences many biosphere-atmosphere interactions. Recently, a number of biomass maps at national, regional and global scales have been produced using different approaches with a variety of input data, such as from field observations, remotely sensed imagery and other spatial datasets. However, the accuracy of these maps varies regionally and is largely unknown. This research proposes a fusion method to increase the accuracy of regional biomass estimates by using higher-quality calibration data. In this fusion method, the biases in the source maps were first adjusted to correct for over- and underestimation by comparison with the calibration data. Next, the biomass maps were combined linearly using weights derived from the variance-covariance matrix associated with the accuracies of the source maps. Because each map may have different biases and accuracies for different land use types, the biases and fusion weights were computed for each of the main land cover types separately. The conceptual arguments are substantiated by a case study conducted in East Africa. Evaluation analysis shows that fusing multiple source biomass maps may produce a more accurate map than when only one biomass map or unweighted averaging is used.
NASA Astrophysics Data System (ADS)
De Boissieu, Florian; Sevin, Brice; Cudahy, Thomas; Mangeas, Morgan; Chevrel, Stéphane; Ong, Cindy; Rodger, Andrew; Maurizot, Pierre; Laukamp, Carsten; Lau, Ian; Touraivane, Touraivane; Cluzel, Dominique; Despinoy, Marc
2018-02-01
Accurate maps of Earth's geology, especially its regolith, are required for managing the sustainable exploration and development of mineral resources. This paper shows how airborne imaging hyperspectral data collected over weathered peridotite rocks in vegetated, mountainous terrane in New Caledonia were processed using a combination of methods to generate a regolith-geology map that could be used for more efficiently targeting Ni exploration. The image processing combined two usual methods, which are spectral feature extraction and support vector machine (SVM). This rationale being the spectral features extraction can rapidly reduce data complexity by both targeting only the diagnostic mineral absorptions and masking those pixels complicated by vegetation, cloud and deep shade. SVM is a supervised classification method able to generate an optimal non-linear classifier with these features that generalises well even with limited training data. Key minerals targeted are serpentine, which is considered as an indicator for hydrolysed peridotitic rock, and iron oxy-hydroxides (hematite and goethite), which are considered as diagnostic of laterite development. The final classified regolith map was assessed against interpreted regolith field sites, which yielded approximately 70% similarity for all unit types, as well as against a regolith-geology map interpreted using traditional datasets (not hyperspectral imagery). Importantly, the hyperspectral derived mineral map provided much greater detail enabling a more precise understanding of the regolith-geological architecture where there are exposed soils and rocks.
Mazari-Hiriart, Marisa; Cruz-Bello, Gustavo; Bojórquez-Tapia, Luis A; Juárez-Marusich, Lourdes; Alcantar-López, Georgina; Marín, Luis E; Soto-Galera, Ernesto
2006-03-01
This study was based on a groundwater vulnerability assessment approach implemented for the Mexico City Metropolitan Area (MCMA). The approach is based on a fuzzy multi-criteria procedure integrated in a geographic information system. The approach combined the potential contaminant sources with the permeability of geological materials. Initially, contaminant sources were ranked by experts through the Analytic Hierarchy Process. An aggregated contaminant sources map layer was obtained through the simple additive weighting method, using a scalar multiplication of criteria weights and binary maps showing the location of each source. A permeability map layer was obtained through the reclassification of a geology map using the respective hydraulic conductivity values, followed by a linear normalization of these values against a compatible scale. A fuzzy logic procedure was then applied to transform and combine the two map layers, resulting in a groundwater vulnerability map layer of five classes: very low, low, moderate, high, and very high. Results provided a more coherent assessment of the policy-making priorities considered when discussing the vulnerability of groundwater to organic compounds. The very high and high vulnerability areas covered a relatively small area (71 km(2) or 1.5% of the total study area), allowing the identification of the more critical locations. The advantage of a fuzzy logic procedure is that it enables the best possible use to be made of the information available regarding groundwater vulnerability in the MCMA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carucci, Isabella P.; Villaescusa-Navarro, Francisco; Viel, Matteo, E-mail: ipcarucci@sissa.it, E-mail: fvillaescusa@simonsfoundation.org, E-mail: viel@oats.inaf.it
We investigate the cross-correlation signal between 21cm intensity mapping maps and the Lyα forest in the fully non-linear regime using state-of-the-art hydrodynamic simulations. The cross-correlation signal between the Lyα forest and 21cm maps can provide a coherent and comprehensive picture of the neutral hydrogen (HI) content of our Universe in the post-reionization era, probing both its mass content and volume distribution. We compute the auto-power spectra of both fields together with their cross-power spectrum at z = 2.4 and find that on large scales the fields are completely anti-correlated. This anti-correlation arises because regions with high (low) 21cm emission, suchmore » as those with a large (low) concentration of damped Lyα systems, will show up as regions with low (high) transmitted flux. We find that on scales smaller than k ≅ 0.2 h Mpc{sup −1} the cross-correlation coefficient departs from −1, at a scale where non-linearities show up. We use the anisotropy of the power spectra in redshift-space to determine the values of the bias and of the redshift-space distortion parameters of both fields. We find that the errors on the value of the cosmological and astrophysical parameters could decrease by 30% when adding data from the cross-power spectrum, in a conservative analysis. Our results point out that linear theory is capable of reproducing the shape and amplitude of the cross-power up to rather non-linear scales. Finally, we find that the 21cm-Lyα cross-power spectrum can be detected by combining data from a BOSS-like survey together with 21cm intensity mapping observations by SKA1-MID with a S/N ratio higher than 3 in k element of [0.06,1] h Mpc{sup −1}. We emphasize that while the shape and amplitude of the 21cm auto-power spectrum can be severely affected by residual foreground contamination, cross-power spectra will be less sensitive to that and therefore can be used to identify systematics in the 21cm maps.« less
Sci—Thur AM: YIS - 08: Constructing an Attenuation map for a PET/MR Breast coil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick, John C.; Imaging, Lawson Health Research Institute, Knoxville, TN; London Regional Cancer Program, Knoxville, TN
2014-08-15
In 2013, around 23000 Canadian women and 200 Canadian men were diagnosed with breast cancer. An estimated 5100 women and 55 men died from the disease. Using the sensitivity of MRI with the selectivity of PET, PET/MRI combines anatomical and functional information within the same scan and could help with early detection in high-risk patients. MRI requires radiofrequency coils for transmitting energy and receiving signal but the breast coil attenuates PET signal. To correct for this PET attenuation, a 3-dimensional map of linear attenuation coefficients (μ-map) of the breast coil must be created and incorporated into the PET reconstruction process.more » Several approaches have been proposed for building hardware μ-maps, some of which include the use of conventional kVCT and Dual energy CT. These methods can produce high resolution images based on the electron densities of materials that can be converted into μ-maps. However, imaging hardware containing metal components with photons in the kV range is susceptible to metal artifacts. These artifacts can compromise the accuracy of the resulting μ-map and PET reconstruction; therefore high-Z components should be removed. We propose a method for calculating μ-maps without removing coil components, based on megavoltage (MV) imaging with a linear accelerator that has been detuned for imaging at 1.0MeV. Containers of known geometry with F18 were placed in the breast coil for imaging. A comparison between reconstructions based on the different μ-map construction methods was made. PET reconstructions with our method show a maximum of 6% difference over the existing kVCT-based reconstructions.« less
Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo
NASA Astrophysics Data System (ADS)
Kosempa, M.; Chambers, D. P.
2016-02-01
Argo profiling floats afford basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry will impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1, and -2 altimeter ground tracks, then create gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids to compare the sampled-then-interpolated transport grids to those from the original SOSE data in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?
NASA Astrophysics Data System (ADS)
Passow, Christian; Donner, Reik
2017-04-01
Quantile mapping (QM) is an established concept that allows to correct systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations through observational data and outperforms simpler correction methods which relate only to the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend preserving, versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend preserving approach of bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles in the same resolution as the given data and include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959 - 2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sorbie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015 A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016
The morphing of geographical features by Fourier transformation
Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps. They are conversion from vector data to Fourier series, generation of intermediate function by combination of the two Fourier series concerning a large scale and a small scale, and reverse conversion from combination function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and it can be used for vector map features’ continuous scale transformation. The efficiency of this model is linearly related to the point number of shape boundary and the interceptive value n of Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344
NASA Astrophysics Data System (ADS)
Madrucci, Vanessa; Taioli, Fabio; de Araújo, Carlos César
2008-08-01
SummaryThis paper presents the groundwater favorability mapping on a fractured terrain in the eastern portion of São Paulo State, Brazil. Remote sensing, airborne geophysical data, photogeologic interpretation, geologic and geomorphologic maps and geographic information system (GIS) techniques have been used. The results of cross-tabulation between these maps and well yield data allowed groundwater prospective parameters in a fractured-bedrock aquifer. These prospective parameters are the base for the favorability analysis whose principle is based on the knowledge-driven method. The multicriteria analysis (weighted linear combination) was carried out to give a groundwater favorability map, because the prospective parameters have different weights of importance and different classes of each parameter. The groundwater favorability map was tested by cross-tabulation with new well yield data and spring occurrence. The wells with the highest values of productivity, as well as all the springs occurrence are situated in the excellent and good favorability mapped areas. It shows good coherence between the prospective parameters and the well yield and the importance of GIS techniques for definition of target areas for detail study and wells location.
Multimodal Image Alignment via Linear Mapping between Feature Modalities.
Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James
2017-01-01
We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.
Multivariate statistical analysis of stream-sediment geochemistry in the Grazer Paläozoikum, Austria
Weber, L.; Davis, J.C.
1990-01-01
The Austrian reconnaissance study of stream-sediment composition — more than 30000 clay-fraction samples collected over an area of 40000 km2 — is summarized in an atlas of regional maps that show the distributions of 35 elements. These maps, rich in information, reveal complicated patterns of element abundance that are difficult to compare on more than a small number of maps at one time. In such a study, multivariate procedures such as simultaneous R-Q mode components analysis may be helpful. They can compress a large number of variables into a much smaller number of independent linear combinations. These composite variables may be mapped and relationships sought between them and geological properties. As an example, R-Q mode components analysis is applied here to the Grazer Paläozoikum, a tectonic unit northeast of the city of Graz, which is composed of diverse lithologies and contains many mineral deposits.
Visualization for genomics: the Microbial Genome Viewer.
Kerkhoven, Robert; van Enckevort, Frank H J; Boekhorst, Jos; Molenaar, Douwe; Siezen, Roland J
2004-07-22
A Web-based visualization tool, the Microbial Genome Viewer, is presented that allows the user to combine complex genomic data in a highly interactive way. This Web tool enables the interactive generation of chromosome wheels and linear genome maps from genome annotation data stored in a MySQL database. The generated images are in scalable vector graphics (SVG) format, which is suitable for creating high-quality scalable images and dynamic Web representations. Gene-related data such as transcriptome and time-course microarray experiments can be superimposed on the maps for visual inspection. The Microbial Genome Viewer 1.0 is freely available at http://www.cmbi.kun.nl/MGV
Quasi-model free control for the post-capture operation of a non-cooperative target
NASA Astrophysics Data System (ADS)
She, Yuchen; Sun, Jun; Li, Shuang; Li, Wendan; Song, Ting
2018-06-01
This paper investigates a quasi-model free control (QMFC) approach for the post-capture control of a non-cooperative space object. The innovation of this paper lies in the following three aspects, which correspond to the three challenges presented in the mission scenario. First, an excitation-response mapping search strategy is developed based on the linearization of the system in terms of a set of parameters, which is efficient in handling the combined spacecraft with a high coupling effect on the inertia matrix. Second, a virtual coordinate system is proposed to efficiently compute the center of mass (COM) of the combined system, which improves the COM tracking efficiency for time-varying COM positions. Third, a linear online corrector is built to reduce the control error to further improve the control accuracy, which helps control the tracking mode within the combined system's time-varying inertia matrix. Finally, simulation analyses show that the proposed control framework is able to realize combined spacecraft post-capture control in extremely unfavorable conditions with high control accuracy.
Detection of endometrial lesions by degree of linear polarization maps
NASA Astrophysics Data System (ADS)
Kim, Jihoon; Fazleabas, Asgerally; Walsh, Joseph T.
2010-02-01
Endometriosis is one of the most common causes of chronic pelvic pain and infertility and is characterized by the presence of endometrial glands and stroma outside of the uterine cavity. A novel laparoscopic polarization imaging system was designed to detect endometriosis by imaging endometrial lesions. Linearly polarized light with varying incident polarization angles illuminated endometrial lesions. Degree of linear polarization image maps of endometrial lesions were constructed by using remitted polarized light. The image maps were compared with regular laparoscopy image. The degree of linear polarization map contributed to the detection of endometriosis by revealing structures inside the lesion. The utilization of rotating incident polarization angle (IPA) for the linearly polarized light provides extended understanding of endometrial lesions. The developed polarization system with varying IPA and the collected image maps could provide improved characterization of endometrial lesions via higher visibility of the structure of the lesions and thereby improve diagnosis of endometriosis.
Profiling a Mind Map User: A Descriptive Appraisal
ERIC Educational Resources Information Center
Tucker, Joanne M.; Armstrong, Gary R.; Massad, Victor J.
2010-01-01
Whether manually or through the use of software, a non-linear information organization framework known as mind mapping offers an alternative method for capturing thoughts, ideas and information to linear thinking modes such as outlining. Mind mapping is brainstorming, organizing, and problem solving. This paper examines mind mapping techniques,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Mary K.
The Koobi Fora Formation in northwestern Kenya has yielded more hominin fossils dated between 2.1 and 1.2 Ma than any other location on Earth. This research was undertaken to discover the spectral signatures of a portion of the Koobi Fora Formation using imagery from the DOE's Multispectral Thermal Imager (MTI) satellite. Creation of a digital geologic map from MTI imagery was a secondary goal of this research. MTI is unique amongst multispectral satellites in that it co-collects data from 15 spectral bands ranging from the visible to the thermal infrared with a ground sample distance of 5 meters per pixelmore » in the visible and 20 meters in the infrared. The map was created in two stages. The first was to correct the base MTI image using spatial accuracy assessment points collected in the field. The second was to mosaic various MTI images together to create the final Koobi Fora map. Absolute spatial accuracy of the final map product is 73 meters. The geologic classification of the Koobi Fora MTI map also took place in two stages. The field work stage involved location of outcrops of different lithologies within the Koobi Fora Formation. Field descriptions of these outcrops were made and their locations recorded. During the second stage, a linear spectral unmixing algorithm was applied to the MTI mosaic. In order to train the linear spectra unmixing algorithm, regions of interest representing four different classes of geologic material (tuff, alluvium, carbonate, and basalt), as well as a vegetation class were defined within the MTI mosaic. The regions of interest were based upon the aforementioned field data as well as overlays of geologic maps from the 1976 Iowa State mapping project. Pure spectra were generated for each class from the regions of interest, and then the unmixing algorithm classified each pixel according to relative percentage of classes found within the pixel based upon the pure spectra values. A total of four unique combinations of geologic classes were analyzed using the algorithm. The tuffs within the Koobi Fora Formation were defined with 100% accuracy using a combination of pure spectra from the basalt, vegetation, and tuff.« less
Arbitrary-Order Conservative and Consistent Remapping and a Theory of Linear Maps: Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ullrich, Paul A.; Devendran, Dharshi; Johansen, Hans
2016-04-01
The focus on this series of articles is on the generation of accurate, conservative, consistent, and (optionally) monotone linear offline maps. This paper is the second in the series. It extends on the first part by describing four examples of 2D linear maps that can be constructed in accordance with the theory of the earlier work. The focus is again on spherical geometry, although these techniques can be readily extended to arbitrary manifolds. The four maps include conservative, consistent, and (optionally) monotone linear maps (i) between two finite-volume meshes, (ii) from finite-volume to finite-element meshes using a projection-type approach, (iii)more » from finite-volume to finite-element meshes using volumetric integration, and (iv) between two finite-element meshes. Arbitrary order of accuracy is supported for each of the described nonmonotone maps.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implementsmore » sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Möller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.« less
Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Möller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
Predictive models reduce talent development costs in female gymnastics.
Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle
2017-04-01
This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures, and at the same time reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years past talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correct. Discriminant analysis improved the correct classification to 71.6% while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in decrease of 33.3% of cost because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.
Analytic Reflected Lightcurves for Exoplanets
NASA Astrophysics Data System (ADS)
Haggard, Hal M.; Cowan, Nicolas B.
2018-04-01
The disk-integrated reflected brightness of an exoplanet changes as a function of time due to orbital and rotational motion coupled with an inhomogeneous albedo map. We have previously derived analytic reflected lightcurves for spherical harmonic albedo maps in the special case of a synchronously-rotating planet on an edge-on orbit (Cowan, Fuentes & Haggard 2013). In this letter, we present analytic reflected lightcurves for the general case of a planet on an inclined orbit, with arbitrary spin period and non-zero obliquity. We do so for two different albedo basis maps: bright points (δ-maps), and spherical harmonics (Y_l^m-maps). In particular, we use Wigner D-matrices to express an harmonic lightcurve for an arbitrary viewing geometry as a non-linear combination of harmonic lightcurves for the simpler edge-on, synchronously rotating geometry. These solutions will enable future exploration of the degeneracies and information content of reflected lightcurves, as well as fast calculation of lightcurves for mapping exoplanets based on time-resolved photometry. To these ends we make available Exoplanet Analytic Reflected Lightcurves (EARL), a simple open-source code that allows rapid computation of reflected lightcurves.
Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions.
Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios
2017-03-01
Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging, because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available data sets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.
The solar vector magnetograph of the Okayama Astrophysical Observatory
NASA Technical Reports Server (NTRS)
Makita, M.; Hamana, S.; Nishi, K.
1985-01-01
The vector magnetograph of the Okayama Astrophysical Observatory is fed to the 65 cm solar coude telescope with a 10 m Littrow spectrograph. The polarimeter put at the telescope focus analyzes the incident polarization. Photomultipliers (PMT) at the exit of the spectrograph pick up the modulated light signals and send them to the electronic controller. The controller analyzes frequency and phase of the signal. The analyzer of the polarimeter is a combination of a single wave plate rotating at 40 Hz and a Wallaston prism. Incident linear and circular polarizations are modified at four times and twice the rotation frequency, respectively. Two compensators minimize the instrumental polarization, mainly caused by the two tilt mirrors in the optical path of the telescope. The four photomultipliers placed on the wings of the FeI 5250A line give maps of intensity, longitudinal field and transverse field. The main outputs, maps of intensity, and net linear and circular polarizations in the neighboring continuum are obtained by the other two monitor PMTs.
Blind decomposition of Herschel-HIFI spectral maps of the NGC 7023 nebula
NASA Astrophysics Data System (ADS)
Berné, O.; Joblin, C.; Deville, Y.; Pilleri, P.; Pety, J.; Teyssier, D.; Gerin, M.; Fuente, A.
2012-12-01
Large spatial-spectral surveys are more and more common in astronomy. This calls for the need of new methods to analyze such mega- to giga-pixel data-cubes. In this paper we present a method to decompose such observations into a limited and comprehensive set of components. The original data can then be interpreted in terms of linear combinations of these components. The method uses non-negative matrix factorization (NMF) to extract latent spectral end-members in the data. The number of needed end-members is estimated based on the level of noise in the data. A Monte-Carlo scheme is adopted to estimate the optimal end-members, and their standard deviations. Finally, the maps of linear coefficients are reconstructed using non-negative least squares. We apply this method to a set of hyperspectral data of the NGC 7023 nebula, obtained recently with the HIFI instrument onboard the Herschel space observatory, and provide a first interpretation of the results in terms of 3-dimensional dynamical structure of the region.
Abdulla, Ahmed AbdoAziz Ahmed; Lin, Hongfei; Xu, Bo; Banbhrani, Santosh Kumar
2016-07-25
Biomedical literature retrieval is becoming increasingly complex, and there is a fundamental need for advanced information retrieval systems. Information Retrieval (IR) programs scour unstructured materials such as text documents in large reserves of data that are usually stored on computers. IR is related to the representation, storage, and organization of information items, as well as to access. In IR one of the main problems is to determine which documents are relevant and which are not to the user's needs. Under the current regime, users cannot precisely construct queries in an accurate way to retrieve particular pieces of data from large reserves of data. Basic information retrieval systems are producing low-quality search results. In our proposed system for this paper we present a new technique to refine Information Retrieval searches to better represent the user's information need in order to enhance the performance of information retrieval by using different query expansion techniques and apply a linear combinations between them, where the combinations was linearly between two expansion results at one time. Query expansions expand the search query, for example, by finding synonyms and reweighting original terms. They provide significantly more focused, particularized search results than do basic search queries. The retrieval performance is measured by some variants of MAP (Mean Average Precision) and according to our experimental results, the combination of best results of query expansion is enhanced the retrieved documents and outperforms our baseline by 21.06 %, even it outperforms a previous study by 7.12 %. We propose several query expansion techniques and their combinations (linearly) to make user queries more cognizable to search engines and to produce higher-quality search results.
Many-to-one form-to-function mapping weakens parallel morphological evolution.
Thompson, Cole J; Ahmed, Newaz I; Veen, Thor; Peichel, Catherine L; Hendry, Andrew P; Bolnick, Daniel I; Stuart, Yoel E
2017-11-01
Evolutionary ecologists aim to explain and predict evolutionary change under different selective regimes. Theory suggests that such evolutionary prediction should be more difficult for biomechanical systems in which different trait combinations generate the same functional output: "many-to-one mapping." Many-to-one mapping of phenotype to function enables multiple morphological solutions to meet the same adaptive challenges. Therefore, many-to-one mapping should undermine parallel morphological evolution, and hence evolutionary predictability, even when selection pressures are shared among populations. Studying 16 replicate pairs of lake- and stream-adapted threespine stickleback (Gasterosteus aculeatus), we quantified three parts of the teleost feeding apparatus and used biomechanical models to calculate their expected functional outputs. The three feeding structures differed in their form-to-function relationship from one-to-one (lower jaw lever ratio) to increasingly many-to-one (buccal suction index, opercular 4-bar linkage). We tested for (1) weaker linear correlations between phenotype and calculated function, and (2) less parallel evolution across lake-stream pairs, in the many-to-one systems relative to the one-to-one system. We confirm both predictions, thus supporting the theoretical expectation that increasing many-to-one mapping undermines parallel evolution. Therefore, sole consideration of morphological variation within and among populations might not serve as a proxy for functional variation when multiple adaptive trait combinations exist. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Tran, Annelise; Trevennec, Carlène; Lutwama, Julius; Sserugga, Joseph; Gély, Marie; Pittiglio, Claudia; Pinto, Julio; Chevalier, Véronique
2016-01-01
Rift Valley fever (RVF), a mosquito-borne disease affecting ruminants and humans, is one of the most important viral zoonoses in Africa. The objective of the present study was to develop a geographic knowledge-based method to map the areas suitable for RVF amplification and RVF spread in four East African countries, namely, Kenya, Tanzania, Uganda and Ethiopia, and to assess the predictive accuracy of the model using livestock outbreak data from Kenya and Tanzania. Risk factors and their relative importance regarding RVF amplification and spread were identified from a literature review. A numerical weight was calculated for each risk factor using an analytical hierarchy process. The corresponding geographic data were collected, standardized and combined based on a weighted linear combination to produce maps of the suitability for RVF transmission. The accuracy of the resulting maps was assessed using RVF outbreak locations in livestock reported in Kenya and Tanzania between 1998 and 2012 and the ROC curve analysis. Our results confirmed the capacity of the geographic information system-based multi-criteria evaluation method to synthesize available scientific knowledge and to accurately map (AUC = 0.786; 95% CI [0.730–0.842]) the spatial heterogeneity of RVF suitability in East Africa. This approach provides users with a straightforward and easy update of the maps according to data availability or the further development of scientific knowledge. PMID:27631374
Mapping the integrated Sachs-Wolfe effect
NASA Astrophysics Data System (ADS)
Manzotti, A.; Dodelson, S.
2014-12-01
On large scales, the anisotropies in the cosmic microwave background (CMB) reflect not only the primordial density field but also the energy gain when photons traverse decaying gravitational potentials of large scale structure, what is called the integrated Sachs-Wolfe (ISW) effect. Decomposing the anisotropy signal into a primordial piece and an ISW component, the main secondary effect on large scales, is more urgent than ever as cosmologists strive to understand the Universe on those scales. We present a likelihood technique for extracting the ISW signal combining measurements of the CMB, the distribution of galaxies, and maps of gravitational lensing. We test this technique with simulated data showing that we can successfully reconstruct the ISW map using all the data sets together. Then we present the ISW map obtained from a combination of real data: the NRAO VLA sky survey (NVSS) galaxy survey, temperature anisotropies, and lensing maps made by the Planck satellite. This map shows that, with the data sets used and assuming linear physics, there is no evidence, from the reconstructed ISW signal in the Cold Spot region, for an entirely ISW origin of this large scale anomaly in the CMB. However a large scale structure origin from low redshift voids outside the NVSS redshift range is still possible. Finally we show that future surveys, thanks to a better large scale lensing reconstruction will be able to improve the reconstruction signal to noise which is now mainly coming from galaxy surveys.
NASA Astrophysics Data System (ADS)
Müller, Peter; Krause, Marita; Beck, Rainer; Schmidt, Philip
2017-10-01
Context. The venerable NOD2 data reduction software package for single-dish radio continuum observations, which was developed for use at the 100-m Effelsberg radio telescope, has been successfully applied over many decades. Modern computing facilities, however, call for a new design. Aims: We aim to develop an interactive software tool with a graphical user interface for the reduction of single-dish radio continuum maps. We make a special effort to reduce the distortions along the scanning direction (scanning effects) by combining maps scanned in orthogonal directions or dual- or multiple-horn observations that need to be processed in a restoration procedure. The package should also process polarisation data and offer the possibility to include special tasks written by the individual user. Methods: Based on the ideas of the NOD2 package we developed NOD3, which includes all necessary tasks from the raw maps to the final maps in total intensity and linear polarisation. Furthermore, plot routines and several methods for map analysis are available. The NOD3 package is written in Python, which allows the extension of the package via additional tasks. The required data format for the input maps is FITS. Results: The NOD3 package is a sophisticated tool to process and analyse maps from single-dish observations that are affected by scanning effects from clouds, receiver instabilities, or radio-frequency interference. The "basket-weaving" tool combines orthogonally scanned maps into a final map that is almost free of scanning effects. The new restoration tool for dual-beam observations reduces the noise by a factor of about two compared to the NOD2 version. Combining single-dish with interferometer data in the map plane ensures the full recovery of the total flux density. Conclusions: This software package is available under the open source license GPL for free use at other single-dish radio telescopes of the astronomical community. The NOD3 package is designed to be extendable to multi-channel data represented by data cubes in Stokes I, Q, and U.
GLSM realizations of maps and intersections of Grassmannians and Pfaffians
NASA Astrophysics Data System (ADS)
Căldăraru, Andrei; Knapp, Johanna; Sharpe, Eric
2018-04-01
In this paper we give gauged linear sigma model (GLSM) realizations of a number of geometries not previously presented in GLSMs. We begin by describing GLSM realizations of maps including Veronese and Segre embeddings, which can be applied to give GLSMs explicitly describing non-complete intersection constructions such as the intersection of one hypersurface with the image under some map of another. We also discuss GLSMs for intersections of Grassmannians and Pfaffians with one another, and with their images under various maps, which sometimes form exotic constructions of Calabi-Yaus, as well as GLSMs for other exotic Calabi-Yau constructions of Kanazawa. Much of this paper focuses on a specific set of examples of GLSMs for intersections of Grassmannians G(2 , N ) with themselves after a linear rotation, including the Calabi-Yau case N = 5. One phase of the GLSM realizes an intersection of two Grassmannians, the other phase realizes an intersection of two Pfaffians. The GLSM has two nonabelian factors in its gauge group, and we consider dualities in those factors. In both the original GLSM and a double-dual, one geometric phase is realized perturbatively (as the critical locus of a superpotential), and the other via quantum effects. Dualizing on a single gauge group factor yields a model in which each geometry is realized through a simultaneous combination of perturbative and quantum effects.
Ion-absorption band analysis for the discrimination of iron-rich zones. [Nevada
NASA Technical Reports Server (NTRS)
Rowan, L. C. (Principal Investigator); Wetlaufer, P. H.
1974-01-01
The author has identified the following significant results. A technique which combines digital computer processing and color composition was devised for detecting hydrothermally altered areas and for discriminating among many rock types in an area in south-central Nevada. Subtle spectral reflectance differences among the rock types are enhanced by ratioing and contrast-stretching MSS radiance values for form ratio images which subsequently are displayed in color-ratio composites. Landform analysis of Nevada shows that linear features compiled without respect to length results in approximately 25 percent coincidence with mapped faults. About 80 percent of the major lineaments coincides with mapped faults, and substantial extension of locally mapped faults is commonly indicated. Seven major lineament systems appear to be old zones of crustal weakness which have provided preferred conduits for rising magma through periodic reactivation.
2012-03-09
equation is a product of a complex basis vector in Jackson and a linear combination of plane wave functions. We convert both the amplitudes and the...wave function arguments from complex scalars to complex vectors . This conversion allows us to separate the electric field vector and the imaginary...magnetic field vector , because exponentials of imaginary scalars convert vectors to imaginary vectors and vice versa, while ex- ponentials of imaginary
Impact of the Combination of GNSS and Altimetry Data on the Derived Global Ionosphere Maps
NASA Astrophysics Data System (ADS)
Todorova, S.; Schuh, H.; Hobiger, T.; Hernandez-Pajares, M.
2007-05-01
The classical input data for development of Global Ionosphere Maps (GIM) of the Total Electron Content (TEC) is the so called "geometry free linear combination", obtained from the dual-frequency Global Navigation Satellite System (GNSS) observations. Such maps in general achieve good quality of the ionosphere representation. However, the GNSS stations are inhomogeneously distributed, with large gaps particularly over the sea surface, which lowers the precision of the GIM over these areas. On the other hand, the dual-frequency satellite altimetry missions such as Jason-1 and TOPEX/Poseidon provide information about the parameter of the ionosphere precisely above the sea surface, where the altimetry observations are preformed. Due to the limited spread of the measurements and some open issues related to systematic errors, the ionospheric data from satellite altimetry is used only for cross-validation of the GNSS GIM. It can be anticipated however, that some specifics of the ionosphere parameter derived by satellite altimetry will partly balance the inhomogeneity of the GNSS data. Such important features are complementing in the global resolution, different biasing and the absence of additional mapping, as it is the case in GNSS. In this study we create two-hourly GIM from GNSS data and additionally introduce satellite altimetry observations, which help to compensate the insufficient GNSS coverage of the oceans. The combination of the data from around 180 GNSS stations and the satellite altimetry mission Jason-1 is performed on the normal equation level. The comparison between the integrated ionosphere models and the GNSS-only maps shows a higher accuracy of the combined GIM over the seas. A further effect of the combination is that the method allows the independent estimation of daily values of the Differential Code Biases (DCB) for all GNSS satellites and receivers, and of the systematic errors affecting the altimetry measurements. Such errors should include a hardware delay similar to the GNSS DCB as well as the impact of the topside ionosphere, which is not sampled by Jason-1. At this stage, for testing purposes we estimate a constant daily value, which will be further investigated. The final aim of the study is the development of improved combined global TEC maps, which make best use of the advantages of each particular type of data and have higher accuracy and reliability than the results derived by the two methods if treated individually.
Phase-Controlled Polarization Modulators
NASA Technical Reports Server (NTRS)
Chuss, D. T.; Wollack, E. J.; Novak, G.; Moseley, S. H.; Pisano, G.; Krejny, M.; U-Yen, K.
2012-01-01
We report technology development of millimeter/submillimeter polarization modulators that operate by introducing a a variable, controlled phase delay between two orthogonal polarization states. The variable-delay polarization modulator (VPM) operates via the introduction of a variable phase delay between two linear orthogonal polarization states, resulting in a variable mapping of a single linear polarization into a combination of that Stokes parameter and circular (Stokes V) polarization. Characterization of a prototype VPM is presented at 350 and 3000 microns. We also describe a modulator in which a variable phase delay is introduced between right- and left- circular polarization states. In this architecture, linear polarization is fully modulated. Each of these devices consists of a polarization diplexer parallel to and in front of a movable mirror. Modulation involves sub-wavelength translations of the mirror that change the magnitude of the phase delay.
Linear Mapping of Numbers onto Space Requires Attention
ERIC Educational Resources Information Center
Anobile, Giovanni; Cicchini, Guido Marco; Burr, David C.
2012-01-01
Mapping of number onto space is fundamental to mathematics and measurement. Previous research suggests that while typical adults with mathematical schooling map numbers veridically onto a linear scale, pre-school children and adults without formal mathematics training, as well as individuals with dyscalculia, show strong compressive,…
LPmerge: an R package for merging genetic maps by linear programming.
Endelman, Jeffrey B; Plomion, Christophe
2014-06-01
Consensus genetic maps constructed from multiple populations are an important resource for both basic and applied research, including genome-wide association analysis, genome sequence assembly and studies of evolution. The LPmerge software uses linear programming to efficiently minimize the mean absolute error between the consensus map and the linkage maps from each population. This minimization is performed subject to linear inequality constraints that ensure the ordering of the markers in the linkage maps is preserved. When marker order is inconsistent between linkage maps, a minimum set of ordinal constraints is deleted to resolve the conflicts. LPmerge is on CRAN at http://cran.r-project.org/web/packages/LPmerge. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method allows to project simultaneously the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).
Combining textual and visual information for image retrieval in the medical domain.
Gkoufas, Yiannis; Morou, Anna; Kalamboukis, Theodore
2011-01-01
In this article we have assembled the experience obtained from our participation in the imageCLEF evaluation task over the past two years. Exploitation on the use of linear combinations for image retrieval has been attempted by combining visual and textual sources of images. From our experiments we conclude that a mixed retrieval technique that applies both textual and visual retrieval in an interchangeably repeated manner improves the performance while overcoming the scalability limitations of visual retrieval. In particular, the mean average precision (MAP) has increased from 0.01 to 0.15 and 0.087 for 2009 and 2010 data, respectively, when content-based image retrieval (CBIR) is performed on the top 1000 results from textual retrieval based on natural language processing (NLP).
Landslide susceptibility map: from research to application
NASA Astrophysics Data System (ADS)
Fiorucci, Federica; Reichenbach, Paola; Ardizzone, Francesca; Rossi, Mauro; Felicioni, Giulia; Antonini, Guendalina
2014-05-01
Susceptibility map is an important and essential tool in environmental planning, to evaluate landslide hazard and risk and for a correct and responsible management of the territory. Landslide susceptibility is the likelihood of a landslide occurring in an area on the basis of local terrain conditions. Can be expressed as the probability that any given region will be affected by landslides, i.e. an estimate of "where" landslides are likely to occur. In this work we present two examples of landslide susceptibility map prepared for the Umbria Region and for the Perugia Municipality. These two maps were realized following official request from the Regional and Municipal government to the Research Institute for the Hydrogeological Protection (CNR-IRPI). The susceptibility map prepared for the Umbria Region represents the development of previous agreements focused to prepare: i) a landslide inventory map that was included in the Urban Territorial Planning (PUT) and ii) a series of maps for the Regional Plan for Multi-risk Prevention. The activities carried out for the Umbria Region were focused to define and apply methods and techniques for landslide susceptibility zonation. Susceptibility maps were prepared exploiting a multivariate statistical model (linear discriminant analysis) for the five Civil Protection Alert Zones defined in the regional territory. The five resulting maps were tested and validated using the spatial distribution of recent landslide events that occurred in the region. The susceptibility map for the Perugia Municipality was prepared to be integrated as one of the cartographic product in the Municipal development plan (PRG - Piano Regolatore Generale) as required by the existing legislation. At strategic level, one of the main objectives of the PRG, is to establish a framework of knowledge and legal aspects for the management of geo-hydrological risk. At national level most of the susceptibility maps prepared for the PRG, were and still are obtained qualitatively classifying the territory according to slope classes. For the Perugia Municipality the susceptibility map was obtained combining results of statistical multivariate models and landslide density map. In particular, in the first phase a susceptibility zonation was prepared using different single and combined probability statistical multivariate techniques. The zonation was then combined and compared with the landslide density map in order to reclassify the false negative (portion of the territory classified by the model as stable affected by slope failures). The semi-quantitative resulting map was classified in five susceptibility classes. For each class a set of technical regulation was established to manage the territory.
NASA Astrophysics Data System (ADS)
Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui
2014-07-01
The linear regression parameters between two time series can be different under different lengths of observation period. If we study the whole period by the sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We tackle fundamental research that presents a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequency of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui
2014-07-01
The linear regression parameters between two time series can be different under different lengths of observation period. If we study the whole period by the sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We tackle fundamental research that presents a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequency of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
NASA Astrophysics Data System (ADS)
Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko
2018-04-01
Brain Computer Interface (BCI) can be a challenge for developing of robotic, prosthesis and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) base algorithm to detect event related desynchronization patterns. Utilizing famous previous work in this area, features are extracted by filter bank with common spatial pattern (FBCSP) method, and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, application of the radial basis function (RBF) as a mapping kernel of linear discriminant analysis (KLDA) method on the weighted features, allows the transfer of data into a higher dimension for more discriminated data scattering by RBF kernel. Afterwards, support vector machine (SVM) with generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. Averagely, 89.60% accuracy and 74.19% robustness are achieved. BCI Competition III, Iva data set is used to evaluate the algorithm for detecting right hand and foot imagery movement patterns. Results show that combination of KLDA with SVM-GRBF classifier makes 8.9% and 14.19% improvements in accuracy and robustness, respectively. For all the subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilization GRBF as a kernel of SVM, improve the accuracy and reliability of the proposed method.
CMIP5 downscaling and its uncertainty in China
NASA Astrophysics Data System (ADS)
Yue, TianXiang; Zhao, Na; Fan, ZeMeng; Li, Jing; Chen, ChuanFa; Lu, YiMin; Wang, ChenLiang; Xu, Bing; Wilson, John
2016-11-01
A comparison between the Coupled Model Intercomparison Project Phase 5 (CMIP5) data and observations at 735 meteorological stations indicated that mean annual temperature (MAT) was underestimated about 1.8 °C while mean annual precipitation (MAP) was overestimated about 263 mm in general across the whole of China. A statistical analysis of China-CMIP5 data demonstrated that MAT exhibits spatial stationarity, while MAP exhibits spatial non-stationarity. MAT and MAP data from the China-CMIP5 dataset were downscaled by combining statistical approaches with a method for high accuracy surface modeling (HASM). A statistical transfer function (STF) of MAT was formulated using minimized residuals output by HASM with an ordinary least squares (OLS) linear equation that used latitude and elevation as independent variables, abbreviated as HASM-OLS. The STF of MAP under a BOX-COX transformation was derived as a combination of minimized residuals output by HASM with a geographically weight regression (GWR) using latitude, longitude, elevation and impact coefficient of aspect as independent variables, abbreviated as HASM-GB. Cross validation, using observational data from the 735 meteorological stations across China for the period 1976 to 2005, indicates that the largest uncertainty occurred on the Tibet plateau with mean absolute errors (MAEs) of MAT and MAP as high as 4.64 °C and 770.51 mm, respectively. The downscaling processes of HASM-OLS and HASM-GB generated MAEs of MAT and MAP that were 67.16% and 77.43% lower, respectively across the whole of China on average, and 88.48% and 97.09% lower for the Tibet plateau.
The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps
NASA Astrophysics Data System (ADS)
Simpson, D. J. W.
2018-05-01
In two-parameter bifurcation diagrams of piecewise-linear continuous maps on , mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well-approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
Suitability assessment and mapping of Oyo State, Nigeria, for rice cultivation using GIS
NASA Astrophysics Data System (ADS)
Ayoade, Modupe Alake
2017-08-01
Rice is one of the most preferred food crops in Nigeria. However, local rice production has declined with the oil boom of the 1970s causing demand to outstrip supply. Rice production can be increased through the integration of Geographic Information Systems (GIS) and crop-land suitability analysis and mapping. Based on the key predictor variables that determine rice yield mentioned in relevant literature, data on rainfall, temperature, relative humidity, slope, and soil of Oyo state were obtained. To develop rice suitability maps for the state, two MCE-GIS techniques, namely the Overlay approach and weighted linear combination (WLC), using fuzzy AHP were used and compared. A Boolean land use map derived from a landsat imagery was used in masking out areas currently unavailable for rice production. Both suitability maps were classified into four categories of very suitable, suitable, moderate, and fairly moderate. Although the maps differ slightly, the overlay and WLC (AHP) approach found most parts of Oyo state (51.79 and 82.9 % respectively) to be moderately suitable for rice production. However, in areas like Eruwa, Oyo, and Shaki, rainfall amount received needs to be supplemented by irrigation for increased rice yield.
Analytic reflected light curves for exoplanets
NASA Astrophysics Data System (ADS)
Haggard, Hal M.; Cowan, Nicolas B.
2018-07-01
The disc-integrated reflected brightness of an exoplanet changes as a function of time due to orbital and rotational motions coupled with an inhomogeneous albedo map. We have previously derived analytic reflected light curves for spherical harmonic albedo maps in the special case of a synchronously rotating planet on an edge-on orbit (Cowan, Fuentes & Haggard). In this paper, we present analytic reflected light curves for the general case of a planet on an inclined orbit, with arbitrary spin period and non-zero obliquity. We do so for two different albedo basis maps: bright points (δ-maps), and spherical harmonics (Y_ l^m-maps). In particular, we use Wigner D-matrices to express an harmonic light curve for an arbitrary viewing geometry as a non-linear combination of harmonic light curves for the simpler edge-on, synchronously rotating geometry. These solutions will enable future exploration of the degeneracies and information content of reflected light curves, as well as fast calculation of light curves for mapping exoplanets based on time-resolved photometry. To these ends, we make available Exoplanet Analytic Reflected Lightcurves, a simple open-source code that allows rapid computation of reflected light curves.
Linear reduction methods for tag SNP selection.
He, Jingwu; Zelikovsky, Alex
2004-01-01
It is widely hoped that constructing a complete human haplotype map will help to associate complex diseases with certain SNP's. Unfortunately, the number of SNP's is huge and it is very costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNP's that should be sequenced to considerably small number of informative representatives, so called tag SNP's. In this paper, we propose a new linear algebra based method for selecting and using tag SNP's. Our method is purely combinatorial and can be combined with linkage disequilibrium (LD) and block based methods. We measure the quality of our tag SNP selection algorithm by comparing actual SNP's with SNP's linearly predicted from linearly chosen tag SNP's. We obtain an extremely good compression and prediction rates. For example, for long haplotypes (>25000 SNP's), knowing only 0.4% of all SNP's we predict the entire unknown haplotype with 2% accuracy while the prediction method is based on a 10% sample of the population.
NASA Astrophysics Data System (ADS)
Wang, Wei; Zhong, Ming; Cheng, Ling; Jin, Lu; Shen, Si
2018-02-01
In the background of building global energy internet, it has both theoretical and realistic significance for forecasting and analysing the ratio of electric energy to terminal energy consumption. This paper firstly analysed the influencing factors of the ratio of electric energy to terminal energy and then used combination method to forecast and analyse the global proportion of electric energy. And then, construct the cointegration model for the proportion of electric energy by using influence factor such as electricity price index, GDP, economic structure, energy use efficiency and total population level. At last, this paper got prediction map of the proportion of electric energy by using the combination-forecasting model based on multiple linear regression method, trend analysis method, and variance-covariance method. This map describes the development trend of the proportion of electric energy in 2017-2050 and the proportion of electric energy in 2050 was analysed in detail using scenario analysis.
Three-Dimensional Mapping of Microenvironmental Control of Methyl Rotational Barriers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hembree, William I; Baudry, Jerome Y
2011-01-01
Sterical (van der Waals-induced) rotational barriers of methyl groups are investigated theoretically, using ab initio and empirical force field calculations, for various three-dimensional microenvironmental conditions around the methyl group rotator of a model neopentane molecule. The destabilization (reducing methyl rotational barriers) or stabilization (increasing methyl rotational barriers) of the staggered conformation of the methyl rotator depends on a combination of microenvironmental contributions from (i) the number of atoms around the rotator, (ii) the distance between the rotator and the microenvironmental atoms, and (iii) the dihedral angle between the stator, rotator, and molecular environment around the rotator. These geometrical criteria combinemore » their respective effects in a linearly additive fashion, with no apparent cooperative effects, and their combination in space around a rotator may increase, decrease, or leave the rotator s rotational barrier unmodified. This is exemplified in a geometrical analysis of the alanine dipeptide crystal where microenvironmental effects on methyl rotators barrier of rotation fit the geometrical mapping described in the neopentane model.« less
Schwarz maps of algebraic linear ordinary differential equations
NASA Astrophysics Data System (ADS)
Sanabria Malagón, Camilo
2017-12-01
A linear ordinary differential equation is called algebraic if all its solution are algebraic over its field of definition. In this paper we solve the problem of finding closed form solution to algebraic linear ordinary differential equations in terms of standard equations. Furthermore, we obtain a method to compute all algebraic linear ordinary differential equations with rational coefficients by studying their associated Schwarz map through the Picard-Vessiot Theory.
A new linear least squares method for T1 estimation from SPGR signals with multiple TRs
NASA Astrophysics Data System (ADS)
Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo
2009-02-01
The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo x (SPGR) images with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity and the parameters are nonlinear; T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely-used linear method transforms the nonlinear model by assuming a fixed TR in SPGR images. This constraint is not desirable since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using the first order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the estimated T1 from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, allowing multiple TRs and reducing computation time significantly.
A characterization of positive linear maps and criteria of entanglement for quantum states
NASA Astrophysics Data System (ADS)
Hou, Jinchuan
2010-09-01
Let H and K be (finite- or infinite-dimensional) complex Hilbert spaces. A characterization of positive completely bounded normal linear maps from {\\mathcal B}(H) into {\\mathcal B}(K) is given, which particularly gives a characterization of positive elementary operators including all positive linear maps between matrix algebras. This characterization is then applied to give a representation of quantum channels (operations) between infinite-dimensional systems. A necessary and sufficient criterion of separability is given which shows that a state ρ on HotimesK is separable if and only if (ΦotimesI)ρ >= 0 for all positive finite-rank elementary operators Φ. Examples of NCP and indecomposable positive linear maps are given and are used to recognize some entangled states that cannot be recognized by the PPT criterion and the realignment criterion.
Generalization and capacity of extensively large two-layered perceptrons.
Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido
2002-09-01
The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, alpha(c), at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.
Control of the NASA Langley 16-Foot Transonic Tunnel with the Self-Organizing Feature Map
NASA Technical Reports Server (NTRS)
Motter, Mark A.
1998-01-01
A predictive, multiple model control strategy is developed based on an ensemble of local linear models of the nonlinear system dynamics for a transonic wind tunnel. The local linear models are estimated directly from the weights of a Self Organizing Feature Map (SOFM). Local linear modeling of nonlinear autonomous systems with the SOFM is extended to a control framework where the modeled system is nonautonomous, driven by an exogenous input. This extension to a control framework is based on the consideration of a finite number of subregions in the control space. Multiple self organizing feature maps collectively model the global response of the wind tunnel to a finite set of representative prototype controls. These prototype controls partition the control space and incorporate experimental knowledge gained from decades of operation. Each SOFM models the combination of the tunnel with one of the representative controls, over the entire range of operation. The SOFM based linear models are used to predict the tunnel response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal. Each SOFM provides a codebook representation of the tunnel dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the minimization of a similarity metric which is the essence of the self organizing feature of the map. Thus, the SOFM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme than selects the best available model for the applied control. Experimental results of controlling the wind tunnel, with the proposed method, during operational runs where strict research requirements on the control of the Mach number were met, are presented. Comparison to similar runs under the same conditions with the tunnel controlled by either the existing controller or an expert operator indicate the superiority of the method.
NASA Astrophysics Data System (ADS)
Hong, H.; Zhu, A. X.
2017-12-01
Climate change is a common phenomenon and it is very serious all over the world. The intensification of rainfall extremes with climate change is of key importance to society and then it may induce a large impact through landslides. This paper presents GIS-based new ensemble data mining techniques that weight-of-evidence, logistic model tree, linear and quadratic discriminant for landslide spatial modelling. This research was applied in Anfu County, which is a landslide-prone area in Jiangxi Province, China. According to a literature review and research the study area, we select the landslide influencing factor and their maps were digitized in a GIS environment. These landslide influencing factors are the altitude, plan curvature, profile curvature, slope degree, slope aspect, topographic wetness index (TWI), Stream Power Index (SPI), Topographic Wetness Index (SPI), distance to faults, distance to rivers, distance to roads, soil, lithology, normalized difference vegetation index and land use. According to historical information of individual landslide events, interpretation of the aerial photographs, and field surveys supported by the government of Jiangxi Meteorological Bureau of China, 367 landslides were identified in the study area. The landslide locations were divided into two subsets, namely, training and validating (70/30), based on a random selection scheme. In this research, Pearson's correlation was used for the evaluation of the relationship between the landslides and influencing factors. In the next step, three data mining techniques combined with the weight-of-evidence, logistic model tree, linear and quadratic discriminant, were used for the landslide spatial modelling and its zonation. Finally, the landslide susceptibility maps produced by the mentioned models were evaluated by the ROC curve. The results showed that the area under the curve (AUC) of all of the models was > 0.80. At the same time, the highest AUC value was for the linear and quadratic discriminant model (0.864), followed by logistic model tree (0.832), and weight-of-evidence (0.819). In general, the landslide maps can be applied for land use planning and management in the Anfu area.
Meshless analysis of shear deformable shells: the linear model
NASA Astrophysics Data System (ADS)
Costa, Jorge C.; Tiago, Carlos M.; Pimenta, Paulo M.
2013-10-01
This work develops a kinematically linear shell model departing from a consistent nonlinear theory. The initial geometry is mapped from a flat reference configuration by a stress-free finite deformation, after which, the actual shell motion takes place. The model maintains the features of a complete stress-resultant theory with Reissner-Mindlin kinematics based on an inextensible director. A hybrid displacement variational formulation is presented, where the domain displacements and kinematic boundary reactions are independently approximated. The resort to a flat reference configuration allows the discretization using 2-D Multiple Fixed Least-Squares (MFLS) on the domain. The consistent definition of stress resultants and consequent plane stress assumption led to a neat formulation for the analysis of shells. The consistent linear approximation, combined with MFLS, made possible efficient computations with a desired continuity degree, leading to smooth results for the displacement, strain and stress fields, as shown by several numerical examples.
Protein side chain rotational isomerization: A minimum perturbation mapping study
NASA Astrophysics Data System (ADS)
Haydock, Christopher
1993-05-01
A theory of the rotational isomerization of the indole side chain of tryptophan-47 of variant-3 scorpion neurotoxin is presented. The isomerization potential energy, entropic part of the isomerization free energy, isomer probabilities, transition state theory reaction rates, and indole order parameters are calculated from a minimum perturbation mapping over tryptophan-47 χ1×χ2 torsion space. A new method for calculating the fluorescence anisotropy from molecular dynamics simulations is proposed. The method is based on an expansion that separates transition dipole orientation from chromophore dynamics. The minimum perturbation potential energy map is inverted and applied as a bias potential for a 100 ns umbrella sampling simulation. The entropic part of the isomerization free energy as calculated by minimum perturbation mapping and umbrella sampling are in fairly close agreement. Throughout, the approximation is made that two glutamine and three tyrosine side chains neighboring tryptophan-47 are truncated at the Cβ atom. Comparison with the previous combination thermodynamic perturbation and umbrella sampling study suggests that this truncated neighbor side chain approximation leads to at least a qualitatively correct theory of tryptophan-47 rotational isomerization in the wild type variant-3 scorpion neurotoxin. Analysis of van der Waals interactions in a transition state region indicates that for the simulation of barrier crossing trajectories a linear combination of three specially defined dihedral angles will be superior to a simple side chain dihedral reaction coordinate.
Land subsidence susceptibility and hazard mapping: the case of Amyntaio Basin, Greece
NASA Astrophysics Data System (ADS)
Tzampoglou, P.; Loupasakis, C.
2017-09-01
Landslide susceptibility and hazard mapping has been applying for more than 20 years succeeding the assessment of the landslide risk and the mitigation the phenomena. On the contrary, equivalent maps aiming to study and mitigate land subsidence phenomena caused by the overexploitation of the aquifers are absent from the international literature. The current study focuses at the Amyntaio basin, located in West Macedonia at Florina prefecture. As proved by numerous studies the wider area has been severely affected by the overexploitation of the aquifers, caused by the mining and the agricultural activities. The intensive ground water level drop has triggered extensive land subsidence phenomena, especially at the perimeter of the open pit coal mine operating at the site, causing damages to settlements and infrastructure. The land subsidence susceptibility and risk maps were produced by applying the semi-quantitative WLC (Weighted Linear Combination) method, especially calibrated for this particular catastrophic event. The results were evaluated by using detailed field mapping data referring to the spatial distribution of the surface ruptures caused by the subsidence. The high correlation between the produced maps and the field mapping data, have proved the great value of the maps and of the applied technique on the management and the mitigation of the phenomena. Obviously, these maps can be safely used by decision-making authorities for the future urban safety development.
Murayama, Tomonori; Nakajima, Jun
2016-01-01
Anatomical segmentectomies play an important role in oncological lung resection, particularly for ground-glass types of primary lung cancers. This operation can also be applied to metastatic lung tumors deep in the lung. Virtual assisted lung mapping (VAL-MAP) is a novel technique that allows for bronchoscopic multi-spot dye markings to provide “geometric information” to the lung surface, using three-dimensional virtual images. In addition to wedge resections, VAL-MAP has been found to be useful in thoracoscopic segmentectomies, particularly complex segmentectomies, such as combined subsegmentectomies or extended segmentectomies. There are five steps in VAL-MAP-assisted segmentectomies: (I) “standing” stitches along the resection lines; (II) cleaning hilar anatomy; (III) confirming hilar anatomy; (IV) going 1 cm deeper; (V) step-by-step stapling technique. Depending on the anatomy, segmentectomies can be classified into linear (lingular, S6, S2), V- or U-shaped (right S1, left S3, S2b + S3a), and three dimensional (S7, S8, S9, S10) segmentectomies. Particularly three dimensional segmentectomies are challenging in the complexity of stapling techniques. This review focuses on how VAL-MAP can be utilized in segmentectomy, and how this technique can assist the stapling process in even the most challenging ones. PMID:28066675
Remote sensing and GIS-based prediction and assessment of copper-gold resources in Thailand
NASA Astrophysics Data System (ADS)
Yang, Shasha; Wang, Gongwen; Du, Wenhui; Huang, Luxiong
2014-03-01
Quantitative integration of geological information is a frontier and hotspot of prospecting decision research in the world. The forming process of large scale Cu-Au deposits is influenced by complicated geological events and restricted by various geological factors (stratum, structure and alteration). In this paper, using Thailand's copper-gold deposit district as a case study, geological anomaly theory is used along with the typical copper and gold metallogenic model, ETM+ remote sensing images, geological maps and mineral geology database in study area are combined with GIS technique. These techniques create ore-forming information such as geological information (strata, line-ring faults, intrusion), remote sensing information (hydroxyl alteration, iron alteration, linear-ring structure) and the Cu-Au prospect targets. These targets were identified using weights of evidence model. The research results show that the remote sensing and geological data can be combined to quickly predict and assess for exploration of mineral resources in a regional metallogenic belt.
Linear programming model to develop geodiversity map using utility theory
NASA Astrophysics Data System (ADS)
Sepehr, Adel
2015-04-01
In this article, the classification and mapping of geodiversity based on a quantitative methodology was accomplished using linear programming, the central idea of which being that geosites and geomorphosites as main indicators of geodiversity can be evaluated by utility theory. A linear programming method was applied for geodiversity mapping over Khorasan-razavi province located in eastern north of Iran. In this route, the main criteria for distinguishing geodiversity potential in the studied area were considered regarding rocks type (lithology), faults position (tectonic process), karst area (dynamic process), Aeolian landforms frequency and surface river forms. These parameters were investigated by thematic maps including geology, topography and geomorphology at scales 1:100'000, 1:50'000 and 1:250'000 separately, imagery data involving SPOT, ETM+ (Landsat 7) and field operations directly. The geological thematic layer was simplified from the original map using a practical lithologic criterion based on a primary genetic rocks classification representing metamorphic, igneous and sedimentary rocks. The geomorphology map was provided using DEM at scale 30m extracted by ASTER data, geology and google earth images. The geology map shows tectonic status and geomorphology indicated dynamic processes and landform (karst, Aeolian and river). Then, according to the utility theory algorithms, we proposed a linear programming to classify geodiversity degree in the studied area based on geology/morphology parameters. The algorithm used in the methodology was consisted a linear function to be maximized geodiversity to certain constraints in the form of linear equations. The results of this research indicated three classes of geodiversity potential including low, medium and high status. The geodiversity potential shows satisfied conditions in the Karstic areas and Aeolian landscape. Also the utility theory used in the research has been decreased uncertainty of the evaluations.
The 4-8 GHz Microwave Active and Passive Spectrometer (MAPS). Volume 1: Radar section
NASA Technical Reports Server (NTRS)
Ulaby, F. T.
1973-01-01
The performance characteristics of the radar section of the prototype 4-8 GHz Microwave Active and Passive Spectrometer system are reported. Active and passive spectral responses were measured of natural, cultivated, and human-made surfaces over the 4-18 GHz region of frequencies for look angles between zero and 70 degrees and for all possible linear polarization combinations. Soil and plant samples were collected to measure their dielectric properties and moisture content. The FORTRAN program for area calculation is provided.
Izquierdo-Garcia, David; Hansen, Adam E; Förster, Stefan; Benoit, Didier; Schachoff, Sylvia; Fürst, Sebastian; Chen, Kevin T; Chonde, Daniel B; Catana, Ciprian
2014-11-01
We present an approach for head MR-based attenuation correction (AC) based on the Statistical Parametric Mapping 8 (SPM8) software, which combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (μ maps) from MR data in integrated PET/MR scanners. Coregistered anatomic MR and CT images of 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray matter, white matter, cerebrospinal fluid, bone, soft tissue, and air), which were then nonrigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomic MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients to be used for AC of PET data. The method was validated on 16 new subjects with brain tumors (n = 12) or mild cognitive impairment (n = 4) who underwent CT and PET/MR scans. The μ maps and corresponding reconstructed PET images were compared with those obtained using the gold standard CT-based approach and the Dixon-based method available on the Biograph mMR scanner. Relative change (RC) images were generated in each case, and voxel- and region-of-interest-based analyses were performed. The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain linear attenuation coefficients (RC, 1.38% ± 4.52%) compared with the gold standard. Similar results (RC, 1.86% ± 4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and region-of-interest-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87% ± 5.0% and 2.74% ± 2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0% ± 10.25% and 9.38% ± 4.97%, respectively). Areas closer to the skull showed the largest improvement. We have presented an SPM8-based approach for deriving the head μ map from MR data to be used for PET AC in integrated PET/MR scanners. Its implementation is straightforward and requires only the morphologic data acquired with a single MR sequence. The method is accurate and robust, combining the strengths of both segmentation- and atlas-based approaches while minimizing their drawbacks. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Kamei, Ryotaro; Watanabe, Yuji; Sagiyama, Koji; Isoda, Takuro; Togao, Osamu; Honda, Hiroshi
2018-05-23
To investigate the optimal monochromatic color combination for fusion imaging of FDG-PET and diffusion-weighted MR images (DW) regarding lesion conspicuity of each image. Six linear monochromatic color-maps of red, blue, green, cyan, magenta, and yellow were assigned to each of the FDG-PET and DW images. Total perceptual color differences of the lesions were calculated based on the lightness and chromaticity measured with the photometer. Visual lesion conspicuity was also compared among the PET-only, DW-only and PET-DW-double positive portions with mean conspicuity scores. Statistical analysis was performed with a one-way analysis of variance and Spearman's rank correlation coefficient. Among all the 12 possible monochromatic color-map combinations, the 3 combinations of red/cyan, magenta/green, and red/green produced the highest conspicuity scores. Total color differences between PET-positive and double-positive portions correlated with conspicuity scores (ρ = 0.2933, p < 0.005). Lightness differences showed a significant negative correlation with conspicuity scores between the PET-only and DWI-only positive portions. Chromaticity differences showed a marginally significant correlation with conspicuity scores between DWI-positive and double-positive portions. Monochromatic color combinations can facilitate the visual evaluation of FDG-uptake and diffusivity as well as registration accuracy on the FDG-PET/DW fusion images, when red- and green-colored elements are assigned to FDG-PET and DW images, respectively.
sCMOS detector for imaging VNIR spectrometry
NASA Astrophysics Data System (ADS)
Eckardt, Andreas; Reulke, Ralf; Schwarzer, Horst; Venus, Holger; Neumann, Christian
2013-09-01
The facility Optical Information Systems (OS) at the Robotics and Mechatronics Center of the German Aerospace Center (DLR) has more than 30 years of experience with high-resolution imaging technology. This paper shows the scientific results of the institute of leading edge instruments and focal plane designs for EnMAP VIS/NIR spectrograph. EnMAP (Environmental Mapping and Analysis Program) is one of the selected proposals for the national German Space Program. The EnMAP project includes the technological design of the hyper spectral space borne instrument and the algorithms development of the classification. The EnMAP project is a joint response of German Earth observation research institutions, value-added resellers and the German space industry like Kayser-Threde GmbH (KT) and others to the increasing demand on information about the status of our environment. The Geo Forschungs Zentrum (GFZ) Potsdam is the Principal Investigator of EnMAP. DLR OS and KT were driving the technology of new detectors and the FPA design for this project, new manufacturing accuracy and on-chip processing capability in order to keep pace with the ambitious scientific and user requirements. In combination with the engineering research, the current generations of space borne sensor systems are focusing on VIS/NIR high spectral resolution to meet the requirements on earth and planetary observation systems. The combination of large swath and high spectral resolution with intelligent synchronization control, fast-readout ADC chains and new focal-plane concepts open the door to new remote-sensing and smart deep space instruments. The paper gives an overview over the detector verification program at DLR on FPA level, new control possibilities for sCMOS detectors in global shutter mode and key parameters like PRNU, DSNU, MTF, SNR, Linearity, Spectral Response, Quantum Efficiency, Flatness and Radiation Tolerance will be discussed in detail.
NASA Astrophysics Data System (ADS)
Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji
2015-06-01
We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.
Visual EKF-SLAM from Heterogeneous Landmarks †
Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.
2016-01-01
Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprehend the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks); from this perspective, the comparison between landmark parametrizations and the evaluation of how the heterogeneity improves the accuracy on the camera localization, the development of a front-end active-search process for linear landmarks integrated into SLAM and the experimentation methodology. PMID:27070602
Voltage gradient mapping and electrophysiologically guided cryoablation in children with AVNRT.
Drago, Fabrizio; Battipaglia, Irma; Russo, Mario Salvatore; Remoli, Romolo; Pazzano, Vincenzo; Grifoni, Gino; Allegretti, Greta; Silvetti, Massimo Stefano
2018-04-01
Recently, voltage gradient mapping of Koch's triangle to find low-voltage connections, or 'voltage bridges', corresponding to the anatomic position of the slow pathway, has been introduced as a method to ablate atrioventricular nodal reentry tachycardia (AVNRT) in children. Thus, we aimed to assess the effectiveness of voltage mapping of Koch's triangle, combined with the search for the slow potential signal in 'low-voltage bridges', to guide cryoablation of AVNRT in children. From June 2015 to May 2016, 35 consecutive paediatric patients (mean age 12.1 ± 4.5 years) underwent 3D-guided cryoablation of AVNRT at our Institution. Fifteen children were enrolled as control group (mean age 14 ± 4 years). A voltage gradient mapping of Koch's triangle was obtained in all patients, showing low-voltage connections in all children with AVNRT but not in controls. Prior to performing cryoablation, we looked for the typical 'hump and spike' electrogram, generally considered to be representative of slow pathway potential within a low-voltage bridge. In all patients the 'hump and spike' electrogram was found inside bridges of low voltage. Focal or high-density linear lesions, extended or not, were delivered guided by low-voltage bridge visualization. Acute success rate was 100%, and no recurrence was reported at a mean follow-up of 8 ± 3 months. Voltage gradient mapping of Koch's triangle, combined with the search for the slow potential signal in low-voltage bridges, is effective in guiding cryoablation of AVNRT in paediatric patients, with a complete acute success rate and no AVNRT recurrences at mid-term follow-up.
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time
Dhar, Amrit
2017-01-01
Abstract Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780
MacNab, Ying C
2016-08-01
This paper concerns with multivariate conditional autoregressive models defined by linear combination of independent or correlated underlying spatial processes. Known as linear models of coregionalization, the method offers a systematic and unified approach for formulating multivariate extensions to a broad range of univariate conditional autoregressive models. The resulting multivariate spatial models represent classes of coregionalized multivariate conditional autoregressive models that enable flexible modelling of multivariate spatial interactions, yielding coregionalization models with symmetric or asymmetric cross-covariances of different spatial variation and smoothness. In the context of multivariate disease mapping, for example, they facilitate borrowing strength both over space and cross variables, allowing for more flexible multivariate spatial smoothing. Specifically, we present a broadened coregionalization framework to include order-dependent, order-free, and order-robust multivariate models; a new class of order-free coregionalized multivariate conditional autoregressives is introduced. We tackle computational challenges and present solutions that are integral for Bayesian analysis of these models. We also discuss two ways of computing deviance information criterion for comparison among competing hierarchical models with or without unidentifiable prior parameters. The models and related methodology are developed in the broad context of modelling multivariate data on spatial lattice and illustrated in the context of multivariate disease mapping. The coregionalization framework and related methods also present a general approach for building spatially structured cross-covariance functions for multivariate geostatistics. © The Author(s) 2016.
Frequency domain technique for a two-dimensional mapping of optical tissue properties
NASA Astrophysics Data System (ADS)
Bocher, Thomas; Beuthan, Juergen; Minet, Olaf; Naber, Rolf-Dieter; Mueller, Gerhard J.
1995-12-01
Locally and individually varying optical tissue parameters (mu) a, (mu) s, and g are responsible for non-neglectible uncertainties in the interpretation of spectroscopic data in optical biopsy techniques. The intrinsic fluorescence signal for instance doesn't depend only on the fluorophore concentration but also on the amount of other background absorbers and on alterations of scattering properties. Therefore neither a correct relative nor an absolute mapping of the lateral fluorophore concentration can be derived from the intrinsic fluorescence signal alone. Using MC-simulations it can be shown that in time-resolved LIFS the simultaneously measured backscattered signal of the excitation wavelength (UV) can be used to develop a special, linearized rescaling algorithm to take into account the most dominant of these varying tissue parameters which is (mu) a,ex. In combination with biochemical calibration measurements we were able to perform fiberbased quantitative NADH- concentration measurements. In this paper a new rescaling method for VIS and IR light in the frequency domain is proposed. It can be applied within the validity range of the diffusion approximation and provides full (mu) a and (mu) s rescaling possibility in a 2- dimensional, non-contact mapping mode. The scanning device is planned to be used in combination with a standard operation microscope of ZEISS, Germany.
NASA Astrophysics Data System (ADS)
Müller, Vilhelm; Rajer, Fredrika; Frykholm, Karolin; Nyberg, Lena K.; Quaderi, Saair; Fritzsche, Joachim; Kristiansson, Erik; Ambjörnsson, Tobias; Sandegren, Linus; Westerlund, Fredrik
2016-12-01
Bacterial plasmids are extensively involved in the rapid global spread of antibiotic resistance. We here present an assay, based on optical DNA mapping of single plasmids in nanofluidic channels, which provides detailed information about the plasmids present in a bacterial isolate. In a single experiment, we obtain the number of different plasmids in the sample, the size of each plasmid, an optical barcode that can be used to identify and trace the plasmid of interest and information about which plasmid that carries a specific resistance gene. Gene identification is done using CRISPR/Cas9 loaded with a guide-RNA (gRNA) complementary to the gene of interest that linearizes the circular plasmids at a specific location that is identified using the optical DNA maps. We demonstrate the principle on clinically relevant extended spectrum beta-lactamase (ESBL) producing isolates. We discuss how the gRNA sequence can be varied to obtain the desired information. The gRNA can either be very specific to identify a homogeneous group of genes or general to detect several groups of genes at the same time. Finally, we demonstrate an example where we use a combination of two gRNA sequences to identify carbapenemase-encoding genes in two previously not characterized clinical bacterial samples.
The radio continuum-star formation rate relation in WSRT sings galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heesen, Volker; Brinks, Elias; Leroy, Adam K.
2014-05-01
We present a study of the spatially resolved radio continuum-star formation rate (RC-SFR) relation using state-of-the-art star formation tracers in a sample of 17 THINGS galaxies. We use SFR surface density (Σ{sub SFR}) maps created by a linear combination of GALEX far-UV (FUV) and Spitzer 24 μm maps. We use RC maps at λλ22 and 18 cm from the WSRT SINGS survey and Hα emission maps to correct for thermal RC emission. We compare azimuthally averaged radial profiles of the RC and FUV/mid-IR (MIR) based Σ{sub SFR} maps and study pixel-by-pixel correlations at fixed linear scales of 1.2 and 0.7more » kpc. The ratio of the integrated SFRs from the RC emission to that of the FUV/MIR-based SF tracers is R{sub int}=0.78±0.38, consistent with the relation by Condon. We find a tight correlation between the radial profiles of the radio and FUV/MIR-based Σ{sub SFR} for the entire extent of the disk. The ratio R of the azimuthally averaged radio to FUV/MIR-based Σ{sub SFR} agrees with the integrated ratio and has only quasi-random fluctuations with galactocentric radius that are relatively small (25%). Pixel-by-pixel plots show a tight correlation in log-log diagrams of radio to FUV/MIR-based Σ{sub SFR}, with a typical standard deviation of a factor of two. Averaged over our sample we find (Σ{sub SFR}){sub RC}∝(Σ{sub SFR}){sub hyb}{sup 0.63±0.25}, implying that data points with high Σ{sub SFR} are relatively radio dim, whereas the reverse is true for low Σ{sub SFR}. We interpret this as a result of spectral aging of cosmic-ray electrons (CREs), which are diffusing away from the star formation sites where they are injected into the interstellar medium. This is supported by our finding that the radio spectral index is a second parameter in pixel-by-pixel plots: those data points dominated by young CREs are relatively radio dim, while those dominated by old CREs are slightly more RC bright than what would be expected from a linear extrapolation. We studied the ratio R of radio to FUV/MIR-based integrated SFR as a function of global galaxy parameters and found no clear correlation. This suggests that we can use RC emission as a universal star formation tracer for galaxies with a similar degree of accuracy as other tracers, if we restrict ourselves to global or azimuthally averaged measurements. We can reconcile our finding of an almost linear RC-SFR relation and sub-linear resolved (on 1 kpc scale) RC-Σ{sub SFR} relation by proposing a non-linear magnetic field-SFR relation, B∝SFR{sub hyb}{sup 0.30±0.02}, which holds both globally and locally.« less
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Power Control and Optimization of Photovoltaic and Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Ghaffari, Azad
Power map and Maximum Power Point (MPP) of Photovoltaic (PV) and Wind Energy Conversion Systems (WECS) highly depend on system dynamics and environmental parameters, e.g., solar irradiance, temperature, and wind speed. Power optimization algorithms for PV systems and WECS are collectively known as Maximum Power Point Tracking (MPPT) algorithm. Gradient-based Extremum Seeking (ES), as a non-model-based MPPT algorithm, governs the system to its peak point on the steepest descent curve regardless of changes of the system dynamics and variations of the environmental parameters. Since the power map shape defines the gradient vector, then a close estimate of the power map shape is needed to create user assignable transients in the MPPT algorithm. The Hessian gives a precise estimate of the power map in a neighborhood around the MPP. The estimate of the inverse of the Hessian in combination with the estimate of the gradient vector are the key parts to implement the Newton-based ES algorithm. Hence, we generate an estimate of the Hessian using our proposed perturbation matrix. Also, we introduce a dynamic estimator to calculate the inverse of the Hessian which is an essential part of our algorithm. We present various simulations and experiments on the micro-converter PV systems to verify the validity of our proposed algorithm. The ES scheme can also be used in combination with other control algorithms to achieve desired closed-loop performance. The WECS dynamics is slow which causes even slower response time for the MPPT based on the ES. Hence, we present a control scheme, extended from Field-Oriented Control (FOC), in combination with feedback linearization to reduce the convergence time of the closed-loop system. Furthermore, the nonlinear control prevents magnetic saturation of the stator of the Induction Generator (IG). The proposed control algorithm in combination with the ES guarantees the closed-loop system robustness with respect to high level parameter uncertainty in the IG dynamics. The simulation results verify the effectiveness of the proposed algorithm.
Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A
2014-08-01
The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through aMATLAB interface, which is based on the sequencing of a novel map, called "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check CARMEN system showed a high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps performed from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
Characterizing regional soil mineral composition using spectroscopyand geostatistics
Mulder, V.L.; de Bruin, S.; Weyermann, J.; Kokaly, Raymond F.; Schaepman, M.E.
2013-01-01
This work aims at improving the mapping of major mineral variability at regional scale using scale-dependent spatial variability observed in remote sensing data. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data and statistical methods were combined with laboratory-based mineral characterization of field samples to create maps of the distributions of clay, mica and carbonate minerals and their abundances. The Material Identification and Characterization Algorithm (MICA) was used to identify the spectrally-dominant minerals in field samples; these results were combined with ASTER data using multinomial logistic regression to map mineral distributions. X-ray diffraction (XRD)was used to quantify mineral composition in field samples. XRD results were combined with ASTER data using multiple linear regression to map mineral abundances. We testedwhether smoothing of the ASTER data to match the scale of variability of the target sample would improve model correlations. Smoothing was donewith Fixed Rank Kriging (FRK) to represent the mediumand long-range spatial variability in the ASTER data. Stronger correlations resulted using the smoothed data compared to results obtained with the original data. Highest model accuracies came from using both medium and long-range scaled ASTER data as input to the statistical models. High correlation coefficients were obtained for the abundances of calcite and mica (R2 = 0.71 and 0.70, respectively). Moderately-high correlation coefficients were found for smectite and kaolinite (R2 = 0.57 and 0.45, respectively). Maps of mineral distributions, obtained by relating ASTER data to MICA analysis of field samples, were found to characterize major soil mineral variability (overall accuracies for mica, smectite and kaolinite were 76%, 89% and 86% respectively). The results of this study suggest that the distributions of minerals and their abundances derived using FRK-smoothed ASTER data more closely match the spatial variability of soil and environmental properties at regional scale.
Reconstructing the gravitational field of the local Universe
NASA Astrophysics Data System (ADS)
Desmond, Harry; Ferreira, Pedro G.; Lavaux, Guilhem; Jasche, Jens
2018-03-01
Tests of gravity at the galaxy scale are in their infancy. As a first step to systematically uncovering the gravitational significance of galaxies, we map three fundamental gravitational variables - the Newtonian potential, acceleration and curvature - over the galaxy environments of the local Universe to a distance of approximately 200 Mpc. Our method combines the contributions from galaxies in an all-sky redshift survey, haloes from an N-body simulation hosting low-luminosity objects, and linear and quasi-linear modes of the density field. We use the ranges of these variables to determine the extent to which galaxies expand the scope of generic tests of gravity and are capable of constraining specific classes of model for which they have special significance. Finally, we investigate the improvements afforded by upcoming galaxy surveys.
A Visual Analytics Approach for Station-Based Air Quality Data
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-01-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117
Tectonic map of Liberia based on geophysical and geological surveys
Behrendt, John Charles; Wotorson, Cletus S.
1972-01-01
Interpretation of the results of aeromagnetic, total-gamma radioactivity, and gravity surveys combined with geologic data for Western Liberia from White and Leo (1969) and other geologic information allows the construction of a tectonic map of Liberia. The map approximately delineates the boundaries between the Liberian (ca. 2700 m.y.) province in the northwestern two-thirds of the country, the Eburnean (ca. 2000 m.y.) province in the south-eastern one-third, and the Pan-African (ca. 550 m.y.) province in the coastal area of the northwestern two-thirds of the country. Rock follation and tectonic structural features trend northeastward in the Liberian province, east-northeastward to north-northeastward in the Eburnean province, and northwestward in the Pan-African age province. Linear residual magnetic anomailes 20-80 km wide and 200-600 gammas in amplitude and following the northeast structural trend typical of the Liberian age province cross the entire country and extend into Sierra Leone and Ivory Coast.
A Visual Analytics Approach for Station-Based Air Quality Data.
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-12-24
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.
The tectonics of Titan: Global structural mapping from Cassini RADAR
Liu, Zac Yung-Chun; Radebaugh, Jani; Harris, Ron A.; Christiansen, Eric H.; Neish, Catherine D.; Kirk, Randolph L.; Lorenz, Ralph D.; ,
2016-01-01
The Cassini RADAR mapper has imaged elevated mountain ridge belts on Titan with a linear-to-arcuate morphology indicative of a tectonic origin. Systematic geomorphologic mapping of the ridges in Synthetic Aperture RADAR (SAR) images reveals that the orientation of ridges is globally E–W and the ridges are more common near the equator than the poles. Comparison with a global topographic map reveals the equatorial ridges are found to lie preferentially at higher-than-average elevations. We conclude the most reasonable formation scenario for Titan’s ridges is that contractional tectonism built the ridges and thickened the icy lithosphere near the equator, causing regional uplift. The combination of global and regional tectonic events, likely contractional in nature, followed by erosion, aeolian activity, and enhanced sedimentation at mid-to-high latitudes, would have led to regional infilling and perhaps covering of some mountain features, thus shaping Titan’s tectonic landforms and surface morphology into what we see today.
Demonstration of Cosmic Microwave Background Delensing Using the Cosmic Infrared Background.
Larsen, Patricia; Challinor, Anthony; Sherwin, Blake D; Mak, Daisy
2016-10-07
Delensing is an increasingly important technique to reverse the gravitational lensing of the cosmic microwave background (CMB) and thus reveal primordial signals the lensing may obscure. We present a first demonstration of delensing on Planck temperature maps using the cosmic infrared background (CIB). Reversing the lensing deflections in Planck CMB temperature maps using a linear combination of the 545 and 857 GHz maps as a lensing tracer, we find that the lensing effects in the temperature power spectrum are reduced in a manner consistent with theoretical expectations. In particular, the characteristic sharpening of the acoustic peaks of the temperature power spectrum resulting from successful delensing is detected at a significance of 16σ, with an amplitude of A_{delens}=1.12±0.07 relative to the expected value of unity. This first demonstration on data of CIB delensing, and of delensing techniques in general, is significant because lensing removal will soon be essential for achieving high-precision constraints on inflationary B-mode polarization.
Digitization of a geologic map for the Quebec-Maine-Gulf of Maine global geoscience transect
Wright, Bruce E.; Stewart, David B.
1990-01-01
The Bedrock Geologic Map of Maine was digitized and combined with digital geologic data for Quebec and the Gulf of Maine for the Quebec-Maine-Gulf of Maine Geologic Transect Project. This map is being combined with digital geophysical data to produce three-dimensional depictions of the subsurface geology and to produce cross sections of the Earth's crust. It is an essential component of a transect that stretches from the craton near Quebec City, Quebec, to the Atlantic Ocean Basin south of Georges Bank. The transect is part of the Global Geosciences Transect Project of the International Lithosphere Program. The Digital Line Graph format is used for storage of the digitized data. A coding scheme similar to that used for base category planimetric data was developed to assign numeric codes to the digitized geologic data. These codes were used to assign attributes to polygon and line features to describe rock type, age, name, tectonic setting of original deposition, mineralogy, and composition of igneous plutonic rocks, as well as faults and other linear features. The digital geologic data can be readily edited, rescaled, and reprojected. The attribute codes allow generalization and selective retrieval of the geologic features. The codes allow assignment of map colors based on age, lithology, or other attribute. The Digital Line Graph format is a general transfer format that is supported by many software vendors and is easily transferred between systems.
Prime, Thomas; Brown, Jennifer M.; Plater, Andrew J.
2015-01-01
Conventionally flood mapping typically includes only a static water level (e.g. peak of a storm tide) in coastal flood inundation events. Additional factors become increasingly important when increased water-level thresholds are met during the combination of a storm tide and increased mean sea level. This research incorporates factors such as wave overtopping and river flow in a range of flood inundation scenarios of future sea-level projections for a UK case study of Fleetwood, northwest England. With increasing mean sea level it is shown that wave overtopping and river forcing have an important bearing on the cost of coastal flood events. The method presented converts inundation maps into monetary cost. This research demonstrates that under scenarios of joint extreme surge-wave-river events the cost of flooding can be increased by up to a factor of 8 compared with an increase in extent of up to a factor of 3 relative to “surge alone” event. This is due to different areas being exposed to different flood hazards and areas with common hazard where flood waters combine non-linearly. This shows that relying simply on flood extent and volume can under-predict the actual economic impact felt by a coastal community. Additionally, the scenario inundation depths have been presented as “brick course” maps, which represent a new way of interpreting flood maps. This is primarily aimed at stakeholders to increase levels of engagement within the coastal community. PMID:25710497
Prime, Thomas; Brown, Jennifer M; Plater, Andrew J
2015-01-01
Conventionally flood mapping typically includes only a static water level (e.g. peak of a storm tide) in coastal flood inundation events. Additional factors become increasingly important when increased water-level thresholds are met during the combination of a storm tide and increased mean sea level. This research incorporates factors such as wave overtopping and river flow in a range of flood inundation scenarios of future sea-level projections for a UK case study of Fleetwood, northwest England. With increasing mean sea level it is shown that wave overtopping and river forcing have an important bearing on the cost of coastal flood events. The method presented converts inundation maps into monetary cost. This research demonstrates that under scenarios of joint extreme surge-wave-river events the cost of flooding can be increased by up to a factor of 8 compared with an increase in extent of up to a factor of 3 relative to "surge alone" event. This is due to different areas being exposed to different flood hazards and areas with common hazard where flood waters combine non-linearly. This shows that relying simply on flood extent and volume can under-predict the actual economic impact felt by a coastal community. Additionally, the scenario inundation depths have been presented as "brick course" maps, which represent a new way of interpreting flood maps. This is primarily aimed at stakeholders to increase levels of engagement within the coastal community.
Brain-heart linear and nonlinear dynamics during visual emotional elicitation in healthy subjects.
Valenza, G; Greco, A; Gentili, C; Lanata, A; Toschi, N; Barbieri, R; Sebastiani, L; Menicucci, D; Gemignani, A; Scilingo, E P
2016-08-01
This study investigates brain-heart dynamics during visual emotional elicitation in healthy subjects through linear and nonlinear coupling measures of EEG spectrogram and instantaneous heart rate estimates. To this extent, affective pictures including different combinations of arousal and valence levels, gathered from the International Affective Picture System, were administered to twenty-two healthy subjects. Time-varying maps of cortical activation were obtained through EEG spectral analysis, whereas the associated instantaneous heartbeat dynamics was estimated using inhomogeneous point-process linear models. Brain-Heart linear and nonlinear coupling was estimated through the Maximal Information Coefficient (MIC), considering EEG time-varying spectra and point-process estimates defined in the time and frequency domains. As a proof of concept, we here show preliminary results considering EEG oscillations in the θ band (4-8 Hz). This band, indeed, is known in the literature to be involved in emotional processes. MIC highlighted significant arousal-dependent changes, mediated by the prefrontal cortex interplay especially occurring at intermediate arousing levels. Furthermore, lower and higher arousing elicitations were associated to not significant brain-heart coupling changes in response to pleasant/unpleasant elicitations.
An efficient method for model refinement in diffuse optical tomography
NASA Astrophysics Data System (ADS)
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary value and optimization problem which necessitates regularization. Also, Bayesian methods are suitable owing to measurements data are sparse and correlated. In such problems which are solved with iterative methods, for stabilization and better convergence, the solution space must be small. These constraints subject to extensive and overdetermined system of equations which model retrieving criteria specially total least squares (TLS) must to refine model error. Using TLS is limited to linear systems which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) for treating on linearized DOT problem, having maximum a posteriori (MAP) estimator and Tikhonov regulator. This is done with combination Bayesian and regularization tools as preconditioner matrices, applying them to equations and then using RTLS to the resulting linear equations. The preconditioning matrixes are guided by patient specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that proposed method improves the image reconstruction performance and localize the abnormally well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajpathak, Bhooshan, E-mail: bhooshan@ee.iitb.ac.in; Pillai, Harish K., E-mail: hp@ee.iitb.ac.in; Bandyopadhyay, Santanu, E-mail: santanu@me.iitb.ac.in
2015-10-15
In this paper, we analytically examine the unstable periodic orbits and chaotic orbits of the 1-D linear piecewise-smooth discontinuous map. We explore the existence of unstable orbits and the effect of variation in parameters on the coexistence of unstable orbits. Further, we show that this structuring is different from the well known period adding cascade structure associated with the stable periodic orbits of the same map. Further, we analytically prove the existence of chaotic orbit for this map.
The Effect of Using Concept Maps in Elementary Linear Algebra Course on Students’ Learning
NASA Astrophysics Data System (ADS)
Syarifuddin, H.
2018-04-01
This paper presents the results of a classroom action research that was done in Elementary Linear Algebra course at Universitas Negeri Padang. The focus of the research want to see the effect of using concept maps in the course on students’ learning. Data in this study were collected through classroom observation, students’ reflective journal and concept maps that were created by students. The result of the study was the using of concept maps in Elementary Linera Algebra course gave positive effect on students’ learning.
Dissecting Antibodies with Regards to Linear and Conformational Epitopes
Forsström, Björn; Bisławska Axnäs, Barbara; Rockberg, Johan; Danielsson, Hanna; Bohlin, Anna; Uhlen, Mathias
2015-01-01
An important issue for the performance and specificity of an antibody is the nature of the binding to its protein target, including if the recognition involves linear or conformational epitopes. Here, we dissect polyclonal sera by creating epitope-specific antibody fractions using a combination of epitope mapping and an affinity capture approach involving both synthesized peptides and recombinant protein fragments. This allowed us to study the relative amounts of antibodies to linear and conformational epitopes in the polyclonal sera as well as the ability of each antibody-fraction to detect its target protein in Western blot assays. The majority of the analyzed polyclonal sera were found to have most of the target-specific antibodies directed towards linear epitopes and these were in many cases giving Western blot bands of correct molecular weight. In contrast, many of the antibodies towards conformational epitopes did not bind their target proteins in the Western blot assays. The results from this work have given us insights regarding the nature of the antibody response generated by immunization with recombinant protein fragments and has demonstrated the advantage of using antibodies recognizing linear epitopes for immunoassay involving wholly or partially denatured protein targets. PMID:25816293
kruX: matrix-based non-parametric eQTL discovery.
Qi, Jianlong; Asl, Hassan Foroughi; Björkegren, Johan; Michoel, Tom
2014-01-14
The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. KruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com.
NASA Astrophysics Data System (ADS)
Zimmermann, Robert; Brandmeier, Melanie; Andreani, Louis; Gloaguen, Richard
2015-04-01
Remote sensing data can provide valuable information about ore deposits and their alteration zones at surface level. High spectral and spatial resolution of the data is essential for detailed mapping of mineral abundances and related structures. Carbonatites are well known for hosting economic enrichments in REE, Ta, Nb and P (Jones et al. 2013). These make them a preferential target for exploration for those critical elements. In this study we show how combining geomorphic, textural and spectral data improves classification result. We selected a site with a well-known occurrence in northern Namibia: the Epembe dyke. For analysis LANDSAT 8, SRTM and airborne hyperspectral (HyMap) data were chosen. The overlapping data allows a multi-scale and multi-resolution approach. Results from data analysis were validated during fieldwork in 2014. Data was corrected for atmospherical and geometrical effects. Image classification, mineral mapping and tectonic geomorphology allow a refinement of the geological map by lithological mapping in a second step. Detailed mineral abundance maps were computed using spectral unmixing techniques. These techniques are well suited to map abundances of carbonate minerals, but not to discriminate the carbonatite itself from surrounding rocks with similar spectral signatures. Thus, geometric indices were calculated using tectonic geomorphology and textures. For this purpose the TecDEM-toolbox (SHAHZAD & GLOAGUEN 2011) was applied to the SRTM-data for geomorphic analysis. Textural indices (e.g. uniformity, entropy, angular second moment) were derived from HyMap and SRTM by a grey-level co-occurrence matrix (CLAUSI 2002). The carbonatite in the study area is ridge-forming and shows a narrow linear feature in the textural bands. Spectral and geometric information were combined using kohonen Self-Organizing Maps (SOM) for unsupervised clustering. The resulting class spectra were visually compared and interpreted. Classes with similar signatures were merged according to geological context. The major conclusions are: 1. Carbonate minerals can be mapped using spectral unmixing techniques. 2. Carbonatites are associated with specific geometric pattern 3. The combination of spectral and geometric information improves classification result and reduces misclassification. References Clausi, D. A. (2002): An analysis of co-occurrence texture statistics as a function of grey-level quantization. - Canadian Journal of Remote Sensing, 28 (1), 45-62 Jones, A. P., Genge, M. and Carmody, L (2013): Carbonate Melts and Carbonatites. - Reviews in Mineralogy & Geochemistry, 75, 289-322 Shahzad, F. & Gloaguen, R. (2011): TecDEM: A MATLAB based toolbox for tectonic geomorphology, Part 2: Surface dynamics and basin analysis. - Computers and Geosciences, 37 (2), 261-271
NASA Astrophysics Data System (ADS)
Bauer, Adam Q.; Kraft, Andrew; Baxter, Grant A.; Bruchas, Michael; Lee, Jin-Moo; Culver, Joseph P.
2017-02-01
Functional magnetic resonance imaging (fMRI) has transformed our understanding of the brain's functional organization. However, mapping subunits of a functional network using hemoglobin alone presents several disadvantages. Evoked and spontaneous hemodynamic fluctuations reflect ensemble activity from several populations of neurons making it difficult to discern excitatory vs inhibitory network activity. Still, blood-based methods of brain mapping remain powerful because hemoglobin provides endogenous contrast in all mammalian brains. To add greater specificity to hemoglobin assays, we integrated optical intrinsic signal(OIS) imaging with optogenetic stimulation to create an Opto-OIS mapping tool that combines the cell-specificity of optogenetics with label-free, hemoglobin imaging. Before mapping, titrated photostimuli determined which stimulus parameters elicited linear hemodynamic responses in the cortex. Optimized stimuli were then scanned over the left hemisphere to create a set of optogenetically-defined effective connectivity (Opto-EC) maps. For many sites investigated, Opto-EC maps exhibited higher spatial specificity than those determined using spontaneous hemodynamic fluctuations. For example, resting-state functional connectivity (RS-FC) patterns exhibited widespread ipsilateral connectivity while Opto-EC maps contained distinct short- and long-range constellations of ipsilateral connectivity. Further, RS-FC maps were usually symmetric about midline while Opto-EC maps displayed more heterogeneous contralateral homotopic connectivity. Both Opto-EC and RS-FC patterns were compared to mouse connectivity data from the Allen Institute. Unlike RS-FC maps, Thy1-based maps collected in awake, behaving mice closely recapitulated the connectivity structure derived using ex vivo anatomical tracer methods. Opto-OIS mapping could be a powerful tool for understanding cellular and molecular contributions to network dynamics and processing in the mouse brain.
Complex Archaeological Prospection Using Combination of Non-destructive Techniques
NASA Astrophysics Data System (ADS)
Faltýnová, M.; Pavelka, K.; Nový, P.; Šedina, J.
2015-08-01
This article describes the use of a combination of non-destructive techniques for the complex documentation of a fabulous historical site called Devil's Furrow, an unusual linear formation lying in the landscape of central Bohemia. In spite of many efforts towards interpretation of the formation, its original form and purpose have not yet been explained in a satisfactory manner. The study focuses on the northern part of the furrow which appears to be a dissimilar element within the scope of the whole Devil's Furrow. This article presents detailed description of relics of the formation based on historical map searches and modern investigation methods including airborne laser scanning, aerial photogrammetry (based on airplane and RPAS) and ground-penetrating radar. Airborne laser scanning data and aerial orthoimages acquired by the Czech Office for Surveying, Mapping and Cadastre were used. Other measurements were conducted by our laboratory. Data acquired by various methods provide sufficient information to determine the probable original shape of the formation and proves explicitly the anthropological origin of the northern part of the formation (around village Lipany).
Relaxation dynamics of internal segments of DNA chains in nanochannels
NASA Astrophysics Data System (ADS)
Jain, Aashish; Muralidhar, Abhiram; Dorfman, Kevin; Dorfman Group Team
We will present relaxation dynamics of internal segments of a DNA chain confined in nanochannel. The results have direct application in genome mapping technology, where long DNA molecules containing sequence-specific fluorescent probes are passed through an array of nanochannels to linearize them, and then the distances between these probes (the so-called ``DNA barcode'') are measured. The relaxation dynamics of internal segments set the experimental error due to dynamic fluctuations. We developed a multi-scale simulation algorithm, combining a Pruned-Enriched Rosenbluth Method (PERM) simulation of a discrete wormlike chain model with hard spheres with Brownian dynamics (BD) simulations of a bead-spring chain. Realistic parameters such as the bead friction coefficient and spring force law parameters are obtained from PERM simulations and then mapped onto the bead-spring model. The BD simulations are carried out to obtain the extension autocorrelation functions of various segments, which furnish their relaxation times. Interestingly, we find that (i) corner segments relax faster than the center segments and (ii) relaxation times of corner segments do not depend on the contour length of DNA chain, whereas the relaxation times of center segments increase linearly with DNA chain size.
An Example of Linear Mappings: Extension to Rhotrices
ERIC Educational Resources Information Center
Aminu, Abdulhadi
2010-01-01
Let U and V be vector spaces. A mapping T : U [right arrow] V is linear if for each u[subscript 1], u[subscript 2] [is an element of] U and each scalar alpha; T(u[subscript 1] + u[subscript 2]) = T(u[subscript 1] + T(u[subscript 2]) and T(alpha u[subscript 1]) = alpha T(u[subscript 1]). We extend this mapping to the case when U and V are rhotrix…
Linear time-to-space mapping system using double electrooptic beam deflectors.
Hisatake, Shintaro; Tada, Keiji; Nagatsuma, Tadao
2008-12-22
We propose and demonstrate a linear time-to-space mapping system, which is based on two times electrooptic sinusoidal beam deflection. The direction of each deflection is set to be mutually orthogonal with the relative deflection phase of pi/2 rad so that the circular optical beam trajectory can be achieved. The beam spot at the observation plane moves with an uniform velocity and as a result linear time-to-space mapping (an uniform temporal resolution through the mapping) can be realized. The proof-of-concept experiment are carried out and the temporal resolution of 5 ps has been demonstrated using traveling-wave type quasi-velosity-matched electrooptic beam deflectors. The developed system is expected to be applied to characterization of ultrafast optical signal or optical arbitrary waveform shaping for modulated microwave/millimeter-wave generation.
Spatial effect of new municipal solid waste landfill siting using different guidelines.
Ahmad, Siti Zubaidah; Ahamad, Mohd Sanusi S; Yusoff, Mohd Suffian
2014-01-01
Proper implementation of landfill siting with the right regulations and constraints can prevent undesirable long-term effects. Different countries have respective guidelines on criteria for new landfill sites. In this article, we perform a comparative study of municipal solid waste landfill siting criteria stated in the policies and guidelines of eight different constitutional bodies from Malaysia, Australia, India, U.S.A., Europe, China and the Middle East, and the World Bank. Subsequently, a geographic information system (GIS) multi-criteria evaluation model was applied to determine new suitable landfill sites using different criterion parameters using a constraint mapping technique and weighted linear combination. Application of Macro Modeler provided in the GIS-IDRISI Andes software helps in building and executing multi-step models. In addition, the analytic hierarchy process technique was included to determine the criterion weight of the decision maker's preferences as part of the weighted linear combination procedure. The differences in spatial results of suitable sites obtained signifies that dissimilarity in guideline specifications and requirements will have an effect on the decision-making process.
Q-plates as higher order polarization controllers for orbital angular momentum modes of fiber.
Gregg, P; Mirhosseini, M; Rubano, A; Marrucci, L; Karimi, E; Boyd, R W; Ramachandran, S
2015-04-15
We demonstrate that a |q|=1/2 plate, in conjunction with appropriate polarization optics, can selectively and switchably excite all linear combinations of the first radial mode order |l|=1 orbital angular momentum (OAM) fiber modes. This enables full mapping of free-space polarization states onto fiber vector modes, including the radially (TM) and azimuthally polarized (TE) modes. The setup requires few optical components and can yield mode purities as high as ∼30 dB. Additionally, just as a conventional fiber polarization controller creates arbitrary elliptical polarization states to counteract fiber birefringence and yield desired polarizations at the output of a single-mode fiber, q-plates disentangle degenerate state mixing effects between fiber OAM states to yield pure states, even after long-length fiber propagation. We thus demonstrate the ability to switch dynamically, potentially at ∼GHz rates, between OAM modes, or create desired linear combinations of them. We envision applications in fiber-based lasers employing vector or OAM mode outputs, as well as communications networking schemes exploiting spatial modes for higher dimensional encoding.
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2015-03-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which is to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2014-11-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
NASA Astrophysics Data System (ADS)
González-López, Antonio; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen
2017-11-01
The influence of the various sources of noise on the uncertainty in radiochromic film (RCF) dosimetry using single channel and multichannel methods is investigated in this work. These sources of noise are extracted from pixel value (PV) readings and dose maps. Pieces of an RCF were each irradiated to different uniform doses, ranging from 0 to 1092 cGy. Then, the pieces were read at two resolutions (72 and 150 ppp) with two flatbed scanners: Epson 10000XL and Epson V800, representing two states of technology. Noise was extracted as described in ISO 15739 (2013), separating its distinct constituents: random noise and fixed pattern (FP) noise. Regarding the PV maps, FP noise is the main source of noise for both models of digitizer. Also, the standard deviation of the random noise in the 10000XL model is almost twice that of the V800 model. In the dose maps, the FP noise is smaller in the multichannel method than in the single channel ones. However, random noise is higher in this method, throughout the dose range. In the multichannel method, FP noise is reduced, as a consequence of this method’s ability to eliminate channel independent perturbations. However, the random noise increases, because the dose is calculated as a linear combination of the doses obtained by the single channel methods. The values of the coefficients of this linear combination are obtained in the present study, and the root of the sum of their squares is shown to range between 0.9 and 1.9 over the dose range studied. These results indicate the random noise to play a fundamental role in the uncertainty of RCF dosimetry: low levels of random noise are required in the digitizer to fully exploit the advantages of the multichannel dosimetry method. This is particularly important for measuring high doses at high spatial resolutions.
Zhang, Qianqian; Guldbrandtsen, Bernt; Calus, Mario P L; Lund, Mogens Sandø; Sahana, Goutam
2016-08-17
There is growing interest in the role of rare variants in the variation of complex traits due to increasing evidence that rare variants are associated with quantitative traits. However, association methods that are commonly used for mapping common variants are not effective to map rare variants. Besides, livestock populations have large half-sib families and the occurrence of rare variants may be confounded with family structure, which makes it difficult to disentangle their effects from family mean effects. We compared the power of methods that are commonly applied in human genetics to map rare variants in cattle using whole-genome sequence data and simulated phenotypes. We also studied the power of mapping rare variants using linear mixed models (LMM), which are the method of choice to account for both family relationships and population structure in cattle. We observed that the power of the LMM approach was low for mapping a rare variant (defined as those that have frequencies lower than 0.01) with a moderate effect (5 to 8 % of phenotypic variance explained by multiple rare variants that vary from 5 to 21 in number) contributing to a QTL with a sample size of 1000. In contrast, across the scenarios studied, statistical methods that are specialized for mapping rare variants increased power regardless of whether multiple rare variants or a single rare variant underlie a QTL. Different methods for combining rare variants in the test single nucleotide polymorphism set resulted in similar power irrespective of the proportion of total genetic variance explained by the QTL. However, when the QTL variance is very small (only 0.1 % of the total genetic variance), these specialized methods for mapping rare variants and LMM generally had no power to map the variants within a gene with sample sizes of 1000 or 5000. We observed that the methods that combine multiple rare variants within a gene into a meta-variant generally had greater power to map rare variants compared to LMM. Therefore, it is recommended to use rare variant association mapping methods to map rare genetic variants that affect quantitative traits in livestock, such as bovine populations.
NASA Astrophysics Data System (ADS)
Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas
2010-11-01
Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model resulting in Raman images, demonstrating good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing the training datasets using mapping data, despite lengthy mapping times, due to additional morphological information gained, and could facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future, but larger pixels sizes (and faster mapping) may be more feasible for clinical application.
NASA Astrophysics Data System (ADS)
Guglielmino, F.; Nunnari, G.; Puglisi, G.; Spata, A.
2009-04-01
We propose a new technique, based on the elastic theory, to efficiently produce an estimate of three-dimensional surface displacement maps by integrating sparse Global Position System (GPS) measurements of deformations and Differential Interferometric Synthetic Aperture Radar (DInSAR) maps of movements of the Earth's surface. The previous methodologies known in literature, for combining data from GPS and DInSAR surveys, require two steps: the first, in which sparse GPS measurements are interpolated in order to fill in GPS displacements at the DInSAR grid, and the second, to estimate the three-dimensional surface displacement maps by using a suitable optimization technique. One of the advantages of the proposed approach is that both these steps are unified. We propose a linear matrix equation which accounts for both GPS and DInSAR data whose solution provide simultaneously the strain tensor, the displacement field and the rigid body rotation tensor throughout the entire investigated area. The mentioned linear matrix equation is solved by using the Weighted Least Square (WLS) thus assuring both numerical robustness and high computation efficiency. The proposed methodology was tested on both synthetic and experimental data, these last from GPS and DInSAR measurements carried out on Mt. Etna. The goodness of the results has been evaluated by using standard errors. These tests also allow optimising the choice of specific parameters of this algorithm. This "open" structure of the method will allow in the near future to take account of other available data sets, such as additional interferograms, or other geodetic data (e.g. levelling, tilt, etc.), in order to achieve even higher accuracy.
Evidence for Crater Ejecta on Venus Tessera Terrain from Earth-Based Radar Images
NASA Technical Reports Server (NTRS)
Campbell, Bruce A.; Campbell, Donald B.; Morgan, Gareth A.; Carter, Lynn M.; Nolan, Michael C.; Chandler, John F.
2014-01-01
We combine Earth-based radar maps of Venus from the 1988 and 2012 inferior conjunctions, which had similar viewing geometries. Processing of both datasets with better image focusing and co-registration techniques, and summing over multiple looks, yields maps with 1-2 km spatial resolution and improved signal to noise ratio, especially in the weaker same-sense circular (SC) polarization. The SC maps are unique to Earth-based observations, and offer a different view of surface properties from orbital mapping using same-sense linear (HH or VV) polarization. Highland or tessera terrains on Venus, which may retain a record of crustal differentiation and processes occurring prior to the loss of water, are of great interest for future spacecraft landings. The Earth-based radar images reveal multiple examples of tessera mantling by impact ''parabolas'' or ''haloes'', and can extend mapping of locally thick material from Magellan data by revealing thinner deposits over much larger areas. Of particular interest is an ejecta deposit from Stuart crater that we infer to mantle much of eastern Alpha Regio. Some radar-dark tessera occurrences may indicate sediments that are trapped for longer periods than in the plains. We suggest that such radar information is important for interpretation of orbital infrared data and selection of future tessera landing sites.
Reconstructing the gravitational field of the local Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desmond, Harry; Ferreira, Pedro G.; Lavaux, Guilhem
Tests of gravity at the galaxy scale are in their infancy. As a first step to systematically uncovering the gravitational significance of galaxies, we map three fundamental gravitational variables – the Newtonian potential, acceleration and curvature – over the galaxy environments of the local Universe to a distance of approximately 200 Mpc. Our method combines the contributions from galaxies in an all-sky redshift survey, haloes from an N-body simulation hosting low-luminosity objects, and linear and quasi-linear modes of the density field. We use the ranges of these variables to determine the extent to which galaxies expand the scope of genericmore » tests of gravity and are capable of constraining specific classes of model for which they have special significance. In conclusion, we investigate the improvements afforded by upcoming galaxy surveys.« less
Reconstructing the gravitational field of the local Universe
Desmond, Harry; Ferreira, Pedro G.; Lavaux, Guilhem; ...
2017-11-25
Tests of gravity at the galaxy scale are in their infancy. As a first step to systematically uncovering the gravitational significance of galaxies, we map three fundamental gravitational variables – the Newtonian potential, acceleration and curvature – over the galaxy environments of the local Universe to a distance of approximately 200 Mpc. Our method combines the contributions from galaxies in an all-sky redshift survey, haloes from an N-body simulation hosting low-luminosity objects, and linear and quasi-linear modes of the density field. We use the ranges of these variables to determine the extent to which galaxies expand the scope of genericmore » tests of gravity and are capable of constraining specific classes of model for which they have special significance. In conclusion, we investigate the improvements afforded by upcoming galaxy surveys.« less
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques ta predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automatization, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical) eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. ?? 1995 International Association for Mathematical Geology.
NASA Astrophysics Data System (ADS)
Steinel, Anke; Schelkes, Klaus; Subah, Ali; Himmelsbach, Thomas
2016-11-01
In (semi-)arid regions, available water resources are scarce and groundwater resources are often overused. Therefore, the option to increase available water resources by managed aquifer recharge (MAR) via infiltration of captured surface runoff was investigated for two basins in northern Jordan. This study evaluated the general suitability of catchments to generate sufficient runoff and tried to identify promising sites to harvest and infiltrate the runoff into the aquifer for later recovery. Large sets of available data were used to create regional thematic maps, which were then combined to constraint maps using Boolean logic and to create suitability maps using weighted linear combination. This approach might serve as a blueprint which could be adapted and applied to similar regions. The evaluation showed that non-committed source water availability is the most restricting factor for successful water harvesting in regions with <200 mm/a rainfall. Experiences with existing structures showed that sediment loads of runoff are high. Therefore, the effectiveness of any existing MAR scheme will decrease rapidly to the point where it results in an overall negative impact due to increased evaporation if maintenance is not undertaken. It is recommended to improve system operation and maintenance, as well as monitoring, in order to achieve a better and constant effectiveness of the infiltration activities.
NASA Astrophysics Data System (ADS)
Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus
2018-05-01
Selection of vegetation indices in plant mapping is needed to provide the best information of plant conditions. The methods used in this research are the standard deviation and the linear regression. This research tried to determine the vegetation indices used for mapping the sugarcane conditions around oil and gas fields. The data used in this study is Landsat 8 OLI/TIRS. The standard deviation analysis on the 23 vegetation indices with 27 samples has resulted in the six highest standard deviations of vegetation indices, termed as GRVI, SR, NLI, SIPI, GEMI and LAI. The standard deviation values are 0.47; 0.43; 0.30; 0.17; 0.16 and 0.13. Regression correlation analysis on the 23 vegetation indices with 280 samples has resulted in the six vegetation indices, termed as NDVI, ENDVI, GDVI, VARI, LAI and SIPI. This was performed based on regression correlation with the lowest value R2 than 0,8. The combined analysis of the standard deviation and the regression correlation has obtained the five vegetation indices, termed as NDVI, ENDVI, GDVI, LAI and SIPI. The results of the analysis of both methods show that a combination of two methods needs to be done to produce a good analysis of sugarcane conditions. It has been clarified through field surveys and showed good results for the prediction of microseepages.
Wu, Yao; Yang, Wei; Lu, Lijun; Lu, Zhentai; Zhong, Liming; Huang, Meiyan; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan
2016-10-01
Attenuation correction is important for PET reconstruction. In PET/MR, MR intensities are not directly related to attenuation coefficients that are needed in PET imaging. The attenuation coefficient map can be derived from CT images. Therefore, prediction of CT substitutes from MR images is desired for attenuation correction in PET/MR. This study presents a patch-based method for CT prediction from MR images, generating attenuation maps for PET reconstruction. Because no global relation exists between MR and CT intensities, we propose local diffeomorphic mapping (LDM) for CT prediction. In LDM, we assume that MR and CT patches are located on 2 nonlinear manifolds, and the mapping from the MR manifold to the CT manifold approximates a diffeomorphism under a local constraint. Locality is important in LDM and is constrained by the following techniques. The first is local dictionary construction, wherein, for each patch in the testing MR image, a local search window is used to extract patches from training MR/CT pairs to construct MR and CT dictionaries. The k-nearest neighbors and an outlier detection strategy are then used to constrain the locality in MR and CT dictionaries. Second is local linear representation, wherein, local anchor embedding is used to solve MR dictionary coefficients when representing the MR testing sample. Under these local constraints, dictionary coefficients are linearly transferred from the MR manifold to the CT manifold and used to combine CT training samples to generate CT predictions. Our dataset contains 13 healthy subjects, each with T1- and T2-weighted MR and CT brain images. This method provides CT predictions with a mean absolute error of 110.1 Hounsfield units, Pearson linear correlation of 0.82, peak signal-to-noise ratio of 24.81 dB, and Dice in bone regions of 0.84 as compared with real CTs. CT substitute-based PET reconstruction has a regression slope of 1.0084 and R 2 of 0.9903 compared with real CT-based PET. In this method, no image segmentation or accurate registration is required. Our method demonstrates superior performance in CT prediction and PET reconstruction compared with competing methods. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Liu, Guo-hua; Rajendran, Narasimmalu; Amemiya, Takashi; Itoh, Kiminori
2011-11-01
A rapid approach based on two-dimensional DNA gel electrophroesis (2-DGE) mapping with selective primer pairs was employed to analyze bacterial community structure in sediments from upstream, midstream and downstream of Sagami River in Japan. The 2-DGE maps indicated that Alpha- and Delta-proteobacteria were major bacterial populations in the upstream and midstream sediments. Further bacterial community structure analysis showed that richness proportion of Alpha- and Delta-proteobacterial groups reflected a trend toward decreasing from the upstream to downstream sediments. The biomass proportion of bacterial populations in the midstream sediment showed a significantly difference from that in the other sediments, suggesting that there may be an environmental pressure on the midstream bacterial community. Lorenz curves, together with Gini coefficients were successfully applied to the 2-DGE mapping data for resolving evenness of bacterial populations, and showed that the plotted curve from high-resolution 2-DGE mapping became less linear and more an exponential function than that of the 1-DGE methods such as chain length analysis and denaturing gradient gel electrophoresis, suggesting that the 2-DGE mapping may achieve a more detailed evaluation of bacterial community. In conclusion, the 2-DGE mapping combined with the selective primer pairs enables bacterial community structure analysis in river sediment and thus it can also monitor sediment pollution based on the change of bacterial community structure.
Contrast Transmission In Medical Image Display
NASA Astrophysics Data System (ADS)
Pizer, Stephen M.; Zimmerman, John B.; Johnston, R. Eugene
1982-11-01
The display of medical images involves transforming recorded intensities such at CT numbers into perceivable intensities such as combinations of color and luminance. For the viewer to extract the most information about patterns of decreasing and increasing recorded intensity, the display designer must pay attention to three issues: 1) choice of display scale, including its discretization; 2) correction for variations in contrast sensitivity across the display scale due to the observer and the display device (producing an honest display); and 3) contrast enhancement based on the information in the recorded image and its importance, determined by viewing objectives. This paper will present concepts and approaches in all three of these areas. In choosing display scales three properties are important: sensitivity, associability, and naturalness of order. The unit of just noticeable difference (jnd) will be carefully defined. An observer experiment to measure the jnd values across a display scale will be specified. The overall sensitivity provided by a scale as measured in jnd's gives a measure of sensitivity called the perceived dynamic range (PDR). Methods for determining the PDR fran the aforementioned PDR values, and PDR's for various grey and pseudocolor scales will be presented. Methods of achieving sensitivity while retaining associability and naturalness of order with pseudocolor scales will be suggested. For any display device and scale it is useful to compensate for the device and observer by preceding the device with an intensity mapping (lookup table) chosen so that perceived intensity is linear with display-driving intensity. This mapping can be determined from the aforementioned jnd values. With a linearized display it is possible to standardize display devices so that the same image displayed on different devices or scales (e.g. video and hard copy) will be in sane sense perceptually equivalent. Furthermore, with a linearized display, it is possible to design contrast enhancement mappings that optimize the transmission of information from the recorded image to the display-driving signal with the assurance that this information will not then be lost by a -further nonlinear relation between display-driving and perceived intensity. It is suggested that optimal contrast enhancement mappings are adaptive to the local distribution of recorded intensities.
Image enhancement by non-linear extrapolation in frequency space
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)
1998-01-01
An input image is enhanced to include spatial frequency components having frequencies higher than those in an input image. To this end, an edge map is generated from the input image using a high band pass filtering technique. An enhancing map is subsequently generated from the edge map, with the enhanced map having spatial frequencies exceeding an initial maximum spatial frequency of the input image. The enhanced map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhanced map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six{component force balance is used to illustrate the application of the new test to real-world data.
Rowan, L.C.; Trautwein, C.M.; Purdy, T.L.
1990-01-01
This study was undertaken as part of the Conterminous U.S. Mineral Assessment Program (CUSMAP). The purpose of the study was to map linear features on Landsat Multispectral Scanner (MSS) images and a proprietary side-looking airborne radar (SLAR) image mosaic and to determine the spatial relationship between these linear features and the locations of metallic mineral occurrE-nces. The results show a close spatial association of linear features with metallic mineral occurrences in parts of the quadrangle, but in other areas the association is less well defined. Linear features are defined as distinct linear and slightly curvilinear elements mappable on MSS and SLAR images. The features generally represent linear segments of streams, ridges, and terminations of topographic features; however, they may also represent tonal patterns that are related to variations in lithology and vegetation. Most linear features in the Butte quadrangle probably represent underlying structural elements, such as fractures (with and without displacement), dikes, and alignment of fold axes. However, in areas underlain by sedimentary rocks, some of the linear features may reflect bedding traces. This report describes the geologic setting of the Butte quadrangle, the procedures used in mapping and analyzing the linear features, and the results of the study. Relationship of these features to placer and non-metal deposits were not analyzed in this study and are not discussed in this report.
Gravitons as Embroidery on the Weave
NASA Astrophysics Data System (ADS)
Iwasaki, Junichi; Rovelli, Carlo
We investigate the physical interpretation of the loop states that appear in the loop representation of quantum gravity. By utilizing the “weave” state, which has been recently introduced as a quantum description of the microstructure of flat space, we analyze the relation between loop states and graviton states. This relation determines a linear map M from the state-space of the nonperturbative theory (loop space) into the state-space of the linearized theory (Fock space). We present an explicit form of this map, and a preliminary investigation of its properties. The existence of such a map indicates that the full nonperturbative quantum theory includes a sector that describes the same physics as (the low energy regimes of) the linearized theory, namely gravitons on flat space.
Using fuzzy logic analysis for siting decisions of infiltration trenches for highway runoff control.
Ki, Seo Jin; Ray, Chittaranjan
2014-09-15
Determining optimal locations for best management practices (BMPs), including their field considerations and limitations, plays an important role for effective stormwater management. However, these issues have been often overlooked in modeling studies that focused on downstream water quality benefits. This study illustrates the methodology of locating infiltration trenches at suitable locations from spatial overlay analyses which combine multiple layers that address different aspects of field application into a composite map. Using seven thematic layers for each analysis, fuzzy logic was employed to develop a site suitability map for infiltration trenches, whereas the DRASTIC method was used to produce a groundwater vulnerability map on the island of Oahu, Hawaii, USA. In addition, the analytic hierarchy process (AHP), one of the most popular overlay analyses, was used for comparison to fuzzy logic. The results showed that the AHP and fuzzy logic methods developed significantly different index maps in terms of best locations and suitability scores. Specifically, the AHP method provided a maximum level of site suitability due to its inherent aggregation approach of all input layers in a linear equation. The most eligible areas in locating infiltration trenches were determined from the superposition of the site suitability and groundwater vulnerability maps using the fuzzy AND operator. The resulting map successfully balanced qualification criteria for a low risk of groundwater contamination and the best BMP site selection. The results of the sensitivity analysis showed that the suitability scores were strongly affected by the algorithms embedded in fuzzy logic; therefore, caution is recommended with their use in overlay analysis. Accordingly, this study demonstrates that the fuzzy logic analysis can not only be used to improve spatial decision quality along with other overlay approaches, but also is combined with general water quality models for initial and refined searches for the best locations of BMPs at the sub-basin level. Copyright © 2014. Published by Elsevier B.V.
Dust remobilization in fusion plasmas under steady state conditions
NASA Astrophysics Data System (ADS)
Tolias, P.; Ratynskaia, S.; De Angeli, M.; De Temmerman, G.; Ripamonti, D.; Riva, G.; Bykov, I.; Shalpegin, A.; Vignitchouk, L.; Brochard, F.; Bystrov, K.; Bardin, S.; Litnovsky, A.
2016-02-01
The first combined experimental and theoretical studies of dust remobilization by plasma forces are reported. The main theoretical aspects of remobilization in fusion devices under steady state conditions are analyzed. In particular, the dominant role of adhesive forces is highlighted and generic remobilization conditions—direct lift-up, sliding, rolling—are formulated. A novel experimental technique is proposed, based on controlled adhesion of dust grains on tungsten samples combined with detailed mapping of the dust deposition profile prior and post plasma exposure. Proof-of-principle experiments in the TEXTOR tokamak and the EXTRAP-T2R reversed-field pinch are presented. The versatile environment of the linear device Pilot-PSI allowed for experiments with different magnetic field topologies and varying plasma conditions that were complemented with camera observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casarini, L.; Bonometto, S.A.; Tessarotto, E.
2016-08-01
We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale factor dependent equation of state of the form w = w {sub 0}+(1- a ) w {sub a} . The extension is based on the mapping rule between non-linear spectra of DE models with constant equation of state and those with time varying one originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyotemore » suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with w {sub 0}- w {sub a} parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, that can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.« less
Steinbach, Gábor; Pomozi, István; Zsiros, Ottó; Páy, Anikó; Horváth, Gábor V; Garab, Gyozo
2008-03-01
Anisotropy carries important information on the molecular organization of biological samples. Its determination requires a combination of microscopy and polarization spectroscopy tools. The authors constructed differential polarization (DP) attachments to a laser scanning microscope in order to determine physical quantities related to the anisotropic distribution of molecules in microscopic samples; here the authors focus on fluorescence-detected linear dichroism (FDLD). By modulating the linear polarization of the laser beam between two orthogonally polarized states and by using a demodulation circuit, the authors determine the associated transmitted and fluorescence intensity-difference signals, which serve the basis for LD (linear dichroism) and FDLD, respectively. The authors demonstrate on sections of Convallaria majalis root tissue stained with Acridin Orange that while (nonconfocal) LD images remain smeared and weak, FDLD images recorded in confocal mode reveal strong anisotropy of the cell wall. FDLD imaging is suitable for mapping the anisotropic distribution of transition dipoles in 3 dimensions. A mathematical model is proposed to account for the fiber-laminate ultrastructure of the cell wall and for the intercalation of the dye molecules in complex, highly anisotropic architecture. Copyright 2007 International Society for Analytical Cytology.
kruX: matrix-based non-parametric eQTL discovery
2014-01-01
Background The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. Results We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. KruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. Conclusion kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com. PMID:24423115
Linear combination fitting results for lead speciation in amended soils
Table listing the location, amendment type, distribution (percentage) of lead phases identified, and fitting error (R-factor). BM=bone meal, FB=fish bone, DAP=diammonium phosphate, MAP=monoammonium phosphate, TSP=triple super phosphate, PL=poultry litterThis dataset is associated with the following publication:Obrycki, J., N. Basta, K. Scheckel , B. Stevens, and K. Minca. Phosphorus Amendment Efficacy for In Situ Remediation of Soil Lead Depends on the Bioaccessible Method. Elizabeth Guertal, David Myroid, and C. Wayne Smith JOURNAL OF ENVIRONMENTAL QUALITY. American Society of Agronomy, MADISON, WI, USA, 45(1): 37-44, (2016).
Stochastic series expansion simulation of the t -V model
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A
2005-04-15
A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole protein) and local (one or more amino acid) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid level biochemical descriptors are based on the calculation of linear maps on Rn[f k(xmi):Rn-->Rn] in canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functional on Rn. That is, the kth total linear indices are linear maps from Rn to the scalar R[f k(xm):Rn-->R]. Thus, the kth total linear indices are calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near wild-type stability alanine mutants from the reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test set, respectively. It shows a high Matthews correlation coefficient (MCC=0.952) for the training set and an MCC=0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (Rcanc=0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the linear piecewise regression model compared favorably with respect to the linear regression one on predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R=0.90 and s=4.29) and the LOO press statistics evidenced its predictive ability (q2=0.72 and scv=4.79). Moreover, the TOMOCOMD-CAMPS method produced a linear piecewise regression (R=0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 degrees C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, we can use the linear discriminant analysis and piecewise models in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted the interpretation of the driving forces of such folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.
Semi-blind Bayesian inference of CMB map and power spectrum
NASA Astrophysics Data System (ADS)
Vansyngel, Flavien; Wandelt, Benjamin D.; Cardoso, Jean-François; Benabed, Karim
2016-04-01
We present a new blind formulation of the cosmic microwave background (CMB) inference problem. The approach relies on a phenomenological model of the multifrequency microwave sky without the need for physical models of the individual components. For all-sky and high resolution data, it unifies parts of the analysis that had previously been treated separately such as component separation and power spectrum inference. We describe an efficient sampling scheme that fully explores the component separation uncertainties on the inferred CMB products such as maps and/or power spectra. External information about individual components can be incorporated as a prior giving a flexible way to progressively and continuously introduce physical component separation from a maximally blind approach. We connect our Bayesian formalism to existing approaches such as Commander, spectral mismatch independent component analysis (SMICA), and internal linear combination (ILC), and discuss possible future extensions.
NASA Astrophysics Data System (ADS)
Bessell, Michael S.
2000-08-01
Spectacular colour images have been made by combining CCD images in three different passbands using Adobe Photoshop. These beautiful images highlight a variety of astrophysical phenomena and should be a valuable resource for science education and public awareness of science. The wide field images were obtained at the Siding Spring Observatory (SSO) by mounting a Hasselblad or Nikkor telephoto lens in front of a 2K × 2K CCD. Options of more than 30 degrees or 6 degrees square coverage are produced in a single exposure in this way. Narrow band or broad band filters were placed between lens and CCD enabling deep, linear images in a variety of passbands to be obtained. We have mapped the LMC and SMC and are mapping the Galactic Plane for comparison with the Molonglo Radio Survey. Higher resolution images have also been made with the 40 inch telescope of galaxies and star forming regions in the Milky Way.
System, method, and apparatus for remote measurement of terrestrial biomass
Johnson, Patrick W [Jefferson, MD
2011-04-12
A system, method, and/or apparatus for remote measurement of terrestrial biomass contained in vegetative elements, such as large tree boles or trunks present in an area of interest, are provided. The method includes providing an airborne VHF radar system in combination with a LiDAR system, overflying the area of interest while directing energy toward the area of interest, using the VHF radar system to collect backscatter data from the trees as a function of incidence angle and frequency, and determining a magnitude of the biomass from the backscatter data and data from the laser radar system for each radar resolution cell. A biomass map is generated showing the magnitude of the biomass of the vegetative elements as a function of location on the map by using each resolution cell as a unique location thereon. In certain preferred embodiments, a single frequency is used with a linear array antenna.
Frequential versus spatial colour textons for breast TMA classification.
Fernández-Carrobles, M Milagro; Bueno, Gloria; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial; Gonzández-López, Lucía
2015-06-01
Advances in digital pathology are generating huge volumes of whole slide (WSI) and tissue microarray images (TMA) which are providing new insights into the causes of cancer. The challenge is to extract and process effectively all the information in order to characterize all the heterogeneous tissue-derived data. This study aims to identify an optimal set of features that best separates different classes in breast TMA. These classes are: stroma, adipose tissue, benign and benign anomalous structures and ductal and lobular carcinomas. To this end, we propose an exhaustive assessment on the utility of textons and colour for automatic classification of breast TMA. Frequential and spatial texton maps from eight different colour models were extracted and compared. Then, in a novel way, the TMA is characterized by the 1st and 2nd order Haralick statistical descriptors obtained from the texton maps with a total of 241 × 8 features for each original RGB image. Subsequently, a feature selection process is performed to remove redundant information and therefore to reduce the dimensionality of the feature vector. Three methods were evaluated: linear discriminant analysis, correlation and sequential forward search. Finally, an extended bank of classifiers composed of six techniques was compared, but only three of them could significantly improve accuracy rates: Fisher, Bagging Trees and AdaBoost. Our results reveal that the combination of different colour models applied to spatial texton maps provides the most efficient representation of the breast TMA. Specifically, we found that the best colour model combination is Hb, Luv and SCT for all classifiers and the classifier that performs best for all colour model combinations is the AdaBoost. On a database comprising 628 TMA images, classification yields an accuracy of 98.1% and a precision of 96.2% with a total of 316 features on spatial textons maps. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sztáray, Bálint; Voronova, Krisztina; Torma, Krisztián G.; Covert, Kyle J.; Bodi, Andras; Hemberger, Patrick; Gerber, Thomas; Osborn, David L.
2017-07-01
Photoelectron photoion coincidence (PEPICO) spectroscopy could become a powerful tool for the time-resolved study of multi-channel gas phase chemical reactions. Toward this goal, we have designed and tested electron and ion optics that form the core of a new PEPICO spectrometer, utilizing simultaneous velocity map imaging for both cations and electrons, while also achieving good cation mass resolution through space focusing. These optics are combined with a side-sampled, slow-flow chemical reactor for photolytic initiation of gas-phase chemical reactions. Together with a recent advance that dramatically increases the dynamic range in PEPICO spectroscopy [D. L. Osborn et al., J. Chem. Phys. 145, 164202 (2016)], the design described here demonstrates a complete prototype spectrometer and reactor interface to carry out time-resolved experiments. Combining dual velocity map imaging with cation space focusing yields tightly focused photoion images for translationally cold neutrals, while offering good mass resolution for thermal samples as well. The flexible optics design incorporates linear electric fields in the ionization region, surrounded by dual curved electric fields for velocity map imaging of ions and electrons. Furthermore, the design allows for a long extraction stage, which makes this the first PEPICO experiment to combine ion imaging with the unimolecular dissociation rate constant measurements of cations to detect and account for kinetic shifts. Four examples are shown to illustrate some capabilities of this new design. We recorded the threshold photoelectron spectrum of the propargyl and the iodomethyl radicals. While the former agrees well with a literature threshold photoelectron spectrum, we have succeeded in resolving the previously unobserved vibrational structure in the latter. We have also measured the bimolecular rate constant of the CH2I + O2 reaction and observed its product, the smallest Criegee intermediate, CH2OO. Finally, the second dissociative photoionization step of iodocyclohexane ions, the loss of ethylene from the cyclohexyl cation, is slow at threshold, as illustrated by the asymmetric threshold photoionization time-of-flight distributions.
NASA Astrophysics Data System (ADS)
Siudzińska, Katarzyna; Chruściński, Dariusz
2018-03-01
In matrix algebras, we introduce a class of linear maps that are irreducibly covariant with respect to the finite group generated by the Weyl operators. In particular, we analyze the irreducibly covariant quantum channels, that is, the completely positive and trace-preserving linear maps. Interestingly, imposing additional symmetries leads to the so-called generalized Pauli channels, which were recently considered in the context of the non-Markovian quantum evolution. Finally, we provide examples of irreducibly covariant positive but not necessarily completely positive maps.
Data-driven discovery of Koopman eigenfunctions using deep learning
NASA Astrophysics Data System (ADS)
Lusch, Bethany; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
Koopman operator theory transforms any autonomous non-linear dynamical system into an infinite-dimensional linear system. Since linear systems are well-understood, a mapping of non-linear dynamics to linear dynamics provides a powerful approach to understanding and controlling fluid flows. However, finding the correct change of variables remains an open challenge. We present a strategy to discover an approximate mapping using deep learning. Our neural networks find this change of variables, its inverse, and a finite-dimensional linear dynamical system defined on the new variables. Our method is completely data-driven and only requires measurements of the system, i.e. it does not require derivatives or knowledge of the governing equations. We find a minimal set of approximate Koopman eigenfunctions that are sufficient to reconstruct and advance the system to future states. We demonstrate the method on several dynamical systems.
Jay M. Ver Hoef; Hailemariam Temesgen; Sergio Gómez
2013-01-01
Forest surveys provide critical information for many diverse interests. Data are often collected from samples, and from these samples, maps of resources and estimates of aerial totals or averages are required. In this paper, two approaches for mapping and estimating totals; the spatial linear model (SLM) and k-NN (k-Nearest Neighbor) are compared, theoretically,...
A general number-to-space mapping deficit in developmental dyscalculia.
Huber, S; Sury, D; Moeller, K; Rubinsten, O; Nuerk, H-C
2015-01-01
Previous research on developmental dyscalculia (DD) suggested that deficits in the number line estimation task are related to a failure to represent number magnitude linearly. This conclusion was derived from the observation of logarithmically shaped estimation patterns. However, recent research questioned this idea of an isomorphic relationship between estimation patterns and number magnitude representation. In the present study, we evaluated an alternative hypothesis: impairments in the number line estimation task are due to a general deficit in mapping numbers onto space. Adults with DD and a matched control group had to learn linear and non-linear layouts of the number line via feedback. Afterwards, we assessed their performance how well they learnt the new number-space mappings. We found irrespective of the layouts worse performance of adults with DD. Additionally, in case of the linear layout, we observed that their performance did not differ from controls near reference points, but that differences between groups increased as the distance to reference point increased. We conclude that worse performance of adults with DD in the number line task might be due a deficit in mapping numbers onto space which can be partly overcome relying on reference points. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hierarchical tone mapping for high dynamic range image visualization
NASA Astrophysics Data System (ADS)
Qiu, Guoping; Duan, Jiang
2005-07-01
In this paper, we present a computationally efficient, practically easy to use tone mapping techniques for the visualization of high dynamic range (HDR) images in low dynamic range (LDR) reproduction devices. The new method, termed hierarchical nonlinear linear (HNL) tone-mapping operator maps the pixels in two hierarchical steps. The first step allocates appropriate numbers of LDR display levels to different HDR intensity intervals according to the pixel densities of the intervals. The second step linearly maps the HDR intensity intervals to theirs allocated LDR display levels. In the developed HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that our new operators can be used for the effective enhancement of ordinary images.
NASA Astrophysics Data System (ADS)
Salkin, Louis; Courbin, Laurent; Panizza, Pascal
2012-09-01
Combining experiments and theory, we investigate the break-up dynamics of deformable objects, such as drops and bubbles, against a linear micro-obstacle. Our experiments bring the role of the viscosity contrast Δη between dispersed and continuous phases to light: the evolution of the critical capillary number to break a drop as a function of its size is either nonmonotonic (Δη>0) or monotonic (Δη≤0). In the case of positive viscosity contrasts, experiments and modeling reveal the existence of an unexpected critical object size for which the critical capillary number for breakup is minimum. Using simple physical arguments, we derive a model that well describes observations, provides diagrams mapping the four hydrodynamic regimes identified experimentally, and demonstrates that the critical size originating from confinement solely depends on geometrical parameters of the obstacle.
NASA Technical Reports Server (NTRS)
Dzurisin, D.
1977-01-01
Volcanic and tectonic implications of the surface morphology of Mercury are discussed. Mercurian scarps, ridges, troughs, and other lineaments are described and classified as planimetrically linear, arcuate, lobate, or irregular. A global pattern of lineaments is interpreted to reflect modification of linear crustal joints formed in response to stresses induced by tidal spindown. Large arcuate scarps on Mercury most likely record a period of compressional tectonism near the end of heavy bombardment. Shrinkage owing to planetary cooling is the mechanism preferred for their production. Measurements of local normal albedo are combined with computer-generated photometric maps of Mercury to provide constraints on the nature of surface materials and processes. If the mercurian surface obeys the average lunar photometric function, its normal albedo at 554 nm is .16 + or - .03.
NASA Astrophysics Data System (ADS)
Sudevan, Vipin; Aluri, Pavan K.; Yadav, Sarvesh Kumar; Saha, Rajib; Souradeep, Tarun
2017-06-01
We report an improved technique for diffuse foreground minimization from Cosmic Microwave Background (CMB) maps using a new multiphase iterative harmonic space internal-linear-combination (HILC) approach. Our method nullifies a foreground leakage that was present in the old and usual iterative HILC method. In phase 1 of the multiphase technique, we obtain an initial cleaned map using the single iteration HILC approach over the desired portion of the sky. In phase 2, we obtain a final CMB map using the iterative HILC approach; however, now, to nullify the leakage, during each iteration, some of the regions of the sky that are not being cleaned in the current iteration are replaced by the corresponding cleaned portions of the phase 1 map. We bring all input frequency maps to a common and maximum possible beam and pixel resolution at the beginning of the analysis, which significantly reduces data redundancy, memory usage, and computational cost, and avoids, during the HILC weight calculation, the deconvolution of partial sky harmonic coefficients by the azimuthally symmetric beam and pixel window functions, which in a strict mathematical sense, are not well defined. Using WMAP 9 year and Planck 2015 frequency maps, we obtain foreground-cleaned CMB maps and a CMB angular power spectrum for the multipole range 2≤slant {\\ell }≤slant 2500. Our power spectrum matches the published Planck results with some differences at different multipole ranges. We validate our method by performing Monte Carlo simulations. Finally, we show that the weights for HILC foreground minimization have the intrinsic characteristic that they also tend to produce a statistically isotropic CMB map.
NASA Astrophysics Data System (ADS)
Bilim, Funda; Kosaroglu, Sinan; Aydemir, Attila; Buyuksarac, Aydin
2017-12-01
In this study, curie point depth (CPD), heat flow, geothermal gradient, and radiogenic heat production maps of the Cappadocian region in central Anatolia are presented to reveal the thermal structure from the aeromagnetic data. The large, circular pattern in these maps matches with previously determined shallow (2 km in average) depression. Estimated CPDs in this depression filled with loose volcano-clastics and ignimbrite sheets of continental Neogene units vary from 7 to 12 km, while the geothermal gradient increases from 50 to 68 °C/km. Heat flows were calculated using two different conductivity coefficients of 2.3 and 2.7 Wm-1 K-1. The radiogenic heat production was also obtained between 0.45 and 0.70 μW m-3 in this area. Heat-flow maps were compared with the previous, regional heat-flow map of Turkey and significant differences were observed. In contrast to linear heat-flow increment through the northeast in the previous map in the literature, produced maps in this study include a large, caldera-like circular depression between Nevsehir, Aksaray, Nigde, and Yesilhisar cities indicating high geothermal gradient and higher heat-flow values. In addition, active deformation is evident with young magmatism in the Neogene and Quaternary times and a large volcanic cover on the surface. Boundaries of volcanic eruption centers and buried large intrusions are surrounded with the maxspots of the horizontal gradients of magnetic anomalies. Analytic signal (AS) map pointing-out exact locations of causative bodies is also presented in this study. Circular region in the combined map of AS and maxspots apparently indicates a possible caldera.
Landscape scale mapping of forest inventory data by nearest neighbor classification
Andrew Lister
2009-01-01
One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...
Thayer, Edward C.; Olson, Maynard V.; Karp, Richard M.
1999-01-01
Genetic and physical maps display the relative positions of objects or markers occurring within a target DNA molecule. In constructing maps, the primary objective is to determine the ordering of these objects. A further objective is to assign a coordinate to each object, indicating its distance from a reference end of the target molecule. This paper describes a computational method and a body of software for assigning coordinates to map objects, given a solution or partial solution to the ordering problem. We describe our method in the context of multiple–complete–digest (MCD) mapping, but it should be applicable to a variety of other mapping problems. Because of errors in the data or insufficient clone coverage to uniquely identify the true ordering of the map objects, a partial ordering is typically the best one can hope for. Once a partial ordering has been established, one often seeks to overlay a metric along the map to assess the distances between the map objects. This problem often proves intractable because of data errors such as erroneous local length measurements (e.g., large clone lengths on low-resolution physical maps). We present a solution to the coordinate assignment problem for MCD restriction-fragment mapping, in which a coordinated set of single-enzyme restriction maps are simultaneously constructed. We show that the coordinate assignment problem can be expressed as the solution of a system of linear constraints. If the linear system is free of inconsistencies, it can be solved using the standard Bellman–Ford algorithm. In the more typical case where the system is inconsistent, our program perturbs it to find a new consistent system of linear constraints, close to those of the given inconsistent system, using a modified Bellman–Ford algorithm. Examples are provided of simple map inconsistencies and the methods by which our program detects candidate data errors and directs the user to potential suspect regions of the map. PMID:9927487
NASA Astrophysics Data System (ADS)
Chu, Hone-Jay; Kong, Shish-Jeng; Chang, Chih-Hua
2018-03-01
The turbidity (TB) of a water body varies with time and space. Water quality is traditionally estimated via linear regression based on satellite images. However, estimating and mapping water quality require a spatio-temporal nonstationary model, while TB mapping necessitates the use of geographically and temporally weighted regression (GTWR) and geographically weighted regression (GWR) models, both of which are more precise than linear regression. Given the temporal nonstationary models for mapping water quality, GTWR offers the best option for estimating regional water quality. Compared with GWR, GTWR provides highly reliable information for water quality mapping, boasts a relatively high goodness of fit, improves the explanation of variance from 44% to 87%, and shows a sufficient space-time explanatory power. The seasonal patterns of TB and the main spatial patterns of TB variability can be identified using the estimated TB maps from GTWR and by conducting an empirical orthogonal function (EOF) analysis.
Galeano, Carlos H.; Fernandez, Andrea C.; Franco-Herrera, Natalia; Cichy, Karen A.; McClean, Phillip E.; Vanderleyden, Jos; Blair, Matthew W.
2011-01-01
Map-based cloning and fine mapping to find genes of interest and marker assisted selection (MAS) requires good genetic maps with reproducible markers. In this study, we saturated the linkage map of the intra-gene pool population of common bean DOR364×BAT477 (DB) by evaluating 2,706 molecular markers including SSR, SNP, and gene-based markers. On average the polymorphism rate was 7.7% due to the narrow genetic base between the parents. The DB linkage map consisted of 291 markers with a total map length of 1,788 cM. A consensus map was built using the core mapping populations derived from inter-gene pool crosses: DOR364×G19833 (DG) and BAT93×JALO EEP558 (BJ). The consensus map consisted of a total of 1,010 markers mapped, with a total map length of 2,041 cM across 11 linkage groups. On average, each linkage group on the consensus map contained 91 markers of which 83% were single copy markers. Finally, a synteny analysis was carried out using our highly saturated consensus maps compared with the soybean pseudo-chromosome assembly. A total of 772 marker sequences were compared with the soybean genome. A total of 44 syntenic blocks were identified. The linkage group Pv6 presented the most diverse pattern of synteny with seven syntenic blocks, and Pv9 showed the most consistent relations with soybean with just two syntenic blocks. Additionally, a co-linear analysis using common bean transcript map information against soybean coding sequences (CDS) revealed the relationship with 787 soybean genes. The common bean consensus map has allowed us to map a larger number of markers, to obtain a more complete coverage of the common bean genome. Our results, combined with synteny relationships provide tools to increase marker density in selected genomic regions to identify closely linked polymorphic markers for indirect selection, fine mapping or for positional cloning. PMID:22174773
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
A fruit quality gene map of Prunus
2009-01-01
Background Prunus fruit development, growth, ripening, and senescence includes major biochemical and sensory changes in texture, color, and flavor. The genetic dissection of these complex processes has important applications in crop improvement, to facilitate maximizing and maintaining stone fruit quality from production and processing through to marketing and consumption. Here we present an integrated fruit quality gene map of Prunus containing 133 genes putatively involved in the determination of fruit texture, pigmentation, flavor, and chilling injury resistance. Results A genetic linkage map of 211 markers was constructed for an intraspecific peach (Prunus persica) progeny population, Pop-DG, derived from a canning peach cultivar 'Dr. Davis' and a fresh market cultivar 'Georgia Belle'. The Pop-DG map covered 818 cM of the peach genome and included three morphological markers, 11 ripening candidate genes, 13 cold-responsive genes, 21 novel EST-SSRs from the ChillPeach database, 58 previously reported SSRs, 40 RAFs, 23 SRAPs, 14 IMAs, and 28 accessory markers from candidate gene amplification. The Pop-DG map was co-linear with the Prunus reference T × E map, with 39 SSR markers in common to align the maps. A further 158 markers were bin-mapped to the reference map: 59 ripening candidate genes, 50 cold-responsive genes, and 50 novel EST-SSRs from ChillPeach, with deduced locations in Pop-DG via comparative mapping. Several candidate genes and EST-SSRs co-located with previously reported major trait loci and quantitative trait loci for chilling injury symptoms in Pop-DG. Conclusion The candidate gene approach combined with bin-mapping and availability of a community-recognized reference genetic map provides an efficient means of locating genes of interest in a target genome. We highlight the co-localization of fruit quality candidate genes with previously reported fruit quality QTLs. The fruit quality gene map developed here is a valuable tool for dissecting the genetic architecture of fruit quality traits in Prunus crops. PMID:19995417
Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits
NASA Astrophysics Data System (ADS)
Snider, Ross K.; Arathorn, David W.
2006-05-01
A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.
NASA Astrophysics Data System (ADS)
Auvet, B.; Lidon, B.; Kartiwa, B.; Le Bissonnais, Y.; Poussin, J.-C.
2015-09-01
This paper presents an approach to model runoff and erosion risk in a context of data scarcity, whereas the majority of available models require large quantities of physical data that are frequently not accessible. To overcome this problem, our approach uses different sources of data, particularly on agricultural practices (tillage and land cover) and farmers' perceptions of runoff and erosion. The model was developed on a small (5 ha) cultivated watershed characterized by extreme conditions (slopes of up to 55 %, extreme rainfall events) on the Merapi volcano in Indonesia. Runoff was modelled using two versions of STREAM. First, a lumped version was used to determine the global parameters of the watershed. Second, a distributed version used three parameters for the production of runoff (slope, land cover and roughness), a precise DEM, and the position of waterways for runoff distribution. This information was derived from field observations and interviews with farmers. Both surface runoff models accurately reproduced runoff at the outlet. However, the distributed model (Nash-Sutcliffe = 0.94) was more accurate than the adjusted lumped model (N-S = 0.85), especially for the smallest and biggest runoff events, and produced accurate spatial distribution of runoff production and concentration. Different types of erosion processes (landslides, linear inter-ridge erosion, linear erosion in main waterways) were modelled as a combination of a hazard map (the spatial distribution of runoff/infiltration volume provided by the distributed model), and a susceptibility map combining slope, land cover and tillage, derived from in situ observations and interviews with farmers. Each erosion risk map gives a spatial representation of the different erosion processes including risk intensities and frequencies that were validated by the farmers and by in situ observations. Maps of erosion risk confirmed the impact of the concentration of runoff, the high susceptibility of long steep slopes, and revealed the critical role of tillage direction. Calibrating and validating models using in situ measurements, observations and farmers' perceptions made it possible to represent runoff and erosion risk despite the initial scarcity of hydrological data. Even if the models mainly provided orders of magnitude and qualitative information, they significantly improved our understanding of the watershed dynamics. In addition, the information produced by such models is easy for farmers to use to manage runoff and erosion by using appropriate agricultural practices.
Carbon emissions risk map from deforestation in the tropical Amazon
NASA Astrophysics Data System (ADS)
Ometto, J.; Soler, L. S.; Assis, T. D.; Oliveira, P. V.; Aguiar, A. P.
2011-12-01
Assis, Pedro Valle This work aims to estimate the carbon emissions from tropical deforestation in the Brazilian Amazon associated to the risk assessment of future land use change. The emissions are estimated by incorporating temporal deforestation dynamics, accounting for the biophysical and socioeconomic heterogeneity in the region, as well secondary forest growth dynamic in abandoned areas. The land cover change model that supported the risk assessment of deforestation, was run based on linear regressions. This method takes into account spatial heterogeneity of deforestation as the spatial variables adopted to fit the final regression model comprise: environmental aspects, economic attractiveness, accessibility and land tenure structure. After fitting a suitable regression models for each land cover category, the potential of each cell to be deforested (25x25km and 5x5 km of resolution) in the near future was used to calculate the risk assessment of land cover change. The carbon emissions model combines high-resolution new forest clear-cut mapping and four alternative sources of spatial information on biomass distribution for different vegetation types. The risk assessment map of CO2 emissions, was obtained by crossing the simulation results of the historical land cover changes to a map of aboveground biomass contained in the remaining forest. This final map represents the risk of CO2 emissions at 25x25km and 5x5 km until 2020, under a scenario of carbon emission reduction target.
NASA Astrophysics Data System (ADS)
Yang, Jian; He, Yuhong
2017-02-01
Quantifying impervious surfaces in urban and suburban areas is a key step toward a sustainable urban planning and management strategy. With the availability of fine-scale remote sensing imagery, automated mapping of impervious surfaces has attracted growing attention. However, the vast majority of existing studies have selected pixel-based and object-based methods for impervious surface mapping, with few adopting sub-pixel analysis of high spatial resolution imagery. This research makes use of a vegetation-bright impervious-dark impervious linear spectral mixture model to characterize urban and suburban surface components. A WorldView-3 image acquired on May 9th, 2015 is analyzed for its potential in automated unmixing of meaningful surface materials for two urban subsets and one suburban subset in Toronto, ON, Canada. Given the wide distribution of shadows in urban areas, the linear spectral unmixing is implemented in non-shadowed and shadowed areas separately for the two urban subsets. The results indicate that the accuracy of impervious surface mapping in suburban areas reaches up to 86.99%, much higher than the accuracies in urban areas (80.03% and 79.67%). Despite its merits in mapping accuracy and automation, the application of our proposed vegetation-bright impervious-dark impervious model to map impervious surfaces is limited due to the absence of soil component. To further extend the operational transferability of our proposed method, especially for the areas where plenty of bare soils exist during urbanization or reclamation, it is still of great necessity to mask out bare soils by automated classification prior to the implementation of linear spectral unmixing.
Global mapping of Al, Cu, Fe, and Zn in-use stocks and in-ground resources
Rauch, Jason N.
2009-01-01
Human activity has become a significant geomorphic force in modern times, resulting in unprecedented movements of material around Earth. An essential constituent of this material movement, the major industrial metals aluminium, copper, iron, and zinc in the human-built environment are mapped globally at 1-km nominal resolution for the year 2000 and compared with the locations of present-day in-ground resources. While the maps of in-ground resources generated essentially combine available databases, the mapping methodology of in-use stocks relies on the linear regression between gross domestic product and both in-use stock estimates and the Nighttime Lights of the World dataset. As the first global maps of in-use metal stocks, they reveal that a full 25% of the world's Fe, Al, Cu, and Zn in-use deposits are concentrated in three bands: (i) the Eastern seaboard from Washington, D.C. to Boston in the United States, (ii) England, Benelux into Germany and Northern Italy, and (iii) South Korea and Japan. This pattern is consistent across all metals investigated. In contrast, the global maps of primary metal resources reveal these deposits are more evenly distributed between the developed and developing worlds, with the distribution pattern differing depending on the metal. This analysis highlights the magnitude at which in-ground metal resources have been translocated to in-use stocks, largely from highly concentrated but globally dispersed in-ground deposits to more diffuse in-use stocks located primarily in developed urban regions. PMID:19858486
Multimodal Deep Autoencoder for Human Pose Recovery.
Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng
2015-12-01
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieving process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
Moeinaddini, Mazaher; Khorasani, Nematollah; Danehkar, Afshin; Darvishsefat, Ali Asghar; Zienalyan, Mehdi
2010-05-01
Selection of landfill site is a complex process and needs many diverse criteria. The purpose of this paper is to evaluate the suitability of the studied site as landfill for MSW in Karaj. Using weighted linear combination (WLC) method and spatial cluster analysis (SCA), suitable sites for allocation of landfill for a 20-year period were identified. For analyzing spatial auto-correlation of the land suitability map layer (LSML), Maron's I was used. Finally, using the analytical hierarchy process (AHP), the most preferred alternative for the landfill siting was identified. Main advantages of AHP are: relative ease of handling multiple criteria, easy to understand and effective handling of both qualitative and quantitative data. As a result, 6% of the study area is suitable for landfill siting and third alternative was identified as the most preferred for siting MSW landfill by AHP. The ranking of alternatives were obtained only by applying the WLC approach showed different results from the AHP. The WLC should be used only for the identification of alternatives and the AHP is used for prioritization. We suggest the employed procedure for other similar regions. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Mapping variation in radon potential both between and within geological units.
Miles, J C H; Appleton, J D
2005-09-01
Previously, the potential for high radon levels in UK houses has been mapped either on the basis of grouping the results of radon measurements in houses by grid squares or by geological units. In both cases, lognormal modelling of the distribution of radon concentrations was applied to allow the estimated proportion of houses above the UK radon Action Level (AL, 200 Bq m(-3)) to be mapped. This paper describes a method of combining the grid square and geological mapping methods to give more accurate maps than either method can provide separately. The land area is first divided up using a combination of bedrock and superficial geological characteristics derived from digital geological map data. Each different combination of geological characteristics may appear at the land surface in many discontinuous locations across the country. HPA has a database of over 430,000 houses in which long-term measurements of radon concentration have been made, and whose locations are accurately known. Each of these measurements is allocated to the appropriate bedrock--superficial geological combination underlying it. Taking each geological combination in turn, the spatial variation of radon potential is mapped, treating the combination as if it were continuous over the land area. All of the maps of radon potential within different geological combinations are then combined to produce a map of variation in radon potential over the whole land surface.
NASA Astrophysics Data System (ADS)
Murrieta Mendoza, Alejandro
Aircraft reference trajectory is an alternative method to reduce fuel consumption, thus the pollution released to the atmosphere. Fuel consumption reduction is of special importance for two reasons: first, because the aeronautical industry is responsible of 2% of the CO2 released to the atmosphere, and second, because it will reduce the flight cost. The aircraft fuel model was obtained from a numerical performance database which was created and validated by our industrial partner from flight experimental test data. A new methodology using the numerical database was proposed in this thesis to compute the fuel burn for a given trajectory. Weather parameters such as wind and temperature were taken into account as they have an important effect in fuel burn. The open source model used to obtain the weather forecast was provided by Weather Canada. A combination of linear and bi-linear interpolations allowed finding the required weather data. The search space was modelled using different graphs: one graph was used for mapping the different flight phases such as climb, cruise and descent, and another graph was used for mapping the physical space in which the aircraft would perform its flight. The trajectory was optimized in its vertical reference trajectory using the Beam Search algorithm, and a combination of the Beam Search algorithm with a search space reduction technique. The trajectory was optimized simultaneously for the vertical and lateral reference navigation plans while fulfilling a Required Time of Arrival constraint using three different metaheuristic algorithms: the artificial bee's colony, and the ant colony optimization. Results were validated using the software FlightSIMRTM, a commercial Flight Management System, an exhaustive search algorithm, and as flown flights obtained from flightawareRTM. All algorithms were able to reduce the fuel burn, and the flight costs. None None None None None None None
Lin, Zi-Jing; Li, Lin; Cazzell, Mary; Liu, Hanli
2014-08-01
Diffuse optical tomography (DOT) is a variant of functional near infrared spectroscopy and has the capability of mapping or reconstructing three dimensional (3D) hemodynamic changes due to brain activity. Common methods used in DOT image analysis to define brain activation have limitations because the selection of activation period is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine the atlas-guided 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with risk decision-making processes. Risk decision-making is an important cognitive process and thus is an essential topic in the field of neuroscience. The Balloon Analog Risk Task (BART) is a valid experimental model and has been commonly used to assess human risk-taking actions and tendencies while facing risks. We have used the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making from 37 human participants (22 males and 15 females). Voxel-wise GLM analysis was performed after a human brain atlas template and a depth compensation algorithm were combined to form atlas-guided DOT images. In this work, we wish to demonstrate the excellence of using voxel-wise GLM analysis with DOT to image and study cognitive functions in response to risk decision-making. Results have shown significant hemodynamic changes in the dorsal lateral prefrontal cortex (DLPFC) during the active-choice mode and a different activation pattern between genders; these findings correlate well with published literature in functional magnetic resonance imaging (fMRI) and fNIRS studies. Copyright © 2014 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
[Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].
Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong
2015-11-01
With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to conduct the study of estimation for forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis were compared to derive the fractions of different land use and land cover (LULC) types. Then sequential Gaussian co-simulation algorithm with and without the fraction images from spectral mixture analyses were employed to estimate forest carbon density of Hunan Province. Results showed that 1) Linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, more accurately estimated the fractions of LULC types than linear spectral and nonlinear spectral mixture analyses; 2) Integrating spectral mixture analysis model and sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density to 81.5% from 74.1%, and decreased the RMSE to 5.18 from 7.26; and 3) The mean value of forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implied that the spectral mixture analysis provided a great potential to increase the estimation accuracy of forest carbon density on regional and global level.
Evaluation of ERTS imagery for spectral geological mapping in diverse terranes of New York State
NASA Technical Reports Server (NTRS)
Isachsen, Y. W.; Fakundiny, R. H.; Forster, S. W.
1974-01-01
Linear anomalies dominate the new geological information derived from ERTS-1 imagery, total lengths now exceeding 6000 km. Experimentation with a variety of viewing techniques suggests that conventional photogeologic analyses of band 7 results in the location of more than 97 percent of all linears found. The maxima on rose diagrams for ERTS-1 anomalies correspond well with those for mapped faults and topographic lineaments, despite a difference in relative magnitudes of maxima thought due to solar illumination direction. A multiscale analysis of linears showed that single topographic linears at 1:2,500,000 became segmented at 1:1,000,000, aligned zones of shorter parallel, en echelon, or conjugate linears at 1:500,000, and still shorter linears lacking obvious alignment at 1:250,000. Visible glacial features include individual drumlins, best seen in winter imagery, drumlinoids, eskers, ice-marginal drainage channels, glacial lake shorelines and sand plains, and end moraines.
An integrated pan-tropical biomass map using multiple reference datasets.
Avitabile, Valerio; Herold, Martin; Heuvelink, Gerard B M; Lewis, Simon L; Phillips, Oliver L; Asner, Gregory P; Armston, John; Ashton, Peter S; Banin, Lindsay; Bayol, Nicolas; Berry, Nicholas J; Boeckx, Pascal; de Jong, Bernardus H J; DeVries, Ben; Girardin, Cecile A J; Kearsley, Elizabeth; Lindsell, Jeremy A; Lopez-Gonzalez, Gabriela; Lucas, Richard; Malhi, Yadvinder; Morel, Alexandra; Mitchard, Edward T A; Nagy, Laszlo; Qie, Lan; Quinones, Marcela J; Ryan, Casey M; Ferry, Slik J W; Sunderland, Terry; Laurin, Gaia Vaglio; Gatti, Roberto Cazzolla; Valentini, Riccardo; Verbeeck, Hans; Wijaya, Arief; Willcock, Simon
2016-04-01
We combined two existing datasets of vegetation aboveground biomass (AGB) (Proceedings of the National Academy of Sciences of the United States of America, 108, 2011, 9899; Nature Climate Change, 2, 2012, 182) into a pan-tropical AGB map at 1-km resolution using an independent reference dataset of field observations and locally calibrated high-resolution biomass maps, harmonized and upscaled to 14 477 1-km AGB estimates. Our data fusion approach uses bias removal and weighted linear averaging that incorporates and spatializes the biomass patterns indicated by the reference data. The method was applied independently in areas (strata) with homogeneous error patterns of the input (Saatchi and Baccini) maps, which were estimated from the reference data and additional covariates. Based on the fused map, we estimated AGB stock for the tropics (23.4 N-23.4 S) of 375 Pg dry mass, 9-18% lower than the Saatchi and Baccini estimates. The fused map also showed differing spatial patterns of AGB over large areas, with higher AGB density in the dense forest areas in the Congo basin, Eastern Amazon and South-East Asia, and lower values in Central America and in most dry vegetation areas of Africa than either of the input maps. The validation exercise, based on 2118 estimates from the reference dataset not used in the fusion process, showed that the fused map had a RMSE 15-21% lower than that of the input maps and, most importantly, nearly unbiased estimates (mean bias 5 Mg dry mass ha(-1) vs. 21 and 28 Mg ha(-1) for the input maps). The fusion method can be applied at any scale including the policy-relevant national level, where it can provide improved biomass estimates by integrating existing regional biomass maps as input maps and additional, country-specific reference datasets. © 2015 John Wiley & Sons Ltd.
Singh, Ramesh K.; Senay, Gabriel B.; Velpuri, Naga Manohar; Bohms, Stefanie; Verdin, James P.
2014-01-01
Downscaling is one of the important ways of utilizing the combined benefits of the high temporal resolution of Moderate Resolution Imaging Spectroradiometer (MODIS) images and fine spatial resolution of Landsat images. We have evaluated the output regression with intercept method and developed the Linear with Zero Intercept (LinZI) method for downscaling MODIS-based monthly actual evapotranspiration (AET) maps to the Landsat-scale monthly AET maps for the Colorado River Basin for 2010. We used the 8-day MODIS land surface temperature product (MOD11A2) and 328 cloud-free Landsat images for computing AET maps and downscaling. The regression with intercept method does have limitations in downscaling if the slope and intercept are computed over a large area. A good agreement was obtained between downscaled monthly AET using the LinZI method and the eddy covariance measurements from seven flux sites within the Colorado River Basin. The mean bias ranged from −16 mm (underestimation) to 22 mm (overestimation) per month, and the coefficient of determination varied from 0.52 to 0.88. Some discrepancies between measured and downscaled monthly AET at two flux sites were found to be due to the prevailing flux footprint. A reasonable comparison was also obtained between downscaled monthly AET using LinZI method and the gridded FLUXNET dataset. The downscaled monthly AET nicely captured the temporal variation in sampled land cover classes. The proposed LinZI method can be used at finer temporal resolution (such as 8 days) with further evaluation. The proposed downscaling method will be very useful in advancing the application of remotely sensed images in water resources planning and management.
Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework
Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko
2015-01-01
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI) enable measurement of the phase of a physical quantity additionally to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them to a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We tried to overcome these shortcomings by creating novel tile unwrapping and merging algorithms as well as creating a framework that allows to combine them in modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality guided tile merger. These original algorithms as well as previously existing ones were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and test of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework.
Antonopoulos, Georgios C; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko
2015-01-01
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI) enable measurement of the phase of a physical quantity additionally to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them to a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We tried to overcome these shortcomings by creating novel tile unwrapping and merging algorithms as well as creating a framework that allows to combine them in modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality guided tile merger. These original algorithms as well as previously existing ones were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and test of new tile-based phase unwrapping algorithms. The software developed in this study is freely available.
Hashim, Mazlan
2015-01-01
This research presents the results of the GIS-based statistical models for generation of landslide susceptibility mapping using geographic information system (GIS) and remote-sensing data for Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map which has a total of 92 landslide locations was created based on numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purpose. The validation results using the Relative landslide density index (R-index) and Receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy is 96%) is better in prediction than AHP (accuracy is 91%) and WLC (accuracy is 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purpose and regional planning. PMID:25898919
Shahabi, Himan; Hashim, Mazlan
2015-04-22
This research presents the results of the GIS-based statistical models for generation of landslide susceptibility mapping using geographic information system (GIS) and remote-sensing data for Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map which has a total of 92 landslide locations was created based on numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purpose. The validation results using the Relative landslide density index (R-index) and Receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy is 96%) is better in prediction than AHP (accuracy is 91%) and WLC (accuracy is 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purpose and regional planning.
Liu, X; Gorsevski, P V; Yacobucci, M M; Onasch, C M
2016-06-01
Planning of shale gas infrastructure and drilling sites for hydraulic fracturing has important spatial implications. The evaluation of conflicting and competing objectives requires an explicit consideration of multiple criteria as they have important environmental and economic implications. This study presents a web-based multicriteria spatial decision support system (SDSS) prototype with a flexible and user-friendly interface that could provide educational or decision-making capabilities with respect to hydraulic fracturing site selection in eastern Ohio. One of the main features of this SDSS is to emphasize potential trade-offs between important factors of environmental and economic ramifications from hydraulic fracturing activities using a weighted linear combination (WLC) method. In the prototype, the GIS-enabled analytical components allow spontaneous visualization of available alternatives on maps which provide value-added features for decision support processes and derivation of final decision maps. The SDSS prototype also facilitates nonexpert participation capabilities using a mapping module, decision-making tool, group decision module, and social media sharing tools. The logical flow of successively presented forms and standardized criteria maps is used to generate visualization of trade-off scenarios and alternative solutions tailored to individual user's preferences that are graphed for subsequent decision-making.
Hyers-Ulam stability of a generalized Apollonius type quadratic mapping
NASA Astrophysics Data System (ADS)
Park, Chun-Gil; Rassias, Themistocles M.
2006-10-01
Let X,Y be linear spaces. It is shown that if a mapping satisfies the following functional equation: then the mapping is quadratic. We moreover prove the Hyers-Ulam stability of the functional equation (0.1) in Banach spaces.
Higher-dimensional attractors with absolutely continuous invariant probability
NASA Astrophysics Data System (ADS)
Bocker, Carlos; Bortolotti, Ricardo
2018-05-01
Consider a dynamical system given by , where E is a linear expanding map of , C is a linear contracting map of and f is in . We provide sufficient conditions for E that imply the existence of an open set of pairs for which the corresponding dynamic T admits a unique absolutely continuous invariant probability. A geometrical characteristic of transversality between self-intersections of images of is present in the dynamic of the maps in . In addition, we give a condition between E and C under which it is possible to perturb f to obtain a pair in .
Landsat analysis for uranium exploration in Northeast Turkey
Lee, Keenan
1983-01-01
No uranium deposits are known in the Trabzon, Turkey region, and consequently, exploration criteria have not been defined. Nonetheless, by analogy with uranium deposits studied elsewhere, exploration guides are suggested to include dense concentrations of linear features, lineaments -- especially with northwest trend, acidic plutonic rocks, and alteration indicated by limonite. A suite of digitally processed images of a single Landsat scene served as the image base for mapping 3,376 linear features. Analysis of the linear feature data yielded two statistically significant trends, which in turn defined two sets of strong lineaments. Color composite images were used to map acidic plutonic rocks and areas of surficial limonitic materials. The Landsat interpretation yielded a map of these exploration guides that may be used to evaluate relative uranium potential. One area in particular shows a high coincidence of favorable indicators.
The Super-linear Slope of the Spatially Resolved Star Formation Law in NGC 3521 and NGC 5194 (M51a)
NASA Astrophysics Data System (ADS)
Liu, Guilin; Koda, Jin; Calzetti, Daniela; Fukuhara, Masayuki; Momose, Rieko
2011-07-01
We have conducted interferometric observations with the Combined Array for Research in Millimeter Astronomy (CARMA) and an on-the-fly mapping with the 45 m telescope at Nobeyama Radio Observatory (NRO45) in the CO (J = 1-0) emission line of the nearby spiral galaxy NGC 3521. Using the new combined CARMA + NRO45 data of NGC 3521, together with similar data for NGC 5194 (M51a) and archival SINGS Hα, 24 μm THINGS H I, and Galaxy Evolution Explorer/Far-UV (FUV) data for these two galaxies, we investigate the empirical scaling law that connects the surface density of star formation rate (SFR) and cold gas (known as the Schmidt-Kennicutt law or S-K law) on a spatially resolved basis and find a super-linear slope for the S-K law when carefully subtracting the background emissions in the SFR image. We argue that plausibly deriving SFR maps of nearby galaxies requires the diffuse stellar and dust background emission to be subtracted carefully (especially in the mid-infrared and to a lesser extent in the FUV). Applying this approach, we perform a pixel-by-pixel analysis on both galaxies and quantitatively show that the controversial result whether the molecular S-K law (expressed as \\Sigma _SFR\\propto \\Sigma _H_2^{\\gamma _H_2}) is super-linear or basically linear is a result of removing or preserving the local background. In both galaxies, the power index of the molecular S-K law is super-linear (\\gamma _H_2\\gtrsim 1.5) at the highest available resolution (~230 pc) and decreases monotonically for decreasing resolution. We also find in both galaxies that the scatter of the molecular S-K law (\\sigma _H_2) monotonically increases as the resolution becomes higher, indicating a trend for which the S-K law breaks down below some scale. Both \\gamma _H_2 and \\sigma _H_2 are systematically larger in M51a than in NGC 3521, but when plotted against the de-projected scale (δdp), both quantities become highly consistent for the two galaxies, tentatively suggesting that the sub-kpc molecular S-K law in spiral galaxies depends only on the scale being considered, without varying among spiral galaxies. A logarithmic function \\gamma _H_2=-1.1 log [\\delta _dp/kpc]+1.4 and a linear relation \\sigma _H_2=-0.2 [\\delta _dp/kpc]+0.7 are obtained through fitting to the M51a data, which describes both galaxies impressively well on sub-kpc scales. A larger sample of galaxies with better sensitivity, resolution, and broader field of view are required to test the general applicability of these relations.
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Benabed, K.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Carron, J.; Chiang, H. C.; Colombo, L. P. L.; Comis, B.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; de Bernardis, P.; de Zotti, G.; Delabrouille, J.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Dusini, S.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fantaye, Y.; Finelli, F.; Forastieri, F.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; Ghosh, T.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Helou, G.; Henrot-Versillé, S.; Herranz, D.; Hivon, E.; Huang, Z.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Langer, M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Levrier, F.; Lilje, P. B.; Lilley, M.; Lindholm, V.; López-Caniego, M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Matarrese, S.; Mauri, N.; McEwen, J. D.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Natoli, P.; Oxborrow, C. A.; Pagano, L.; Paoletti, D.; Patanchon, G.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Plaszczynski, S.; Polastri, L.; Polenta, G.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Ruiz-Granados, B.; Salvati, L.; Sandri, M.; Savelainen, M.; Scott, D.; Sirignano, C.; Sirri, G.; Soler, J. D.; Spencer, L. D.; Suur-Uski, A.-S.; Tauber, J. A.; Tavagnacco, D.; Tenti, M.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Vittorio, N.; Wandelt, B. D.; Wehus, I. K.; Zacchei, A.; Zonca, A.
2016-12-01
Using the Planck 2015 data release (PR2) temperature maps, we separate Galactic thermal dust emission from cosmic infrared background (CIB) anisotropies. For this purpose, we implement a specifically tailored component-separation method, the so-called generalized needlet internal linear combination (GNILC) method, which uses spatial information (the angular powerspectra) to disentangle the Galactic dust emission and CIB anisotropies. We produce significantly improved all-sky maps of Planck thermal dust emission, with reduced CIB contamination, at 353, 545, and 857 GHz. By reducing the CIB contamination of the thermal dust maps, we provide more accurate estimates of the local dust temperature and dust spectral index over the sky with reduced dispersion, especially at high Galactic latitudes above b = ±20°. We find that the dust temperature is T = (19.4 ± 1.3) K and the dust spectral index is β = 1.6 ± 0.1 averaged over the whole sky, while T = (19.4 ± 1.5) K and β = 1.6 ± 0.2 on 21% of the sky at high latitudes. Moreover, subtracting the new CIB-removed thermal dust maps from the CMB-removed Planck maps gives access to the CIB anisotropies over 60% of the sky at Galactic latitudes |b| > 20°. Because they are a significant improvement over previous Planck products, the GNILC maps are recommended for thermal dust science. The new CIB maps can be regarded as indirect tracers of the dark matter and they are recommended for exploring cross-correlations with lensing and large-scale structure optical surveys. The reconstructed GNILC thermal dust and CIB maps are delivered as Planck products.
Aghanim, N.; Ashdown, M.; Aumont, J.; ...
2016-12-12
Using the Planck 2015 data release (PR2) temperature maps, we separate Galactic thermal dust emission from cosmic infrared background (CIB) anisotropies. For this purpose, we implement a specifically tailored component-separation method, the so-called generalized needlet internal linear combination (GNILC) method, which uses spatial information (the angular powerspectra) to disentangle the Galactic dust emission and CIB anisotropies. We produce significantly improved all-sky maps of Planck thermal dust emission, with reduced CIB contamination, at 353, 545, and 857 GHz. By reducing the CIB contamination of the thermal dust maps, we provide more accurate estimates of the local dust temperature and dust spectralmore » index over the sky with reduced dispersion, especially at high Galactic latitudes above b = ±20°. We find that the dust temperature is T = (19.4 ± 1.3) K and the dust spectral index is β = 1.6 ± 0.1 averaged over the whole sky, while T = (19.4 ± 1.5) K and β = 1.6 ± 0.2 on 21% of the sky at high latitudes. Moreover, subtracting the new CIB-removed thermal dust maps from the CMB-removed Planck maps gives access to the CIB anisotropies over 60% of the sky at Galactic latitudes |b| > 20°. Because they are a significant improvement over previous Planck products, the GNILC maps are recommended for thermal dust science. The new CIB maps can be regarded as indirect tracers of the dark matter and they are recommended for exploring cross-correlations with lensing and large-scale structure optical surveys. The reconstructed GNILC thermal dust and CIB maps are delivered as Planck products.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghanim, N.; Ashdown, M.; Aumont, J.
Using the Planck 2015 data release (PR2) temperature maps, we separate Galactic thermal dust emission from cosmic infrared background (CIB) anisotropies. For this purpose, we implement a specifically tailored component-separation method, the so-called generalized needlet internal linear combination (GNILC) method, which uses spatial information (the angular powerspectra) to disentangle the Galactic dust emission and CIB anisotropies. We produce significantly improved all-sky maps of Planck thermal dust emission, with reduced CIB contamination, at 353, 545, and 857 GHz. By reducing the CIB contamination of the thermal dust maps, we provide more accurate estimates of the local dust temperature and dust spectralmore » index over the sky with reduced dispersion, especially at high Galactic latitudes above b = ±20°. We find that the dust temperature is T = (19.4 ± 1.3) K and the dust spectral index is β = 1.6 ± 0.1 averaged over the whole sky, while T = (19.4 ± 1.5) K and β = 1.6 ± 0.2 on 21% of the sky at high latitudes. Moreover, subtracting the new CIB-removed thermal dust maps from the CMB-removed Planck maps gives access to the CIB anisotropies over 60% of the sky at Galactic latitudes |b| > 20°. Because they are a significant improvement over previous Planck products, the GNILC maps are recommended for thermal dust science. The new CIB maps can be regarded as indirect tracers of the dark matter and they are recommended for exploring cross-correlations with lensing and large-scale structure optical surveys. The reconstructed GNILC thermal dust and CIB maps are delivered as Planck products.« less
HARMONIC IN-PAINTING OF COSMIC MICROWAVE BACKGROUND SKY BY CONSTRAINED GAUSSIAN REALIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jaiseung; Naselsky, Pavel; Mandolesi, Nazzareno, E-mail: jkim@nbi.dk
The presence of astrophysical emissions between the last scattering surface and our vantage point requires us to apply a foreground mask on cosmic microwave background (CMB) sky maps, leading to large cuts around the Galactic equator and numerous holes. Since many CMB analysis, in particular on the largest angular scales, may be performed on a whole-sky map in a more straightforward and reliable manner, it is of utmost importance to develop an efficient method to fill in the masked pixels in a way compliant with the expected statistical properties and the unmasked pixels. In this Letter, we consider the Montemore » Carlo simulation of a constrained Gaussian field and derive it CMB anisotropy in harmonic space, where a feasible implementation is possible with good approximation. We applied our method to simulated data, which shows that our method produces a plausible whole-sky map, given the unmasked pixels, and a theoretical expectation. Subsequently, we applied our method to the Wilkinson Microwave Anisotropy Probe foreground-reduced maps and investigated the anomalous alignment between quadrupole and octupole components. From our investigation, we find that the alignment in the foreground-reduced maps is even higher than the Internal Linear Combination map. We also find that the V-band map has higher alignment than other bands, despite the expectation that the V-band map has less foreground contamination than other bands. Therefore, we find it hard to attribute the alignment to residual foregrounds. Our method will be complementary to other efforts on in-painting or reconstructing the masked CMB data, and of great use to Planck surveyor and future missions.« less
Dual-contrast agent photon-counting computed tomography of the heart: initial experience.
Symons, Rolf; Cork, Tyler E; Lakshmanan, Manu N; Evers, Robert; Davies-Venn, Cynthia; Rice, Kelly A; Thomas, Marvin L; Liu, Chia-Ying; Kappler, Steffen; Ulzheimer, Stefan; Sandfort, Veit; Bluemke, David A; Pourmorteza, Amir
2017-08-01
To determine the feasibility of dual-contrast agent imaging of the heart using photon-counting detector (PCD) computed tomography (CT) to simultaneously assess both first-pass and late enhancement of the myocardium. An occlusion-reperfusion canine model of myocardial infarction was used. Gadolinium-based contrast was injected 10 min prior to PCD CT. Iodinated contrast was infused immediately prior to PCD CT, thus capturing late gadolinium enhancement as well as first-pass iodine enhancement. Gadolinium and iodine maps were calculated using a linear material decomposition technique and compared to single-energy (conventional) images. PCD images were compared to in vivo and ex vivo magnetic resonance imaging (MRI) and histology. For infarct versus remote myocardium, contrast-to-noise ratio (CNR) was maximal on late enhancement gadolinium maps (CNR 9.0 ± 0.8, 6.6 ± 0.7, and 0.4 ± 0.4, p < 0.001 for gadolinium maps, single-energy images, and iodine maps, respectively). For infarct versus blood pool, CNR was maximum for iodine maps (CNR 11.8 ± 1.3, 3.8 ± 1.0, and 1.3 ± 0.4, p < 0.001 for iodine maps, gadolinium maps, and single-energy images, respectively). Combined first-pass iodine and late gadolinium maps allowed quantitative separation of blood pool, scar, and remote myocardium. MRI and histology analysis confirmed accurate PCD CT delineation of scar. Simultaneous multi-contrast agent cardiac imaging is feasible with photon-counting detector CT. These initial proof-of-concept results may provide incentives to develop new k-edge contrast agents, to investigate possible interactions between multiple simultaneously administered contrast agents, and to ultimately bring them to clinical practice.
Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.
O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E
2018-04-26
Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as <0.56 mV (acutely) and <0.62 mV (chronically). Taking the macroscopic gap size as gold standard, error in gap measurements were determined for voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). Error on ablation contact force map gap measurements were significantly less than for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gap formation in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.
Detecting chaos in particle accelerators through the frequency map analysis method.
Papaphilippou, Yannis
2014-06-01
The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects and thereby increasing the region of beam stability plays an essential role during the accelerator design phase but also their operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. The recent developments on the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations but also experimental data.
An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations
NASA Astrophysics Data System (ADS)
Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed
2018-01-01
In this study, we propose a regional ionospheric model (RIM) based on both of the GPS-only and the combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in the zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bi-linear function parameters for a 15-minute time interval. The Kriging interpolation method is used to estimate the VTEC values at a 1 ° × 1 ° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained through the proposed RIMs are computed and compared with those obtained through the international GNSS service global ionospheric maps (IGS-GIM). The results show that the RIMs speed up the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
Remote sensing of oceanic phytoplankton - Present capabilities and future goals
NASA Technical Reports Server (NTRS)
Esaias, W. E.
1980-01-01
A description is given of current work in the development of sensors, and their integration into increasingly powerful systems, for oceanic phytoplankton abundance estimation. Among the problems relevant to such work are phytoplankton ecology, the spatial and temporal domains, available sensor platforms, and sensor combinations. Among the platforms considered are satellites, aircraft, tethered balloons, helicopters, ships, and the Space Shuttle. Sensors discussed include microwave radiometers, laser fluorosensors, microwave scatterometers, multispectral scanners, Coastal Ocean Dynamics Radar (CODAR), and linear array detectors. Consideration is also given to the prospects for such future sensor systems as the National Oceanic Satellite System (NOSS) and the Airborne Integrated Mapping System (AIMS).
Combining Techniques to Refine Item to Skills Q-Matrices with a Partition Tree
ERIC Educational Resources Information Center
Desmarais, Michel C.; Xu, Peng; Beheshti, Behzad
2015-01-01
The problem of mapping items to skills is gaining interest with the emergence of recent techniques that can use data for both defining this mapping, and for refining mappings given by experts. We investigate the problem of refining mapping from an expert by combining the output of different techniques. The combination is based on a partition tree…
Unpacking the Complexity of Linear Equations from a Cognitive Load Theory Perspective
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Phan, Huy P.
2016-01-01
The degree of element interactivity determines the complexity and therefore the intrinsic cognitive load of linear equations. The unpacking of linear equations at the level of operational and relational lines allows the classification of linear equations in a hierarchical level of complexity. Mapping similar operational and relational lines across…
Zero entropy continuous interval maps and MMLS-MMA property
NASA Astrophysics Data System (ADS)
Jiang, Yunping
2018-06-01
We prove that the flow generated by any continuous interval map with zero topological entropy is minimally mean-attractable and minimally mean-L-stable. One of the consequences is that any oscillating sequence is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy. In particular, the Möbius function is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy (Sarnak’s conjecture for continuous interval maps). Another consequence is a non-trivial example of a flow having discrete spectrum. We also define a log-uniform oscillating sequence and show a result in ergodic theory for comparison. This material is based upon work supported by the National Science Foundation. It is also partially supported by a collaboration grant from the Simons Foundation (grant number 523341) and PSC-CUNY awards and a grant from NSFC (grant number 11571122).
Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M
2015-05-01
The objectives are determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA) and to compare model accuracy with that of an experienced radiologist. Multiparameter MRIs in 24 patients before prostatectomy were acquired. Tumour outlines from whole-mount histology, T2-defined peripheral zone (PZ), and central gland (CG) were superimposed onto slice-matched parametric maps. T2, Apparent Diffusion Coefficient, initial area under the gadolinium curve, vascular parameters (K(trans),Kep,Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). Area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90 % specificity, sensitivity was 41 % (MRSI voxel resolution) and 59 % per lesion. At this specificity, an expert observer achieved 28 % and 49 % sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatominal MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayah, N; Weiss, E; Watkins, W
Purpose: To evaluate the dose-mapping error (DME) inherent to conventional dose-mapping algorithms as a function of dose-matrix resolution. Methods: As DME has been reported to be greatest where dose-gradients overlap tissue-density gradients, non-clinical 66 Gy IMRT plans were generated for 11 lung patients with the target edge defined as the maximum 3D density gradient on the 0% (end of inhale) breathing phase. Post-optimization, Beams were copied to 9 breathing phases. Monte Carlo dose computed (with 2*2*2 mm{sup 3} resolution) on all 10 breathing phases was deformably mapped to phase 0% using the Monte Carlo energy-transfer method with congruent mass-mapping (EMCM);more » an externally implemented tri-linear interpolation method with voxel sub-division; Pinnacle’s internal (tri-linear) method; and a post-processing energy-mass voxel-warping method (dTransform). All methods used the same base displacement-vector-field (or it’s pseudo-inverse as appropriate) for the dose mapping. Mapping was also performed at 4*4*4 mm{sup 3} by merging adjacent dose voxels. Results: Using EMCM as the reference standard, no clinically significant (>1 Gy) DMEs were found for the mean lung dose (MLD), lung V20Gy, or esophagus dose-volume indices, although MLD and V20Gy were statistically different (2*2*2 mm{sup 3}). Pinnacle-to-EMCM target D98% DMEs of 4.4 and 1.2 Gy were observed ( 2*2*2 mm{sup 3}). However dTransform, which like EMCM conserves integral dose, had DME >1 Gy for one case. The root mean square RMS of the DME for the tri-linear-to- EMCM methods was lower for the smaller voxel volume for the tumor 4D-D98%, lung V20Gy, and cord D1%. Conclusion: When tissue gradients overlap with dose gradients, organs-at-risk DME was statistically significant but not clinically significant. Target-D98%-DME was deemed clinically significant for 2/11 patients (2*2*2 mm{sup 3}). Since tri-linear RMS-DME between EMCM and tri-linear was reduced at 2*2*2 mm{sup 3}, use of this resolution is recommended for dose mapping. Interpolative dose methods are sufficiently accurate for the majority of cases. J.V. Siebers receives funding support from Varian Medical Systems.« less
Some Applications Of Semigroups And Computer Algebra In Discrete Structures
NASA Astrophysics Data System (ADS)
Bijev, G.
2009-11-01
An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations and matrices corresponding to them are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involves the maps defined and use computer algebra methods, have been made. As a result, the Hamming distance between vectors Ax = p(b) and b is equal or close to the least possible. We also share our experience in using computer algebra systems for teaching discrete mathematics and linear algebra and research. Some examples for computations with binary relations using Maple are given.
Hisatake, Shintaro; Tada, Keiji; Nagatsuma, Tadao
2010-03-01
We demonstrate the generation of an optical frequency comb (OFC) with a Gaussian spectrum using a continuous-wave (CW) laser, based on spatial convolution of a slit and a periodically moving optical beam spot in a linear time-to-space mapping system. A CW optical beam is linearly mapped to a spatial signal using two sinusoidal electro-optic (EO) deflections and an OFC is extracted by inserting a narrow spatial slit in the Fourier-transform plane of a second EO deflector (EOD). The spectral shape of the OFC corresponds to the spatial beam profile in the near-field region of the second EOD, which can be manipulated by a spatial filter without spectral dispersers. In a proof-of-concept experiment, a 16.25-GHz-spaced, 240-GHz-wide Gaussian-envelope OFC (corresponding to 1.8 ps Gaussian pulse generation) was demonstrated.
Course transformation: Content, structure and effectiveness analysis
NASA Astrophysics Data System (ADS)
DuHadway, Linda P.
The organization of learning materials is often limited by the systems available for delivery of such material. Currently, the learning management system (LMS) is widely used to distribute course materials. These systems deliver the material in a text-based, linear way. As online education continues to expand and educators seek to increase their effectiveness by adding more effective active learning strategies, these delivery methods become a limitation. This work demonstrates the possibility of presenting course materials in a graphical way that expresses important relations and provides support for manipulating the order of those materials. The ENABLE system gathers data from an existing course, uses text analysis techniques, graph theory, graph transformation, and a user interface to create and present graphical course maps. These course maps are able to express information not currently available in the LMS. Student agents have been developed to traverse these course maps to identify the variety of possible paths through the material. The temporal relations imposed by the current course delivery methods have been replaced by prerequisite relations that express ordering that provides educational value. Reducing the connections to these more meaningful relations allows more possibilities for change. Technical methods are used to explore and calibrate linear and nonlinear models of learning. These methods are used to track mastery of learning material and identify relative difficulty values. Several probability models are developed and used to demonstrate that data from existing, temporally based courses can be used to make predictions about student success in courses using the same material but organized without the temporal limitations. Combined, these demonstrate the possibility of tools and techniques that can support the implementation of a graphical course map that allows varied paths and provides an enriched, more informative interface between the educator, the student, and the learning material. This fundamental change in how course materials are presented and interfaced with has the potential to make educational opportunities available to a broader spectrum of people with diverse abilities and circumstances. The graphical course map can be pivotal in attaining this transition.
Blind beam-hardening correction from Poisson measurements
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass- attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density- map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
Deciphering groundwater potential zones in hard rock terrain using geospatial technology.
Dar, Imran A; Sankar, K; Dar, Mithas A
2011-02-01
Remote sensing and geographical information system (GIS) has become one of the leading tools in the field of groundwater research, which helps in assessing, monitoring, and conserving groundwater resources. This paper mainly deals with the integrated approach of remote sensing and GIS to delineate groundwater potential zones in hard rock terrain. Digitized vector maps pertaining to chosen parameters, viz. geomorphology, geology, land use/land cover, lineament, relief, and drainage, were converted to raster data using 23 m×23 m grid cell size. Moreover, curvature of the study area was also considered while manipulating the spatial data. The raster maps of these parameters were assigned to their respective theme weight and class weights. The individual theme weight was multiplied by its respective class weight and then all the raster thematic layers were aggregated in a linear combination equation in Arc Map GIS Raster Calculator module. Moreover, the weighted layers were statistically modeled to get the areal extent of groundwater prospects with respect to each thematic layer. The final result depicts the favorable prospective zones in the study area and can be helpful in better planning and management of groundwater resources especially in hard rock terrains.
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.
2015-06-01
SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code-architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature to the methodology. In this process, the one-dimensional linear equivalent analysis produces acceleration response spectra of shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of a stratigraphic seismic response at different periods, grid-solving the calibrated Spectra model. In addition, the spectra topographic amplification is also computed by means of a numerical prediction model. This latter is built to match the results of the numerical simulations related to isolate reliefs using GIS topographic attributes. In this way, different sets of seismic response maps are developed, on which, also maps of seismic design response spectra are defined by means of an enveloping technique.
The tonotopic map in the embryonic chicken cochlea.
Jones, S M; Jones, T A
1995-02-01
The purpose of the present study was to determine the tonotopic map in the chicken cochlea at 19 days of incubation (E19) by obtaining characteristic frequencies (CFs) for primary afferents, labeling the characterized neurons, and documenting their projections to the papilla. The lowest and highest CFs recorded were 188 and 1623 Hz respectively. The embryonic tonotopic map coincided with maps reported for post-hatch chicks. There were no evidence that neurons selective to low frequencies project inappropriately to more basal locations of the embryonic papilla. Linear regression was used to estimate the frequency gradient (b = 0.037 +/- 0.012 In Hz/% [b +/- SEb]) and intercept (In C, where C = 111 Hz) of the semilog plot of frequency versus cochlear position (in % distance from apex). From these estimates the octave distribution was calculated to be 18.7%/octave or 0.58 mm/octave. These quantities were not significantly different from those found in post hatch chickens. We conclude that the tonotopic map of the avian cochlea for CFs between 100 and 1700 Hz is stable and relatively mature from age E19 to post-hatch day 21 (P21). The most striking sign of immaturity in the E19 embryo is the limited range of high CFs. We offer the hypothesis that, between the ages of E19 and P21, improvements in middle ear admittance alone or in combination with functional maturation of the cochlear base may be the principal factors responsible for the appearance of adult-like high CF limits and not an apically shifting tonotopic map.
The tonotopic map in the embryonic chicken cochlea
NASA Technical Reports Server (NTRS)
Jones, S. M.; Jones, T. A.
1995-01-01
The purpose of the present study was to determine the tonotopic map in the chicken cochlea at 19 days of incubation (E19) by obtaining characteristic frequencies (CFs) for primary afferents, labeling the characterized neurons, and documenting their projections to the papilla. The lowest and highest CFs recorded were 188 and 1623 Hz respectively. The embryonic tonotopic map coincided with maps reported for post-hatch chicks. There were no evidence that neurons selective to low frequencies project inappropriately to more basal locations of the embryonic papilla. Linear regression was used to estimate the frequency gradient (b = 0.037 +/- 0.012 In Hz/% [b +/- SEb]) and intercept (In C, where C = 111 Hz) of the semilog plot of frequency versus cochlear position (in % distance from apex). From these estimates the octave distribution was calculated to be 18.7%/octave or 0.58 mm/octave. These quantities were not significantly different from those found in post hatch chickens. We conclude that the tonotopic map of the avian cochlea for CFs between 100 and 1700 Hz is stable and relatively mature from age E19 to post-hatch day 21 (P21). The most striking sign of immaturity in the E19 embryo is the limited range of high CFs. We offer the hypothesis that, between the ages of E19 and P21, improvements in middle ear admittance alone or in combination with functional maturation of the cochlear base may be the principal factors responsible for the appearance of adult-like high CF limits and not an apically shifting tonotopic map.
A regression analysis of filler particle content to predict composite wear.
Jaarda, M J; Wang, R F; Lang, B R
1997-01-01
It has been hypothesized that composite wear is correlated to filler particle content. There is a paucity of research to substantiate this theory despite numerous projects evaluating the correlation. The purpose of this study was to determine whether a linear relationship existed between composite wear and filler particle content of 12 composites. In vivo wear data had been previously collected for the 12 composites and served as basis for this study. Scanning electron microscopy and backscatter electron imaging were combined with digital imaging analysis to develop "profile maps" of the filler particle composition of the composites. These profile maps included eight parameters: (1) total number of filler particles/28742.6 microns2, (2) percent of area occupied by all of the filler particles, (3) mean filler particle size, (4) percent of area occupied by the matrix, (5) percent of area occupied by filler particles, r (radius) 1.0 < or = micron, (6) percent of area occupied by filler particles, r = 1.0 < or = 4.5 microns, (7) percent of area occupied by filler particles, r = 4.5 < or = 10 microns, and (8) percent of area occupied by filler particles, r > 10 microns. Forward stepwise regression analyses were used with composite wear as the dependent variable and the eight parameters as independent variables. The results revealed a linear relationship between composite wear and the filler particle content. A mathematical formula was developed to predict composite wear.
Aein, Fereshteh; Aliakbari, Fatemeh
2017-01-01
Concept map is a useful cognitive tool for enhancing a student's critical thinking (CT) by encouraging students to process information deeply for understanding. However, the evidence regarding its effectiveness on nursing students' CT is contradictory. This paper compares the effectiveness of concept mapping and traditional linear nursing care planning on students' CT. An experimental design was used to examine the CT of 60 baccalaureate students who participated in pediatric clinical nursing course in the Shahrekord University of Medical Sciences, Shahrekord, Iran in 2013. Participants were randomly divided into six equal groups of each 10 student, of which three groups were the control group, and the others were the experimental group. The control group completed nine traditional linear nursing care plans, whereas experimental group completed nine concept maps during the course. Both groups showed significant improvement in overall and all subscales of the California CT skill test from pretest to posttest ( P < 0.001), but t -test demonstrated that improvement in students' CT skills in the experimental group was significantly greater than in the control group after the program ( P < 0.001). Our findings support that concept mapping can be used as a clinical teaching-learning activity to promote CT in nursing students.
Aein, Fereshteh; Aliakbari, Fatemeh
2017-01-01
Introduction: Concept map is a useful cognitive tool for enhancing a student's critical thinking (CT) by encouraging students to process information deeply for understanding. However, the evidence regarding its effectiveness on nursing students’ CT is contradictory. This paper compares the effectiveness of concept mapping and traditional linear nursing care planning on students’ CT. Methods: An experimental design was used to examine the CT of 60 baccalaureate students who participated in pediatric clinical nursing course in the Shahrekord University of Medical Sciences, Shahrekord, Iran in 2013. Results: Participants were randomly divided into six equal groups of each 10 student, of which three groups were the control group, and the others were the experimental group. The control group completed nine traditional linear nursing care plans, whereas experimental group completed nine concept maps during the course. Both groups showed significant improvement in overall and all subscales of the California CT skill test from pretest to posttest (P < 0.001), but t-test demonstrated that improvement in students’ CT skills in the experimental group was significantly greater than in the control group after the program (P < 0.001). Conclusions: Our findings support that concept mapping can be used as a clinical teaching-learning activity to promote CT in nursing students. PMID:28546978
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
The growth of carbon chains in IRC +10216 mapped with ALMA⋆
Agúndez, M.; Cernicharo, J.; Quintana-Lacaci, G.; Castro-Carrizo, A.; Velilla Prieto, L.; Marcelino, N.; Guélin, M.; Joblin, C.; Martín-Gago, J. A.; Gottlieb, C. A.; Patel, N. A.; McCarthy, M. C.
2017-01-01
Linear carbon chains are common in various types of astronomical molecular sources. Possible formation mechanisms involve both bottom-up and top-down routes. We have carried out a combined observational and modeling study of the formation of carbon chains in the C-star envelope IRC +10216, where the polymerization of acetylene and hydrogen cyanide induced by ultraviolet photons can drive the formation of linear carbon chains of increasing length. We have used ALMA to map the emission of λ 3 mm rotational lines of the hydrocarbon radicals C2H, C4H, and C6H, and the CN-containing species CN, C3N, HC3N, and HC5N with an angular resolution of ~1″. The spatial distribution of all these species is a hollow, 5-10″ wide, spherical shell located at a radius of 10-20″ from the star, with no appreciable emission close to the star. Our observations resolve the broad shell of carbon chains into thinner sub-shells which are 1-2″ wide and not fully concentric, indicating that the mass loss process has been discontinuous and not fully isotropic. The radial distributions of the species mapped reveal subtle differences: while the hydrocarbon radicals have very similar radial distributions, the CN-containing species show more diverse distributions, with HC3N appearing earlier in the expansion and the radical CN extending later than the rest of the species. The observed morphology can be rationalized by a chemical model in which the growth of polyynes is mainly produced by rapid gas-phase chemical reactions of C2H and C4H radicals with unsaturated hydrocarbons, while cyanopolyynes are mainly formed from polyynes in gas-phase reactions with CN and C3N radicals. PMID:28469283
Remote sensing the sea surface CO2 of the Baltic Sea using the SOMLO methodology
NASA Astrophysics Data System (ADS)
Parard, G.; Charantonis, A. A.; Rutgerson, A.
2015-06-01
Studies of coastal seas in Europe have noted the high variability of the CO2 system. This high variability, generated by the complex mechanisms driving the CO2 fluxes, complicates the accurate estimation of these mechanisms. This is particularly pronounced in the Baltic Sea, where the mechanisms driving the fluxes have not been characterized in as much detail as in the open oceans. In addition, the joint availability of in situ measurements of CO2 and of sea-surface satellite data is limited in the area. In this paper, we used the SOMLO (self-organizing multiple linear output; Sasse et al., 2013) methodology, which combines two existing methods (i.e. self-organizing maps and multiple linear regression) to estimate the ocean surface partial pressure of CO2 (pCO2) in the Baltic Sea from the remotely sensed sea surface temperature, chlorophyll, coloured dissolved organic matter, net primary production, and mixed-layer depth. The outputs of this research have a horizontal resolution of 4 km and cover the 1998-2011 period. These outputs give a monthly map of the Baltic Sea at a very fine spatial resolution. The reconstructed pCO2 values over the validation data set have a correlation of 0.93 with the in situ measurements and a root mean square error of 36 μatm. Removing any of the satellite parameters degraded this reconstructed CO2 flux, so we chose to supply any missing data using statistical imputation. The pCO2 maps produced using this method also provide a confidence level of the reconstruction at each grid point. The results obtained are encouraging given the sparsity of available data, and we expect to be able to produce even more accurate reconstructions in coming years, given the predicted acquisition of new data.
Accurate construction of consensus genetic maps via integer linear programming.
Wu, Yonghui; Close, Timothy J; Lonardi, Stefano
2011-01-01
We study the problem of merging genetic maps, when the individual genetic maps are given as directed acyclic graphs. The computational problem is to build a consensus map, which is a directed graph that includes and is consistent with all (or, the vast majority of) the markers in the input maps. However, when markers in the individual maps have ordering conflicts, the resulting consensus map will contain cycles. Here, we formulate the problem of resolving cycles in the context of a parsimonious paradigm that takes into account two types of errors that may be present in the input maps, namely, local reshuffles and global displacements. The resulting combinatorial optimization problem is, in turn, expressed as an integer linear program. A fast approximation algorithm is proposed, and an additional speedup heuristic is developed. Our algorithms were implemented in a software tool named MERGEMAP which is freely available for academic use. An extensive set of experiments shows that MERGEMAP consistently outperforms JOINMAP, which is the most popular tool currently available for this task, both in terms of accuracy and running time. MERGEMAP is available for download at http://www.cs.ucr.edu/~yonghui/mgmap.html.
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria
2016-04-01
The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of a stratigraphic seismic response at different periods, grid solving the calibrated Emul-spectra model. In addition, the spectra topographic amplification is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of the numerical simulations related to isolate reliefs using GIS morphometric data. In this way, different sets of seismic response maps are developed on which maps of design acceleration response spectra are also defined by means of an enveloping technique.
NASA Astrophysics Data System (ADS)
Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.
2018-02-01
Light detection and ranging (lidar) data have been increasingly used for forest classification due to its ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high resolution aerial imagery. Rainforest stands were also identified in the region that have not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, the classification accuracy was only marginally better than linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.
Kokuryo, Daisuke; Aoki, Ichio; Yuba, Eiji; Kono, Kenji; Aoshima, Sadahito; Kershaw, Jeff; Saga, Tsuneo
2017-07-01
The combination of radiotherapy with chemotherapy is one of the most promising strategies for cancer treatment. Here, a novel combination strategy utilizing carbon ion irradiation as a high-linear energy transfer (LET) radiotherapy and a thermo-triggered nanodevice is proposed, and drug accumulation in the tumor and treatment effects are evaluated using magnetic resonance imaging relaxometry and immunohistology (Ki-67, n = 15). The thermo-triggered liposomal anticancer nanodevice was administered into colon-26 tumor-grafted mice, and drug accumulation and efficacy was compared for 6 groups (n = 32) that received or did not receive the radiotherapy and thermo trigger. In vivo quantitative R 1 maps visually demonstrated that the multimodal thermosensitive polymer-modified liposomes (MTPLs) can accumulate in the tumor tissue regardless of whether the region was irradiated by carbon ions or not. The tumor volume after combination treatment with carbon ion irradiation and MTPLs with thermo-triggering was significantly smaller than all the control groups at 8 days after treatment. The proposed strategy of combining high-LET irradiation and the nanodevice provides an effective approach for minimally invasive cancer treatment. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
An approach to localize the retinal blood vessels using bit planes and centerline detection.
Fraz, M M; Barman, S A; Remagnino, P; Hoppe, A; Basit, A; Uyyanonvara, B; Rudnicka, A R; Owen, C G
2012-11-01
The change in morphology, diameter, branching pattern or tortuosity of retinal blood vessels is an important indicator of various clinical disorders of the eye and the body. This paper reports an automated method for segmentation of blood vessels in retinal images. A unique combination of techniques for vessel centerlines detection and morphological bit plane slicing is presented to extract the blood vessel tree from the retinal images. The centerlines are extracted by using the first order derivative of a Gaussian filter in four orientations and then evaluation of derivative signs and average derivative values is performed. Mathematical morphology has emerged as a proficient technique for quantifying the blood vessels in the retina. The shape and orientation map of blood vessels is obtained by applying a multidirectional morphological top-hat operator with a linear structuring element followed by bit plane slicing of the vessel enhanced grayscale image. The centerlines are combined with these maps to obtain the segmented vessel tree. The methodology is tested on three publicly available databases DRIVE, STARE and MESSIDOR. The results demonstrate that the performance of the proposed algorithm is comparable with state of the art techniques in terms of accuracy, sensitivity and specificity. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Playing Linear Number Board Games Improves Children's Mathematical Knowledge
ERIC Educational Resources Information Center
Siegler, Robert S.; Ramani, Geetha
2009-01-01
The present study focused on two main goals. One was to test the "representational mapping hypothesis": The greater the transparency of the mapping between physical materials and desired internal representations, the greater the learning of the desired internal representation. The implication of the representational mapping hypothesis in the…
Microbial genome sequencing using optical mapping and Illumina sequencing
USDA-ARS?s Scientific Manuscript database
Introduction Optical mapping is a technique in which strands of genomic DNA are digested with one or more restriction enzymes, and a physical map of the genome constructed from the resulting image. In outline, genomic DNA is extracted from a pure culture, linearly arrayed on a specialized glass sli...
EEG and MEG data analysis in SPM8.
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is a free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox , making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.
EEG and MEG Data Analysis in SPM8
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is a free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox , making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221
Kamali, Tschackad; Považay, Boris; Kumar, Sunil; Silberberg, Yaron; Hermann, Boris; Werkmeister, René; Drexler, Wolfgang; Unterhuber, Angelika
2014-10-01
We demonstrate a multimodal optical coherence tomography (OCT) and online Fourier transform coherent anti-Stokes Raman scattering (FTCARS) platform using a single sub-12 femtosecond (fs) Ti:sapphire laser enabling simultaneous extraction of structural and chemical ("morphomolecular") information of biological samples. Spectral domain OCT prescreens the specimen providing a fast ultrahigh (4×12 μm axial and transverse) resolution wide field morphologic overview. Additional complementary intrinsic molecular information is obtained by zooming into regions of interest for fast label-free chemical mapping with online FTCARS spectroscopy. Background-free CARS is based on a Michelson interferometer in combination with a highly linear piezo stage, which allows for quick point-to-point extraction of CARS spectra in the fingerprint region in less than 125 ms with a resolution better than 4 cm(-1) without the need for averaging. OCT morphology and CARS spectral maps indicating phosphate and carbonate bond vibrations from human bone samples are extracted to demonstrate the performance of this hybrid imaging platform.
NASA Astrophysics Data System (ADS)
Nikitin, Maxim; Yuriev, Mikhail; Brusentsov, Nikolai; Vetoshko, Petr; Nikitin, Petr
2010-12-01
Quantitative detection of magnetic nanoparticles (MP) in vivo is very important for various biomedical applications. Our original detection method based on non-linear MP magnetization has been modified for non-invasive in vivo mapping of the MP distribution among different organs of rats. A novel highly sensitive room-temperature device equipped with an external probe has been designed and tested for quantification of MP within 20-mm depth from the animal skin. Results obtained by external in vivo scanning of rats by the probe and ex vivo MP quantification in different organs of rats well correlated. The method allows long-term in vivo study of MP evolution, clearance and redistribution among different organs of the animal. Experiments showed that dynamics in vivo strongly depend on MP characteristics (size, material, coatings, etc.), site of injection and dose. The developed detection method combined with the magnetic nanolabels can substitute the radioactive labeling in many applications.
Scalable Regression Tree Learning on Hadoop using OpenPlanet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Wei; Simmhan, Yogesh; Prasanna, Viktor
As scientific and engineering domains attempt to effectively analyze the deluge of data arriving from sensors and instruments, machine learning is becoming a key data mining tool to build prediction models. Regression tree is a popular learning model that combines decision trees and linear regression to forecast numerical target variables based on a set of input features. Map Reduce is well suited for addressing such data intensive learning applications, and a proprietary regression tree algorithm, PLANET, using MapReduce has been proposed earlier. In this paper, we describe an open source implement of this algorithm, OpenPlanet, on the Hadoop framework usingmore » a hybrid approach. Further, we evaluate the performance of OpenPlanet using realworld datasets from the Smart Power Grid domain to perform energy use forecasting, and propose tuning strategies of Hadoop parameters to improve the performance of the default configuration by 75% for a training dataset of 17 million tuples on a 64-core Hadoop cluster on FutureGrid.« less
Guo, J.; Tsang, L.; Josberger, E.G.; Wood, A.W.; Hwang, J.-N.; Lettenmaier, D.P.
2003-01-01
This paper presents an algorithm that estimates the spatial distribution and temporal evolution of snow water equivalent and snow depth based on passive remote sensing measurements. It combines the inversion of passive microwave remote sensing measurements via dense media radiative transfer modeling results with snow accumulation and melt model predictions to yield improved estimates of snow depth and snow water equivalent, at a pixel resolution of 5 arc-min. In the inversion, snow grain size evolution is constrained based on pattern matching by using the local snow temperature history. This algorithm is applied to produce spatial snow maps of Upper Rio Grande River basin in Colorado. The simulation results are compared with that of the snow accumulation and melt model and a linear regression method. The quantitative comparison with the ground truth measurements from four Snowpack Telemetry (SNOTEL) sites in the basin shows that this algorithm is able to improve the estimation of snow parameters.
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harpool, K; De La Fuente Herman, T; Ahmad, S
Purpose: To investigate quantitatively the accuracy of dose distributions for the Ir-192 high-dose-rate (HDR) brachytherapy source calculated by the Brachytherapy-Planning system (BPS) and measured using a multiple-array-diode-detector in a heterogeneous medium. Methods: A two-dimensional diode-array-detector system (MapCheck2) was scanned with a catheter and the CT-images were loaded into the Varian-Brachytherapy-Planning which uses TG-43-formalism for dose calculation. Treatment plans were calculated for different combinations of one dwell-position and varying irradiation times and different-dwell positions and fixed irradiation time with the source placed 12mm from the diode-array plane. The calculated dose distributions were compared to the measured doses with MapCheck2 delivered bymore » an Ir-192-source from a Nucletron-Microselectron-V2-remote-after-loader. The linearity of MapCheck2 was tested for a range of dwell-times (2–600 seconds). The angular effect was tested with 30 seconds irradiation delivered to the central-diode and then moving the source away in increments of 10mm. Results: Large differences were found between calculated and measured dose distributions. These differences are mainly due to absence of heterogeneity in the dose calculation and diode-artifacts in the measurements. The dose differences between measured and calculated due to heterogeneity ranged from 5%–12% depending on the position of the source relative to the diodes in MapCheck2 and different heterogeneities in the beam path. The linearity test of the diode-detector showed 3.98%, 2.61%, and 2.27% over-response at short irradiation times of 2, 5, and 10 seconds, respectively, and within 2% for 20 to 600 seconds (p-value=0.05) which depends strongly on MapCheck2 noise. The angular dependency was more pronounced at acute angles ranging up to 34% at 5.7 degrees. Conclusion: Large deviations between measured and calculated dose distributions for HDR-brachytherapy with Ir-192 may be improved when considering medium heterogeneity and dose-artifact of the diodes. This study demonstrates that multiple-array-diode-detectors provide practical and accurate dosimeter to verify doses delivered from the brachytherapy Ir-192-source.« less
The Super-Linear Slope Of The Spatially-resolved Star Formation Law In NGC 3521 And NGC 5194 (m51a)
NASA Astrophysics Data System (ADS)
Liu, Guilin; Koda, J.; Calzetti, D.; Fukuhara, M.; Momose, R.
2011-01-01
We have conducted interferometric observations with CARMA and an OTF mapping with the 45-m telescope at NRO in the CO (1-0) emission line of NGC 3521. Combining these new data, together with similar data for M51a and archival SINGS H-alpha, 24um, THINGS H I and GALEX FUV data for both galaxies, we investigate the empirical scaling law that connects the surface density of star formation rate (SFR) and cold gas (the Schmidt-Kennicutt law) on a spatially-resolved basis, and find a super-linear slope when carefully subtracting the background emissions in the SFR image. We argue that plausibly deriving SFR maps of nearby galaxies requires the diffuse stellar/dust background emission to be carefully subtracted (especially in mid-IR). An approach to complete this task is presented and applied in our pixel-by-pixel analysis on both galaxies, showing that the controversial results whether the molecular S-K law is super-linear or basically linear is a result of removing or preserving the local background. In both galaxies, the power index of the molecular S-K law is super-linear (1.5-1.9) at the highest available resolution (230 pc), and decreases monotonically for decreasing resolution; while the scatter (mainly intrinsic) increases as the resolution becomes higher, indicating a trend for which the S-K law breaks down below some scale. Both quantities are systematically larger in M51a than in NGC 3521, but when plotted against the de-projected scale, they become highly consistent between the two galaxies, tentatively suggesting that the sub-kpc molecular S-K law in spiral galaxies depends only on the scale being considered, without varying amongst spiral galaxies. We obtaion slope=-1.1[log(scale/kpc)]+1.4 and scatter=-0.2 [scale/kpc]+0.7 through fitting to the M51a data, which describes both galaxies impressively well on sub-kpc scales. However, a larger sample of galaxies with better sensitivity, resolution and broader FoV are required to test these results.
Acoustic-articulatory mapping in vowels by locally weighted regression
McGowan, Richard S.; Berger, Michael A.
2009-01-01
A method for mapping between simultaneously measured articulatory and acoustic data is proposed. The method uses principal components analysis on the articulatory and acoustic variables, and mapping between the domains by locally weighted linear regression, or loess [Cleveland, W. S. (1979). J. Am. Stat. Assoc. 74, 829–836]. The latter method permits local variation in the slopes of the linear regression, assuming that the function being approximated is smooth. The methodology is applied to vowels of four speakers in the Wisconsin X-ray Microbeam Speech Production Database, with formant analysis. Results are examined in terms of (1) examples of forward (articulation-to-acoustics) mappings and inverse mappings, (2) distributions of local slopes and constants, (3) examples of correlations among slopes and constants, (4) root-mean-square error, and (5) sensitivity of formant frequencies to articulatory change. It is shown that the results are qualitatively correct and that loess performs better than global regression. The forward mappings show different root-mean-square error properties than the inverse mappings indicating that this method is better suited for the forward mappings than the inverse mappings, at least for the data chosen for the current study. Some preliminary results on sensitivity of the first two formant frequencies to the two most important articulatory principal components are presented. PMID:19813812
Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan
2016-01-01
Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and the spatially explicit transformations of AML were different. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as the patches combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in study area. These findings contribute to better mapping AML dynamics and providing policy support for the management of AML. PMID:27023575
Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan
2016-03-24
Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and the spatially explicit transformations of AML were different. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as the patches combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in study area. These findings contribute to better mapping AML dynamics and providing policy support for the management of AML.
Du, Jia; Younes, Laurent; Qiu, Anqi
2011-01-01
This paper introduces a novel large deformation diffeomorphic metric mapping algorithm for whole brain registration where sulcal and gyral curves, cortical surfaces, and intensity images are simultaneously carried from one subject to another through a flow of diffeomorphisms. To the best of our knowledge, this is the first time that the diffeomorphic metric from one brain to another is derived in a shape space of intensity images and point sets (such as curves and surfaces) in a unified manner. We describe the Euler–Lagrange equation associated with this algorithm with respect to momentum, a linear transformation of the velocity vector field of the diffeomorphic flow. The numerical implementation for solving this variational problem, which involves large-scale kernel convolution in an irregular grid, is made feasible by introducing a class of computationally friendly kernels. We apply this algorithm to align magnetic resonance brain data. Our whole brain mapping results show that our algorithm outperforms the image-based LDDMM algorithm in terms of the mapping accuracy of gyral/sulcal curves, sulcal regions, and cortical and subcortical segmentation. Moreover, our algorithm provides better whole brain alignment than combined volumetric and surface registration (Postelnicu et al., 2009) and hierarchical attribute matching mechanism for elastic registration (HAMMER) (Shen and Davatzikos, 2002) in terms of cortical and subcortical volume segmentation. PMID:21281722
Evrendilek, Fatih
2007-12-12
This study aims at quantifying spatio-temporal dynamics of monthly mean dailyincident photosynthetically active radiation (PAR) over a vast and complex terrain such asTurkey. The spatial interpolation method of universal kriging, and the combination ofmultiple linear regression (MLR) models and map algebra techniques were implemented togenerate surface maps of PAR with a grid resolution of 500 x 500 m as a function of fivegeographical and 14 climatic variables. Performance of the geostatistical and MLR modelswas compared using mean prediction error (MPE), root-mean-square prediction error(RMSPE), average standard prediction error (ASE), mean standardized prediction error(MSPE), root-mean-square standardized prediction error (RMSSPE), and adjustedcoefficient of determination (R² adj. ). The best-fit MLR- and universal kriging-generatedmodels of monthly mean daily PAR were validated against an independent 37-year observeddataset of 35 climate stations derived from 160 stations across Turkey by the Jackknifingmethod. The spatial variability patterns of monthly mean daily incident PAR were moreaccurately reflected in the surface maps created by the MLR-based models than in thosecreated by the universal kriging method, in particular, for spring (May) and autumn(November). The MLR-based spatial interpolation algorithms of PAR described in thisstudy indicated the significance of the multifactor approach to understanding and mappingspatio-temporal dynamics of PAR for a complex terrain over meso-scales.
Klem, S A; Farrington, J M; Leff, R D
1993-08-01
To determine whether variations in the flow rate of epinephrine solutions administered via commonly available infusion pumps lead to significant variations in blood pressure (BP) in vivo. Prospective, randomized, crossover study with factorial design, using infusion pumps with four different operating mechanisms (pulsatile diaphragm, linear piston/syringe, cyclic piston-valve, and linear peristaltic) and three drug delivery rates (1, 5, and 10 mL/hr). Two healthy, mixed-breed dogs (12 to 16 kg). Dogs were made hypotensive with methohexital bolus and continuous infusion. BP was restored to normal with constant-dose epinephrine infusion via two pumps at each rate. Femoral mean arterial pressure (MAP) was recorded every 10 secs. Pump-flow continuity was quantitated in vitro using a digital gravimetric technique. Variations in MAP and flow continuity were expressed by the coefficient of variation; analysis of variance was used for comparisons. The mean coefficients of variations for MAP varied from 3.8 +/- 3.1% (linear piston/syringe) to 6.1 +/- 6.6% (linear peristaltic), and from 3.4 +/- 2.2% (10 mL/hr) to 7.9 +/- 6.6% (1 mL/hr). The coefficients of variation for in vitro flow continuity ranged from 9 +/- 8% (linear piston-syringe) to 250 +/- 162% (pulsatile diaphragm), and from 35 +/- 44% (10 mL/hr) to 138 +/- 196% (1 mL/hr). Both the type of pump and infusion rate significantly (p < .001) influenced variation in drug delivery rate. The 1 mL/hr infusion rate significantly (p < .01) influenced MAP variation. Cyclic fluctuations in MAP of < or = 30 mm Hg were observed using the pulsatile diaphragm pump at 1 mL/hr. Factors inherent in the operating mechanisms of infusion pumps may result in clinically important hemodynamic fluctuations when administering a concentrated short-acting vasoactive medication at slow infusion rates.
NASA Astrophysics Data System (ADS)
Punjabi, Alkesh; Ali, Halima; Farhat, Hamidullah
2009-07-01
Extra terms are added to the generating function of the simple map (Punjabi et al 1992 Phys. Rev. Lett. 69 3322) to adjust shear of magnetic field lines in divertor tokamaks. From this new generating function, a higher shear map is derived from a canonical transformation. A continuous analog of the higher shear map is also derived. The method of maps (Punjabi et al 1994 J. Plasma Phys. 52 91) is used to calculate the average shear, stochastic broadening of the ideal separatrix near the X-point in the principal plane of the tokamak, loss of poloidal magnetic flux from inside the ideal separatrix, magnetic footprint on the collector plate, and its area, and the radial diffusion coefficient of magnetic field lines near the X-point. It is found that the width of the stochastic layer near the X-point and the loss of poloidal flux from inside the ideal separatrix scale linearly with average shear. The area of magnetic footprints scales roughly linearly with average shear. Linear scaling of the area is quite good when the average shear is greater than or equal to 1.25. When the average shear is in the range 1.1-1.25, the area of the footprint fluctuates (as a function of average shear) and scales faster than linear scaling. Radial diffusion of field lines near the X-point increases very rapidly by about four orders of magnitude as average shear increases from about 1.15 to 1.5. For higher values of average shear, diffusion increases linearly, and comparatively very slowly. The very slow scaling of the radial diffusion of the field can flatten the plasma pressure gradient near the separatrix, and lead to the elimination of type-I edge localized modes.
Mapping nonlinear receptive field structure in primate retina at single cone resolution
Li, Peter H; Greschner, Martin; Gunning, Deborah E; Mathieson, Keith; Sher, Alexander; Litke, Alan M; Paninski, Liam
2015-01-01
The function of a neural circuit is shaped by the computations performed by its interneurons, which in many cases are not easily accessible to experimental investigation. Here, we elucidate the transformation of visual signals flowing from the input to the output of the primate retina, using a combination of large-scale multi-electrode recordings from an identified ganglion cell type, visual stimulation targeted at individual cone photoreceptors, and a hierarchical computational model. The results reveal nonlinear subunits in the circuity of OFF midget ganglion cells, which subserve high-resolution vision. The model explains light responses to a variety of stimuli more accurately than a linear model, including stimuli targeted to cones within and across subunits. The recovered model components are consistent with known anatomical organization of midget bipolar interneurons. These results reveal the spatial structure of linear and nonlinear encoding, at the resolution of single cells and at the scale of complete circuits. DOI: http://dx.doi.org/10.7554/eLife.05241.001 PMID:26517879
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
Steiner, M. A.; Bunn, J. R.; Einhorn, J. R.; ...
2017-05-16
This study reports an angular diffraction peak shift that scales linearly with the neutron beam path length traveled through a diffracting sample. This shift was observed in the context of mapping the residual stress state of a large U–8 wt% Mo casting, as well as during complementary measurements on a smaller casting of the same material. If uncorrected, this peak shift implies a non-physical level of residual stress. A hypothesis for the origin of this shift is presented, based upon non-ideal focusing of the neutron monochromator in combination with changes to the wavelength distribution reaching the detector due to factorsmore » such as attenuation. The magnitude of the shift is observed to vary linearly with the width of the diffraction peak reaching the detector. Consideration of this shift will be important for strain measurements requiring long path lengths through samples with significant attenuation. This effect can probably be reduced by selecting smaller voxel slit widths.« less
An automated mapping satellite system ( Mapsat).
Colvocoresses, A.P.
1982-01-01
The favorable environment of space permits a satellite to orbit the Earth with very high stability as long as no local perturbing forces are involved. Solid-state linear-array sensors have no moving parts and create no perturbing force on the satellite. Digital data from highly stabilized stereo linear arrays are amenable to simplified processing to produce both planimetric imagery and elevation data. A satellite imaging system, called Mapsat, including this concept has been proposed to produce data from which automated mapping in near real time can be accomplished. Image maps as large as 1:50 000 scale with contours as close as a 20-m interval may be produced from Mapsat data. -from Author
Toward best practices for developing regional connectivity maps.
Beier, Paul; Spencer, Wayne; Baldwin, Robert F; McRae, Brad H
2011-10-01
To conserve ecological connectivity (the ability to support animal movement, gene flow, range shifts, and other ecological and evolutionary processes that require large areas), conservation professionals need coarse-grained maps to serve as decision-support tools or vision statements and fine-grained maps to prescribe site-specific interventions. To date, research has focused primarily on fine-grained maps (linkage designs) covering small areas. In contrast, we devised 7 steps to coarsely map dozens to hundreds of linkages over a large area, such as a nation, province, or ecoregion. We provide recommendations on how to perform each step on the basis of our experiences with 6 projects: California Missing Linkages (2001), Arizona Wildlife Linkage Assessment (2006), California Essential Habitat Connectivity (2010), Two Countries, One Forest (northeastern United States and southeastern Canada) (2010), Washington State Connected Landscapes (2010), and the Bhutan Biological Corridor Complex (2010). The 2 most difficult steps are mapping natural landscape blocks (areas whose conservation value derives from the species and ecological processes within them) and determining which pairs of blocks can feasibly be connected in a way that promotes conservation. Decision rules for mapping natural landscape blocks and determining which pairs of blocks to connect must reflect not only technical criteria, but also the values and priorities of stakeholders. We recommend blocks be mapped on the basis of a combination of naturalness, protection status, linear barriers, and habitat quality for selected species. We describe manual and automated procedures to identify currently functioning or restorable linkages. Once pairs of blocks have been identified, linkage polygons can be mapped by least-cost modeling, other approaches from graph theory, or individual-based movement models. The approaches we outline make assumptions explicit, have outputs that can be improved as underlying data are improved, and help implementers focus strictly on ecological connectivity. ©2011 Society for Conservation Biology.
On-the-go mapping of soil mechanical resistance using a linear depth effect model.
USDA-ARS?s Scientific Manuscript database
An instrumented blade sensor was developed to map soil mechanical resistance as well as its change with depth. The sensor has become a part of the Integrated Soil Physical Properties Mapping System (ISPPMS), which also includes an optical and a capacitor-based sensor. The instrumented blade of the...
Background controlled QTL mapping in pure-line genetic populations derived from four-way crosses
Zhang, S; Meng, L; Wang, J; Zhang, L
2017-01-01
Pure lines derived from multiple parents are becoming more important because of the increased genetic diversity, the possibility to conduct replicated phenotyping trials in multiple environments and potentially high mapping resolution of quantitative trait loci (QTL). In this study, we proposed a new mapping method for QTL detection in pure-line populations derived from four-way crosses, which is able to control the background genetic variation through a two-stage mapping strategy. First, orthogonal variables were created for each marker and used in an inclusive linear model, so as to completely absorb the genetic variation in the mapping population. Second, inclusive composite interval mapping approach was implemented for one-dimensional scanning, during which the inclusive linear model was employed to control the background variation. Simulation studies using different genetic models demonstrated that the new method is efficient when considering high detection power, low false discovery rate and high accuracy in estimating quantitative trait loci locations and effects. For illustration, the proposed method was applied in a reported wheat four-way recombinant inbred line population. PMID:28722705
Background controlled QTL mapping in pure-line genetic populations derived from four-way crosses.
Zhang, S; Meng, L; Wang, J; Zhang, L
2017-10-01
Pure lines derived from multiple parents are becoming more important because of the increased genetic diversity, the possibility to conduct replicated phenotyping trials in multiple environments and potentially high mapping resolution of quantitative trait loci (QTL). In this study, we proposed a new mapping method for QTL detection in pure-line populations derived from four-way crosses, which is able to control the background genetic variation through a two-stage mapping strategy. First, orthogonal variables were created for each marker and used in an inclusive linear model, so as to completely absorb the genetic variation in the mapping population. Second, inclusive composite interval mapping approach was implemented for one-dimensional scanning, during which the inclusive linear model was employed to control the background variation. Simulation studies using different genetic models demonstrated that the new method is efficient when considering high detection power, low false discovery rate and high accuracy in estimating quantitative trait loci locations and effects. For illustration, the proposed method was applied in a reported wheat four-way recombinant inbred line population.
Stochastic Analysis and Design of Systems
2011-09-14
measures is described by the Frobenius - Perron operator corresponding to the map T (qi, ., .). This is the unique operator [Pi] such that∫ A [Pi]µ(x...ξi(k)) are non-linear, the Frobenius - Perron operators are linear operators, but infinite-dimensional. For more details on the theory of these...given by the Frobenius - Perron operator corresponding to the map R(qi, qj , ., .). This is given as ∫ A [Mi,j ]µ(x)dx = Eηj ∫ Rn µ(x).χA(R(qi, qj , x
Structural interpretations based on ERTS-1 imagery, Bighorn Region, Wyoming-Montana
NASA Technical Reports Server (NTRS)
Hoppin, R. A.
1973-01-01
Structural analysis is being carried out on bands MSS 5 and 7 of scene 1085-17294. Geologic strucutre is primarily revealed in the topographic relief and drainage. Topographic linears are particularly well developed in the bighorn uplift. Many of these occur along known faults and shear zones in the Precambrian core; several have not been previously mapped. These linears, however, do not continue into the younger rocks of the flanks or do so in a much less marked manner than in the core. Linears are far less abundant in the basin or are manifested only in very subtle tonal contrasts and somewhat straight drainage segments. Some of the linears are aligned along trends previously postulated on the basis of surface mapping to be lineaments. The imagery reveals little or no evidence of strike-slip displacements along these lineaments.
Computation of the anharmonic orbits in two piecewise monotonic maps with a single discontinuity
NASA Astrophysics Data System (ADS)
Li, Yurong; Du, Zhengdong
2017-02-01
In this paper, the bifurcation values for two typical piecewise monotonic maps with a single discontinuity are computed. The variation of the parameter of those maps leads to a sequence of border-collision and period-doubling bifurcations, generating a sequence of anharmonic orbits on the boundary of chaos. The border-collision and period-doubling bifurcation values are computed by the word-lifting technique and the Maple fsolve function or the Newton-Raphson method, respectively. The scaling factors which measure the convergent rates of the bifurcation values and the width of the stable periodic windows, respectively, are investigated. We found that these scaling factors depend on the parameters of the maps, implying that they are not universal. Moreover, if one side of the maps is linear, our numerical results suggest that those quantities converge increasingly. In particular, for the linear-quadratic case, they converge to one of the Feigenbaum constants δ _F= 4.66920160\\cdots.
Reconstructing Information in Large-Scale Structure via Logarithmic Mapping
NASA Astrophysics Data System (ADS)
Szapudi, Istvan
We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non- linear evolution, some of that useful information leaks into higher-order statistics. The PI and group has shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking the inverse errorbar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sam- pling (Poisson or sub-Poisson noise), bias (linear, or non-linear, deterministic, or stochastic), redshift distortions, pro jection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targetting Planck. In addition, we plan to investigate the question of residual higher- order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including DETF Figure of merit when applicable) the efficiency of our estimators, comparing with the conventional method, that uses the un-transformed field. Preliminary results indicate that the increase for NASA's WFIRST in the DETF Figure of Merit would be 1.5-4.2 using a range of pessimistic to optimistic assumptions, respectively.
NASA Technical Reports Server (NTRS)
Isachsen, Y. W. (Principal Investigator); Fakundiny, R. H.; Forster, S. W.
1974-01-01
The author has identified the following significant results. Linear anomalies dominate the new geological information derived from ERTS-1 imagery, total lengths now exceeding 26,500 km. Maxima on rose diagrams for ERTS-1 anomalies correspond well with those for mapped faults and topographic lineaments. Multi-scale analysis of linears shows that single topographic linears at 1:2,500,000 may become dashed linears at 1:1,000,000 aligned zones of shorter parallel, en echelon, or conjugate linears at 1:5000,000, and shorter linears lacking any conspicuous zonal alignment at 1:250,000. Field work in the Catskills suggests that the prominent new NNE lineaments may be surface manifestations of dip slip faulting in the basement, and that it may become possible to map major joint sets over extensive plateau regions directly on the imagery. Most circular features found were explained away by U-2 airfoto analysis but several remain as anomalies. Visible glacial features include individual drumlins, drumlinoids, eskers, ice-marginal drainage channels, glacial lake shorelines, sand plains, and end moraines.
Evaluation of ERTS-1 imagery for spectral geological mapping in diverse terranes of New York State
NASA Technical Reports Server (NTRS)
Isachsen, Y. W. (Principal Investigator); Fakundiny, R. H.; Forster, S. W.
1973-01-01
The author has identified the following significant results. Linear anomalies dominate the new geological information derived from ERTS-1 imagery, total lengths now exceeding 6000 km. Experimentation with a variety of viewing techniques suggest that conventional photogeologic analyses of band 7 results in the location of more than 97 percent of all linears found. Bedrock lithologic types are distinguishable only where they are topographically expressed or govern land use signatures. The maxima on rose diagrams for ERTS-1 anomalies correspond well with those for mapped faults and topographic lineaments. A multiscale analysis of linears showed that single topographic linears at 1:2,500,000 became dashed linears at 1:1,000,000 aligned zones of shorter parallel, en echelon, or conjugate linears at 1:500,00. Most circular features found were explained away by U-2 airphoto analysis but several remain as anomalies. Visible glacial features include individual drumlins, best seen in winter imagery, drumlinoids, eskers, ice-marginal drainage channels, glacial lake shorelines and sand plains, and end moraines.
Visual saliency in MPEG-4 AVC video stream
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.
2015-03-01
Visual saliency maps already proved their efficiency in a large variety of image/video communication application fields, covering from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (like color, intensity, orientation, motion,…) computed from the pixel representation of the visual content. This paper resumes and extends our previous work devoted to the definition of a saliency map solely extracted from the MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of static and dynamic map. The static saliency map is in its turn a combination of intensity, color and orientation features maps. Despite the particular way in which all these elementary maps are computed, the fusion techniques allowing their combination plays a critical role in the final result and makes the object of the proposed study. A total of 48 fusion formulas (6 for combining static features and, for each of them, 8 to combine static to dynamic features) are investigated. The performances of the obtained maps are evaluated on a public database organized at IRCCyN, by computing two objective metrics: the Kullback-Leibler divergence and the area under curve.
2D discontinuous piecewise linear map: Emergence of fashion cycles.
Gardini, L; Sushko, I; Matsuyama, K
2018-05-01
We consider a discrete-time version of the continuous-time fashion cycle model introduced in Matsuyama, 1992. Its dynamics are defined by a 2D discontinuous piecewise linear map depending on three parameters. In the parameter space of the map periodicity, regions associated with attracting cycles of different periods are organized in the period adding and period incrementing bifurcation structures. The boundaries of all the periodicity regions related to border collision bifurcations are obtained analytically in explicit form. We show the existence of several partially overlapping period incrementing structures, that is, a novelty for the considered class of maps. Moreover, we show that if the time-delay in the discrete time formulation of the model shrinks to zero, the number of period incrementing structures tends to infinity and the dynamics of the discrete time fashion cycle model converges to those of continuous-time fashion cycle model.
Varieties of quantity estimation in children.
Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco
2015-06-01
In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3 children were asked to map continuous quantities, discrete nonsymbolic quantities (numerosities), and symbolic (Arabic) numbers onto a visual line. Numerical quantity was matched for the symbolic and discrete nonsymbolic conditions, whereas cumulative surface area was matched for the continuous and discrete quantity conditions. Crucially, in the discrete condition children's estimation could rely either on the cumulative area or numerosity. All children showed a linear mapping for continuous quantities, whereas a developmental shift from a logarithmic to a linear mapping was observed for both nonsymbolic and symbolic numerical quantities. Analyses on individual estimates suggested the presence of two distinct strategies in estimating discrete nonsymbolic quantities: one based on numerosity and the other based on spatial extent. In Experiment 2, a non-spatial continuous quantity (shades of gray) and new discrete nonsymbolic conditions were added to the set used in Experiment 1. Results confirmed the linear patterns for the continuous tasks, as well as the presence of a subset of children relying on numerosity for the discrete nonsymbolic numerosity conditions despite the availability of continuous visual cues. Overall, our findings demonstrate that estimation of numerical and non-numerical quantities is based on different processing strategies and follow different developmental trajectories. (c) 2015 APA, all rights reserved).
Neukermans, Griet; Ruddick, Kevin; Bernard, Emilien; Ramon, Didier; Nechad, Bouchra; Deschamps, Pierre-Yves
2009-08-03
Geostationary ocean colour sensors have not yet been launched into space, but are under consideration by a number of space agencies. This study provides a proof of concept for mapping of Total Suspended Matter (TSM) in turbid coastal waters from geostationary platforms with the existing SEVIRI (Spinning Enhanced Visible and InfraRed Imager) meteorological sensor on the METEOSAT Second Generation platform. Data are available in near real time every 15 minutes. SEVIRI lacks sufficient bands for chlorophyll remote sensing but its spectral resolution is sufficient for quantification of Total Suspended Matter (TSM) in turbid waters, using a single broad red band, combined with a suitable near infrared band. A test data set for mapping of TSM in the Southern North Sea was obtained covering 35 consecutive days from June 28 until July 31 2006. Atmospheric correction of SEVIRI images includes corrections for Rayleigh and aerosol scattering, absorption by atmospheric gases and atmospheric transmittances. The aerosol correction uses assumptions on the ratio of marine reflectances and aerosol reflectances in the red and near-infrared bands. A single band TSM retrieval algorithm, calibrated by non-linear regression of seaborne measurements of TSM and marine reflectance was applied. The effect of the above assumptions on the uncertainty of the marine reflectance and TSM products was analysed. Results show that (1) mapping of TSM in the Southern North Sea is feasible with SEVIRI for turbid waters, though with considerable uncertainties in clearer waters, (2) TSM maps are well correlated with TSM maps obtained from MODIS AQUA and (3) during cloud-free days, high frequency dynamics of TSM are detected.
Wood transportation systems-a spin-off of a computerized information and mapping technique
William W. Phillips; Thomas J. Corcoran
1978-01-01
A computerized mapping system originally developed for planning the control of the spruce budworm in Maine has been extended into a tool for planning road net-work development and optimizing transportation costs. A budgetary process and a mathematical linear programming routine are used interactively with the mapping and information retrieval capabilities of the system...
PowerPoint and Concept Maps: A Great Double Act
ERIC Educational Resources Information Center
Simon, Jon
2015-01-01
This article explores how concept maps can provide a useful addition to PowerPoint slides to convey interconnections of knowledge and help students see how knowledge is often non-linear. While most accounting educators are familiar with PowerPoint, they are likely to be less familiar with concept maps and this article shows how the tool can be…
Granger-causality maps of diffusion processes.
Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A
2016-02-01
Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
Characterization of intermittency in renewal processes: Application to earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji
2010-03-15
We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework to understand a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalog. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variablesmore » but that the conditional probability distribution functions in the tail obey the Weibull distribution.« less
2012-01-01
Background Most modern citrus cultivars have an interspecific origin. As a foundational step towards deciphering the interspecific genome structures, a reference whole genome sequence was produced by the International Citrus Genome Consortium from a haploid derived from Clementine mandarin. The availability of a saturated genetic map of Clementine was identified as an essential prerequisite to assist the whole genome sequence assembly. Clementine is believed to be a ‘Mediterranean’ mandarin × sweet orange hybrid, and sweet orange likely arose from interspecific hybridizations between mandarin and pummelo gene pools. The primary goals of the present study were to establish a Clementine reference map using codominant markers, and to perform comparative mapping of pummelo, sweet orange, and Clementine. Results Five parental genetic maps were established from three segregating populations, which were genotyped with Single Nucleotide Polymorphism (SNP), Simple Sequence Repeats (SSR) and Insertion-Deletion (Indel) markers. An initial medium density reference map (961 markers for 1084.1 cM) of the Clementine was established by combining male and female Clementine segregation data. This Clementine map was compared with two pummelo maps and a sweet orange map. The linear order of markers was highly conserved in the different species. However, significant differences in map size were observed, which suggests a variation in the recombination rates. Skewed segregations were much higher in the male than female Clementine mapping data. The mapping data confirmed that Clementine arose from hybridization between ‘Mediterranean’ mandarin and sweet orange. The results identified nine recombination break points for the sweet orange gamete that contributed to the Clementine genome. Conclusions A reference genetic map of citrus, used to facilitate the chromosome assembly of the first citrus reference genome sequence, was established. The high conservation of marker order observed at the interspecific level should allow reasonable inferences of most citrus genome sequences by mapping next-generation sequencing (NGS) data in the reference genome sequence. The genome of the haploid Clementine used to establish the citrus reference genome sequence appears to have been inherited primarily from the ‘Mediterranean’ mandarin. The high frequency of skewed allelic segregations in the male Clementine data underline the probable extent of deviation from Mendelian segregation for characters controlled by heterozygous loci in male parents. PMID:23126659
NASA Astrophysics Data System (ADS)
Verdoodt, Ann; Baert, Geert; Van Ranst, Eric
2014-05-01
Central African soil resources are characterised by a large variability, ranging from stony, shallow or sandy soils with poor life-sustaining capabilities to highly weathered soils that recycle and support large amounts of biomass. Socio-economic drivers within this largely rural region foster inappropriate land use and management, threaten soil quality and finally culminate into a declining soil productivity and increasing food insecurity. For the development of sustainable land use strategies targeting development planning and natural hazard mitigation, decision makers often rely on legacy soil maps and soil profile databases. Recent development cooperation financed projects led to the design of soil information systems for Rwanda, D.R. Congo, and (ongoing) Burundi. A major challenge is to exploit these existing soil databases and convert them into soil inference systems through an optimal combination of digital soil mapping techniques, land evaluation tools, and biogeochemical models. This presentation aims at (1) highlighting some key characteristics of typical Central African soils, (2) assessing the positional, geographic and semantic quality of the soil information systems, and (3) revealing its potential impacts on the use of these datasets for thematic mapping of soil ecosystem services (e.g. organic carbon storage, pH buffering capacity). Soil map quality is assessed considering positional and semantic quality, as well as geographic completeness. Descriptive statistics, decision tree classification and linear regression techniques are used to mine the soil profile databases. Geo-matching as well as class-matching approaches are considered when developing thematic maps. Variability in inherent as well as dynamic soil properties within the soil taxonomic units is highlighted. It is hypothesized that within-unit variation in soil properties highly affects the use and interpretation of thematic maps for ecosystem services mapping. Results will mainly be based on analyses done in Rwanda, but can be complemented with ongoing research results or prospects for Burundi.
Neural networks for satellite remote sensing and robotic sensor interpretation
NASA Astrophysics Data System (ADS)
Martens, Siegfried
Remote sensing of forests and robotic sensor fusion can be viewed, in part, as supervised learning problems, mapping from sensory input to perceptual output. This dissertation develops ARTMAP neural networks for real-time category learning, pattern recognition, and prediction tailored to remote sensing and robotics applications. Three studies are presented. The first two use ARTMAP to create maps from remotely sensed data, while the third uses an ARTMAP system for sensor fusion on a mobile robot. The first study uses ARTMAP to predict vegetation mixtures in the Plumas National Forest based on spectral data from the Landsat Thematic Mapper satellite. While most previous ARTMAP systems have predicted discrete output classes, this project develops new capabilities for multi-valued prediction. On the mixture prediction task, the new network is shown to perform better than maximum likelihood and linear mixture models. The second remote sensing study uses an ARTMAP classification system to evaluate the relative importance of spectral and terrain data for map-making. This project has produced a large-scale map of remotely sensed vegetation in the Sierra National Forest. Network predictions are validated with ground truth data, and maps produced using the ARTMAP system are compared to a map produced by human experts. The ARTMAP Sierra map was generated in an afternoon, while the labor intensive expert method required nearly a year to perform the same task. The robotics research uses an ARTMAP system to integrate visual information and ultrasonic sensory information on a B14 mobile robot. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. ARTMAP effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.
McCallister, Andrew; Zhang, Le; Burant, Alex; Katz, Laurence; Branca, Rosa Tamara
2017-11-01
To assess the spatial correlation between MRI and 18F-fludeoxyglucose positron emission tomography (FDG-PET) maps of human brown adipose tissue (BAT) and to measure differences in fat fraction (FF) between glucose avid and non-avid regions of the supraclavicular fat depot using a hybrid FDG-PET/MR scanner. In 16 healthy volunteers, mean age of 30 and body mass index of 26, FF, R2*, and FDG uptake maps were acquired simultaneously using a hybrid PET/MR system while employing an individualized cooling protocol to maximally stimulate BAT. Fourteen of the 16 volunteers reported BAT-positive FDG-PET scans. MR FF maps of BAT correlate well with combined FDG-PET/MR maps of BAT only in subjects with intense glucose uptake. The results indicate that the extent of the spatial correlation positively correlates with maximum FDG uptake in the supraclavicular fat depot. No consistent, significant differences were found in FF or R2* between FDG avid and non-avid supraclavicular fat regions. In a few FDG-positive subjects, a small but significant linear decrease in BAT FF was observed during BAT stimulation. MR FF, when used in conjunction with FDG uptake maps, can be seen as a valuable, radiation-free alternative to CT and can be used to measure tissue hydration and lipid consumption in some subjects. Magn Reson Med 78:1922-1932, 2017. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Optimized multiple linear mappings for single image super-resolution
NASA Astrophysics Data System (ADS)
Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo
2017-12-01
Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of m-nearest neighbors in the training set. Thorough experimental results carried on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harpool, K; De La Fuente Herman, T; Ahmad, S
Purpose: To evaluate the performance of a two-dimensional (2D) array-diode- detector for geometric and dosimetric quality assurance (QA) tests of high-dose-rate (HDR) brachytherapy with an Ir-192-source. Methods: A phantom setup was designed that encapsulated a two-dimensional (2D) array-diode-detector (MapCheck2) and a catheter for the HDR brachytherapy Ir-192 source. This setup was used to perform both geometric and dosimetric quality assurance for the HDR-Ir192 source. The geometric tests included: (a) measurement of the position of the source and (b) spacing between different dwell positions. The dosimteric tests include: (a) linearity of output with time, (b) end effect and (c) relative dosemore » verification. The 2D-dose distribution measured with MapCheck2 was used to perform the previous tests. The results of MapCheck2 were compared with the corresponding quality assurance testes performed with Gafchromic-film and well-ionization-chamber. Results: The position of the source and the spacing between different dwell-positions were reproducible within 1 mm accuracy by measuring the position of maximal dose using MapCheck2 in contrast to the film which showed a blurred image of the dwell positions due to limited film sensitivity to irradiation. The linearity of the dose with dwell times measured from MapCheck2 was superior to the linearity measured with ionization chamber due to higher signal-to-noise ratio of the diode readings. MapCheck2 provided more accurate measurement of the end effect with uncertainty < 1.5% in comparison with the ionization chamber uncertainty of 3%. Although MapCheck2 did not provide absolute calibration dosimeter for the activity of the source, it provided accurate tool for relative dose verification in HDR-brachytherapy. Conclusion: The 2D-array-diode-detector provides a practical, compact and accurate tool to perform quality assurance for HDR-brachytherapy with an Ir-192 source. The diodes in MapCheck2 have high radiation sensitivity and linearity that is superior to Gafchromic-films and ionization chamber used for geometric and dosimetric QA in HDR-brachytherapy, respectively.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ureba, A.; Salguero, F. J.; Barbeiro, A. R.
Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through aMATLAB interface, which is based on the sequencing of a novel map, called “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reducemore » the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check CARMEN system showed a high agreement with the experimental measurements. Conclusions: A Monte Carlo treatment planning model exclusively based on maps performed from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.« less
NASA Astrophysics Data System (ADS)
Mohanty, M. P.; Karmakar, S.; Ghosh, S.
2017-12-01
Many countries across the Globe are victims of floods. To monitor them, various sophisticated algorithms and flood models are used by the scientific community. However, there still lies a gap to efficiently mapping flood risk. The limitations being: (i) scarcity of extensive data inputs required for precise flood modeling, (ii) fizzling performance of models in large and complex terrains (iii) high computational cost and time, and (iv) inexpertise in handling model simulations by civic bodies. These factors trigger the necessity of incorporating uncomplicated and inexpensive, yet precise approaches to identify areas at different levels of flood risk. The present study addresses this issue by utilizing various easily available, low cost data in a GIS environment for a large flood prone and data poor region. A set of geomorphic indicators of Digital Elevation Model (DEM) are analysed through linear binary classification, and are used to identify the flood hazard. The performance of these indicators is then investigated using receiver operating characteristics (ROC) curve, whereas the calibration and validation of the derived flood maps are accomplished through a comparison with dynamically coupled 1-D 2-D flood model outputs. A high degree of similarity on flood inundation proves the reliability of the proposed approach in identifying flood hazard. On the other hand, an extensive list of socio-economic indicators is selected to represent the flood vulnerability at a very finer forward sortation level using multivariate Data Envelopment Analysis (DEA). A set of bivariate flood risk maps is derived combining the flood hazard and socio-economic vulnerability maps. Given the acute problem of floods in developing countries, the proposed methodology which may be characterized by low computational cost, lesser data requirement and limited flood modeling complexity may facilitate local authorities and planners for deriving effective flood management strategies.
CMB anisotropies at all orders: the non-linear Sachs-Wolfe formula
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roldan, Omar, E-mail: oaroldan@if.ufrj.br
2017-08-01
We obtain the non-linear generalization of the Sachs-Wolfe + integrated Sachs-Wolfe (ISW) formula describing the CMB temperature anisotropies. Our formula is valid at all orders in perturbation theory, is also valid in all gauges and includes scalar, vector and tensor modes. A direct consequence of our results is that the maps of the logarithmic temperature anisotropies are much cleaner than the usual CMB maps, because they automatically remove many secondary anisotropies. This can for instance, facilitate the search for primordial non-Gaussianity in future works. It also disentangles the non-linear ISW from other effects. Finally, we provide a method which canmore » iteratively be used to obtain the lensing solution at the desired order.« less
Augmented paper maps: Exploring the design space of a mixed reality system
NASA Astrophysics Data System (ADS)
Paelke, Volker; Sester, Monika
Paper maps and mobile electronic devices have complementary strengths and shortcomings in outdoor use. In many scenarios, like small craft sailing or cross-country trekking, a complete replacement of maps is neither useful nor desirable. Paper maps are fail-safe, relatively cheap, offer superior resolution and provide large scale overview. In uses like open-water sailing it is therefore mandatory to carry adequate maps/charts. GPS based mobile devices, on the other hand, offer useful features like automatic positioning and plotting, real-time information update and dynamic adaptation to user requirements. While paper maps are now commonly used in combination with mobile GPS devices, there is no meaningful integration between the two, and the combined use leads to a number of interaction problems and potential safety issues. In this paper we explore the design space of augmented paper maps in which maps are augmented with additional functionality through a mobile device to achieve a meaningful integration between device and map that combines their respective strengths.
MIDAS: Regionally linear multivariate discriminative statistical mapping.
Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos
2018-07-01
Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Arinilhaq; Widita, R.
2016-03-01
Diagnosis of macular degeneration using a Stratus OCT with a fast macular thickness map (FMTM) method produced six B-scan images of macula from different angles. The images were converted into a retinal thickness chart to be evaluated by normal distribution percentile of data so that it can be classified as normal thickness of macula or as experiencing abnormality (e.g. thickening and thinning). Unfortunately, the diagnostic images only represent the retinal thickness in several areas of the macular region. Thus, this study is aims to obtain the entire retinal thickness in the macula area from Status OCT's output images. Basically, the volumetric image is obtained by combining each of the six images. Reconstruction consists of a series of processes such as pre-processing, segmentation, and interpolation. Linear interpolation techniques are used to fill the empty pixels in reconstruction matrix. Based on the results, this method is able to provide retinal thickness maps on the macula surface and the macula 3D image. Retinal thickness map can display the macula area which experienced abnormalities. The macula 3D image can show the layers of tissue in the macula that is abnormal. The system built cannot replace ophthalmologist in decision making in term of diagnosis.
ProteinShader: illustrative rendering of macromolecules
Weber, Joseph R
2009-01-01
Background Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space filling or balls and sticks style models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings. Results The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free platform-independent open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline. Conclusion By programming to the graphics processor unit, ProteinShader is able to produce high quality images and illustrative rendering effects in real-time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimum distortion of the images. PMID:19331660
Self-Powered Temperature-Mapping Sensors Based on Thermo-Magneto-Electric Generator.
Chun, Jinsung; Kishore, Ravi Anant; Kumar, Prashant; Kang, Min-Gyu; Kang, Han Byul; Sanghadasa, Mohan; Priya, Shashank
2018-04-04
We demonstrate a thermo-magneto-electric generator (TMEG) based on second-order phase transition of soft magnetic materials that provides a promising pathway for scavenging low-grade heat. It takes advantage of the cyclic magnetic forces of attraction and repulsion arising through ferromagnetic-to-paramagnetic phase transition to create mechanical vibrations that are converted into electricity through piezoelectric benders. To enhance the mechanical vibration frequency and thereby the output power of the TMEG, we utilize the nonlinear behavior of piezoelectric cantilevers and enhanced thermal transport through silver (Ag) nanoparticles (NPs) applied on the surface of a soft magnet. This results in large enhancement of the oscillation frequency reaching up to 9 Hz (300% higher compared with that of the prior literature). Optimization of the piezoelectric beam and Ag NP distribution resulted in the realization of nonlinear TMEGs that can generate a high output power of 80 μW across the load resistance of 0.91 MΩ, which is 2200% higher compared with that of the linear TMEG. Using a nonlinear TMEG, we fabricated and evaluated self-powered temperature-mapping sensors for monitoring the thermal variations across the surface. Combined, our results demonstrate that nonlinear TMEGs can provide additional functionality including temperature monitoring, thermal mapping, and powering sensor nodes.
NASA Astrophysics Data System (ADS)
Castillo, Jose Alan A.; Apan, Armando A.; Maraseni, Tek N.; Salmo, Severino G.
2017-12-01
The recent launch of the Sentinel-1 (SAR) and Sentinel-2 (multispectral) missions offers a new opportunity for land-based biomass mapping and monitoring especially in the tropics where deforestation is highest. Yet, unlike in agriculture and inland land uses, the use of Sentinel imagery has not been evaluated for biomass retrieval in mangrove forest and the non-forest land uses that replaced mangroves. In this study, we evaluated the ability of Sentinel imagery for the retrieval and predictive mapping of above-ground biomass of mangroves and their replacement land uses. We used Sentinel SAR and multispectral imagery to develop biomass prediction models through the conventional linear regression and novel Machine Learning algorithms. We developed models each from SAR raw polarisation backscatter data, multispectral bands, vegetation indices, and canopy biophysical variables. The results show that the model based on biophysical variable Leaf Area Index (LAI) derived from Sentinel-2 was more accurate in predicting the overall above-ground biomass. In contrast, the model which utilised optical bands had the lowest accuracy. However, the SAR-based model was more accurate in predicting the biomass in the usually deficient to low vegetation cover non-forest replacement land uses such as abandoned aquaculture pond, cleared mangrove and abandoned salt pond. These models had 0.82-0.83 correlation/agreement of observed and predicted value, and root mean square error of 27.8-28.5 Mg ha-1. Among the Sentinel-2 multispectral bands, the red and red edge bands (bands 4, 5 and 7), combined with elevation data, were the best variable set combination for biomass prediction. The red edge-based Inverted Red-Edge Chlorophyll Index had the highest prediction accuracy among the vegetation indices. Overall, Sentinel-1 SAR and Sentinel-2 multispectral imagery can provide satisfactory results in the retrieval and predictive mapping of the above-ground biomass of mangroves and the replacement non-forest land uses, especially with the inclusion of elevation data. The study demonstrates encouraging results in biomass mapping of mangroves and other coastal land uses in the tropics using the freely accessible and relatively high-resolution Sentinel imagery.
New techniques on oil spill modelling applied in the Eastern Mediterranean sea
NASA Astrophysics Data System (ADS)
Zodiatis, George; Kokinou, Eleni; Alves, Tiago; Lardner, Robin
2016-04-01
Small or large oil spills resulting from accidents on oil and gas platforms or due to the maritime traffic comprise a major environmental threat for all marine and coastal systems, and they are responsible for huge economic losses concerning the human infrastructures and the tourism. This work aims at presenting the integration of oil-spill model, bathymetric, meteorological, oceanographic, geomorphological and geological data to assess the impact of oil spills in maritime regions such as bays, as well as in the open sea, carried out in the Eastern Mediterranean Sea within the frame of NEREIDs, MEDESS-4MS and RAOP-Med EU projects. The MEDSLIK oil spill predictions are successfully combined with bathymetric analyses, the shoreline susceptibility and hazard mapping to predict the oil slick trajectories and the extend of the coastal areas affected. Based on MEDSLIK results, oil spill spreading and dispersion scenarios are produced both for non-mitigated and mitigated oil spills. MEDSLIK model considers three response combating methods of floating oil spills: a) mechanical recovery using skimmers or similar mechanisms; b) destruction by fire, c) use of dispersants or other bio-chemical means and deployment of booms. Shoreline susceptibility map can be compiled for the study areas based on the Environmental Susceptibility Index. The ESI classification considers a range of values between 1 and 9, with level 1 (ESI 1) representing areas of low susceptibility, impermeable to oil spilt during accidents, such as linear shorelines with rocky cliffs. In contrast, ESI 9 shores are highly vulnerable, and often coincide with natural reserves and special protected areas. Additionally, hazard maps of the maritime and coastal areas, possibly exposed to the danger on an oil spill, evaluate and categorize the hazard in levels from low to very high. This is important because a) Prior to an oil spill accident, hazard and shoreline susceptibility maps are made available to design preparedness and prevention plans in an effective way, b) After an oil spill accident, oil spill predictions can be combined with hazard maps to provide information on the oil spill dispersion and their impacts. This way, prevention plans can be directly modified at any time after the accident.
Costet, Alexandre; Wan, Elaine; Bunting, Ethan; Grondin, Julien; Garan, Hasan; Konofagou, Elisa
2016-01-01
Characterization and mapping of arrhythmias is currently performed through invasive insertion and manipulation of cardiac catheters. Electromechanical wave imaging (EWI) is a non-invasive ultrasound-based imaging technique, which tracks the electromechanical activation that immediately follows electrical activation. Electrical and electromechanical activations were previously found to be linearly correlated in the left ventricle, but the relationship has not yet been investigated in the three other chambers of the heart. The objective of this study was to investigate the relationship between electrical and electromechanical activations and validate EWI in all four chambers of the heart with conventional 3D electroanatomical mapping. Six (n = 6) normal adult canines were used in this study. The electrical activation sequence was mapped in all four chambers of the heart, both endocardially and epicardially using the St Jude's EnSite 3D mapping system (St. Jude Medical, Secaucus, NJ). EWI acquisitions were performed in all four chambers during normal sinus rhythm, and during pacing in the left ventricle. Isochrones of the electromechanical activation were generated from standard echocardiographic imaging views. Electrical and electromechanical activation maps were co-registered and compared, and electrical and electromechanical activation times were plotted against each other and linear regression was performed for each pair of activation maps. Electromechanical and electrical activations were found to be directly correlated with slopes of the correlation ranging from 0.77 to 1.83, electromechanical delays between 9 and 58 ms and R2 values from 0.71 to 0.92. The linear correlation between electrical and electromechanical activations and the agreement between the activation maps indicate that the electromechanical activation follows the pattern of propagation of the electrical activation. This suggests that EWI may be used as a novel non-invasive method to accurately characterize and localize sources of arrhythmias. PMID:27782003
Costet, Alexandre; Wan, Elaine; Bunting, Ethan; Grondin, Julien; Garan, Hasan; Konofagou, Elisa
2016-11-21
Characterization and mapping of arrhythmias is currently performed through invasive insertion and manipulation of cardiac catheters. Electromechanical wave imaging (EWI) is a non-invasive ultrasound-based imaging technique, which tracks the electromechanical activation that immediately follows electrical activation. Electrical and electromechanical activations were previously found to be linearly correlated in the left ventricle, but the relationship has not yet been investigated in the three other chambers of the heart. The objective of this study was to investigate the relationship between electrical and electromechanical activations and validate EWI in all four chambers of the heart with conventional 3D electroanatomical mapping. Six (n = 6) normal adult canines were used in this study. The electrical activation sequence was mapped in all four chambers of the heart, both endocardially and epicardially using the St Jude's EnSite 3D mapping system (St. Jude Medical, Secaucus, NJ). EWI acquisitions were performed in all four chambers during normal sinus rhythm, and during pacing in the left ventricle. Isochrones of the electromechanical activation were generated from standard echocardiographic imaging views. Electrical and electromechanical activation maps were co-registered and compared, and electrical and electromechanical activation times were plotted against each other and linear regression was performed for each pair of activation maps. Electromechanical and electrical activations were found to be directly correlated with slopes of the correlation ranging from 0.77 to 1.83, electromechanical delays between 9 and 58 ms and R 2 values from 0.71 to 0.92. The linear correlation between electrical and electromechanical activations and the agreement between the activation maps indicate that the electromechanical activation follows the pattern of propagation of the electrical activation. This suggests that EWI may be used as a novel non-invasive method to accurately characterize and localize sources of arrhythmias.
Cosmological N -body simulations with generic hot dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk
2017-10-01
We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N -body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.
Cosmological N-body simulations with generic hot dark matter
NASA Astrophysics Data System (ADS)
Brandbyge, Jacob; Hannestad, Steen
2017-10-01
We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.
Thompson, E.M.; Wald, D.J.
2012-01-01
Despite obvious limitations as a proxy for site amplification, the use of time-averaged shear-wave velocity over the top 30 m (VS30) remains widely practiced, most notably through its use as an explanatory variable in ground motion prediction equations (and thus hazard maps and ShakeMaps, among other applications). As such, we are developing an improved strategy for producing VS30 maps given the common observational constraints. Using the abundant VS30 measurements in Taiwan, we compare alternative mapping methods that combine topographic slope, surface geology, and spatial correlation structure. The different VS30 mapping algorithms are distinguished by the way that slope and geology are combined to define a spatial model of VS30. We consider the globally applicable slope-only model as a baseline to which we compare two methods of combining both slope and geology. For both hybrid approaches, we model spatial correlation structure of the residuals using the kriging-with-a-trend technique, which brings the map into closer agreement with the observations. Cross validation indicates that we can reduce the uncertainty of the VS30 map by up to 16% relative to the slope-only approach.
Mind Maps as a Lifelong Learning Tool
ERIC Educational Resources Information Center
Erdem, Aliye
2017-01-01
Mind map, which was developed by Tony Buzan as a note-taking technique, is an application which has the power of uncovering the thoughts which the brain has about a subject from different viewpoints and which activate the right and left lobes of the brain together as an alternative to linear thought. It is known that mind maps have benefits such…
Mind Mapping in Executive Education: Applications and Outcomes.
ERIC Educational Resources Information Center
Mento, Anthony J.; Martinelli, Patrick; Jones, Raymond M.
1999-01-01
Illustrates the technique of mind mapping as applied in executive education and management development. Indicates that most of the 70 students surveyed appreciated its use for recall and creative thinking, although some prefer a top-to-bottom, linear outline approach. (SK)
Polynomial approximation of Poincare maps for Hamiltonian system
NASA Technical Reports Server (NTRS)
Froeschle, Claude; Petit, Jean-Marc
1992-01-01
Different methods are proposed and tested for transforming a non-linear differential system, and more particularly a Hamiltonian one, into a map without integrating the whole orbit as in the well-known Poincare return map technique. We construct piecewise polynomial maps by coarse-graining the phase-space surface of section into parallelograms and using either only values of the Poincare maps at the vertices or also the gradient information at the nearest neighbors to define a polynomial approximation within each cell. The numerical experiments are in good agreement with both the real symplectic and Poincare maps.
NASA Astrophysics Data System (ADS)
Mitra, S.; Dey, S.; Siddartha, G.; Bhattacharya, S.
2016-12-01
We estimate 1-dimensional path average fundamental mode group velocity dispersion curves from regional Rayleigh and Love waves sampling the Indian subcontinent. The path average measurements are combined through a tomographic inversion to obtain 2-dimensional group velocity variation maps between periods of 10 and 80 s. The region of study is parametrised as triangular grids with 1° sides for the tomographic inversion. Rayleigh and Love wave dispersion curves from each node point is subsequently extracted and jointly inverted to obtain a radially anisotropic shear wave velocity model through global optimisation using Genetic Algorithm. The parametrization of the model space is done using three crustal layers and four mantle layers over a half-space with varying VpH , VsV and VsH. The anisotropic parameter (η) is calculated from empirical relations and the density of the layers are taken from PREM. Misfit for the model is calculated as a sum of error-weighted average dispersion curves. The 1-dimensional anisotropic shear wave velocity at each node point is combined using linear interpolation to obtain 3-dimensional structure beneath the region. Synthetic tests are performed to estimate the resolution of the tomographic maps which will be presented with our results. We envision to extend this to a larger dataset in near future to obtain high resolution anisotrpic shear wave velocity structure beneath India, Himalaya and Tibet.
Synchrotron X-ray fluorescence spectroscopy of salts in natural sea ice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obbard, Rachel W.; Lieb-Lappen, Ross M.; Nordick, Katherine V.
We describe the use of synchrotron-based X-ray fluorescence spectroscopy to examine the microstructural location of specific elements, primarily salts, in sea ice. This work was part of an investigation of the location of bromine in the sea ice-snowpack-blowing snow system, where it plays a part in the heterogeneous chemistry that contributes to tropospheric ozone depletion episodes. We analyzed samples at beamline 13-ID-E of the Advanced Photon Source at Argonne National Laboratory. Using an 18 keV incident energy beam, we produced elemental maps of salts for sea ice samples from the Ross Sea, Antarctica. The distribution of salts in sea icemore » depends on ice type. In our columnar ice samples, Br was located in parallel lines spaced roughly 0.5 mm apart, corresponding to the spacing of lamellae in the skeletal region during initial ice growth. The maps revealed concentrations of Br in linear features in samples from all but the topmost and bottommost depths. For those samples, the maps revealed rounded features. Calibration of the Br elemental maps showed bulk concentrations to be 5–10 g/m 3, with concentrations ten times larger in the linear features. Through comparison with horizontal thin sections, we could verify that these linear features were brine sheets or layers.« less
Synchrotron X-ray fluorescence spectroscopy of salts in natural sea ice
Obbard, Rachel W.; Lieb-Lappen, Ross M.; Nordick, Katherine V.; ...
2016-10-23
We describe the use of synchrotron-based X-ray fluorescence spectroscopy to examine the microstructural location of specific elements, primarily salts, in sea ice. This work was part of an investigation of the location of bromine in the sea ice-snowpack-blowing snow system, where it plays a part in the heterogeneous chemistry that contributes to tropospheric ozone depletion episodes. We analyzed samples at beamline 13-ID-E of the Advanced Photon Source at Argonne National Laboratory. Using an 18 keV incident energy beam, we produced elemental maps of salts for sea ice samples from the Ross Sea, Antarctica. The distribution of salts in sea icemore » depends on ice type. In our columnar ice samples, Br was located in parallel lines spaced roughly 0.5 mm apart, corresponding to the spacing of lamellae in the skeletal region during initial ice growth. The maps revealed concentrations of Br in linear features in samples from all but the topmost and bottommost depths. For those samples, the maps revealed rounded features. Calibration of the Br elemental maps showed bulk concentrations to be 5–10 g/m 3, with concentrations ten times larger in the linear features. Through comparison with horizontal thin sections, we could verify that these linear features were brine sheets or layers.« less
NASA Technical Reports Server (NTRS)
Erickson, J. M.; Street, J. S. (Principal Investigator); Munsell, C. J.; Obrien, D. E.
1975-01-01
The author has identified the following significant results. ERTS-1 imagery in a variety of formats was used to locate linear, tonal, and hazy features and to relate them to areas of hydrocarbon production in the Williston Basin of North Dakota, eastern Montana, and northern South Dakota. Derivative maps of rectilinear, curvilinear, tonal, and hazy features were made using standard laboratory techniques. Mapping of rectilinears on both bands 5 and 7 over the entire region indicated the presence of a northeast-southwest and a northwest-southeast regional trend which is indicative of the bedrock fracture pattern in the basin. Curved lines generally bound areas of unique tone, maps of tonal patterns repeat many of the boundaries seen on curvilinear maps. Tones were best analyzed on spring and fall imagery in the Williston Basin. It is postulated that hazy areas are caused by atmospheric phenomena. The ability to use ERTS imagery as an exploration tool was examined where petroleum and gas are presently produced (Bottineau Field, Nesson and Antelope anticlines, Redwing Creek, and Cedar Creek anticline). It is determined that some tonal and linear features coincide with location of present production in Redwing and Cedar Creeks. In the remaining cases, targets could not be sufficiently well defined to justify this method.
Extended shelf life of soy bread using modified atmosphere packaging.
Fernandez, Ursula; Vodovotz, Yael; Courtney, Polly; Pascall, Melvin A
2006-03-01
This study investigated the use of modified atmosphere packaging (MAP) to extend the shelf life of soy bread with and without calcium propionate as a chemical preservative. The bread samples were packaged in pouches made from low-density polyethylene (LDPE) as the control (film 1), high-barrier laminated linear low-density polyethylene (LLDPE)-nylon-ethylene vinyl alcohol-nylon-LLDPE (film 2), and medium-barrier laminated LLDPE-nylon-LLDPE (film 3). The headspace gases used were atmosphere (air) as control, 50% CO2-50% N2, or 20% CO2-80% N2. The shelf life was determined by monitoring mold and yeast (M+Y) and aerobic plate counts (APC) in soy bread samples stored at 21 degrees C +/- 3 degrees C and 38% +/- 2% relative humidity. At 0, 2, 4, 6, 8, 10, and 12 days of storage, soy bread samples were removed, and the M+Y and APC were determined. The preservative, the films, and the headspace gases had significant effects on both the M+Y counts and the APC of soy bread samples. The combination of film 2 in the 50% CO2-50% N2 or 20% CO2-80% N2 headspace gases without calcium propionate as the preservative inhibited the M+Y growth by 6 days and the APC by 4 days. It was thus concluded that MAP using film 2 with either the 50% CO2-50% N2 or 20% CO2-80% N2 was the best combination for shelf-life extension of the soy bread without the need for a chemical preservative. These MAP treatments extended the shelf life by at least 200%.
Vaughn, Nicholas R.; Asner, Gregory P.; Smit, Izak P. J.; Riddel, Edward S.
2015-01-01
Factors controlling savanna woody vegetation structure vary at multiple spatial and temporal scales, and as a consequence, unraveling their combined effects has proven to be a classic challenge in savanna ecology. We used airborne LiDAR (light detection and ranging) to map three-dimensional woody vegetation structure throughout four savanna watersheds, each contrasting in geologic substrate and climate, in Kruger National Park, South Africa. By comparison of the four watersheds, we found that geologic substrate had a stronger effect than climate in determining watershed-scale differences in vegetation structural properties, including cover, height and crown density. Generalized Linear Models were used to assess the spatial distribution of woody vegetation structural properties, including cover, height and crown density, in relation to mapped hydrologic, topographic and fire history traits. For each substrate and climate combination, models incorporating topography, hydrology and fire history explained up to 30% of the remaining variation in woody canopy structure, but inclusion of a spatial autocovariate term further improved model performance. Both crown density and the cover of shorter woody canopies were determined more by unknown factors likely to be changing on smaller spatial scales, such as soil texture, herbivore abundance or fire behavior, than by our mapped regional-scale changes in topography and hydrology. We also detected patterns in spatial covariance at distances up to 50–450 m, depending on watershed and structural metric. Our results suggest that large-scale environmental factors play a smaller role than is often attributed to them in determining woody vegetation structure in southern African savannas. This highlights the need for more spatially-explicit, wide-area analyses using high resolution remote sensing techniques. PMID:26660502
Vaughn, Nicholas R; Asner, Gregory P; Smit, Izak P J; Riddel, Edward S
2015-01-01
Factors controlling savanna woody vegetation structure vary at multiple spatial and temporal scales, and as a consequence, unraveling their combined effects has proven to be a classic challenge in savanna ecology. We used airborne LiDAR (light detection and ranging) to map three-dimensional woody vegetation structure throughout four savanna watersheds, each contrasting in geologic substrate and climate, in Kruger National Park, South Africa. By comparison of the four watersheds, we found that geologic substrate had a stronger effect than climate in determining watershed-scale differences in vegetation structural properties, including cover, height and crown density. Generalized Linear Models were used to assess the spatial distribution of woody vegetation structural properties, including cover, height and crown density, in relation to mapped hydrologic, topographic and fire history traits. For each substrate and climate combination, models incorporating topography, hydrology and fire history explained up to 30% of the remaining variation in woody canopy structure, but inclusion of a spatial autocovariate term further improved model performance. Both crown density and the cover of shorter woody canopies were determined more by unknown factors likely to be changing on smaller spatial scales, such as soil texture, herbivore abundance or fire behavior, than by our mapped regional-scale changes in topography and hydrology. We also detected patterns in spatial covariance at distances up to 50-450 m, depending on watershed and structural metric. Our results suggest that large-scale environmental factors play a smaller role than is often attributed to them in determining woody vegetation structure in southern African savannas. This highlights the need for more spatially-explicit, wide-area analyses using high resolution remote sensing techniques.
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Despite surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains a ill-posed inverse problem. A widely used approach to circumvent the illness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach of inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unkown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probabilities calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Monte Carlo Markov chain (MCMC) sampling is shown for a synthetic example and a real case for interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need of computer power is largely reduced. Second, unlike Bayesian MCMC-based approach, marginal pdf, mean, variance or covariance are obtained independently one from each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
Mapping the conformational free energy of aspartic acid in the gas phase and in aqueous solution.
Comitani, Federico; Rossi, Kevin; Ceriotti, Michele; Sanz, M Eugenia; Molteni, Carla
2017-04-14
The conformational free energy landscape of aspartic acid, a proteogenic amino acid involved in a wide variety of biological functions, was investigated as an example of the complexity that multiple rotatable bonds produce even in relatively simple molecules. To efficiently explore such a landscape, this molecule was studied in the neutral and zwitterionic forms, in the gas phase and in water solution, by means of molecular dynamics and the enhanced sampling method metadynamics with classical force-fields. Multi-dimensional free energy landscapes were reduced to bi-dimensional maps through the non-linear dimensionality reduction algorithm sketch-map to identify the energetically stable conformers and their interconnection paths. Quantum chemical calculations were then performed on the minimum free energy structures. Our procedure returned the low energy conformations observed experimentally in the gas phase with rotational spectroscopy [M. E. Sanz et al., Phys. Chem. Chem. Phys. 12, 3573 (2010)]. Moreover, it provided information on higher energy conformers not accessible to experiments and on the conformers in water. The comparison between different force-fields and quantum chemical data highlighted the importance of the underlying potential energy surface to accurately capture energy rankings. The combination of force-field based metadynamics, sketch-map analysis, and quantum chemical calculations was able to produce an exhaustive conformational exploration in a range of significant free energies that complements the experimental data. Similar protocols can be applied to larger peptides with complex conformational landscapes and would greatly benefit from the next generation of accurate force-fields.
Mapping the conformational free energy of aspartic acid in the gas phase and in aqueous solution
NASA Astrophysics Data System (ADS)
Comitani, Federico; Rossi, Kevin; Ceriotti, Michele; Sanz, M. Eugenia; Molteni, Carla
2017-04-01
The conformational free energy landscape of aspartic acid, a proteogenic amino acid involved in a wide variety of biological functions, was investigated as an example of the complexity that multiple rotatable bonds produce even in relatively simple molecules. To efficiently explore such a landscape, this molecule was studied in the neutral and zwitterionic forms, in the gas phase and in water solution, by means of molecular dynamics and the enhanced sampling method metadynamics with classical force-fields. Multi-dimensional free energy landscapes were reduced to bi-dimensional maps through the non-linear dimensionality reduction algorithm sketch-map to identify the energetically stable conformers and their interconnection paths. Quantum chemical calculations were then performed on the minimum free energy structures. Our procedure returned the low energy conformations observed experimentally in the gas phase with rotational spectroscopy [M. E. Sanz et al., Phys. Chem. Chem. Phys. 12, 3573 (2010)]. Moreover, it provided information on higher energy conformers not accessible to experiments and on the conformers in water. The comparison between different force-fields and quantum chemical data highlighted the importance of the underlying potential energy surface to accurately capture energy rankings. The combination of force-field based metadynamics, sketch-map analysis, and quantum chemical calculations was able to produce an exhaustive conformational exploration in a range of significant free energies that complements the experimental data. Similar protocols can be applied to larger peptides with complex conformational landscapes and would greatly benefit from the next generation of accurate force-fields.
High-resolution three-dimensional imaging with compress sensing
NASA Astrophysics Data System (ADS)
Wang, Jingyi; Ke, Jun
2016-10-01
LIDAR three-dimensional imaging technology have been used in many fields, such as military detection. However, LIDAR require extremely fast data acquisition speed. This makes the manufacture of detector array for LIDAR system is very difficult. To solve this problem, we consider using compress sensing which can greatly decrease the data acquisition and relax the requirement of a detection device. To use the compressive sensing idea, a spatial light modulator will be used to modulate the pulsed light source. Then a photodetector is used to receive the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we move the SLM half-pixel each time. Then the position where the modulated light illuminates will changed accordingly. We repeat moving the SLM to four different directions. Then we can get four low-resolution depth maps with different details of the same plane scene. If we use all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the sense. A linear minimum-mean-square error algorithm is used for the reconstruction. By combining compress sensing and multiframe image restoration technology, we reduce the burden on data analyze and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.
Buscema, Massimo; Grossi, Enzo; Montanini, Luisa; Street, Maria E.
2015-01-01
Objectives Intra-uterine growth retardation is often of unknown origin, and is of great interest as a “Fetal Origin of Adult Disease” has been now well recognized. We built a benchmark based upon a previously analysed data set related to Intrauterine Growth Retardation with 46 subjects described by 14 variables, related with the insulin-like growth factor system and pro-inflammatory cytokines, namely interleukin -6 and tumor necrosis factor -α. Design and Methods We used new algorithms for optimal information sorting based on the combination of two neural network algorithms: Auto-contractive Map and Activation and Competition System. Auto-Contractive Map spatializes the relationships among variables or records by constructing a suitable embedding space where ‘closeness’ among variables or records reflects accurately their associations. The Activation and Competition System algorithm instead works as a dynamic non linear associative memory on the weight matrices of other algorithms, and is able to produce a prototypical variable profile of a given target. Results Classical statistical analysis, proved to be unable to distinguish intrauterine growth retardation from appropriate-for-gestational age (AGA) subjects due to the high non-linearity of underlying functions. Auto-contractive map succeeded in clustering and differentiating completely the conditions under study, while Activation and Competition System allowed to develop the profile of variables which discriminated the two conditions under study better than any other previous form of attempt. In particular, Activation and Competition System showed that ppropriateness for gestational age was explained by IGF-2 relative gene expression, and by IGFBP-2 and TNF-α placental contents. IUGR instead was explained by IGF-I, IGFBP-1, IGFBP-2 and IL-6 gene expression in placenta. Conclusion This further analysis provided further insight into the placental key-players of fetal growth within the insulin-like growth factor and cytokine systems. Our previous published analysis could identify only which variables were predictive of fetal growth in general, and identified only some relationships. PMID:26158499
Integrated Quantitative Transcriptome Maps of Human Trisomy 21 Tissues and Cells
Pelleri, Maria Chiara; Cattani, Chiara; Vitale, Lorenza; Antonaros, Francesca; Strippoli, Pierluigi; Locatelli, Chiara; Cocchi, Guido; Piovesan, Allison; Caracausi, Maria
2018-01-01
Down syndrome (DS) is due to the presence of an extra full or partial chromosome 21 (Hsa21). The identification of genes contributing to DS pathogenesis could be the key to any rational therapy of the associated intellectual disability. We aim at generating quantitative transcriptome maps in DS integrating all gene expression profile datasets available for any cell type or tissue, to obtain a complete model of the transcriptome in terms of both expression values for each gene and segmental trend of gene expression along each chromosome. We used the TRAM (Transcriptome Mapper) software for this meta-analysis, comparing transcript expression levels and profiles between DS and normal brain, lymphoblastoid cell lines, blood cells, fibroblasts, thymus and induced pluripotent stem cells, respectively. TRAM combined, normalized, and integrated datasets from different sources and across diverse experimental platforms. The main output was a linear expression value that may be used as a reference for each of up to 37,181 mapped transcripts analyzed, related to both known genes and expression sequence tag (EST) clusters. An independent example in vitro validation of fibroblast transcriptome map data was performed through “Real-Time” reverse transcription polymerase chain reaction showing an excellent correlation coefficient (r = 0.93, p < 0.0001) with data obtained in silico. The availability of linear expression values for each gene allowed the testing of the gene dosage hypothesis of the expected 3:2 DS/normal ratio for Hsa21 as well as other human genes in DS, in addition to listing genes differentially expressed with statistical significance. Although a fraction of Hsa21 genes escapes dosage effects, Hsa21 genes are selectively over-expressed in DS samples compared to genes from other chromosomes, reflecting a decisive role in the pathogenesis of the syndrome. Finally, the analysis of chromosomal segments reveals a high prevalence of Hsa21 over-expressed segments over the other genomic regions, suggesting, in particular, a specific region on Hsa21 that appears to be frequently over-expressed (21q22). Our complete datasets are released as a new framework to investigate transcription in DS for individual genes as well as chromosomal segments in different cell types and tissues. PMID:29740474
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Liqiang; Wu, Di; Li, Yuhua
Purpose : X-ray fluorescence (XRF) is a promising technique with sufficient specificity and sensitivity for identifying and quantifying features in small samples containing high atomic number (Z) materials such as iodine, gadolinium, and gold. In this study, the feasibility of applying XRF to early breast cancer diagnosis and treatment is studied using a novel approach for three-dimensional (3D) x-ray fluorescence mapping (XFM) of gold nanoparticle (GNP)-loaded objects in a physical phantom at the technical level. Methods : All the theoretical analysis and experiments are conducted under the condition of using x-ray pencil beam and a compactly integrated x-ray spectrometer. Themore » penetrability of the fluorescence x-rays from GNPs is first investigated by adopting a combination of BR12 with 70 mm/50 mm in thickness on the excitation/emission path to mimic the possible position of tumor goldin vivo. Then, a physical phantom made of BR12 is designed to translate in 3D space with three precise linear stages and subsequently the step by step XFM scanning is performed. The experimental technique named as background subtraction is applied to isolate the gold fluorescence from each spectrum obtained by the spectrometer. Afterwards, the attenuations of both the incident primary x-ray beam with energies beyond the gold K-edge energy (80.725 keV) and the isolated gold K{sub α} fluorescence x-rays (65.99 –69.80 keV) acquired after background subtraction are well calibrated, and finally the unattenuated K{sub α} fluorescence counts are used to realize mapping reconstruction and to describe the linear relationship between gold fluorescence counts and corresponding concentration of gold solutions. Results : The penetration results show that the goldK{sub α} fluorescence x-rays have sufficient penetrability for this phantom study, and the reconstructed mapping results indicate that both the spatial distribution and relative concentration of GNPs within the designed BR12 phantom can be well identified and quantified. Conclusions : Although the XFM method in this investigation is still studied at the technical level and is not yet practical for routinein vivo mapping tasks with GNPs, the current penetrability measurements and phantom study strongly suggest the feasibility to establish and develop a 3D XFM system.« less
Extraction of basic roadway information for non-state roads in Florida.
DOT National Transportation Integrated Search
2015-06-01
The Florida Department of Transportation (FDOT) has continued to maintain a linear-referenced All-Roads map : that includes both state and non-state local roads. The state portion of the map could be populated with select data : from FDOTs R...
Kwon, Oh-Hun; Park, Hyunjin; Seo, Sang-Won; Na, Duk L.; Lee, Jong-Min
2015-01-01
The mean diffusivity (MD) value has been used to describe microstructural properties in Diffusion Tensor Imaging (DTI) in cortical gray matter (GM). Recently, researchers have applied a cortical surface generated from the T1-weighted volume. When the DTI data are analyzed using the cortical surface, it is important to assign an accurate MD value from the volume space to the vertex of the cortical surface, considering the anatomical correspondence between the DTI and the T1-weighted image. Previous studies usually sampled the MD value using the nearest-neighbor (NN) method or Linear method, even though there are geometric distortions in diffusion-weighted volumes. Here we introduce a Surface Guided Diffusion Mapping (SGDM) method to compensate for such geometric distortions. We compared our SGDM method with results using NN and Linear methods by investigating differences in the sampled MD value. We also projected the tissue classification results of non-diffusion-weighted volumes to the cortical midsurface. The CSF probability values provided by the SGDM method were lower than those produced by the NN and Linear methods. The MD values provided by the NN and Linear methods were significantly greater than those of the SGDM method in regions suffering from geometric distortion. These results indicate that the NN and Linear methods assigned the MD value in the CSF region to the cortical midsurface (GM region). Our results suggest that the SGDM method is an effective way to correct such mapping errors. PMID:26236180
Hodges, Mary H; Soares Magalhães, Ricardo J; Paye, Jusufu; Koroma, Joseph B; Sonnie, Mustapha; Clements, Archie; Zhang, Yaobi
2012-01-01
A national mapping of Schistosoma haematobium was conducted in Sierra Leone before the mass drug administration (MDA) with praziquantel. Together with the separate mapping of S. mansoni and soil-transmitted helminths, the national control programme was able to plan the MDA strategies according to the World Health Organization guidelines for preventive chemotherapy for these diseases. A total of 52 sites/schools were selected according to prior knowledge of S. haematobium endemicity taking into account a good spatial coverage within each district, and a total of 2293 children aged 9-14 years were examined. Spatial analysis showed that S. haematobium is heterogeneously distributed in the country with significant spatial clustering in the central and eastern regions of the country, most prevalent in Bo (24.6% and 8.79 eggs/10 ml), Koinadugu (20.4% and 3.53 eggs/10 ml) and Kono (25.3% and 7.91 eggs/10 ml) districts. By combining this map with the previously reported maps on intestinal schistosomiasis using a simple probabilistic model, the combined schistosomiasis prevalence map highlights the presence of high-risk communities in an extensive area in the northeastern half of the country. By further combining the hookworm prevalence map, the at-risk population of school-age children requiring integrated schistosomiasis/soil-transmitted helminth treatment regimens according to the coendemicity was estimated. The first comprehensive national mapping of urogenital schistosomiasis in Sierra Leone was conducted. Using a new method for calculating the combined prevalence of schistosomiasis using estimates from two separate surveys, we provided a robust coendemicity mapping for overall urogenital and intestinal schistosomiasis. We also produced a coendemicity map of schistosomiasis and hookworm. These coendemicity maps can be used to guide the decision making for MDA strategies in combination with the local knowledge and programme needs.
Automatic analysis and classification of surface electromyography.
Abou-Chadi, F E; Nashar, A; Saad, M
2001-01-01
In this paper, parametric modeling of surface electromyography (EMG) algorithms that facilitates automatic SEMG feature extraction and artificial neural networks (ANN) are combined for providing an integrated system for the automatic analysis and diagnosis of myopathic disorders. Three paradigms of ANN were investigated: the multilayer backpropagation algorithm, the self-organizing feature map algorithm and a probabilistic neural network model. The performance of the three classifiers was compared with that of the old Fisher linear discriminant (FLD) classifiers. The results have shown that the three ANN models give higher performance. The percentage of correct classification reaches 90%. Poorer diagnostic performance was obtained from the FLD classifier. The system presented here indicates that surface EMG, when properly processed, can be used to provide the physician with a diagnostic assist device.
Fabrication of Silicon Backshort Assembly for Waveguide-Coupled Superconducting Detectors
NASA Technical Reports Server (NTRS)
Crowe, E.; Bennett, C. L.; Chuss, D. T.; Denis, K. L.; Eimer, J.; Lourie, N.; Marriage, T.; Moseley, S. H.; Rostem, K.; Stevenson, T. R.;
2012-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) is a ground-based instrument that will measure the polarization of the cosmic microwave background to search for gravitational waves from a posited epoch of inflation early in the universe s history. We are currently developing detectors that address the challenges of this measurement by combining the excellent beam-forming attributes of feedhorns with the low-noise performance of Transition-Edge sensors. These detectors utilize a planar orthomode transducer that maps the horizontal and vertical linear polarized components in a dual-mode waveguide to separate microstrip lines. On-chip filters define the bandpass in each channel, and the signals are terminated in resistors that are thermally coupled to the transition-edge sensors operating at 150 mK.
Studies of the net surface radiative flux from satellite radiances during FIFE
NASA Technical Reports Server (NTRS)
Frouin, Robert
1993-01-01
Studies of the net surface radiative flux from satellite radiances during First ISLSCP Field Experiment (FIFE) are presented. Topics covered include: radiative transfer model validation; calibration of VISSR and AVHRR solar channels; development and refinement of algorithms to estimate downward solar and terrestrial irradiances at the surface, including photosynthetically available radiation (PAR) and surface albedo; verification of these algorithms using in situ measurements; production of maps of shortwave irradiance, surface albedo, and related products; analysis of the temporal variability of shortwave irradiance over the FIFE site; development of a spectroscopy technique to estimate atmospheric total water vapor amount; and study of optimum linear combinations of visible and near-infrared reflectances for estimating the fraction of PAR absorbed by plants.
NASA Astrophysics Data System (ADS)
Leroux, Romain; Chatellier, Ludovic; David, Laurent
2018-01-01
This article is devoted to the estimation of time-resolved particle image velocimetry (TR-PIV) flow fields using a time-resolved point measurements of a voltage signal obtained by hot-film anemometry. A multiple linear regression model is first defined to map the TR-PIV flow fields onto the voltage signal. Due to the high temporal resolution of the signal acquired by the hot-film sensor, the estimates of the TR-PIV flow fields are obtained with a multiple linear regression method called orthonormalized partial least squares regression (OPLSR). Subsequently, this model is incorporated as the observation equation in an ensemble Kalman filter (EnKF) applied on a proper orthogonal decomposition reduced-order model to stabilize it while reducing the effects of the hot-film sensor noise. This method is assessed for the reconstruction of the flow around a NACA0012 airfoil at a Reynolds number of 1000 and an angle of attack of {20}°. Comparisons with multi-time delay-modified linear stochastic estimation show that both the OPLSR and EnKF combined with OPLSR are more accurate as they produce a much lower relative estimation error, and provide a faithful reconstruction of the time evolution of the velocity flow fields.
Multiple imputation of rainfall missing data in the Iberian Mediterranean context
NASA Astrophysics Data System (ADS)
Miró, Juan Javier; Caselles, Vicente; Estrela, María José
2017-11-01
Given the increasing need for complete rainfall data networks, in recent years have been proposed diverse methods for filling gaps in observed precipitation series, progressively more advanced that traditional approaches to overcome the problem. The present study has consisted in validate 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., fill at the same time missing data of multiple incomplete series in a dense network of neighboring stations. These were applied for daily and monthly rainfall in two sectors in the Júcar River Basin Authority (east Iberian Peninsula), which is characterized by a high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to their genetic origin was applied as pre-processing, and a quantile-mapping adjusting as post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods, highlighting that the non-linear PCA (NLPCA) method outperforms considerably the Self Organizing Maps (SOM) method within non-linear approaches. On linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far from NLPCA. Applying EOF filtering as post-processing of NLPCA (hybrid approach) yielded the best results.
Computer-composite mapping for geologists
van Driel, J.N.
1980-01-01
A computer program for overlaying maps has been tested and evaluated as a means for producing geologic derivative maps. Four maps of the Sugar House Quadrangle, Utah, were combined, using the Multi-Scale Data Analysis and Mapping Program, in a single composite map that shows the relative stability of the land surface during earthquakes. Computer-composite mapping can provide geologists with a powerful analytical tool and a flexible graphic display technique. Digitized map units can be shown singly, grouped with different units from the same map, or combined with units from other source maps to produce composite maps. The mapping program permits the user to assign various values to the map units and to specify symbology for the final map. Because of its flexible storage, easy manipulation, and capabilities of graphic output, the composite-mapping technique can readily be applied to mapping projects in sedimentary and crystalline terranes, as well as to maps showing mineral resource potential. ?? 1980 Springer-Verlag New York Inc.
Linear maps preserving maximal deviation and the Jordan structure of quantum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamhalter, Jan
2012-12-15
In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only onemore » numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnar.« less
The influence of contextual reward statistics on risk preference
Rigoli, Francesco; Rutledge, Robb B.; Dayan, Peter; Dolan, Raymond J.
2016-01-01
Decision theories mandate that organisms should adjust their behaviour in the light of the contextual reward statistics. We tested this notion using a gambling choice task involving distinct contexts with different reward distributions. The best fitting model of subjects' behaviour indicated that the subjective values of options depended on several factors, including a baseline gambling propensity, a gambling preference dependent on reward amount, and a contextual reward adaptation factor. Combining this behavioural model with simultaneous functional magnetic resonance imaging we probed neural responses in three key regions linked to reward and value, namely ventral tegmental area/substantia nigra (VTA/SN), ventromedial prefrontal cortex (vmPFC) and ventral striatum (VST). We show that activity in the VTA/SN reflected contextual reward statistics to the extent that context affected behaviour, activity in the vmPFC represented a value difference between chosen and unchosen options while VST responses reflected a non-linear mapping between the actual objective rewards and their subjective value. The findings highlight a multifaceted basis for choice behaviour with distinct mappings between components of this behaviour and value sensitive brain regions. PMID:26707890
Erosion and deterioration of the Isles Dernieres Barrier Island Arc, Louisiana, U.S.A.: 1853 to 1988
McBride, Randolph A.; Penland, Shea; Jaffe, Bruce E.; Williams, S. Jeffress; Sallenger, Asbury H.; Westphal, Karen A.
1989-01-01
Using cartographic and aerial photography data from the years 1853, 1890, 1934, 1956, 1978, 1984, and 1988, shoreline change maps of the Isles Dernieres barrier island arc were constructed. These data were accurately superimposed, using a computer mapping system, which removed projection, datum, scale, and other cartographic inconsistencies. Linear, areal, and perimeter measurements indicate that the Isles Dernieres are suffering rapid rates of coastal erosion, land loss, and breakup. Bayside and gulfside erosion, in combination with sediment shortage and subsidence, have caused the Isles Dernieres to narrow through time. In addition, the core of the barrier island arc does not migrate landward and instead, breaks up in place as a result of inlet breaching and development. This is in contrast to other models of landward barrier island migration during transgression. If these trends continue, the Isles Dernieres will likely evolve into a subaqueous inner-shelf shoal by the early 21st century. Loss of the Isles Dernieres barrier island arc will severely impact the Terrebonne parish estuary, resulting in decreased environmental quality and increased public risk from storms and hurricanes.
Simulating multiprimary LCDs on standard tri-stimulus LC displays
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz; Vonneilich, Katrin; Bonse, Thomas
2008-01-01
Large-scale, direct view TV screens, in particular those based on liquid crystal technology, are beginning to use subpixel structures with more than three subpixels to implement a multi-primary display with up to six primaries. Since their input color space is likely to remain tri-stimulus RGB we first focus on some fundamental constraints. Among them, we elaborate simplified gamut mapping architectures as well as color filter geometry, transparency, and chromaticity coordinates in color space. Based on a 'display centric' RGB color space tetrahedrization combined with linear interpolation we describe a simulation framework which enables optimization for up to 7 primaries. We evaluated the performance through mapping the multi-primary design back onto a RGB LC display gamut without building a prototype multi-primary display. As long as we kept the RGB equivalent output signal within the display gamut we could analyze all desirable multi-primary configurations with regard to colorimetric variance and visually perceived quality. Not only does our simulation tool enable us to verify a novel concept it also demonstrates how carefully one needs to design a multiprimary display for LCD TV applications.
An efficient method for removing point sources from full-sky radio interferometric maps
NASA Astrophysics Data System (ADS)
Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard
2017-12-01
A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
The isotropic radio background revisited
NASA Astrophysics Data System (ADS)
Fornengo, Nicolao; Lineros, Roberto A.; Regis, Marco; Taoso, Marco
2014-04-01
We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky.
Lateral Variations in Geologic Structure and Tectonic Setting from Remote Sensing Data
1983-05-01
bodies. Analogous magnetic anomaly patterns perhaps can be inferred, since regional lithologies are comparable with some volcanic bodies around the...32 14 Geologic map of the Katahdin Batholith . . . . . . . . . . . 34 15 Bouguer gravity map of Mai ne ... ............ . 36 16 Magnetic anomaly map... magnetic anomaly patterns perhaps can be inferred, since regional lithologies are comparable with some volcanic bodies around the plutons. Linear
HectoMAPping the Universe. Karl Schwarzschild Award Lecture 2014
NASA Astrophysics Data System (ADS)
Geller, Margaret J.; Hwang, Ho Seong
2015-06-01
During the last three decades progress in mapping the Universe from an age of 400 000 years to the present has been stunning. Instrument/telescope combinations have naturally determined the sampling of various redshift ranges. Here we outline the impact of the Hectospec on the MMT on exploration of the Universe in the redshift range 0.2 ⪉ z ⪉ 0.8. We focus on dense redshift surveys, SHELS and HectoMAP. SHELS is a complete magnitude limited survey covering 8 square degrees. The HectoMAP survey combines a red-selected dense redshift survey and a weak lensing map covering 50 square degrees. Combining the dense redshift survey with a Subaru HyperSuprimeCam (HSC) weak lensing map will provide a powerful probe of the way galaxies trace the distribution of dark matter on a wide range of physical scales.
Izquierdo-Garcia, David; Hansen, Adam E.; Förster, Stefan; Benoit, Didier; Schachoff, Sylvia; Fürst, Sebastian; Chen, Kevin T.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
We present an approach for head MR-based attenuation correction (MR-AC) based on the Statistical Parametric Mapping (SPM8) software that combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (µ-maps) from MR data in integrated PET/MR scanners. Methods Coregistered anatomical MR and CT images acquired in 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray and white matter, cerebro-spinal fluid, bone and soft tissue, and air), which were then non-rigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomical MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients (LACs) to be used for AC of PET data. The method was validated on sixteen new subjects with brain tumors (N=12) or mild cognitive impairment (N=4) who underwent CT and PET/MR scans. The µ-maps and corresponding reconstructed PET images were compared to those obtained using the gold standard CT-based approach and the Dixon-based method available on the Siemens Biograph mMR scanner. Relative change (RC) images were generated in each case and voxel- and region of interest (ROI)-based analyses were performed. Results The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain LACs (RC=1.38%±4.52%) compared to the gold standard. Similar results (RC=1.86±4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and ROI-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87±5.0% and 2.74±2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0±10.25% and 9.38±4.97%, respectively). Areas closer to skull showed the largest improvement. Conclusion We have presented an SPM8-based approach for deriving the head µ-map from MR data to be used for PET AC in integrated PET/MR scanners. Its implementation is straightforward and only requires the morphological data acquired with a single MR sequence. The method is very accurate and robust, combining the strengths of both segmentation- and atlas-based approaches while minimizing their drawbacks. PMID:25278515
Generalized ISAR--part I: an optimal method for imaging large naval vessels.
Given, James A; Schmidt, William R
2005-11-01
We describe a generalized inverse synthetic aperture radar (ISAR) process that performs well under a wide variety of conditions common to the naval ISAR tests of large vessels. In particular, the generalized ISAR process performs well in the presence of moderate intensity ship roll. The process maps localized scatterers onto peaks on the ISAR plot. However, in a generalized ISAR plot, each of the two coordinates of a peak is a fixed linear combination of the three ship coordinates of the scatterer causing the peak. Combining this process with interferometry will then provide high-accuracy three-dimensional location of the important scatterers on a ship. We show that ISAR can be performed in the presence of simultaneous roll and aspect change, provided the two Doppler rates are not too close in magnitude. We derive the equations needed for generalized ISAR, both roll driven and aspect driven, and test them against simulations performed in a variety of conditions, including large roll amplitudes.
Exploring multivariate representations of indices along linear geographic features
NASA Astrophysics Data System (ADS)
Bleisch, Susanne; Hollenstein, Daria
2018-05-01
A study of the walkability of a Swiss town required finding suitable representations of multivariate geographical da-ta. The goal was to represent multiple indices of walkability concurrently and visualizing the data along the street network it relates to. Different indices of pedestrian friendliness were assessed for short street sections and then mapped to an overlaid grid. Basic and composite glyphs were designed using square- or triangle-areas to display one to four index values concurrently within the grid structure. Color was used to indicate different indices. Implement-ing visualizations for different combinations of index sets, we find that single values can be emphasized or de-emphasized by selecting the color scheme accordingly and that different color selections either allow perceiving sin-gle values or overall trends over the evaluated area. Values for up to four indices can be displayed in combination within the resulting geovisualizations and the underlying gridded road network references the data to its real world locations.
An energy analysis of torrefaction for upgrading microalga residue as a solid fuel.
Chen, Wei-Hsin; Huang, Ming-Yueh; Chang, Jo-Shu; Chen, Chun-Yen; Lee, Wen-Jhy
2015-06-01
The torrefaction characteristics and energy utilization of microalga Chlamydomonas sp. JSC4 (C. sp. JSC4) residue under the combination of temperature and duration are studied by examining contour maps. The torrefaction temperature on the contour line of solid yield has a trend to linearly decrease with increasing duration. An index of relative energy efficiency (REE) is introduced to identify the performance of energy utilization for upgrading biomass. For a fixed energy yield, the optimal operation can be found to maximize the heating value of the biomass and minimize the solid yield. The energy utilization under the combination of a high temperature and a short duration is more efficient than that of a low temperature and a long duration. The maximum REE along the contour line of energy yield is always exhibited at the highest temperature (300°C) where the energy efficiency can be enlarged by a factor of at least 2.36. Copyright © 2015 Elsevier Ltd. All rights reserved.
The inverse problem: Ocean tides derived from earth tide observations
NASA Technical Reports Server (NTRS)
Kuo, J. T.
1978-01-01
Indirect mapping ocean tides by means of land and island-based tidal gravity measurements is presented. The inverse scheme of linear programming is used for indirect mapping of ocean tides. Open ocean tides were measured by the numerical integration of Laplace's tidal equations.
Sengupta, Partho P; Mehta, Vimal; Arora, Ramesh; Mohan, Jagdish C; Khandheria, Bijoy K
2005-07-01
This study tested the hypothesis that linear mapping of regional myocardial strain comprehensively assesses variations in regional myocardial function in hypertrophic cardiomyopathy. Hypertrophic cardiomyopathy is characterized by disorganized myocardial architecture that results in spatial and temporal nonuniformity of regional function. Left ventricular deformation was quantified in 20 patients with hypertrophic cardiomyopathy and compared with 25 age- and sex-matched control subjects. Abnormalities in subendocardial strain ranged from reduced longitudinal shortening to paradoxical systolic lengthening and delayed regional longitudinal contractions that were often located in small subsegmental areas. These variations were underestimated significantly by arbitrary measurements compared with linear mapping, in which a region of interest was moved across the longitudinal length of left ventricle (difference of peak and least strain, 10.7% +/- 5.1% vs 17% +/- 5.5%; P < .001). Echocardiographic assessment of variations in regional strain requires careful mapping and may be inappropriately assessed if left ventricular segments are sampled at arbitrary focal locations.
NASA Astrophysics Data System (ADS)
Kabiri, K.
2017-09-01
The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. In this regard, two Sentinel-2A images (acquired on February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map for the study area were obtained and used for both calibrating the models and validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue (458-523 nm), Band 3: green (543-578 nm), and Band 4: red (650-680 nm), spatial resolution: 10 m) have been considered (11 options) using the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field measured values. The correlation coefficients (R2), and root mean square errors (RMSE) for validation points were calculated for all models and for two satellite images. When compared with the linear transform method, the method employing ratio transformation with a combination of all three bands yielded more accurate results (R2Mac = 0.795, R2Feb = 0.777, RMSEMac = 1.889 m, and RMSEFeb =2.039 m). Although most of the integrated transform methods (specifically the method including all bands and band ratios) have yielded the highest accuracy, these increments were not significant, hence the ratio transformation has selected as optimum method.
Monitoring the Deformation of High-Rise Buildings in Shanghai Luijiazui Zone by Tomo-Psinsar
NASA Astrophysics Data System (ADS)
Zhou, L. F.; Ma, P. F.; Xia, Y.; Xie, C. H.
2018-05-01
In this study, we utilize a Tomography-based Persistent Scatterers Interferometry (Tomo-PSInSAR) approach for monitoring the deformation performances of high-rise buildings, i.e. SWFC and Jin Mao Tower, in Shanghai Lujiazui Zone. For the purpose of this study, we use 31 Stripmap acquisitions from TerraSAR-X missions, spanning from December 2009 to February 2013. Considering thermal expansion, creep and shrinkage are two long-term movements that occur in high-rise buildings with concrete structures, we use an extended 4-D SAR phase model, and three parameters (height, deformation velocity, and thermal amplitude) are estimated simultaneously. Moreover, we apply a two-tier network strategy to detect single and double PSs with no need for preliminary removal of the atmospheric phase screen (APS) in the study area, avoiding possible error caused by the uncertainty in spatiotemporal filtering. Thermal expansion is illustrated in the thermal amplitude map, and deformation due to creep and shrinkage is revealed in the linear deformation velocity map. The thermal amplitude map demonstrates that the derived thermal amplitude of the two high-rise buildings both dilate and contract periodically, which is highly related to the building height due to the upward accumulative effect of thermal expansion. The linear deformation velocity map reveals that SWFC is subject to deformation during the new built period due to creep and shrinkage, which is height-dependent movements in the linear velocity map. It is worth mention that creep and shrinkage induces movements that increase with the increasing height in the downward direction. In addition, the deformation rates caused by creep and shrinkage are largest at the beginning and gradually decrease, and at last achieve a steady state as time goes infinity. On the contrary, the linear deformation velocity map shows that Jin Mao Tower is almost stable, and the reason is that it is an old built building, which is not influenced by creep and shrinkage as the load is relaxed and dehydration proceeds. This study underlines the potential of the Tomo-PSInSAR solution for the monitoring deformation performance of high-rise buildings, which offers a quantitative indicator to local authorities and planners for assessing potential damages.
Combining 3D structure of real video and synthetic objects
NASA Astrophysics Data System (ADS)
Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon
1998-04-01
This paper presents a new approach of combining real video and synthetic objects. The purpose of this work is to use the proposed technology in the fields of advanced animation, virtual reality, games, and so forth. Computer graphics has been used in the fields previously mentioned. Recently, some applications have added real video to graphic scenes for the purpose of augmenting the realism that the computer graphics lacks in. This approach called augmented or mixed reality can produce more realistic environment that the entire use of computer graphics. Our approach differs from the virtual reality and augmented reality in the manner that computer- generated graphic objects are combined to 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined to the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences. The extraction of the 3D structure requires the estimation of depth and the construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or Bezier surface and each planar surface is texture-mapped. (3) Finally, graphic objects are combined to the height map. Because 3D structure of the height map is already known, Step (3) is easily manipulated. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve
Fong, Youyi; Yin, Shuxin; Huang, Ying
2016-01-01
In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on AUC - Area under the Receiver Operating Characteristic Curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC out-performs SAUC in finding the best linear marker combinations, and can successfully capture nonlinear pattern in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation- based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
A Near-Infrared and Thermal Imager for Mapping Titan's Surface Features
NASA Technical Reports Server (NTRS)
Aslam, S.; Hewagma, T.; Jennings, D. E.; Nixon, C.
2012-01-01
Approximately 10% of the solar insolation reaches the surface of Titan through atmospheric spectral windows. We will discuss a filter based imaging system for a future Titan orbiter that will exploit these windows mapping surface features, cloud regions, polar storms. In the near-infrared (NIR), two filters (1.28 micrometer and 1.6 micrometer), strategically positioned between CH1 absorption bands, and InSb linear array pixels will explore the solar reflected radiation. We propose to map the mid, infrared (MIR) region with two filters: 9.76 micrometer and 5.88-to-6.06 micrometers with MCT linear arrays. The first will map MIR thermal emission variations due to surface albedo differences in the atmospheric window between gas phase CH3D and C2H4 opacity sources. The latter spans the crossover spectral region where observed radiation transitions from being dominated by thermal emission to solar reflected light component. The passively cooled linear arrays will be incorporated into the focal plane of a light-weight thin film stretched membrane 10 cm telescope. A rad-hard ASIC together with an FPGA will be used for detector pixel readout and detector linear array selection depending on if the field-of-view (FOV) is looking at the day- or night-side of Titan. The instantaneous FOV corresponds to 3.1, 15.6, and 31.2 mrad for the 1, 5, and 10 micrometer channels, respectively. For a 1500 km orbit, a 5 micrometer channel pixel represents a spatial resolution of 91 m, with a FOV that spans 23 kilometers, and Titan is mapped in a push-broom manner as determined by the orbital path. The system mass and power requirements are estimated to be 6 kg and 5 W, respectively. The package is proposed for a polar orbiter with a lifetime matching two Saturn seasons.
Wave propagation in a strongly nonlinear locally resonant granular crystal
NASA Astrophysics Data System (ADS)
Vorotnikov, K.; Starosvetsky, Y.; Theocharis, G.; Kevrekidis, P. G.
2018-02-01
In this work, we study the wave propagation in a recently proposed acoustic structure, the locally resonant granular crystal. This structure is composed of a one-dimensional granular crystal of hollow spherical particles in contact, containing linear resonators. The relevant model is presented and examined through a combination of analytical approximations (based on ODE and nonlinear map analysis) and of numerical results. The generic dynamics of the system involves a degradation of the well-known traveling pulse of the standard Hertzian chain of elastic beads. Nevertheless, the present system is richer, in that as the primary pulse decays, secondary ones emerge and eventually interfere with it creating modulated wavetrains. Remarkably, upon suitable choices of parameters, this interference "distills" a weakly nonlocal solitary wave (a "nanopteron"). This motivates the consideration of such nonlinear structures through a separate Fourier space technique, whose results suggest the existence of such entities not only with a single-side tail, but also with periodic tails on both ends. These tails are found to oscillate with the intrinsic oscillation frequency of the out-of-phase motion between the outer hollow bead and its internal linear attachment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, D.W.; Ferretti, Alessandro; Novali, Fabrizio
2008-05-01
Transient pressure variations within a reservoir can be treated as a propagating front and analyzed using an asymptotic formulation. From this perspective one can define a pressure 'arrival time' and formulate solutions along trajectories, in the manner of ray theory. We combine this methodology and a technique for mapping overburden deformation into reservoir volume change as a means to estimate reservoir flow properties, such as permeability. Given the entire 'travel time' or phase field, obtained from the deformation data, we can construct the trajectories directly, there-by linearizing the inverse problem. A numerical study indicates that, using this approach, we canmore » infer large-scale variations in flow properties. In an application to Interferometric Synthetic Aperture (InSAR) observations associated with a CO{sub 2} injection at the Krechba field, Algeria, we image pressure propagation to the northwest. An inversion for flow properties indicates a linear trend of high permeability. The high permeability correlates with a northwest trending fault on the flank of the anticline which defines the field.« less
Gstat: a program for geostatistical modelling, prediction and simulation
NASA Astrophysics Data System (ADS)
Pebesma, Edzer J.; Wesseling, Cees G.
1998-01-01
Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
Heat conductivity in graphene and related materials: A time-domain modal analysis
NASA Astrophysics Data System (ADS)
Gill-Comeau, Maxime; Lewis, Laurent J.
2015-11-01
We use molecular dynamics (MD) simulations to study heat conductivity in single-layer graphene and graphite. We analyze the MD trajectories through a time-domain modal analysis and show that this is essential for obtaining a reliable representation of the heat flow in graphene and graphite as it permits the proper treatment of collective vibrational excitations, in contrast to a frequency-domain formulation. Our temperature-dependent results are in very good agreement with experiment and, for temperatures in the range 300-1200 K, we find that the ZA branch allows more heat flow than all other branches combined while the contributions of the TA, LA, and ZO branches are comparable at all temperatures. Conductivity mappings reveal strong collective excitations associated with low-frequency ZA modes. We demonstrate that these collective effects are a consequence of the quadratic nature of the ZA branch as they also show up in graphite but are reduced in strained graphene, where the dispersion becomes linear, and are absent in diamond, where acoustic branches are linear. In general, neglecting collective excitations yields errors similar to those from the single-mode relaxation-time approximation.
Wu, Jingbo; Zhang, Hui; Wang, Yuanzhi; Qiao, Jun; Chen, Chuangfu; Gao, Goege F.; Allain, Jean-Pierre; Li, Chengyao
2012-01-01
More than 35,000 new cases of human brucellosis were reported in 2010 by the Chinese Center for Disease Control and Prevention. An attenuated B. melitensis vaccine M5-90 is currently used for vaccination of sheep and goats in China. In the study, a periplasmic protein BP26 from M5-90 was characterized for its epitope reactivity with mouse monoclonal and sheep antibodies. A total of 29 monoclonal antibodies (mAbs) against recombinant BP26 (rBP26) were produced, which were tested for reactivity with a panel of BP26 peptides, three truncated rBP26 and native BP26 containing membrane protein extracts (NMP) of B. melitensis M5-90 in ELISA and Western-Blot. The linear, semi-conformational and conformational epitopes from native BP26 were identified. Two linear epitopes recognized by mAbs were revealed by 28 of 16mer overlapping peptides, which were accurately mapped as the core motif of amino acid residues 93DRDLQTGGI101 (position 93 to 101) or residues 104QPIYVYPD111, respectively. The reactivity of linear epitope peptides, rBP26 and NMP was tested with 137 sheep sera by ELISAs, of which the two linear epitopes had 65–70% reactivity and NMP 90% consistent with the results of a combination of two standard serological tests. The results were helpful for evaluating the reactivity of BP26 antigen in M5-90. PMID:22457830
Hengl, Tomislav; Heuvelink, Gerard B. M.; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Shepherd, Keith D.; Sila, Andrew; MacMillan, Robert A.; Mendes de Jesus, Jorge; Tamene, Lulseged; Tondoh, Jérôme E.
2015-01-01
80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008–2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to the agricultural management—organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15–75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring pedological knowledge from data rich countries to countries with limited soil data. PMID:26110833
Assessing biomass of diverse coastal marsh ecosystems using statistical and machine learning models
NASA Astrophysics Data System (ADS)
Mo, Yu; Kearney, Michael S.; Riter, J. C. Alexis; Zhao, Feng; Tilley, David R.
2018-06-01
The importance and vulnerability of coastal marshes necessitate effective ways to closely monitor them. Optical remote sensing is a powerful tool for this task, yet its application to diverse coastal marsh ecosystems consisting of different marsh types is limited. This study samples spectral and biophysical data from freshwater, intermediate, brackish, and saline marshes in Louisiana, and develops statistical and machine learning models to assess the marshes' biomass with combined ground, airborne, and spaceborne remote sensing data. It is found that linear models derived from NDVI and EVI are most favorable for assessing Leaf Area Index (LAI) using multispectral data (R2 = 0.7 and 0.67, respectively), and the random forest models are most useful in retrieving LAI and Aboveground Green Biomass (AGB) using hyperspectral data (R2 = 0.91 and 0.84, respectively). It is also found that marsh type and plant species significantly impact the linear model development (P < .05 in both cases). Sensors with coarser spatial resolution yield lower LAI values because the fine water networks are not detected and mixed into the vegetation pixels. The Landsat OLI-derived map shows the LAI of coastal mashes in Louisiana mostly ranges from 0 to 5.0, and is highest for freshwater marshes and for marshes in the Atchafalaya Bay delta. The CASI-derived maps show that LAI of saline marshes at Bay Batiste typically ranges from 0.9 to 1.5, and the AGB is mostly less than 900 g/m2. This study provides solutions for assessing the biomass of Louisiana's coastal marshes using various optical remote sensing techniques, and highlights the impacts of the marshes' species composition on the model development and the sensors' spatial resolution on biomass mapping, thereby providing useful tools for monitoring the biomass of coastal marshes in Louisiana and diverse coastal marsh ecosystems elsewhere.
Hengl, Tomislav; Heuvelink, Gerard B M; Kempen, Bas; Leenaars, Johan G B; Walsh, Markus G; Shepherd, Keith D; Sila, Andrew; MacMillan, Robert A; Mendes de Jesus, Jorge; Tamene, Lulseged; Tondoh, Jérôme E
2015-01-01
80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008-2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to the agricultural management--organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15-75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring pedological knowledge from data rich countries to countries with limited soil data.
PET attenuation correction for flexible MRI surface coils in hybrid PET/MRI using a 3D depth camera
NASA Astrophysics Data System (ADS)
Frohwein, Lynn J.; Heß, Mirco; Schlicher, Dominik; Bolwin, Konstantin; Büther, Florian; Jiang, Xiaoyi; Schäfers, Klaus P.
2018-01-01
PET attenuation correction for flexible MRI radio frequency surface coils in hybrid PET/MRI is still a challenging task, as position and shape of these coils conform to large inter-patient variabilities. The purpose of this feasibility study is to develop a novel method for the incorporation of attenuation information about flexible surface coils in PET reconstruction using the Microsoft Kinect V2 depth camera. The depth information is used to determine a dense point cloud of the coil’s surface representing the shape of the coil. From a CT template—acquired once in advance—surface information of the coil is extracted likewise and converted into a point cloud. The two point clouds are then registered using a combination of an iterative-closest-point (ICP) method and a partially rigid registration step. Using the transformation derived through the point clouds, the CT template is warped and thereby adapted to the PET/MRI scan setup. The transformed CT template is converted into an attenuation map from Hounsfield units into linear attenuation coefficients. The resulting fitted attenuation map is then integrated into the MRI-based patient-specific DIXON-based attenuation map of the actual PET/MRI scan. A reconstruction of phantom PET data acquired with the coil present in the field-of-view (FoV), but without the corresponding coil attenuation map, shows large artifacts in regions close to the coil. The overall count loss is determined to be around 13% compared to a PET scan without the coil present in the FoV. A reconstruction using the new μ-map resulted in strongly reduced artifacts as well as increased overall PET intensities with a remaining relative difference of about 1% to a PET scan without the coil in the FoV.
Artificial-epitope mapping for CK-MB assay.
Tai, Dar-Fu; Ho, Yi-Fang; Wu, Cheng-Hsin; Lin, Tzu-Chieh; Lu, Kuo-Hao; Lin, Kun-Shian
2011-06-07
A quantitative method using artificial antibody to detect creatine kinases was developed. Linear epitope sequences were selected based on an artificial-epitope mapping strategy. Nine different MIPs corresponding to the selected peptides were then fabricated on QCM chips. The subtle conformational changes were also recognized by these chips.
SfiI genomic cleavage map of Escherichia coli K-12 strain MG1655.
Perkins, J D; Heath, J D; Sharma, B R; Weinstock, G M
1992-01-01
An SfiI restriction map of Escherichia coli K-12 strain MG1655 is presented. The map contains thirty-one cleavage sites separating fragments ranging in size from 407 kb to 3.7 kb. Several techniques were used in the construction of this map, including CHEF pulsed field gel electrophoresis; physical analysis of a set of twenty-six auxotrophic transposon insertions; correlation with the restriction map of Kohara and coworkers using the commercially available E. coli Gene Mapping Membranes; analysis of publicly available sequence information; and correlation of the above data with the combined genetic and physical map developed by Rudd, et al. The combination of these techniques has yielded a map in which all but one site can be localized within a range of +/- 2 kb, and over half the sites can be localized precisely by sequence data. Two sites present in the EcoSeq5 sequence database are not cleaved in MG1655 and four sites are noted to be sensitive to methylation by the dcm methylase. This map, combined with the NotI physical map of MG1655, can aid in the rapid, precise mapping of several different types of genetic alterations, including transposon mediated mutations and other insertions, inversions, deletions and duplications. Images PMID:1312707
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called target-silhouette to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images to generate the target-enhanced map. Then basic belief assignment is used to transform this map into a belief map. The detection results of sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map on the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by SE-WORKBENCH simulator, and compared with conventional algorithms. The proposed fusion scheme achieves higher detection rate and lower false alarm rate than the conventional algorithms.
Imparting Motion to a Test Object Such as a Motor Vehicle in a Controlled Fashion
NASA Technical Reports Server (NTRS)
Southward, Stephen C. (Inventor); Reubush, Chandler (Inventor); Pittman, Bryan (Inventor); Roehrig, Kurt (Inventor); Gerard, Doug (Inventor)
2014-01-01
An apparatus imparts motion to a test object such as a motor vehicle in a controlled fashion. A base has mounted on it a linear electromagnetic motor having a first end and a second end, the first end being connected to the base. A pneumatic cylinder and piston combination have a first end and a second end, the first end connected to the base so that the pneumatic cylinder and piston combination is generally parallel with the linear electromagnetic motor. The second ends of the linear electromagnetic motor and pneumatic cylinder and piston combination being commonly linked to a mount for the test object. A control system for the linear electromagnetic motor and pneumatic cylinder and piston combination drives the pneumatic cylinder and piston combination to support a substantial static load of the test object and the linear electromagnetic motor to impart controlled motion to the test object.
NASA Astrophysics Data System (ADS)
Hidayati, H.; Ramli, R.
2018-04-01
This paper aims to provide a description of the implementation of Physic Problem Solving strategy combined with concept maps in General Physics learning at Department of Physics, Universitas Negeri Padang. Action research has been conducted in two cycles where each end of the cycle is reflected and improved for the next cycle. Implementation of Physics Problem Solving strategy combined with concept map can increase student activity in solving general physics problem with an average increase of 15% and can improve student learning outcomes from 42,7 in the cycle I become 62,7 in cycle II in general physics at the Universitas Negeri Padang. In the future, the implementation of Physic Problem Solving strategy combined with concept maps will need to be considered in Physics courses.
Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.
Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas
2008-01-01
In this paper, we propose a generic framework for intersubject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points as opposed to spatial registration that solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we perform simultaneously pairwise registrations of corresponding time-points with the constraint to map the same physical points over time. We show this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the inter-subject non-linear registration of 4D cardiac CT sequences.
Change Detection via Selective Guided Contrasting Filters
NASA Astrophysics Data System (ADS)
Vizilter, Y. V.; Rubis, A. Y.; Zheltov, S. Y.
2017-05-01
Change detection scheme based on guided contrasting was previously proposed. Guided contrasting filter takes two images (test and sample) as input and forms the output as filtered version of test image. Such filter preserves the similar details and smooths the non-similar details of test image with respect to sample image. Due to this the difference between test image and its filtered version (difference map) could be a basis for robust change detection. Guided contrasting is performed in two steps: at the first step some smoothing operator (SO) is applied for elimination of test image details; at the second step all matched details are restored with local contrast proportional to the value of some local similarity coefficient (LSC). The guided contrasting filter was proposed based on local average smoothing as SO and local linear correlation as LSC. In this paper we propose and implement new set of selective guided contrasting filters based on different combinations of various SO and thresholded LSC. Linear average and Gaussian smoothing, nonlinear median filtering, morphological opening and closing are considered as SO. Local linear correlation coefficient, morphological correlation coefficient (MCC), mutual information, mean square MCC and geometrical correlation coefficients are applied as LSC. Thresholding of LSC allows operating with non-normalized LSC and enhancing the selective properties of guided contrasting filters: details are either totally recovered or not recovered at all after the smoothing. These different guided contrasting filters are tested as a part of previously proposed change detection pipeline, which contains following stages: guided contrasting filtering on image pyramid, calculation of difference map, binarization, extraction of change proposals and testing change proposals using local MCC. Experiments on real and simulated image bases demonstrate the applicability of all proposed selective guided contrasting filters. All implemented filters provide the robustness relative to weak geometrical discrepancy of compared images. Selective guided contrasting based on morphological opening/closing and thresholded morphological correlation demonstrates the best change detection result.
Corrêa, Wesley G; Durand, Marina T; Becari, Christiane; Tezini, Geisa C S V; do Carmo, Jussara M; de Oliveira, Mauro; Prado, Cibele M; Fazan, Rubens; Salgado, Helio C
2015-01-01
The increase in acetylcholine yielded by pyridostigmine (PYR), an acetylcholinesterase inhibitor, was evaluated for its effect on the haemodynamic responses-mean arterial pressure (MAP) and heart rate (HR)-and their nycthemeral oscillation in mice before and one week after myocardial infarction (MI). Mice were anesthetized (isoflurane), and a telemetry transmitter was implanted into the carotid artery. After 5 days of recovery, the MAP and HR were recorded for 48 h (10 s every 10 min). Following this procedure, mice were submitted to surgery for sham or coronary artery ligation and received drinking water (VEHICLE) with or without PYR. Five days after surgery, the haemodynamic recordings were recommenced. Sham surgery combined with VEHICLE did not affect basal MAP and HR; nevertheless, these haemodynamic parameters were higher during the night, before and after surgery. MI combined with VEHICLE displayed decreased MAP and increased HR; these haemodynamic parameters were also higher during the night, before and after surgery. Sham surgery combined with PYR displayed similar results for MAP as sham combined with VEHICLE; however, PYR produced bradycardia. Nevertheless, MI combined with PYR exhibited no change in MAP and HR, but these haemodynamic parameters were also higher during the night, before and after surgery. Therefore, MI decreased MAP and increased HR, while PYR prevented these alterations. Neither MI nor PYR affected nycthemeral oscillations of MAP and HR. These findings indicate that the increase in acetylcholine yielded by PYR protected the haemodynamic alterations caused by MI in mice, without affecting the nycthemeral haemodynamic oscillations. Copyright © 2014 Elsevier B.V. All rights reserved.
DOT National Transportation Integrated Search
2009-04-28
A study was conducted to explore the utility and recognition of lines and linear patterns on electronic displays depicting aeronautical charting information, such as electronic charts and moving map displays. The goal of this research is to support t...
Flexible Learning Itineraries Based on Conceptual Maps
ERIC Educational Resources Information Center
Agudelo, Olga Lucía; Salinas, Jesús
2015-01-01
The use of learning itineraries based on conceptual maps is studied in order to propose a more flexible instructional design that strengthens the learning process focused on the student, generating non-linear processes, characterising its elements, setting up relationships between them and shaping a general model with specifications for each…
Kobayashi, Akira; Yokogawa, Hideaki; Sugiyama, Kazuhisa
2012-01-01
The purpose of this study was to investigate pathological changes of the corneal cell layer in patients with map-dot-fingerprint (epithelial basement membrane) dystrophy by in vivo laser corneal confocal microscopy. Two patients were evaluated using a cornea-specific in vivo laser scanning confocal microscope (Heidelberg Retina Tomograph 2 Rostock Cornea Module, HRT 2-RCM). The affected corneal areas of both patients were examined. Image analysis was performed to identify corneal epithelial and stromal deposits correlated with this dystrophy. Variously shaped (linear, multilaminar, curvilinear, ring-shape, geographic) highly reflective materials were observed in the "map" area, mainly in the basal epithelial cell layer. In "fingerprint" lesions, multiple linear and curvilinear hyporeflective lines were observed. Additionally, in the affected corneas, infiltration of possible Langerhans cells and other inflammatory cells was observed as highly reflective Langerhans cell-like or dot images. Finally, needle-shaped materials were observed in one patient. HRT 2-RCM laser confocal microscopy is capable of identifying corneal microstructural changes related to map-dot-fingerprint corneal dystrophy in vivo. The technique may be useful in elucidating the pathogenesis and natural course of map-dot-fingerprint corneal dystrophy and other similar basement membrane abnormalities.
NASA Astrophysics Data System (ADS)
Lartizien, Carole; Marache-Francisco, Simon; Prost, Rémy
2012-02-01
Positron emission tomography (PET) using fluorine-18 deoxyglucose (18F-FDG) has become an increasingly recommended tool in clinical whole-body oncology imaging for the detection, diagnosis, and follow-up of many cancers. One way to improve the diagnostic utility of PET oncology imaging is to assist physicians facing difficult cases of residual or low-contrast lesions. This study aimed at evaluating different schemes of computer-aided detection (CADe) systems for the guided detection and localization of small and low-contrast lesions in PET. These systems are based on two supervised classifiers, linear discriminant analysis (LDA) and the nonlinear support vector machine (SVM). The image feature sets that serve as input data consisted of the coefficients of an undecimated wavelet transform. An optimization study was conducted to select the best combination of parameters for both the SVM and the LDA. Different false-positive reduction (FPR) methods were evaluated to reduce the number of false-positive detections per image (FPI). This includes the removal of small detected clusters and the combination of the LDA and SVM detection maps. The different CAD schemes were trained and evaluated based on a simulated whole-body PET image database containing 250 abnormal cases with 1230 lesions and 250 normal cases with no lesion. The detection performance was measured on a separate series of 25 testing images with 131 lesions. The combination of the LDA and SVM score maps was shown to produce very encouraging detection performance for both the lung lesions, with 91% sensitivity and 18 FPIs, and the liver lesions, with 94% sensitivity and 10 FPIs. Comparison with human performance indicated that the different CAD schemes significantly outperformed human detection sensitivities, especially regarding the low-contrast lesions.
The spatio-temporal mapping of epileptic networks: Combination of EEG–fMRI and EEG source imaging
Vulliemoz, S.; Thornton, R.; Rodionov, R.; Carmichael, D.W.; Guye, M.; Lhatoo, S.; McEvoy, A.W.; Spinelli, L.; Michel, C.M.; Duncan, J.S.; Lemieux, L.
2009-01-01
Simultaneous EEG–fMRI acquisitions in patients with epilepsy often reveal distributed patterns of Blood Oxygen Level Dependant (BOLD) change correlated with epileptiform discharges. We investigated if electrical source imaging (ESI) performed on the interictal epileptiform discharges (IED) acquired during fMRI acquisition could be used to study the dynamics of the networks identified by the BOLD effect, thereby avoiding the limitations of combining results from separate recordings. Nine selected patients (13 IED types identified) with focal epilepsy underwent EEG–fMRI. Statistical analysis was performed using SPM5 to create BOLD maps. ESI was performed on the IED recorded during fMRI acquisition using a realistic head model (SMAC) and a distributed linear inverse solution (LAURA). ESI could not be performed in one case. In 10/12 remaining studies, ESI at IED onset (ESIo) was anatomically close to one BOLD cluster. Interestingly, ESIo was closest to the positive BOLD cluster with maximal statistical significance in only 4/12 cases and closest to negative BOLD responses in 4/12 cases. Very small BOLD clusters could also have clinical relevance in some cases. ESI at later time frame (ESIp) showed propagation to remote sources co-localised with other BOLD clusters in half of cases. In concordant cases, the distance between maxima of ESI and the closest EEG–fMRI cluster was less than 33 mm, in agreement with previous studies. We conclude that simultaneous ESI and EEG–fMRI analysis may be able to distinguish areas of BOLD response related to initiation of IED from propagation areas. This combination provides new opportunities for investigating epileptic networks. PMID:19408351
Enumeration of Extended m-Regular Linear Stacks.
Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian
2016-12-01
The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of [Formula: see text]-regular linear stacks, where each arc has length at least [Formula: see text] and the degree of each vertex is bounded by 2. Since the two terminal points in a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended [Formula: see text]-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is closed to real protein contact maps. Denote the generating functions of the [Formula: see text]-regular linear stacks and the extended [Formula: see text]-regular linear stacks by [Formula: see text] and [Formula: see text], respectively. We show that [Formula: see text] can be written as a rational function of [Formula: see text]. For a certain [Formula: see text], by eliminating [Formula: see text], we obtain an equation satisfied by [Formula: see text] and derive the asymptotic formula of the numbers of [Formula: see text]-regular linear stacks of length [Formula: see text].
NASA Astrophysics Data System (ADS)
Safari, A.; Sharifi, M. A.; Amjadiparvar, B.
2010-05-01
The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept in the CHAMP mission to provide a much higher accuracy. The line of sight (LOS) acceleration difference between the GRACE satellite pair is the mostly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonic up to degree and order 120. The total number of unknowns is 14641. Such a linear equation system can be solved with iterative solvers or direct solvers. However, the runtime of direct methods or that of iterative solvers without a suitable preconditioner increases tremendously. This is the reason why we need a more sophisticated method to solve the linear system of problems with a large number of unknowns. Multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method, which allows it to split the normal matrix of the system into several smaller overlaped submatrices. In each iteration step the multiplicative variant of the Schwarz alternating algorithm solves linear systems with the matrices obtained from the splitting successively. It reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations from accuracy and runtime points of view. Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low-Low Satellite-to-Satellite Tracking
Eugenio, Francisco; Marcello, Javier; Martin, Javier; Rodríguez-Esparragón, Dionisio
2017-11-16
Remote multispectral data can provide valuable information for monitoring coastal water ecosystems. Specifically, high-resolution satellite-based imaging systems, as WorldView-2 (WV-2), can generate information at spatial scales needed to implement conservation actions for protected littoral zones. However, coastal water-leaving radiance arriving at the space-based sensor is often small as compared to reflected radiance. In this work, complex approaches, which usually use an accurate radiative transfer code to correct the atmospheric effects, such as FLAASH, ATCOR and 6S, have been implemented for high-resolution imagery. They have been assessed in real scenarios using field spectroradiometer data. In this context, the three approaches have achieved excellent results and a slightly superior performance of 6S model-based algorithm has been observed. Finally, for the mapping of benthic habitats in shallow-waters marine protected environments, a relevant application of the proposed atmospheric correction combined with an automatic deglinting procedure is presented. This approach is based on the integration of a linear mixing model of benthic classes within the radiative transfer model of the water. The complete methodology has been applied to selected ecosystems in the Canary Islands (Spain) but the obtained results allow the robust mapping of the spatial distribution and density of seagrass in coastal waters and the analysis of multitemporal variations related to the human activity and climate change in littoral zones.
Eugenio, Francisco; Marcello, Javier; Martin, Javier
2017-01-01
Remote multispectral data can provide valuable information for monitoring coastal water ecosystems. Specifically, high-resolution satellite-based imaging systems, as WorldView-2 (WV-2), can generate information at spatial scales needed to implement conservation actions for protected littoral zones. However, coastal water-leaving radiance arriving at the space-based sensor is often small as compared to reflected radiance. In this work, complex approaches, which usually use an accurate radiative transfer code to correct the atmospheric effects, such as FLAASH, ATCOR and 6S, have been implemented for high-resolution imagery. They have been assessed in real scenarios using field spectroradiometer data. In this context, the three approaches have achieved excellent results and a slightly superior performance of 6S model-based algorithm has been observed. Finally, for the mapping of benthic habitats in shallow-waters marine protected environments, a relevant application of the proposed atmospheric correction combined with an automatic deglinting procedure is presented. This approach is based on the integration of a linear mixing model of benthic classes within the radiative transfer model of the water. The complete methodology has been applied to selected ecosystems in the Canary Islands (Spain) but the obtained results allow the robust mapping of the spatial distribution and density of seagrass in coastal waters and the analysis of multitemporal variations related to the human activity and climate change in littoral zones. PMID:29144444
NASA Astrophysics Data System (ADS)
Rizzo, R. E.; Healy, D.; Farrell, N. J.
2017-12-01
We have implemented a novel image processing tool, namely two-dimensional (2D) Morlet wavelet analysis, capable of detecting changes occurring in fracture patterns at different scales of observation, and able of recognising the dominant fracture orientations and the spatial configurations for progressively larger (or smaller) scale of analysis. Because of its inherited anisotropy, the Morlet wavelet is proved to be an excellent choice for detecting directional linear features, i.e. regions where the amplitude of the signal is regular along one direction and has sharp variation along the perpendicular direction. Performances of the Morlet wavelet are tested against the 'classic' Mexican hat wavelet, deploying a complex synthetic fracture network. When applied to a natural fracture network, formed triaxially (σ1>σ2=σ3) deforming a core sample of the Hopeman sandstone, the combination of 2D Morlet wavelet and wavelet coefficient maps allows for the detection of characteristic scale orientation and length transitions, associated with the shifts from distributed damage to the growth of localised macroscopic shear fracture. A complementary outcome arises from the wavelet coefficient maps produced by increasing the wavelet scale parameter. These maps can be used to chart the variations in the spatial distribution of the analysed entities, meaning that it is possible to retrieve information on the density of fracture patterns at specific length scales during deformation.
Slic Superpixels for Object Delineation from Uav Data
NASA Astrophysics Data System (ADS)
Crommelinck, S.; Bennett, R.; Gerke, M.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.
2017-08-01
Unmanned aerial vehicles (UAV) are increasingly investigated with regard to their potential to create and update (cadastral) maps. UAVs provide a flexible and low-cost platform for high-resolution data, from which object outlines can be accurately delineated. This delineation could be automated with image analysis methods to improve existing mapping procedures that are cost, time and labor intensive and of little reproducibility. This study investigates a superpixel approach, namely simple linear iterative clustering (SLIC), in terms of its applicability to UAV data. The approach is investigated in terms of its applicability to high-resolution UAV orthoimages and in terms of its ability to delineate object outlines of roads and roofs. Results show that the approach is applicable to UAV orthoimages of 0.05 m GSD and extents of 100 million and 400 million pixels. Further, the approach delineates the objects with the high accuracy provided by the UAV orthoimages at completeness rates of up to 64 %. The approach is not suitable as a standalone approach for object delineation. However, it shows high potential for a combination with further methods that delineate objects at higher correctness rates in exchange of a lower localization quality. This study provides a basis for future work that will focus on the incorporation of multiple methods for an interactive, comprehensive and accurate object delineation from UAV data. This aims to support numerous application fields such as topographic and cadastral mapping.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
From spiking neuron models to linear-nonlinear models.
Ostojic, Srdjan; Brunel, Nicolas
2011-01-20
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
Geomorphic domains and linear features on Landsat images, Circle Quadrangle, Alaska
Simpson, S.L.
1984-01-01
A remote sensing study using Landsat images was undertaken as part of the Alaska Mineral Resource Assessment Program (AMRAP). Geomorphic domains A and B, identified on enhanced Landsat images, divide Circle quadrangle south of Tintina fault zone into two regional areas having major differences in surface characteristics. Domain A is a roughly rectangular, northeast-trending area of relatively low relief and simple, widely spaced drainages, except where igneous rocks are exposed. In contrast, domain B, which bounds two sides of domain A, is more intricately dissected showing abrupt changes in slope and relatively high relief. The northwestern part of geomorphic domain A includes a previously mapped tectonostratigraphic terrane. The southeastern boundary of domain A occurs entirely within the adjoining tectonostratigraphic terrane. The sharp geomorphic contrast along the southeastern boundary of domain A and the existence of known faults along this boundary suggest that the southeastern part of domain A may be a subdivision of the adjoining terrane. Detailed field studies would be necessary to determine the characteristics of the subdivision. Domain B appears to be divisible into large areas of different geomorphic terrains by east-northeast-trending curvilinear lines drawn on Landsat images. Segments of two of these lines correlate with parts of boundaries of mapped tectonostratigraphic terranes. On Landsat images prominent north-trending lineaments together with the curvilinear lines form a large-scale regional pattern that is transected by mapped north-northeast-trending high-angle faults. The lineaments indicate possible lithlogic variations and/or structural boundaries. A statistical strike-frequency analysis of the linear features data for Circle quadrangle shows that northeast-trending linear features predominate throughout, and that most northwest-trending linear features are found south of Tintina fault zone. A major trend interval of N.64-72E. in the linear feature data, corresponds to the strike of foliations in metamorphic rocks and magnetic anomalies reflecting compositional variations suggesting that most linear features in the southern part of the quadrangle probably are related to lithologic variations brought about by folding and foliation of metamorphic rocks. A second important trend interval, N.14-35E., may be related to thrusting south of the Tintina fault zone, as high concentrations of linear features within this interval are found in areas of mapped thrusts. Low concentrations of linear features are found in areas of most igneous intrusives. High concentrations of linear features do not correspond to areas of mineralization in any consistent or significant way that would allow concentration patterns to be easily used as an aid in locating areas of mineralization. The results of this remote sensing study indicate that there are several possibly important areas where further detailed studies are warranted.
A comparative study of linear and nonlinear MIMO feedback configurations
NASA Technical Reports Server (NTRS)
Desoer, C. A.; Lin, C. A.
1984-01-01
In this paper, a comparison is conducted of several feedback configurations which have appeared in the literature (e.g. unity-feedback, model-reference, etc.). The linear time-invariant multi-input multi-output case is considered. For each configuration, the stability conditions are specified, the relation between achievable I/O maps and the achievable disturbance-to-output maps is examined, and the effect of various subsystem perturbations on the system performance is studied. In terms of these considerations, it is demonstrated that one of the configurations considered is better than all the others. The results are then extended to the nonlinear multi-input multi-output case.
Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm
NASA Astrophysics Data System (ADS)
Foroutan, M.; Zimbelman, J. R.
2017-09-01
Increased application of high resolution spatial data such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth, as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to increase automation techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies such as climate-related changes as well as increasing access to high-resolution satellite images underline the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis along with the improvement of the accuracy of results. About 98% overall accuracy and 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
Yu, Huibin; Song, Yonghui; Liu, Ruixia; Pan, Hongwei; Xiang, Liancheng; Qian, Feng
2014-10-01
The stabilization of latent tracers of dissolved organic matter (DOM) of wastewater was analyzed by three-dimensional excitation-emission matrix (EEM) fluorescence spectroscopy coupled with self-organizing map and classification and regression tree analysis (CART) in wastewater treatment performance. DOM of water samples collected from primary sedimentation, anaerobic, anoxic, oxic and secondary sedimentation tanks in a large-scale wastewater treatment plant contained four fluorescence components: tryptophan-like (C1), tyrosine-like (C2), microbial humic-like (C3) and fulvic-like (C4) materials extracted by self-organizing map. These components showed good positive linear correlations with dissolved organic carbon of DOM. C1 and C2 were representative components in the wastewater, and they were removed to a higher extent than those of C3 and C4 in the treatment process. C2 was a latent parameter determined by CART to differentiate water samples of oxic and secondary sedimentation tanks from the successive treatment units, indirectly proving that most of tyrosine-like material was degraded by anaerobic microorganisms. C1 was an accurate parameter to comprehensively separate the samples of the five treatment units from each other, indirectly indicating that tryptophan-like material was decomposed by anaerobic and aerobic bacteria. EEM fluorescence spectroscopy in combination with self-organizing map and CART analysis can be a nondestructive effective method for characterizing structural component of DOM fractions and monitoring organic matter removal in wastewater treatment process. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery
NASA Astrophysics Data System (ADS)
Gao, Junfeng; Liao, Wenzhi; Nuyttens, David; Lootens, Peter; Vangeyte, Jürgen; Pižurica, Aleksandra; He, Yong; Pieters, Jan G.
2018-05-01
The developments in the use of unmanned aerial vehicles (UAVs) and advanced imaging sensors provide new opportunities for ultra-high resolution (e.g., less than a 10 cm ground sampling distance (GSD)) crop field monitoring and mapping in precision agriculture applications. In this study, we developed a strategy for inter- and intra-row weed detection in early season maize fields from aerial visual imagery. More specifically, the Hough transform algorithm (HT) was applied to the orthomosaicked images for inter-row weed detection. A semi-automatic Object-Based Image Analysis (OBIA) procedure was developed with Random Forests (RF) combined with feature selection techniques to classify soil, weeds and maize. Furthermore, the two binary weed masks generated from HT and OBIA were fused for accurate binary weed image. The developed RF classifier was evaluated by 5-fold cross validation, and it obtained an overall accuracy of 0.945, and Kappa value of 0.912. Finally, the relationship of detected weeds and their ground truth densities was quantified by a fitted linear model with a coefficient of determination of 0.895 and a root mean square error of 0.026. Besides, the importance of input features was evaluated, and it was found that the ratio of vegetation length and width was the most significant feature for the classification model. Overall, our approach can yield a satisfactory weed map, and we expect that the obtained accurate and timely weed map from UAV imagery will be applicable to realize site-specific weed management (SSWM) in early season crop fields for reducing spraying non-selective herbicides and costs.
DNA Translator and Aligner: HyperCard utilities to aid phylogenetic analysis of molecules.
Eernisse, D J
1992-04-01
DNA Translator and Aligner are molecular phylogenetics HyperCard stacks for Macintosh computers. They manipulate sequence data to provide graphical gene mapping, conversions, translations and manual multiple-sequence alignment editing. DNA Translator is able to convert documented GenBank or EMBL documented sequences into linearized, rescalable gene maps whose gene sequences are extractable by clicking on the corresponding map button or by selection from a scrolling list. Provided gene maps, complete with extractable sequences, consist of nine metazoan, one yeast, and one ciliate mitochondrial DNAs and three green plant chloroplast DNAs. Single or multiple sequences can be manipulated to aid in phylogenetic analysis. Sequences can be translated between nucleic acids and proteins in either direction with flexible support of alternate genetic codes and ambiguous nucleotide symbols. Multiple aligned sequence output from diverse sources can be converted to Nexus, Hennig86 or PHYLIP format for subsequent phylogenetic analysis. Input or output alignments can be examined with Aligner, a convenient accessory stack included in the DNA Translator package. Aligner is an editor for the manual alignment of up to 100 sequences that toggles between display of matched characters and normal unmatched sequences. DNA Translator also generates graphic displays of amino acid coding and codon usage frequency relative to all other, or only synonymous, codons for approximately 70 select organism-organelle combinations. Codon usage data is compatible with spreadsheet or UWGCG formats for incorporation of additional molecules of interest. The complete package is available via anonymous ftp and is free for non-commercial uses.
Using mental mapping to unpack perceived cycling risk.
Manton, Richard; Rau, Henrike; Fahy, Frances; Sheahan, Jerome; Clifford, Eoghan
2016-03-01
Cycling is the most energy-efficient mode of transport and can bring extensive environmental, social and economic benefits. Research has highlighted negative perceptions of safety as a major barrier to the growth of cycling. Understanding these perceptions through the application of novel place-sensitive methodological tools such as mental mapping could inform measures to increase cyclist numbers and consequently improve cyclist safety. Key steps to achieving this include: (a) the design of infrastructure to reduce actual risks and (b) targeted work on improving safety perceptions among current and future cyclists. This study combines mental mapping, a stated-preference survey and a transport infrastructure inventory to unpack perceptions of cycling risk and to reveal both overlaps and discrepancies between perceived and actual characteristics of the physical environment. Participants translate mentally mapped cycle routes onto hard-copy base-maps, colour-coding road sections according to risk, while a transport infrastructure inventory captures the objective cycling environment. These qualitative and quantitative data are matched using Geographic Information Systems and exported to statistical analysis software to model the individual and (infra)structural determinants of perceived cycling risk. This method was applied to cycling conditions in Galway City (Ireland). Participants' (n=104) mental maps delivered data-rich perceived safety observations (n=484) and initial comparison with locations of cycling collisions suggests some alignment between perception and reality, particularly relating to danger at roundabouts. Attributing individual and (infra)structural characteristics to each observation, a Generalised Linear Mixed Model statistical analysis identified segregated infrastructure, road width, the number of vehicles as well as gender and cycling experience as significant, and interactions were found between individual and infrastructural variables. The paper concludes that mental mapping is a highly useful tool for assessing perceptions of cycling risk with a strong visual aspect and significant potential for public participation. This distinguishes it from more traditional cycling safety assessment tools that focus solely on the technical assessment of cycling infrastructure. Further development of online mapping tools is recommended as part of bicycle suitability measures to engage cyclists and the general public and to inform 'soft' and 'hard' cycling policy responses. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, C.D.; Pirkle, F.L.; Schmidt, J.S.
1981-01-01
A Principal Components Analysis (PCA) has been written to aid in the interpretation of multivariate aerial radiometric data collected by the US Department of Energy (DOE) under the National Uranium Resource Evaluation (NURE) program. The variations exhibited by these data have been reduced and classified into a number of linear combinations by using the PCA program. The PCA program then generates histograms and outlier maps of the individual variates. Black and white plots can be made on a Calcomp plotter by the application of follow-up programs. All programs referred to in this guide were written for a DEC-10. From thismore » analysis a geologist may begin to interpret the data structure. Insight into geological processes underlying the data may be obtained.« less
Temporal Stability of the NDVI-LAI Relationship in a Napa Valley Vineyard
NASA Technical Reports Server (NTRS)
Johnson, L. F.
2003-01-01
Remotely sensed normalized difference vegetation index (NDVI) values, derived from high-resolution satellite images, were compared with ground measurements of vineyard leaf area index (LAI) periodically during the 2001 growing season. The two variables were strongly related at six ground calibration sites on each of four occasions (r squared = 0.91 to 0.98). Linear regression equations relating the two variables did not significantly differ by observation date, and a single equation accounted for 92 percent of the variance in the combined dataset. Temporal stability of the relationship opens the possibility of transforming NDVI maps to LAI in the absence of repeated ground calibration fieldwork. In order to take advantage of this circumstance, however, steps should be taken to assure temporal consistency in spectral data values comprising the NDVI.
Combination of dynamic Bayesian network classifiers for the recognition of degraded characters
NASA Astrophysics Data System (ADS)
Likforman-Sulem, Laurence; Sigelle, Marc
2009-01-01
We investigate in this paper the combination of DBN (Dynamic Bayesian Network) classifiers, either independent or coupled, for the recognition of degraded characters. The independent classifiers are a vertical HMM and a horizontal HMM whose observable outputs are the image columns and the image rows respectively. The coupled classifiers, presented in a previous study, associate the vertical and horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers are then combined linearly at the decision level. We compare the different classifiers -independent, coupled or linearly combined- on two tasks: the recognition of artificially degraded handwritten digits and the recognition of real degraded old printed characters. Our results show that coupled DBNs perform better on degraded characters than the linear combination of independent HMM scores. Our results also show that the best classifier is obtained by linearly combining the scores of the best coupled DBN and the best independent HMM.
Combined linear theory/impact theory method for analysis and design of high speed configurations
NASA Technical Reports Server (NTRS)
Brooke, D.; Vondrasek, D. V.
1980-01-01
Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicate that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.
NASA Astrophysics Data System (ADS)
Sulistyo, Bambang
2016-11-01
The research was aimed at studying the efect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion modelling of The USLE using remote sensing data and GIS technique. Methods applied was by analysing all factors affecting erosion such that all data were in the form of raster. Those data were R, K, LS, C and P factors. Monthly R factor was evaluated based on formula developed by Abdurachman. K factor was determined using modified formula used by Ministry of Forestry based on soil samples taken in the field. LS factor was derived from Digital Elevation Model. Three C factors used were all derived from NDVI and developed by Suriyaprasit (non-linear) and by Sulistyo (linear and non-linear). P factor was derived from the combination between slope data and landcover classification interpreted from Landsat 7 ETM+. Another analysis was the creation of map of Bulk Density used to convert erosion unit. To know the model accuracy, model validation was done by applying statistical analysis and by comparing Emodel with Eactual. A threshold value of ≥ 0.80 or ≥ 80% was chosen to justify. The research result showed that all Emodel using three formulae of C factors have coeeficient of correlation value of > 0.8. The results of analysis of variance showed that there was significantly difference between Emodel and Eactual when using C factor formula developed by Suriyaprasit and Sulistyo (non-linear). Among the three formulae, only Emodel using C factor formula developed by Sulistyo (linear) reached the accuracy of 81.13% while the other only 56.02% as developed by Sulistyo (nonlinear) and 4.70% as developed by Suriyaprasit, respectively.
Squeeze-film dampers for turbomachinery stabilization
NASA Technical Reports Server (NTRS)
Mclean, L. J.; Hahn, E. J.
1984-01-01
A technique for investigating the stability and damping present in centrally preloaded radially symmetric multi-mass flexible rotor bearing systems is presented. In general, one needs to find the eigenvalues of the linearized perturbation equations, though zero frequency stability maps may be found by solving as many simultaneous non-linear equations as there are dampers; and in the case of a single damper, such maps may be found directly, regardless of the number of degrees of freedom. The technique is illustrated for a simple symmetric four degree of freedom flexible rotor with an unpressurized damper. This example shows that whereas zero frequency stability maps are likely to prove to be a simple way to delineate multiple solution possibilities, they do not provide full stability information. Further, particularly for low bearing parameters, the introduction of an unpressurized squeeze film damper may promote instability in an otherwise stable system.
Linear response formula for piecewise expanding unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Smania, Daniel
2008-04-01
The average R(t)=\\int \\varphi\\,\\rmd \\mu_t of a smooth function phiv with respect to the SRB measure μt of a smooth one-parameter family ft of piecewise expanding interval maps is not always Lipschitz (Baladi 2007 Commun. Math. Phys. 275 839-59, Mazzolena 2007 Master's Thesis Rome 2, Tor Vergata). We prove that if ft is tangent to the topological class of f, and if ∂t ft|t = 0 = X circle f, then R(t) is differentiable at zero, and R'(0) coincides with the resummation proposed (Baladi 2007) of the (a priori divergent) series \\sum_{n=0}^\\infty \\int X(y) \\partial_y (\\varphi \\circ f^n)(y)\\,\\rmd \\mu_0(y) given by Ruelle's conjecture. In fact, we show that t map μt is differentiable within Radon measures. Linear response is violated if and only if ft is transversal to the topological class of f.
NASA Astrophysics Data System (ADS)
Ramirez, Andres; Rahnemoonfar, Maryam
2017-04-01
A hyperspectral image provides multidimensional figure rich in data consisting of hundreds of spectral dimensions. Analyzing the spectral and spatial information of such image with linear and non-linear algorithms will result in high computational time. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyzing a hyperspectral image through the usage of parallel hardware and a parallel programming model, which will be simpler to handle compared to other low-level parallel programming models. Additionally, Hadoop was used as an open-source version of the MapReduce parallel programming model. This research compared classification accuracy results and timing results between the Hadoop and GPU system and tested it against the following test cases: the CPU and GPU test case, a CPU test case and a test case where no dimensional reduction was applied.
A model of the extent and distribution of woody linear features in rural Great Britain.
Scholefield, Paul; Morton, Dan; Rowland, Clare; Henrys, Peter; Howard, David; Norton, Lisa
2016-12-01
Hedges and lines of trees (woody linear features) are important boundaries that connect and enclose habitats, buffer the effects of land management, and enhance biodiversity in increasingly impoverished landscapes. Despite their acknowledged importance in the wider countryside, they are usually not considered in models of landscape function due to their linear nature and the difficulties of acquiring relevant data about their character, extent, and location. We present a model which uses national datasets to describe the distribution of woody linear features along boundaries in Great Britain. The method can be applied for other boundary types and in other locations around the world across a range of spatial scales where different types of linear feature can be separated using characteristics such as height or width. Satellite-derived Land Cover Map 2007 (LCM2007) provided the spatial framework for locating linear features and was used to screen out areas unsuitable for their occurrence, that is, offshore, urban, and forest areas. Similarly, Ordnance Survey Land-Form PANORAMA®, a digital terrain model, was used to screen out where they do not occur. The presence of woody linear features on boundaries was modelled using attributes from a canopy height dataset obtained by subtracting a digital terrain map (DTM) from a digital surface model (DSM). The performance of the model was evaluated against existing woody linear feature data in Countryside Survey across a range of scales. The results indicate that, despite some underestimation, this simple approach may provide valuable information on the extents and locations of woody linear features in the countryside at both local and national scales.
NASA Astrophysics Data System (ADS)
Aurière, M.; López Ariste, A.; Mathias, P.; Lèbre, A.; Josselin, E.; Montargès, M.; Petit, P.; Chiavassa, A.; Paletou, F.; Fabas, N.; Konstantinova-Antova, R.; Donati, J.-F.; Grunhut, J. H.; Wade, G. A.; Herpin, F.; Kervella, P.; Perrin, G.; Tessore, B.
2016-06-01
Context. Betelgeuse is an M supergiant that harbors spots and giant granules at its surface and presents linear polarization of its continuum. Aims: We have previously discovered linear polarization signatures associated with individual lines in the spectra of cool and evolved stars. Here, we investigate whether a similar linearly polarized spectrum exists for Betelgeuse. Methods: We used the spectropolarimeter Narval, combining multiple polarimetric sequences to obtain high signal-to-noise ratio spectra of individual lines, as well as the least-squares deconvolution (LSD) approach, to investigate the presence of an averaged linearly polarized profile for the photospheric lines. Results: We have discovered the existence of a linearly polarized spectrum for Betelgeuse, detecting a rather strong signal (at a few times 10-4 of the continuum intensity level), both in individual lines and in the LSD profiles. Studying its properties and the signal observed for the resonant Na I D lines, we conclude that we are mainly observing depolarization of the continuum by the absorption lines. The linear polarization of the Betelgeuse continuum is due to the anisotropy of the radiation field induced by brightness spots at the surface and Rayleigh scattering in the atmosphere. We have developed a geometrical model to interpret the observed polarization, from which we infer the presence of two brightness spots and their positions on the surface of Betelgeuse. We show that applying the model to each velocity bin along the Stokes Q and U profiles allows the derivation of a map of the bright spots. We use the Narval linear polarization observations of Betelgeuse obtained over a period of 1.4 yr to study the evolution of the spots and of the atmosphere. Conclusions: Our study of the linearly polarized spectrum of Betelgeuse provides a novel method for studying the evolution of brightness spots at its surface and complements quasi-simultaneous observations obtained with PIONIER at the VLTI. Based on observations obtained at the Télescope Bernard Lyot (TBL) at Observatoire du Pic du Midi, CNRS/INSU and Université de Toulouse, France.
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Dennon, S. R.
1986-01-01
A review of the Melick method of inlet flow dynamic distortion prediction by statistical means is provided. These developments include the general Melick approach with full dynamic measurements, a limited dynamic measurement approach, and a turbulence modelling approach which requires no dynamic rms pressure fluctuation measurements. These modifications are evaluated by comparing predicted and measured peak instantaneous distortion levels from provisional inlet data sets. A nonlinear mean-line following vortex model is proposed and evaluated as a potential criterion for improving the peak instantaneous distortion map generated from the conventional linear vortex of the Melick method. The model is simplified to a series of linear vortex segments which lay along the mean line. Maps generated with this new approach are compared with conventionally generated maps, as well as measured peak instantaneous maps. Inlet data sets include subsonic, transonic, and supersonic inlets under various flight conditions.
NASA Astrophysics Data System (ADS)
Sunyaev, Rashid A.; Khatri, Rishi
2013-03-01
y-type spectral distortions of the cosmic microwave background allow us to detect clusters and groups of galaxies, filaments of hot gas and the non-uniformities in the warm hot intergalactic medium. Several CMB experiments (on small areas of sky) and theoretical groups (for full sky) have recently published y-type distortion maps. We propose to search for two artificial hot spots in such y-type maps resulting from the incomplete subtraction of the effect of the motion induced dipole on the cosmic microwave background sky. This dipole introduces, at second order, additional temperature and y-distortion anisotropy on the sky of amplitude few μK which could potentially be measured by Planck HFI and Pixie experiments and can be used as a source of cross channel calibration by CMB experiments. This y-type distortion is present in every pixel and is not the result of averaging the whole sky. This distortion, calculated exactly from the known linear dipole, can be subtracted from the final y-type maps, if desired.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunyaev, Rashid A.; Khatri, Rishi, E-mail: sunyaev@mpa-garching.mpg.de, E-mail: khatri@mpa-garching.mpg.de
2013-03-01
y-type spectral distortions of the cosmic microwave background allow us to detect clusters and groups of galaxies, filaments of hot gas and the non-uniformities in the warm hot intergalactic medium. Several CMB experiments (on small areas of sky) and theoretical groups (for full sky) have recently published y-type distortion maps. We propose to search for two artificial hot spots in such y-type maps resulting from the incomplete subtraction of the effect of the motion induced dipole on the cosmic microwave background sky. This dipole introduces, at second order, additional temperature and y-distortion anisotropy on the sky of amplitude few μKmore » which could potentially be measured by Planck HFI and Pixie experiments and can be used as a source of cross channel calibration by CMB experiments. This y-type distortion is present in every pixel and is not the result of averaging the whole sky. This distortion, calculated exactly from the known linear dipole, can be subtracted from the final y-type maps, if desired.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics
Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force in the sense that allows the approximation of the potential of mean force under both linear and non-linear coarse grainingmore » mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization which we derive for general non-linear coarse graining maps. We present in detail the generalized force matching condition through applications to specific examples in molecular systems.« less
Reliable two-dimensional phase unwrapping method using region growing and local linear estimation.
Zhou, Kun; Zaitsev, Maxim; Bao, Shanglian
2009-10-01
In MRI, phase maps can provide useful information about parameters such as field inhomogeneity, velocity of blood flow, and the chemical shift between water and fat. As phase is defined in the (-pi,pi] range, however, phase wraps often occur, which complicates image analysis and interpretation. This work presents a two-dimensional phase unwrapping algorithm that uses quality-guided region growing and local linear estimation. The quality map employs the variance of the second-order partial derivatives of the phase as the quality criterion. Phase information from unwrapped neighboring pixels is used to predict the correct phase of the current pixel using a linear regression method. The algorithm was tested on both simulated and real data, and is shown to successfully unwrap phase images that are corrupted by noise and have rapidly changing phase. (c) 2009 Wiley-Liss, Inc.
Azimuthally invariant Mueller-matrix mapping of biological optically anisotropic network
NASA Astrophysics Data System (ADS)
Ushenko, Yu. O.; Vanchuliak, O.; Bodnar, G. B.; Ushenko, V. O.; Grytsyuk, M.; Pavlyukovich, N.; Pavlyukovich, O. V.; Antonyuk, O.
2017-09-01
A new technique of Mueller-matrix mapping of polycrystalline structure of histological sections of biological tissues is suggested. The algorithms of reconstruction of distribution of parameters of linear and circular dichroism of histological sections liver tissue of mice with different degrees of severity of diabetes are found. The interconnections between such distributions and parameters of linear and circular dichroism of liver of mice tissue histological sections are defined. The comparative investigations of coordinate distributions of parameters of amplitude anisotropy formed by Liver tissue with varying severity of diabetes (10 days and 24 days) are performed. The values and ranges of change of the statistical (moments of the 1st - 4th order) parameters of coordinate distributions of the value of linear and circular dichroism are defined. The objective criteria of cause of the degree of severity of the diabetes differentiation are determined.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher
2002-12-01
Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.
Mapping Soil pH Buffering Capacity of Selected Fields
NASA Technical Reports Server (NTRS)
Weaver, A. R.; Kissel, D. E.; Chen, F.; West, L. T.; Adkins, W.; Rickman, D.; Luvall, J. C.
2003-01-01
Soil pH buffering capacity, since it varies spatially within crop production fields, may be used to define sampling zones to assess lime requirement, or for modeling changes in soil pH when acid forming fertilizers or manures are added to a field. Our objective was to develop a procedure to map this soil property. One hundred thirty six soil samples (0 to 15 cm depth) from three Georgia Coastal Plain fields were titrated with calcium hydroxide to characterize differences in pH buffering capacity of the soils. Since the relationship between soil pH and added calcium hydroxide was approximately linear for all samples up to pH 6.5, the slope values of these linear relationships for all soils were regressed on the organic C and clay contents of the 136 soil samples using multiple linear regression. The equation that fit the data best was b (slope of pH vs. lime added) = 0.00029 - 0.00003 * % clay + 0.00135 * % O/C, r(exp 2) = 0.68. This equation was applied within geographic information system (GIS) software to create maps of soil pH buffering capacity for the three fields. When the mapped values of the pH buffering capacity were compared with measured values for a total of 18 locations in the three fields, there was good general agreement. A regression of directly measured pH buffering capacities on mapped pH buffering capacities at the field locations for these samples gave an r(exp 2) of 0.88 with a slope of 1.04 for a group of soils that varied approximately tenfold in their pH buffering capacities.
Saliency detection using mutual consistency-guided spatial cues combination
NASA Astrophysics Data System (ADS)
Wang, Xin; Ning, Chen; Xu, Lizhong
2015-09-01
Saliency detection has received extensive interests due to its remarkable contribution to wide computer vision and pattern recognition applications. However, most existing computational models are designed for detecting saliency in visible images or videos. When applied to infrared images, they may suffer from limitations in saliency detection accuracy and robustness. In this paper, we propose a novel algorithm to detect visual saliency in infrared images by mutual consistency-guided spatial cues combination. First, based on the luminance contrast and contour characteristics of infrared images, two effective saliency maps, i.e., the luminance contrast saliency map and contour saliency map are constructed, respectively. Afterwards, an adaptive combination scheme guided by mutual consistency is exploited to integrate these two maps to generate the spatial saliency map. This idea is motivated by the observation that different maps are actually related to each other and the fusion scheme should present a logically consistent view of these maps. Finally, an enhancement technique is adopted to incorporate spatial saliency maps at various scales into a unified multi-scale framework to improve the reliability of the final saliency map. Comprehensive evaluations on real-life infrared images and comparisons with many state-of-the-art saliency models demonstrate the effectiveness and superiority of the proposed method for saliency detection in infrared images.
NASA Astrophysics Data System (ADS)
Kim, Soo Jeong; Lee, Dong Hyuk; Song, Inchang; Kim, Nam Gook; Park, Jae-Hyeung; Kim, JongHyo; Han, Man Chung; Min, Byong Goo
1998-07-01
Phase-contrast (PC) method of magnetic resonance imaging (MRI) has bee used for quantitative measurements of flow velocity and volume flow rate. It is a noninvasive technique which provides an accurate two-dimensional velocity image. Moreover, Phase Contrast Cine magnetic resonance imaging combines the flow dependent contrast of PC-MRI with the ability of cardiac cine imaging to produce images throughout the cardiac cycle. However, the accuracy of the data acquired from the single through-plane velocity encoding can be reduced by the effect of flow direction, because in many practical cases flow directions are not uniform throughout the whole region of interest. In this study, we present dynamic three-dimensional velocity vector mapping method using PC-MRI which can visualize the complex flow pattern through 3D volume rendered images displayed dynamically. The direction of velocity mapping can be selected along any three orthogonal axes. By vector summation, the three maps can be combined to form a velocity vector map that determines the velocity regardless of the flow direction. At the same time, Cine method is used to observe the dynamic change of flow. We performed a phantom study to evaluate the accuracy of the suggested PC-MRI in continuous and pulsatile flow measurement. Pulsatile flow wave form is generated by the ventricular assistant device (VAD), HEMO-PULSA (Biomedlab, Seoul, Korea). We varied flow velocity, pulsatile flow wave form, and pulsing rate. The PC-MRI-derived velocities were compared with Doppler-derived results. The velocities of the two measurements showed a significant linear correlation. Dynamic three-dimensional velocity vector mapping was carried out for two cases. First, we applied to the flow analysis around the artificial heart valve in a flat phantom. We could observe the flow pattern around the valve through the 3-dimensional cine image. Next, it is applied to the complex flow inside the polymer sac that is used as ventricle in totally implantable artificial heart (TAH). As a result we could observe the flow pattern around the valves of the sac, though complex flow can not be detected correctly in the conventional phase contrast method. In addition, we could calculate the cardiac output from TAH sac by quantitative measurement of the volume of flow across the outlet valve.
NASA Astrophysics Data System (ADS)
Wang, Yanjie; Liao, Qinhong; Yang, Guijun; Feng, Haikuan; Yang, Xiaodong; Yue, Jibo
2016-06-01
In recent decades, many spectral vegetation indices (SVIs) have been proposed to estimate the leaf nitrogen concentration (LNC) of crops. However, most of these indices were based on the field hyperspectral reflectance. To test whether they can be used in aerial remote platform effectively, in this work a comparison of the sensitivity between several broad-band and red edge-based SVIs to LNC is investigated over different crop types. By using data from experimental LNC values over 4 different crop types and image data acquired using the Compact Airborne Spectrographic Imager (CASI) sensor, the extensive dataset allowed us to evaluate broad-band and red edge-based SVIs. The result indicated that NDVI performed the best among the selected SVIs while red edge-based SVIs didn't show the potential for estimating the LNC based on the CASI data due to the spectral resolution. In order to search for the optimal SVIs, the band combination algorithm has been used in this work. The best linear correlation against the experimental LNC dataset was obtained by combining the 626.20nm and 569.00nm wavebands. These wavelengths correspond to the maximal chlorophyll absorption and reflection position region, respectively, and are known to be sensitive to the physiological status of the plant. Then this linear relationship was applied to the CASI image for generating an LNC map, which can guide farmers in the accurate application of their N fertilization strategies.
An empirical-statistical model for laser cladding of Ti-6Al-4V powder on Ti-6Al-4V substrate
NASA Astrophysics Data System (ADS)
Nabhani, Mohammad; Razavi, Reza Shoja; Barekat, Masoud
2018-03-01
In this article, Ti-6Al-4V powder alloy was directly deposited on Ti-6Al-4V substrate using laser cladding process. In this process, some key parameters such as laser power (P), laser scanning rate (V) and powder feeding rate (F) play important roles. Using linear regression analysis, this paper develops the empirical-statistical relation between these key parameters and geometrical characteristics of single clad tracks (i.e. clad height, clad width, penetration depth, wetting angle, and dilution) as a combined parameter (PαVβFγ). The results indicated that the clad width linearly depended on PV-1/3 and powder feeding rate had no effect on it. The dilution controlled by a combined parameter as VF-1/2 and laser power was a dispensable factor. However, laser power was the dominant factor for the clad height, penetration depth, and wetting angle so that they were proportional to PV-1F1/4, PVF-1/8, and P3/4V-1F-1/4, respectively. Based on the results of correlation coefficient (R > 0.9) and analysis of residuals, it was confirmed that these empirical-statistical relations were in good agreement with the measured values of single clad tracks. Finally, these relations led to the design of a processing map that can predict the geometrical characteristics of the single clad tracks based on the key parameters.
Al-Abadi, Alaa M; Shahid, Shamsuddin
2015-09-01
In this study, index of entropy and catastrophe theory methods were used for demarcating groundwater potential in an arid region using weighted linear combination techniques in geographical information system (GIS) environment. A case study from Badra area in the eastern part of central of Iraq was analyzed and discussed. Six factors believed to have influence on groundwater occurrence namely elevation, slope, aquifer transmissivity and storativity, soil, and distance to fault were prepared as raster thematic layers to facility integration into GIS environment. The factors were chosen based on the availability of data and local conditions of the study area. Both techniques were used for computing weights and assigning ranks vital for applying weighted linear combination approach. The results of application of both modes indicated that the most influential groundwater occurrence factors were slope and elevation. The other factors have relatively smaller values of weights implying that these factors have a minor role in groundwater occurrence conditions. The groundwater potential index (GPI) values for both models were classified using natural break classification scheme into five categories: very low, low, moderate, high, and very high. For validation of generated GPI, the relative operating characteristic (ROC) curves were used. According to the obtained area under the curve, the catastrophe model with 78 % prediction accuracy was found to perform better than entropy model with 77 % prediction accuracy. The overall results indicated that both models have good capability for predicting groundwater potential zones.
The Primordial Inflation Explorer (PIXIE)
NASA Technical Reports Server (NTRS)
Kogut, Alan; Chuss, David T.; Dotson, Jessie; Dwek, Eli; Fixsen, Dale J.; Halpern, Mark; Hinshaw, Gary F.; Meyer, Stephan; Moseley, S. Harvey; Seiffert, Michael D.;
2014-01-01
The Primordial Inflation Explorer is an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded non-imaging optics feed a polarizing Fourier Transform Spectrometer to produce a set of interference fringes, proportional to the difference spectrum between orthogonal linear polarizations from the two input beams. Multiple levels of symmetry and signal modulation combine to reduce the instrumental signature and confusion from unpolarized sources to negligible levels. PIXIE will map the full sky in Stokes I, Q, and U parameters with angular resolution 2.6 deg and sensitivity 0.2 µK per 1 deg square pixel. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r less than 10(exp -3) at 5 standard deviations. In addition, PIXIE will measure the absolute frequency spectrum to constrain physical processes ranging from inflation to the nature of the first stars to the physical conditions within the interstellar medium of the Galaxy. We describe the PIXIE instrument and mission architecture with an emphasis on the expected level of systematic error suppression.
Filtering Non-Linear Transfer Functions on Surfaces.
Heitz, Eric; Nowrouzezahrai, Derek; Poulin, Pierre; Neyret, Fabrice
2014-07-01
Applying non-linear transfer functions and look-up tables to procedural functions (such as noise), surface attributes, or even surface geometry are common strategies used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient transfer function filtering remains an open problem for several reasons: transfer functions are complex and non-linear, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on the fly, yielding very fast performance. We investigate the case where the transfer function to filter is a color map applied to (macroscale) surface textures (like noise), as well as color maps applied according to (microscale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our approach can be generalized to filter other physical-based rendering quantities. We propose an application to shading with irradiance environment maps over large terrains. Our framework is also compatible with the case of transfer functions used to warp surface geometry, as long as the transformations can be represented with Gaussian statistics, leading to proper view- and light-dependent filtering results. Our results match ground truth and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TVCG.2013.102), is high performance, and has a negligible memory footprint.
Exploring cosmic origins with CORE: Gravitational lensing of the CMB
NASA Astrophysics Data System (ADS)
Challinor, A.; Allison, R.; Carron, J.; Errard, J.; Feeney, S.; Kitching, T.; Lesgourgues, J.; Lewis, A.; Zubeldía, Í.; Achucarro, A.; Ade, P.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartlett, J.; Bartolo, N.; Basak, S.; Baumann, D.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Bouchet, F.; Boulanger, F.; Brinckmann, T.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C.-S.; Castellano, G.; Chluba, J.; Clesse, S.; Colantoni, I.; Coppolecchia, A.; Crook, M.; d'Alessandro, G.; de Bernardis, P.; de Gasperis, G.; De Zotti, G.; Delabrouille, J.; Di Valentino, E.; Diego, J.-M.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; Forastieri, F.; Galli, S.; Genova-Santos, R.; Gerbino, M.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hernandez-Monteagudo, C.; Hervías-Caimapo, C.; Hills, M.; Hivon, E.; Kiiveri, K.; Kisner, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lattanzi, M.; Liguori, M.; Lindholm, V.; López-Caniego, M.; Luzzi, G.; Maffei, B.; Martinez-González, E.; Martins, C. J. A. P.; Masi, S.; Matarrese, S.; McCarthy, D.; Melchiorri, A.; Melin, J.-B.; Molinari, D.; Monfardini, A.; Natoli, P.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Poulin, V.; Quartin, M.; Remazeilles, M.; Roman, M.; Rubino-Martin, J.-A.; Salvati, L.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Valiviita, J.; Van de Weijgaert, R.; van Tent, B.; Vennin, V.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
Lensing of the cosmic microwave background (CMB) is now a well-developed probe of the clustering of the large-scale mass distribution over a broad range of redshifts. By exploiting the non-Gaussian imprints of lensing in the polarization of the CMB, the CORE mission will allow production of a clean map of the lensing deflections over nearly the full-sky. The number of high-S/N modes in this map will exceed current CMB lensing maps by a factor of 40, and the measurement will be sample-variance limited on all scales where linear theory is valid. Here, we summarise this mission product and discuss the science that will follow from its power spectrum and the cross-correlation with other clustering data. For example, the summed mass of neutrinos will be determined to an accuracy of 17 meV combining CORE lensing and CMB two-point information with contemporaneous measurements of the baryon acoustic oscillation feature in the clustering of galaxies, three times smaller than the minimum total mass allowed by neutrino oscillation measurements. Lensing has applications across many other science goals of CORE, including the search for B-mode polarization from primordial gravitational waves. Here, lens-induced B-modes will dominate over instrument noise, limiting constraints on the power spectrum amplitude of primordial gravitational waves. With lensing reconstructed by CORE, one can "delens" the observed polarization internally, reducing the lensing B-mode power by 60 %. This can be improved to 70 % by combining lensing and measurements of the cosmic infrared background from CORE, leading to an improvement of a factor of 2.5 in the error on the amplitude of primordial gravitational waves compared to no delensing (in the null hypothesis of no primordial B-modes). Lensing measurements from CORE will allow calibration of the halo masses of the tens of thousands of galaxy clusters that it will find, with constraints dominated by the clean polarization-based estimators. The 19 frequency channels proposed for CORE will allow accurate removal of Galactic emission from CMB maps. We present initial findings that show that residual Galactic foreground contamination will not be a significant source of bias for lensing power spectrum measurements with CORE.
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
NASA Astrophysics Data System (ADS)
Růžek, B.; Kolář, P.
2009-04-01
Solution of inverse problems represents meaningful job in geophysics. The amount of data is continuously increasing, methods of modeling are being improved and the computer facilities are also advancing great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling is still up to date. ANNIT is contributing to this stream since it is a tool for efficient solution of a set of non-linear equations. Typical geophysical problems are based on parametric approach. The system is characterized by a vector of parameters p, the response of the system is characterized by a vector of data d. The forward problem is usually represented by unique mapping F(p)=d. The inverse problem is much more complex and the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally and generally it may not exist at all. Technically, both forward and inverse mapping F and G are sets of non-linear equations. ANNIT solves such situation as follows: (i) joint subspaces {pD, pM} of original data and model spaces D, M, resp. are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist, (ii) numerical approximation of G in subspaces {pD, pM} is found, (iii) candidate solution is predicted by using this numerical approximation. ANNIT is working in an iterative way in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering data and model spaces. The approximation of the inverse mapping is made by using three methods: (a) linear regression, (b) Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm has built in also an archive of already evaluated models. Archive models are re-used in a suitable way and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and documentation are available on Internet and anybody can download them. The goal of this presentation is to offer the algorithm and computer codes for anybody interested in the solution to inverse problems.
Papadia, Andrea; Gasparri, Maria Luisa; Genoud, Sophie; Bernd, Klaeser; Mueller, Michael D
2017-11-01
The aim of the study was to evaluate the use of PET/CT and/or SLN mapping alone or in combination in cervical cancer patients. Data on stage IA1-IIA cervical cancer patients undergoing PET/CT and SLN mapping were retrospectively collected. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of PET/CT and SLN mapping, alone or in combination, in identifying cervical cancer patients with lymph node metastases were calculated. Sixty patients met the inclusion criteria. PET/CT showed a sensitivity of 68%, a specificity of 84%, a PPV of 61% and a NPV of 88% in detecting lymph nodal metastases. SLN mapping showed a sensitivity of 93%, a specificity of 100%, a PPV of 100% and a NPV of 97%. The combination of PET/CT and SLN mapping showed a sensitivity of 100%, a specificity of 86%, a PPV of 72% and a NPV of 100%. For patients with tumors of >2 cm in diameter, the PET/CT showed a sensitivity of 68%, a specificity of 72%, a PPV of 61% and a NPV of 86%. SLN mapping showed a sensitivity of 93%, a specificity of 100%, a PPV of 100% and a NPV of 95%. The combination of PET/CT and SLN mapping showed a sensitivity of 100%, a specificity of 76%, a PPV of 72% and a NPV of 100%. PET/CT represents a "safety net" that helps the surgeon in identifying metastatic lymph nodes, especially in patients with larger tumors.
High-Resolution Global Geologic Map of Ceres from NASA Dawn Mission
NASA Astrophysics Data System (ADS)
Williams, D. A.; Buczkowski, D. L.; Crown, D. A.; Frigeri, A.; Hughson, K.; Kneissl, T.; Krohn, K.; Mest, S. C.; Pasckert, J. H.; Platz, T.; Ruesch, O.; Schulzeck, F.; Scully, J. E. C.; Sizemore, H. G.; Nass, A.; Jaumann, R.; Raymond, C. A.; Russell, C. T.
2018-06-01
This presentation will discuss the completed 1:4,000,000 global geologic map of dwarf planet Ceres derived from Dawn Framing Camera Low Altitude Mapping Orbit (LAMo) images, combining 15 quadrangle maps.
Tackling non-linearities with the effective field theory of dark energy and modified gravity
NASA Astrophysics Data System (ADS)
Frusciante, Noemi; Papadomanolakis, Georgios
2017-12-01
We present the extension of the effective field theory framework to the mildly non-linear scales. The effective field theory approach has been successfully applied to the late time cosmic acceleration phenomenon and it has been shown to be a powerful method to obtain predictions about cosmological observables on linear scales. However, mildly non-linear scales need to be consistently considered when testing gravity theories because a large part of the data comes from those scales. Thus, non-linear corrections to predictions on observables coming from the linear analysis can help in discriminating among different gravity theories. We proceed firstly by identifying the necessary operators which need to be included in the effective field theory Lagrangian in order to go beyond the linear order in perturbations and then we construct the corresponding non-linear action. Moreover, we present the complete recipe to map any single field dark energy and modified gravity models into the non-linear effective field theory framework by considering a general action in the Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we proceed to map the beyond-Horndeski theory and low-energy Hořava gravity into the effective field theory formalism. As a final step we derived the 4th order action in term of the curvature perturbation. This allowed us to identify the non-linear contributions coming from the linear order perturbations which at the next order act like source terms. Moreover, we confirm that the stability requirements, ensuring the positivity of the kinetic term and the speed of propagation for scalar mode, are automatically satisfied once the viability of the theory is demanded at linear level. The approach we present here will allow to construct, in a model independent way, all the relevant predictions on observables at mildly non-linear scales.
Student Connections of Linear Algebra Concepts: An Analysis of Concept Maps
ERIC Educational Resources Information Center
Lapp, Douglas A.; Nyman, Melvin A.; Berry, John S.
2010-01-01
This article examines the connections of linear algebra concepts in a first course at the undergraduate level. The theoretical underpinnings of this study are grounded in the constructivist perspective (including social constructivism), Vernaud's theory of conceptual fields and Pirie and Kieren's model for the growth of mathematical understanding.…
Yourganov, Grigori; Schmah, Tanya; Churchill, Nathan W; Berman, Marc G; Grady, Cheryl L; Strother, Stephen C
2014-08-01
The field of fMRI data analysis is rapidly growing in sophistication, particularly in the domain of multivariate pattern classification. However, the interaction between the properties of the analytical model and the parameters of the BOLD signal (e.g. signal magnitude, temporal variance and functional connectivity) is still an open problem. We addressed this problem by evaluating a set of pattern classification algorithms on simulated and experimental block-design fMRI data. The set of classifiers consisted of linear and quadratic discriminants, linear support vector machine, and linear and nonlinear Gaussian naive Bayes classifiers. For linear discriminant, we used two methods of regularization: principal component analysis, and ridge regularization. The classifiers were used (1) to classify the volumes according to the behavioral task that was performed by the subject, and (2) to construct spatial maps that indicated the relative contribution of each voxel to classification. Our evaluation metrics were: (1) accuracy of out-of-sample classification and (2) reproducibility of spatial maps. In simulated data sets, we performed an additional evaluation of spatial maps with ROC analysis. We varied the magnitude, temporal variance and connectivity of simulated fMRI signal and identified the optimal classifier for each simulated environment. Overall, the best performers were linear and quadratic discriminants (operating on principal components of the data matrix) and, in some rare situations, a nonlinear Gaussian naïve Bayes classifier. The results from the simulated data were supported by within-subject analysis of experimental fMRI data, collected in a study of aging. This is the first study that systematically characterizes interactions between analysis model and signal parameters (such as magnitude, variance and correlation) on the performance of pattern classifiers for fMRI. Copyright © 2014 Elsevier Inc. All rights reserved.
Relation of the lunar volcano complexes lying on the identical linear gravity anomaly
NASA Astrophysics Data System (ADS)
Yamamoto, K.; Haruyama, J.; Ohtake, M.; Iwata, T.; Ishihara, Y.
2015-12-01
There are several large-scale volcanic complexes, e.g., Marius Hills, Aristarchus Plateau, Rumker Hills, and Flamsteed area in western Oceanus Procellarum of the lunar nearside. For better understanding of the lunar thermal history, it is important to study these areas intensively. The magmatisms and volcanic eruption mechanisms of these volcanic complexes have been discussed from geophysical and geochemical perspectives using data sets acquired by lunar explorers. In these data sets, precise gravity field data obtained by Gravity Recovery and Interior Laboratory (GRAIL) gives information on mass anomalies below the lunar surface, and useful to estimate location and mass of the embedded magmas. Using GRAIL data, Andrews-Hanna et al. (2014) prepared gravity gradient map of the Moon. They discussed the origin of the quasi-rectangular pattern of narrow linear gravity gradient anomalies located along the border of Oceanus Procellarum and suggested that the underlying dikes played important roles in magma plumbing system. In the gravity gradient map, we found that there are also several small linear gravity gradient anomaly patterns in the inside of the large quasi-rectangular pattern, and that one of the linear anomalies runs through multiple gravity anomalies in the vicinity of Aristarchus, Marius and Flamstead volcano complexes. Our concern is whether the volcanisms of these complexes are caused by common factors or not. To clarify this, we firstly estimated the mass and depth of the embedded magmas as well as the directions of the linear gravity anomalies. The results were interpreted by comparing with the chronological and KREEP distribution maps on the lunar surface. We suggested providing mechanisms of the magma to these regions and finally discussed whether the volcanisms of these multiple volcano complex regions are related with each other or not.
Overview of Multi-Kilowatt Free-Piston Stirling Power Conversion Research at Glenn Research Center
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Mason, Lee S.; Dyson, Rodger W.; Penswick, L. Barry
2008-01-01
As a step towards development of Stirling power conversion for potential use in Fission Surface Power (FSP) systems, a pair of commercially available 1 kW class free-piston Stirling convertors and a pair of commercially available pressure wave generators (which will be plumbed together to create a high power Stirling linear alternator test rig) have been procured for in-house testing at Glenn Research Center (GRC). Delivery of both the Stirling convertors and the linear alternator test rig is expected by October 2007. The 1 kW class free-piston Stirling convertors will be tested at GRC to map and verify performance. The convertors will later be modified to operate with a NaK liquid metal pumped loop for thermal energy input. The high power linear alternator test rig will be used to map and verify high power Stirling linear alternator performance and to develop power management and distribution (PMAD) methods and techniques. This paper provides an overview of the multi-kilowatt free-piston Stirling power conversion work being performed at GRC.
Fixing convergence of Gaussian belief propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jason K; Bickson, Danny; Dolev, Danny
Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges it converges to the correct MAP estimate of the Gaussian random vector and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm ismore » linked to solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari's linear detection algorithm, in cases where it would originally fail. As a consequence, we are able to increase significantly the number of users that can transmit concurrently.« less
Analyzing linear spatial features in ecology.
Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W
2018-06-01
The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of mathematical methods of physics to problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyzes of fiber processes in space. © 2018 by the Ecological Society of America.
Overview of Multi-kilowatt Free-Piston Stirling Power Conversion Research at GRC
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Mason, Lee S.; Dyson, Rodger W.; Penswick, L. Barry
2008-01-01
As a step towards development of Stirling power conversion for potential use in Fission Surface Power (FSP) systems, a pair of commercially available 1 kW class free-piston Stirling convertors and a pair of commercially available pressure wave generators (which will be plumbed together to create a high power Stirling linear alternator test rig) have been procured for in-house testing at Glenn Research Center. Delivery of both the Stirling convertors and the linear alternator test rig is expected by October, 2007. The 1 kW class free-piston Stirling convertors will be tested at GRC to map and verify performance. The convertors will later be modified to operate with a NaK liquid metal pumped loop for thermal energy input. The high power linear alternator test rig will be used to map and verify high power Stirling linear alternator performance and to develop power management and distribution (PMAD) methods and techniques. This paper provides an overview of the multi-kilowatt free-piston Stirling power conversion work being performed at GRC.
Overview of Multi-Kilowatt Free-Piston Stirling Power Conversion Research at GRC
NASA Astrophysics Data System (ADS)
Geng, Steven M.; Mason, Lee S.; Dyson, Rodger W.; Penswick, L. Barry
2008-01-01
As a step towards development of Stirling power conversion for potential use in Fission Surface Power (FSP) systems, a pair of commercially available 1 kW class free-piston Stirling convertors and a pair of commercially available pressure wave generators (which will be plumbed together to create a high power Stirling linear alternator test rig) have been procured for in-house testing at Glenn Research Center. Delivery of both the Stirling convertors and the linear alternator test rig is expected by October, 2007. The 1 kW class free-piston Stirling convertors will be tested at GRC to map and verify performance. The convertors will later be modified to operate with a NaK liquid metal pumped loop for thermal energy input. The high power linear alternator test rig will be used to map and verify high power Stirling linear alternator performance and to develop power management and distribution (PMAD) methods and techniques. This paper provides an overview of the multi-kilowatt free-piston Stirling power conversion work being performed at GRC.
Primate empathy: three factors and their combinations for empathy-related phenomena.
Yamamoto, Shinya
2017-05-01
Empathy as a research topic is receiving increasing attention, although there seems some confusion on the definition of empathy across different fields. Frans de Waal (de Waal FBM. Putting the altruism back into altruism: the evolution of empathy. Annu Rev Psychol 2008, 59:279-300. doi:10.1146/annurev.psych.59.103006.093625) used empathy as an umbrella term and proposed a comprehensive model for the evolution of empathy with some of its basic elements in nonhuman animals. In de Waal's model, empathy consists of several layers distinguished by required cognitive levels; the perception-action mechanism plays the core role for connecting ourself and others. Then, human-like empathy such as perspective-taking develops in outer layers according to cognitive sophistication, leading to prosocial acts such as targeted helping. I agree that animals demonstrate many empathy-related phenomena; however, the species differences and the level of cognitive sophistication of the phenomena might be interpreted in another way than this simple linearly developing model. Our recent studies with chimpanzees showed that their perspective-taking ability does not necessarily lead to proactive helping behavior. Herein, as a springboard for further studies, I reorganize the empathy-related phenomena by proposing a combination model instead of the linear development model. This combination model is composed of three organizing factors: matching with others, understanding of others, and prosociality. With these three factors and their combinations, most empathy-related matters can be categorized and mapped to appropriate context; this may be a good first step to discuss the evolution of empathy in relation to the neural connections in human and nonhuman animal brains. I would like to propose further comparative studies, especially from the viewpoint of Homo-Pan (chimpanzee and bonobo) comparison. WIREs Cogn Sci 2017, 8:e1431. doi: 10.1002/wcs.1431 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
Maps of the Magellanic clouds from combined South Pole Telescope and Planck data
Crawford, T. M.; Chown, R.; Holder, G. P.; ...
2016-12-09
Here, we present maps of the Large and Small Magellanic Clouds from combined South Pole Telescope (SPT) and Planck data. Both instruments are designed to make measurements of the cosmic microwave background but are sensitive to any source of millimeter-wave (mm-wave) emission. The Planck satellite observes in nine mm-wave bands, while the SPT data used in this work were taken with the three-band SPT-SZ camera. The SPT-SZ bands correspond closely to three of the nine Planck bands, namely those centered at 1.4, 2.1, and 3.0 mm. The angular resolution of the Planck data in these bands ranges from 5 tomore » 10 arcmin, while the SPT resolution in these bands ranges from 1.0 to 1.7 arcmin. The combined maps take advantage of the high resolution of the SPT data and the long-timescale stability of the space-based Planck observations to deliver high signal-to-noise and robust brightness measurements on scales from the size of the maps down to ~1 arcmin. In each of the three bands, we first calibrate and color-correct the SPT data to match the Planck data, then we use noise estimates from each instrument and knowledge of each instrument's beam, or point-spread function, to make the inverse-variance-weighted combination of the two instruments' data as a function of angular scale. Furthermore, we create maps assuming a range of underlying emission spectra (for the color correction) and at a range of final resolutions. We perform several consistency tests on the combined maps and estimate the expected noise in measurements of features in the maps. Finally, we compare the maps of the Large Magellanic Cloud (LMC) from this work to maps from the Herschel HERITAGE survey, finding general consistency between the datasets. Furthermore, the broad wavelength coverage provides evidence of different emission mechanisms at work in different environments in the LMC.« less
Fourier-based linear systems description of free-breathing pulmonary magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Capaldi, D. P. I.; Svenningsen, S.; Cunningham, I. A.; Parraga, G.
2015-03-01
Fourier-decomposition of free-breathing pulmonary magnetic resonance imaging (FDMRI) was recently piloted as a way to provide rapid quantitative pulmonary maps of ventilation and perfusion without the use of exogenous contrast agents. This method exploits fast pulmonary MRI acquisition of free-breathing proton (1H) pulmonary images and non-rigid registration to compensate for changes in position and shape of the thorax associated with breathing. In this way, ventilation imaging using conventional MRI systems can be undertaken but there has been no systematic evaluation of fundamental image quality measurements based on linear systems theory. We investigated the performance of free-breathing pulmonary ventilation imaging using a Fourier-based linear system description of each operation required to generate FDMRI ventilation maps. Twelve subjects with chronic obstructive pulmonary disease (COPD) or bronchiectasis underwent pulmonary function tests and MRI. Non-rigid registration was used to co-register the temporal series of pulmonary images. Pulmonary voxel intensities were aligned along a time axis and discrete Fourier transforms were performed on the periodic signal intensity pattern to generate frequency spectra. We determined the signal-to-noise ratio (SNR) of the FDMRI ventilation maps using a conventional approach (SNRC) and using the Fourier-based description (SNRF). Mean SNR was 4.7 ± 1.3 for subjects with bronchiectasis and 3.4 ± 1.8, for COPD subjects (p>.05). SNRF was significantly different than SNRC (p<.01). SNRF was approximately 50% of SNRC suggesting that the linear system model well-estimates the current approach.
Fisz, Jacek J
2006-12-07
The optimization approach based on the genetic algorithm (GA) combined with multiple linear regression (MLR) method, is discussed. The GA-MLR optimizer is designed for the nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only one strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates considerably the optimization process because the linear parameters are not the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of the optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of recently introduced the GA-NR optimizer to simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions, is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones and it does not apply to the model functions which are multi-linear combinations of nonlinear functions.
Ille, Sebastian; Sollmann, Nico; Hauck, Theresa; Maurer, Stefanie; Tanigawa, Noriko; Obermueller, Thomas; Negwer, Chiara; Droese, Doris; Zimmer, Claus; Meyer, Bernhard; Ringel, Florian; Krieg, Sandro M
2015-07-01
Repetitive navigated transcranial magnetic stimulation (rTMS) is now increasingly used for preoperative language mapping in patients with lesions in language-related areas of the brain. Yet its correlation with intraoperative direct cortical stimulation (DCS) has to be improved. To increase rTMS's specificity and positive predictive value, the authors aim to provide thresholds for rTMS's positive language areas. Moreover, they propose a protocol for combining rTMS with functional MRI (fMRI) to combine the strength of both methods. The authors performed multimodal language mapping in 35 patients with left-sided perisylvian lesions by using rTMS, fMRI, and DCS. The rTMS mappings were conducted with a picture-to-trigger interval (PTI, time between stimulus presentation and stimulation onset) of either 0 or 300 msec. The error rates (ERs; that is, the number of errors per number of stimulations) were calculated for each region of the cortical parcellation system (CPS). Subsequently, the rTMS mappings were analyzed through different error rate thresholds (ERT; that is, the ER at which a CPS region was defined as language positive in terms of rTMS), and the 2-out-of-3 rule (a stimulation site was defined as language positive in terms of rTMS if at least 2 out of 3 stimulations caused an error). As a second step, the authors combined the results of fMRI and rTMS in a predefined protocol of combined noninvasive mapping. To validate this noninvasive protocol, they correlated its results to DCS during awake surgery. The analysis by different rTMS ERTs obtained the highest correlation regarding sensitivity and a low rate of false positives for the ERTs of 15%, 20%, 25%, and the 2-out-of-3 rule. However, when comparing the combined fMRI and rTMS results with DCS, the authors observed an overall specificity of 83%, a positive predictive value of 51%, a sensitivity of 98%, and a negative predictive value of 95%. In comparison with fMRI, rTMS is a more sensitive but less specific tool for preoperative language mapping than DCS. Moreover, rTMS is most reliable when using ERTs of 15%, 20%, 25%, or the 2-out-of-3 rule and a PTI of 0 msec. Furthermore, the combination of fMRI and rTMS leads to a higher correlation to DCS than both techniques alone, and the presented protocols for combined noninvasive language mapping might play a supportive role in the language-mapping assessment prior to the gold-standard intraoperative DCS.
CRISM Multispectral and Hyperspectral Mapping Data - A Global Data Set for Hydrated Mineral Mapping
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Hash, C. D.; Murchie, S. L.; Lim, H.
2017-12-01
The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a visible through short-wave infrared hyperspectral imaging spectrometer (VNIR S-detector: 364-1055 nm; IR L-detector: 1001-3936 nm; 6.55 nm sampling) that has been in operation on the Mars Reconnaissance Orbiter (MRO) since 2006. Over the course of the MRO mission, CRISM has acquired 290,000 individual mapping observation segments (mapping strips) with a variety of observing modes and data characteristics (VNIR/IR; 100/200 m/pxl; multi-/hyper-spectral band selection) over a wide range of observing conditions (atmospheric state, observation geometry, instrument state). CRISM mapping data coverage density varies primarily with latitude and secondarily due to seasonal and operational considerations. The aggregate global IR mapping data coverage currently stands at 85% ( 80% at the equator with 40% repeat sampling), which is sufficient spatial sampling density to support the assembly of empirically optimized radiometrically consistent mapping mosaic products. The CRISM project has defined a number of mapping mosaic data products (e.g. Multispectral Reduced Data Record (MRDR) map tiles) with varying degrees of observation-specific processing and correction applied prior to mosaic assembly. A commonality among the mosaic products is the presence of inter-observation radiometric discrepancies which are traceable to variable observation circumstances or associated atmospheric/photometric correction residuals. The empirical approach to radiometric reconciliation leverages inter-observation spatial overlaps and proximal relationships to construct a graph that encodes the mosaic structure and radiometric discrepancies. The graph theory abstraction allows the underling structure of the msaic to be evaluated and the corresponding optimization problem configured so it is well-posed. Linear and non-linear least squares optimization is then employed to derive a set of observation- and wavelength- specific model parameters for a series of transform functions that minimize the total radiometric discrepancy across the mosaic. This empirical approach to CRISM data radiometric reconciliation and the utility of the resulting mapping data mosaic products for hydrated mineral mapping will be presented.
Grams, Vanessa; Wellmann, Robin; Preuß, Siegfried; Grashorn, Michael A; Kjaer, Jörgen B; Bessei, Werner; Bennewitz, Jörn
2015-09-30
Feather pecking (FP) in laying hens is a well-known and multi-factorial behaviour with a genetic background. In a selection experiment, two lines were developed for 11 generations for high (HFP) and low (LFP) feather pecking, respectively. Starting with the second generation of selection, there was a constant difference in mean number of FP bouts between both lines. We used the data from this experiment to perform a quantitative genetic analysis and to map selection signatures. Pedigree and phenotypic data were available for the last six generations of both lines. Univariate quantitative genetic analyses were conducted using mixed linear and generalized mixed linear models assuming a Poisson distribution. Selection signatures were mapped using 33,228 single nucleotide polymorphisms (SNPs) genotyped on 41 HFP and 34 LFP individuals of generation 11. For each SNP, we estimated Wright's fixation index (FST). We tested the null hypothesis that FST is driven purely by genetic drift against the alternative hypothesis that it is driven by genetic drift and selection. The mixed linear model failed to analyze the LFP data because of the large number of 0s in the observation vector. The Poisson model fitted the data well and revealed a small but continuous genetic trend in both lines. Most of the 17 genome-wide significant SNPs were located on chromosomes 3 and 4. Thirteen clusters with at least two significant SNPs within an interval of 3 Mb maximum were identified. Two clusters were mapped on chromosomes 3, 4, 8 and 19. Of the 17 genome-wide significant SNPs, 12 were located within the identified clusters. This indicates a non-random distribution of significant SNPs and points to the presence of selection sweeps. Data on FP should be analysed using generalised linear mixed models assuming a Poisson distribution, especially if the number of FP bouts is small and the distribution is heavily peaked at 0. The FST-based approach was suitable to map selection signatures that need to be confirmed by linkage or association mapping.
Graphical function mapping as a new way to explore cause-and-effect chains
Evans, Mary Anne
2016-01-01
Graphical function mapping provides a simple method for improving communication within interdisciplinary research teams and between scientists and nonscientists. This article introduces graphical function mapping using two examples and discusses its usefulness. Function mapping projects the outcome of one function into another to show the combined effect. Using this mathematical property in a simpler, even cartoon-like, graphical way allows the rapid combination of multiple information sources (models, empirical data, expert judgment, and guesses) in an intuitive visual to promote further discussion, scenario development, and clear communication.
Minimization for conditional simulation: Relationship to optimal transport
NASA Astrophysics Data System (ADS)
Oliver, Dean S.
2014-05-01
In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
Long-time uncertainty propagation using generalized polynomial chaos and flow map composition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.
2014-10-01
We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The compositionmore » of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.« less
GIS-based niche modeling for mapping species' habitats
Rotenberry, J.T.; Preston, K.L.; Knick, S.
2006-01-01
Ecological a??niche modelinga?? using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D2 (standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
Design Patterns to Achieve 300x Speedup for Oceanographic Analytics in the Cloud
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Greguska, F. R., III; Huang, T.; Quach, N.; Wilson, B. D.
2017-12-01
We describe how we achieve super-linear speedup over standard approaches for oceanographic analytics on a cluster computer and the Amazon Web Services (AWS) cloud. NEXUS is an open source platform for big data analytics in the cloud that enables this performance through a combination of horizontally scalable data parallelism with Apache Spark and rapid data search, subset, and retrieval with tiled array storage in cloud-aware NoSQL databases like Solr and Cassandra. NEXUS is the engine behind several public portals at NASA and OceanWorks is a newly funded project for the ocean community that will mature and extend this capability for improved data discovery, subset, quality screening, analysis, matchup of satellite and in situ measurements, and visualization. We review the Python language API for Spark and how to use it to quickly convert existing programs to use Spark to run with cloud-scale parallelism, and discuss strategies to improve performance. We explain how partitioning the data over space, time, or both leads to algorithmic design patterns for Spark analytics that can be applied to many different algorithms. We use NEXUS analytics as examples, including area-averaged time series, time averaged map, and correlation map.
Generalized Lee-Wick formulation from higher derivative field theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Inyong; Kwon, O-Kab; Department of Physics, BK21 Physics Research Division, Institute of Basic Science, Sungkyunkwan University, Suwon 440-746
2010-07-15
We study a higher derivative (HD) field theory with an arbitrary order of derivative for a real scalar field. The degree of freedom for the HD field can be converted to multiple fields with canonical kinetic terms up to the overall sign. The Lagrangian describing the dynamics of the multiple fields is known as the Lee-Wick (LW) form. The first step to obtain the LW form for a given HD Lagrangian is to find an auxiliary field (AF) Lagrangian which is equivalent to the original HD Lagrangian up to the quantum level. Until now, the AF Lagrangian has been studiedmore » only for N=2 and 3 cases, where N is the number of poles of the two-point function of the HD scalar field. We construct the AF Lagrangian for arbitrary N. By the linear combinations of AF fields, we also obtain the corresponding LW form. We find the explicit mapping matrices among the HD fields, the AF fields, and the LW fields. As an exercise of our construction, we calculate the relations among parameters and mapping matrices for N=2, 3, and 4 cases.« less
Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network
Wang, Zhongyuan; Wang, Lei; Ren, Yexian
2018-01-01
Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method’s practicality. Experimental results on “Jilin-1” satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods. PMID:29652838
Measurement of CIB power spectra over large sky areas from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Daisy Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2017-04-01
We present new measurements of the power spectra of the cosmic infrared background (CIB) anisotropies using the Planck 2015 full-mission High frequency instrument data at 353, 545 and 857 GHz over 20 000 deg2. We use techniques similar to those applied for the cosmological analysis of Planck, subtracting dust emission at the power spectrum level. Our analysis gives stable solutions for the CIB power spectra with increasing sky coverage up to about 50 per cent of the sky. These spectra agree well with H I-cleaned spectra from Planck measured on much smaller areas of sky with low Galactic dust emission. At 545 and 857 GHz, our CIB spectra agree well with those measured from Herschel data. We find that the CIB spectra at ℓ ≳ 500 are well fitted by a power-law model for the clustered CIB, with a shallow index γcib = 0.53 ± 0.02. This is consistent with the CIB results at 217 GHz from the cosmological parameter analysis of Planck. We show that a linear combination of the 545 and 857 GHz Planck maps is dominated by the CIB fluctuations at multipoles ℓ ≳ 300.
Super-Resolution for "Jilin-1" Satellite Video Imagery via a Convolutional Network.
Xiao, Aoran; Wang, Zhongyuan; Wang, Lei; Ren, Yexian
2018-04-13
Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method's practicality. Experimental results on "Jilin-1" satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods.
Serre duality, Abel's theorem, and Jacobi inversion for supercurves over a thick superpoint
NASA Astrophysics Data System (ADS)
Rothstein, Mitchell J.; Rabin, Jeffrey M.
2015-04-01
The principal aim of this paper is to extend Abel's theorem to the setting of complex supermanifolds of dimension 1 | q over a finite-dimensional local supercommutative C-algebra. The theorem is proved by establishing a compatibility of Serre duality for the supercurve with Poincaré duality on the reduced curve. We include an elementary algebraic proof of the requisite form of Serre duality, closely based on the account of the reduced case given by Serre in Algebraic groups and class fields, combined with an invariance result for the topology on the dual of the space of répartitions. Our Abel map, taking Cartier divisors of degree zero to the dual of the space of sections of the Berezinian sheaf, modulo periods, is defined via Penkov's characterization of the Berezinian sheaf as the cohomology of the de Rham complex of the sheaf D of differential operators. We discuss the Jacobi inversion problem for the Abel map and give an example demonstrating that if n is an integer sufficiently large that the generic divisor of degree n is linearly equivalent to an effective divisor, this need not be the case for all divisors of degree n.
Elad, M; Feuer, A
1997-01-01
The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Despite surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains a ill-posed inverse problem. A widely used approach to circumvent the illness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach of inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unkown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probabilities calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Monte Carlo Markov Chain (MCMC) sampling is shown for a synthetic example and a real case for interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need of computer power is largely reduced. Second, unlike Bayesian MCMC based approach, marginal pdf, mean, variance or covariance are obtained independently one from each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
Tamura, Yukie; Ogawa, Hiroshi; Kapeller, Christoph; Prueckl, Robert; Takeuchi, Fumiya; Anei, Ryogo; Ritaccio, Anthony; Guger, Christoph; Kamada, Kyousuke
2016-12-01
OBJECTIVE Electrocortical stimulation (ECS) is the gold standard for functional brain mapping; however, precise functional mapping is still difficult in patients with language deficits. High gamma activity (HGA) between 80 and 140 Hz on electrocorticography is assumed to reflect localized cortical processing, whereas the cortico-cortical evoked potential (CCEP) can reflect bidirectional responses evoked by monophasic pulse stimuli to the language cortices when there is no patient cooperation. The authors propose the use of "passive" mapping by combining HGA mapping and CCEP recording without active tasks during conscious resections of brain tumors. METHODS Five patients, each with an intraaxial tumor in their dominant hemisphere, underwent conscious resection of their lesion with passive mapping. The authors performed functional localization for the receptive language area, using real-time HGA mapping, by listening passively to linguistic sounds. Furthermore, single electrical pulses were delivered to the identified receptive temporal language area to detect CCEPs in the frontal lobe. All mapping results were validated by ECS, and the sensitivity and specificity were evaluated. RESULTS Linguistic HGA mapping quickly identified the language area in the temporal lobe. Electrical stimulation by linguistic HGA mapping to the identified temporal receptive language area evoked CCEPs on the frontal lobe. The combination of linguistic HGA and frontal CCEPs needed no patient cooperation or effort. In this small case series, the sensitivity and specificity were 93.8% and 89%, respectively. CONCLUSIONS The described technique allows for simple and quick functional brain mapping with higher sensitivity and specificity than ECS mapping. The authors believe that this could improve the reliability of functional brain mapping and facilitate rational and objective operations. Passive mapping also sheds light on the underlying physiological mechanisms of language in the human brain.
A note on chaotic unimodal maps and applications.
Zhou, C T; He, X T; Yu, M Y; Chew, L Y; Wang, X G
2006-09-01
Based on the word-lift technique of symbolic dynamics of one-dimensional unimodal maps, we investigate the relation between chaotic kneading sequences and linear maximum-length shift-register sequences. Theoretical and numerical evidence that the set of the maximum-length shift-register sequences is a subset of the set of the universal sequence of one-dimensional chaotic unimodal maps is given. By stabilizing unstable periodic orbits on superstable periodic orbits, we also develop techniques to control the generation of long binary sequences.
Simultaneous Luminescence Pressure and Temperature Mapping
NASA Technical Reports Server (NTRS)
Buck, Gregory M. (Inventor)
1998-01-01
A simultaneous luminescence pressure and temperature mapping system is developed including improved dye application techniques for surface temperature and pressure measurements from 5 torr to 1000 torr with possible upgrade to from 0.5 torr to several atmospheres with improved camera resolution. Adsorbed perylene dye on slip-cast silica is pressure (oxygen) sensitive and reusable to relatively high temperatures (-150 C). Adsorbed luminescence has an approximately linear color shift with temperature, which can be used for independent temperature mapping and brightness pressure calibration with temperature.
Simultaneous Luminescence Pressure and Temperature Mapping System
NASA Technical Reports Server (NTRS)
Buck, Gregory M. (Inventor)
1995-01-01
A simultaneous luminescence pressure and temperature mapping system is developed including improved dye application techniques for surface temperature and pressure measurements from 5 torr to 1000 torr with possible upgrade to from 0.5 torr to several atmospheres with improved camera resolution. Adsorbed perylene dye on slip-cast silica is pressure (oxygen) sensitive and reusable to relatively high temperatures (approximately 150 C). Adsorbed luminescence has an approximately linear color shift with temperature, which can be used for independent temperature mapping and brightness pressure calibration with temperature.
MapEdit: solution to continuous raster map creation
NASA Astrophysics Data System (ADS)
Rančić, Dejan; Djordjevi-Kajan, Slobodanka
2003-03-01
The paper describes MapEdit, MS Windows TM software for georeferencing and rectification of scanned paper maps. The software produces continuous raster maps which can be used as background in geographical information systems. Process of continuous raster map creation using MapEdit "mosaicking" function is also described as well as the georeferencing and rectification algorithms which are used in MapEdit. Our approach for georeferencing and rectification using four control points and two linear transformations for each scanned map part, together with nearest neighbor resampling method, represents low cost—high speed solution that produce continuous raster maps with satisfactory quality for many purposes (±1 pixel). Quality assessment of several continuous raster maps at different scales that have been created using our software and methodology, has been undertaken and results are presented in the paper. For the quality control of the produced raster maps we referred to three wide adopted standards: US Standard for Digital Cartographic Data, National Standard for Spatial Data Accuracy and US National Map Accuracy Standard. The results obtained during the quality assessment process are given in the paper and show that our maps meat all three standards.
Nitrate contamination risk assessment in groundwater at regional scale
NASA Astrophysics Data System (ADS)
Daniela, Ducci
2016-04-01
Nitrate groundwater contamination is widespread in the world, due to the intensive use of fertilizers, to the leaking from the sewage network and to the presence of old septic systems. This research presents a methodology for groundwater contamination risk assessment using thematic maps derived mainly from the land-use map and from statistical data available at the national institutes of statistic (especially demographic and environmental data). The potential nitrate contamination is considered as deriving from three sources: agricultural, urban and periurban. The first one is related to the use of fertilizers. For this reason the land-use map is re-classified on the basis of the crop requirements in terms of fertilizers. The urban source is the possibility of leaks from the sewage network and, consequently, is linked to the anthropogenic pressure, expressed by the population density, weighted on the basis of the mapped urbanized areas of the municipality. The periurban sources include the un-sewered areas, especially present in the periurban context, where illegal sewage connections coexist with on-site sewage disposal (cesspools, septic tanks and pit latrines). The potential nitrate contamination map is produced by overlaying the agricultural, urban and periurban maps. The map combination process is very easy, being an algebraic combination: the output values are the arithmetic average of the input values. The groundwater vulnerability to contamination can be assessed using parametric methods, like DRASTIC or easier, like AVI (that involves a limited numbers of parameters). In most of cases, previous documents produced at regional level can be used. The pollution risk map is obtained by combining the thematic maps of the potential nitrate contamination map and the groundwater contamination vulnerability map. The criterion for the linkages of the different GIS layers is very easy, corresponding to an algebraic combination. The methodology has been successfully applied in a large flat area of southern Italy, with high concentrations in NO3.
NASA Astrophysics Data System (ADS)
Bruhwiler, D. L.; Cary, J. R.; Shasharina, S.
1998-04-01
The MAPA accelerator modeling code symplectically advances the full nonlinear map, tangent map and tangent map derivative through all accelerator elements. The tangent map and its derivative are nonlinear generalizations of Browns first- and second-order matrices(K. Brown, SLAC-75, Rev. 4 (1982), pp. 107-118.), and they are valid even near the edges of the dynamic aperture, which may be beyond the radius of convergence for a truncated Taylor series. In order to avoid truncation of the map and its derivatives, the Hamiltonian is split into pieces for which the map can be obtained analytically. Yoshidas method(H. Yoshida, Phys. Lett. A 150 (1990), pp. 262-268.) is then used to obtain a symplectic approximation to the map, while the tangent map and its derivative are appropriately composed at each step to obtain them with equal accuracy. We discuss our splitting of the quadrupole and combined-function dipole Hamiltonians and show that typically few steps are required for a high-energy accelerator.
Partitioning sources of variation in vertebrate species richness
Boone, R.B.; Krohn, W.B.
2000-01-01
Aim: To explore biogeographic patterns of terrestrial vertebrates in Maine, USA using techniques that would describe local and spatial correlations with the environment. Location: Maine, USA. Methods: We delineated the ranges within Maine (86,156 km2) of 275 species using literature and expert review. Ranges were combined into species richness maps, and compared to geomorphology, climate, and woody plant distributions. Methods were adapted that compared richness of all vertebrate classes to each environmental correlate, rather than assessing a single explanatory theory. We partitioned variation in species richness into components using tree and multiple linear regression. Methods were used that allowed for useful comparisons between tree and linear regression results. For both methods we partitioned variation into broad-scale (spatially autocorrelated) and fine-scale (spatially uncorrelated) explained and unexplained components. By partitioning variance, and using both tree and linear regression in analyses, we explored the degree of variation in species richness for each vertebrate group that Could be explained by the relative contribution of each environmental variable. Results: In tree regression, climate variation explained richness better (92% of mean deviance explained for all species) than woody plant variation (87%) and geomorphology (86%). Reptiles were highly correlated with environmental variation (93%), followed by mammals, amphibians, and birds (each with 84-82% deviance explained). In multiple linear regression, climate was most closely associated with total vertebrate richness (78%), followed by woody plants (67%) and geomorphology (56%). Again, reptiles were closely correlated with the environment (95%), followed by mammals (73%), amphibians (63%) and birds (57%). Main conclusions: Comparing variation explained using tree and multiple linear regression quantified the importance of nonlinear relationships and local interactions between species richness and environmental variation, identifying the importance of linear relationships between reptiles and the environment, and nonlinear relationships between birds and woody plants, for example. Conservation planners should capture climatic variation in broad-scale designs; temperatures may shift during climate change, but the underlying correlations between the environment and species richness will presumably remain.
NASA Astrophysics Data System (ADS)
Kleinwaechter, Tobias; Goldberg, Lars; Palmer, Charlotte; Schaper, Lucas; Schwinkendorf, Jan-Patrick; Osterhoff, Jens
2012-10-01
Laser-driven wakefield acceleration within capillary discharge waveguides has been used to generate high-quality electron bunches with GeV-scale energies. However, owing to fluctuations in laser and plasma conditions in combination with a difficult to control self-injection mechanism in the non-linear wakefield regime these bunches are often not reproducible and can feature large energy spreads. Specialized plasma targets with tailored density profiles offer the possibility to overcome these issues by controlling the injection and acceleration processes. This requires precise manipulation of the longitudinal density profile. Therefore our target concept is based on a capillary structure with multiple gas in- and outlets. Potential target designs are simulated using the fluid code OpenFOAM and those meeting the specified criteria are fabricated using femtosecond-laser machining of structures into sapphire plates. Density profiles are measured over a range of inlet pressures utilizing gas-density profilometry via Raman scattering and pressure calibration with longitudinal interferometry. In combination these allow absolute density mapping. Here we report the preliminary results.
NASA Astrophysics Data System (ADS)
Sah, Si Mohamed; Forchheimer, Daniel; Borgani, Riccardo; Haviland, David
2018-02-01
We present a polynomial force reconstruction of the tip-sample interaction force in Atomic Force Microscopy. The method uses analytical expressions for the slow-time amplitude and phase evolution, obtained from time-averaging over the rapidly oscillating part of the cantilever dynamics. The slow-time behavior can be easily obtained in either the numerical simulations or the experiment in which a high-Q resonator is perturbed by a weak nonlinearity and a periodic driving force. A direct fit of the theoretical expressions to the simulated and experimental data gives the best-fit parameters for the force model. The method combines and complements previous works (Platz et al., 2013; Forchheimer et al., 2012 [2]) and it allows for computationally more efficient parameter mapping with AFM. Results for the simulated asymmetric piecewise linear force and VdW-DMT force models are compared with the reconstructed polynomial force and show a good agreement. It is also shown that the analytical amplitude and phase modulation equations fit well with the experimental data.
Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hongbin
2016-04-15
The potential use of feature wavelengths for predicting drip loss in grass carp fish, as affected by being frozen at -20°C for 24 h and thawed at 4°C for 1, 2, 4, and 6 days, was investigated. Hyperspectral images of frozen-thawed fish were obtained and their corresponding spectra were extracted. Least-squares support vector machine and multiple linear regression (MLR) models were established using five key wavelengths, selected by combining a genetic algorithm and successive projections algorithm, and this showed satisfactory performance in drip loss prediction. The MLR model with a determination coefficient of prediction (R(2)P) of 0.9258, and lower root mean square error estimated by a prediction (RMSEP) of 1.12%, was applied to transfer each pixel of the image and generate the distribution maps of exudation changes. The results confirmed that it is feasible to identify the feature wavelengths using variable selection methods and chemometric analysis for developing on-line multispectral imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ding, Ting; Hu, Hong; Bai, Chen; Guo, Shifang; Yang, Miao; Wang, Supin; Wan, Mingxi
2016-07-01
Cavitation plays important roles in almost all high-intensity focused ultrasound (HIFU) applications. However, current two-dimensional (2D) cavitation mapping could only provide cavitation activity in one plane. This study proposed a three-dimensional (3D) ultrasound plane-by-plane active cavitation mapping (3D-UPACM) for HIFU in free field and pulsatile flow. The acquisition of channel-domain raw radio-frequency (RF) data in 3D space was performed by sequential plane-by-plane 2D ultrafast active cavitation mapping. Between two adjacent unit locations, there was a waiting time to make cavitation nuclei distribution of the liquid back to the original state. The 3D cavitation map equivalent to the one detected at one time and over the entire volume could be reconstructed by Marching Cube algorithm. Minimum variance (MV) adaptive beamforming was combined with coherence factor (CF) weighting (MVCF) or compressive sensing (CS) method (MVCS) to process the raw RF data for improved beamforming or more rapid data processing. The feasibility of 3D-UPACM was demonstrated in tap-water and a phantom vessel with pulsatile flow. The time interval between temporal evolutions of cavitation bubble cloud could be several microseconds. MVCF beamformer had a signal-to-noise ratio (SNR) at 14.17dB higher, lateral and axial resolution at 2.88times and 1.88times, respectively, which were compared with those of B-mode active cavitation mapping. MVCS beamformer had only 14.94% time penalty of that of MVCF beamformer. This 3D-UPACM technique employs the linear array of a current ultrasound diagnosis system rather than a 2D array transducer to decrease the cost of the instrument. Moreover, although the application is limited by the requirement for a gassy fluid medium or a constant supply of new cavitation nuclei that allows replenishment of nuclei between HIFU exposures, this technique may exhibit a useful tool in 3D cavitation mapping for HIFU with high speed, precision and resolution, especially in a laboratory environment where more careful analysis may be required under controlled conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
Mapping Critical Loads of Atmospheric Nitrogen Deposition in the Rocky Mountains, USA
NASA Astrophysics Data System (ADS)
Nanus, L.; Clow, D. W.; Stephens, V. C.; Saros, J. E.
2010-12-01
Atmospheric nitrogen (N) deposition can adversely affect sensitive aquatic ecosystems at high-elevations in the western United States. Critical loads are the amount of deposition of a given pollutant that an ecosystem can receive below which ecological effects are thought not to occur. GIS-based landscape models were used to create maps for high-elevation areas across the Rocky Mountain region showing current atmospheric deposition rates of nitrogen (N), critical loads of N, and exceedances of critical loads of N. Atmospheric N deposition maps for the region were developed at 400 meter resolution using gridded precipitation data and spatially interpolated chemical concentrations in rain and snow. Critical loads maps were developed based on chemical thresholds corresponding to observed ecological effects, and estimated ecosystem sensitivities calculated from basin characteristics. Diatom species assemblages were used as an indicator of ecosystem health to establish critical loads of N. Chemical thresholds (concentrations) were identified for surface waters by using a combination of in-situ growth experiments and observed spatial patterns in surface-water chemistry and diatom species assemblages across an N deposition gradient. Ecosystem sensitivity was estimated using a multiple-linear regression approach in which observed surface water nitrate concentrations at 530 sites were regressed against estimates of inorganic N deposition and basin characteristics (topography, soil type and amount, bedrock geology, vegetation type) to develop predictive models of surface water chemistry. Modeling results indicated that the significant explanatory variables included percent slope, soil permeability, and vegetation type (including barren land, shrub, and grassland) and were used to predict high-elevation surface water nitrate concentrations across the Rocky Mountains. Chemical threshold concentrations were substituted into an inverted form of the model equations and applied to estimate critical loads for each stream reach within a basin, from which critical loads maps were created. Atmospheric N deposition maps were overlaid on the critical loads maps to identify areas in the Rocky Mountain region where critical loads are being exceeded, or where they may do so in the future. This approach may be transferable to other high-elevation areas of the United States and the world.
Rakotomanana, Fanjasoa; Randremanana, Rindra V; Rabarijaona, Léon P; Duchemin, Jean Bernard; Ratovonjato, Jocelyn; Ariey, Frédéric; Rudant, Jean Paul; Jeanne, Isabelle
2007-01-01
Background The highlands of Madagascar present an unstable transmission pattern of malaria. The population has no immunity, and the central highlands have been the sites of epidemics with particularly high fatality. The most recent epidemic occurred in the 1980s, and caused about 30,000 deaths. The fight against malaria epidemics in the highlands has been based on indoor insecticide spraying to control malaria vectors. Any preventive programme involving generalised cover in the highlands will require very substantial logistical support. We used multicriteria evaluation, by the method of weighted linear combination, as basis for improved targeting of actions by determining priority zones for intervention. Results Image analysis and field validation showed the accuracy of mapping rice fields to be between 82.3% and 100%, and the Kappa coefficient was 0.86 to 0.99. A significant positive correlation was observed between the abundance of the vector Anopheles funestus and temperature; the correlation coefficient was 0.599 (p < 0.001). A significant negative correlation was observed between vector abundance and human population density: the correlation coefficient was -0.551 (p < 0.003). Factor weights were determined by pair-wise comparison and the consistency ratio was 0.04. Risk maps of the six study zones were obtained according to a gradient of risk. Nine of thirteen results of alert confirmed by the Epidemiological Surveillance Post were in concordance with the risk map. Conclusion This study is particularly valuable for the management of vector control programmes, and particularly the reduction of the vector population with a view to preventing disease. The risk map obtained can be used to identify priority zones for the management of resources, and also help avoid systematic and generalised spraying throughout the highlands: such spraying is particularly difficult and expensive. The accuracy of the mapping, both as concerns time and space, is dependent on the availability of data. Continuous monitoring of malaria transmission factors must be undertaken to detect any changes. A regular case notification allows risk map to be verified. These actions should therefore be implemented so that risk maps can be satisfactorily assessed. PMID:17261177
NASA Astrophysics Data System (ADS)
Forbes, D. L.; Bell, T.; Campbell, D. C.; Cowan, B.; Deering, R. L.; Hatcher, S. V.; Hughes Clarke, J. E.; Irvine, M.; Manson, G. K.; Smith, I. R.; Edinger, E.
2015-12-01
Since 2012 we have carried out extensive multibeam bathymetric and backscatter surveys in coastal waters of eastern Baffin Island, supplemented by sub-bottom imaging and coring. Shore-zone surveys have been undertaken in proximity to the communities of Iqaluit and Qikiqtarjuaq, following earlier work in Clyde River. These support benthic habitat mapping, geological exploration, analysis of past and present sea-level trends, and assessment of coastal hazards relating to climate change and seabed instability. Outputs include a seamless topographic-bathymetric digital elevation model (DEM) of extensive boulder-strewn tidal flats in the large tidal-range setting at Iqaluit, supporting analysis of coastal flooding, wave run-up, and sea-ice impacts on a rapidly developing urban waterfront in the context of climate change. Seabed mapping of inner Frobisher Bay seaward of Iqaluit reveals a potential local tsunami hazard in widespread submarine slope failures, the triggers, magnitudes, and ages of which are the subject of ongoing research. In fjords of the Cumberland Peninsula, this project has mapped numerous submerged delta terraces at 19 to 45 m present water depth. These attest to an early postglacial submerged shoreline, displaced by glacial-isostatic adjustment. It rises linearly over a distance of 100 km east to west, where a submerged boulder barricade on a -16 m shoreline was discovered at a proposed port site in Broughton Channel near Qikiqtarjuaq. Palaeotopographic mapping using the multibeam data revealed an enclosed estuarine environment quite different from the present-day open passage swept by tidal currents. At Clyde River, combined seabed and onshore DEMs with geohazard mapping provided foundation data for community assessment and planning under a local knowledge co-production initiative. The geohazard work identified portions of the town-site more vulnerable to both coastal flooding and potential thaw subsidence, while the shallow delta terrace suggested a reversal from falling to rising relative sea levels. Overall, the coastal mapping results constitute baseline geoscience knowledge infrastructure for navigation, fisheries, port engineering, municipal planning, and informing sustainability initiatives in the isolated coastal communities of this Arctic region.
MAPS OF THE MAGELLANIC CLOUDS FROM COMBINED SOUTH POLE TELESCOPE AND PLANCK DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, T. M.; Benson, B. A.; Bleem, L. E.
We present maps of the Large and Small Magellanic Clouds from combined South Pole Telescope (SPT) and Planck data. The Planck satellite observes in nine bands, while the SPT data used in this work were taken with the three-band SPT-SZ camera, The SPT-SZ bands correspond closely to three of the nine Planck bands, namely those centered at 1.4, 2.1, and 3.0 mm. The angular resolution of the Planck data ranges from 5 to 10 arcmin, while the SPT resolution ranges from 1.0 to 1.7 arcmin. The combined maps take advantage of the high resolution of the SPT data and themore » long-timescale stability of the space-based Planck observations to deliver robust brightness measurements on scales from the size of the maps down to ∼1 arcmin. In each band, we first calibrate and color-correct the SPT data to match the Planck data, then we use noise estimates from each instrument and knowledge of each instrument’s beam to make the inverse-variance-weighted combination of the two instruments’ data as a function of angular scale. We create maps assuming a range of underlying emission spectra and at a range of final resolutions. We perform several consistency tests on the combined maps and estimate the expected noise in measurements of features in them. We compare maps from this work to those from the Herschel HERITAGE survey, finding general consistency between the data sets. All data products described in this paper are available for download from the NASA Legacy Archive for Microwave Background Data Analysis server.« less
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
Monitoring the bending and twist of morphing structures
NASA Astrophysics Data System (ADS)
Smoker, J.; Baz, A.
2008-03-01
This paper presents the development of the theoretical basis for the design of sensor networks for determining the 2-dimensioal shape of morphing structures by monitoring simultaneously the bending and twist deflections. The proposed development is based on the non-linear theory of finite elements to extract the transverse linear and angular deflections of a plate-like structure. The sensors outputs are wirelessly transmitted to the command unit to simultaneously compute maps of the linear and angular deflections and maps of the strain distribution of the entire structure. The deflection and shape information are required to ascertain that the structure is properly deployed and that its surfaces are operating wrinkle-free. The strain map ensures that the structure is not loaded excessively to adversely affect its service life. The developed theoretical model is validated experimentally using a prototype of a variable cambered span morphing structure provided with a network of distributed sensors. The structure/sensor network system is tested under various static conditions to determine the response characteristics of the proposed sensor network as compared to other conventional sensor systems. The presented theoretical and experimental techniques can have a great impact on the safe deployment and effective operation of a wide variety of morphing and inflatable structures such as morphing aircraft, solar sails, inflatable wings, and large antennas.
Restoring Low Sidelobe Antenna Patterns with Failed Elements in a Phased Array Antenna
2016-02-01
optimum low sidelobes are demonstrated in several examples. Index Terms — Array signal processing, beams, linear algebra , phased arrays, shaped...represented by a linear combination of low sidelobe beamformers with no failed elements, ’s, in a neighborhood around under the constraint that the linear ...would expect that linear combinations of them in a neighborhood around would also have low sidelobes. The algorithms in this paper exploit this
SOMBI: Bayesian identification of parameter relations in unstructured cosmological data
NASA Astrophysics Data System (ADS)
Frank, Philipp; Jasche, Jens; Enßlin, Torsten A.
2016-11-01
This work describes the implementation and application of a correlation determination method based on self organizing maps and Bayesian inference (SOMBI). SOMBI aims to automatically identify relations between different observed parameters in unstructured cosmological or astrophysical surveys by automatically identifying data clusters in high-dimensional datasets via the self organizing map neural network algorithm. Parameter relations are then revealed by means of a Bayesian inference within respective identified data clusters. Specifically such relations are assumed to be parametrized as a polynomial of unknown order. The Bayesian approach results in a posterior probability distribution function for respective polynomial coefficients. To decide which polynomial order suffices to describe correlation structures in data, we include a method for model selection, the Bayesian information criterion, to the analysis. The performance of the SOMBI algorithm is tested with mock data. As illustration we also provide applications of our method to cosmological data. In particular, we present results of a correlation analysis between galaxy and active galactic nucleus (AGN) properties provided by the SDSS catalog with the cosmic large-scale-structure (LSS). The results indicate that the combined galaxy and LSS dataset indeed is clustered into several sub-samples of data with different average properties (for example different stellar masses or web-type classifications). The majority of data clusters appear to have a similar correlation structure between galaxy properties and the LSS. In particular we revealed a positive and linear dependency between the stellar mass, the absolute magnitude and the color of a galaxy with the corresponding cosmic density field. A remaining subset of data shows inverted correlations, which might be an artifact of non-linear redshift distortions.
Nystrom, Elizabeth A.
2018-02-01
Drinking water for New York City is supplied from several large reservoirs, including a system of reservoirs west of the Hudson River. To provide updated reservoir capacity tables and bathymetry maps of the City’s six West of Hudson reservoirs, bathymetric surveys were conducted by the U.S. Geological Survey from 2013 to 2015. Depths were surveyed with a single-beam echo sounder and real-time kinematic global positioning system along planned transects at predetermined intervals for each reservoir. A separate quality assurance dataset of echo sounder points was collected along transects at oblique angles to the main transects for accuracy assessment. Field-survey data were combined with water surface elevations in a geographic information system to create three-dimensional surfaces in the form of triangulated irregular networks (TINs) representing the elevations of the reservoir geomorphology. The TINs were linearly enforced to better represent geomorphic features within the reservoirs. The linearly enforced TINs were then used to create raster surfaces and 2-foot-interval contour maps of the reservoirs. Elevation-area-capacity tables were calculated at 0.01-foot intervals. The results of the surveys show that the total capacity of the West of Hudson reservoirs has decreased by 11.5 billion gallons (Ggal), or 2.3 percent, since construction, and the useable capacity (the volume above the minimum operating level required to deliver full flow for drinking water supply) has decreased by 7.9 Ggal (1.7 percent). The available capacity (the volume between the spillway elevation and the lowest intake or sill elevation used for drinking water supply) decreased by 9.6 Ggal (2.0 percent), and dead storage (the volume below the lowest intake or sill elevation) decreased by 1.9 Ggal (11.6 percent).
"Geo-statistics methods and neural networks in geophysical applications: A case study"
NASA Astrophysics Data System (ADS)
Rodriguez Sandoval, R.; Urrutia Fucugauchi, J.; Ramirez Cruz, L. C.
2008-12-01
The study is focus in the Ebano-Panuco basin of northeastern Mexico, which is being explored for hydrocarbon reservoirs. These reservoirs are in limestones and there is interest in determining porosity and permeability in the carbonate sequences. The porosity maps presented in this study are estimated from application of multiattribute and neural networks techniques, which combine geophysics logs and 3-D seismic data by means of statistical relationships. The multiattribute analysis is a process to predict a volume of any underground petrophysical measurement from well-log and seismic data. The data consist of a series of target logs from wells which tie a 3-D seismic volume. The target logs are neutron porosity logs. From the 3-D seismic volume a series of sample attributes is calculated. The objective of this study is to derive a set of attributes and the target log values. The selected set is determined by a process of forward stepwise regression. The analysis can be linear or nonlinear. In the linear mode the method consists of a series of weights derived by least-square minimization. In the nonlinear mode, a neural network is trained using the select attributes as inputs. In this case we used a probabilistic neural network PNN. The method is applied to a real data set from PEMEX. For better reservoir characterization the porosity distribution was estimated using both techniques. The case shown a continues improvement in the prediction of the porosity from the multiattribute to the neural network analysis. The improvement is in the training and the validation, which are important indicators of the reliability of the results. The neural network showed an improvement in resolution over the multiattribute analysis. The final maps provide more realistic results of the porosity distribution.
Feng, Xiang; Deistung, Andreas; Dwyer, Michael G; Hagemeier, Jesper; Polak, Paul; Lebenberg, Jessica; Frouin, Frédérique; Zivadinov, Robert; Reichenbach, Jürgen R; Schweser, Ferdinand
2017-06-01
Accurate and robust segmentation of subcortical gray matter (SGM) nuclei is required in many neuroimaging applications. FMRIB's Integrated Registration and Segmentation Tool (FIRST) is one of the most popular software tools for automated subcortical segmentation based on T 1 -weighted (T1w) images. In this work, we demonstrate that FIRST tends to produce inaccurate SGM segmentation results in the case of abnormal brain anatomy, such as present in atrophied brains, due to a poor spatial match of the subcortical structures with the training data in the MNI space as well as due to insufficient contrast of SGM structures on T1w images. Consequently, such deviations from the average brain anatomy may introduce analysis bias in clinical studies, which may not always be obvious and potentially remain unidentified. To improve the segmentation of subcortical nuclei, we propose to use FIRST in combination with a special Hybrid image Contrast (HC) and Non-Linear (nl) registration module (HC-nlFIRST), where the hybrid image contrast is derived from T1w images and magnetic susceptibility maps to create subcortical contrast that is similar to that in the Montreal Neurological Institute (MNI) template. In our approach, a nonlinear registration replaces FIRST's default linear registration, yielding a more accurate alignment of the input data to the MNI template. We evaluated our method on 82 subjects with particularly abnormal brain anatomy, selected from a database of >2000 clinical cases. Qualitative and quantitative analyses revealed that HC-nlFIRST provides improved segmentation compared to the default FIRST method. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Cardona, D; Li, K; Lubner, M G
Purpose: The introduction of the highly nonlinear MBIR algorithm to clinical CT systems has made CNR an invalid metric for kV optimization. The purpose of this work was to develop a task-based framework to unify kV and mAs optimization for both FBP- and MBIR-based CT systems. Methods: The kV-mAs optimization was formulated as a constrained minimization problem: to select kV and mAs to minimize dose under the constraint of maintaining the detection performance as clinically prescribed. To experimentally solve this optimization problem, exhaustive measurements of detectability index (d’) for a hepatic lesion detection task were performed at 15 different mAmore » levels and 4 kV levels using an anthropomorphic phantom. The measured d’ values were used to generate an iso-detectability map; similarly, dose levels recorded at different kV-mAs combinations were used to generate an iso-dose map. The iso-detectability map was overlaid on top of the iso-dose map so that for a prescribed detectability level d’, the optimal kV-mA can be determined from the crossing between the d’ contour and the dose contour that corresponds to the minimum dose. Results: Taking d’=16 as an example: the kV-mAs combinations on the measured iso-d’ line of MBIR are 80–150 (3.8), 100–140 (6.6), 120–150 (11.3), and 140–160 (17.2), where values in the parentheses are measured dose values. As a Result, the optimal kV was 80 and optimal mA was 150. In comparison, the optimal kV and mA for FBP were 100 and 500, which corresponded to a dose level of 24 mGy. Results of in vivo animal experiments were consistent with the phantom results. Conclusion: A new method to optimize kV and mAs selection has been developed. This method is applicable to both linear and nonlinear CT systems such as those using MBIR. Additional dose savings can be achieved by combining MBIR with this method. This work was partially supported by an NIH grant R01CA169331 and GE Healthcare. K. Li, D. Gomez-Cardona, M. G. Lubner: Nothing to disclose. P. J. Pickhardt: Co-founder, VirtuoCTC, LLC Stockholder, Cellectar Biosciences, Inc. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.« less
On the Existence of Star Products on Quotient Spaces of Linear Hamiltonian Torus Actions
NASA Astrophysics Data System (ADS)
Herbig, Hans-Christian; Iyengar, Srikanth B.; Pflaum, Markus J.
2009-08-01
We discuss BFV deformation quantization (Bordemann et al. in A homological approach to singular reduction in deformation quantization, singularity theory, pp. 443-461. World Scientific, Hackensack, 2007) in the special case of a linear Hamiltonian torus action. In particular, we show that the Koszul complex on the moment map of an effective linear Hamiltonian torus action is acyclic. We rephrase the nonpositivity condition of Arms and Gotay (Adv Math 79(1):43-103, 1990) for linear Hamiltonian torus actions. It follows that reduced spaces of such actions admit continuous star products.
NASA Astrophysics Data System (ADS)
Szatmári, Gábor; Laborczi, Annamária; Takács, Katalin; Pásztor, László
2017-04-01
The knowledge about soil organic carbon (SOC) baselines and changes, and the detection of vulnerable hot spots for SOC losses and gains under climate change and changed land management is still fairly limited. Thus Global Soil Partnership (GSP) has been requested to develop a global SOC mapping campaign by 2017. GSPs concept builds on official national data sets, therefore, a bottom-up (country-driven) approach is pursued. The elaborated Hungarian methodology suits the general specifications of GSOC17 provided by GSP. The input data for GSOC17@HU mapping approach has involved legacy soil data bases, as well as proper environmental covariates related to the main soil forming factors, such as climate, organisms, relief and parent material. Nowadays, digital soil mapping (DSM) highly relies on the assumption that soil properties of interest can be modelled as a sum of a deterministic and stochastic component, which can be treated and modelled separately. We also adopted this assumption in our methodology. In practice, multiple regression techniques are commonly used to model the deterministic part. However, this global (and usually linear) models commonly oversimplify the often complex and non-linear relationship, which has a crucial effect on the resulted soil maps. Thus, we integrated machine learning algorithms (namely random forest and quantile regression forest) in the elaborated methodology, supposing then to be more suitable for the problem in hand. This approach has enable us to model the GSOC17 soil properties in that complex and non-linear forms as the soil itself. Furthermore, it has enable us to model and assess the uncertainty of the results, which is highly relevant in decision making. The applied methodology has used geostatistical approach to model the stochastic part of the spatial variability of the soil properties of interest. We created GSOC17@HU map with 1 km grid resolution according to the GSPs specifications. The map contributes to the GSPs GSOC17 proposals, as well as to the development of global soil information system under GSP Pillar 4 on soil data and information. However, we elaborated our adherent code (created in R software environment) in such a way that it can be improved, specified and applied for further uses. Hence, it opens the door to create countrywide map(s) with higher grid resolution for SOC (or other soil related properties) using the advanced methodology, as well as to contribute and support the SOC (or other soil) related country level decision making. Our paper will present the soil mapping methodology itself, the resulted GSOC17@HU map, some of our conclusions drawn from the experiences and their effects on the further uses. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
Linear combination methods to improve diagnostic/prognostic accuracy on future observations
Kang, Le; Liu, Aiyi; Tian, Lili
2014-01-01
Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use leave-one-pair-out cross-validation method, instead of re-substitution method, which is overoptimistic and hence might lead to wrong conclusion, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714